diff --git a/spaces/0xSynapse/Image_captioner/README.md b/spaces/0xSynapse/Image_captioner/README.md deleted file mode 100644 index 81b4424c7903de18e57ce1b99332aba6d79fcf79..0000000000000000000000000000000000000000 --- a/spaces/0xSynapse/Image_captioner/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Image Captioner -emoji: ⚡ -colorFrom: indigo -colorTo: green -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false -license: creativeml-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Bartender 2022 Full Crack.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Bartender 2022 Full Crack.md deleted file mode 100644 index e4ca48ad9fb0789951d65df7c439bcf6b91a2fb0..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Bartender 2022 Full Crack.md +++ /dev/null @@ -1,27 +0,0 @@ - -
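The Image Captioner README above only carries the Space's front-matter (sdk: streamlit, app_file: app.py, sdk_version 1.19.0); the app.py it points to is not part of this diff. Purely as a hedged sketch of what a Streamlit captioning Space of this shape typically contains — the BLIP checkpoint and every name below are assumptions, not the deleted Space's actual code:

```python
# Hypothetical app.py for a Streamlit image-captioning Space (illustrative only).
# Assumes the Salesforce BLIP base checkpoint; any captioning model fits the same pattern.
import streamlit as st
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

@st.cache_resource
def load_model():
    # Download the processor and model once and reuse them across Streamlit reruns.
    processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
    model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
    return processor, model

st.title("Image Captioner")
uploaded = st.file_uploader("Upload an image", type=["jpg", "jpeg", "png"])
if uploaded is not None:
    image = Image.open(uploaded).convert("RGB")
    st.image(image, use_column_width=True)
    processor, model = load_model()
    inputs = processor(images=image, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=30)
    st.write(processor.decode(output_ids[0], skip_special_tokens=True))
```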

How to Download Bartender 2022 Full Crack for Free

-

Bartender 2022 is a software application that allows you to design and print labels, barcodes, RFID tags, cards, and more. It is widely used by businesses and industries to create professional and compliant labels for various purposes. However, Bartender 2022 is not cheap software, and you might be tempted to look for a cracked version online.

-

download bartender 2022 full crack


Download Zip ✪✪✪ https://byltly.com/2uKx88



-

But before you do that, you should know that downloading Bartender 2022 full crack is not only illegal, but also risky. You could face legal consequences, damage your computer, or expose your data to hackers and malware. In this article, we will explain why you should avoid downloading Bartender 2022 full crack and suggest some better alternatives.

-

Why You Should Not Download Bartender 2022 Full Crack

-

Downloading Bartender 2022 full crack is a bad idea for several reasons:

- -

As you can see, downloading Bartender 2022 full crack is not worth the risk or the hassle. You are better off using a legitimate version of the software that is safe, legal, reliable, and ethical.

-

What Are Some Better Alternatives to Downloading Bartender 2022 Full Crack?

-

If you want to use Bartender 2022 without breaking the law or endangering your computer, here are some better alternatives:

- -

By using these alternatives, you can enjoy the benefits of Bartender 2022 without risking your legal status or computer security.

-

Conclusion

-

Bartender 2022 is a powerful and versatile software that can help you create professional and compliant labels for various purposes. However, downloading Bartender 2022 full crack is not a smart or safe option. You could face legal troubles, damage your computer, or expose your data to hackers and malware.

-

Instead of downloading Bartender 2022 full crack, you should use a legitimate version of the software that is safe, legal, reliable, and ethical.

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Acrobat Xi Pro Free Download For Windows 8 !EXCLUSIVE!.md b/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Acrobat Xi Pro Free Download For Windows 8 !EXCLUSIVE!.md deleted file mode 100644 index 789df0006c9f0ee52515219e1324a2810fe21e9a..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Adobe Acrobat Xi Pro Free Download For Windows 8 !EXCLUSIVE!.md +++ /dev/null @@ -1,6 +0,0 @@ -

adobe acrobat xi pro free download for windows 8


Download Zip ✫✫✫ https://imgfil.com/2uy1yu



Acrobat: Commercial software; Reader: Freeware. Website: acrobat.adobe.com. Adobe Acrobat is a family of application software and Web services developed by Adobe Inc. to ... Acrobat XI Pro (for Windows and macOS); Acrobat XI Standard (for Windows only) ... "Download new and previous versions of Adobe Reader".
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Alan Wake (2012) PC Fitgirl Repack [Extra Quality].md b/spaces/1gistliPinn/ChatGPT4/Examples/Alan Wake (2012) PC Fitgirl Repack [Extra Quality].md deleted file mode 100644 index 314243b83afa0d2c7f065e34658fd0142b495a74..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Alan Wake (2012) PC Fitgirl Repack [Extra Quality].md +++ /dev/null @@ -1,6 +0,0 @@ -

Alan Wake (2012) PC Fitgirl Repack


DOWNLOAD https://imgfil.com/2uy0ay



TOHU (2021) PC | RePack от FitGirl · www.trackeroc.... 2 | 0. 980 MB. 0 | 313. 0. 2021-02-02 12:19. www.trackeroc.... √· Keep Out [ENG + 3] (2021) (1.0.0.6).
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Basha Tamil Movie HOT Download Dvdrip 20.md b/spaces/1gistliPinn/ChatGPT4/Examples/Basha Tamil Movie HOT Download Dvdrip 20.md deleted file mode 100644 index 65c5fa3cdce6eb42acbce8a7e9683b8cb6c27fea..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Basha Tamil Movie HOT Download Dvdrip 20.md +++ /dev/null @@ -1,7 +0,0 @@ -
-

download bollywood full movie bollywood movies (2018) download free bollywood movies, download.. itubego youtube downloader 1.3.1 with [latest] crack. apps full version is a world famous website to download latest softwares free download for windows, mac os, android, pc,. itubego youtube downloader 4.1.1 + crack. flixicam netflix video downloader 1.1 + patch.1 with crack download [latest] save. idle shopping mall (mod apk) start with a little coffee shop and.

-

basha tamil movie download dvdrip 20


Download File »»» https://imgfil.com/2uxYkm



-

youtube downloader 4.9.5.2023 + crack. youtube downloader [crack + patch] is a powerful and safe video downloader for windows. you can download any video from youtube. itubego youtube downloader 4.1.5 with crack free download is a convenient downloader application that allows you to save videos and audio for free.

-

get itubego youtube downloader 4.1.1 + crack + serial number from the given link. you can download itubego youtube downloader 4.1 + crack + serial number free without any charges. itubego youtube downloader 4.1 with crack + serial number is a handy application that can be used to download your favorite videos from youtube. the publisher of this app has not provided any details about this download at this time.

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Dreamup 1 3 3 8 Exe Downloadl BEST.md b/spaces/1gistliPinn/ChatGPT4/Examples/Dreamup 1 3 3 8 Exe Downloadl BEST.md deleted file mode 100644 index 03883600588bc8ee299aebd1f18a704b580892bd..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Dreamup 1 3 3 8 Exe Downloadl BEST.md +++ /dev/null @@ -1,94 +0,0 @@ -
-

Dreamup 1 3 3 8 Exe Download: How to Flash Your Dreambox with Ease

-

If you are looking for a tool to flash your Dreambox hardware, you might want to try Dreamup 1 3 3 8 Exe Download. This is a free and easy-to-use program that allows you to load images into your Dreambox via serial. In this article, we will show you how to use Dreamup 1 3 3 8 Exe Download and what its benefits are.

-

What is Dreamup 1 3 3 8 Exe Download?

-

Dreamup 1 3 3 8 Exe Download is the official loader from Dream Multimedia, the company that produces Dreambox devices. Dreambox is a series of Linux-powered satellite receivers that can be customized with various software and plugins. Dreamup 1 3 3 8 Exe Download allows you to flash your Dreambox with new firmware or images, which can enhance its performance and features.

-

Dreamup 1 3 3 8 Exe Downloadl


Download Zip ››› https://imgfil.com/2uxYqU



-

How to Use Dreamup 1 3 3 8 Exe Download?

-

Using Dreamup 1 3 3 8 Exe Download is very simple and takes only about 15 minutes to complete the flashing process. Here are the steps you need to follow:

-
    -
  1. Download Dreamup 1 3 3 8 Exe from a reliable source and install it on your computer.
  2. Connect your Dreambox to your computer via a serial cable.
  3. Run Dreamup and select your Dreambox model from the drop-down menu.
  4. Click on Connect and wait for the program to detect your device.
  5. Click on Flash and browse for the image file you want to load into your Dreambox.
  6. Click on Open and wait for the program to transfer the image to your device.
  7. Once completed, the program will calculate the CRC32 on the STB, erase the flash, and write the new image from its memory.
  8. Click on OK and disconnect your device.
  9. Restart your Dreambox and enjoy the new image.
-

What are the Benefits of Dreamup 1 3 3 8 Exe Download?

-

Dreamup 1 3 3 8 Exe Download has several benefits for Dreambox users, such as:

- -

Conclusion

-

Dreamup 1 3 3 8 Exe Download is a handy tool for anyone who owns a Dreambox device and wants to flash it with new images. It is free, easy to use and supports all models of Dreambox devices. It can help you fix some issues with your device, as well as enhance its performance and features. If you are looking for a way to flash your Dreambox with ease, you should give Dreamup 1 3 3 8 Exe Download a try.

-

Where to Download Dreamup 1 3 3 8 Exe?

-

If you want to download Dreamup 1 3 3 8 Exe, you need to be careful about the source you choose. There are many websites that offer this program, but some of them might be unreliable or unsafe. You should always download Dreamup 1 3 3 8 Exe from a trusted and reputable source, such as the official website of Dream Multimedia or a well-known software informer site. This way, you can avoid downloading viruses, malware or corrupted files that might damage your device or compromise your privacy.

-

How to Update Dreamup 1 3 3 8 Exe?

-

Dreamup 1 3 3 8 Exe is not the latest version of the program. There are newer versions available that might have some bug fixes or improvements. If you want to update Dreamup 1 3 3 8 Exe, you can check the official website of Dream Multimedia or a software informer site for the latest version of Dreamup. You can also use the built-in update feature of the program, which will automatically check for updates and download them if available. To use this feature, you need to run Dreamup and click on Help > Check for Updates.

-

What are the Alternatives to Dreamup 1 3 3 8 Exe?

-

Dreamup 1 3 3 8 Exe is not the only tool that can flash your Dreambox device. There are some alternatives that you might want to try, such as:

- -

Conclusion

-

Dreamup 1 3 3 8 Exe is a useful tool for flashing your Dreambox device with new images. It is free, easy to use and supports all models of Dreambox devices. It can help you fix some issues with your device, as well as enhance its performance and features. However, you should always download it from a reliable source, update it regularly and consider some alternatives if you want more features or options. We hope this article has helped you learn more about Dreamup 1 3 3 8 Exe and how to use it.

-

-

How to Troubleshoot Dreamup 1 3 3 8 Exe Download?

-

Sometimes, you might encounter some problems when using Dreamup 1 3 3 8 Exe Download. For example, you might get an error message, a connection failure, a corrupted image or a bricked device. In such cases, you need to troubleshoot Dreamup 1 3 3 8 Exe Download and find out the cause of the problem. Here are some common troubleshooting tips:

- -

If none of these tips work, you might need to contact the support team of Dream Multimedia or a professional technician for further assistance.

-

What are the Reviews of Dreamup 1 3 3 8 Exe Download?

-

Dreamup 1 3 3 8 Exe Download has received many positive reviews from users who have used it to flash their Dreambox devices. Here are some of the reviews from different sources:

-
"Dreamup is a great tool for flashing your Dreambox. It is easy to use and works flawlessly. I have used it several times to update my device and never had any issues. Highly recommended." - User from Software Informer
-
"I have been using Dreamup for years and it never disappoints me. It is the best way to flash your Dreambox with any image you want. It is fast, reliable and safe. I love it." - User from SoundCloud
-
"Dreamup is a must-have for any Dreambox owner. It is the official loader from Dream Multimedia and it supports all models of Dreambox devices. It can fix any problem with your device and improve its performance and features. It is awesome." - User from Dreambox4U
-

Conclusion

-

Dreamup 1 3 3 8 Exe Download is a useful tool for flashing your Dreambox device with new images. It is free, easy to use and supports all models of Dreambox devices. It can help you fix some issues with your device, as well as enhance its performance and features. However, you should always download it from a reliable source, update it regularly and consider some alternatives if you want more features or options. We hope this article has helped you learn more about Dreamup 1 3 3 8 Exe Download and how to use it.

-

How to Choose the Best Image for Your Dreambox Device?

-

One of the advantages of using Dreamup 1 3 3 8 Exe Download is that you can flash your Dreambox device with any image you want. However, not all images are created equal. Some images might have more features, plugins, skins or compatibility than others. Therefore, you need to choose the best image for your Dreambox device according to your preferences and needs. Here are some tips on how to choose the best image for your Dreambox device:

- -

By following these tips, you can choose the best image for your Dreambox device that suits your preferences and needs.

-

How to Backup and Restore Your Dreambox Settings?

-

Before you use Dreamup 1 3 3 8 Exe Download to flash your Dreambox device with a new image, you might want to backup your current settings first. This way, you can restore them later if you are not satisfied with the new image or if something goes wrong during the flashing process. Here are the steps to backup and restore your Dreambox settings:

-
    -
  1. Connect your Dreambox device to your computer via network or USB.
  2. Run a backup tool such as Dreambox Control Center or FlashWizard Pro.
  3. Select your Dreambox model from the drop-down menu.
  4. Select Backup from the menu bar.
  5. Select a location on your computer where you want to save your backup file.
  6. Click on Start Backup and wait for the process to complete.
  7. To restore your settings, run the same backup tool and select Restore from the menu bar.
  8. Select your backup file from your computer.
  9. Click on Start Restore and wait for the process to complete.
-

By following these steps, you can backup and restore your Dreambox settings easily and safely.

-

Conclusion

-

Dreamup 1 3 3 8 Exe Download is a useful tool for flashing your Dreambox device with new images. It is free, easy to use and supports all models of Dreambox devices. It can help you fix some issues with your device, as well as enhance its performance and features. However, you should always download it from a reliable source, update it regularly and consider some alternatives if you want more features or options. You should also choose the best image for your device, backup and restore your settings before flashing and troubleshoot any problems that might occur. We hope this article has helped you learn more about Dreamup 1 3 3 8 Exe Download and how to use it.

-

Conclusion

-

Dreamup 1 3 3 8 Exe Download is a useful tool for flashing your Dreambox device with new images. It is free, easy to use and supports all models of Dreambox devices. It can help you fix some issues with your device, as well as enhance its performance and features. However, you should always download it from a reliable source, update it regularly and consider some alternatives if you want more features or options. You should also choose the best image for your device, backup and restore your settings before flashing and troubleshoot any problems that might occur. We hope this article has helped you learn more about Dreamup 1 3 3 8 Exe Download and how to use it.

-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Anger of Stick 5 Mod APK How to Get Free Money and Unlock All Levels.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Anger of Stick 5 Mod APK How to Get Free Money and Unlock All Levels.md deleted file mode 100644 index c42b4e9869d3305ccfa104f37cd20e4914bd3500..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Anger of Stick 5 Mod APK How to Get Free Money and Unlock All Levels.md +++ /dev/null @@ -1,119 +0,0 @@ -
-

How to Download Hack Anger of Stick 5 and Enjoy Unlimited Fun

-

Are you a fan of stickman action games? Do you love fighting zombies and enemies with your stick friends? If yes, then you must have heard of Anger of Stick 5, one of the most popular stickman games on Android and iOS devices.

-

Anger of Stick 5 is a thrilling game that lets you control a stickman hero and his allies as they fight against a strange group of enemies that have captured innocent people and turned them into zombies. You can use various weapons, skills, helicopters, robots, and more to defeat your foes and save the city.

-

download hack anger of stick 5


Download File ✺✺✺ https://urlin.us/2uST3m



-

But what if you want to have more fun and excitement in this game? What if you want to unlock all the features, items, modes, and levels without spending any money or time? What if you want to become the ultimate stickman warrior and dominate every battle?

-

Well, there is a way to do that. You can download hack Anger of Stick 5 and enjoy unlimited fun in this game. In this article, we will tell you everything you need to know about hack Anger of Stick 5, including what it is, why you need it, how to download it, how to use it, and what risks are involved. So, let's get started!

-

What is Anger of Stick 5?

-

Anger of Stick 5 is a stickman action game developed by J-PARK. It is available on both Android and iOS platforms. It has over 100 million downloads on Google Play Store and over 10 million downloads on App Store. It has a rating of 4.5 stars out of 5 on both platforms.

-

The game has two modes: single mode and zombie mode. In single mode, you can choose from six different stickman heroes, each with their own skills and abilities. You can also recruit up to three allies to help you in your missions. You can upgrade your weapons and skills as you progress in the game. You can also use helicopters, robots, and mechs to enhance your firepower and mobility.

-

In zombie mode, you can fight against endless waves of zombies and other enemies. You can use various weapons and items to survive as long as possible. You can also compete with other players on the leaderboard and see who can score the highest.

-

Anger of Stick 5 is a fun and addictive game that will keep you entertained for hours. However, it is not an easy game. You will face many challenges and difficulties as you play. You will need a lot of coins, gems, and energy to unlock all the features, items, modes, and levels in the game. You will also need a lot of skill and strategy to win every battle.

-

download anger of stick 5 mod apk unlimited money
-how to hack anger of stick 5 zombie with lucky patcher
-anger of stick 5 cheats codes for android
-download anger of stick 5 mod menu apk
-anger of stick 5 hack version download for pc
-download anger of stick 5 zombie mod apk latest version
-anger of stick 5 unlimited coins and gems hack
-how to get free diamonds in anger of stick 5
-download anger of stick 5 mod apk revdl
-anger of stick 5 hack online generator
-download anger of stick 5 mod apk happymod
-anger of stick 5 hack tool no survey no password
-download anger of stick 5 mod apk android 1
-how to unlock all characters in anger of stick 5 hack
-anger of stick 5 hack apk download uptodown
-download anger of stick 5 mod apk rexdl
-anger of stick 5 hack without human verification
-how to get unlimited health in anger of stick 5
-download anger of stick 5 mod apk an1.com[^1^]
-anger of stick 5 hack ios download
-download anger of stick 5 mod apk offline
-how to hack anger of stick 5 with game guardian
-anger of stick 5 cheat engine for windows
-download anger of stick 5 mod apk pure
-anger of stick 5 hack apk mediafıre link
-download anger of stick 5 mod apk obb
-how to hack anger of stick 5 with root
-anger of stick 5 cheat codes for ios
-download anger of stick 5 mod apk unlimited diamonds
-how to get free weapons in anger of stick 5

-

That's why you might want to download hack Anger of Stick 5 and enjoy unlimited fun in this game.

-

Why Do You Need Hack Anger of Stick 5?

-

Hack Anger of Stick 5 is a tool that can help you modify the game and get unlimited resources, such as coins, gems, energy, weapons, skills, helicopters, robots, mechs, and more. With hack Anger of Stick 5, you can enjoy the following benefits:

- -

On the other hand, if you play without hack Anger of Stick 5, you might face the following drawbacks:

- -

As you can see, hack Anger of Stick 5 can make a huge difference in your gaming experience. It can make the game more enjoyable and rewarding for you. It can also save you a lot of time and money that you would otherwise spend on the game.

-

How to Download Hack Anger of Stick 5?

-

If you are convinced that hack Anger of Stick 5 is what you need to have more fun in this game, then you might be wondering how to download it. Well, it's not that hard. You just need to follow these simple steps:

-
    -
  1. Find a reliable website that offers hack tools for Anger of Stick 5. You can search on Google or ask your friends for recommendations.
  2. Choose the hack tool that suits your needs and preferences. There are different types of hack tools for Anger of Stick 5, such as mod apk files, online generators, cheat codes, etc. Each type has its own advantages and disadvantages. You should read the reviews and ratings of each hack tool before downloading it.
  3. Download the hack tool from the website. Make sure that the website is safe and secure. Avoid downloading from suspicious or unknown sources that might contain viruses or malware.
  4. Install the hack tool on your device. If you are using a mod apk file, you will need to enable unknown sources in your device settings before installing it. If you are using an online generator or cheat code, you will need to enter your username or email address associated with your game account before generating or activating it.
  5. Launch the hack tool and enjoy unlimited fun in Anger of Stick 5!

How to Use Hack Anger of Stick 5?

    -

    Now that you have downloaded hack Anger of Stick 5, you might be wondering how to use it. Well, it's not that hard either. You just need to follow these simple tips and tricks:

    - -

    That's it! You have successfully used hack Anger of Stick 5 and made the game more enjoyable and rewarding for yourself. You can now play as long as you want without any limitations or restrictions. You can now unlock all the features, items, modes, and levels in the game without any hassle or frustration. You can now become the ultimate stickman warrior and dominate every battle.

    -

    What are the Risks of Using Hack Anger of Stick 5?

    -

    However, before you get too excited and start using hack Anger of Stick 5, you should also be aware of the risks involved. Using hack tools for any game is not without consequences. You might face some potential dangers or problems if you use hack Anger of Stick 5. Here are some of them:

    - -

    As you can see, using hack Anger of Stick 5 is not without risks. You might end up losing more than what you gain if you use hack tools for this game. You might also ruin the fun and challenge of the game by making it too easy or unfair.

    -

    So, how can you avoid or minimize these risks? Here are some ways:

    - -

    By following these tips, you can reduce the chances of getting into trouble or harm when using hack Anger of Stick 5. You can also enjoy the game more without compromising its quality or integrity.

    -

    Conclusion

    -

    In conclusion, hack Anger of Stick 5 is a tool that can help you modify the game and get unlimited resources, such as coins, gems, energy, weapons, skills, helicopters, robots, mechs, and more. It can make the game more fun and exciting for you. It can also save you a lot of time and money that you would otherwise spend on the game.

    -

    However, hack Anger of Stick 5 is not without risks. You might get banned from the game or lose your game account. You might get infected with viruses or malware. You might get scammed or cheated. You might also ruin the fun and challenge of the game by making it too easy or unfair.

    -

    Therefore, you should use hack Anger of Stick 5 at your own risk and discretion. You should understand the consequences and responsibilities of using hack tools for this game. You should also respect the rights and rules of the developers and other players of this game.

    -

    You should also use hack Anger of Stick 5 sparingly and moderately. You should not abuse or overuse hack tools for this game. You should also not use hack tools for this game in competitive or multiplayer modes where they can affect other players negatively.

    -

    Finally, you should use hack Anger of Stick 5 from reputable and verified sources only. You should do some research and check the reviews and ratings of each website or source that offers hack tools for this game. You should also scan and test each file or link before downloading or using it.

    -

    By following these tips, you can reduce the chances of getting into trouble or harm when using hack Anger of Stick 5. You can also enjoy the game more without compromising its quality or integrity.

    -

    We hope that this article has helped you learn how to download hack Anger of Stick 5 and enjoy unlimited fun in this game. If you have any questions or comments, feel free to leave them below. We would love to hear from you!

    -

    FAQs

    -

    Here are some frequently asked questions about hack Anger of Stick 5:

    -

    Q: Is hack Anger of Stick 5 legal?

    -

    A: Hack Anger of Stick 5 is not legal. It is against the terms of service and policies of the developers of Anger of Stick 5. It is also considered as cheating or hacking by other players of this game. Therefore, using hack Anger of Stick 5 can result in legal actions or penalties from the developers or other players.

    -

    Q: Is hack Anger of Stick 5 safe?

    -

    A: Hack Anger of Stick 5 is not safe. It can expose your device or data to viruses or malware. It can also expose your personal or financial information to phishing or scam sites. It can also expose your game account to suspension or termination. Therefore, using hack Anger of Stick 5 can result in safety issues or problems for you.

    -

    Q: Is hack Anger of Stick 5 free?

    -

    A: Hack Anger of Stick 5 is not free. It can cost you money or time to download or use it. It can also cost you money or time to fix any issues or problems that it might cause for your device, data, or game account. Therefore, using hack Anger of Stick 5 can result in hidden fees or subscriptions for you.

    -

    Q: Is hack Anger of Stick 5 worth it?

    -

    A: Hack Anger of Stick 5 is not worth it. It can ruin the fun and challenge of the game by making it too easy or unfair. It can also ruin the quality and integrity of the game by modifying it without permission or authorization. It can also ruin your reputation and relationship with other players by cheating or hacking in this game. Therefore, using hack Anger of Stick 5 can result in negative impacts or outcomes for you.

    -

    Q: Is there an alternative to hack Anger of Stick 5?

    -

    A: Yes, there is an alternative to hack Anger of Stick 5. You can play the game without using any hack tools and enjoy it as it is meant to be played. You can earn coins, gems, energy, weapons, skills, helicopters, robots, mechs, and more by playing the game fairly and honestly. You can also improve your skill and strategy by playing the game regularly and diligently. You can also interact with other players by playing the game cooperatively and competitively. Therefore, playing the game without using any hack tools can result in positive experiences or benefits for you.

    -
    -
    \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Basketball Grand Slam APK and Compete with Legendary Players.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Basketball Grand Slam APK and Compete with Legendary Players.md deleted file mode 100644 index eb2eb5d7de008562d9915634ff7603e01c29fa96..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Download Basketball Grand Slam APK and Compete with Legendary Players.md +++ /dev/null @@ -1,148 +0,0 @@ - -

    Basketball Grand Slam APK: A Real-Time Competitive Basketball Game

    -

    Introduction

    -

    If you are a fan of basketball and want to experience the thrill of real-time competition on your mobile device, then you should check out Basketball Grand Slam APK. This is a game that lets you play with or against other players from around the world in various modes and events. You can also unlock and use legendary players with different skills and characteristics to form your own super lineup.

    -

    basketball grand slam apk


Download Zip https://urlin.us/2uSYVJ



    -

    Some of the main features of Basketball Grand Slam APK are:

    - -

    To download and install Basketball Grand Slam APK, you need to follow these steps:

    -
      -
    1. Go to [Basketball Grand Slam APK (Android Game) - Free Download](^1^) or [Basketball Grand Slam for Android - Download the APK from Uptodown](^2^) and click on the download button.
    2. -
    3. Once the download is complete, open the file and follow the instructions to install the game on your device.
    4. -
    5. Launch the game and enjoy playing with other basketball fans.
    6. -
    -

    Game Modes

    -

    3v3 Qualifying Mode

    -

    This is the mode where you can compete with other players in a 3v3 format. You can either join a random team or invite your friends to form your own team. The goal is to win as many matches as possible and climb up the ranking ladder. You can also earn rewards such as coins, gems, chests, and tickets by playing this mode.

    -

    The benefits of playing this mode are:

    - -

Bullfighting Grand Prix Mode

    -

    This is the mode where you can participate in various events and tournaments that have different rules and rewards. You can choose from different difficulty levels and modes such as knockout, round robin, and ladder. You can also customize your own event and invite other players to join. The goal is to win as many matches as possible and earn trophies and prizes.

    -

    The challenges of playing this mode are:

    - -

    Hot Three-Point Ball Competition

    -

    This is the mode where you can show off your shooting skills and compete with other players in a hot three-point ball competition. You can choose from different courts and backgrounds that have different effects on your shooting. You can also use different props and items that can enhance or hinder your performance. The goal is to score as many points as possible within the time limit and beat your opponents.

    -

    The skills required for playing this mode are:

    - -

    Legendary Players

    -

    How to unlock legendary players?

    -

    To unlock legendary players, you need to collect their cards and fragments. You can get them from various sources such as chests, events, rewards, and shops. You can also exchange them with other players or use gems to buy them. Once you have enough cards and fragments, you can activate and upgrade the legendary players in your lineup.

    -

    basketball grand slam game download
    -basketball grand slam android app
    -basketball grand slam free apk
    -basketball grand slam latest version
    -basketball grand slam wang lan
    -basketball grand slam real-time competitive
    -basketball grand slam legendary players
    -basketball grand slam 3v3 qualifying mode
    -basketball grand slam bullfighting grand prix mode
    -basketball grand slam three-point ball competition
    -basketball grand slam street basketball game
    -basketball grand slam fan page address
    -basketball grand slam customer service email
    -basketball grand slam apkcombo games sports
    -basketball grand slam uptodown android games sports
    -basketball grand slam apk file size
    -basketball grand slam apk install guide
    -basketball grand slam apk update history
    -basketball grand slam apk reviews and ratings
    -basketball grand slam apk screenshots and videos
    -basketball grand slam apk mod unlimited money
    -basketball grand slam apk offline play
    -basketball grand slam apk compatible devices
    -basketball grand slam apk download link
    -basketball grand slam apk mirror link
    -basketball grand slam apk alternative apps
    -basketball grand slam apk similar games
    -basketball grand slam apk tips and tricks
    -basketball grand slam apk cheats and hacks
    -basketball grand slam apk gameplay features
    -basketball grand slam apk system requirements
    -basketball grand slam apk bugs and issues
    -basketball grand slam apk feedback and suggestions
    -basketball grand slam apk questions and answers
    -basketball grand slam apk news and updates
    -basketball grand slam apk release date and version number
    -basketball grand slam apk developer information and contact details
    -basketball grand slam apk license and terms of service
    -basketball grand slam apk privacy policy and data usage
    -basketball grand slam apk security and virus scan results

    -

    What are the different types of legendary players?

    -

    There are different types of legendary players that have different attributes and skills. They are divided into four categories: lone hero, mercury diarrhea, rebound king, and assist master. Here are some examples of each category:

    -

    Lone Hero

    -

    This type of legendary player is good at scoring by themselves. They have high offensive stats and skills that can help them break through the defense and make difficult shots. They are also good at creating their own space and opportunities. However, they may not be very good at passing or cooperating with their teammates. Some examples of this type are Kobe Bryant, Michael Jordan, Allen Iverson, etc.

    -

    Mercury Diarrhea

    -

    This type of legendary player is good at running fast and changing directions. They have high speed and agility stats and skills that can help them outrun their opponents and make quick moves. They are also good at stealing the ball and making fast breaks. However, they may not be very good at shooting or defending against bigger players. Some examples of this type are Stephen Curry, Kyrie Irving, Derrick Rose, etc.

    Rebound King

    -

    This type of legendary player is good at grabbing rebounds and controlling the boards. They have high strength and jumping stats and skills that can help them dominate the paint and secure the ball. They are also good at blocking shots and protecting the rim. However, they may not be very good at dribbling or shooting from long range. Some examples of this type are Shaquille O'Neal, Wilt Chamberlain, Dennis Rodman, etc.

    -

    Assist Master

    -

    This type of legendary player is good at passing and assisting their teammates. They have high vision and intelligence stats and skills that can help them find the open man and create chances. They are also good at controlling the tempo and orchestrating the offense. However, they may not be very good at scoring by themselves or defending against faster players. Some examples of this type are Magic Johnson, Steve Nash, John Stockton, etc.

    -

    Tips and Tricks

    -

    How to improve your operation and hand feel?

    -

    To improve your operation and hand feel, you need to practice and familiarize yourself with the game controls and mechanics. You can use the training mode to learn the basic moves and skills of each player. You can also adjust the sensitivity and feedback settings to suit your preference. You can also watch some tutorials and guides online to learn some tips and tricks from other players.

    -

    How to use different skills and tactics?

    -

    To use different skills and tactics, you need to know the strengths and weaknesses of each player and team. You can check the stats and attributes of each player in your lineup and choose the ones that match your style and strategy. You can also use the skill buttons to activate different skills such as crossover, dunk, block, etc. You can also use the tactic buttons to switch between different tactics such as man-to-man, zone, pick-and-roll, etc.

    -

    How to cooperate with your teammates?

    -

    To cooperate with your teammates, you need to communicate and coordinate with them. You can use the chat function or voice chat function to talk to your teammates and share information and ideas. You can also use the gesture function or emoji function to express your emotions and reactions. You can also use the pass button or assist button to pass the ball or assist your teammates.

    -

    Conclusion

    -

    Basketball Grand Slam APK is a game that allows you to enjoy the excitement and fun of basketball on your mobile device. You can play with or against other players from around the world in various modes and events. You can also unlock and use legendary players with different skills and characteristics to form your own super lineup. If you are a basketball fan, you should not miss this game.

    -

    So what are you waiting for? Download Basketball Grand Slam APK now and start playing with other basketball fans. You will not regret it!

    -

    FAQs

    -

    Here are some of the frequently asked questions about Basketball Grand Slam APK:

    - -

    Beneficios sociales

    - -

    Beneficios ambientales

    - -

    Consejos para carreras de bicicletas

    -

    Si estás interesado en las carreras de motos, aquí hay algunos consejos sobre cómo empezar o mejorar tu rendimiento:

    -

    Entrenamiento

    - -

    Nutrición

    - -

    Equipo

    - -

    Seguridad

    - -

    Técnica

    - -

    Conclusión

    -

    Las carreras de bicicletas son un deporte divertido y saludable que puede ofrecerle muchos beneficios para su bienestar físico, mental, social y ambiental. También puede desafiarle a mejorar sus habilidades y rendimiento en varios tipos de carreras de bicicletas, como carretera, montaña, pista, BMX o ciclocross. Si usted es un principiante o un experto, las carreras de bicicletas pueden ser una experiencia gratificante y agradable para usted.

    -

    Si usted está interesado en las carreras de bicicletas, esperamos que este artículo le ha dado información útil y consejos sobre cómo empezar o mejorar su rendimiento. Recuerda entrenar inteligentemente, comer saludablemente, equiparte apropiadamente, mantenerte seguro y divertirte. ¡Feliz carrera de bicicletas!

    - -

    ¿Cuáles son las mejores bicicletas para las carreras de bicicletas?

    -

    Las mejores bicicletas para las carreras de bicicletas dependen del tipo de carreras de bicicletas que quieras hacer. Para las carreras de bicicleta de carretera, necesita una bicicleta de carretera que sea ligera, aerodinámica y rápida. Para las carreras de bicicleta de montaña, necesita una bicicleta de montaña que sea resistente, estable y versátil. Para las carreras de bicicleta de pista, necesita una bicicleta de pista que sea simple, rígida y ágil. Para las carreras de BMX, necesitas una bicicleta BMX pequeña, duradera y maniobrable. Para las carreras de ciclocross, necesitas una bicicleta ciclocross que sea similar a una bicicleta de carretera pero con neumáticos más anchos, marchas más bajas y mejores frenos.

    -

    ¿Cómo entreno para las carreras de bicicletas?

    -

    Para entrenar en las carreras de bicicletas, necesitas seguir un plan de entrenamiento estructurado y progresivo que incluya una variedad de entrenamientos, como intervalos, tempo, resistencia, recuperación, etc. También necesitas controlar la intensidad de tu entrenamiento, duración, frecuencia y recuperación utilizando un monitor de frecuencia cardíaca, un medidor de potencia, un dispositivo GPS o una aplicación. También debe realizar un seguimiento de su progreso y ajustar su plan de capacitación en consecuencia utilizando un registro de capacitación, un diario o una plataforma en línea. También necesitas buscar orientación profesional o unirte a un club o grupo si necesitas más apoyo, comentarios o motivación.

    -

    ¿Qué debo comer antes, durante y después de una carrera de bicicletas?

    -

    Antes de una carrera en bicicleta, debe comer una comida previa a la carrera que sea alta en carbohidratos, moderada en proteínas, baja en grasas y fácil de digerir al menos 2-3 horas antes de la carrera. Durante una carrera en bicicleta, debes comer barritas energéticas, geles, masticables o frutas para mantener tus niveles de azúcar en la sangre y prevenir la fatiga. También debe beber mucha agua o bebidas deportivas para mantenerse hidratado y reponer sus electrolitos. Después de una carrera en bicicleta, debe tener una comida post-carrera que es alta en proteínas, moderada en carbohidratos, baja en grasa y rica en antioxidantes dentro de los 30 minutos después de la carrera para reparar sus músculos y reducir la inflamación.

    - -

    Para prevenir lesiones o enfermedades de las carreras de bicicletas, es necesario calentar correctamente antes de la carrera haciendo algunos ejercicios de cardio y estiramiento ligeros para preparar los músculos y las articulaciones. También necesitas refrescarte adecuadamente después de la carrera haciendo ejercicios de cardio y estiramiento suaves para relajar tus músculos y articulaciones. También es necesario evitar el entrenamiento excesivo o insuficiente al escuchar a su cuerpo y descansar cuando sea necesario. También necesita usar el equipo adecuado, seguir las reglas, mantenerse alerta y buscar atención médica si es necesario.

    -

    ¿Cómo respeto el medio ambiente cuando corro en bicicleta?

    -

    Respetar Para respetar el medio ambiente en las carreras de bicicletas, es necesario seguir los principios de no dejar rastro, tales como la eliminación de sus residuos correctamente, minimizando su impacto y dejando lo que encuentra. También necesita conservar sus recursos naturales y vida silvestre evitando áreas sensibles, permaneciendo en senderos designados y no perturbar o dañar ninguna planta o animal. También necesita mejorar su conciencia y responsabilidad ambiental al educarse a sí mismo y a otros sobre los problemas y soluciones relacionados con las carreras de bicicletas. También necesita apoyar a su comunidad y causas participando en eventos de caridad o como voluntario para organizaciones relacionadas con la bicicleta. También necesitas inspirar tu activismo y activismo ambiental apoyando o uniéndote a iniciativas o movimientos amigables con la bicicleta. También necesitas mejorar tu apreciación y disfrute ambiental al experimentar la belleza y diversidad de la naturaleza.

    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Apk De Choque Royale Lite.md b/spaces/Benson/text-generation/Examples/Descargar Apk De Choque Royale Lite.md deleted file mode 100644 index 0c89bd2c68bcd99ecb75fe717a9c8bb649dfc66b..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Apk De Choque Royale Lite.md +++ /dev/null @@ -1,98 +0,0 @@ -
    -

    Choque Royale Lite APK Descargar: Cómo jugar el juego de estrategia popular en dispositivos de gama baja

    -

    Clash Royale es uno de los juegos móviles más populares del mundo, con millones de jugadores disfrutando de su juego rápido y adictivo. Sin embargo, no todo el mundo tiene un dispositivo de alta gama que puede ejecutar el juego sin problemas y sin retraso. Si usted es una de esas personas que aman Clash Royale pero tienen un dispositivo de gama baja, no se preocupe. Hay una solución para usted: Clash Royale Lite.

    -

    Clash Royale Lite es una versión modificada de Clash Royale que está diseñada para funcionar en dispositivos de gama baja con menos RAM y espacio de almacenamiento. Tiene todas las características y la diversión del juego original, pero con gráficos reducidos y tamaño de archivo. En este artículo, te mostraremos cómo descargar e instalar Clash Royale Lite en tu dispositivo Android, cómo jugar y disfrutar del juego, y cómo evitar posibles riesgos o problemas. ¡Vamos a empezar!

    -

    descargar apk de choque royale lite


    Download ————— https://bltlly.com/2v6L2w



    -

    ¿Qué es Clash Royale Lite?

    -

    Una breve introducción a Clash Royale y sus características

    -

    Clash Royale es un juego de estrategia en tiempo real desarrollado por Supercell, los creadores de Clash of Clans, Brawl Stars, Hay Day y más. Fue lanzado en 2016 y desde entonces se ha convertido en uno de los juegos móviles más exitosos de la historia. En Clash Royale, coleccionas y mejoras cartas con personajes, hechizos y edificios del universo Clash. Utiliza estas cartas para construir tu mazo de batalla y luchar contra otros jugadores en línea en duelos de ritmo rápido. El objetivo es destruir las torres de tu oponente mientras defiendes las tuyas. También puedes unirte o crear clanes, participar en torneos, eventos, desafíos y más.

    -

    La diferencia entre Clash Royale y Clash Royale Lite

    -

    Clash Royale Lite es una versión modificada de Clash Royale que está optimizada para dispositivos de gama baja. Tiene la misma jugabilidad y características que el juego original, pero con algunas diferencias:

    - -

    Estas diferencias hacen Clash Royale Lite más accesible y agradable para los jugadores que tienen dispositivos de gama baja o conexión a Internet limitada.

    -

    Los beneficios de jugar Clash Royale Lite

    -

    Jugar a Clash Royale Lite tiene varios beneficios para los jugadores que aman el juego pero tienen dispositivos de gama baja. Algunos de estos beneficios son:

    - -

    Jugar a Clash Royale Lite no significa que te estés perdiendo nada. Todavía puedes divertirte y competir con otros jugadores de todo el mundo.

    -

    -

    Cómo descargar e instalar Clash Royale Lite en su dispositivo Android

    -

    Los requisitos y la compatibilidad de Clash Royale Lite

    -

    Clash Royale Lite es compatible con la mayoría de los dispositivos Android que tienen al menos 1 GB de RAM y Android 4.4 o superior. Sin embargo, algunos dispositivos pueden no ser capaces de ejecutar el juego correctamente debido a limitaciones de hardware o problemas de software. Para comprobar si su dispositivo es compatible, puede visitar el sitio web oficial de Clash Royale Lite y ver la lista de dispositivos compatibles. También puede ponerse en contacto con los desarrolladores si tiene alguna pregunta o problema con respecto a la compatibilidad de su dispositivo.

    -

    Los pasos para descargar e instalar Clash Royale Lite desde una fuente de confianza

    -

    Clash Royale Lite no está disponible en Google Play Store, por lo que tendrá que descargar e instalar desde una fuente de confianza. Estos son los pasos para hacerlo:

    -
      - -
    1. Antes de descargar el archivo APK, asegúrese de que ha habilitado la opción de instalar aplicaciones de fuentes desconocidas en su dispositivo. Para hacer esto, vaya a Configuración > Seguridad > Fuentes desconocidas y conéctelo.
    2. -
    3. Una vez que haya descargado el archivo APK, localizarlo en su dispositivo y toque en él para iniciar el proceso de instalación. Siga las instrucciones de la pantalla y espere a que termine la instalación.
    4. -
    5. Después de que la instalación se haya completado, puede iniciar Clash Royale Lite desde el cajón de la aplicación o la pantalla de inicio y disfrutar del juego.
    6. -
    -

    Los consejos para evitar malware y virus al descargar archivos APK

    -

    Descargar archivos APK de fuentes desconocidas puede ser arriesgado, ya que pueden contener malware o virus que pueden dañar su dispositivo o robar su información personal. Para evitar esto, debes seguir estos consejos:

    - -

    Siguiendo estos consejos, puede asegurarse de que está descargando e instalando Clash Royale Lite de forma segura.

    -

    Cómo jugar y disfrutar de Clash Royale Lite

    -

    El juego básico y las reglas de Clash Royale Lite

    - -

    Las mejores estrategias y consejos para ganar batallas en Clash Royale Lite

    -

    Para ganar batallas en Clash Royale Lite, necesitas tener una buena estrategia y algunos consejos en mente. Estos son algunos de ellos:

    - -

    Las formas de recoger y actualizar tarjetas, unirse a clanes, y participar en eventos en Clash Royale Lite

    -

    Clash Royale Lite tiene las mismas formas de recoger y actualizar tarjetas, unirse a clanes, y participar en eventos como Clash Royale. Puedes hacer lo siguiente:

    - -

    Al hacer estas cosas, puedes mejorar tu experiencia de juego y divertirte más con Clash Royale Lite.

    -

    Conclusión

    -

    Un resumen de los puntos principales del artículo

    -

    En conclusión, Clash Royale Lite es una gran alternativa para los jugadores que aman Clash Royale pero tienen dispositivos de gama baja. Tiene todas las características y la diversión del juego original, pero con gráficos reducidos y tamaño de archivo. Es fácil de descargar e instalar desde una fuente de confianza, y es seguro jugar si sigues algunos consejos. También tiene la misma jugabilidad y reglas que Clash Royale, pero con algunos consejos y estrategias para ayudarte a ganar batallas. También puedes recoger y actualizar cartas, unirte a clanes y participar en eventos en Clash Royale Lite.

    -

    Un llamado a la acción para que los lectores prueben Clash Royale Lite

    -

    Si usted está buscando una manera de jugar Clash Royale en su dispositivo de gama baja sin ningún problema, definitivamente debe probar Clash Royale Lite. Es un juego divertido y emocionante que te mantendrá entretenido durante horas. Puede descargarlo desde el sitio web oficial de Clash Royale Lite o desde otros sitios web de renombre. También puedes compartirlo con tus amigos que tienen dispositivos de gama baja y juegan juntos. ¿Qué estás esperando? Descargar Clash Royale Lite hoy y disfrutar del juego!

    -

    Preguntas frecuentes

    -

    ¿Es seguro descargar y jugar Clash Royale Lite?

    - -

    ¿Puedo jugar Clash Royale Lite con mis amigos que tienen Clash Royale?

    -

    No, Clash Royale Lite no es compatible con Clash Royale, por lo que no puedes jugar con tus amigos que tienen Clash Royale. Sin embargo, puedes jugar con tus amigos que tienen Clash Royale Lite agregándolos como amigos en el juego o uniéndote al mismo clan que ellos.

    -

    ¿Cuánto espacio de almacenamiento ocupa Clash Royale Lite en mi dispositivo?

    -

    Clash Royale Lite ocupa alrededor de 150 MB de espacio de almacenamiento en su dispositivo, en comparación con 445 MB para Clash Royale. Esto significa que puede ahorrar mucho espacio de almacenamiento en su dispositivo jugando Clash Royale Lite en lugar de Clash Royale.

    -

    ¿Con qué frecuencia se actualiza Clash Royale Lite con nuevas características y contenido?

    -

    Clash Royale Lite se actualiza regularmente con nuevas características y contenido, al igual que Clash Royale. Puede esperar ver nuevas tarjetas, arenas, modos, eventos, cambios de equilibrio, correcciones de errores y más en cada actualización. Puede consultar el sitio web oficial de Clash Royale Lite o seguir sus cuentas de redes sociales para mantenerse al día sobre las últimas noticias y actualizaciones.

    -

    ¿Cuáles son algunos otros juegos como Clash Royale que puedo jugar en mi dispositivo?

    -

    Si te gusta jugar juegos de estrategia como Clash Royale, es posible que también te guste jugar otros juegos similares o relacionados con él. Algunos de estos juegos son:

    - -

    Estos son algunos de los juegos que puedes jugar en tu dispositivo si te gusta Clash Royale. Puedes encontrarlos en Google Play Store u otras fuentes.

    -
    -
    \ No newline at end of file diff --git a/spaces/BernardoOlisan/vqganclip/CLIP/model-card.md b/spaces/BernardoOlisan/vqganclip/CLIP/model-card.md deleted file mode 100644 index 2d22e25bea89fdbccdaa2809fbeb83e0a7cfaa07..0000000000000000000000000000000000000000 --- a/spaces/BernardoOlisan/vqganclip/CLIP/model-card.md +++ /dev/null @@ -1,120 +0,0 @@ -# Model Card: CLIP - -Inspired by [Model Cards for Model Reporting (Mitchell et al.)](https://arxiv.org/abs/1810.03993) and [Lessons from Archives (Jo & Gebru)](https://arxiv.org/pdf/1912.10389.pdf), we’re providing some accompanying information about the multimodal model. - -## Model Details - -The CLIP model was developed by researchers at OpenAI to learn about what contributes to robustness in computer vision tasks. The model was also developed to test the ability of models to generalize to arbitrary image classification tasks in a zero-shot manner. It was not developed for general model deployment - to deploy models like CLIP, researchers will first need to carefully study their capabilities in relation to the specific context they’re being deployed within. - -### Model Date - -January 2021 - -### Model Type - -The base model uses a ResNet50 with several modifications as an image encoder and uses a masked self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss. There is also a variant of the model where the ResNet image encoder is replaced with a Vision Transformer. - -### Model Versions - -Initially, we’ve released one CLIP model based on the Vision Transformer architecture equivalent to ViT-B/32, along with the RN50 model, using the architecture equivalent to ResNet-50. - -As part of the staged release process, we have also released the RN101 model, as well as RN50x4, a RN50 scaled up 4x according to the [EfficientNet](https://arxiv.org/abs/1905.11946) scaling rule. In July 2021, we additionally released the RN50x16 and ViT-B/16 models. - -Please see the paper linked below for further details about their specification. - -### Documents - -- [Blog Post](https://openai.com/blog/clip/) -- [CLIP Paper](https://arxiv.org/abs/2103.00020) - - - -## Model Use - -### Intended Use - -The model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot, arbitrary image classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models - the CLIP paper includes a discussion of potential downstream impacts to provide an example for this sort of analysis. - -#### Primary intended uses - -The primary intended users of these models are AI researchers. - -We primarily imagine the model will be used by researchers to better understand robustness, generalization, and other capabilities, biases, and constraints of computer vision models. - -### Out-of-Scope Use Cases - -**Any** deployed use case of the model - whether commercial or not - is currently out of scope. Non-deployed use cases such as image search in a constrained environment, are also not recommended unless there is thorough in-domain testing of the model with a specific, fixed class taxonomy. This is because our safety assessment demonstrated a high need for task specific testing especially given the variability of CLIP’s performance with different class taxonomies. This makes untested and unconstrained deployment of the model in any use case currently potentially harmful. 
- -Certain use cases which would fall under the domain of surveillance and facial recognition are always out-of-scope regardless of performance of the model. This is because the use of artificial intelligence for tasks such as these can be premature currently given the lack of testing norms and checks to ensure its fair use. - -Since the model has not been purposefully trained in or evaluated on any languages other than English, its use should be limited to English language use cases. - - - -## Data - -The model was trained on publicly available image-caption data. This was done through a combination of crawling a handful of websites and using commonly-used pre-existing image datasets such as [YFCC100M](http://projects.dfki.uni-kl.de/yfcc100m/). A large portion of the data comes from our crawling of the internet. This means that the data is more representative of people and societies most connected to the internet which tend to skew towards more developed nations, and younger, male users. - -### Data Mission Statement - -Our goal with building this dataset was to test out robustness and generalizability in computer vision tasks. As a result, the focus was on gathering large quantities of data from different publicly-available internet data sources. The data was gathered in a mostly non-interventionist manner. However, we only crawled websites that had policies against excessively violent and adult images and allowed us to filter out such content. We do not intend for this dataset to be used as the basis for any commercial or deployed model and will not be releasing the dataset. - - - -## Performance and Limitations - -### Performance - -We have evaluated the performance of CLIP on a wide range of benchmarks across a variety of computer vision datasets such as OCR to texture recognition to fine-grained classification. The paper describes model performance on the following datasets: - -- Food101 -- CIFAR10 -- CIFAR100 -- Birdsnap -- SUN397 -- Stanford Cars -- FGVC Aircraft -- VOC2007 -- DTD -- Oxford-IIIT Pet dataset -- Caltech101 -- Flowers102 -- MNIST -- SVHN -- IIIT5K -- Hateful Memes -- SST-2 -- UCF101 -- Kinetics700 -- Country211 -- CLEVR Counting -- KITTI Distance -- STL-10 -- RareAct -- Flickr30 -- MSCOCO -- ImageNet -- ImageNet-A -- ImageNet-R -- ImageNet Sketch -- ObjectNet (ImageNet Overlap) -- Youtube-BB -- ImageNet-Vid - -## Limitations - -CLIP and our analysis of it have a number of limitations. CLIP currently struggles with respect to certain tasks such as fine grained classification and counting objects. CLIP also poses issues with regards to fairness and bias which we discuss in the paper and briefly in the next section. Additionally, our approach to testing CLIP also has an important limitation- in many cases we have used linear probes to evaluate the performance of CLIP and there is evidence suggesting that linear probes can underestimate model performance. - -### Bias and Fairness - -We find that the performance of CLIP - and the specific biases it exhibits - can depend significantly on class design and the choices one makes for categories to include and exclude. We tested the risk of certain kinds of denigration with CLIP by classifying images of people from [Fairface](https://arxiv.org/abs/1908.04913) into crime-related and non-human animal categories. We found significant disparities with respect to race and gender. Additionally, we found that these disparities could shift based on how the classes were constructed. 
(Details captured in the Broader Impacts Section in the paper). - -We also tested the performance of CLIP on gender, race and age classification using the Fairface dataset (We default to using race categories as they are constructed in the Fairface dataset.) in order to assess quality of performance across different demographics. We found accuracy >96% across all races for gender classification with ‘Middle Eastern’ having the highest accuracy (98.4%) and ‘White’ having the lowest (96.5%). Additionally, CLIP averaged ~93% for racial classification and ~63% for age classification. Our use of evaluations to test for gender, race and age classification as well as denigration harms is simply to evaluate performance of the model across people and surface potential risks and not to demonstrate an endorsement/enthusiasm for such tasks. - - - -## Feedback - -### Where to send questions or comments about the model - -Please use [this Google Form](https://forms.gle/Uv7afRH5dvY34ZEs9) diff --git a/spaces/BiTransSciencia/www/index.html b/spaces/BiTransSciencia/www/index.html deleted file mode 100644 index 5619f092a9505f8f6e097d6f7be5bdeb435c402a..0000000000000000000000000000000000000000 --- a/spaces/BiTransSciencia/www/index.html +++ /dev/null @@ -1,753 +0,0 @@ - - - - - - - - - - - - - - BiTransSciencia [081] - - - - - -
    - - - - - - - logo__081 - - -
    - -

    'BiTransSciencia [081]' Evolution Tree (E.T.): __0_0_0__

    - -
    - -
    - Definition -

- The 'BiTransSciencia'[numeric representation: '081'] is a 'Transmission Oriented - Pseudo based - Closely-Opened System [__T_P_CO__.Ss]' based Data-Architectural Model for developing various - Software/Hardware Applications/Protocols using Computational Compiler/Interpreter based devices (such as - Computers). The 'BiTransSciencia' was inspired by the properties of living organisms in general, i.e. where and - how the Data is stored and accessed in these organisms.

    -

- The 'BiTransSciencia' mainly focuses on two aspects, i.e. 'Data Storage' and 'Transmission of Data' - with/without manipulation of data. On the basis of these aspects the Architecture was developed such that - the data transmits from one point to another using a transmission medium; where we generally name the two - data-points(from where the data is stored) as 'Server[0]' and 'Sensor(s)[1]', where the 'Server' is basically - inspired by the functioning of the 'Brain'/'CPU', 'Sensor(s)' have been inspired by the functioning of 'Sense - Organs in living organisms & the external components of a Computer - (Mouse,Keyboard,Speakers,Monitor,etc...)' and the 'transmission-media[8]' was inspired by the functionality of - 'Nerves in living organisms & wires/wireless functionality in Computer Devices'.
- Here the role of 'Server[0]/Sensors[1]' is to store data by itself or from 'Sensors[1]/Server[0]' - respectively via 'Transmission[8]' with/without manipulation of data. The data can only be manipulated at - 'Transmission[8]' after accessing and before storing the data (if required).

    -

    - 'BiTransSciencia' is a cyclic process where the data gets transmitted to and from 'Server↔Sensors' -

    -
    -
    -
    - Origin (Derivation of the term 'BiTransSciencia') -

    - The term 'BiTransSciencia' is the composition of three terms: Bi-Trans-Sciencia; where 'Bi' means 'Two'(Two - Data Points), 'Trans' means 'across-Transmission(here)' and 'Sciencia' means 'Science(Knowledge)'. -
- So, the term 'BiTransSciencia' provides the 'Knowledge about where and how the data is to be stored & - manipulated and gets transmitted across the points (Server and Sensor(s))'.

    -

    - The logo of 'BiTransSciencia' consists of the composition of numerics '0,8,1' which indicates - 'Data-Point-0(Server),Transmission Medium,Data-Point-1(Sensor)'[from left-right] and a schematic - transmission of data from 0↔1 via 8 along a line (irrespective of Size,Color and alignments) -
    - In the logo '0 & 1' were represented inside '8' for indication of cyclic transmission of data across 0 and 1 -

    -
    -
    -
    - Terminology in 'BiTransSciencia' -
    -
      -
    • - UniPolySciencia: Study of the Universe discovered by 'K.V.N.Aditya'(not yet - published 'E.T.: __0_0_0__'),The 'UniPolySciencia' is the composition of 'Uni:Mono Atomic - Particle','Poly: Complex Molecules' (definition is unstable* due to E.T.: __0_0_0__). -
    • -
    • - BiTransSciencia [081]: A Data-Architectural Model used for designing Hardware - Models and developing Software Applications which was discovered by 'K.V.N.Aditya'. -
    • -
• - System: The root Molecule/Folder which consists of all the Molecules/Folders - &(or)/ Particles/Files which aid in storing and transmitting data across Data-Points with some - unique characteristics (from 'UniPolySciencia')
    • -
    • - Surrounding(s): Except the System Molecule/Folder, all other Molecules/Folders - and Particles/Files in the Universe/a Device and where the Molecules & Particles can be inherited - from and to the System. (from 'UniPolySciencia') -
    • -
    • - Molecule(s): Composition of a Molecule or Composition of Particles (generally - known as 'Folders' according to Computer-Language) (from 'UniPolySciencia') -
    • -
    • - Particle(s): A Mono-Component where the data is actually stored (generally - known as 'Files' according to Computer-Language) (from 'UniPolySciencia') -
    • -
• - Data/Datum: A form of composition of energy which generates structure of - waves,etc... that can be used for Analyzing the Particle / to generate a Particle by manipulating the - Particle (Mono-Structured(File Transmitting without manipulating (File-Copy)) / Di-Structured(File - Transmitting with manipulating))
    • -
    • - Data-Point(s): Location/Address where the data is stored in a Particle -
    • -
• - Data-Transmission: Transmitting the data from one particle to another particle - with or with-out manipulating the data
    • -
• - Evolution: Changes that occur in the structure of the data with-in a Particle - which affect its root Molecule ('insertion','update/alteration','deletion') (from - 'UniPolySciencia')
    • -
    • - Evolution-Tree: Representation of the - System's/Surrounding's/Molecule's/Particle's Evolution in a structure of a tree (from - 'UniPolySciencia') -
    • -
    -
    - -
    - -
    - -
    - 'BiTransSciencia' Architecture Protocols -
    -
    - - structure__081 -
    Schematic representation of tree structure of 'BiTransSciencia [081]'
    -
    -
    -
    -

- The 'BiTransSciencia [081]' is initialized with two points where the data is stored. The - data may originate with-in the system or from its Surroundings. Since there are two data points and - the data is to be transmitted from one to another, there should be a medium to transmit the data... Here - comes the necessity of 'BiTransSciencia [081]' where '0 and 1' are the data points and '8' is the - transmitting medium. This transmitting medium transmits data from one point to the other.
- There are mainly two transmissions that can occur, i.e. with-in the system and/or System to &/ from - Surroundings. So the Architecture splits into three(3) namely {'0','8','1'} such that '0' & '1' - consist of the data of the 'System' and 'Surroundings' and '8' consists of transmission protocols which - aid in transmitting the data and can be accessed only from the system.
    -

The '0'&'1' consist mainly of data/interface(s)/programs that have been transmitted to/from System & Surroundings, and '8' consists mainly of the System-based data/transmissions
    -

    -

- Since '0' and '1' have two(2) kinds of data (i.e. System and Surroundings), these split into - three(3) sub-divisions namely:

- '0' splits into:

    -
    '10':
    -
    Consists of Partial/Source data of System's Surroundings
    -
    '88':
    -
    Consists of protocols to transmit the extracted data into the System [into '/8/_0_/10/']
    -
    '01':
    -
    Consists of Partial/Source data from '/8/_0_/' that to be transmitted to its Surroundings
    -
    -
The molecules/particles '10' are required if partial data is to be extracted from its surroundings, else the data can be stored directly into '/8/_0_/10/' via surroundings'/system transmissions
    -
The molecules/particles '01' are required if partial data is to be transmitted to its surroundings, else the data produced from the '/8/_0_/' can be directly accessed by its surroundings. The '01' Molecule mainly aids in data abstraction while transmitting to its surroundings...
    -

    -

- '1' splits into:

    -
    '01':
    -
    Consists of Partial/Source data of System's Surroundings
    -
    '88':
    -
    Consists of protocols to transmit the extracted data into the System [into '/8/_1_/01/']
    -
    '10':
    -
    Consists of Partial/Source data from '/8/_1_/' that to be transmitted to its Surroundings
    -
    -
The molecules/particles '01' are required if partial data is to be extracted from its surroundings, else the data can be stored directly into '/8/_1_/01/' via surroundings'/system transmissions
    -
The molecules/particles '10' are required if partial data is to be transmitted to its surroundings, else the data produced from the '/8/_1_/' can be directly accessed by its surroundings. The '10' Molecule mainly aids in data abstraction while transmitting to its surroundings...
    -

    -

    -

- So far, the data has been transmitted across System-Surroundings... Now, we explore how the data is - transmitted with-in the System i.e. the actual data transmission between '0' & '1' occurs at '8' - namely '_0_' ↔ '__8__' ↔ '_1_' (a directory scaffold of the tree described below is sketched at the end of this sub-section).
- The '_0_' molecule consists of three(3) further molecules namely '10','00','01' where the actual '0's - System data is addressed. The '10' molecule consists of data that has been extracted from - Surroundings via '/0/88/' or directly from Surroundings. The molecule '00' consists of Origin data of - '0' that has been evoluted within it or via System transmissions. The '01' molecule consists of data - that can be accessed by its Surroundings.
- The '_1_' molecule consists of three(3) further molecules namely '01','11','10' where the actual '1's - System data is addressed. The '01' molecule consists of data that has been extracted from - Surroundings via '/1/88/' or directly from Surroundings. The molecule '11' consists of Origin data of - '1' that has been evoluted within it or via System transmissions. The '10' molecule consists of data - that can be accessed by its Surroundings.
    - The '__8__' molecule consists of three(3) further molecules namely '_0_','_8_','_1_'. -

    - The '_0_' molecule consists of three(3) further molecules namely '081','080','180'. -

    - The '081' molecule consists of transmission protocols to transmit data from '_0_' to '_1_' -
    - The '080' molecule consists of transmission protocols to transmit data with-in '_0_' molecule. -
    - The '180' molecule consists of transmission protocols to transmit data from '_1_' to '_0_' on the basis - of Server request... -

    -
    - The '_1_' molecule consists of three(3) further molecules namely '180','181','081'. -

    - The '180' molecule consists of transmission protocols to transmit data from '_1_' to '_0_' -
    - The '181' molecule consists of transmission protocols to transmit data with-in '_1_' molecule. -
    - The '081' molecule consists of transmission protocols to transmit data from '_0_' to '_1_' on the basis - of Sensor request... -

    -
    - The '_8_' molecule consists of three(3) further molecules namely '808','818','888'. -

    - The '808' molecule consists of two(2) further molecules namely '0' and '8'. -

    - The '8' molecule consists of interfaces that helps in calling the '/__8__/_0_/' protocols for - transmitting the data. -
- The '0' molecule consists of "input → output" log files for the respective call of the protocols - and molecule(s)/particle(s) of processed data (such as 'cached data',etc...). This is an optional - molecule.

    -
    - The '818' molecule consists of two(2) further molecules namely '1' and '8'. -

    - The '8' molecule consists of interfaces that helps in calling the '/__8__/_1_/' protocols for - transmitting the data. -
- The '1' molecule consists of "input → output" log files for the respective call of the protocols and - molecule(s)/particle(s) of processed data (such as 'cached data',etc...). This is an optional - molecule.

    -
    - The '888' molecule consists of three(3) further molecules namely '0', '8', '1'. -

- The '0' and '1' molecule(s) consist of protocols that aid in transmitting data across the '1' and '0' - data transmissions. These are intermediate data-transmission molecules.
- The '8' molecule consists of "input ↔ transmitting ↔ output" log files for the respective call of - the protocols and molecule(s)/particle(s) of processed data (such as 'cached data',etc...). This is - an optional molecule.

    -
    - The '888' molecule is required to transmit across the '_0_' and '_1_' and optional to transmit within '_0_' - and '_1_'. In 'Open System', the 'interface' transmissions can be embedded within this molecule such that - either 'Server[0]' or 'Sensors[1]' can access directly without separate interfaces.(i.e. interface{'Closed - System','Closely-Opened System' : Optional; 'Open System': Recommended}) -
    - The '808' and '818' molecules are required for transmitting the data via interfaces{'Closed - System','Closely-Opened System' : Recommended; 'Open System': Optional} and Complex transmissions, for - simple - transmissions, the data can be transmitted via '/__8__/_0_/' and '/__8__/_1_/' directly, but the log files - to be generated in '808' and '818' molecules respectively. -
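The following is a minimal, illustrative sketch (not part of the original specification) that scaffolds the schematic molecule/particle tree described above as plain directories, using the document's own convention that Molecules are folders and Particles are files; the root name 'my_system', the use of Python's pathlib, and the choice to pre-create the '.et' and '.Ss' particles are assumptions made only for illustration.

```python
# Minimal sketch, assuming Molecules map to folders and Particles to files
# (the document's own stated convention). Folder names follow the schematic
# tree structure of 'BiTransSciencia [081]' described above.
from pathlib import Path

MOLECULES = [
    "0/10", "0/88", "0/01",                    # '0' splits: 10, 88, 01
    "1/01", "1/88", "1/10",                    # '1' splits: 01, 88, 10
    "8/_0_/10", "8/_0_/00", "8/_0_/01",        # '_0_': extracted, Origin, outgoing data
    "8/_1_/01", "8/_1_/11", "8/_1_/10",        # '_1_': extracted, Origin, outgoing data
    "8/__8__/_0_/081", "8/__8__/_0_/080", "8/__8__/_0_/180",
    "8/__8__/_1_/180", "8/__8__/_1_/181", "8/__8__/_1_/081",
    "8/__8__/_8_/808/0", "8/__8__/_8_/808/8",  # '0' logs + interfaces for '/__8__/_0_/'
    "8/__8__/_8_/818/1", "8/__8__/_8_/818/8",  # '1' logs + interfaces for '/__8__/_1_/'
    "8/__8__/_8_/888/0", "8/__8__/_8_/888/8", "8/__8__/_8_/888/1",
]

def scaffold(root="my_system"):
    """Create the schematic '081' tree plus the '.et' and '.Ss' particles."""
    base = Path(root)
    for rel in MOLECULES:
        (base / rel).mkdir(parents=True, exist_ok=True)
    # Special particles (described further below in this document):
    (base / ".et").write_text("__0_0_0__\n")      # initial Evolution-Tree
    (base / ".Ss").write_text("__T_P_CO__\n")     # example label from the Definition section

if __name__ == "__main__":
    scaffold()
```

Whether the '8' molecule lives at one data-point, split across both, or at an individual location is governed by the Eminencial properties discussed later, so a real system may relocate parts of this sketched tree accordingly.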

    -

    - -

    -

- There is no specific location where '8' is to be located since it is an intermediate point. It depends - on the 'Eminencial' properties (from "'BiTransSciencia' Properties")

    -
    - -
    -
    - There are two special particles named '.Ss' and '.et' which represents 'Evolution Tree' and - 'System-Surrounding' characteristics. -
    -
    -
    - -    :→ The '.Ss' particle is used to represent the 'System-Surroundings' - characteristics on the basis of "'FSE' properties of '081'"(from the "'BiTransSciencia' - Properties"). The - syntax of '.Ss' is "__[F]_[S]_[E]__"; where {F,S,E} is represented based on the abbreviations of - these sub-divisional properties. -
    -

    -       - The molecules/particles can be outside the [081] and inside the System. These molecules/particles have - Universal access w.r.t the System and the Surroundings those may be of any of {'0','8','1'}. These are - termed as '__Ss__' molecules and '_Ss_' particles. -

    -

    -      There are three(3) types of '.Ss' molecules/particles i.e {_,__,#}: -
    -

    -
          _ -
    -
          - - - The molecule '_' consists of molecules/particles which are connected to any/all of the molecules/particles of the System(majority) &/( i.e. and/or) Surroundings which are 'Private' i.e. these can only be accessed by the '0' or '1' ['_' represents '20' (where '0' describes 'Private')]. -
    -
          __ -
    -
          - - - The molecule '__' consists of molecules/particles which are connected to any/all of the molecules/particles of the System(majority) &/ Surroundings which are 'Public' i.e. these can be accessed by the '0' and '1' ['__' represents '21' (where '1' describes 'Public')]. -
    -
          # -
    -
          - - - The '#' represents the molecule(s)/particle(s) which are related to any/all the molecules/particles of the System &/ Surroundings and other molecules/particles which don't meet the properties of '_' and/or '__'. The molecules/particles are of superset of '_', '__' by which these can be accessed by any of the molecules/particles of the System and/or Surroundings. The '.et' is also a type of '#' molecule/particle which represents the System's Evolution-Tree but it also has its own properties i.e. it is superset of all the Molecules/Particles of the System &/ Surroundings on the basis of its specification. Note that '#' is not a specific molecule/particle, it's just a representation of the molecules/particles which are not of '_' , '__' and the 081 Molecules... -
    - -
    - -

    -
    -
    -
-    :→ The '.et' particle is used to represent the evolution of the System (in - general, commonly known as a 'Version'). The syntax of '.et' is "__[Stem(s)]_[Branch(s)]_[Leaf(s)]__"; - where the 'Stem' count represents the number of evolutions added, removed or 'altered' in the system i.e. - the evolutions that occurred in the System's root molecule, the 'Branch(s)' count represents the number of - evolutions that occurred inside the System's root molecule, and the 'Leaf(s)' count represents the number of - evolutions that occurred inside the System's particles. For every system the "Root" is a "BlackHole". - Initially the '.et' is "__0_0_0__"; after its initial evolution the '.et' is - '__1_1_1__', and thereafter it depends on the evolutions... The name of the System is to be initialized within - the [.et] Particle (an illustrative sketch of these '.et' formats is given after this list)

    -      The types of 'Evolution-Tree': -
    -        i. Balanced Evolution-Tree : -
-            : This represents the initial - System - that has been Evoluted (Either by Hierarchical/Derived or via BlackHole/Neutral[The System is evoluted - by - no dependency on surrounding Systems]). The syntax of this Evolution-Tree is - "__[Stem(s)]_[Branch(s)]_[Leaf(s)]__"
    -        ii. Pseudo Evolution Tree : -
-            If a System has evoluted but - this - has not affected the initial System, then the '.et' is represented by a 'Pseudo Evolution-Tree'. This - is - only applicable for the evolutions with-in only one System. The syntax of this Evolution-Tree is - "_[Branch(s)]_[Leaf(s)]_"."__[Stem(s)]_[Branch(s)]_[Leaf(s)]__" ('.'(Single Dot) read as "pseudo - of"(here)); in - which the left of "." is the System that has been evoluted from the System on the right of the ".". Here - the - Pseudo System doesn't consist of a [Stem] since the [Stem] of the Pseudo System is the same as that of its Balanced - System [right of "."].
    -        iii. Hierarchical Evolution Tree : -
-            If a Balanced System derives any - of - the Molecules/Particles from the surrounding System(s) [Balanced/Hierarchical], the evolution of - this - System is represented by a "Hierarchical Evolution-Tree". The syntax of the Evolution-Tree is - "__[Stem(s)]_[Branch(s)]_[Leaf(s)]__"-"__[Stem(s)]_[Branch(s)]_[Leaf(s)]__"....-"__[Stem(s)]_[Branch(s)]_[Leaf(s)]__"('-' - read as "Hierarchical of"(here)); in which the left of "-" is the Balanced System[not to be Black - Holed - since Hierarchical] which derives 'n' number of surrounding Systems which are separated by "-". Here - the - Stems can be different for the left and right of "-" since the Systems are non-Identical. - This Evolution-Tree can be represented with-in the Particle [.et] or by only initializing the - instance - System Evolution-Tree [left of "-"] and the derived System(s) to be inside the [.et] Particle by - initializing the Hierarchical System(s) with "-" with the suffix of its Hierarchical System's Name - separated by - '--' and the successive Hierarchical System(s) are separated by "\n"(new line)

    -

    -      The '.et' can be initialized by: -
    -        i. (as a Molecule): The {Molecules,Particles} will be - initiated with-in the '.et' Molecule. -
    -        ii. (as a Particle): The '.et' Particle will be - initiated - with-in the Root/Molecule. - -
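As a small illustration of the Evolution-Tree notation above (forward-referenced earlier in this list), the sketch below only composes the three '.et' string styles (Balanced, Pseudo, Hierarchical) from stem/branch/leaf counts; the function names and the sample counts are assumptions made for the example, only the string syntax itself is taken from the text.

```python
# Illustrative only: composes '.et' strings following the syntax given above.
def balanced_et(stems, branches, leaves):
    # Balanced: "__[Stem(s)]_[Branch(s)]_[Leaf(s)]__"
    return f"__{stems}_{branches}_{leaves}__"

def pseudo_et(branches, leaves, balanced):
    # Pseudo: "_[Branch(s)]_[Leaf(s)]_" . "__[Stem(s)]_[Branch(s)]_[Leaf(s)]__"
    # (no [Stem], since the Pseudo System shares the Stem of its Balanced System)
    return f"_{branches}_{leaves}_." + balanced

def hierarchical_et(*balanced_trees):
    # Hierarchical: Balanced trees joined with "-" (read as "Hierarchical of")
    return "-".join(balanced_trees)

print(balanced_et(0, 0, 0))                                         # __0_0_0__ (initial)
print(balanced_et(1, 1, 1))                                         # __1_1_1__ (after the first evolution)
print(pseudo_et(2, 5, balanced_et(1, 1, 1)))                        # _2_5_.__1_1_1__
print(hierarchical_et(balanced_et(1, 3, 4), balanced_et(2, 0, 1)))  # __1_3_4__-__2_0_1__
```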

    -
    - -
    -

    -

- The different line-strokes represent different relations between the molecules/particles with - respect to the data/transmission medium; i.e.
    -

    The "———" represents the direct(or)indirect[while accessing with "---" type - molecules/particles] relation; "---" represents indirect relation; "..." represents about the - System-Surround relations

    -

    -

    - The {Black,Gray, White, Blue, Green, Red} color-codes are used in the schematic-architecture - which indicates: -
    -

    -
    Black -
    -
    - Representation of the Universe
    -
    Gray -
    -
    - 'Gray' is used to represent System's Molecule(s)/Particle(s) {'.et', '.Ss'} and/or 'Hierarchical - Transmission'
    -
    White -
    -
    - 'White' is used to represent data-transmission across System-Surroundings and the Boundary(s) of - the Molecule(s)/Particle(s)
    -
    Blue -
    -
    - 'Blue' is used to represent 'Server[0]' (which simulates 'Back-end' on the basis of its - wavelength)
    -
    Green -
    -
    - 'Green' is used to represent 'Transmission medium[8]' (which simulates 'intermediate of '0' and - '1'' on the basis of its wavelength)
    -
    Red -
    -
    - 'Red' is used to represent 'Sensors[1]' (which simulates 'Front-end' on the basis of its - wavelength)
    -
    -

    - -
    -
    - -
    - -
    - 'BiTransSciencia' Properties -

- A System developed by 'BiTransSciencia' consists of three(3) Properties [applicable for the System and - Surroundings] (a small sketch composing the '.Ss' label from these properties is given at the end of this sub-section):

    - These Properties are commonly called as "'FSE' Properties of '081'" -

    -

- Functional Properties: Functional Properties describe the function of the System for - which it was developed.
    - These are of three(3) types: -

    -
- 'Data Oriented System':   'Data Oriented' based systems mainly focus on the - 'where to store the data' based on required conditions with minimum transmission protocols
- 'Transmission Oriented System'   'Transmission Oriented' based systems mainly - focus on 'how to store the data' based on required conditions with minimum data with-in the system
- 'Data-Transmission Oriented System'   'Data-Transmission' based systems focus on - both 'where and how the data is to be stored' with the required transmission protocols

    -

- Structural Properties: Structural Properties describe the Structure of the System - based on the design of the System
    - These are of three(3) types: -

    -
- 'Pseudo System':   A 'Pseudo System' is a system in which it transmits to or from - with-in the System or Surroundings. The nature of the system is flexible and unstable (to make the system Stable it - may or may not follow all the protocols of 'BiTransSciencia [081]'; it is to be converted to any of the above - Properties i.e. 'Natural/Balanced', which may lose the flexible characteristic). At the initial Evolution, a - System - (i.e. et: __0_0_0__) possesses 'Pseudo System' until the System is initially Evoluted (et: __1_1_1__) by - transforming to 'Neutral'/'Balanced' System(s). Some of the System(s) possess the 'Pseudo System' even after - their initial evolution based on their functionality...
- 'Balanced System':   A 'Balanced System' is a system in which it must transmit the - data with-in the System and Surroundings. The nature of the system is Stable and Dynamic
- 'Neutral System':   A 'Neutral System' is a system in which it must transmit with-in - the system and doesn't transmit to or from the Surroundings. The nature of the system is Stable and Static.

    -

- Eminencial Properties: Eminencial Properties describe the accessing of data and the - transmission of data with respect to the Data-Points [i.e. 'Server' and 'Sensors']
    - These are of three(3) types: -
    -

    -
    - 'Open System':   The data in 'Server', 'Sensor' and 'Transmission of Data' can be - accessed from any of the Data-Point within the transmission medium. The '8' resides at any of the data-points or - at an individual location -
    - 'Closed System':   The data-point can only access the data with-in a data-point and - its protocols and the system protocols and system data [example: the 'Server' can only access the data with-in - 'Server' , and the protocols with-in '__8__/_0_', and '__8__/_8_'].The molecule '8' gets splits and resides at - the data-points except '/8/__8__/_8_/888/' which resides at an individual location that can be accessed by the - data-points -
- 'Closely-Opened System':   In this, any of the data-points has control over the - system, in which it prescribes the manipulation of the data by a data-point.
    -  In 'Open-System' the folders '0' and '1' can be accessed by any of the data-point but, in 'Closed System' - & - 'Closely-Opened System' the folders '0' and '1' can only be accessed by the respective data folder unless the - folder becomes the input/output of the surrounding Systems.The '8' resides as per the protocols of controlling - data-point (i.e. 'Server[0]' or 'Sensors[1]') -
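To illustrate the "'FSE' Properties" naming referenced at the start of this sub-section, here is a small sketch that composes the '.Ss' label "__[F]_[S]_[E]__" from one choice per property group; the abbreviation table is an assumption inferred from the single example '__T_P_CO__.Ss' given in the Definition section, not something the specification spells out.

```python
# Assumed abbreviations, inferred only from the '__T_P_CO__.Ss' example in the Definition.
FUNCTIONAL = {"Data Oriented": "D", "Transmission Oriented": "T", "Data-Transmission Oriented": "DT"}
STRUCTURAL = {"Pseudo": "P", "Balanced": "B", "Neutral": "N"}
EMINENCIAL = {"Open": "O", "Closed": "C", "Closely-Opened": "CO"}

def ss_label(functional, structural, eminencial):
    """Compose the '__[F]_[S]_[E]__' System-Surroundings label."""
    return f"__{FUNCTIONAL[functional]}_{STRUCTURAL[structural]}_{EMINENCIAL[eminencial]}__"

# Reproduces the label that the Definition section uses for 'BiTransSciencia [081]' itself:
print(ss_label("Transmission Oriented", "Pseudo", "Closely-Opened"))  # __T_P_CO__
```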

    -
    -
    -
    - Download 'BiTransSciencia [081]' Schematic Structure -
- Download the below zip file to install the 'BiTransSciencia [081]' Schematic Structure... The folders/files can be manipulated as per your System requirements w.r.t the 'BiTransSciencia [081]' Protocols

    - Download 'BiTransSciencia [081]' Schematic Structure -
    -
!! Thank you for choosing the 'BiTransSciencia [081]' Data-Architecture for evoluting your System !!
    -
    -
    -
    -
    - Surrounding Acknowledgement(s) - - - - - - -
    - - - -
    - -
    - - -
    - Licensed to 'K.V.N.Aditya'... -

    - The elements of 'UniPolySciencia' is licensed under: - -

    UniPolySciencia (Icon) - by Venkata Naga Aditya Kothapalli is licensed - under CC BY-NC-ND - 4.0

    - -

    -

    - The elements of 'BiTransSciencia' is licensed under: - -

    BiTransSciencia [081] (Logo) - by Venkata Naga Aditya Kothapalli is licensed - under CC BY-NC-ND - 4.0

    - - -

    BiTransSciencia [081] (Icon) - by Venkata Naga Aditya Kothapalli is licensed - under CC BY-NC-ND - 4.0

    -
    - -

    BiTransSciencia [081] (Tree - Structure) by Venkata Naga Aditya - Kothapalli is licensed under CC BY-NC-ND 4.0

    -
    - -

    BiTransSciencia [081] by - Venkata Naga Aditya Kothapalli is licensed - under CC BY-ND 4.0 -

    -
    -

    -
    -

    - The images (any format) from 'UniPolySciencia' and 'BiTransSciencia' can be used on their respective work as - per the 'Creative Commons'. These images can only be manipulated accordingly as per requirement w.r.t the - 'Color', 'Font-Style', 'Alignment' without manipulating the structure of the actual image... -
    - The 'BiTransSciencia [081]' Data-Architecture can be used for developing 'Open-Source','Closed-Source' and - 'Commercial' applications without manipulating the protocols. -

    - -
    -
    - -
    - Source ↔ Resource - -
- The 'BiTransSciencia [081]' was discovered on the basis of the general working of data-transmission in the - Universe. No further reference(s) have been used in this. If you think the content used in this - was - a source of yours / another liable party's, mail to mailto:bitranssciencia@081
    -
    - -
    - - - - \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/util/retry.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/util/retry.py deleted file mode 100644 index 2490d5e5b63359a7f826922dc69c0015cb9a5b2e..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/urllib3/util/retry.py +++ /dev/null @@ -1,620 +0,0 @@ -from __future__ import absolute_import - -import email -import logging -import re -import time -import warnings -from collections import namedtuple -from itertools import takewhile - -from ..exceptions import ( - ConnectTimeoutError, - InvalidHeader, - MaxRetryError, - ProtocolError, - ProxyError, - ReadTimeoutError, - ResponseError, -) -from ..packages import six - -log = logging.getLogger(__name__) - - -# Data structure for representing the metadata of requests that result in a retry. -RequestHistory = namedtuple( - "RequestHistory", ["method", "url", "error", "status", "redirect_location"] -) - - -# TODO: In v2 we can remove this sentinel and metaclass with deprecated options. -_Default = object() - - -class _RetryMeta(type): - @property - def DEFAULT_METHOD_WHITELIST(cls): - warnings.warn( - "Using 'Retry.DEFAULT_METHOD_WHITELIST' is deprecated and " - "will be removed in v2.0. Use 'Retry.DEFAULT_ALLOWED_METHODS' instead", - DeprecationWarning, - ) - return cls.DEFAULT_ALLOWED_METHODS - - @DEFAULT_METHOD_WHITELIST.setter - def DEFAULT_METHOD_WHITELIST(cls, value): - warnings.warn( - "Using 'Retry.DEFAULT_METHOD_WHITELIST' is deprecated and " - "will be removed in v2.0. Use 'Retry.DEFAULT_ALLOWED_METHODS' instead", - DeprecationWarning, - ) - cls.DEFAULT_ALLOWED_METHODS = value - - @property - def DEFAULT_REDIRECT_HEADERS_BLACKLIST(cls): - warnings.warn( - "Using 'Retry.DEFAULT_REDIRECT_HEADERS_BLACKLIST' is deprecated and " - "will be removed in v2.0. Use 'Retry.DEFAULT_REMOVE_HEADERS_ON_REDIRECT' instead", - DeprecationWarning, - ) - return cls.DEFAULT_REMOVE_HEADERS_ON_REDIRECT - - @DEFAULT_REDIRECT_HEADERS_BLACKLIST.setter - def DEFAULT_REDIRECT_HEADERS_BLACKLIST(cls, value): - warnings.warn( - "Using 'Retry.DEFAULT_REDIRECT_HEADERS_BLACKLIST' is deprecated and " - "will be removed in v2.0. Use 'Retry.DEFAULT_REMOVE_HEADERS_ON_REDIRECT' instead", - DeprecationWarning, - ) - cls.DEFAULT_REMOVE_HEADERS_ON_REDIRECT = value - - @property - def BACKOFF_MAX(cls): - warnings.warn( - "Using 'Retry.BACKOFF_MAX' is deprecated and " - "will be removed in v2.0. Use 'Retry.DEFAULT_BACKOFF_MAX' instead", - DeprecationWarning, - ) - return cls.DEFAULT_BACKOFF_MAX - - @BACKOFF_MAX.setter - def BACKOFF_MAX(cls, value): - warnings.warn( - "Using 'Retry.BACKOFF_MAX' is deprecated and " - "will be removed in v2.0. Use 'Retry.DEFAULT_BACKOFF_MAX' instead", - DeprecationWarning, - ) - cls.DEFAULT_BACKOFF_MAX = value - - -@six.add_metaclass(_RetryMeta) -class Retry(object): - """Retry configuration. - - Each retry attempt will create a new Retry object with updated values, so - they can be safely reused. 
- - Retries can be defined as a default for a pool:: - - retries = Retry(connect=5, read=2, redirect=5) - http = PoolManager(retries=retries) - response = http.request('GET', 'http://example.com/') - - Or per-request (which overrides the default for the pool):: - - response = http.request('GET', 'http://example.com/', retries=Retry(10)) - - Retries can be disabled by passing ``False``:: - - response = http.request('GET', 'http://example.com/', retries=False) - - Errors will be wrapped in :class:`~urllib3.exceptions.MaxRetryError` unless - retries are disabled, in which case the causing exception will be raised. - - :param int total: - Total number of retries to allow. Takes precedence over other counts. - - Set to ``None`` to remove this constraint and fall back on other - counts. - - Set to ``0`` to fail on the first retry. - - Set to ``False`` to disable and imply ``raise_on_redirect=False``. - - :param int connect: - How many connection-related errors to retry on. - - These are errors raised before the request is sent to the remote server, - which we assume has not triggered the server to process the request. - - Set to ``0`` to fail on the first retry of this type. - - :param int read: - How many times to retry on read errors. - - These errors are raised after the request was sent to the server, so the - request may have side-effects. - - Set to ``0`` to fail on the first retry of this type. - - :param int redirect: - How many redirects to perform. Limit this to avoid infinite redirect - loops. - - A redirect is a HTTP response with a status code 301, 302, 303, 307 or - 308. - - Set to ``0`` to fail on the first retry of this type. - - Set to ``False`` to disable and imply ``raise_on_redirect=False``. - - :param int status: - How many times to retry on bad status codes. - - These are retries made on responses, where status code matches - ``status_forcelist``. - - Set to ``0`` to fail on the first retry of this type. - - :param int other: - How many times to retry on other errors. - - Other errors are errors that are not connect, read, redirect or status errors. - These errors might be raised after the request was sent to the server, so the - request might have side-effects. - - Set to ``0`` to fail on the first retry of this type. - - If ``total`` is not set, it's a good idea to set this to 0 to account - for unexpected edge cases and avoid infinite retry loops. - - :param iterable allowed_methods: - Set of uppercased HTTP method verbs that we should retry on. - - By default, we only retry on methods which are considered to be - idempotent (multiple requests with the same parameters end with the - same state). See :attr:`Retry.DEFAULT_ALLOWED_METHODS`. - - Set to a ``False`` value to retry on any verb. - - .. warning:: - - Previously this parameter was named ``method_whitelist``, that - usage is deprecated in v1.26.0 and will be removed in v2.0. - - :param iterable status_forcelist: - A set of integer HTTP status codes that we should force a retry on. - A retry is initiated if the request method is in ``allowed_methods`` - and the response status code is in ``status_forcelist``. - - By default, this is disabled with ``None``. - - :param float backoff_factor: - A backoff factor to apply between attempts after the second try - (most errors are resolved immediately by a second try without a - delay). urllib3 will sleep for:: - - {backoff factor} * (2 ** ({number of total retries} - 1)) - - seconds. If the backoff_factor is 0.1, then :func:`.sleep` will sleep - for [0.0s, 0.2s, 0.4s, ...] 
between retries. It will never be longer - than :attr:`Retry.DEFAULT_BACKOFF_MAX`. - - By default, backoff is disabled (set to 0). - - :param bool raise_on_redirect: Whether, if the number of redirects is - exhausted, to raise a MaxRetryError, or to return a response with a - response code in the 3xx range. - - :param bool raise_on_status: Similar meaning to ``raise_on_redirect``: - whether we should raise an exception, or return a response, - if status falls in ``status_forcelist`` range and retries have - been exhausted. - - :param tuple history: The history of the request encountered during - each call to :meth:`~Retry.increment`. The list is in the order - the requests occurred. Each list item is of class :class:`RequestHistory`. - - :param bool respect_retry_after_header: - Whether to respect Retry-After header on status codes defined as - :attr:`Retry.RETRY_AFTER_STATUS_CODES` or not. - - :param iterable remove_headers_on_redirect: - Sequence of headers to remove from the request when a response - indicating a redirect is returned before firing off the redirected - request. - """ - - #: Default methods to be used for ``allowed_methods`` - DEFAULT_ALLOWED_METHODS = frozenset( - ["HEAD", "GET", "PUT", "DELETE", "OPTIONS", "TRACE"] - ) - - #: Default status codes to be used for ``status_forcelist`` - RETRY_AFTER_STATUS_CODES = frozenset([413, 429, 503]) - - #: Default headers to be used for ``remove_headers_on_redirect`` - DEFAULT_REMOVE_HEADERS_ON_REDIRECT = frozenset(["Authorization"]) - - #: Maximum backoff time. - DEFAULT_BACKOFF_MAX = 120 - - def __init__( - self, - total=10, - connect=None, - read=None, - redirect=None, - status=None, - other=None, - allowed_methods=_Default, - status_forcelist=None, - backoff_factor=0, - raise_on_redirect=True, - raise_on_status=True, - history=None, - respect_retry_after_header=True, - remove_headers_on_redirect=_Default, - # TODO: Deprecated, remove in v2.0 - method_whitelist=_Default, - ): - - if method_whitelist is not _Default: - if allowed_methods is not _Default: - raise ValueError( - "Using both 'allowed_methods' and " - "'method_whitelist' together is not allowed. " - "Instead only use 'allowed_methods'" - ) - warnings.warn( - "Using 'method_whitelist' with Retry is deprecated and " - "will be removed in v2.0. 
Use 'allowed_methods' instead", - DeprecationWarning, - stacklevel=2, - ) - allowed_methods = method_whitelist - if allowed_methods is _Default: - allowed_methods = self.DEFAULT_ALLOWED_METHODS - if remove_headers_on_redirect is _Default: - remove_headers_on_redirect = self.DEFAULT_REMOVE_HEADERS_ON_REDIRECT - - self.total = total - self.connect = connect - self.read = read - self.status = status - self.other = other - - if redirect is False or total is False: - redirect = 0 - raise_on_redirect = False - - self.redirect = redirect - self.status_forcelist = status_forcelist or set() - self.allowed_methods = allowed_methods - self.backoff_factor = backoff_factor - self.raise_on_redirect = raise_on_redirect - self.raise_on_status = raise_on_status - self.history = history or tuple() - self.respect_retry_after_header = respect_retry_after_header - self.remove_headers_on_redirect = frozenset( - [h.lower() for h in remove_headers_on_redirect] - ) - - def new(self, **kw): - params = dict( - total=self.total, - connect=self.connect, - read=self.read, - redirect=self.redirect, - status=self.status, - other=self.other, - status_forcelist=self.status_forcelist, - backoff_factor=self.backoff_factor, - raise_on_redirect=self.raise_on_redirect, - raise_on_status=self.raise_on_status, - history=self.history, - remove_headers_on_redirect=self.remove_headers_on_redirect, - respect_retry_after_header=self.respect_retry_after_header, - ) - - # TODO: If already given in **kw we use what's given to us - # If not given we need to figure out what to pass. We decide - # based on whether our class has the 'method_whitelist' property - # and if so we pass the deprecated 'method_whitelist' otherwise - # we use 'allowed_methods'. Remove in v2.0 - if "method_whitelist" not in kw and "allowed_methods" not in kw: - if "method_whitelist" in self.__dict__: - warnings.warn( - "Using 'method_whitelist' with Retry is deprecated and " - "will be removed in v2.0. Use 'allowed_methods' instead", - DeprecationWarning, - ) - params["method_whitelist"] = self.allowed_methods - else: - params["allowed_methods"] = self.allowed_methods - - params.update(kw) - return type(self)(**params) - - @classmethod - def from_int(cls, retries, redirect=True, default=None): - """Backwards-compatibility for the old retries format.""" - if retries is None: - retries = default if default is not None else cls.DEFAULT - - if isinstance(retries, Retry): - return retries - - redirect = bool(redirect) and None - new_retries = cls(retries, redirect=redirect) - log.debug("Converted retries value: %r -> %r", retries, new_retries) - return new_retries - - def get_backoff_time(self): - """Formula for computing the current backoff - - :rtype: float - """ - # We want to consider only the last consecutive errors sequence (Ignore redirects). 
- consecutive_errors_len = len( - list( - takewhile(lambda x: x.redirect_location is None, reversed(self.history)) - ) - ) - if consecutive_errors_len <= 1: - return 0 - - backoff_value = self.backoff_factor * (2 ** (consecutive_errors_len - 1)) - return min(self.DEFAULT_BACKOFF_MAX, backoff_value) - - def parse_retry_after(self, retry_after): - # Whitespace: https://tools.ietf.org/html/rfc7230#section-3.2.4 - if re.match(r"^\s*[0-9]+\s*$", retry_after): - seconds = int(retry_after) - else: - retry_date_tuple = email.utils.parsedate_tz(retry_after) - if retry_date_tuple is None: - raise InvalidHeader("Invalid Retry-After header: %s" % retry_after) - if retry_date_tuple[9] is None: # Python 2 - # Assume UTC if no timezone was specified - # On Python2.7, parsedate_tz returns None for a timezone offset - # instead of 0 if no timezone is given, where mktime_tz treats - # a None timezone offset as local time. - retry_date_tuple = retry_date_tuple[:9] + (0,) + retry_date_tuple[10:] - - retry_date = email.utils.mktime_tz(retry_date_tuple) - seconds = retry_date - time.time() - - if seconds < 0: - seconds = 0 - - return seconds - - def get_retry_after(self, response): - """Get the value of Retry-After in seconds.""" - - retry_after = response.headers.get("Retry-After") - - if retry_after is None: - return None - - return self.parse_retry_after(retry_after) - - def sleep_for_retry(self, response=None): - retry_after = self.get_retry_after(response) - if retry_after: - time.sleep(retry_after) - return True - - return False - - def _sleep_backoff(self): - backoff = self.get_backoff_time() - if backoff <= 0: - return - time.sleep(backoff) - - def sleep(self, response=None): - """Sleep between retry attempts. - - This method will respect a server's ``Retry-After`` response header - and sleep the duration of the time requested. If that is not present, it - will use an exponential backoff. By default, the backoff factor is 0 and - this method will return immediately. - """ - - if self.respect_retry_after_header and response: - slept = self.sleep_for_retry(response) - if slept: - return - - self._sleep_backoff() - - def _is_connection_error(self, err): - """Errors when we're fairly sure that the server did not receive the - request, so it should be safe to retry. - """ - if isinstance(err, ProxyError): - err = err.original_error - return isinstance(err, ConnectTimeoutError) - - def _is_read_error(self, err): - """Errors that occur after the request has been started, so we should - assume that the server began processing it. - """ - return isinstance(err, (ReadTimeoutError, ProtocolError)) - - def _is_method_retryable(self, method): - """Checks if a given HTTP method should be retried upon, depending if - it is included in the allowed_methods - """ - # TODO: For now favor if the Retry implementation sets its own method_whitelist - # property outside of our constructor to avoid breaking custom implementations. - if "method_whitelist" in self.__dict__: - warnings.warn( - "Using 'method_whitelist' with Retry is deprecated and " - "will be removed in v2.0. Use 'allowed_methods' instead", - DeprecationWarning, - ) - allowed_methods = self.method_whitelist - else: - allowed_methods = self.allowed_methods - - if allowed_methods and method.upper() not in allowed_methods: - return False - return True - - def is_retry(self, method, status_code, has_retry_after=False): - """Is this method/status code retryable? 
(Based on allowlists and control - variables such as the number of total retries to allow, whether to - respect the Retry-After header, whether this header is present, and - whether the returned status code is on the list of status codes to - be retried upon on the presence of the aforementioned header) - """ - if not self._is_method_retryable(method): - return False - - if self.status_forcelist and status_code in self.status_forcelist: - return True - - return ( - self.total - and self.respect_retry_after_header - and has_retry_after - and (status_code in self.RETRY_AFTER_STATUS_CODES) - ) - - def is_exhausted(self): - """Are we out of retries?""" - retry_counts = ( - self.total, - self.connect, - self.read, - self.redirect, - self.status, - self.other, - ) - retry_counts = list(filter(None, retry_counts)) - if not retry_counts: - return False - - return min(retry_counts) < 0 - - def increment( - self, - method=None, - url=None, - response=None, - error=None, - _pool=None, - _stacktrace=None, - ): - """Return a new Retry object with incremented retry counters. - - :param response: A response object, or None, if the server did not - return a response. - :type response: :class:`~urllib3.response.HTTPResponse` - :param Exception error: An error encountered during the request, or - None if the response was received successfully. - - :return: A new ``Retry`` object. - """ - if self.total is False and error: - # Disabled, indicate to re-raise the error. - raise six.reraise(type(error), error, _stacktrace) - - total = self.total - if total is not None: - total -= 1 - - connect = self.connect - read = self.read - redirect = self.redirect - status_count = self.status - other = self.other - cause = "unknown" - status = None - redirect_location = None - - if error and self._is_connection_error(error): - # Connect retry? - if connect is False: - raise six.reraise(type(error), error, _stacktrace) - elif connect is not None: - connect -= 1 - - elif error and self._is_read_error(error): - # Read retry? - if read is False or not self._is_method_retryable(method): - raise six.reraise(type(error), error, _stacktrace) - elif read is not None: - read -= 1 - - elif error: - # Other retry? - if other is not None: - other -= 1 - - elif response and response.get_redirect_location(): - # Redirect retry? 
- if redirect is not None: - redirect -= 1 - cause = "too many redirects" - redirect_location = response.get_redirect_location() - status = response.status - - else: - # Incrementing because of a server error like a 500 in - # status_forcelist and the given method is in the allowed_methods - cause = ResponseError.GENERIC_ERROR - if response and response.status: - if status_count is not None: - status_count -= 1 - cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) - status = response.status - - history = self.history + ( - RequestHistory(method, url, error, status, redirect_location), - ) - - new_retry = self.new( - total=total, - connect=connect, - read=read, - redirect=redirect, - status=status_count, - other=other, - history=history, - ) - - if new_retry.is_exhausted(): - raise MaxRetryError(_pool, url, error or ResponseError(cause)) - - log.debug("Incremented Retry for (url='%s'): %r", url, new_retry) - - return new_retry - - def __repr__(self): - return ( - "{cls.__name__}(total={self.total}, connect={self.connect}, " - "read={self.read}, redirect={self.redirect}, status={self.status})" - ).format(cls=type(self), self=self) - - def __getattr__(self, item): - if item == "method_whitelist": - # TODO: Remove this deprecated alias in v2.0 - warnings.warn( - "Using 'method_whitelist' with Retry is deprecated and " - "will be removed in v2.0. Use 'allowed_methods' instead", - DeprecationWarning, - ) - return self.allowed_methods - try: - return getattr(super(Retry, self), item) - except AttributeError: - return getattr(Retry, item) - - -# For backwards compatibility (equivalent to pre-v1.9): -Retry.DEFAULT = Retry(3) diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/dist.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/dist.py deleted file mode 100644 index 824235488666c6ecdb22240b08354806fadb58ca..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/dist.py +++ /dev/null @@ -1,1222 +0,0 @@ -# -*- coding: utf-8 -*- -__all__ = ['Distribution'] - -import io -import sys -import re -import os -import warnings -import numbers -import distutils.log -import distutils.core -import distutils.cmd -import distutils.dist -import distutils.command -from distutils.util import strtobool -from distutils.debug import DEBUG -from distutils.fancy_getopt import translate_longopt -from glob import iglob -import itertools -import textwrap -from typing import List, Optional, TYPE_CHECKING -from pathlib import Path - -from collections import defaultdict -from email import message_from_file - -from distutils.errors import DistutilsOptionError, DistutilsSetupError -from distutils.util import rfc822_escape - -from setuptools.extern import packaging -from setuptools.extern import ordered_set -from setuptools.extern.more_itertools import unique_everseen, partition - -from ._importlib import metadata - -from . import SetuptoolsDeprecationWarning - -import setuptools -import setuptools.command -from setuptools import windows_support -from setuptools.monkey import get_unpatched -from setuptools.config import setupcfg, pyprojecttoml -from setuptools.discovery import ConfigDiscovery - -import pkg_resources -from setuptools.extern.packaging import version -from . import _reqs -from . 
import _entry_points - -if TYPE_CHECKING: - from email.message import Message - -__import__('setuptools.extern.packaging.specifiers') -__import__('setuptools.extern.packaging.version') - - -def _get_unpatched(cls): - warnings.warn("Do not call this function", DistDeprecationWarning) - return get_unpatched(cls) - - -def get_metadata_version(self): - mv = getattr(self, 'metadata_version', None) - if mv is None: - mv = version.Version('2.1') - self.metadata_version = mv - return mv - - -def rfc822_unescape(content: str) -> str: - """Reverse RFC-822 escaping by removing leading whitespaces from content.""" - lines = content.splitlines() - if len(lines) == 1: - return lines[0].lstrip() - return '\n'.join((lines[0].lstrip(), textwrap.dedent('\n'.join(lines[1:])))) - - -def _read_field_from_msg(msg: "Message", field: str) -> Optional[str]: - """Read Message header field.""" - value = msg[field] - if value == 'UNKNOWN': - return None - return value - - -def _read_field_unescaped_from_msg(msg: "Message", field: str) -> Optional[str]: - """Read Message header field and apply rfc822_unescape.""" - value = _read_field_from_msg(msg, field) - if value is None: - return value - return rfc822_unescape(value) - - -def _read_list_from_msg(msg: "Message", field: str) -> Optional[List[str]]: - """Read Message header field and return all results as list.""" - values = msg.get_all(field, None) - if values == []: - return None - return values - - -def _read_payload_from_msg(msg: "Message") -> Optional[str]: - value = msg.get_payload().strip() - if value == 'UNKNOWN' or not value: - return None - return value - - -def read_pkg_file(self, file): - """Reads the metadata values from a file object.""" - msg = message_from_file(file) - - self.metadata_version = version.Version(msg['metadata-version']) - self.name = _read_field_from_msg(msg, 'name') - self.version = _read_field_from_msg(msg, 'version') - self.description = _read_field_from_msg(msg, 'summary') - # we are filling author only. - self.author = _read_field_from_msg(msg, 'author') - self.maintainer = None - self.author_email = _read_field_from_msg(msg, 'author-email') - self.maintainer_email = None - self.url = _read_field_from_msg(msg, 'home-page') - self.download_url = _read_field_from_msg(msg, 'download-url') - self.license = _read_field_unescaped_from_msg(msg, 'license') - - self.long_description = _read_field_unescaped_from_msg(msg, 'description') - if ( - self.long_description is None and - self.metadata_version >= version.Version('2.1') - ): - self.long_description = _read_payload_from_msg(msg) - self.description = _read_field_from_msg(msg, 'summary') - - if 'keywords' in msg: - self.keywords = _read_field_from_msg(msg, 'keywords').split(',') - - self.platforms = _read_list_from_msg(msg, 'platform') - self.classifiers = _read_list_from_msg(msg, 'classifier') - - # PEP 314 - these fields only exist in 1.1 - if self.metadata_version == version.Version('1.1'): - self.requires = _read_list_from_msg(msg, 'requires') - self.provides = _read_list_from_msg(msg, 'provides') - self.obsoletes = _read_list_from_msg(msg, 'obsoletes') - else: - self.requires = None - self.provides = None - self.obsoletes = None - - self.license_files = _read_list_from_msg(msg, 'license-file') - - -def single_line(val): - """ - Quick and dirty validation for Summary pypa/setuptools#1390. - """ - if '\n' in val: - # TODO: Replace with `raise ValueError("newlines not allowed")` - # after reviewing #2893. 
- warnings.warn("newlines not allowed and will break in the future") - val = val.strip().split('\n')[0] - return val - - -# Based on Python 3.5 version -def write_pkg_file(self, file): # noqa: C901 # is too complex (14) # FIXME - """Write the PKG-INFO format data to a file object.""" - version = self.get_metadata_version() - - def write_field(key, value): - file.write("%s: %s\n" % (key, value)) - - write_field('Metadata-Version', str(version)) - write_field('Name', self.get_name()) - write_field('Version', self.get_version()) - - summary = self.get_description() - if summary: - write_field('Summary', single_line(summary)) - - optional_fields = ( - ('Home-page', 'url'), - ('Download-URL', 'download_url'), - ('Author', 'author'), - ('Author-email', 'author_email'), - ('Maintainer', 'maintainer'), - ('Maintainer-email', 'maintainer_email'), - ) - - for field, attr in optional_fields: - attr_val = getattr(self, attr, None) - if attr_val is not None: - write_field(field, attr_val) - - license = self.get_license() - if license: - write_field('License', rfc822_escape(license)) - - for project_url in self.project_urls.items(): - write_field('Project-URL', '%s, %s' % project_url) - - keywords = ','.join(self.get_keywords()) - if keywords: - write_field('Keywords', keywords) - - platforms = self.get_platforms() or [] - for platform in platforms: - write_field('Platform', platform) - - self._write_list(file, 'Classifier', self.get_classifiers()) - - # PEP 314 - self._write_list(file, 'Requires', self.get_requires()) - self._write_list(file, 'Provides', self.get_provides()) - self._write_list(file, 'Obsoletes', self.get_obsoletes()) - - # Setuptools specific for PEP 345 - if hasattr(self, 'python_requires'): - write_field('Requires-Python', self.python_requires) - - # PEP 566 - if self.long_description_content_type: - write_field('Description-Content-Type', self.long_description_content_type) - if self.provides_extras: - for extra in self.provides_extras: - write_field('Provides-Extra', extra) - - self._write_list(file, 'License-File', self.license_files or []) - - long_description = self.get_long_description() - if long_description: - file.write("\n%s" % long_description) - if not long_description.endswith("\n"): - file.write("\n") - - -sequence = tuple, list - - -def check_importable(dist, attr, value): - try: - ep = metadata.EntryPoint(value=value, name=None, group=None) - assert not ep.extras - except (TypeError, ValueError, AttributeError, AssertionError) as e: - raise DistutilsSetupError( - "%r must be importable 'module:attrs' string (got %r)" % (attr, value) - ) from e - - -def assert_string_list(dist, attr, value): - """Verify that value is a string list""" - try: - # verify that value is a list or tuple to exclude unordered - # or single-use iterables - assert isinstance(value, (list, tuple)) - # verify that elements of value are strings - assert ''.join(value) != value - except (TypeError, ValueError, AttributeError, AssertionError) as e: - raise DistutilsSetupError( - "%r must be a list of strings (got %r)" % (attr, value) - ) from e - - -def check_nsp(dist, attr, value): - """Verify that namespace packages are valid""" - ns_packages = value - assert_string_list(dist, attr, ns_packages) - for nsp in ns_packages: - if not dist.has_contents_for(nsp): - raise DistutilsSetupError( - "Distribution contains no modules or packages for " - + "namespace package %r" % nsp - ) - parent, sep, child = nsp.rpartition('.') - if parent and parent not in ns_packages: - distutils.log.warn( - "WARNING: %r is 
declared as a package namespace, but %r" - " is not: please correct this in setup.py", - nsp, - parent, - ) - msg = ( - "The namespace_packages parameter is deprecated, " - "consider using implicit namespaces instead (PEP 420)." - ) - warnings.warn(msg, SetuptoolsDeprecationWarning) - - -def check_extras(dist, attr, value): - """Verify that extras_require mapping is valid""" - try: - list(itertools.starmap(_check_extra, value.items())) - except (TypeError, ValueError, AttributeError) as e: - raise DistutilsSetupError( - "'extras_require' must be a dictionary whose values are " - "strings or lists of strings containing valid project/version " - "requirement specifiers." - ) from e - - -def _check_extra(extra, reqs): - name, sep, marker = extra.partition(':') - if marker and pkg_resources.invalid_marker(marker): - raise DistutilsSetupError("Invalid environment marker: " + marker) - list(_reqs.parse(reqs)) - - -def assert_bool(dist, attr, value): - """Verify that value is True, False, 0, or 1""" - if bool(value) != value: - tmpl = "{attr!r} must be a boolean value (got {value!r})" - raise DistutilsSetupError(tmpl.format(attr=attr, value=value)) - - -def invalid_unless_false(dist, attr, value): - if not value: - warnings.warn(f"{attr} is ignored.", DistDeprecationWarning) - return - raise DistutilsSetupError(f"{attr} is invalid.") - - -def check_requirements(dist, attr, value): - """Verify that install_requires is a valid requirements list""" - try: - list(_reqs.parse(value)) - if isinstance(value, (dict, set)): - raise TypeError("Unordered types are not allowed") - except (TypeError, ValueError) as error: - tmpl = ( - "{attr!r} must be a string or list of strings " - "containing valid project/version requirement specifiers; {error}" - ) - raise DistutilsSetupError(tmpl.format(attr=attr, error=error)) from error - - -def check_specifier(dist, attr, value): - """Verify that value is a valid version specifier""" - try: - packaging.specifiers.SpecifierSet(value) - except (packaging.specifiers.InvalidSpecifier, AttributeError) as error: - tmpl = ( - "{attr!r} must be a string " "containing valid version specifiers; {error}" - ) - raise DistutilsSetupError(tmpl.format(attr=attr, error=error)) from error - - -def check_entry_points(dist, attr, value): - """Verify that entry_points map is parseable""" - try: - _entry_points.load(value) - except Exception as e: - raise DistutilsSetupError(e) from e - - -def check_test_suite(dist, attr, value): - if not isinstance(value, str): - raise DistutilsSetupError("test_suite must be a string") - - -def check_package_data(dist, attr, value): - """Verify that value is a dictionary of package names to glob lists""" - if not isinstance(value, dict): - raise DistutilsSetupError( - "{!r} must be a dictionary mapping package names to lists of " - "string wildcard patterns".format(attr) - ) - for k, v in value.items(): - if not isinstance(k, str): - raise DistutilsSetupError( - "keys of {!r} dict must be strings (got {!r})".format(attr, k) - ) - assert_string_list(dist, 'values of {!r} dict'.format(attr), v) - - -def check_packages(dist, attr, value): - for pkgname in value: - if not re.match(r'\w+(\.\w+)*', pkgname): - distutils.log.warn( - "WARNING: %r not a valid package name; please use only " - ".-separated package names in setup.py", - pkgname, - ) - - -_Distribution = get_unpatched(distutils.core.Distribution) - - -class Distribution(_Distribution): - """Distribution with support for tests and package data - - This is an enhanced version of 
'distutils.dist.Distribution' that - effectively adds the following new optional keyword arguments to 'setup()': - - 'install_requires' -- a string or sequence of strings specifying project - versions that the distribution requires when installed, in the format - used by 'pkg_resources.require()'. They will be installed - automatically when the package is installed. If you wish to use - packages that are not available in PyPI, or want to give your users an - alternate download location, you can add a 'find_links' option to the - '[easy_install]' section of your project's 'setup.cfg' file, and then - setuptools will scan the listed web pages for links that satisfy the - requirements. - - 'extras_require' -- a dictionary mapping names of optional "extras" to the - additional requirement(s) that using those extras incurs. For example, - this:: - - extras_require = dict(reST = ["docutils>=0.3", "reSTedit"]) - - indicates that the distribution can optionally provide an extra - capability called "reST", but it can only be used if docutils and - reSTedit are installed. If the user installs your package using - EasyInstall and requests one of your extras, the corresponding - additional requirements will be installed if needed. - - 'test_suite' -- the name of a test suite to run for the 'test' command. - If the user runs 'python setup.py test', the package will be installed, - and the named test suite will be run. The format is the same as - would be used on a 'unittest.py' command line. That is, it is the - dotted name of an object to import and call to generate a test suite. - - 'package_data' -- a dictionary mapping package names to lists of filenames - or globs to use to find data files contained in the named packages. - If the dictionary has filenames or globs listed under '""' (the empty - string), those names will be searched for in every package, in addition - to any names for the specific package. Data files found using these - names/globs will be installed along with the package, in the same - location as the package. Note that globs are allowed to reference - the contents of non-package subdirectories, as long as you use '/' as - a path separator. (Globs are automatically converted to - platform-specific paths at runtime.) - - In addition to these new keywords, this class also has several new methods - for manipulating the distribution's contents. For example, the 'include()' - and 'exclude()' methods can be thought of as in-place add and subtract - commands that add or remove packages, modules, extensions, and so on from - the distribution. - """ - - _DISTUTILS_UNSUPPORTED_METADATA = { - 'long_description_content_type': lambda: None, - 'project_urls': dict, - 'provides_extras': ordered_set.OrderedSet, - 'license_file': lambda: None, - 'license_files': lambda: None, - } - - _patched_dist = None - - def patch_missing_pkg_info(self, attrs): - # Fake up a replacement for the data that would normally come from - # PKG-INFO, but which might not yet be built if this is a fresh - # checkout. 
- # - if not attrs or 'name' not in attrs or 'version' not in attrs: - return - key = pkg_resources.safe_name(str(attrs['name'])).lower() - dist = pkg_resources.working_set.by_key.get(key) - if dist is not None and not dist.has_metadata('PKG-INFO'): - dist._version = pkg_resources.safe_version(str(attrs['version'])) - self._patched_dist = dist - - def __init__(self, attrs=None): - have_package_data = hasattr(self, "package_data") - if not have_package_data: - self.package_data = {} - attrs = attrs or {} - self.dist_files = [] - # Filter-out setuptools' specific options. - self.src_root = attrs.pop("src_root", None) - self.patch_missing_pkg_info(attrs) - self.dependency_links = attrs.pop('dependency_links', []) - self.setup_requires = attrs.pop('setup_requires', []) - for ep in metadata.entry_points(group='distutils.setup_keywords'): - vars(self).setdefault(ep.name, None) - _Distribution.__init__( - self, - { - k: v - for k, v in attrs.items() - if k not in self._DISTUTILS_UNSUPPORTED_METADATA - }, - ) - - # Save the original dependencies before they are processed into the egg format - self._orig_extras_require = {} - self._orig_install_requires = [] - self._tmp_extras_require = defaultdict(ordered_set.OrderedSet) - - self.set_defaults = ConfigDiscovery(self) - - self._set_metadata_defaults(attrs) - - self.metadata.version = self._normalize_version( - self._validate_version(self.metadata.version) - ) - self._finalize_requires() - - def _validate_metadata(self): - required = {"name"} - provided = { - key - for key in vars(self.metadata) - if getattr(self.metadata, key, None) is not None - } - missing = required - provided - - if missing: - msg = f"Required package metadata is missing: {missing}" - raise DistutilsSetupError(msg) - - def _set_metadata_defaults(self, attrs): - """ - Fill-in missing metadata fields not supported by distutils. - Some fields may have been set by other tools (e.g. pbr). - Those fields (vars(self.metadata)) take precedence to - supplied attrs. - """ - for option, default in self._DISTUTILS_UNSUPPORTED_METADATA.items(): - vars(self.metadata).setdefault(option, attrs.get(option, default())) - - @staticmethod - def _normalize_version(version): - if isinstance(version, setuptools.sic) or version is None: - return version - - normalized = str(packaging.version.Version(version)) - if version != normalized: - tmpl = "Normalizing '{version}' to '{normalized}'" - warnings.warn(tmpl.format(**locals())) - return normalized - return version - - @staticmethod - def _validate_version(version): - if isinstance(version, numbers.Number): - # Some people apparently take "version number" too literally :) - version = str(version) - - if version is not None: - try: - packaging.version.Version(version) - except (packaging.version.InvalidVersion, TypeError): - warnings.warn( - "The version specified (%r) is an invalid version, this " - "may not work as expected with newer versions of " - "setuptools, pip, and PyPI. Please see PEP 440 for more " - "details." % version - ) - return setuptools.sic(version) - return version - - def _finalize_requires(self): - """ - Set `metadata.python_requires` and fix environment markers - in `install_requires` and `extras_require`. 
- """ - if getattr(self, 'python_requires', None): - self.metadata.python_requires = self.python_requires - - if getattr(self, 'extras_require', None): - # Save original before it is messed by _convert_extras_requirements - self._orig_extras_require = self._orig_extras_require or self.extras_require - for extra in self.extras_require.keys(): - # Since this gets called multiple times at points where the - # keys have become 'converted' extras, ensure that we are only - # truly adding extras we haven't seen before here. - extra = extra.split(':')[0] - if extra: - self.metadata.provides_extras.add(extra) - - if getattr(self, 'install_requires', None) and not self._orig_install_requires: - # Save original before it is messed by _move_install_requirements_markers - self._orig_install_requires = self.install_requires - - self._convert_extras_requirements() - self._move_install_requirements_markers() - - def _convert_extras_requirements(self): - """ - Convert requirements in `extras_require` of the form - `"extra": ["barbazquux; {marker}"]` to - `"extra:{marker}": ["barbazquux"]`. - """ - spec_ext_reqs = getattr(self, 'extras_require', None) or {} - tmp = defaultdict(ordered_set.OrderedSet) - self._tmp_extras_require = getattr(self, '_tmp_extras_require', tmp) - for section, v in spec_ext_reqs.items(): - # Do not strip empty sections. - self._tmp_extras_require[section] - for r in _reqs.parse(v): - suffix = self._suffix_for(r) - self._tmp_extras_require[section + suffix].append(r) - - @staticmethod - def _suffix_for(req): - """ - For a requirement, return the 'extras_require' suffix for - that requirement. - """ - return ':' + str(req.marker) if req.marker else '' - - def _move_install_requirements_markers(self): - """ - Move requirements in `install_requires` that are using environment - markers `extras_require`. - """ - - # divide the install_requires into two sets, simple ones still - # handled by install_requires and more complex ones handled - # by extras_require. - - def is_simple_req(req): - return not req.marker - - spec_inst_reqs = getattr(self, 'install_requires', None) or () - inst_reqs = list(_reqs.parse(spec_inst_reqs)) - simple_reqs = filter(is_simple_req, inst_reqs) - complex_reqs = itertools.filterfalse(is_simple_req, inst_reqs) - self.install_requires = list(map(str, simple_reqs)) - - for r in complex_reqs: - self._tmp_extras_require[':' + str(r.marker)].append(r) - self.extras_require = dict( - # list(dict.fromkeys(...)) ensures a list of unique strings - (k, list(dict.fromkeys(str(r) for r in map(self._clean_req, v)))) - for k, v in self._tmp_extras_require.items() - ) - - def _clean_req(self, req): - """ - Given a Requirement, remove environment markers and return it. 
- """ - req.marker = None - return req - - def _finalize_license_files(self): - """Compute names of all license files which should be included.""" - license_files: Optional[List[str]] = self.metadata.license_files - patterns: List[str] = license_files if license_files else [] - - license_file: Optional[str] = self.metadata.license_file - if license_file and license_file not in patterns: - patterns.append(license_file) - - if license_files is None and license_file is None: - # Default patterns match the ones wheel uses - # See https://wheel.readthedocs.io/en/stable/user_guide.html - # -> 'Including license files in the generated wheel file' - patterns = ('LICEN[CS]E*', 'COPYING*', 'NOTICE*', 'AUTHORS*') - - self.metadata.license_files = list( - unique_everseen(self._expand_patterns(patterns)) - ) - - @staticmethod - def _expand_patterns(patterns): - """ - >>> list(Distribution._expand_patterns(['LICENSE'])) - ['LICENSE'] - >>> list(Distribution._expand_patterns(['setup.cfg', 'LIC*'])) - ['setup.cfg', 'LICENSE'] - """ - return ( - path - for pattern in patterns - for path in sorted(iglob(pattern)) - if not path.endswith('~') and os.path.isfile(path) - ) - - # FIXME: 'Distribution._parse_config_files' is too complex (14) - def _parse_config_files(self, filenames=None): # noqa: C901 - """ - Adapted from distutils.dist.Distribution.parse_config_files, - this method provides the same functionality in subtly-improved - ways. - """ - from configparser import ConfigParser - - # Ignore install directory options if we have a venv - ignore_options = ( - [] - if sys.prefix == sys.base_prefix - else [ - 'install-base', - 'install-platbase', - 'install-lib', - 'install-platlib', - 'install-purelib', - 'install-headers', - 'install-scripts', - 'install-data', - 'prefix', - 'exec-prefix', - 'home', - 'user', - 'root', - ] - ) - - ignore_options = frozenset(ignore_options) - - if filenames is None: - filenames = self.find_config_files() - - if DEBUG: - self.announce("Distribution.parse_config_files():") - - parser = ConfigParser() - parser.optionxform = str - for filename in filenames: - with io.open(filename, encoding='utf-8') as reader: - if DEBUG: - self.announce(" reading {filename}".format(**locals())) - parser.read_file(reader) - for section in parser.sections(): - options = parser.options(section) - opt_dict = self.get_option_dict(section) - - for opt in options: - if opt == '__name__' or opt in ignore_options: - continue - - val = parser.get(section, opt) - opt = self.warn_dash_deprecation(opt, section) - opt = self.make_option_lowercase(opt, section) - opt_dict[opt] = (filename, val) - - # Make the ConfigParser forget everything (so we retain - # the original filenames that options come from) - parser.__init__() - - if 'global' not in self.command_options: - return - - # If there was a "global" section in the config file, use it - # to set Distribution options. - - for (opt, (src, val)) in self.command_options['global'].items(): - alias = self.negative_opt.get(opt) - if alias: - val = not strtobool(val) - elif opt in ('verbose', 'dry_run'): # ugh! 
- val = strtobool(val) - - try: - setattr(self, alias or opt, val) - except ValueError as e: - raise DistutilsOptionError(e) from e - - def warn_dash_deprecation(self, opt, section): - if section in ( - 'options.extras_require', - 'options.data_files', - ): - return opt - - underscore_opt = opt.replace('-', '_') - commands = list(itertools.chain( - distutils.command.__all__, - self._setuptools_commands(), - )) - if ( - not section.startswith('options') - and section != 'metadata' - and section not in commands - ): - return underscore_opt - - if '-' in opt: - warnings.warn( - "Usage of dash-separated '%s' will not be supported in future " - "versions. Please use the underscore name '%s' instead" - % (opt, underscore_opt) - ) - return underscore_opt - - def _setuptools_commands(self): - try: - return metadata.distribution('setuptools').entry_points.names - except metadata.PackageNotFoundError: - # during bootstrapping, distribution doesn't exist - return [] - - def make_option_lowercase(self, opt, section): - if section != 'metadata' or opt.islower(): - return opt - - lowercase_opt = opt.lower() - warnings.warn( - "Usage of uppercase key '%s' in '%s' will be deprecated in future " - "versions. Please use lowercase '%s' instead" - % (opt, section, lowercase_opt) - ) - return lowercase_opt - - # FIXME: 'Distribution._set_command_options' is too complex (14) - def _set_command_options(self, command_obj, option_dict=None): # noqa: C901 - """ - Set the options for 'command_obj' from 'option_dict'. Basically - this means copying elements of a dictionary ('option_dict') to - attributes of an instance ('command'). - - 'command_obj' must be a Command instance. If 'option_dict' is not - supplied, uses the standard option dictionary for this command - (from 'self.command_options'). 
- - (Adopted from distutils.dist.Distribution._set_command_options) - """ - command_name = command_obj.get_command_name() - if option_dict is None: - option_dict = self.get_option_dict(command_name) - - if DEBUG: - self.announce(" setting options for '%s' command:" % command_name) - for (option, (source, value)) in option_dict.items(): - if DEBUG: - self.announce(" %s = %s (from %s)" % (option, value, source)) - try: - bool_opts = [translate_longopt(o) for o in command_obj.boolean_options] - except AttributeError: - bool_opts = [] - try: - neg_opt = command_obj.negative_opt - except AttributeError: - neg_opt = {} - - try: - is_string = isinstance(value, str) - if option in neg_opt and is_string: - setattr(command_obj, neg_opt[option], not strtobool(value)) - elif option in bool_opts and is_string: - setattr(command_obj, option, strtobool(value)) - elif hasattr(command_obj, option): - setattr(command_obj, option, value) - else: - raise DistutilsOptionError( - "error in %s: command '%s' has no such option '%s'" - % (source, command_name, option) - ) - except ValueError as e: - raise DistutilsOptionError(e) from e - - def _get_project_config_files(self, filenames): - """Add default file and split between INI and TOML""" - tomlfiles = [] - standard_project_metadata = Path(self.src_root or os.curdir, "pyproject.toml") - if filenames is not None: - parts = partition(lambda f: Path(f).suffix == ".toml", filenames) - filenames = list(parts[0]) # 1st element => predicate is False - tomlfiles = list(parts[1]) # 2nd element => predicate is True - elif standard_project_metadata.exists(): - tomlfiles = [standard_project_metadata] - return filenames, tomlfiles - - def parse_config_files(self, filenames=None, ignore_option_errors=False): - """Parses configuration files from various levels - and loads configuration. - """ - inifiles, tomlfiles = self._get_project_config_files(filenames) - - self._parse_config_files(filenames=inifiles) - - setupcfg.parse_configuration( - self, self.command_options, ignore_option_errors=ignore_option_errors - ) - for filename in tomlfiles: - pyprojecttoml.apply_configuration(self, filename, ignore_option_errors) - - self._finalize_requires() - self._finalize_license_files() - - def fetch_build_eggs(self, requires): - """Resolve pre-setup requirements""" - resolved_dists = pkg_resources.working_set.resolve( - _reqs.parse(requires), - installer=self.fetch_build_egg, - replace_conflicting=True, - ) - for dist in resolved_dists: - pkg_resources.working_set.add(dist, replace=True) - return resolved_dists - - def finalize_options(self): - """ - Allow plugins to apply arbitrary operations to the - distribution. Each hook may optionally define a 'order' - to influence the order of execution. Smaller numbers - go first and the default is 0. - """ - group = 'setuptools.finalize_distribution_options' - - def by_order(hook): - return getattr(hook, 'order', 0) - - defined = metadata.entry_points(group=group) - filtered = itertools.filterfalse(self._removed, defined) - loaded = map(lambda e: e.load(), filtered) - for ep in sorted(loaded, key=by_order): - ep(self) - - @staticmethod - def _removed(ep): - """ - When removing an entry point, if metadata is loaded - from an older version of Setuptools, that removed - entry point will attempt to be loaded and will fail. - See #2765 for more details. 
- """ - removed = { - # removed 2021-09-05 - '2to3_doctests', - } - return ep.name in removed - - def _finalize_setup_keywords(self): - for ep in metadata.entry_points(group='distutils.setup_keywords'): - value = getattr(self, ep.name, None) - if value is not None: - ep.load()(self, ep.name, value) - - def get_egg_cache_dir(self): - egg_cache_dir = os.path.join(os.curdir, '.eggs') - if not os.path.exists(egg_cache_dir): - os.mkdir(egg_cache_dir) - windows_support.hide_file(egg_cache_dir) - readme_txt_filename = os.path.join(egg_cache_dir, 'README.txt') - with open(readme_txt_filename, 'w') as f: - f.write( - 'This directory contains eggs that were downloaded ' - 'by setuptools to build, test, and run plug-ins.\n\n' - ) - f.write( - 'This directory caches those eggs to prevent ' - 'repeated downloads.\n\n' - ) - f.write('However, it is safe to delete this directory.\n\n') - - return egg_cache_dir - - def fetch_build_egg(self, req): - """Fetch an egg needed for building""" - from setuptools.installer import fetch_build_egg - - return fetch_build_egg(self, req) - - def get_command_class(self, command): - """Pluggable version of get_command_class()""" - if command in self.cmdclass: - return self.cmdclass[command] - - eps = metadata.entry_points(group='distutils.commands', name=command) - for ep in eps: - self.cmdclass[command] = cmdclass = ep.load() - return cmdclass - else: - return _Distribution.get_command_class(self, command) - - def print_commands(self): - for ep in metadata.entry_points(group='distutils.commands'): - if ep.name not in self.cmdclass: - cmdclass = ep.load() - self.cmdclass[ep.name] = cmdclass - return _Distribution.print_commands(self) - - def get_command_list(self): - for ep in metadata.entry_points(group='distutils.commands'): - if ep.name not in self.cmdclass: - cmdclass = ep.load() - self.cmdclass[ep.name] = cmdclass - return _Distribution.get_command_list(self) - - def include(self, **attrs): - """Add items to distribution that are named in keyword arguments - - For example, 'dist.include(py_modules=["x"])' would add 'x' to - the distribution's 'py_modules' attribute, if it was not already - there. - - Currently, this method only supports inclusion for attributes that are - lists or tuples. If you need to add support for adding to other - attributes in this or a subclass, you can add an '_include_X' method, - where 'X' is the name of the attribute. The method will be called with - the value passed to 'include()'. So, 'dist.include(foo={"bar":"baz"})' - will try to call 'dist._include_foo({"bar":"baz"})', which can then - handle whatever special inclusion logic is needed. - """ - for k, v in attrs.items(): - include = getattr(self, '_include_' + k, None) - if include: - include(v) - else: - self._include_misc(k, v) - - def exclude_package(self, package): - """Remove packages, modules, and extensions in named package""" - - pfx = package + '.' - if self.packages: - self.packages = [ - p for p in self.packages if p != package and not p.startswith(pfx) - ] - - if self.py_modules: - self.py_modules = [ - p for p in self.py_modules if p != package and not p.startswith(pfx) - ] - - if self.ext_modules: - self.ext_modules = [ - p - for p in self.ext_modules - if p.name != package and not p.name.startswith(pfx) - ] - - def has_contents_for(self, package): - """Return true if 'exclude_package(package)' would do something""" - - pfx = package + '.' 
- - for p in self.iter_distribution_names(): - if p == package or p.startswith(pfx): - return True - - def _exclude_misc(self, name, value): - """Handle 'exclude()' for list/tuple attrs without a special handler""" - if not isinstance(value, sequence): - raise DistutilsSetupError( - "%s: setting must be a list or tuple (%r)" % (name, value) - ) - try: - old = getattr(self, name) - except AttributeError as e: - raise DistutilsSetupError("%s: No such distribution setting" % name) from e - if old is not None and not isinstance(old, sequence): - raise DistutilsSetupError( - name + ": this setting cannot be changed via include/exclude" - ) - elif old: - setattr(self, name, [item for item in old if item not in value]) - - def _include_misc(self, name, value): - """Handle 'include()' for list/tuple attrs without a special handler""" - - if not isinstance(value, sequence): - raise DistutilsSetupError("%s: setting must be a list (%r)" % (name, value)) - try: - old = getattr(self, name) - except AttributeError as e: - raise DistutilsSetupError("%s: No such distribution setting" % name) from e - if old is None: - setattr(self, name, value) - elif not isinstance(old, sequence): - raise DistutilsSetupError( - name + ": this setting cannot be changed via include/exclude" - ) - else: - new = [item for item in value if item not in old] - setattr(self, name, old + new) - - def exclude(self, **attrs): - """Remove items from distribution that are named in keyword arguments - - For example, 'dist.exclude(py_modules=["x"])' would remove 'x' from - the distribution's 'py_modules' attribute. Excluding packages uses - the 'exclude_package()' method, so all of the package's contained - packages, modules, and extensions are also excluded. - - Currently, this method only supports exclusion from attributes that are - lists or tuples. If you need to add support for excluding from other - attributes in this or a subclass, you can add an '_exclude_X' method, - where 'X' is the name of the attribute. The method will be called with - the value passed to 'exclude()'. So, 'dist.exclude(foo={"bar":"baz"})' - will try to call 'dist._exclude_foo({"bar":"baz"})', which can then - handle whatever special exclusion logic is needed. - """ - for k, v in attrs.items(): - exclude = getattr(self, '_exclude_' + k, None) - if exclude: - exclude(v) - else: - self._exclude_misc(k, v) - - def _exclude_packages(self, packages): - if not isinstance(packages, sequence): - raise DistutilsSetupError( - "packages: setting must be a list or tuple (%r)" % (packages,) - ) - list(map(self.exclude_package, packages)) - - def _parse_command_opts(self, parser, args): - # Remove --with-X/--without-X options when processing command args - self.global_options = self.__class__.global_options - self.negative_opt = self.__class__.negative_opt - - # First, expand any aliases - command = args[0] - aliases = self.get_option_dict('aliases') - while command in aliases: - src, alias = aliases[command] - del aliases[command] # ensure each alias can expand only once! 
- import shlex - - args[:1] = shlex.split(alias, True) - command = args[0] - - nargs = _Distribution._parse_command_opts(self, parser, args) - - # Handle commands that want to consume all remaining arguments - cmd_class = self.get_command_class(command) - if getattr(cmd_class, 'command_consumes_arguments', None): - self.get_option_dict(command)['args'] = ("command line", nargs) - if nargs is not None: - return [] - - return nargs - - def get_cmdline_options(self): - """Return a '{cmd: {opt:val}}' map of all command-line options - - Option names are all long, but do not include the leading '--', and - contain dashes rather than underscores. If the option doesn't take - an argument (e.g. '--quiet'), the 'val' is 'None'. - - Note that options provided by config files are intentionally excluded. - """ - - d = {} - - for cmd, opts in self.command_options.items(): - - for opt, (src, val) in opts.items(): - - if src != "command line": - continue - - opt = opt.replace('_', '-') - - if val == 0: - cmdobj = self.get_command_obj(cmd) - neg_opt = self.negative_opt.copy() - neg_opt.update(getattr(cmdobj, 'negative_opt', {})) - for neg, pos in neg_opt.items(): - if pos == opt: - opt = neg - val = None - break - else: - raise AssertionError("Shouldn't be able to get here") - - elif val == 1: - val = None - - d.setdefault(cmd, {})[opt] = val - - return d - - def iter_distribution_names(self): - """Yield all packages, modules, and extension names in distribution""" - - for pkg in self.packages or (): - yield pkg - - for module in self.py_modules or (): - yield module - - for ext in self.ext_modules or (): - if isinstance(ext, tuple): - name, buildinfo = ext - else: - name = ext.name - if name.endswith('module'): - name = name[:-6] - yield name - - def handle_display_options(self, option_order): - """If there were any non-global "display-only" options - (--help-commands or the metadata display options) on the command - line, display the requested info and return true; else return - false. - """ - import sys - - if self.help_commands: - return _Distribution.handle_display_options(self, option_order) - - # Stdout may be StringIO (e.g. in tests) - if not isinstance(sys.stdout, io.TextIOWrapper): - return _Distribution.handle_display_options(self, option_order) - - # Don't wrap stdout if utf-8 is already the encoding. Provides - # workaround for #334. - if sys.stdout.encoding.lower() in ('utf-8', 'utf8'): - return _Distribution.handle_display_options(self, option_order) - - # Print metadata in UTF-8 no matter the platform - encoding = sys.stdout.encoding - errors = sys.stdout.errors - newline = sys.platform != 'win32' and '\n' or None - line_buffering = sys.stdout.line_buffering - - sys.stdout = io.TextIOWrapper( - sys.stdout.detach(), 'utf-8', errors, newline, line_buffering - ) - try: - return _Distribution.handle_display_options(self, option_order) - finally: - sys.stdout = io.TextIOWrapper( - sys.stdout.detach(), encoding, errors, newline, line_buffering - ) - - def run_command(self, command): - self.set_defaults() - # Postpone defaults until all explicit configuration is considered - # (setup() args, config files, command line and plugins) - - super().run_command(command) - - -class DistDeprecationWarning(SetuptoolsDeprecationWarning): - """Class for warning about deprecations in dist in - setuptools. 
Not ignored by default, unlike DeprecationWarning.""" diff --git a/spaces/Billyosoro/ESRGAN/app.py b/spaces/Billyosoro/ESRGAN/app.py deleted file mode 100644 index 97c59221c429e335c3a2e3413c11cc155d5b6122..0000000000000000000000000000000000000000 --- a/spaces/Billyosoro/ESRGAN/app.py +++ /dev/null @@ -1,68 +0,0 @@ -import os -os.system("pip install gradio==2.9b23") -import random -import gradio as gr -from PIL import Image -import torch -from random import randint -import sys -from subprocess import call -import psutil - - - - -torch.hub.download_url_to_file('http://people.csail.mit.edu/billf/project%20pages/sresCode/Markov%20Random%20Fields%20for%20Super-Resolution_files/100075_lowres.jpg', 'bear.jpg') - - -def run_cmd(command): - try: - print(command) - call(command, shell=True) - except KeyboardInterrupt: - print("Process interrupted") - sys.exit(1) -run_cmd("wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P .") -run_cmd("pip install basicsr") -run_cmd("pip freeze") - -os.system("wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.2.4/RealESRGAN_x4plus_anime_6B.pth -P .") - - -def inference(img,mode): - _id = randint(1, 10000) - INPUT_DIR = "/tmp/input_image" + str(_id) + "/" - OUTPUT_DIR = "/tmp/output_image" + str(_id) + "/" - run_cmd("rm -rf " + INPUT_DIR) - run_cmd("rm -rf " + OUTPUT_DIR) - run_cmd("mkdir " + INPUT_DIR) - run_cmd("mkdir " + OUTPUT_DIR) - basewidth = 256 - wpercent = (basewidth/float(img.size[0])) - hsize = int((float(img.size[1])*float(wpercent))) - img = img.resize((basewidth,hsize), Image.ANTIALIAS) - img.save(INPUT_DIR + "1.jpg", "JPEG") - if mode == "base": - run_cmd("python inference_realesrgan.py -n RealESRGAN_x4plus -i "+ INPUT_DIR + " -o " + OUTPUT_DIR) - else: - os.system("python inference_realesrgan.py -n RealESRGAN_x4plus_anime_6B -i "+ INPUT_DIR + " -o " + OUTPUT_DIR) - return os.path.join(OUTPUT_DIR, "1_out.jpg") - - - - -title = "Real-ESRGAN" -description = "Gradio demo for Real-ESRGAN. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please click submit only once" -article = "
Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data | Github Repo
    " - -gr.Interface( - inference, - [gr.inputs.Image(type="pil", label="Input"),gr.inputs.Radio(["base","anime"], type="value", default="base", label="model type")], - gr.outputs.Image(type="file", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['bear.jpg','base'], - ['anime.png','anime'] - ]).launch() \ No newline at end of file diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/config/defaults.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/config/defaults.py deleted file mode 100644 index a397a6fbef36e188a676ad52f34309c42877ba1e..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/config/defaults.py +++ /dev/null @@ -1,596 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -from .config import CfgNode as CN - -# ----------------------------------------------------------------------------- -# Convention about Training / Test specific parameters -# ----------------------------------------------------------------------------- -# Whenever an argument can be either used for training or for testing, the -# corresponding name will be post-fixed by a _TRAIN for a training parameter, -# or _TEST for a test-specific parameter. -# For example, the number of images during training will be -# IMAGES_PER_BATCH_TRAIN, while the number of images for testing will be -# IMAGES_PER_BATCH_TEST - -# ----------------------------------------------------------------------------- -# Config definition -# ----------------------------------------------------------------------------- - -_C = CN() - -# The version number, to upgrade from old configs to new ones if any -# changes happen. It's recommended to keep a VERSION in your config file. -_C.VERSION = 2 - -_C.MODEL = CN() -_C.MODEL.LOAD_PROPOSALS = False -_C.MODEL.MASK_ON = False -_C.MODEL.KEYPOINT_ON = False -_C.MODEL.DEVICE = "cuda" -_C.MODEL.META_ARCHITECTURE = "GeneralizedRCNN" - -# Path (possibly with schema like catalog:// or detectron2://) to a checkpoint file -# to be loaded to the model. You can find available models in the model zoo. -_C.MODEL.WEIGHTS = "" - -# Values to be used for image normalization (BGR order, since INPUT.FORMAT defaults to BGR). -# To train on images of different number of channels, just set different mean & std. -# Default values are the mean pixel value from ImageNet: [103.53, 116.28, 123.675] -_C.MODEL.PIXEL_MEAN = [103.530, 116.280, 123.675] -# When using pre-trained models in Detectron1 or any MSRA models, -# std has been absorbed into its conv1 weights, so the std needs to be set 1. -# Otherwise, you can use [57.375, 57.120, 58.395] (ImageNet std) -_C.MODEL.PIXEL_STD = [1.0, 1.0, 1.0] - - -# ----------------------------------------------------------------------------- -# INPUT -# ----------------------------------------------------------------------------- -_C.INPUT = CN() -# Size of the smallest side of the image during training -_C.INPUT.MIN_SIZE_TRAIN = (800,) -# Sample size of smallest side by choice or random selection from range give by -# INPUT.MIN_SIZE_TRAIN -_C.INPUT.MIN_SIZE_TRAIN_SAMPLING = "choice" -# Maximum size of the side of the image during training -_C.INPUT.MAX_SIZE_TRAIN = 1333 -# Size of the smallest side of the image during testing. Set to zero to disable resize in testing. 
-_C.INPUT.MIN_SIZE_TEST = 800 -# Maximum size of the side of the image during testing -_C.INPUT.MAX_SIZE_TEST = 1333 - -# `True` if cropping is used for data augmentation during training -_C.INPUT.CROP = CN({"ENABLED": False}) -# Cropping type: -# - "relative" crop (H * CROP.SIZE[0], W * CROP.SIZE[1]) part of an input of size (H, W) -# - "relative_range" uniformly sample relative crop size from between [CROP.SIZE[0], [CROP.SIZE[1]]. -# and [1, 1] and use it as in "relative" scenario. -# - "absolute" crop part of an input with absolute size: (CROP.SIZE[0], CROP.SIZE[1]). -_C.INPUT.CROP.TYPE = "relative_range" -# Size of crop in range (0, 1] if CROP.TYPE is "relative" or "relative_range" and in number of -# pixels if CROP.TYPE is "absolute" -_C.INPUT.CROP.SIZE = [0.9, 0.9] - - -# Whether the model needs RGB, YUV, HSV etc. -# Should be one of the modes defined here, as we use PIL to read the image: -# https://pillow.readthedocs.io/en/stable/handbook/concepts.html#concept-modes -# with BGR being the one exception. One can set image format to BGR, we will -# internally use RGB for conversion and flip the channels over -_C.INPUT.FORMAT = "BGR" -# The ground truth mask format that the model will use. -# Mask R-CNN supports either "polygon" or "bitmask" as ground truth. -_C.INPUT.MASK_FORMAT = "polygon" # alternative: "bitmask" - - -# ----------------------------------------------------------------------------- -# Dataset -# ----------------------------------------------------------------------------- -_C.DATASETS = CN() -# List of the dataset names for training. Must be registered in DatasetCatalog -_C.DATASETS.TRAIN = () -# List of the pre-computed proposal files for training, which must be consistent -# with datasets listed in DATASETS.TRAIN. -_C.DATASETS.PROPOSAL_FILES_TRAIN = () -# Number of top scoring precomputed proposals to keep for training -_C.DATASETS.PRECOMPUTED_PROPOSAL_TOPK_TRAIN = 2000 -# List of the dataset names for testing. Must be registered in DatasetCatalog -_C.DATASETS.TEST = () -# List of the pre-computed proposal files for test, which must be consistent -# with datasets listed in DATASETS.TEST. -_C.DATASETS.PROPOSAL_FILES_TEST = () -# Number of top scoring precomputed proposals to keep for test -_C.DATASETS.PRECOMPUTED_PROPOSAL_TOPK_TEST = 1000 - -# ----------------------------------------------------------------------------- -# DataLoader -# ----------------------------------------------------------------------------- -_C.DATALOADER = CN() -# Number of data loading threads -_C.DATALOADER.NUM_WORKERS = 4 -# If True, each batch should contain only images for which the aspect ratio -# is compatible. This groups portrait images together, and landscape images -# are not batched with portrait images. -_C.DATALOADER.ASPECT_RATIO_GROUPING = True -# Options: TrainingSampler, RepeatFactorTrainingSampler -_C.DATALOADER.SAMPLER_TRAIN = "TrainingSampler" -# Repeat threshold for RepeatFactorTrainingSampler -_C.DATALOADER.REPEAT_THRESHOLD = 0.0 -# if True, the dataloader will filter out images that have no associated -# annotations at train time. -_C.DATALOADER.FILTER_EMPTY_ANNOTATIONS = True - -# ---------------------------------------------------------------------------- # -# Backbone options -# ---------------------------------------------------------------------------- # -_C.MODEL.BACKBONE = CN() - -_C.MODEL.BACKBONE.NAME = "build_resnet_backbone" -# Freeze the first several stages so they are not trained. -# There are 5 stages in ResNet. 
The first is a convolution, and the following -# stages are each group of residual blocks. -_C.MODEL.BACKBONE.FREEZE_AT = 2 - - -# ---------------------------------------------------------------------------- # -# FPN options -# ---------------------------------------------------------------------------- # -_C.MODEL.FPN = CN() -# Names of the input feature maps to be used by FPN -# They must have contiguous power of 2 strides -# e.g., ["res2", "res3", "res4", "res5"] -_C.MODEL.FPN.IN_FEATURES = [] -_C.MODEL.FPN.OUT_CHANNELS = 256 - -# Options: "" (no norm), "GN" -_C.MODEL.FPN.NORM = "" - -# Types for fusing the FPN top-down and lateral features. Can be either "sum" or "avg" -_C.MODEL.FPN.FUSE_TYPE = "sum" - - -# ---------------------------------------------------------------------------- # -# Proposal generator options -# ---------------------------------------------------------------------------- # -_C.MODEL.PROPOSAL_GENERATOR = CN() -# Current proposal generators include "RPN", "RRPN" and "PrecomputedProposals" -_C.MODEL.PROPOSAL_GENERATOR.NAME = "RPN" -# Proposal height and width both need to be greater than MIN_SIZE -# (a the scale used during training or inference) -_C.MODEL.PROPOSAL_GENERATOR.MIN_SIZE = 0 - - -# ---------------------------------------------------------------------------- # -# Anchor generator options -# ---------------------------------------------------------------------------- # -_C.MODEL.ANCHOR_GENERATOR = CN() -# The generator can be any name in the ANCHOR_GENERATOR registry -_C.MODEL.ANCHOR_GENERATOR.NAME = "DefaultAnchorGenerator" -# Anchor sizes (i.e. sqrt of area) in absolute pixels w.r.t. the network input. -# Format: list[list[int]]. SIZES[i] specifies the list of sizes -# to use for IN_FEATURES[i]; len(SIZES) == len(IN_FEATURES) must be true, -# or len(SIZES) == 1 is true and size list SIZES[0] is used for all -# IN_FEATURES. -_C.MODEL.ANCHOR_GENERATOR.SIZES = [[32, 64, 128, 256, 512]] -# Anchor aspect ratios. For each area given in `SIZES`, anchors with different aspect -# ratios are generated by an anchor generator. -# Format: list[list[int]]. ASPECT_RATIOS[i] specifies the list of aspect ratios -# to use for IN_FEATURES[i]; len(ASPECT_RATIOS) == len(IN_FEATURES) must be true, -# or len(ASPECT_RATIOS) == 1 is true and aspect ratio list ASPECT_RATIOS[0] is used -# for all IN_FEATURES. -_C.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS = [[0.5, 1.0, 2.0]] -# Anchor angles. -# list[float], the angle in degrees, for each input feature map. -# ANGLES[i] specifies the list of angles for IN_FEATURES[i]. -_C.MODEL.ANCHOR_GENERATOR.ANGLES = [[-90, 0, 90]] -# Relative offset between the center of the first anchor and the top-left corner of the image -# Units: fraction of feature map stride (e.g., 0.5 means half stride) -# Allowed values are floats in [0, 1) range inclusive. -# Recommended value is 0.5, although it is not expected to affect model accuracy. -_C.MODEL.ANCHOR_GENERATOR.OFFSET = 0.0 - -# ---------------------------------------------------------------------------- # -# RPN options -# ---------------------------------------------------------------------------- # -_C.MODEL.RPN = CN() -_C.MODEL.RPN.HEAD_NAME = "StandardRPNHead" # used by RPN_HEAD_REGISTRY - -# Names of the input feature maps to be used by RPN -# e.g., ["p2", "p3", "p4", "p5", "p6"] for FPN -_C.MODEL.RPN.IN_FEATURES = ["res4"] -# Remove RPN anchors that go outside the image by BOUNDARY_THRESH pixels -# Set to -1 or a large value, e.g. 
100000, to disable pruning anchors -_C.MODEL.RPN.BOUNDARY_THRESH = -1 -# IOU overlap ratios [BG_IOU_THRESHOLD, FG_IOU_THRESHOLD] -# Minimum overlap required between an anchor and ground-truth box for the -# (anchor, gt box) pair to be a positive example (IoU >= FG_IOU_THRESHOLD -# ==> positive RPN example: 1) -# Maximum overlap allowed between an anchor and ground-truth box for the -# (anchor, gt box) pair to be a negative examples (IoU < BG_IOU_THRESHOLD -# ==> negative RPN example: 0) -# Anchors with overlap in between (BG_IOU_THRESHOLD <= IoU < FG_IOU_THRESHOLD) -# are ignored (-1) -_C.MODEL.RPN.IOU_THRESHOLDS = [0.3, 0.7] -_C.MODEL.RPN.IOU_LABELS = [0, -1, 1] -# Total number of RPN examples per image -_C.MODEL.RPN.BATCH_SIZE_PER_IMAGE = 256 -# Target fraction of foreground (positive) examples per RPN minibatch -_C.MODEL.RPN.POSITIVE_FRACTION = 0.5 -# Weights on (dx, dy, dw, dh) for normalizing RPN anchor regression targets -_C.MODEL.RPN.BBOX_REG_WEIGHTS = (1.0, 1.0, 1.0, 1.0) -# The transition point from L1 to L2 loss. Set to 0.0 to make the loss simply L1. -_C.MODEL.RPN.SMOOTH_L1_BETA = 0.0 -_C.MODEL.RPN.LOSS_WEIGHT = 1.0 -# Number of top scoring RPN proposals to keep before applying NMS -# When FPN is used, this is *per FPN level* (not total) -_C.MODEL.RPN.PRE_NMS_TOPK_TRAIN = 12000 -_C.MODEL.RPN.PRE_NMS_TOPK_TEST = 6000 -# Number of top scoring RPN proposals to keep after applying NMS -# When FPN is used, this limit is applied per level and then again to the union -# of proposals from all levels -# NOTE: When FPN is used, the meaning of this config is different from Detectron1. -# It means per-batch topk in Detectron1, but per-image topk here. -# See "modeling/rpn/rpn_outputs.py" for details. -_C.MODEL.RPN.POST_NMS_TOPK_TRAIN = 2000 -_C.MODEL.RPN.POST_NMS_TOPK_TEST = 1000 -# NMS threshold used on RPN proposals -_C.MODEL.RPN.NMS_THRESH = 0.7 - -# ---------------------------------------------------------------------------- # -# ROI HEADS options -# ---------------------------------------------------------------------------- # -_C.MODEL.ROI_HEADS = CN() -_C.MODEL.ROI_HEADS.NAME = "Res5ROIHeads" -# Number of foreground classes -_C.MODEL.ROI_HEADS.NUM_CLASSES = 80 -# Names of the input feature maps to be used by ROI heads -# Currently all heads (box, mask, ...) use the same input feature map list -# e.g., ["p2", "p3", "p4", "p5"] is commonly used for FPN -_C.MODEL.ROI_HEADS.IN_FEATURES = ["res4"] -# IOU overlap ratios [IOU_THRESHOLD] -# Overlap threshold for an RoI to be considered background (if < IOU_THRESHOLD) -# Overlap threshold for an RoI to be considered foreground (if >= IOU_THRESHOLD) -_C.MODEL.ROI_HEADS.IOU_THRESHOLDS = [0.5] -_C.MODEL.ROI_HEADS.IOU_LABELS = [0, 1] -# RoI minibatch size *per image* (number of regions of interest [ROIs]) -# Total number of RoIs per training minibatch = -# ROI_HEADS.BATCH_SIZE_PER_IMAGE * SOLVER.IMS_PER_BATCH -# E.g., a common configuration is: 512 * 16 = 8192 -_C.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512 -# Target fraction of RoI minibatch that is labeled foreground (i.e. class > 0) -_C.MODEL.ROI_HEADS.POSITIVE_FRACTION = 0.25 - -# Only used on test mode - -# Minimum score threshold (assuming scores in a [0, 1] range); a value chosen to -# balance obtaining high recall with not having too many low precision -# detections that will slow down inference post processing steps (like NMS) -# A default threshold of 0.0 increases AP by ~0.2-0.3 but significantly slows down -# inference. 
-_C.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.05 -# Overlap threshold used for non-maximum suppression (suppress boxes with -# IoU >= this threshold) -_C.MODEL.ROI_HEADS.NMS_THRESH_TEST = 0.5 -# If True, augment proposals with ground-truth boxes before sampling proposals to -# train ROI heads. -_C.MODEL.ROI_HEADS.PROPOSAL_APPEND_GT = True - -# ---------------------------------------------------------------------------- # -# Box Head -# ---------------------------------------------------------------------------- # -_C.MODEL.ROI_BOX_HEAD = CN() -# C4 don't use head name option -# Options for non-C4 models: FastRCNNConvFCHead, -_C.MODEL.ROI_BOX_HEAD.NAME = "" -# Default weights on (dx, dy, dw, dh) for normalizing bbox regression targets -# These are empirically chosen to approximately lead to unit variance targets -_C.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS = (10.0, 10.0, 5.0, 5.0) -# The transition point from L1 to L2 loss. Set to 0.0 to make the loss simply L1. -_C.MODEL.ROI_BOX_HEAD.SMOOTH_L1_BETA = 0.0 -_C.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION = 14 -_C.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO = 0 -# Type of pooling operation applied to the incoming feature map for each RoI -_C.MODEL.ROI_BOX_HEAD.POOLER_TYPE = "ROIAlignV2" - -_C.MODEL.ROI_BOX_HEAD.NUM_FC = 0 -# Hidden layer dimension for FC layers in the RoI box head -_C.MODEL.ROI_BOX_HEAD.FC_DIM = 1024 -_C.MODEL.ROI_BOX_HEAD.NUM_CONV = 0 -# Channel dimension for Conv layers in the RoI box head -_C.MODEL.ROI_BOX_HEAD.CONV_DIM = 256 -# Normalization method for the convolution layers. -# Options: "" (no norm), "GN", "SyncBN". -_C.MODEL.ROI_BOX_HEAD.NORM = "" -# Whether to use class agnostic for bbox regression -_C.MODEL.ROI_BOX_HEAD.CLS_AGNOSTIC_BBOX_REG = False -# If true, RoI heads use bounding boxes predicted by the box head rather than proposal boxes. -_C.MODEL.ROI_BOX_HEAD.TRAIN_ON_PRED_BOXES = False - -# ---------------------------------------------------------------------------- # -# Cascaded Box Head -# ---------------------------------------------------------------------------- # -_C.MODEL.ROI_BOX_CASCADE_HEAD = CN() -# The number of cascade stages is implicitly defined by the length of the following two configs. -_C.MODEL.ROI_BOX_CASCADE_HEAD.BBOX_REG_WEIGHTS = ( - (10.0, 10.0, 5.0, 5.0), - (20.0, 20.0, 10.0, 10.0), - (30.0, 30.0, 15.0, 15.0), -) -_C.MODEL.ROI_BOX_CASCADE_HEAD.IOUS = (0.5, 0.6, 0.7) - - -# ---------------------------------------------------------------------------- # -# Mask Head -# ---------------------------------------------------------------------------- # -_C.MODEL.ROI_MASK_HEAD = CN() -_C.MODEL.ROI_MASK_HEAD.NAME = "MaskRCNNConvUpsampleHead" -_C.MODEL.ROI_MASK_HEAD.POOLER_RESOLUTION = 14 -_C.MODEL.ROI_MASK_HEAD.POOLER_SAMPLING_RATIO = 0 -_C.MODEL.ROI_MASK_HEAD.NUM_CONV = 0 # The number of convs in the mask head -_C.MODEL.ROI_MASK_HEAD.CONV_DIM = 256 -# Normalization method for the convolution layers. -# Options: "" (no norm), "GN", "SyncBN". 
-_C.MODEL.ROI_MASK_HEAD.NORM = "" -# Whether to use class agnostic for mask prediction -_C.MODEL.ROI_MASK_HEAD.CLS_AGNOSTIC_MASK = False -# Type of pooling operation applied to the incoming feature map for each RoI -_C.MODEL.ROI_MASK_HEAD.POOLER_TYPE = "ROIAlignV2" - - -# ---------------------------------------------------------------------------- # -# Keypoint Head -# ---------------------------------------------------------------------------- # -_C.MODEL.ROI_KEYPOINT_HEAD = CN() -_C.MODEL.ROI_KEYPOINT_HEAD.NAME = "KRCNNConvDeconvUpsampleHead" -_C.MODEL.ROI_KEYPOINT_HEAD.POOLER_RESOLUTION = 14 -_C.MODEL.ROI_KEYPOINT_HEAD.POOLER_SAMPLING_RATIO = 0 -_C.MODEL.ROI_KEYPOINT_HEAD.CONV_DIMS = tuple(512 for _ in range(8)) -_C.MODEL.ROI_KEYPOINT_HEAD.NUM_KEYPOINTS = 17 # 17 is the number of keypoints in COCO. - -# Images with too few (or no) keypoints are excluded from training. -_C.MODEL.ROI_KEYPOINT_HEAD.MIN_KEYPOINTS_PER_IMAGE = 1 -# Normalize by the total number of visible keypoints in the minibatch if True. -# Otherwise, normalize by the total number of keypoints that could ever exist -# in the minibatch. -# The keypoint softmax loss is only calculated on visible keypoints. -# Since the number of visible keypoints can vary significantly between -# minibatches, this has the effect of up-weighting the importance of -# minibatches with few visible keypoints. (Imagine the extreme case of -# only one visible keypoint versus N: in the case of N, each one -# contributes 1/N to the gradient compared to the single keypoint -# determining the gradient direction). Instead, we can normalize the -# loss by the total number of keypoints, if it were the case that all -# keypoints were visible in a full minibatch. (Returning to the example, -# this means that the one visible keypoint contributes as much as each -# of the N keypoints.) -_C.MODEL.ROI_KEYPOINT_HEAD.NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS = True -# Multi-task loss weight to use for keypoints -# Recommended values: -# - use 1.0 if NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS is True -# - use 4.0 if NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS is False -_C.MODEL.ROI_KEYPOINT_HEAD.LOSS_WEIGHT = 1.0 -# Type of pooling operation applied to the incoming feature map for each RoI -_C.MODEL.ROI_KEYPOINT_HEAD.POOLER_TYPE = "ROIAlignV2" - -# ---------------------------------------------------------------------------- # -# Semantic Segmentation Head -# ---------------------------------------------------------------------------- # -_C.MODEL.SEM_SEG_HEAD = CN() -_C.MODEL.SEM_SEG_HEAD.NAME = "SemSegFPNHead" -_C.MODEL.SEM_SEG_HEAD.IN_FEATURES = ["p2", "p3", "p4", "p5"] -# Label in the semantic segmentation ground truth that is ignored, i.e., no loss is calculated for -# the correposnding pixel. -_C.MODEL.SEM_SEG_HEAD.IGNORE_VALUE = 255 -# Number of classes in the semantic segmentation head -_C.MODEL.SEM_SEG_HEAD.NUM_CLASSES = 54 -# Number of channels in the 3x3 convs inside semantic-FPN heads. -_C.MODEL.SEM_SEG_HEAD.CONVS_DIM = 128 -# Outputs from semantic-FPN heads are up-scaled to the COMMON_STRIDE stride. -_C.MODEL.SEM_SEG_HEAD.COMMON_STRIDE = 4 -# Normalization method for the convolution layers. Options: "" (no norm), "GN". -_C.MODEL.SEM_SEG_HEAD.NORM = "GN" -_C.MODEL.SEM_SEG_HEAD.LOSS_WEIGHT = 1.0 - -_C.MODEL.PANOPTIC_FPN = CN() -# Scaling of all losses from instance detection / segmentation head. 
-_C.MODEL.PANOPTIC_FPN.INSTANCE_LOSS_WEIGHT = 1.0 - -# options when combining instance & semantic segmentation outputs -_C.MODEL.PANOPTIC_FPN.COMBINE = CN({"ENABLED": True}) -_C.MODEL.PANOPTIC_FPN.COMBINE.OVERLAP_THRESH = 0.5 -_C.MODEL.PANOPTIC_FPN.COMBINE.STUFF_AREA_LIMIT = 4096 -_C.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = 0.5 - - -# ---------------------------------------------------------------------------- # -# RetinaNet Head -# ---------------------------------------------------------------------------- # -_C.MODEL.RETINANET = CN() - -# This is the number of foreground classes. -_C.MODEL.RETINANET.NUM_CLASSES = 80 - -_C.MODEL.RETINANET.IN_FEATURES = ["p3", "p4", "p5", "p6", "p7"] - -# Convolutions to use in the cls and bbox tower -# NOTE: this doesn't include the last conv for logits -_C.MODEL.RETINANET.NUM_CONVS = 4 - -# IoU overlap ratio [bg, fg] for labeling anchors. -# Anchors with < bg are labeled negative (0) -# Anchors with >= bg and < fg are ignored (-1) -# Anchors with >= fg are labeled positive (1) -_C.MODEL.RETINANET.IOU_THRESHOLDS = [0.4, 0.5] -_C.MODEL.RETINANET.IOU_LABELS = [0, -1, 1] - -# Prior prob for rare case (i.e. foreground) at the beginning of training. -# This is used to set the bias for the logits layer of the classifier subnet. -# This improves training stability in the case of heavy class imbalance. -_C.MODEL.RETINANET.PRIOR_PROB = 0.01 - -# Inference cls score threshold, only anchors with score > INFERENCE_TH are -# considered for inference (to improve speed) -_C.MODEL.RETINANET.SCORE_THRESH_TEST = 0.05 -_C.MODEL.RETINANET.TOPK_CANDIDATES_TEST = 1000 -_C.MODEL.RETINANET.NMS_THRESH_TEST = 0.5 - -# Weights on (dx, dy, dw, dh) for normalizing Retinanet anchor regression targets -_C.MODEL.RETINANET.BBOX_REG_WEIGHTS = (1.0, 1.0, 1.0, 1.0) - -# Loss parameters -_C.MODEL.RETINANET.FOCAL_LOSS_GAMMA = 2.0 -_C.MODEL.RETINANET.FOCAL_LOSS_ALPHA = 0.25 -_C.MODEL.RETINANET.SMOOTH_L1_LOSS_BETA = 0.1 - - -# ---------------------------------------------------------------------------- # -# ResNe[X]t options (ResNets = {ResNet, ResNeXt} -# Note that parts of a resnet may be used for both the backbone and the head -# These options apply to both -# ---------------------------------------------------------------------------- # -_C.MODEL.RESNETS = CN() - -_C.MODEL.RESNETS.DEPTH = 50 -_C.MODEL.RESNETS.OUT_FEATURES = ["res4"] # res4 for C4 backbone, res2..5 for FPN backbone - -# Number of groups to use; 1 ==> ResNet; > 1 ==> ResNeXt -_C.MODEL.RESNETS.NUM_GROUPS = 1 - -# Options: FrozenBN, GN, "SyncBN", "BN" -_C.MODEL.RESNETS.NORM = "FrozenBN" - -# Baseline width of each group. -# Scaling this parameters will scale the width of all bottleneck layers. -_C.MODEL.RESNETS.WIDTH_PER_GROUP = 64 - -# Place the stride 2 conv on the 1x1 filter -# Use True only for the original MSRA ResNet; use False for C2 and Torch models -_C.MODEL.RESNETS.STRIDE_IN_1X1 = True - -# Apply dilation in stage "res5" -_C.MODEL.RESNETS.RES5_DILATION = 1 - -# Output width of res2. Scaling this parameters will scale the width of all 1x1 convs in ResNet -# For R18 and R34, this needs to be set to 64 -_C.MODEL.RESNETS.RES2_OUT_CHANNELS = 256 -_C.MODEL.RESNETS.STEM_OUT_CHANNELS = 64 - -# Apply Deformable Convolution in stages -# Specify if apply deform_conv on Res2, Res3, Res4, Res5 -_C.MODEL.RESNETS.DEFORM_ON_PER_STAGE = [False, False, False, False] -# Use True to use modulated deform_conv (DeformableV2, https://arxiv.org/abs/1811.11168); -# Use False for DeformableV1. 
-_C.MODEL.RESNETS.DEFORM_MODULATED = False -# Number of groups in deformable conv. -_C.MODEL.RESNETS.DEFORM_NUM_GROUPS = 1 - - -# ---------------------------------------------------------------------------- # -# Solver -# ---------------------------------------------------------------------------- # -_C.SOLVER = CN() - -# See detectron2/solver/build.py for LR scheduler options -_C.SOLVER.LR_SCHEDULER_NAME = "WarmupMultiStepLR" - -_C.SOLVER.MAX_ITER = 40000 - -_C.SOLVER.BASE_LR = 0.001 - -_C.SOLVER.MOMENTUM = 0.9 - -_C.SOLVER.WEIGHT_DECAY = 0.0001 -# The weight decay that's applied to parameters of normalization layers -# (typically the affine transformation) -_C.SOLVER.WEIGHT_DECAY_NORM = 0.0 - -_C.SOLVER.GAMMA = 0.1 -# The iteration number to decrease learning rate by GAMMA. -_C.SOLVER.STEPS = (30000,) - -_C.SOLVER.WARMUP_FACTOR = 1.0 / 1000 -_C.SOLVER.WARMUP_ITERS = 1000 -_C.SOLVER.WARMUP_METHOD = "linear" - -# Save a checkpoint after every this number of iterations -_C.SOLVER.CHECKPOINT_PERIOD = 5000 - -# Number of images per batch across all machines. -# If we have 16 GPUs and IMS_PER_BATCH = 32, -# each GPU will see 2 images per batch. -_C.SOLVER.IMS_PER_BATCH = 16 - -# Detectron v1 (and previous detection code) used a 2x higher LR and 0 WD for -# biases. This is not useful (at least for recent models). You should avoid -# changing these and they exist only to reproduce Detectron v1 training if -# desired. -_C.SOLVER.BIAS_LR_FACTOR = 1.0 -_C.SOLVER.WEIGHT_DECAY_BIAS = _C.SOLVER.WEIGHT_DECAY - -# Gradient clipping -_C.SOLVER.CLIP_GRADIENTS = CN({"ENABLED": False}) -# Type of gradient clipping, currently 2 values are supported: -# - "value": the absolute values of elements of each gradients are clipped -# - "norm": the norm of the gradient for each parameter is clipped thus -# affecting all elements in the parameter -_C.SOLVER.CLIP_GRADIENTS.CLIP_TYPE = "value" -# Maximum absolute value used for clipping gradients -_C.SOLVER.CLIP_GRADIENTS.CLIP_VALUE = 1.0 -# Floating point number p for L-p norm to be used with the "norm" -# gradient clipping type; for L-inf, please specify .inf -_C.SOLVER.CLIP_GRADIENTS.NORM_TYPE = 2.0 - -# ---------------------------------------------------------------------------- # -# Specific test options -# ---------------------------------------------------------------------------- # -_C.TEST = CN() -# For end-to-end tests to verify the expected accuracy. -# Each item is [task, metric, value, tolerance] -# e.g.: [['bbox', 'AP', 38.5, 0.2]] -_C.TEST.EXPECTED_RESULTS = [] -# The period (in terms of steps) to evaluate the model during training. -# Set to 0 to disable. -_C.TEST.EVAL_PERIOD = 0 -# The sigmas used to calculate keypoint OKS. See http://cocodataset.org/#keypoints-eval -# When empty it will use the defaults in COCO. -# Otherwise it should have the same length as ROI_KEYPOINT_HEAD.NUM_KEYPOINTS. -_C.TEST.KEYPOINT_OKS_SIGMAS = [] -# Maximum number of detections to return per image during inference (100 is -# based on the limit established for the COCO dataset). 
-_C.TEST.DETECTIONS_PER_IMAGE = 100 - -_C.TEST.AUG = CN({"ENABLED": False}) -_C.TEST.AUG.MIN_SIZES = (400, 500, 600, 700, 800, 900, 1000, 1100, 1200) -_C.TEST.AUG.MAX_SIZE = 4000 -_C.TEST.AUG.FLIP = True - -_C.TEST.PRECISE_BN = CN({"ENABLED": False}) -_C.TEST.PRECISE_BN.NUM_ITER = 200 - -# ---------------------------------------------------------------------------- # -# Misc options -# ---------------------------------------------------------------------------- # -# Directory where output files are written -_C.OUTPUT_DIR = "./output" -# Set seed to negative to fully randomize everything. -# Set seed to positive to use a fixed seed. Note that a fixed seed does not -# guarantee fully deterministic behavior. -_C.SEED = -1 -# Benchmark different cudnn algorithms. -# If input images have very different sizes, this option will have large overhead -# for about 10k iterations. It usually hurts total time, but can benefit for certain models. -# If input images have the same or similar sizes, benchmark is often helpful. -_C.CUDNN_BENCHMARK = False -# The period (in terms of steps) for minibatch visualization at train time. -# Set to 0 to disable. -_C.VIS_PERIOD = 0 - -# global config is for quick hack purposes. -# You can set them in command line or config files, -# and access it with: -# -# from detectron2.config import global_cfg -# print(global_cfg.HACK) -# -# Do not commit any configs into it. -_C.GLOBAL = CN() -_C.GLOBAL.HACK = 1.0 diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/mfb/model_cfgs.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/mfb/model_cfgs.py deleted file mode 100644 index e914255c67b3ef34f8c793a5311584fecd9f82d1..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/openvqa/models/mfb/model_cfgs.py +++ /dev/null @@ -1,20 +0,0 @@ -# -------------------------------------------------------- -# OpenVQA -# Written by Gao Pengbing https://github.com/nbgao -# -------------------------------------------------------- - -from openvqa.core.base_cfgs import BaseCfgs - - -class Cfgs(BaseCfgs): - def __init__(self): - super(Cfgs, self).__init__() - - self.HIGH_ORDER = False - self.HIDDEN_SIZE = 512 - self.MFB_K = 5 - self.MFB_O = 1000 - self.LSTM_OUT_SIZE = 1024 - self.DROPOUT_R = 0.1 - self.I_GLIMPSES = 2 - self.Q_GLIMPSES = 2 diff --git a/spaces/CVPR/regionclip-demo/detectron2/data/dataset_mapper.py b/spaces/CVPR/regionclip-demo/detectron2/data/dataset_mapper.py deleted file mode 100644 index 5e03ea2f428a271fcc85de1d97a17a8914a8978a..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/data/dataset_mapper.py +++ /dev/null @@ -1,214 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import copy -import logging -import numpy as np -from typing import List, Optional, Union -import torch - -from detectron2.config import configurable - -from . import detection_utils as utils -from . import transforms as T - -""" -This file contains the default mapping that's applied to "dataset dicts". -""" - -__all__ = ["DatasetMapper"] - - -class DatasetMapper: - """ - A callable which takes a dataset dict in Detectron2 Dataset format, - and map it into a format used by the model. - - This is the default callable to be used to map your dataset dict into training data. - You may need to follow it to implement your own one for customized logic, - such as a different way to read or transform images. - See :doc:`/tutorials/data_loading` for details. 
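    A minimal sketch of how this mapper is typically plugged into a training
    dataloader, assuming the upstream detectron2 entry points (``get_cfg``,
    ``build_detection_train_loader``) are available in this fork and that
    ``cfg.DATASETS.TRAIN`` already points at a registered dataset::

        from detectron2.config import get_cfg
        from detectron2.data import DatasetMapper, build_detection_train_loader

        cfg = get_cfg()
        # built via the from_config() classmethod thanks to @configurable
        mapper = DatasetMapper(cfg, is_train=True)
        # the dataloader applies the mapper to every dataset dict it yields
        train_loader = build_detection_train_loader(cfg, mapper=mapper)
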
- - The callable currently does the following: - - 1. Read the image from "file_name" - 2. Applies cropping/geometric transforms to the image and annotations - 3. Prepare data and annotations to Tensor and :class:`Instances` - """ - - @configurable - def __init__( - self, - is_train: bool, - *, - augmentations: List[Union[T.Augmentation, T.Transform]], - image_format: str, - use_instance_mask: bool = False, - use_keypoint: bool = False, - instance_mask_format: str = "polygon", - keypoint_hflip_indices: Optional[np.ndarray] = None, - precomputed_proposal_topk: Optional[int] = None, - recompute_boxes: bool = False, - filter_open_cls: bool = False, - clip_crop: bool = False, - ): - """ - NOTE: this interface is experimental. - - Args: - is_train: whether it's used in training or inference - augmentations: a list of augmentations or deterministic transforms to apply - image_format: an image format supported by :func:`detection_utils.read_image`. - use_instance_mask: whether to process instance segmentation annotations, if available - use_keypoint: whether to process keypoint annotations if available - instance_mask_format: one of "polygon" or "bitmask". Process instance segmentation - masks into this format. - keypoint_hflip_indices: see :func:`detection_utils.create_keypoint_hflip_indices` - precomputed_proposal_topk: if given, will load pre-computed - proposals from dataset_dict and keep the top k proposals for each image. - recompute_boxes: whether to overwrite bounding box annotations - by computing tight bounding boxes from instance mask annotations. - filter_open_cls: open-set setting, filter the open-set categories during training - clip_crop: the mode that directly use CLIP on cropped image regions - """ - if recompute_boxes: - assert use_instance_mask, "recompute_boxes requires instance masks" - # fmt: off - self.is_train = is_train - self.augmentations = T.AugmentationList(augmentations) - self.image_format = image_format - self.use_instance_mask = use_instance_mask - self.instance_mask_format = instance_mask_format - self.use_keypoint = use_keypoint - self.keypoint_hflip_indices = keypoint_hflip_indices - self.proposal_topk = precomputed_proposal_topk - self.recompute_boxes = recompute_boxes - self.filter_open_cls = filter_open_cls - self.clip_crop = clip_crop - # fmt: on - logger = logging.getLogger(__name__) - mode = "training" if is_train else "inference" - logger.info(f"[DatasetMapper] Augmentations used in {mode}: {augmentations}") - - @classmethod - def from_config(cls, cfg, is_train: bool = True): - augs = utils.build_augmentation(cfg, is_train) - if cfg.INPUT.CROP.ENABLED and is_train: - augs.insert(0, T.RandomCrop(cfg.INPUT.CROP.TYPE, cfg.INPUT.CROP.SIZE)) - recompute_boxes = cfg.MODEL.MASK_ON - else: - recompute_boxes = False - - ret = { - "is_train": is_train, - "augmentations": augs, - "image_format": cfg.INPUT.FORMAT, - "use_instance_mask": cfg.MODEL.MASK_ON, - "instance_mask_format": cfg.INPUT.MASK_FORMAT, - "use_keypoint": cfg.MODEL.KEYPOINT_ON, - "recompute_boxes": recompute_boxes, - } - - if cfg.MODEL.KEYPOINT_ON: - ret["keypoint_hflip_indices"] = utils.create_keypoint_hflip_indices(cfg.DATASETS.TRAIN) - - if cfg.MODEL.LOAD_PROPOSALS: - ret["precomputed_proposal_topk"] = ( - cfg.DATASETS.PRECOMPUTED_PROPOSAL_TOPK_TRAIN - if is_train - else cfg.DATASETS.PRECOMPUTED_PROPOSAL_TOPK_TEST - ) - # open-set setting, filter the open-set categories during training - # filter_open_cls = cfg.SOLVER.IMS_PER_BATCH < 10 # debug - # if filter_open_cls: - # ret["filter_open_cls"] = 
True - # CLIP inference on cropped image regions - if cfg.MODEL.META_ARCHITECTURE in ["CLIPRCNN", "CLIPFastRCNN"]: - ret["clip_crop"] = True - return ret - - def __call__(self, dataset_dict): - """ - Args: - dataset_dict (dict): Metadata of one image, in Detectron2 Dataset format. - - Returns: - dict: a format that builtin models in detectron2 accept - """ - dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below - # USER: Write your own image loading if it's not from a file - image = utils.read_image(dataset_dict["file_name"], format=self.image_format) - utils.check_image_size(dataset_dict, image) - - # USER: Remove if you don't do semantic/panoptic segmentation. - if "sem_seg_file_name" in dataset_dict: - sem_seg_gt = utils.read_image(dataset_dict.pop("sem_seg_file_name"), "L").squeeze(2) - else: - sem_seg_gt = None - - aug_input = T.AugInput(image, sem_seg=sem_seg_gt) - transforms = self.augmentations(aug_input) - # if self.clip_crop: # load original images into CLIP model, without resizing - # pass - # else: - image, sem_seg_gt = aug_input.image, aug_input.sem_seg - - image_shape = image.shape[:2] # h, w - # Pytorch's dataloader is efficient on torch.Tensor due to shared-memory, - # but not efficient on large generic data structures due to the use of pickle & mp.Queue. - # Therefore it's important to use torch.Tensor. - dataset_dict["image"] = torch.as_tensor(np.ascontiguousarray(image.transpose(2, 0, 1))) - if sem_seg_gt is not None: - dataset_dict["sem_seg"] = torch.as_tensor(sem_seg_gt.astype("long")) - - # USER: Remove if you don't use pre-computed proposals. - # Most users would not need this feature. - if self.proposal_topk is not None: - utils.transform_proposals( - dataset_dict, image_shape, transforms, proposal_topk=self.proposal_topk - ) - - if not self.is_train: - if self.clip_crop: # still load the GT annotations - pass - else: - # USER: Modify this if you want to keep them for some reason. - dataset_dict.pop("annotations", None) - dataset_dict.pop("sem_seg_file_name", None) - return dataset_dict - - if "annotations" in dataset_dict: - # if self.filter_open_cls: # filter categories for open-set training - # obj_annos = dataset_dict['annotations'] - # clean_obj_annos = [obj_anno for obj_anno in obj_annos if obj_anno['frequency'] != 'r'] # filter rare classes - # if len(clean_obj_annos) == 0: # empty annotation - # print("\n\nImage {} has no annotation after filtering open-set classes!\n\n".format(dataset_dict['image_id'])) - # clean_obj_annos = obj_annos[0] # keep one for compatability, fix it later - # dataset_dict['annotations'] = clean_obj_annos - - # USER: Modify this if you want to keep them for some reason. - for anno in dataset_dict["annotations"]: - if not self.use_instance_mask: - anno.pop("segmentation", None) - if not self.use_keypoint: - anno.pop("keypoints", None) - - # USER: Implement additional transformations if you have other types of data - annos = [ - utils.transform_instance_annotations( - obj, transforms, image_shape, keypoint_hflip_indices=self.keypoint_hflip_indices - ) - for obj in dataset_dict.pop("annotations") - if obj.get("iscrowd", 0) == 0 - ] - instances = utils.annotations_to_instances( - annos, image_shape, mask_format=self.instance_mask_format - ) - - # After transforms such as cropping are applied, the bounding box may no longer - # tightly bound the object. As an example, imagine a triangle object - # [(0,0), (2,0), (0,2)] cropped by a box [(1,0),(2,2)] (XYXY format). 
The tight - # bounding box of the cropped triangle should be [(1,0),(2,1)], which is not equal to - # the intersection of original bounding box and the cropping box. - if self.recompute_boxes: - instances.gt_boxes = instances.gt_masks.get_bounding_boxes() - dataset_dict["instances"] = utils.filter_empty_instances(instances) - return dataset_dict diff --git a/spaces/CXD200/QSign/README.md b/spaces/CXD200/QSign/README.md deleted file mode 100644 index 113f580f7c693f7fd8dc9051ca915f2f86dfeab3..0000000000000000000000000000000000000000 --- a/spaces/CXD200/QSign/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: QSign -emoji: 💻 -colorFrom: gray -colorTo: gray -sdk: docker -pinned: false -duplicated_from: AIxPha/QSign ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Curranj/Words_To_SQL/README.md b/spaces/Curranj/Words_To_SQL/README.md deleted file mode 100644 index 65458ab203d20ff01bb4a0c70f84b25c43568dc7..0000000000000000000000000000000000000000 --- a/spaces/Curranj/Words_To_SQL/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Words_to_sql -emoji: 🐨 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.0.6 -app_file: app.py -pinned: true ---- - -Natural Language to SQL diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/cpu/dcn_v2_im2col_cpu.cpp b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/cpu/dcn_v2_im2col_cpu.cpp deleted file mode 100644 index 1704a60d1aeeecd4cd08b44a75ff2b0cf7167fac..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/csrc/cpu/dcn_v2_im2col_cpu.cpp +++ /dev/null @@ -1,395 +0,0 @@ -#include "dcn_v2_im2col_cpu.h" -#include -#include -#include - -#include -//#include - -#include -//#include -//#include - -// modified from the CUDA version for CPU use by Daniel K. 
Suhendro - -/*#define CUDA_KERNEL_LOOP(i, n) \ - for (int i = blockIdx.x * blockDim.x + threadIdx.x; \ - i < (n); \ - i += blockDim.x * gridDim.x) - -const int CUDA_NUM_THREADS = 1024; -inline int GET_BLOCKS(const int N) -{ - return (N + CUDA_NUM_THREADS - 1) / CUDA_NUM_THREADS; -}*/ - - -float dmcn_im2col_bilinear_cpu(const float *bottom_data, const int data_width, - const int height, const int width, float h, float w) -{ - int h_low = floor(h); - int w_low = floor(w); - int h_high = h_low + 1; - int w_high = w_low + 1; - - float lh = h - h_low; - float lw = w - w_low; - float hh = 1 - lh, hw = 1 - lw; - - float v1 = 0; - if (h_low >= 0 && w_low >= 0) - v1 = bottom_data[h_low * data_width + w_low]; - float v2 = 0; - if (h_low >= 0 && w_high <= width - 1) - v2 = bottom_data[h_low * data_width + w_high]; - float v3 = 0; - if (h_high <= height - 1 && w_low >= 0) - v3 = bottom_data[h_high * data_width + w_low]; - float v4 = 0; - if (h_high <= height - 1 && w_high <= width - 1) - v4 = bottom_data[h_high * data_width + w_high]; - - float w1 = hh * hw, w2 = hh * lw, w3 = lh * hw, w4 = lh * lw; - - float val = (w1 * v1 + w2 * v2 + w3 * v3 + w4 * v4); - return val; -} - -float dmcn_get_gradient_weight_cpu(float argmax_h, float argmax_w, - const int h, const int w, const int height, const int width) -{ - if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || argmax_w >= width) - { - //empty - return 0; - } - - int argmax_h_low = floor(argmax_h); - int argmax_w_low = floor(argmax_w); - int argmax_h_high = argmax_h_low + 1; - int argmax_w_high = argmax_w_low + 1; - - float weight = 0; - if (h == argmax_h_low && w == argmax_w_low) - weight = (h + 1 - argmax_h) * (w + 1 - argmax_w); - if (h == argmax_h_low && w == argmax_w_high) - weight = (h + 1 - argmax_h) * (argmax_w + 1 - w); - if (h == argmax_h_high && w == argmax_w_low) - weight = (argmax_h + 1 - h) * (w + 1 - argmax_w); - if (h == argmax_h_high && w == argmax_w_high) - weight = (argmax_h + 1 - h) * (argmax_w + 1 - w); - return weight; -} - -float dmcn_get_coordinate_weight_cpu(float argmax_h, float argmax_w, - const int height, const int width, const float *im_data, - const int data_width, const int bp_dir) -{ - if (argmax_h <= -1 || argmax_h >= height || argmax_w <= -1 || argmax_w >= width) - { - //empty - return 0; - } - - int argmax_h_low = floor(argmax_h); - int argmax_w_low = floor(argmax_w); - int argmax_h_high = argmax_h_low + 1; - int argmax_w_high = argmax_w_low + 1; - - float weight = 0; - - if (bp_dir == 0) - { - if (argmax_h_low >= 0 && argmax_w_low >= 0) - weight += -1 * (argmax_w_low + 1 - argmax_w) * im_data[argmax_h_low * data_width + argmax_w_low]; - if (argmax_h_low >= 0 && argmax_w_high <= width - 1) - weight += -1 * (argmax_w - argmax_w_low) * im_data[argmax_h_low * data_width + argmax_w_high]; - if (argmax_h_high <= height - 1 && argmax_w_low >= 0) - weight += (argmax_w_low + 1 - argmax_w) * im_data[argmax_h_high * data_width + argmax_w_low]; - if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1) - weight += (argmax_w - argmax_w_low) * im_data[argmax_h_high * data_width + argmax_w_high]; - } - else if (bp_dir == 1) - { - if (argmax_h_low >= 0 && argmax_w_low >= 0) - weight += -1 * (argmax_h_low + 1 - argmax_h) * im_data[argmax_h_low * data_width + argmax_w_low]; - if (argmax_h_low >= 0 && argmax_w_high <= width - 1) - weight += (argmax_h_low + 1 - argmax_h) * im_data[argmax_h_low * data_width + argmax_w_high]; - if (argmax_h_high <= height - 1 && argmax_w_low >= 0) - weight += -1 * (argmax_h - argmax_h_low) 
* im_data[argmax_h_high * data_width + argmax_w_low]; - if (argmax_h_high <= height - 1 && argmax_w_high <= width - 1) - weight += (argmax_h - argmax_h_low) * im_data[argmax_h_high * data_width + argmax_w_high]; - } - - return weight; -} - -void modulated_deformable_im2col_cpu_kernel(const int n, const float *data_im, const float *data_offset, const float *data_mask, - const int height, const int width, const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, - const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int channel_per_deformable_group, - const int batch_size, const int num_channels, const int deformable_group, - const int height_col, const int width_col, - float *data_col) -{ - // launch channels * batch_size * height_col * width_col cores - for(int index=0; index(0); - const float h_im = h_in + i * dilation_h + offset_h; - const float w_im = w_in + j * dilation_w + offset_w; - //if (h_im >= 0 && w_im >= 0 && h_im < height && w_im < width) { - if (h_im > -1 && w_im > -1 && h_im < height && w_im < width) - { - //const float map_h = i * dilation_h + offset_h; - //const float map_w = j * dilation_w + offset_w; - //const int cur_height = height - h_in; - //const int cur_width = width - w_in; - //val = dmcn_im2col_bilinear_cpu(data_im_ptr, width, cur_height, cur_width, map_h, map_w); - val = dmcn_im2col_bilinear_cpu(data_im_ptr, width, height, width, h_im, w_im); - } - *data_col_ptr = val * mask; - // data_col_ptr += batch_size * height_col * width_col; - data_col_ptr += height_col * width_col; - } - } - } -} - -void modulated_deformable_col2im_cpu_kernel(const int n, const float *data_col, const float *data_offset, const float *data_mask, - const int channels, const int height, const int width, - const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, - const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int channel_per_deformable_group, - const int batch_size, const int deformable_group, - const int height_col, const int width_col, - float *grad_im) -{ - for(int index = 0; index < n; index++) - { - const int j = (index / width_col / height_col / batch_size) % kernel_w; - const int i = (index / width_col / height_col / batch_size / kernel_w) % kernel_h; - const int c = index / width_col / height_col / batch_size / kernel_w / kernel_h; - // compute the start and end of the output - - const int deformable_group_index = c / channel_per_deformable_group; - - int w_out = index % width_col; - int h_out = (index / width_col) % height_col; - int b = (index / width_col / height_col) % batch_size; - int w_in = w_out * stride_w - pad_w; - int h_in = h_out * stride_h - pad_h; - - const float *data_offset_ptr = data_offset + (b * deformable_group + deformable_group_index) * 2 * kernel_h * kernel_w * height_col * width_col; - const float *data_mask_ptr = data_mask + (b * deformable_group + deformable_group_index) * kernel_h * kernel_w * height_col * width_col; - const int data_offset_h_ptr = ((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out; - const int data_offset_w_ptr = ((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + w_out; - const int data_mask_hw_ptr = ((i * kernel_w + j) * height_col + h_out) * width_col + w_out; - const float offset_h = data_offset_ptr[data_offset_h_ptr]; - const float offset_w = data_offset_ptr[data_offset_w_ptr]; - const float mask = data_mask_ptr[data_mask_hw_ptr]; - const float cur_inv_h_data = h_in + i * 
dilation_h + offset_h; - const float cur_inv_w_data = w_in + j * dilation_w + offset_w; - - const float cur_top_grad = data_col[index] * mask; - const int cur_h = (int)cur_inv_h_data; - const int cur_w = (int)cur_inv_w_data; - - for (int dy = -2; dy <= 2; dy++) - { - for (int dx = -2; dx <= 2; dx++) - { - if (cur_h + dy >= 0 && cur_h + dy < height && - cur_w + dx >= 0 && cur_w + dx < width && - abs(cur_inv_h_data - (cur_h + dy)) < 1 && - abs(cur_inv_w_data - (cur_w + dx)) < 1) - { - int cur_bottom_grad_pos = ((b * channels + c) * height + cur_h + dy) * width + cur_w + dx; - float weight = dmcn_get_gradient_weight_cpu(cur_inv_h_data, cur_inv_w_data, cur_h + dy, cur_w + dx, height, width); - //atomicAdd(grad_im + cur_bottom_grad_pos, weight * cur_top_grad); - *(grad_im + cur_bottom_grad_pos) += weight * cur_top_grad; - - } - } - } - } -} - -void modulated_deformable_col2im_coord_cpu_kernel(const int n, const float *data_col, const float *data_im, - const float *data_offset, const float *data_mask, - const int channels, const int height, const int width, - const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, - const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int channel_per_deformable_group, - const int batch_size, const int offset_channels, const int deformable_group, - const int height_col, const int width_col, - float *grad_offset, float *grad_mask) -{ - for(int index = 0; index < n; index++) - { - float val = 0, mval = 0; - int w = index % width_col; - int h = (index / width_col) % height_col; - int c = (index / width_col / height_col) % offset_channels; - int b = (index / width_col / height_col) / offset_channels; - // compute the start and end of the output - - const int deformable_group_index = c / (2 * kernel_h * kernel_w); - const int col_step = kernel_h * kernel_w; - int cnt = 0; - const float *data_col_ptr = data_col + deformable_group_index * channel_per_deformable_group * batch_size * width_col * height_col; - const float *data_im_ptr = data_im + (b * deformable_group + deformable_group_index) * channel_per_deformable_group / kernel_h / kernel_w * height * width; - const float *data_offset_ptr = data_offset + (b * deformable_group + deformable_group_index) * 2 * kernel_h * kernel_w * height_col * width_col; - const float *data_mask_ptr = data_mask + (b * deformable_group + deformable_group_index) * kernel_h * kernel_w * height_col * width_col; - - const int offset_c = c - deformable_group_index * 2 * kernel_h * kernel_w; - - for (int col_c = (offset_c / 2); col_c < channel_per_deformable_group; col_c += col_step) - { - const int col_pos = (((col_c * batch_size + b) * height_col) + h) * width_col + w; - const int bp_dir = offset_c % 2; - - int j = (col_pos / width_col / height_col / batch_size) % kernel_w; - int i = (col_pos / width_col / height_col / batch_size / kernel_w) % kernel_h; - int w_out = col_pos % width_col; - int h_out = (col_pos / width_col) % height_col; - int w_in = w_out * stride_w - pad_w; - int h_in = h_out * stride_h - pad_h; - const int data_offset_h_ptr = (((2 * (i * kernel_w + j)) * height_col + h_out) * width_col + w_out); - const int data_offset_w_ptr = (((2 * (i * kernel_w + j) + 1) * height_col + h_out) * width_col + w_out); - const int data_mask_hw_ptr = (((i * kernel_w + j) * height_col + h_out) * width_col + w_out); - const float offset_h = data_offset_ptr[data_offset_h_ptr]; - const float offset_w = data_offset_ptr[data_offset_w_ptr]; - const float mask = 
data_mask_ptr[data_mask_hw_ptr]; - float inv_h = h_in + i * dilation_h + offset_h; - float inv_w = w_in + j * dilation_w + offset_w; - if (inv_h <= -1 || inv_w <= -1 || inv_h >= height || inv_w >= width) - { - inv_h = inv_w = -2; - } - else - { - mval += data_col_ptr[col_pos] * dmcn_im2col_bilinear_cpu(data_im_ptr + cnt * height * width, width, height, width, inv_h, inv_w); - } - const float weight = dmcn_get_coordinate_weight_cpu( - inv_h, inv_w, - height, width, data_im_ptr + cnt * height * width, width, bp_dir); - val += weight * data_col_ptr[col_pos] * mask; - cnt += 1; - } - // KERNEL_ASSIGN(grad_offset[index], offset_req, val); - grad_offset[index] = val; - if (offset_c % 2 == 0) - // KERNEL_ASSIGN(grad_mask[(((b * deformable_group + deformable_group_index) * kernel_h * kernel_w + offset_c / 2) * height_col + h) * width_col + w], mask_req, mval); - grad_mask[(((b * deformable_group + deformable_group_index) * kernel_h * kernel_w + offset_c / 2) * height_col + h) * width_col + w] = mval; - } -} - -void modulated_deformable_im2col_cpu(const float* data_im, const float* data_offset, const float* data_mask, - const int batch_size, const int channels, const int height_im, const int width_im, - const int height_col, const int width_col, const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int deformable_group, float* data_col) { - // num_axes should be smaller than block size - const int channel_per_deformable_group = channels / deformable_group; - const int num_kernels = channels * batch_size * height_col * width_col; - modulated_deformable_im2col_cpu_kernel( - num_kernels, data_im, data_offset, data_mask, height_im, width_im, kernel_h, kernel_w, - pad_h, pad_w, stride_h, stride_w, dilation_h, dilation_w, channel_per_deformable_group, - batch_size, channels, deformable_group, height_col, width_col, data_col); - - /*cudaError_t err = cudaGetLastError(); - if (err != cudaSuccess) - { - printf("error in modulated_deformable_im2col_cuda: %s\n", cudaGetErrorString(err)); - }*/ - -} - -void modulated_deformable_col2im_cpu(const float* data_col, const float* data_offset, const float* data_mask, - const int batch_size, const int channels, const int height_im, const int width_im, - const int height_col, const int width_col, const int kernel_h, const int kernel_w, - const int pad_h, const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int deformable_group, float* grad_im){ - - const int channel_per_deformable_group = channels / deformable_group; - const int num_kernels = channels * kernel_h * kernel_w * batch_size * height_col * width_col; - modulated_deformable_col2im_cpu_kernel( - num_kernels, data_col, data_offset, data_mask, channels, height_im, width_im, - kernel_h, kernel_w, pad_h, pad_h, stride_h, stride_w, - dilation_h, dilation_w, channel_per_deformable_group, - batch_size, deformable_group, height_col, width_col, grad_im); - /*cudaError_t err = cudaGetLastError(); - if (err != cudaSuccess) - { - printf("error in modulated_deformable_col2im_cuda: %s\n", cudaGetErrorString(err)); - }*/ - -} - -void modulated_deformable_col2im_coord_cpu(const float* data_col, const float* data_im, const float* data_offset, const float* data_mask, - const int batch_size, const int channels, const int height_im, const int width_im, - const int height_col, const int width_col, const int kernel_h, const int kernel_w, - const int pad_h, 
const int pad_w, const int stride_h, const int stride_w, - const int dilation_h, const int dilation_w, - const int deformable_group, - float* grad_offset, float* grad_mask) { - const int num_kernels = batch_size * height_col * width_col * 2 * kernel_h * kernel_w * deformable_group; - const int channel_per_deformable_group = channels * kernel_h * kernel_w / deformable_group; - modulated_deformable_col2im_coord_cpu_kernel( - num_kernels, data_col, data_im, data_offset, data_mask, channels, height_im, width_im, - kernel_h, kernel_w, pad_h, pad_w, stride_h, stride_w, - dilation_h, dilation_w, channel_per_deformable_group, - batch_size, 2 * kernel_h * kernel_w * deformable_group, deformable_group, height_col, width_col, - grad_offset, grad_mask); - /*cudaError_t err = cudaGetLastError(); - if (err != cudaSuccess) - { - printf("error in modulated_deformable_col2im_coord_cuda: %s\n", cudaGetErrorString(err)); - }*/ -} \ No newline at end of file diff --git a/spaces/DHEIVER/Classificacao.de.Imagens.de.Cardiomiopatia/README.md b/spaces/DHEIVER/Classificacao.de.Imagens.de.Cardiomiopatia/README.md deleted file mode 100644 index 3c75846a271c38ecc56724b3590536cdc366fc29..0000000000000000000000000000000000000000 --- a/spaces/DHEIVER/Classificacao.de.Imagens.de.Cardiomiopatia/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Cardiomyopathy Image Classification -emoji: 🐠 -colorFrom: pink -colorTo: gray -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DHEIVER/CoronaryAngioSegment/README.md b/spaces/DHEIVER/CoronaryAngioSegment/README.md deleted file mode 100644 index 32bd868daf127f136881a5daf5c43b865dbc04e3..0000000000000000000000000000000000000000 --- a/spaces/DHEIVER/CoronaryAngioSegment/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: CoronaryAngioSegment -emoji: 🌖 -colorFrom: green -colorTo: pink -sdk: gradio -sdk_version: 3.33.1 -app_file: app.py -pinned: false -license: mit -duplicated_from: KurtLin/CoronaryAngioSegment ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/contourpy/util/bokeh_util.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/contourpy/util/bokeh_util.py deleted file mode 100644 index e75654d7c30c552c1e1bd0492a85d40e8f27de40..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/contourpy/util/bokeh_util.py +++ /dev/null @@ -1,90 +0,0 @@ -from __future__ import annotations - -from typing import TYPE_CHECKING, cast - -from contourpy import FillType, LineType -from contourpy.util.mpl_util import mpl_codes_to_offsets - -if TYPE_CHECKING: - from contourpy._contourpy import ( - CoordinateArray, FillReturn, LineReturn, LineReturn_Separate, LineReturn_SeparateCode, - ) - - -def filled_to_bokeh( - filled: FillReturn, - fill_type: FillType, -) -> tuple[list[list[CoordinateArray]], list[list[CoordinateArray]]]: - xs: list[list[CoordinateArray]] = [] - ys: list[list[CoordinateArray]] = [] - if fill_type in (FillType.OuterOffset, FillType.ChunkCombinedOffset, - FillType.OuterCode, FillType.ChunkCombinedCode): - have_codes = fill_type in (FillType.OuterCode, FillType.ChunkCombinedCode) - - for points, offsets in zip(*filled): - if points is None: - continue - if have_codes: - offsets = mpl_codes_to_offsets(offsets) - xs.append([]) # New outer with zero or more 
holes. - ys.append([]) - for i in range(len(offsets)-1): - xys = points[offsets[i]:offsets[i+1]] - xs[-1].append(xys[:, 0]) - ys[-1].append(xys[:, 1]) - elif fill_type in (FillType.ChunkCombinedCodeOffset, FillType.ChunkCombinedOffsetOffset): - for points, codes_or_offsets, outer_offsets in zip(*filled): - if points is None: - continue - for j in range(len(outer_offsets)-1): - if fill_type == FillType.ChunkCombinedCodeOffset: - codes = codes_or_offsets[outer_offsets[j]:outer_offsets[j+1]] - offsets = mpl_codes_to_offsets(codes) + outer_offsets[j] - else: - offsets = codes_or_offsets[outer_offsets[j]:outer_offsets[j+1]+1] - xs.append([]) # New outer with zero or more holes. - ys.append([]) - for k in range(len(offsets)-1): - xys = points[offsets[k]:offsets[k+1]] - xs[-1].append(xys[:, 0]) - ys[-1].append(xys[:, 1]) - else: - raise RuntimeError(f"Conversion of FillType {fill_type} to Bokeh is not implemented") - - return xs, ys - - -def lines_to_bokeh( - lines: LineReturn, - line_type: LineType, -) -> tuple[list[CoordinateArray], list[CoordinateArray]]: - xs: list[CoordinateArray] = [] - ys: list[CoordinateArray] = [] - - if line_type == LineType.Separate: - if TYPE_CHECKING: - lines = cast(LineReturn_Separate, lines) - for line in lines: - xs.append(line[:, 0]) - ys.append(line[:, 1]) - elif line_type == LineType.SeparateCode: - if TYPE_CHECKING: - lines = cast(LineReturn_SeparateCode, lines) - for line in lines[0]: - xs.append(line[:, 0]) - ys.append(line[:, 1]) - elif line_type in (LineType.ChunkCombinedCode, LineType.ChunkCombinedOffset): - for points, offsets in zip(*lines): - if points is None: - continue - if line_type == LineType.ChunkCombinedCode: - offsets = mpl_codes_to_offsets(offsets) - - for i in range(len(offsets)-1): - line = points[offsets[i]:offsets[i+1]] - xs.append(line[:, 0]) - ys.append(line[:, 1]) - else: - raise RuntimeError(f"Conversion of LineType {line_type} to Bokeh is not implemented") - - return xs, ys diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/qu2cu/__main__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/qu2cu/__main__.py deleted file mode 100644 index 27728cc7aa400fa7389cf0ba31990165bc7b03b5..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/qu2cu/__main__.py +++ /dev/null @@ -1,7 +0,0 @@ -import sys - -from .cli import main - - -if __name__ == "__main__": - sys.exit(main()) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/strings.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/strings.py deleted file mode 100644 index d85bc052969438e1e05dbf3abd9c75c8effc7d03..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/strings.py +++ /dev/null @@ -1,48 +0,0 @@ -import os -import threading -from typing import Dict - -import requests - -from gradio import wasm_utils - -MESSAGING_API_ENDPOINT = "https://api.gradio.app/gradio-messaging/en" - -en = { - "RUNNING_LOCALLY": "Running on local URL: {}", - "RUNNING_LOCALLY_SEPARATED": "Running on local URL: {}://{}:{}", - "SHARE_LINK_DISPLAY": "Running on public URL: {}", - "COULD_NOT_GET_SHARE_LINK": "\nCould not create share link. Please check your internet connection or our status page: https://status.gradio.app.", - "COULD_NOT_GET_SHARE_LINK_MISSING_FILE": "\nCould not create share link. Missing file: {}. \n\nPlease check your internet connection. 
This can happen if your antivirus software blocks the download of this file. You can install manually by following these steps: \n\n1. Download this file: {}\n2. Rename the downloaded file to: {}\n3. Move the file to this location: {}", - "COLAB_NO_LOCAL": "Cannot display local interface on google colab, public link created.", - "PUBLIC_SHARE_TRUE": "\nTo create a public link, set `share=True` in `launch()`.", - "MODEL_PUBLICLY_AVAILABLE_URL": "Model available publicly at: {} (may take up to a minute for link to be usable)", - "GENERATING_PUBLIC_LINK": "Generating public link (may take a few seconds...):", - "BETA_INVITE": "\nThanks for being a Gradio user! If you have questions or feedback, please join our Discord server and chat with us: https://discord.gg/feTf9x3ZSB", - "COLAB_DEBUG_TRUE": "Colab notebook detected. This cell will run indefinitely so that you can see errors and logs. " - "To turn off, set debug=False in launch().", - "COLAB_DEBUG_FALSE": "Colab notebook detected. To show errors in colab notebook, set debug=True in launch()", - "COLAB_WARNING": "Note: opening Chrome Inspector may crash demo inside Colab notebooks.", - "SHARE_LINK_MESSAGE": "\nThis share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)", - "INLINE_DISPLAY_BELOW": "Interface loading below...", - "TIPS": [ - "You can add authentication to your app with the `auth=` kwarg in the `launch()` command; for example: `gr.Interface(...).launch(auth=('username', 'password'))`", - "Let users specify why they flagged input with the `flagging_options=` kwarg; for example: `gr.Interface(..., flagging_options=['too slow', 'incorrect output', 'other'])`", - "You can show or hide the button for flagging with the `allow_flagging=` kwarg; for example: gr.Interface(..., allow_flagging=False)", - "The inputs and outputs flagged by the users are stored in the flagging directory, specified by the flagging_dir= kwarg. You can view this data through the interface by setting the examples= kwarg to the flagging directory; for example gr.Interface(..., examples='flagged')", - "You can add a title and description to your interface using the `title=` and `description=` kwargs. The `article=` kwarg can be used to add a description under the interface; for example gr.Interface(..., title='My app', description='Lorem ipsum'). 
Try using Markdown!", - "For a classification or regression model, set `interpretation='default'` to see why the model made a prediction.", - ], -} - - -def get_updated_messaging(en: Dict): - try: - updated_messaging = requests.get(MESSAGING_API_ENDPOINT, timeout=3).json() - en.update(updated_messaging) - except Exception: # Use default messaging - pass - - -if os.getenv("GRADIO_ANALYTICS_ENABLED", "True") == "True" and not wasm_utils.IS_WASM: - threading.Thread(target=get_updated_messaging, args=(en,)).start() diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Login-aa2d581f.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Login-aa2d581f.js deleted file mode 100644 index fbb42150314250efd9cf9a32f40b1b4a51b71c8a..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/Login-aa2d581f.js +++ /dev/null @@ -1,3 +0,0 @@ -import{S as j,e as q,s as A,N as h,k as $,K as C,U as L,p,o as v,z as x,v as w,A as c,x as k,O as g,P,M as B,R as H,h as N,j as S,t as I}from"./index-3370be2a.js";import{F as K}from"./Form-bf52aaa0.js";import{T}from"./Textbox-086bc878.js";import{a as M}from"./Button-89624748.js";import{C as R}from"./Column-61895400.js";/* empty css */import"./BlockTitle-bcf8c05e.js";import"./Info-5611e10f.js";import"./Copy-6cd42558.js";/* empty css */function z(i){let e,s;return{c(){e=h("p"),s=P(i[0]),C(e,"class","auth svelte-1ogxbi0")},m(l,o){p(l,e,o),B(e,s)},p(l,o){o&1&&H(s,l[0])},d(l){l&&c(e)}}}function D(i){let e;return{c(){e=h("p"),e.textContent=`If you are visiting a HuggingFace Space in Incognito mode, you must - enable third party cookies.`,C(e,"class","auth svelte-1ogxbi0")},m(s,l){p(s,e,l)},d(s){s&&c(e)}}}function O(i){let e;return{c(){e=h("p"),e.textContent="Incorrect Credentials",C(e,"class","creds svelte-1ogxbi0")},m(s,l){p(s,e,l)},d(s){s&&c(e)}}}function U(i){let e,s,l,o,r,m;function d(n){i[8](n)}let _={label:"username",lines:1,show_label:!0,max_lines:1,mode:"dynamic"};i[3]!==void 0&&(_.value=i[3]),e=new T({props:_}),N.push(()=>S(e,"value",d)),e.$on("submit",i[6]);function b(n){i[9](n)}let u={label:"password",lines:1,show_label:!0,max_lines:1,mode:"dynamic",type:"password"};return i[4]!==void 0&&(u.value=i[4]),o=new T({props:u}),N.push(()=>S(o,"value",b)),o.$on("submit",i[6]),{c(){$(e.$$.fragment),l=g(),$(o.$$.fragment)},m(n,f){v(e,n,f),p(n,l,f),v(o,n,f),m=!0},p(n,f){const t={};!s&&f&8&&(s=!0,t.value=n[3],I(()=>s=!1)),e.$set(t);const a={};!r&&f&16&&(r=!0,a.value=n[4],I(()=>r=!1)),o.$set(a)},i(n){m||(x(e.$$.fragment,n),x(o.$$.fragment,n),m=!0)},o(n){w(e.$$.fragment,n),w(o.$$.fragment,n),m=!1},d(n){n&&c(l),k(e,n),k(o,n)}}}function E(i){let e;return{c(){e=P("Login")},m(s,l){p(s,e,l)},d(s){s&&c(e)}}}function G(i){let e,s,l,o,r,m,d,_,b,u=i[0]&&z(i),n=i[2]&&D(),f=i[5]&&O();return m=new K({props:{$$slots:{default:[U]},$$scope:{ctx:i}}}),_=new 
M({props:{size:"lg",variant:"primary",$$slots:{default:[E]},$$scope:{ctx:i}}}),_.$on("click",i[6]),{c(){e=h("h2"),e.textContent="Login",s=g(),u&&u.c(),l=g(),n&&n.c(),o=g(),f&&f.c(),r=g(),$(m.$$.fragment),d=g(),$(_.$$.fragment),C(e,"class","svelte-1ogxbi0")},m(t,a){p(t,e,a),p(t,s,a),u&&u.m(t,a),p(t,l,a),n&&n.m(t,a),p(t,o,a),f&&f.m(t,a),p(t,r,a),v(m,t,a),p(t,d,a),v(_,t,a),b=!0},p(t,a){t[0]?u?u.p(t,a):(u=z(t),u.c(),u.m(l.parentNode,l)):u&&(u.d(1),u=null),t[2]?n||(n=D(),n.c(),n.m(o.parentNode,o)):n&&(n.d(1),n=null),t[5]?f||(f=O(),f.c(),f.m(r.parentNode,r)):f&&(f.d(1),f=null);const y={};a&1048&&(y.$$scope={dirty:a,ctx:t}),m.$set(y);const F={};a&1024&&(F.$$scope={dirty:a,ctx:t}),_.$set(F)},i(t){b||(x(m.$$.fragment,t),x(_.$$.fragment,t),b=!0)},o(t){w(m.$$.fragment,t),w(_.$$.fragment,t),b=!1},d(t){t&&(c(e),c(s),c(l),c(o),c(r),c(d)),u&&u.d(t),n&&n.d(t),f&&f.d(t),k(m,t),k(_,t)}}}function J(i){let e,s,l;return s=new R({props:{variant:"panel",min_width:480,$$slots:{default:[G]},$$scope:{ctx:i}}}),{c(){e=h("div"),$(s.$$.fragment),C(e,"class","wrap svelte-1ogxbi0"),L(e,"min-h-screen",i[1])},m(o,r){p(o,e,r),v(s,e,null),l=!0},p(o,[r]){const m={};r&1085&&(m.$$scope={dirty:r,ctx:o}),s.$set(m),(!l||r&2)&&L(e,"min-h-screen",o[1])},i(o){l||(x(s.$$.fragment,o),l=!0)},o(o){w(s.$$.fragment,o),l=!1},d(o){o&&c(e),k(s)}}}function Q(i,e,s){let{root:l}=e,{auth_message:o}=e,{app_mode:r}=e,{space_id:m}=e,d="",_="",b=!1;const u=async()=>{const t=new FormData;t.append("username",d),t.append("password",_);let a=await fetch(l+"/login",{method:"POST",body:t});a.status===400?(s(5,b=!0),s(3,d=""),s(4,_="")):a.status==200&&location.reload()};function n(t){d=t,s(3,d)}function f(t){_=t,s(4,_)}return i.$$set=t=>{"root"in t&&s(7,l=t.root),"auth_message"in t&&s(0,o=t.auth_message),"app_mode"in t&&s(1,r=t.app_mode),"space_id"in t&&s(2,m=t.space_id)},[o,r,m,d,_,b,u,l,n,f]}class le extends j{constructor(e){super(),q(this,e,Q,J,A,{root:7,auth_message:0,app_mode:1,space_id:2})}}export{le as default}; -//# sourceMappingURL=Login-aa2d581f.js.map diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpx/_client.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpx/_client.py deleted file mode 100644 index cb475e02045aafac34309e4b808e12c580e58d8f..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/httpx/_client.py +++ /dev/null @@ -1,2006 +0,0 @@ -import datetime -import enum -import logging -import typing -import warnings -from contextlib import asynccontextmanager, contextmanager -from types import TracebackType - -from .__version__ import __version__ -from ._auth import Auth, BasicAuth, FunctionAuth -from ._config import ( - DEFAULT_LIMITS, - DEFAULT_MAX_REDIRECTS, - DEFAULT_TIMEOUT_CONFIG, - Limits, - Proxy, - Timeout, -) -from ._decoders import SUPPORTED_DECODERS -from ._exceptions import ( - InvalidURL, - RemoteProtocolError, - TooManyRedirects, - request_context, -) -from ._models import Cookies, Headers, Request, Response -from ._status_codes import codes -from ._transports.asgi import ASGITransport -from ._transports.base import AsyncBaseTransport, BaseTransport -from ._transports.default import AsyncHTTPTransport, HTTPTransport -from ._transports.wsgi import WSGITransport -from ._types import ( - AsyncByteStream, - AuthTypes, - CertTypes, - CookieTypes, - HeaderTypes, - ProxiesTypes, - QueryParamTypes, - RequestContent, - RequestData, - RequestExtensions, - RequestFiles, - SyncByteStream, - TimeoutTypes, - URLTypes, - VerifyTypes, -) -from ._urls 
import URL, QueryParams -from ._utils import ( - Timer, - URLPattern, - get_environment_proxies, - is_https_redirect, - same_origin, -) - -# The type annotation for @classmethod and context managers here follows PEP 484 -# https://www.python.org/dev/peps/pep-0484/#annotating-instance-and-class-methods -T = typing.TypeVar("T", bound="Client") -U = typing.TypeVar("U", bound="AsyncClient") - - -class UseClientDefault: - """ - For some parameters such as `auth=...` and `timeout=...` we need to be able - to indicate the default "unset" state, in a way that is distinctly different - to using `None`. - - The default "unset" state indicates that whatever default is set on the - client should be used. This is different to setting `None`, which - explicitly disables the parameter, possibly overriding a client default. - - For example we use `timeout=USE_CLIENT_DEFAULT` in the `request()` signature. - Omitting the `timeout` parameter will send a request using whatever default - timeout has been configured on the client. Including `timeout=None` will - ensure no timeout is used. - - Note that user code shouldn't need to use the `USE_CLIENT_DEFAULT` constant, - but it is used internally when a parameter is not included. - """ - - -USE_CLIENT_DEFAULT = UseClientDefault() - - -logger = logging.getLogger("httpx") - -USER_AGENT = f"python-httpx/{__version__}" -ACCEPT_ENCODING = ", ".join( - [key for key in SUPPORTED_DECODERS.keys() if key != "identity"] -) - - -class ClientState(enum.Enum): - # UNOPENED: - # The client has been instantiated, but has not been used to send a request, - # or been opened by entering the context of a `with` block. - UNOPENED = 1 - # OPENED: - # The client has either sent a request, or is within a `with` block. - OPENED = 2 - # CLOSED: - # The client has either exited the `with` block, or `close()` has - # been called explicitly. - CLOSED = 3 - - -class BoundSyncStream(SyncByteStream): - """ - A byte stream that is bound to a given response instance, and that - ensures the `response.elapsed` is set once the response is closed. - """ - - def __init__( - self, stream: SyncByteStream, response: Response, timer: Timer - ) -> None: - self._stream = stream - self._response = response - self._timer = timer - - def __iter__(self) -> typing.Iterator[bytes]: - for chunk in self._stream: - yield chunk - - def close(self) -> None: - seconds = self._timer.sync_elapsed() - self._response.elapsed = datetime.timedelta(seconds=seconds) - self._stream.close() - - -class BoundAsyncStream(AsyncByteStream): - """ - An async byte stream that is bound to a given response instance, and that - ensures the `response.elapsed` is set once the response is closed. 
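    From the caller's side this means `Response.elapsed` is only populated once
    the streamed body has been consumed and the response closed. A minimal
    sketch using the public streaming API (the URL is a placeholder):

    ```python
    import asyncio
    import httpx

    async def main() -> None:
        async with httpx.AsyncClient() as client:
            async with client.stream("GET", "https://www.example.org/") as response:
                async for chunk in response.aiter_bytes():
                    pass  # consume the streamed body
            # the response stream is closed here, so `elapsed` has been set
            print(response.elapsed.total_seconds())

    asyncio.run(main())
    ```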
- """ - - def __init__( - self, stream: AsyncByteStream, response: Response, timer: Timer - ) -> None: - self._stream = stream - self._response = response - self._timer = timer - - async def __aiter__(self) -> typing.AsyncIterator[bytes]: - async for chunk in self._stream: - yield chunk - - async def aclose(self) -> None: - seconds = await self._timer.async_elapsed() - self._response.elapsed = datetime.timedelta(seconds=seconds) - await self._stream.aclose() - - -EventHook = typing.Callable[..., typing.Any] - - -class BaseClient: - def __init__( - self, - *, - auth: typing.Optional[AuthTypes] = None, - params: typing.Optional[QueryParamTypes] = None, - headers: typing.Optional[HeaderTypes] = None, - cookies: typing.Optional[CookieTypes] = None, - timeout: TimeoutTypes = DEFAULT_TIMEOUT_CONFIG, - follow_redirects: bool = False, - max_redirects: int = DEFAULT_MAX_REDIRECTS, - event_hooks: typing.Optional[ - typing.Mapping[str, typing.List[EventHook]] - ] = None, - base_url: URLTypes = "", - trust_env: bool = True, - default_encoding: typing.Union[str, typing.Callable[[bytes], str]] = "utf-8", - ): - event_hooks = {} if event_hooks is None else event_hooks - - self._base_url = self._enforce_trailing_slash(URL(base_url)) - - self._auth = self._build_auth(auth) - self._params = QueryParams(params) - self.headers = Headers(headers) - self._cookies = Cookies(cookies) - self._timeout = Timeout(timeout) - self.follow_redirects = follow_redirects - self.max_redirects = max_redirects - self._event_hooks = { - "request": list(event_hooks.get("request", [])), - "response": list(event_hooks.get("response", [])), - } - self._trust_env = trust_env - self._default_encoding = default_encoding - self._state = ClientState.UNOPENED - - @property - def is_closed(self) -> bool: - """ - Check if the client being closed - """ - return self._state == ClientState.CLOSED - - @property - def trust_env(self) -> bool: - return self._trust_env - - def _enforce_trailing_slash(self, url: URL) -> URL: - if url.raw_path.endswith(b"/"): - return url - return url.copy_with(raw_path=url.raw_path + b"/") - - def _get_proxy_map( - self, proxies: typing.Optional[ProxiesTypes], allow_env_proxies: bool - ) -> typing.Dict[str, typing.Optional[Proxy]]: - if proxies is None: - if allow_env_proxies: - return { - key: None if url is None else Proxy(url=url) - for key, url in get_environment_proxies().items() - } - return {} - if isinstance(proxies, dict): - new_proxies = {} - for key, value in proxies.items(): - proxy = Proxy(url=value) if isinstance(value, (str, URL)) else value - new_proxies[str(key)] = proxy - return new_proxies - else: - proxy = Proxy(url=proxies) if isinstance(proxies, (str, URL)) else proxies - return {"all://": proxy} - - @property - def timeout(self) -> Timeout: - return self._timeout - - @timeout.setter - def timeout(self, timeout: TimeoutTypes) -> None: - self._timeout = Timeout(timeout) - - @property - def event_hooks(self) -> typing.Dict[str, typing.List[EventHook]]: - return self._event_hooks - - @event_hooks.setter - def event_hooks( - self, event_hooks: typing.Dict[str, typing.List[EventHook]] - ) -> None: - self._event_hooks = { - "request": list(event_hooks.get("request", [])), - "response": list(event_hooks.get("response", [])), - } - - @property - def auth(self) -> typing.Optional[Auth]: - """ - Authentication class used when none is passed at the request-level. - - See also [Authentication][0]. 
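        For example (a brief sketch; the URLs and credentials are placeholders):

        ```python
        import httpx

        client = httpx.Client(auth=("alice", "s3cr3t"))       # client-level basic auth
        client.get("https://example.org/private")             # uses the client default
        client.get("https://example.org/public", auth=None)   # explicitly disables auth
        ```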
- - [0]: /quickstart/#authentication - """ - return self._auth - - @auth.setter - def auth(self, auth: AuthTypes) -> None: - self._auth = self._build_auth(auth) - - @property - def base_url(self) -> URL: - """ - Base URL to use when sending requests with relative URLs. - """ - return self._base_url - - @base_url.setter - def base_url(self, url: URLTypes) -> None: - self._base_url = self._enforce_trailing_slash(URL(url)) - - @property - def headers(self) -> Headers: - """ - HTTP headers to include when sending requests. - """ - return self._headers - - @headers.setter - def headers(self, headers: HeaderTypes) -> None: - client_headers = Headers( - { - b"Accept": b"*/*", - b"Accept-Encoding": ACCEPT_ENCODING.encode("ascii"), - b"Connection": b"keep-alive", - b"User-Agent": USER_AGENT.encode("ascii"), - } - ) - client_headers.update(headers) - self._headers = client_headers - - @property - def cookies(self) -> Cookies: - """ - Cookie values to include when sending requests. - """ - return self._cookies - - @cookies.setter - def cookies(self, cookies: CookieTypes) -> None: - self._cookies = Cookies(cookies) - - @property - def params(self) -> QueryParams: - """ - Query parameters to include in the URL when sending requests. - """ - return self._params - - @params.setter - def params(self, params: QueryParamTypes) -> None: - self._params = QueryParams(params) - - def build_request( - self, - method: str, - url: URLTypes, - *, - content: typing.Optional[RequestContent] = None, - data: typing.Optional[RequestData] = None, - files: typing.Optional[RequestFiles] = None, - json: typing.Optional[typing.Any] = None, - params: typing.Optional[QueryParamTypes] = None, - headers: typing.Optional[HeaderTypes] = None, - cookies: typing.Optional[CookieTypes] = None, - timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - extensions: typing.Optional[RequestExtensions] = None, - ) -> Request: - """ - Build and return a request instance. - - * The `params`, `headers` and `cookies` arguments - are merged with any values set on the client. - * The `url` argument is merged with any `base_url` set on the client. - - See also: [Request instances][0] - - [0]: /advanced/#request-instances - """ - url = self._merge_url(url) - headers = self._merge_headers(headers) - cookies = self._merge_cookies(cookies) - params = self._merge_queryparams(params) - extensions = {} if extensions is None else extensions - if "timeout" not in extensions: - timeout = ( - self.timeout - if isinstance(timeout, UseClientDefault) - else Timeout(timeout) - ) - extensions = dict(**extensions, timeout=timeout.as_dict()) - return Request( - method, - url, - content=content, - data=data, - files=files, - json=json, - params=params, - headers=headers, - cookies=cookies, - extensions=extensions, - ) - - def _merge_url(self, url: URLTypes) -> URL: - """ - Merge a URL argument together with any 'base_url' on the client, - to create the URL used for the outgoing request. - """ - merge_url = URL(url) - if merge_url.is_relative_url: - # To merge URLs we always append to the base URL. To get this - # behaviour correct we always ensure the base URL ends in a '/' - # separator, and strip any leading '/' from the merge URL. - # - # So, eg... 
- # - # >>> client = Client(base_url="https://www.example.com/subpath") - # >>> client.base_url - # URL('https://www.example.com/subpath/') - # >>> client.build_request("GET", "/path").url - # URL('https://www.example.com/subpath/path') - merge_raw_path = self.base_url.raw_path + merge_url.raw_path.lstrip(b"/") - return self.base_url.copy_with(raw_path=merge_raw_path) - return merge_url - - def _merge_cookies( - self, cookies: typing.Optional[CookieTypes] = None - ) -> typing.Optional[CookieTypes]: - """ - Merge a cookies argument together with any cookies on the client, - to create the cookies used for the outgoing request. - """ - if cookies or self.cookies: - merged_cookies = Cookies(self.cookies) - merged_cookies.update(cookies) - return merged_cookies - return cookies - - def _merge_headers( - self, headers: typing.Optional[HeaderTypes] = None - ) -> typing.Optional[HeaderTypes]: - """ - Merge a headers argument together with any headers on the client, - to create the headers used for the outgoing request. - """ - merged_headers = Headers(self.headers) - merged_headers.update(headers) - return merged_headers - - def _merge_queryparams( - self, params: typing.Optional[QueryParamTypes] = None - ) -> typing.Optional[QueryParamTypes]: - """ - Merge a queryparams argument together with any queryparams on the client, - to create the queryparams used for the outgoing request. - """ - if params or self.params: - merged_queryparams = QueryParams(self.params) - return merged_queryparams.merge(params) - return params - - def _build_auth(self, auth: typing.Optional[AuthTypes]) -> typing.Optional[Auth]: - if auth is None: - return None - elif isinstance(auth, tuple): - return BasicAuth(username=auth[0], password=auth[1]) - elif isinstance(auth, Auth): - return auth - elif callable(auth): - return FunctionAuth(func=auth) - else: - raise TypeError(f'Invalid "auth" argument: {auth!r}') - - def _build_request_auth( - self, - request: Request, - auth: typing.Union[AuthTypes, UseClientDefault, None] = USE_CLIENT_DEFAULT, - ) -> Auth: - auth = ( - self._auth if isinstance(auth, UseClientDefault) else self._build_auth(auth) - ) - - if auth is not None: - return auth - - username, password = request.url.username, request.url.password - if username or password: - return BasicAuth(username=username, password=password) - - return Auth() - - def _build_redirect_request(self, request: Request, response: Response) -> Request: - """ - Given a request and a redirect response, return a new request that - should be used to effect the redirect. - """ - method = self._redirect_method(request, response) - url = self._redirect_url(request, response) - headers = self._redirect_headers(request, url, method) - stream = self._redirect_stream(request, method) - cookies = Cookies(self.cookies) - return Request( - method=method, - url=url, - headers=headers, - cookies=cookies, - stream=stream, - extensions=request.extensions, - ) - - def _redirect_method(self, request: Request, response: Response) -> str: - """ - When being redirected we may want to change the method of the request - based on certain specs or browser behavior. - """ - method = request.method - - # https://tools.ietf.org/html/rfc7231#section-6.4.4 - if response.status_code == codes.SEE_OTHER and method != "HEAD": - method = "GET" - - # Do what the browsers do, despite standards... - # Turn 302s into GETs. - if response.status_code == codes.FOUND and method != "HEAD": - method = "GET" - - # If a POST is responded to with a 301, turn it into a GET. 
- # This bizarre behaviour is explained in 'requests' issue 1704. - if response.status_code == codes.MOVED_PERMANENTLY and method == "POST": - method = "GET" - - return method - - def _redirect_url(self, request: Request, response: Response) -> URL: - """ - Return the URL for the redirect to follow. - """ - location = response.headers["Location"] - - try: - url = URL(location) - except InvalidURL as exc: - raise RemoteProtocolError( - f"Invalid URL in location header: {exc}.", request=request - ) from None - - # Handle malformed 'Location' headers that are "absolute" form, have no host. - # See: https://github.com/encode/httpx/issues/771 - if url.scheme and not url.host: - url = url.copy_with(host=request.url.host) - - # Facilitate relative 'Location' headers, as allowed by RFC 7231. - # (e.g. '/path/to/resource' instead of 'http://domain.tld/path/to/resource') - if url.is_relative_url: - url = request.url.join(url) - - # Attach previous fragment if needed (RFC 7231 7.1.2) - if request.url.fragment and not url.fragment: - url = url.copy_with(fragment=request.url.fragment) - - return url - - def _redirect_headers(self, request: Request, url: URL, method: str) -> Headers: - """ - Return the headers that should be used for the redirect request. - """ - headers = Headers(request.headers) - - if not same_origin(url, request.url): - if not is_https_redirect(request.url, url): - # Strip Authorization headers when responses are redirected - # away from the origin. (Except for direct HTTP to HTTPS redirects.) - headers.pop("Authorization", None) - - # Update the Host header. - headers["Host"] = url.netloc.decode("ascii") - - if method != request.method and method == "GET": - # If we've switch to a 'GET' request, then strip any headers which - # are only relevant to the request body. - headers.pop("Content-Length", None) - headers.pop("Transfer-Encoding", None) - - # We should use the client cookie store to determine any cookie header, - # rather than whatever was on the original outgoing request. - headers.pop("Cookie", None) - - return headers - - def _redirect_stream( - self, request: Request, method: str - ) -> typing.Optional[typing.Union[SyncByteStream, AsyncByteStream]]: - """ - Return the body that should be used for the redirect request. - """ - if method != request.method and method == "GET": - return None - - return request.stream - - -class Client(BaseClient): - """ - An HTTP client, with connection pooling, HTTP/2, redirects, cookie persistence, etc. - - It can be shared between threads. - - Usage: - - ```python - >>> client = httpx.Client() - >>> response = client.get('https://example.org') - ``` - - **Parameters:** - - * **auth** - *(optional)* An authentication class to use when sending - requests. - * **params** - *(optional)* Query parameters to include in request URLs, as - a string, dictionary, or sequence of two-tuples. - * **headers** - *(optional)* Dictionary of HTTP headers to include when - sending requests. - * **cookies** - *(optional)* Dictionary of Cookie items to include when - sending requests. - * **verify** - *(optional)* SSL certificates (a.k.a CA bundle) used to - verify the identity of requested hosts. Either `True` (default CA bundle), - a path to an SSL certificate file, an `ssl.SSLContext`, or `False` - (which will disable verification). - * **cert** - *(optional)* An SSL certificate used by the requested host - to authenticate the client. 
Either a path to an SSL certificate file, or - two-tuple of (certificate file, key file), or a three-tuple of (certificate - file, key file, password). - * **proxies** - *(optional)* A dictionary mapping proxy keys to proxy - URLs. - * **timeout** - *(optional)* The timeout configuration to use when sending - requests. - * **limits** - *(optional)* The limits configuration to use. - * **max_redirects** - *(optional)* The maximum number of redirect responses - that should be followed. - * **base_url** - *(optional)* A URL to use as the base when building - request URLs. - * **transport** - *(optional)* A transport class to use for sending requests - over the network. - * **app** - *(optional)* An WSGI application to send requests to, - rather than sending actual network requests. - * **trust_env** - *(optional)* Enables or disables usage of environment - variables for configuration. - * **default_encoding** - *(optional)* The default encoding to use for decoding - response text, if no charset information is included in a response Content-Type - header. Set to a callable for automatic character set detection. Default: "utf-8". - """ - - def __init__( - self, - *, - auth: typing.Optional[AuthTypes] = None, - params: typing.Optional[QueryParamTypes] = None, - headers: typing.Optional[HeaderTypes] = None, - cookies: typing.Optional[CookieTypes] = None, - verify: VerifyTypes = True, - cert: typing.Optional[CertTypes] = None, - http1: bool = True, - http2: bool = False, - proxies: typing.Optional[ProxiesTypes] = None, - mounts: typing.Optional[typing.Mapping[str, BaseTransport]] = None, - timeout: TimeoutTypes = DEFAULT_TIMEOUT_CONFIG, - follow_redirects: bool = False, - limits: Limits = DEFAULT_LIMITS, - max_redirects: int = DEFAULT_MAX_REDIRECTS, - event_hooks: typing.Optional[ - typing.Mapping[str, typing.List[EventHook]] - ] = None, - base_url: URLTypes = "", - transport: typing.Optional[BaseTransport] = None, - app: typing.Optional[typing.Callable[..., typing.Any]] = None, - trust_env: bool = True, - default_encoding: typing.Union[str, typing.Callable[[bytes], str]] = "utf-8", - ): - super().__init__( - auth=auth, - params=params, - headers=headers, - cookies=cookies, - timeout=timeout, - follow_redirects=follow_redirects, - max_redirects=max_redirects, - event_hooks=event_hooks, - base_url=base_url, - trust_env=trust_env, - default_encoding=default_encoding, - ) - - if http2: - try: - import h2 # noqa - except ImportError: # pragma: no cover - raise ImportError( - "Using http2=True, but the 'h2' package is not installed. " - "Make sure to install httpx using `pip install httpx[http2]`." 
- ) from None - - allow_env_proxies = trust_env and app is None and transport is None - proxy_map = self._get_proxy_map(proxies, allow_env_proxies) - - self._transport = self._init_transport( - verify=verify, - cert=cert, - http1=http1, - http2=http2, - limits=limits, - transport=transport, - app=app, - trust_env=trust_env, - ) - self._mounts: typing.Dict[URLPattern, typing.Optional[BaseTransport]] = { - URLPattern(key): None - if proxy is None - else self._init_proxy_transport( - proxy, - verify=verify, - cert=cert, - http1=http1, - http2=http2, - limits=limits, - trust_env=trust_env, - ) - for key, proxy in proxy_map.items() - } - if mounts is not None: - self._mounts.update( - {URLPattern(key): transport for key, transport in mounts.items()} - ) - - self._mounts = dict(sorted(self._mounts.items())) - - def _init_transport( - self, - verify: VerifyTypes = True, - cert: typing.Optional[CertTypes] = None, - http1: bool = True, - http2: bool = False, - limits: Limits = DEFAULT_LIMITS, - transport: typing.Optional[BaseTransport] = None, - app: typing.Optional[typing.Callable[..., typing.Any]] = None, - trust_env: bool = True, - ) -> BaseTransport: - if transport is not None: - return transport - - if app is not None: - return WSGITransport(app=app) - - return HTTPTransport( - verify=verify, - cert=cert, - http1=http1, - http2=http2, - limits=limits, - trust_env=trust_env, - ) - - def _init_proxy_transport( - self, - proxy: Proxy, - verify: VerifyTypes = True, - cert: typing.Optional[CertTypes] = None, - http1: bool = True, - http2: bool = False, - limits: Limits = DEFAULT_LIMITS, - trust_env: bool = True, - ) -> BaseTransport: - return HTTPTransport( - verify=verify, - cert=cert, - http1=http1, - http2=http2, - limits=limits, - trust_env=trust_env, - proxy=proxy, - ) - - def _transport_for_url(self, url: URL) -> BaseTransport: - """ - Returns the transport instance that should be used for a given URL. - This will either be the standard connection pool, or a proxy. - """ - for pattern, transport in self._mounts.items(): - if pattern.matches(url): - return self._transport if transport is None else transport - - return self._transport - - def request( - self, - method: str, - url: URLTypes, - *, - content: typing.Optional[RequestContent] = None, - data: typing.Optional[RequestData] = None, - files: typing.Optional[RequestFiles] = None, - json: typing.Optional[typing.Any] = None, - params: typing.Optional[QueryParamTypes] = None, - headers: typing.Optional[HeaderTypes] = None, - cookies: typing.Optional[CookieTypes] = None, - auth: typing.Union[AuthTypes, UseClientDefault, None] = USE_CLIENT_DEFAULT, - follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT, - timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - extensions: typing.Optional[RequestExtensions] = None, - ) -> Response: - """ - Build and send a request. - - Equivalent to: - - ```python - request = client.build_request(...) - response = client.send(request, ...) - ``` - - See `Client.build_request()`, `Client.send()` and - [Merging of configuration][0] for how the various parameters - are merged with client-level configuration. - - [0]: /advanced/#merging-of-configuration - """ - if cookies is not None: - message = ( - "Setting per-request cookies=<...> is being deprecated, because " - "the expected behaviour on cookie persistence is ambiguous. Set " - "cookies directly on the client instance instead." 
- ) - warnings.warn(message, DeprecationWarning) - - request = self.build_request( - method=method, - url=url, - content=content, - data=data, - files=files, - json=json, - params=params, - headers=headers, - cookies=cookies, - timeout=timeout, - extensions=extensions, - ) - return self.send(request, auth=auth, follow_redirects=follow_redirects) - - @contextmanager - def stream( - self, - method: str, - url: URLTypes, - *, - content: typing.Optional[RequestContent] = None, - data: typing.Optional[RequestData] = None, - files: typing.Optional[RequestFiles] = None, - json: typing.Optional[typing.Any] = None, - params: typing.Optional[QueryParamTypes] = None, - headers: typing.Optional[HeaderTypes] = None, - cookies: typing.Optional[CookieTypes] = None, - auth: typing.Union[AuthTypes, UseClientDefault, None] = USE_CLIENT_DEFAULT, - follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT, - timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - extensions: typing.Optional[RequestExtensions] = None, - ) -> typing.Iterator[Response]: - """ - Alternative to `httpx.request()` that streams the response body - instead of loading it into memory at once. - - **Parameters**: See `httpx.request`. - - See also: [Streaming Responses][0] - - [0]: /quickstart#streaming-responses - """ - request = self.build_request( - method=method, - url=url, - content=content, - data=data, - files=files, - json=json, - params=params, - headers=headers, - cookies=cookies, - timeout=timeout, - extensions=extensions, - ) - response = self.send( - request=request, - auth=auth, - follow_redirects=follow_redirects, - stream=True, - ) - try: - yield response - finally: - response.close() - - def send( - self, - request: Request, - *, - stream: bool = False, - auth: typing.Union[AuthTypes, UseClientDefault, None] = USE_CLIENT_DEFAULT, - follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT, - ) -> Response: - """ - Send a request. - - The request is sent as-is, unmodified. - - Typically you'll want to build one with `Client.build_request()` - so that any client-level configuration is merged into the request, - but passing an explicit `httpx.Request()` is supported as well. 
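A minimal sketch of the two-step flow described above, assuming only a reachable placeholder URL (neither the host nor the endpoint comes from this codebase):

```python
import httpx

with httpx.Client(base_url="https://example.org") as client:
    # Client-level configuration (base_url, headers, cookies, ...) is merged
    # into the request when it is built; send() then transmits it unmodified.
    request = client.build_request("GET", "/items", params={"page": 1})
    response = client.send(request)
    print(response.status_code)
```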
- - See also: [Request instances][0] - - [0]: /advanced/#request-instances - """ - if self._state == ClientState.CLOSED: - raise RuntimeError("Cannot send a request, as the client has been closed.") - - self._state = ClientState.OPENED - follow_redirects = ( - self.follow_redirects - if isinstance(follow_redirects, UseClientDefault) - else follow_redirects - ) - - auth = self._build_request_auth(request, auth) - - response = self._send_handling_auth( - request, - auth=auth, - follow_redirects=follow_redirects, - history=[], - ) - try: - if not stream: - response.read() - - return response - - except BaseException as exc: - response.close() - raise exc - - def _send_handling_auth( - self, - request: Request, - auth: Auth, - follow_redirects: bool, - history: typing.List[Response], - ) -> Response: - auth_flow = auth.sync_auth_flow(request) - try: - request = next(auth_flow) - - while True: - response = self._send_handling_redirects( - request, - follow_redirects=follow_redirects, - history=history, - ) - try: - try: - next_request = auth_flow.send(response) - except StopIteration: - return response - - response.history = list(history) - response.read() - request = next_request - history.append(response) - - except BaseException as exc: - response.close() - raise exc - finally: - auth_flow.close() - - def _send_handling_redirects( - self, - request: Request, - follow_redirects: bool, - history: typing.List[Response], - ) -> Response: - while True: - if len(history) > self.max_redirects: - raise TooManyRedirects( - "Exceeded maximum allowed redirects.", request=request - ) - - for hook in self._event_hooks["request"]: - hook(request) - - response = self._send_single_request(request) - try: - for hook in self._event_hooks["response"]: - hook(response) - response.history = list(history) - - if not response.has_redirect_location: - return response - - request = self._build_redirect_request(request, response) - history = history + [response] - - if follow_redirects: - response.read() - else: - response.next_request = request - return response - - except BaseException as exc: - response.close() - raise exc - - def _send_single_request(self, request: Request) -> Response: - """ - Sends a single request, without handling any redirections. - """ - transport = self._transport_for_url(request.url) - timer = Timer() - timer.sync_start() - - if not isinstance(request.stream, SyncByteStream): - raise RuntimeError( - "Attempted to send an async request with a sync Client instance." 
- ) - - with request_context(request=request): - response = transport.handle_request(request) - - assert isinstance(response.stream, SyncByteStream) - - response.request = request - response.stream = BoundSyncStream( - response.stream, response=response, timer=timer - ) - self.cookies.extract_cookies(response) - response.default_encoding = self._default_encoding - - logger.info( - 'HTTP Request: %s %s "%s %d %s"', - request.method, - request.url, - response.http_version, - response.status_code, - response.reason_phrase, - ) - - return response - - def get( - self, - url: URLTypes, - *, - params: typing.Optional[QueryParamTypes] = None, - headers: typing.Optional[HeaderTypes] = None, - cookies: typing.Optional[CookieTypes] = None, - auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT, - timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - extensions: typing.Optional[RequestExtensions] = None, - ) -> Response: - """ - Send a `GET` request. - - **Parameters**: See `httpx.request`. - """ - return self.request( - "GET", - url, - params=params, - headers=headers, - cookies=cookies, - auth=auth, - follow_redirects=follow_redirects, - timeout=timeout, - extensions=extensions, - ) - - def options( - self, - url: URLTypes, - *, - params: typing.Optional[QueryParamTypes] = None, - headers: typing.Optional[HeaderTypes] = None, - cookies: typing.Optional[CookieTypes] = None, - auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT, - timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - extensions: typing.Optional[RequestExtensions] = None, - ) -> Response: - """ - Send an `OPTIONS` request. - - **Parameters**: See `httpx.request`. - """ - return self.request( - "OPTIONS", - url, - params=params, - headers=headers, - cookies=cookies, - auth=auth, - follow_redirects=follow_redirects, - timeout=timeout, - extensions=extensions, - ) - - def head( - self, - url: URLTypes, - *, - params: typing.Optional[QueryParamTypes] = None, - headers: typing.Optional[HeaderTypes] = None, - cookies: typing.Optional[CookieTypes] = None, - auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT, - timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - extensions: typing.Optional[RequestExtensions] = None, - ) -> Response: - """ - Send a `HEAD` request. - - **Parameters**: See `httpx.request`. 
- """ - return self.request( - "HEAD", - url, - params=params, - headers=headers, - cookies=cookies, - auth=auth, - follow_redirects=follow_redirects, - timeout=timeout, - extensions=extensions, - ) - - def post( - self, - url: URLTypes, - *, - content: typing.Optional[RequestContent] = None, - data: typing.Optional[RequestData] = None, - files: typing.Optional[RequestFiles] = None, - json: typing.Optional[typing.Any] = None, - params: typing.Optional[QueryParamTypes] = None, - headers: typing.Optional[HeaderTypes] = None, - cookies: typing.Optional[CookieTypes] = None, - auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT, - timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - extensions: typing.Optional[RequestExtensions] = None, - ) -> Response: - """ - Send a `POST` request. - - **Parameters**: See `httpx.request`. - """ - return self.request( - "POST", - url, - content=content, - data=data, - files=files, - json=json, - params=params, - headers=headers, - cookies=cookies, - auth=auth, - follow_redirects=follow_redirects, - timeout=timeout, - extensions=extensions, - ) - - def put( - self, - url: URLTypes, - *, - content: typing.Optional[RequestContent] = None, - data: typing.Optional[RequestData] = None, - files: typing.Optional[RequestFiles] = None, - json: typing.Optional[typing.Any] = None, - params: typing.Optional[QueryParamTypes] = None, - headers: typing.Optional[HeaderTypes] = None, - cookies: typing.Optional[CookieTypes] = None, - auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT, - timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - extensions: typing.Optional[RequestExtensions] = None, - ) -> Response: - """ - Send a `PUT` request. - - **Parameters**: See `httpx.request`. - """ - return self.request( - "PUT", - url, - content=content, - data=data, - files=files, - json=json, - params=params, - headers=headers, - cookies=cookies, - auth=auth, - follow_redirects=follow_redirects, - timeout=timeout, - extensions=extensions, - ) - - def patch( - self, - url: URLTypes, - *, - content: typing.Optional[RequestContent] = None, - data: typing.Optional[RequestData] = None, - files: typing.Optional[RequestFiles] = None, - json: typing.Optional[typing.Any] = None, - params: typing.Optional[QueryParamTypes] = None, - headers: typing.Optional[HeaderTypes] = None, - cookies: typing.Optional[CookieTypes] = None, - auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT, - timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - extensions: typing.Optional[RequestExtensions] = None, - ) -> Response: - """ - Send a `PATCH` request. - - **Parameters**: See `httpx.request`. 
- """ - return self.request( - "PATCH", - url, - content=content, - data=data, - files=files, - json=json, - params=params, - headers=headers, - cookies=cookies, - auth=auth, - follow_redirects=follow_redirects, - timeout=timeout, - extensions=extensions, - ) - - def delete( - self, - url: URLTypes, - *, - params: typing.Optional[QueryParamTypes] = None, - headers: typing.Optional[HeaderTypes] = None, - cookies: typing.Optional[CookieTypes] = None, - auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT, - timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - extensions: typing.Optional[RequestExtensions] = None, - ) -> Response: - """ - Send a `DELETE` request. - - **Parameters**: See `httpx.request`. - """ - return self.request( - "DELETE", - url, - params=params, - headers=headers, - cookies=cookies, - auth=auth, - follow_redirects=follow_redirects, - timeout=timeout, - extensions=extensions, - ) - - def close(self) -> None: - """ - Close transport and proxies. - """ - if self._state != ClientState.CLOSED: - self._state = ClientState.CLOSED - - self._transport.close() - for transport in self._mounts.values(): - if transport is not None: - transport.close() - - def __enter__(self: T) -> T: - if self._state != ClientState.UNOPENED: - msg = { - ClientState.OPENED: "Cannot open a client instance more than once.", - ClientState.CLOSED: "Cannot reopen a client instance, once it has been closed.", - }[self._state] - raise RuntimeError(msg) - - self._state = ClientState.OPENED - - self._transport.__enter__() - for transport in self._mounts.values(): - if transport is not None: - transport.__enter__() - return self - - def __exit__( - self, - exc_type: typing.Optional[typing.Type[BaseException]] = None, - exc_value: typing.Optional[BaseException] = None, - traceback: typing.Optional[TracebackType] = None, - ) -> None: - self._state = ClientState.CLOSED - - self._transport.__exit__(exc_type, exc_value, traceback) - for transport in self._mounts.values(): - if transport is not None: - transport.__exit__(exc_type, exc_value, traceback) - - -class AsyncClient(BaseClient): - """ - An asynchronous HTTP client, with connection pooling, HTTP/2, redirects, - cookie persistence, etc. - - Usage: - - ```python - >>> async with httpx.AsyncClient() as client: - >>> response = await client.get('https://example.org') - ``` - - **Parameters:** - - * **auth** - *(optional)* An authentication class to use when sending - requests. - * **params** - *(optional)* Query parameters to include in request URLs, as - a string, dictionary, or sequence of two-tuples. - * **headers** - *(optional)* Dictionary of HTTP headers to include when - sending requests. - * **cookies** - *(optional)* Dictionary of Cookie items to include when - sending requests. - * **verify** - *(optional)* SSL certificates (a.k.a CA bundle) used to - verify the identity of requested hosts. Either `True` (default CA bundle), - a path to an SSL certificate file, an `ssl.SSLContext`, or `False` - (which will disable verification). - * **cert** - *(optional)* An SSL certificate used by the requested host - to authenticate the client. Either a path to an SSL certificate file, or - two-tuple of (certificate file, key file), or a three-tuple of (certificate - file, key file, password). - * **http2** - *(optional)* A boolean indicating if HTTP/2 support should be - enabled. Defaults to `False`. 
- * **proxies** - *(optional)* A dictionary mapping HTTP protocols to proxy - URLs. - * **timeout** - *(optional)* The timeout configuration to use when sending - requests. - * **limits** - *(optional)* The limits configuration to use. - * **max_redirects** - *(optional)* The maximum number of redirect responses - that should be followed. - * **base_url** - *(optional)* A URL to use as the base when building - request URLs. - * **transport** - *(optional)* A transport class to use for sending requests - over the network. - * **app** - *(optional)* An ASGI application to send requests to, - rather than sending actual network requests. - * **trust_env** - *(optional)* Enables or disables usage of environment - variables for configuration. - * **default_encoding** - *(optional)* The default encoding to use for decoding - response text, if no charset information is included in a response Content-Type - header. Set to a callable for automatic character set detection. Default: "utf-8". - """ - - def __init__( - self, - *, - auth: typing.Optional[AuthTypes] = None, - params: typing.Optional[QueryParamTypes] = None, - headers: typing.Optional[HeaderTypes] = None, - cookies: typing.Optional[CookieTypes] = None, - verify: VerifyTypes = True, - cert: typing.Optional[CertTypes] = None, - http1: bool = True, - http2: bool = False, - proxies: typing.Optional[ProxiesTypes] = None, - mounts: typing.Optional[typing.Mapping[str, AsyncBaseTransport]] = None, - timeout: TimeoutTypes = DEFAULT_TIMEOUT_CONFIG, - follow_redirects: bool = False, - limits: Limits = DEFAULT_LIMITS, - max_redirects: int = DEFAULT_MAX_REDIRECTS, - event_hooks: typing.Optional[ - typing.Mapping[str, typing.List[typing.Callable[..., typing.Any]]] - ] = None, - base_url: URLTypes = "", - transport: typing.Optional[AsyncBaseTransport] = None, - app: typing.Optional[typing.Callable[..., typing.Any]] = None, - trust_env: bool = True, - default_encoding: typing.Union[str, typing.Callable[[bytes], str]] = "utf-8", - ): - super().__init__( - auth=auth, - params=params, - headers=headers, - cookies=cookies, - timeout=timeout, - follow_redirects=follow_redirects, - max_redirects=max_redirects, - event_hooks=event_hooks, - base_url=base_url, - trust_env=trust_env, - default_encoding=default_encoding, - ) - - if http2: - try: - import h2 # noqa - except ImportError: # pragma: no cover - raise ImportError( - "Using http2=True, but the 'h2' package is not installed. " - "Make sure to install httpx using `pip install httpx[http2]`." 
- ) from None - - allow_env_proxies = trust_env and app is None and transport is None - proxy_map = self._get_proxy_map(proxies, allow_env_proxies) - - self._transport = self._init_transport( - verify=verify, - cert=cert, - http1=http1, - http2=http2, - limits=limits, - transport=transport, - app=app, - trust_env=trust_env, - ) - - self._mounts: typing.Dict[URLPattern, typing.Optional[AsyncBaseTransport]] = { - URLPattern(key): None - if proxy is None - else self._init_proxy_transport( - proxy, - verify=verify, - cert=cert, - http1=http1, - http2=http2, - limits=limits, - trust_env=trust_env, - ) - for key, proxy in proxy_map.items() - } - if mounts is not None: - self._mounts.update( - {URLPattern(key): transport for key, transport in mounts.items()} - ) - self._mounts = dict(sorted(self._mounts.items())) - - def _init_transport( - self, - verify: VerifyTypes = True, - cert: typing.Optional[CertTypes] = None, - http1: bool = True, - http2: bool = False, - limits: Limits = DEFAULT_LIMITS, - transport: typing.Optional[AsyncBaseTransport] = None, - app: typing.Optional[typing.Callable[..., typing.Any]] = None, - trust_env: bool = True, - ) -> AsyncBaseTransport: - if transport is not None: - return transport - - if app is not None: - return ASGITransport(app=app) - - return AsyncHTTPTransport( - verify=verify, - cert=cert, - http1=http1, - http2=http2, - limits=limits, - trust_env=trust_env, - ) - - def _init_proxy_transport( - self, - proxy: Proxy, - verify: VerifyTypes = True, - cert: typing.Optional[CertTypes] = None, - http1: bool = True, - http2: bool = False, - limits: Limits = DEFAULT_LIMITS, - trust_env: bool = True, - ) -> AsyncBaseTransport: - return AsyncHTTPTransport( - verify=verify, - cert=cert, - http2=http2, - limits=limits, - trust_env=trust_env, - proxy=proxy, - ) - - def _transport_for_url(self, url: URL) -> AsyncBaseTransport: - """ - Returns the transport instance that should be used for a given URL. - This will either be the standard connection pool, or a proxy. - """ - for pattern, transport in self._mounts.items(): - if pattern.matches(url): - return self._transport if transport is None else transport - - return self._transport - - async def request( - self, - method: str, - url: URLTypes, - *, - content: typing.Optional[RequestContent] = None, - data: typing.Optional[RequestData] = None, - files: typing.Optional[RequestFiles] = None, - json: typing.Optional[typing.Any] = None, - params: typing.Optional[QueryParamTypes] = None, - headers: typing.Optional[HeaderTypes] = None, - cookies: typing.Optional[CookieTypes] = None, - auth: typing.Union[AuthTypes, UseClientDefault, None] = USE_CLIENT_DEFAULT, - follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT, - timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - extensions: typing.Optional[RequestExtensions] = None, - ) -> Response: - """ - Build and send a request. - - Equivalent to: - - ```python - request = client.build_request(...) - response = await client.send(request, ...) - ``` - - See `AsyncClient.build_request()`, `AsyncClient.send()` - and [Merging of configuration][0] for how the various parameters - are merged with client-level configuration. 
- - [0]: /advanced/#merging-of-configuration - """ - request = self.build_request( - method=method, - url=url, - content=content, - data=data, - files=files, - json=json, - params=params, - headers=headers, - cookies=cookies, - timeout=timeout, - extensions=extensions, - ) - return await self.send(request, auth=auth, follow_redirects=follow_redirects) - - @asynccontextmanager - async def stream( - self, - method: str, - url: URLTypes, - *, - content: typing.Optional[RequestContent] = None, - data: typing.Optional[RequestData] = None, - files: typing.Optional[RequestFiles] = None, - json: typing.Optional[typing.Any] = None, - params: typing.Optional[QueryParamTypes] = None, - headers: typing.Optional[HeaderTypes] = None, - cookies: typing.Optional[CookieTypes] = None, - auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT, - timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - extensions: typing.Optional[RequestExtensions] = None, - ) -> typing.AsyncIterator[Response]: - """ - Alternative to `httpx.request()` that streams the response body - instead of loading it into memory at once. - - **Parameters**: See `httpx.request`. - - See also: [Streaming Responses][0] - - [0]: /quickstart#streaming-responses - """ - request = self.build_request( - method=method, - url=url, - content=content, - data=data, - files=files, - json=json, - params=params, - headers=headers, - cookies=cookies, - timeout=timeout, - extensions=extensions, - ) - response = await self.send( - request=request, - auth=auth, - follow_redirects=follow_redirects, - stream=True, - ) - try: - yield response - finally: - await response.aclose() - - async def send( - self, - request: Request, - *, - stream: bool = False, - auth: typing.Union[AuthTypes, UseClientDefault, None] = USE_CLIENT_DEFAULT, - follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT, - ) -> Response: - """ - Send a request. - - The request is sent as-is, unmodified. - - Typically you'll want to build one with `AsyncClient.build_request()` - so that any client-level configuration is merged into the request, - but passing an explicit `httpx.Request()` is supported as well. 
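For the `stream()` helper defined above, a minimal async sketch, assuming only that the placeholder URL is reachable: the response body is consumed chunk by chunk instead of being read into memory at once.

```python
import asyncio
import httpx

async def content_length(url: str) -> int:
    total = 0
    async with httpx.AsyncClient() as client:
        async with client.stream("GET", url) as response:
            # Each chunk is discarded after its size is counted, so memory
            # use stays bounded regardless of the body size.
            async for chunk in response.aiter_bytes():
                total += len(chunk)
    return total

# asyncio.run(content_length("https://example.org"))  # placeholder URL
```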
- - See also: [Request instances][0] - - [0]: /advanced/#request-instances - """ - if self._state == ClientState.CLOSED: - raise RuntimeError("Cannot send a request, as the client has been closed.") - - self._state = ClientState.OPENED - follow_redirects = ( - self.follow_redirects - if isinstance(follow_redirects, UseClientDefault) - else follow_redirects - ) - - auth = self._build_request_auth(request, auth) - - response = await self._send_handling_auth( - request, - auth=auth, - follow_redirects=follow_redirects, - history=[], - ) - try: - if not stream: - await response.aread() - - return response - - except BaseException as exc: # pragma: no cover - await response.aclose() - raise exc - - async def _send_handling_auth( - self, - request: Request, - auth: Auth, - follow_redirects: bool, - history: typing.List[Response], - ) -> Response: - auth_flow = auth.async_auth_flow(request) - try: - request = await auth_flow.__anext__() - - while True: - response = await self._send_handling_redirects( - request, - follow_redirects=follow_redirects, - history=history, - ) - try: - try: - next_request = await auth_flow.asend(response) - except StopAsyncIteration: - return response - - response.history = list(history) - await response.aread() - request = next_request - history.append(response) - - except BaseException as exc: - await response.aclose() - raise exc - finally: - await auth_flow.aclose() - - async def _send_handling_redirects( - self, - request: Request, - follow_redirects: bool, - history: typing.List[Response], - ) -> Response: - while True: - if len(history) > self.max_redirects: - raise TooManyRedirects( - "Exceeded maximum allowed redirects.", request=request - ) - - for hook in self._event_hooks["request"]: - await hook(request) - - response = await self._send_single_request(request) - try: - for hook in self._event_hooks["response"]: - await hook(response) - - response.history = list(history) - - if not response.has_redirect_location: - return response - - request = self._build_redirect_request(request, response) - history = history + [response] - - if follow_redirects: - await response.aread() - else: - response.next_request = request - return response - - except BaseException as exc: - await response.aclose() - raise exc - - async def _send_single_request(self, request: Request) -> Response: - """ - Sends a single request, without handling any redirections. - """ - transport = self._transport_for_url(request.url) - timer = Timer() - await timer.async_start() - - if not isinstance(request.stream, AsyncByteStream): - raise RuntimeError( - "Attempted to send an sync request with an AsyncClient instance." 
- ) - - with request_context(request=request): - response = await transport.handle_async_request(request) - - assert isinstance(response.stream, AsyncByteStream) - response.request = request - response.stream = BoundAsyncStream( - response.stream, response=response, timer=timer - ) - self.cookies.extract_cookies(response) - response.default_encoding = self._default_encoding - - logger.info( - 'HTTP Request: %s %s "%s %d %s"', - request.method, - request.url, - response.http_version, - response.status_code, - response.reason_phrase, - ) - - return response - - async def get( - self, - url: URLTypes, - *, - params: typing.Optional[QueryParamTypes] = None, - headers: typing.Optional[HeaderTypes] = None, - cookies: typing.Optional[CookieTypes] = None, - auth: typing.Union[AuthTypes, UseClientDefault, None] = USE_CLIENT_DEFAULT, - follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT, - timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - extensions: typing.Optional[RequestExtensions] = None, - ) -> Response: - """ - Send a `GET` request. - - **Parameters**: See `httpx.request`. - """ - return await self.request( - "GET", - url, - params=params, - headers=headers, - cookies=cookies, - auth=auth, - follow_redirects=follow_redirects, - timeout=timeout, - extensions=extensions, - ) - - async def options( - self, - url: URLTypes, - *, - params: typing.Optional[QueryParamTypes] = None, - headers: typing.Optional[HeaderTypes] = None, - cookies: typing.Optional[CookieTypes] = None, - auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT, - timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - extensions: typing.Optional[RequestExtensions] = None, - ) -> Response: - """ - Send an `OPTIONS` request. - - **Parameters**: See `httpx.request`. - """ - return await self.request( - "OPTIONS", - url, - params=params, - headers=headers, - cookies=cookies, - auth=auth, - follow_redirects=follow_redirects, - timeout=timeout, - extensions=extensions, - ) - - async def head( - self, - url: URLTypes, - *, - params: typing.Optional[QueryParamTypes] = None, - headers: typing.Optional[HeaderTypes] = None, - cookies: typing.Optional[CookieTypes] = None, - auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT, - timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - extensions: typing.Optional[RequestExtensions] = None, - ) -> Response: - """ - Send a `HEAD` request. - - **Parameters**: See `httpx.request`. 
- """ - return await self.request( - "HEAD", - url, - params=params, - headers=headers, - cookies=cookies, - auth=auth, - follow_redirects=follow_redirects, - timeout=timeout, - extensions=extensions, - ) - - async def post( - self, - url: URLTypes, - *, - content: typing.Optional[RequestContent] = None, - data: typing.Optional[RequestData] = None, - files: typing.Optional[RequestFiles] = None, - json: typing.Optional[typing.Any] = None, - params: typing.Optional[QueryParamTypes] = None, - headers: typing.Optional[HeaderTypes] = None, - cookies: typing.Optional[CookieTypes] = None, - auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT, - timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - extensions: typing.Optional[RequestExtensions] = None, - ) -> Response: - """ - Send a `POST` request. - - **Parameters**: See `httpx.request`. - """ - return await self.request( - "POST", - url, - content=content, - data=data, - files=files, - json=json, - params=params, - headers=headers, - cookies=cookies, - auth=auth, - follow_redirects=follow_redirects, - timeout=timeout, - extensions=extensions, - ) - - async def put( - self, - url: URLTypes, - *, - content: typing.Optional[RequestContent] = None, - data: typing.Optional[RequestData] = None, - files: typing.Optional[RequestFiles] = None, - json: typing.Optional[typing.Any] = None, - params: typing.Optional[QueryParamTypes] = None, - headers: typing.Optional[HeaderTypes] = None, - cookies: typing.Optional[CookieTypes] = None, - auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT, - timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - extensions: typing.Optional[RequestExtensions] = None, - ) -> Response: - """ - Send a `PUT` request. - - **Parameters**: See `httpx.request`. - """ - return await self.request( - "PUT", - url, - content=content, - data=data, - files=files, - json=json, - params=params, - headers=headers, - cookies=cookies, - auth=auth, - follow_redirects=follow_redirects, - timeout=timeout, - extensions=extensions, - ) - - async def patch( - self, - url: URLTypes, - *, - content: typing.Optional[RequestContent] = None, - data: typing.Optional[RequestData] = None, - files: typing.Optional[RequestFiles] = None, - json: typing.Optional[typing.Any] = None, - params: typing.Optional[QueryParamTypes] = None, - headers: typing.Optional[HeaderTypes] = None, - cookies: typing.Optional[CookieTypes] = None, - auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT, - timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - extensions: typing.Optional[RequestExtensions] = None, - ) -> Response: - """ - Send a `PATCH` request. - - **Parameters**: See `httpx.request`. 
- """ - return await self.request( - "PATCH", - url, - content=content, - data=data, - files=files, - json=json, - params=params, - headers=headers, - cookies=cookies, - auth=auth, - follow_redirects=follow_redirects, - timeout=timeout, - extensions=extensions, - ) - - async def delete( - self, - url: URLTypes, - *, - params: typing.Optional[QueryParamTypes] = None, - headers: typing.Optional[HeaderTypes] = None, - cookies: typing.Optional[CookieTypes] = None, - auth: typing.Union[AuthTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - follow_redirects: typing.Union[bool, UseClientDefault] = USE_CLIENT_DEFAULT, - timeout: typing.Union[TimeoutTypes, UseClientDefault] = USE_CLIENT_DEFAULT, - extensions: typing.Optional[RequestExtensions] = None, - ) -> Response: - """ - Send a `DELETE` request. - - **Parameters**: See `httpx.request`. - """ - return await self.request( - "DELETE", - url, - params=params, - headers=headers, - cookies=cookies, - auth=auth, - follow_redirects=follow_redirects, - timeout=timeout, - extensions=extensions, - ) - - async def aclose(self) -> None: - """ - Close transport and proxies. - """ - if self._state != ClientState.CLOSED: - self._state = ClientState.CLOSED - - await self._transport.aclose() - for proxy in self._mounts.values(): - if proxy is not None: - await proxy.aclose() - - async def __aenter__(self: U) -> U: - if self._state != ClientState.UNOPENED: - msg = { - ClientState.OPENED: "Cannot open a client instance more than once.", - ClientState.CLOSED: "Cannot reopen a client instance, once it has been closed.", - }[self._state] - raise RuntimeError(msg) - - self._state = ClientState.OPENED - - await self._transport.__aenter__() - for proxy in self._mounts.values(): - if proxy is not None: - await proxy.__aenter__() - return self - - async def __aexit__( - self, - exc_type: typing.Optional[typing.Type[BaseException]] = None, - exc_value: typing.Optional[BaseException] = None, - traceback: typing.Optional[TracebackType] = None, - ) -> None: - self._state = ClientState.CLOSED - - await self._transport.__aexit__(exc_type, exc_value, traceback) - for proxy in self._mounts.values(): - if proxy is not None: - await proxy.__aexit__(exc_type, exc_value, traceback) diff --git a/spaces/Datasculptor/StyleGAN-NADA/op/fused_act.py b/spaces/Datasculptor/StyleGAN-NADA/op/fused_act.py deleted file mode 100644 index 8459d510d7b79684779dfe47f5b46d81c94b4a4d..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/StyleGAN-NADA/op/fused_act.py +++ /dev/null @@ -1,86 +0,0 @@ -import os - -import torch -from torch import nn -from torch.autograd import Function -from torch.utils.cpp_extension import load - - -module_path = os.path.dirname(__file__) -fused = load( - 'fused', - sources=[ - os.path.join(module_path, 'fused_bias_act.cpp'), - os.path.join(module_path, 'fused_bias_act_kernel.cu'), - ], -) - - -class FusedLeakyReLUFunctionBackward(Function): - @staticmethod - def forward(ctx, grad_output, out, negative_slope, scale): - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - empty = grad_output.new_empty(0) - - grad_input = fused.fused_bias_act( - grad_output, empty, out, 3, 1, negative_slope, scale - ) - - dim = [0] - - if grad_input.ndim > 2: - dim += list(range(2, grad_input.ndim)) - - grad_bias = grad_input.sum(dim).detach() - - return grad_input, grad_bias - - @staticmethod - def backward(ctx, gradgrad_input, gradgrad_bias): - out, = ctx.saved_tensors - gradgrad_out = fused.fused_bias_act( - gradgrad_input, gradgrad_bias, 
out, 3, 1, ctx.negative_slope, ctx.scale - ) - - return gradgrad_out, None, None, None - - -class FusedLeakyReLUFunction(Function): - @staticmethod - def forward(ctx, input, bias, negative_slope, scale): - empty = input.new_empty(0) - out = fused.fused_bias_act(input, bias, empty, 3, 0, negative_slope, scale) - ctx.save_for_backward(out) - ctx.negative_slope = negative_slope - ctx.scale = scale - - return out - - @staticmethod - def backward(ctx, grad_output): - out, = ctx.saved_tensors - - grad_input, grad_bias = FusedLeakyReLUFunctionBackward.apply( - grad_output, out, ctx.negative_slope, ctx.scale - ) - - return grad_input, grad_bias, None, None - - -class FusedLeakyReLU(nn.Module): - def __init__(self, channel, negative_slope=0.2, scale=2 ** 0.5): - super().__init__() - - self.bias = nn.Parameter(torch.zeros(channel)) - self.negative_slope = negative_slope - self.scale = scale - - def forward(self, input): - return fused_leaky_relu(input, self.bias, self.negative_slope, self.scale) - - -def fused_leaky_relu(input, bias, negative_slope=0.2, scale=2 ** 0.5): - return FusedLeakyReLUFunction.apply(input, bias, negative_slope, scale) diff --git a/spaces/Demosthene-OR/avr23-cds-translation/tabs/game_tab.py b/spaces/Demosthene-OR/avr23-cds-translation/tabs/game_tab.py deleted file mode 100644 index ee089e60491b6e13904d457d9e30c99b1ac0dc10..0000000000000000000000000000000000000000 --- a/spaces/Demosthene-OR/avr23-cds-translation/tabs/game_tab.py +++ /dev/null @@ -1,214 +0,0 @@ -import streamlit as st -import pandas as pd -import numpy as np -import os -import time -import matplotlib.pyplot as plt -import random -import json -import csv -from extra_streamlit_components import tab_bar, TabBarItemData -import matplotlib.pyplot as plt -from datetime import datetime - -title = "Jouez avec nous !" 
-sidebar_name = "Jeu" - -@st.cache_data -def init_game(): - new = int(time.time()) - sentence_test = pd.read_csv('data/multilingue/sentence_test_extract.csv') - sentence_test = sentence_test[4750:] - # Lisez le contenu du fichier JSON - with open('data/multilingue/lan_to_language.json', 'r') as fichier: - lan_to_language = json.load(fichier) - t_now = time.time() - return sentence_test, lan_to_language, new, t_now - -def find_indice(sent_selected): - l = list(lan_to_language.keys()) - for i in range(len(l)): - if l[i] == sentence_test['lan_code'].iloc[sent_selected]: - return i - -@st.cache_data -def set_game(new): - nb_st = len(sentence_test) - sent_sel = [] - # Utilisez une boucle pour générer 5 nombres aléatoires différents - while len(sent_sel) < 5: - nombre = random.randint(0, nb_st) - if nombre not in sent_sel: - sent_sel.append(nombre) - - rep_possibles=[] - for i in range(5): - rep_possibles.append([find_indice(sent_sel[i])]) - while len(rep_possibles[i]) < 5: - rep_possible = random.randint(0, 95) - if rep_possible not in rep_possibles[i]: - rep_possibles[i].append(rep_possible) - random.shuffle(rep_possibles[i]) - return sent_sel, rep_possibles, new - -def calc_score(n_rep,duration): - - if n_rep==0: return 0 - s1 = n_rep*200 - if duration < 60: - s2 = (60-duration)*200/60 - if n_rep==5: - s2 *= 2.5 - else: - s2 = max(-(duration-60)*100/60,-100) - s = int(s1+s2) - return s - -def read_leaderboard(): - return pd.read_csv('data/game_leaderboard.csv', index_col=False,encoding='utf8') - -def write_leaderboard(lb): - lb['Nom'] = lb['Nom'].astype(str) - lb['Rang'] = lb['Rang'].astype(int) - lb.to_csv(path_or_buf='data/game_leaderboard.csv',columns=['Rang','Nom','Score','Timestamp','BR','Duree'],index=False, header=True,encoding='utf8') - -def display_leaderboard(): - lb = read_leaderboard() - st.write("**Leaderboard :**") - list_champ = """ - | Rang | Nom | Score | - |------|------------|-------|""" - if len(lb)>0: - for i in range(len(lb)): - list_champ += """ - | """+str(lb['Rang'].iloc[i])+""" | """+str(lb['Nom'].iloc[i])[:9]+""" | """+str(lb['Score'].iloc[i])+""" |""" - st.markdown(list_champ, unsafe_allow_html=True ) - return lb - -def write_log(TS,Nom,Score,BR,Duree): - log = pd.read_csv('data/game_log.csv', index_col=False,encoding='utf8') - date_heure = datetime.fromtimestamp(TS) - Date = date_heure.strftime('%Y-%m-%d %H:%M:%S') - log = pd.concat([log, pd.DataFrame(data={'Date':[Date], 'Nom':[Nom],'Score':[Score],'BR':[BR],'Duree':[Duree]})], ignore_index=True) - log.to_csv(path_or_buf='data/game_log.csv',columns=['Date','Nom','Score','BR','Duree'],index=False, header=True,encoding='utf8') - -def display_files(): - log = pd.read_csv('data/game_log.csv', index_col=False,encoding='utf8') - lb = pd.read_csv('data/game_leaderboard.csv', index_col=False,encoding='utf8') - st.dataframe(lb) - st.dataframe(log) - -def run(): - global sentence_test, lan_to_language - - sentence_test, lan_to_language, new, t_debut = init_game() - - st.write("") - st.title(title) - st.write("#### **Etes vous un expert es Langues ?**\n") - st.markdown( - """ - Essayer de trouvez, sans aide, la langue des 5 phrases suivantes. - Attention : Vous devez être le plus rapide possible ! 
- """, unsafe_allow_html=True - ) - st.write("") - player_name = st.text_input("Quel est votre nom ?") - - if player_name == 'display_files': - display_files() - return - - score = 0 - col1, col2 = st.columns([0.7,0.3]) - with col2: - lb = display_leaderboard() - with col1: - sent_sel, rep_possibles, new = set_game(new) - answer = [""] * 5 - l = list(lan_to_language.values()) - for i in range(5): - answer[i] = st.radio("**:blue["+sentence_test['sentence'].iloc[sent_sel[i]]+"]**\n",[l[rep_possibles[i][0]],l[rep_possibles[i][1]],l[rep_possibles[i][2]], \ - l[rep_possibles[i][3]],l[rep_possibles[i][4]]], horizontal=True, key=i) - t_previous_debut = t_debut - t_debut = time.time() - - if st.button(label="Valider", type="primary"): - st.cache_data.clear() - - nb_bonnes_reponses = 0 - for i in range(5): - if lan_to_language[sentence_test['lan_code'].iloc[sent_sel[i]]]==answer[i]: - nb_bonnes_reponses +=1 - - t_fin = time.time() - duration = t_fin - t_previous_debut - - score = calc_score(nb_bonnes_reponses,duration) - write_log(time.time(),player_name,score,nb_bonnes_reponses,duration) - if nb_bonnes_reponses >=4: - st.write(":red[**Félicitations, vous avez "+str(nb_bonnes_reponses)+" bonnes réponses !**]") - st.write(":red[Votre score est de "+str(score)+" points]") - else: - if nb_bonnes_reponses >1 : s="s" - else: s="" - st.write("**:red[Vous avez "+str(nb_bonnes_reponses)+" bonne"+s+" réponse"+s+".]**") - if nb_bonnes_reponses >0 : s="s" - else: s="" - st.write(":red[Votre score est de "+str(score)+" point"+s+"]") - - st.write("Bonne réponses:") - for i in range(5): - st.write("- "+sentence_test['sentence'].iloc[sent_sel[i]]+" -> :blue[**"+lan_to_language[sentence_test['lan_code'].iloc[sent_sel[i]]]+"**]") - new = int(time.time()) - st.button(label="Play again ?", type="primary") - - with col2: - now = time.time() - # Si le score du dernier est plus vieux d'une semaine, il est remplacé par un score + récent - renew_old = ((len(lb)>9) and (lb['Timestamp'].iloc[9])<(now-604800)) - - if (score>0) and ((((score >= lb['Score'].min()) and (len(lb)>9)) or (len(lb)<=9)) or (pd.isna(lb['Score'].min())) or renew_old): - if player_name not in lb['Nom'].tolist(): - if (((score >= lb['Score'].min()) and (len(lb)>9)) or (len(lb)<=9)) or (pd.isna(lb['Score'].min())) : - lb = pd.concat([lb, pd.DataFrame(data={'Nom':[player_name],'Score':[score],'Timestamp':[now],'BR':[nb_bonnes_reponses],'Duree':[duration]})], ignore_index=True) - lb = lb.sort_values(by=['Score', 'Timestamp'], ascending=[False, False]).reset_index() - lb = lb.drop(lb.index[10:]) - else: - st.write('2:',player_name) - lb['Nom'].iloc[9]= player_name - lb['Score'].iloc[9]= score - lb['Timestamp'].iloc[9]=now - lb['BR'].iloc[9]=nb_bonnes_reponses - lb['Duree'].iloc[9]=duration - lb = lb.reset_index() - else: - liste_Nom = lb['Nom'].tolist() - for i,player in enumerate(liste_Nom): - if player == player_name: - if lb['Score'].iloc[i] < score: - lb['Score'].iloc[i] = score - lb['Timestamp'].iloc[i]=now - lb = lb.sort_values(by=['Score', 'Timestamp'], ascending=[False, False]).reset_index() - for i in range(len(lb)): - if (i>0): - if (lb['Score'].iloc[i]==lb['Score'].iloc[i-1]): - lb['Rang'].iloc[i] = lb['Rang'].iloc[i-1] - else: - lb['Rang'].iloc[i] = i+1 - else: - lb['Rang'].iloc[i] = i+1 - if player_name !="": - write_leaderboard(lb) - - - return - - - - - - - - - diff --git a/spaces/Dinoking/Guccio-AI-Designer/models/wrappers.py b/spaces/Dinoking/Guccio-AI-Designer/models/wrappers.py deleted file mode 100644 index 
335321bc67e7b3c7f1e715948e967388c3be05f9..0000000000000000000000000000000000000000 --- a/spaces/Dinoking/Guccio-AI-Designer/models/wrappers.py +++ /dev/null @@ -1,737 +0,0 @@ -# Copyright 2020 Erik Härkönen. All rights reserved. -# This file is licensed to you under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. You may obtain a copy -# of the License at http://www.apache.org/licenses/LICENSE-2.0 - -# Unless required by applicable law or agreed to in writing, software distributed under -# the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR REPRESENTATIONS -# OF ANY KIND, either express or implied. See the License for the specific language -# governing permissions and limitations under the License. - -import torch -import numpy as np -import re -import os -import random -from pathlib import Path -from types import SimpleNamespace -from utils import download_ckpt -from config import Config -from netdissect import proggan, zdataset -from . import biggan -from . import stylegan -from . import stylegan2 -from abc import abstractmethod, ABC as AbstractBaseClass -from functools import singledispatch - -class BaseModel(AbstractBaseClass, torch.nn.Module): - - # Set parameters for identifying model from instance - def __init__(self, model_name, class_name): - super(BaseModel, self).__init__() - self.model_name = model_name - self.outclass = class_name - - # Stop model evaluation as soon as possible after - # given layer has been executed, used to speed up - # netdissect.InstrumentedModel::retain_layer(). - # Validate with tests/partial_forward_test.py - # Can use forward() as fallback at the cost of performance. - @abstractmethod - def partial_forward(self, x, layer_name): - pass - - # Generate batch of latent vectors - @abstractmethod - def sample_latent(self, n_samples=1, seed=None, truncation=None): - pass - - # Maximum number of latents that can be provided - # Typically one for each layer - def get_max_latents(self): - return 1 - - # Name of primary latent space - # E.g. 
StyleGAN can alternatively use W - def latent_space_name(self): - return 'Z' - - def get_latent_shape(self): - return tuple(self.sample_latent(1).shape) - - def get_latent_dims(self): - return np.prod(self.get_latent_shape()) - - def set_output_class(self, new_class): - self.outclass = new_class - - # Map from typical range [-1, 1] to [0, 1] - def forward(self, x): - out = self.model.forward(x) - return 0.5*(out+1) - - # Generate images and convert to numpy - def sample_np(self, z=None, n_samples=1, seed=None): - if z is None: - z = self.sample_latent(n_samples, seed=seed) - elif isinstance(z, list): - z = [torch.tensor(l).to(self.device) if not torch.is_tensor(l) else l for l in z] - elif not torch.is_tensor(z): - z = torch.tensor(z).to(self.device) - img = self.forward(z) - img_np = img.permute(0, 2, 3, 1).cpu().detach().numpy() - return np.clip(img_np, 0.0, 1.0).squeeze() - - # For models that use part of latent as conditioning - def get_conditional_state(self, z): - return None - - # For models that use part of latent as conditioning - def set_conditional_state(self, z, c): - return z - - def named_modules(self, *args, **kwargs): - return self.model.named_modules(*args, **kwargs) - -# PyTorch port of StyleGAN 2 -class StyleGAN2(BaseModel): - def __init__(self, device, class_name, truncation=1.0, use_w=False): - super(StyleGAN2, self).__init__('StyleGAN2', class_name or 'ffhq') - self.device = device - self.truncation = truncation - self.latent_avg = None - self.w_primary = use_w # use W as primary latent space? - - # Image widths - configs = { - # Converted NVIDIA official - 'ffhq': 1024, - 'car': 512, - 'cat': 256, - 'church': 256, - 'horse': 256, - # Tuomas - 'bedrooms': 256, - 'kitchen': 256, - 'places': 256, - 'lookbook': 512 - } - - assert self.outclass in configs, \ - f'Invalid StyleGAN2 class {self.outclass}, should be one of [{", ".join(configs.keys())}]' - - self.resolution = configs[self.outclass] - self.name = f'StyleGAN2-{self.outclass}' - self.has_latent_residual = True - self.load_model() - self.set_noise_seed(0) - - def latent_space_name(self): - return 'W' if self.w_primary else 'Z' - - def use_w(self): - self.w_primary = True - - def use_z(self): - self.w_primary = False - - # URLs created with https://sites.google.com/site/gdocs2direct/ - def download_checkpoint(self, outfile): - checkpoints = { - 'horse': 'https://drive.google.com/uc?export=download&id=18SkqWAkgt0fIwDEf2pqeaenNi4OoCo-0', - 'ffhq': 'https://drive.google.com/uc?export=download&id=1FJRwzAkV-XWbxgTwxEmEACvuqF5DsBiV', - 'church': 'https://drive.google.com/uc?export=download&id=1HFM694112b_im01JT7wop0faftw9ty5g', - 'car': 'https://drive.google.com/uc?export=download&id=1iRoWclWVbDBAy5iXYZrQnKYSbZUqXI6y', - 'cat': 'https://drive.google.com/uc?export=download&id=15vJP8GDr0FlRYpE8gD7CdeEz2mXrQMgN', - 'places': 'https://drive.google.com/uc?export=download&id=1X8-wIH3aYKjgDZt4KMOtQzN1m4AlCVhm', - 'bedrooms': 'https://drive.google.com/uc?export=download&id=1nZTW7mjazs-qPhkmbsOLLA_6qws-eNQu', - 'kitchen': 'https://drive.google.com/uc?export=download&id=15dCpnZ1YLAnETAPB0FGmXwdBclbwMEkZ', - 'lookbook': 'https://drive.google.com/uc?export=download&id=1-F-RMkbHUv_S_k-_olh43mu5rDUMGYKe' - } - - url = checkpoints[self.outclass] - download_ckpt(url, outfile) - - def load_model(self): - checkpoint_root = os.environ.get('GANCONTROL_CHECKPOINT_DIR', Path(__file__).parent / 'checkpoints') - checkpoint = Path(checkpoint_root) / f'stylegan2/stylegan2_{self.outclass}_{self.resolution}.pt' - - self.model = 
stylegan2.Generator(self.resolution, 512, 8).to(self.device) - - if not checkpoint.is_file(): - os.makedirs(checkpoint.parent, exist_ok=True) - self.download_checkpoint(checkpoint) - - ckpt = torch.load(checkpoint) - self.model.load_state_dict(ckpt['g_ema'], strict=False) - self.latent_avg = 0 - - def sample_latent(self, n_samples=1, seed=None, truncation=None): - if seed is None: - seed = np.random.randint(np.iinfo(np.int32).max) # use (reproducible) global rand state - - rng = np.random.RandomState(seed) - z = torch.from_numpy( - rng.standard_normal(512 * n_samples) - .reshape(n_samples, 512)).float().to(self.device) #[N, 512] - - if self.w_primary: - z = self.model.style(z) - - return z - - def get_max_latents(self): - return self.model.n_latent - - def set_output_class(self, new_class): - if self.outclass != new_class: - raise RuntimeError('StyleGAN2: cannot change output class without reloading') - - def forward(self, x): - x = x if isinstance(x, list) else [x] - out, _ = self.model(x, noise=self.noise, - truncation=self.truncation, truncation_latent=self.latent_avg, input_is_w=self.w_primary) - return 0.5*(out+1) - - def partial_forward(self, x, layer_name): - styles = x if isinstance(x, list) else [x] - inject_index = None - noise = self.noise - - if not self.w_primary: - styles = [self.model.style(s) for s in styles] - - if len(styles) == 1: - # One global latent - inject_index = self.model.n_latent - latent = self.model.strided_style(styles[0].unsqueeze(1).repeat(1, inject_index, 1)) # [N, 18, 512] - elif len(styles) == 2: - # Latent mixing with two latents - if inject_index is None: - inject_index = random.randint(1, self.model.n_latent - 1) - - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.model.n_latent - inject_index, 1) - - latent = self.model.strided_style(torch.cat([latent, latent2], 1)) - else: - # One latent per layer - assert len(styles) == self.model.n_latent, f'Expected {self.model.n_latents} latents, got {len(styles)}' - styles = torch.stack(styles, dim=1) # [N, 18, 512] - latent = self.model.strided_style(styles) - - if 'style' in layer_name: - return - - out = self.model.input(latent) - if 'input' == layer_name: - return - - out = self.model.conv1(out, latent[:, 0], noise=noise[0]) - if 'conv1' in layer_name: - return - - skip = self.model.to_rgb1(out, latent[:, 1]) - if 'to_rgb1' in layer_name: - return - - i = 1 - noise_i = 1 - - for conv1, conv2, to_rgb in zip( - self.model.convs[::2], self.model.convs[1::2], self.model.to_rgbs - ): - out = conv1(out, latent[:, i], noise=noise[noise_i]) - if f'convs.{i-1}' in layer_name: - return - - out = conv2(out, latent[:, i + 1], noise=noise[noise_i + 1]) - if f'convs.{i}' in layer_name: - return - - skip = to_rgb(out, latent[:, i + 2], skip) - if f'to_rgbs.{i//2}' in layer_name: - return - - i += 2 - noise_i += 2 - - image = skip - - raise RuntimeError(f'Layer {layer_name} not encountered in partial_forward') - - def set_noise_seed(self, seed): - torch.manual_seed(seed) - self.noise = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=self.device)] - - for i in range(3, self.model.log_size + 1): - for _ in range(2): - self.noise.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=self.device)) - -# PyTorch port of StyleGAN 1 -class StyleGAN(BaseModel): - def __init__(self, device, class_name, truncation=1.0, use_w=False): - super(StyleGAN, self).__init__('StyleGAN', class_name or 'ffhq') - self.device = device - self.w_primary = use_w # is W primary latent space? 
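As a minimal sketch of how the StyleGAN2 wrapper above is typically driven — the import path, the 'ffhq' class, and the availability of the checkpoint download are assumptions, not taken from this repository:

```python
import torch
from models.wrappers import StyleGAN2  # assumed import path

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = StyleGAN2(device, class_name="ffhq", use_w=True)

z = model.sample_latent(n_samples=1, seed=0)  # a W-space latent, since use_w=True
img = model.sample_np(z)                      # numpy array in [0, 1], shape (H, W, 3)
```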
- - configs = { - # Official - 'ffhq': 1024, - 'celebahq': 1024, - 'bedrooms': 256, - 'cars': 512, - 'cats': 256, - - # From https://github.com/justinpinkney/awesome-pretrained-stylegan - 'vases': 1024, - 'wikiart': 512, - 'fireworks': 512, - 'abstract': 512, - 'anime': 512, - 'ukiyo-e': 512, - } - - assert self.outclass in configs, \ - f'Invalid StyleGAN class {self.outclass}, should be one of [{", ".join(configs.keys())}]' - - self.resolution = configs[self.outclass] - self.name = f'StyleGAN-{self.outclass}' - self.has_latent_residual = True - self.load_model() - self.set_noise_seed(0) - - def latent_space_name(self): - return 'W' if self.w_primary else 'Z' - - def use_w(self): - self.w_primary = True - - def use_z(self): - self.w_primary = False - - def load_model(self): - checkpoint_root = os.environ.get('GANCONTROL_CHECKPOINT_DIR', Path(__file__).parent / 'checkpoints') - checkpoint = Path(checkpoint_root) / f'stylegan/stylegan_{self.outclass}_{self.resolution}.pt' - - self.model = stylegan.StyleGAN_G(self.resolution).to(self.device) - - urls_tf = { - 'vases': 'https://thisvesseldoesnotexist.s3-us-west-2.amazonaws.com/public/network-snapshot-008980.pkl', - 'fireworks': 'https://mega.nz/#!7uBHnACY!quIW-pjdDa7NqnZOYh1z5UemWwPOW6HkYSoJ4usCg9U', - 'abstract': 'https://mega.nz/#!vCQyHQZT!zdeOg3VvT4922Z2UfxO51xgAfJD-NAK2nW7H_jMlilU', - 'anime': 'https://mega.nz/#!vawjXISI!F7s13yRicxDA3QYqYDL2kjnc2K7Zk3DwCIYETREmBP4', - 'ukiyo-e': 'https://drive.google.com/uc?id=1CHbJlci9NhVFifNQb3vCGu6zw4eqzvTd', - } - - urls_torch = { - 'celebahq': 'https://drive.google.com/uc?export=download&id=1lGcRwNoXy_uwXkD6sy43aAa-rMHRR7Ad', - 'bedrooms': 'https://drive.google.com/uc?export=download&id=1r0_s83-XK2dKlyY3WjNYsfZ5-fnH8QgI', - 'ffhq': 'https://drive.google.com/uc?export=download&id=1GcxTcLDPYxQqcQjeHpLUutGzwOlXXcks', - 'cars': 'https://drive.google.com/uc?export=download&id=1aaUXHRHjQ9ww91x4mtPZD0w50fsIkXWt', - 'cats': 'https://drive.google.com/uc?export=download&id=1JzA5iiS3qPrztVofQAjbb0N4xKdjOOyV', - 'wikiart': 'https://drive.google.com/uc?export=download&id=1fN3noa7Rsl9slrDXsgZVDsYFxV0O08Vx', - } - - if not checkpoint.is_file(): - os.makedirs(checkpoint.parent, exist_ok=True) - if self.outclass in urls_torch: - download_ckpt(urls_torch[self.outclass], checkpoint) - else: - checkpoint_tf = checkpoint.with_suffix('.pkl') - if not checkpoint_tf.is_file(): - download_ckpt(urls_tf[self.outclass], checkpoint_tf) - print('Converting TensorFlow checkpoint to PyTorch') - self.model.export_from_tf(checkpoint_tf) - - self.model.load_weights(checkpoint) - - def sample_latent(self, n_samples=1, seed=None, truncation=None): - if seed is None: - seed = np.random.randint(np.iinfo(np.int32).max) # use (reproducible) global rand state - - rng = np.random.RandomState(seed) - noise = torch.from_numpy( - rng.standard_normal(512 * n_samples) - .reshape(n_samples, 512)).float().to(self.device) #[N, 512] - - if self.w_primary: - noise = self.model._modules['g_mapping'].forward(noise) - - return noise - - def get_max_latents(self): - return 18 - - def set_output_class(self, new_class): - if self.outclass != new_class: - raise RuntimeError('StyleGAN: cannot change output class without reloading') - - def forward(self, x): - out = self.model.forward(x, latent_is_w=self.w_primary) - return 0.5*(out+1) - - # Run model only until given layer - def partial_forward(self, x, layer_name): - mapping = self.model._modules['g_mapping'] - G = self.model._modules['g_synthesis'] - trunc = self.model._modules.get('truncation', lambda x : x) 
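# (added comment) partial_forward mirrors forward() but stops once layer_name is reached:
# map Z to W if needed, broadcast the latent to all style inputs, apply truncation,
# then run the synthesis blocks until the requested layer is hit.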
- - if not self.w_primary: - x = mapping.forward(x) # handles list inputs - - if isinstance(x, list): - x = torch.stack(x, dim=1) - else: - x = x.unsqueeze(1).expand(-1, 18, -1) - - # Whole mapping - if 'g_mapping' in layer_name: - return - - x = trunc(x) - if layer_name == 'truncation': - return - - # Get names of children - def iterate(m, name, seen): - children = getattr(m, '_modules', []) - if len(children) > 0: - for child_name, module in children.items(): - seen += iterate(module, f'{name}.{child_name}', seen) - return seen - else: - return [name] - - # Generator - batch_size = x.size(0) - for i, (n, m) in enumerate(G.blocks.items()): # InputBlock or GSynthesisBlock - if i == 0: - r = m(x[:, 2*i:2*i+2]) - else: - r = m(r, x[:, 2*i:2*i+2]) - - children = iterate(m, f'g_synthesis.blocks.{n}', []) - for c in children: - if layer_name in c: # substring - return - - raise RuntimeError(f'Layer {layer_name} not encountered in partial_forward') - - - def set_noise_seed(self, seed): - G = self.model._modules['g_synthesis'] - - def for_each_child(this, name, func): - children = getattr(this, '_modules', []) - for child_name, module in children.items(): - for_each_child(module, f'{name}.{child_name}', func) - func(this, name) - - def modify(m, name): - if isinstance(m, stylegan.NoiseLayer): - H, W = [int(s) for s in name.split('.')[2].split('x')] - torch.random.manual_seed(seed) - m.noise = torch.randn(1, 1, H, W, device=self.device, dtype=torch.float32) - #m.noise = 1.0 # should be [N, 1, H, W], but this also works - - for_each_child(G, 'g_synthesis', modify) - -class GANZooModel(BaseModel): - def __init__(self, device, model_name): - super(GANZooModel, self).__init__(model_name, 'default') - self.device = device - self.base_model = torch.hub.load('facebookresearch/pytorch_GAN_zoo:hub', - model_name, pretrained=True, useGPU=(device.type == 'cuda')) - self.model = self.base_model.netG.to(self.device) - self.name = model_name - self.has_latent_residual = False - - def sample_latent(self, n_samples=1, seed=0, truncation=None): - # Uses torch.randn - noise, _ = self.base_model.buildNoiseData(n_samples) - return noise - - # Don't bother for now - def partial_forward(self, x, layer_name): - return self.forward(x) - - def get_conditional_state(self, z): - return z[:, -20:] # last 20 = conditioning - - def set_conditional_state(self, z, c): - z[:, -20:] = c - return z - - def forward(self, x): - out = self.base_model.test(x) - return 0.5*(out+1) - - -class ProGAN(BaseModel): - def __init__(self, device, lsun_class=None): - super(ProGAN, self).__init__('ProGAN', lsun_class) - self.device = device - - # These are downloaded by GANDissect - valid_classes = [ 'bedroom', 'churchoutdoor', 'conferenceroom', 'diningroom', 'kitchen', 'livingroom', 'restaurant' ] - assert self.outclass in valid_classes, \ - f'Invalid LSUN class {self.outclass}, should be one of {valid_classes}' - - self.load_model() - self.name = f'ProGAN-{self.outclass}' - self.has_latent_residual = False - - def load_model(self): - checkpoint_root = os.environ.get('GANCONTROL_CHECKPOINT_DIR', Path(__file__).parent / 'checkpoints') - checkpoint = Path(checkpoint_root) / f'progan/{self.outclass}_lsun.pth' - - if not checkpoint.is_file(): - os.makedirs(checkpoint.parent, exist_ok=True) - url = f'http://netdissect.csail.mit.edu/data/ganmodel/karras/{self.outclass}_lsun.pth' - download_ckpt(url, checkpoint) - - self.model = proggan.from_pth_file(str(checkpoint.resolve())).to(self.device) - - def sample_latent(self, n_samples=1, seed=None, 
truncation=None): - if seed is None: - seed = np.random.randint(np.iinfo(np.int32).max) # use (reproducible) global rand state - noise = zdataset.z_sample_for_model(self.model, n_samples, seed=seed)[...] - return noise.to(self.device) - - def forward(self, x): - if isinstance(x, list): - assert len(x) == 1, "ProGAN only supports a single global latent" - x = x[0] - - out = self.model.forward(x) - return 0.5*(out+1) - - # Run model only until given layer - def partial_forward(self, x, layer_name): - assert isinstance(self.model, torch.nn.Sequential), 'Expected sequential model' - - if isinstance(x, list): - assert len(x) == 1, "ProGAN only supports a single global latent" - x = x[0] - - x = x.view(x.shape[0], x.shape[1], 1, 1) - for name, module in self.model._modules.items(): # ordered dict - x = module(x) - if name == layer_name: - return - - raise RuntimeError(f'Layer {layer_name} not encountered in partial_forward') - - -class BigGAN(BaseModel): - def __init__(self, device, resolution, class_name, truncation=1.0): - super(BigGAN, self).__init__(f'BigGAN-{resolution}', class_name) - self.device = device - self.truncation = truncation - self.load_model(f'biggan-deep-{resolution}') - self.set_output_class(class_name or 'husky') - self.name = f'BigGAN-{resolution}-{self.outclass}-t{self.truncation}' - self.has_latent_residual = True - - # Default implementaiton fails without an internet - # connection, even if the model has been cached - def load_model(self, name): - if name not in biggan.model.PRETRAINED_MODEL_ARCHIVE_MAP: - raise RuntimeError('Unknown BigGAN model name', name) - - checkpoint_root = os.environ.get('GANCONTROL_CHECKPOINT_DIR', Path(__file__).parent / 'checkpoints') - model_path = Path(checkpoint_root) / name - - os.makedirs(model_path, exist_ok=True) - - model_file = model_path / biggan.model.WEIGHTS_NAME - config_file = model_path / biggan.model.CONFIG_NAME - model_url = biggan.model.PRETRAINED_MODEL_ARCHIVE_MAP[name] - config_url = biggan.model.PRETRAINED_CONFIG_ARCHIVE_MAP[name] - - for filename, url in ((model_file, model_url), (config_file, config_url)): - if not filename.is_file(): - print('Downloading', url) - with open(filename, 'wb') as f: - if url.startswith("s3://"): - biggan.s3_get(url, f) - else: - biggan.http_get(url, f) - - self.model = biggan.BigGAN.from_pretrained(model_path).to(self.device) - - def sample_latent(self, n_samples=1, truncation=None, seed=None): - if seed is None: - seed = np.random.randint(np.iinfo(np.int32).max) # use (reproducible) global rand state - - noise_vector = biggan.truncated_noise_sample(truncation=truncation or self.truncation, batch_size=n_samples, seed=seed) - noise = torch.from_numpy(noise_vector) #[N, 128] - - return noise.to(self.device) - - # One extra for gen_z - def get_max_latents(self): - return len(self.model.config.layers) + 1 - - def get_conditional_state(self, z): - return self.v_class - - def set_conditional_state(self, z, c): - self.v_class = c - - def is_valid_class(self, class_id): - if isinstance(class_id, int): - return class_id < 1000 - elif isinstance(class_id, str): - return biggan.one_hot_from_names([class_id.replace(' ', '_')]) is not None - else: - raise RuntimeError(f'Unknown class identifier {class_id}') - - def set_output_class(self, class_id): - if isinstance(class_id, int): - self.v_class = torch.from_numpy(biggan.one_hot_from_int([class_id])).to(self.device) - self.outclass = f'class{class_id}' - elif isinstance(class_id, str): - self.outclass = class_id.replace(' ', '_') - self.v_class = 
torch.from_numpy(biggan.one_hot_from_names([class_id])).to(self.device) - else: - raise RuntimeError(f'Unknown class identifier {class_id}') - - def forward(self, x): - # Duplicate along batch dimension - if isinstance(x, list): - c = self.v_class.repeat(x[0].shape[0], 1) - class_vector = len(x)*[c] - else: - class_vector = self.v_class.repeat(x.shape[0], 1) - out = self.model.forward(x, class_vector, self.truncation) # [N, 3, 128, 128], in [-1, 1] - return 0.5*(out+1) - - # Run model only until given layer - # Used to speed up PCA sample collection - def partial_forward(self, x, layer_name): - if layer_name in ['embeddings', 'generator.gen_z']: - n_layers = 0 - elif 'generator.layers' in layer_name: - layer_base = re.match('^generator\.layers\.[0-9]+', layer_name)[0] - n_layers = int(layer_base.split('.')[-1]) + 1 - else: - n_layers = len(self.model.config.layers) - - if not isinstance(x, list): - x = self.model.n_latents*[x] - - if isinstance(self.v_class, list): - labels = [c.repeat(x[0].shape[0], 1) for c in class_label] - embed = [self.model.embeddings(l) for l in labels] - else: - class_label = self.v_class.repeat(x[0].shape[0], 1) - embed = len(x)*[self.model.embeddings(class_label)] - - assert len(x) == self.model.n_latents, f'Expected {self.model.n_latents} latents, got {len(x)}' - assert len(embed) == self.model.n_latents, f'Expected {self.model.n_latents} class vectors, got {len(class_label)}' - - cond_vectors = [torch.cat((z, e), dim=1) for (z, e) in zip(x, embed)] - - # Generator forward - z = self.model.generator.gen_z(cond_vectors[0]) - z = z.view(-1, 4, 4, 16 * self.model.generator.config.channel_width) - z = z.permute(0, 3, 1, 2).contiguous() - - cond_idx = 1 - for i, layer in enumerate(self.model.generator.layers[:n_layers]): - if isinstance(layer, biggan.GenBlock): - z = layer(z, cond_vectors[cond_idx], self.truncation) - cond_idx += 1 - else: - z = layer(z) - - return None - -# Version 1: separate parameters -@singledispatch -def get_model(name, output_class, device, **kwargs): - # Check if optionally provided existing model can be reused - inst = kwargs.get('inst', None) - model = kwargs.get('model', None) - - if inst or model: - cached = model or inst.model - - network_same = (cached.model_name == name) - outclass_same = (cached.outclass == output_class) - can_change_class = ('BigGAN' in name) - - if network_same and (outclass_same or can_change_class): - cached.set_output_class(output_class) - return cached - - if name == 'DCGAN': - import warnings - warnings.filterwarnings("ignore", message="nn.functional.tanh is deprecated") - model = GANZooModel(device, 'DCGAN') - elif name == 'ProGAN': - model = ProGAN(device, output_class) - elif 'BigGAN' in name: - assert '-' in name, 'Please specify BigGAN resolution, e.g. 
BigGAN-512' - model = BigGAN(device, name.split('-')[-1], class_name=output_class) - elif name == 'StyleGAN': - model = StyleGAN(device, class_name=output_class) - elif name == 'StyleGAN2': - model = StyleGAN2(device, class_name=output_class) - else: - raise RuntimeError(f'Unknown model {name}') - - return model - -# Version 2: Config object -@get_model.register(Config) -def _(cfg, device, **kwargs): - kwargs['use_w'] = kwargs.get('use_w', cfg.use_w) # explicit arg can override cfg - return get_model(cfg.model, cfg.output_class, device, **kwargs) - -# Version 1: separate parameters -@singledispatch -def get_instrumented_model(name, output_class, layers, device, **kwargs): - model = get_model(name, output_class, device, **kwargs) - model.eval() - - inst = kwargs.get('inst', None) - if inst: - inst.close() - - if not isinstance(layers, list): - layers = [layers] - - # Verify given layer names - module_names = [name for (name, _) in model.named_modules()] - for layer_name in layers: - if not layer_name in module_names: - print(f"Layer '{layer_name}' not found in model!") - print("Available layers:", '\n'.join(module_names)) - raise RuntimeError(f"Unknown layer '{layer_name}''") - - # Reset StyleGANs to z mode for shape annotation - if hasattr(model, 'use_z'): - model.use_z() - - from netdissect.modelconfig import create_instrumented_model - inst = create_instrumented_model(SimpleNamespace( - model = model, - layers = layers, - cuda = device.type == 'cuda', - gen = True, - latent_shape = model.get_latent_shape() - )) - - if kwargs.get('use_w', False): - model.use_w() - - return inst - -# Version 2: Config object -@get_instrumented_model.register(Config) -def _(cfg, device, **kwargs): - kwargs['use_w'] = kwargs.get('use_w', cfg.use_w) # explicit arg can override cfg - return get_instrumented_model(cfg.model, cfg.output_class, cfg.layer, device, **kwargs) diff --git a/spaces/ECCV2022/PSG/OpenPSG/configs/_base_/models/detr_r50.py b/spaces/ECCV2022/PSG/OpenPSG/configs/_base_/models/detr_r50.py deleted file mode 100644 index b83d7d5e108ff52eb9c2c8701697684e1fd88844..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/PSG/OpenPSG/configs/_base_/models/detr_r50.py +++ /dev/null @@ -1,64 +0,0 @@ -model = dict( - type='DETR', - backbone=dict(type='ResNet', - depth=50, - num_stages=4, - out_indices=(3, ), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=False), - norm_eval=True, - style='pytorch', - init_cfg=dict(type='Pretrained', - checkpoint='torchvision://resnet50')), - bbox_head=dict(type='DETRHead', - num_classes=80, - in_channels=2048, - transformer=dict( - type='Transformer', - encoder=dict(type='DetrTransformerEncoder', - num_layers=6, - transformerlayers=dict( - type='BaseTransformerLayer', - attn_cfgs=[ - dict(type='MultiheadAttention', - embed_dims=256, - num_heads=8, - dropout=0.1) - ], - feedforward_channels=2048, - ffn_dropout=0.1, - operation_order=('self_attn', 'norm', - 'ffn', 'norm'))), - decoder=dict( - type='DetrTransformerDecoder', - return_intermediate=True, - num_layers=6, - transformerlayers=dict( - type='DetrTransformerDecoderLayer', - attn_cfgs=dict(type='MultiheadAttention', - embed_dims=256, - num_heads=8, - dropout=0.1), - feedforward_channels=2048, - ffn_dropout=0.1, - operation_order=('self_attn', 'norm', - 'cross_attn', 'norm', 'ffn', - 'norm')), - )), - positional_encoding=dict(type='SinePositionalEncoding', - num_feats=128, - normalize=True), - loss_cls=dict(type='CrossEntropyLoss', - bg_cls_weight=0.1, - use_sigmoid=False, - loss_weight=1.0, - 
class_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=5.0), - loss_iou=dict(type='GIoULoss', loss_weight=2.0)), - # training and testing settings - train_cfg=dict(assigner=dict( - type='HungarianAssigner', - cls_cost=dict(type='ClassificationCost', weight=1.), - reg_cost=dict(type='BBoxL1Cost', weight=5.0, box_format='xywh'), - iou_cost=dict(type='IoUCost', iou_mode='giou', weight=2.0))), - test_cfg=dict(max_per_img=100)) diff --git a/spaces/ECCV2022/bytetrack/yolox/data/datasets/mot.py b/spaces/ECCV2022/bytetrack/yolox/data/datasets/mot.py deleted file mode 100644 index d52febcbbe816bdd3d1e07f2d042e115ae330442..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/yolox/data/datasets/mot.py +++ /dev/null @@ -1,132 +0,0 @@ -import cv2 -import numpy as np -from pycocotools.coco import COCO - -import os - -from ..dataloading import get_yolox_datadir -from .datasets_wrapper import Dataset - - -class MOTDataset(Dataset): - """ - COCO dataset class. - """ - - def __init__( - self, - data_dir=None, - json_file="train_half.json", - name="train", - img_size=(608, 1088), - preproc=None, - ): - """ - COCO dataset initialization. Annotation data are read into memory by COCO API. - Args: - data_dir (str): dataset root directory - json_file (str): COCO json file name - name (str): COCO data name (e.g. 'train2017' or 'val2017') - img_size (int): target image size after pre-processing - preproc: data augmentation strategy - """ - super().__init__(img_size) - if data_dir is None: - data_dir = os.path.join(get_yolox_datadir(), "mot") - self.data_dir = data_dir - self.json_file = json_file - - self.coco = COCO(os.path.join(self.data_dir, "annotations", self.json_file)) - self.ids = self.coco.getImgIds() - self.class_ids = sorted(self.coco.getCatIds()) - cats = self.coco.loadCats(self.coco.getCatIds()) - self._classes = tuple([c["name"] for c in cats]) - self.annotations = self._load_coco_annotations() - self.name = name - self.img_size = img_size - self.preproc = preproc - - def __len__(self): - return len(self.ids) - - def _load_coco_annotations(self): - return [self.load_anno_from_ids(_ids) for _ids in self.ids] - - def load_anno_from_ids(self, id_): - im_ann = self.coco.loadImgs(id_)[0] - width = im_ann["width"] - height = im_ann["height"] - frame_id = im_ann["frame_id"] - video_id = im_ann["video_id"] - anno_ids = self.coco.getAnnIds(imgIds=[int(id_)], iscrowd=False) - annotations = self.coco.loadAnns(anno_ids) - objs = [] - for obj in annotations: - x1 = obj["bbox"][0] - y1 = obj["bbox"][1] - x2 = x1 + obj["bbox"][2] - y2 = y1 + obj["bbox"][3] - if obj["area"] > 0 and x2 >= x1 and y2 >= y1: - obj["clean_bbox"] = [x1, y1, x2, y2] - objs.append(obj) - - num_objs = len(objs) - - res = np.zeros((num_objs, 6)) - - for ix, obj in enumerate(objs): - cls = self.class_ids.index(obj["category_id"]) - res[ix, 0:4] = obj["clean_bbox"] - res[ix, 4] = cls - res[ix, 5] = obj["track_id"] - - file_name = im_ann["file_name"] if "file_name" in im_ann else "{:012}".format(id_) + ".jpg" - img_info = (height, width, frame_id, video_id, file_name) - - del im_ann, annotations - - return (res, img_info, file_name) - - def load_anno(self, index): - return self.annotations[index][0] - - def pull_item(self, index): - id_ = self.ids[index] - - res, img_info, file_name = self.annotations[index] - # load image and preprocess - img_file = os.path.join( - self.data_dir, self.name, file_name - ) - img = cv2.imread(img_file) - assert img is not None - - return img, res.copy(), img_info, np.array([id_]) - - 
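# (added comment) Layout of the label array `res` built in load_anno_from_ids and returned
# by pull_item above: columns 0-3 hold the [x1, y1, x2, y2] box, column 4 the class index,
# and column 5 the track id.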
@Dataset.resize_getitem - def __getitem__(self, index): - """ - One image / label pair for the given index is picked up and pre-processed. - - Args: - index (int): data index - - Returns: - img (numpy.ndarray): pre-processed image - padded_labels (torch.Tensor): pre-processed label data. - The shape is :math:`[max_labels, 5]`. - each label consists of [class, xc, yc, w, h]: - class (float): class index. - xc, yc (float) : center of bbox whose values range from 0 to 1. - w, h (float) : size of bbox whose values range from 0 to 1. - info_img : tuple of h, w, nh, nw, dx, dy. - h, w (int): original shape of the image - nh, nw (int): shape of the resized image without padding - dx, dy (int): pad size - img_id (int): same as the input index. Used for evaluation. - """ - img, target, img_info, img_id = self.pull_item(index) - - if self.preproc is not None: - img, target = self.preproc(img, target, self.input_dim) - return img, target, img_info, img_id diff --git a/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/transformer_decoder/position_encoding.py b/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/transformer_decoder/position_encoding.py deleted file mode 100644 index f32532e070e67b2cd25771aea1ad10e7e5a5dc69..0000000000000000000000000000000000000000 --- a/spaces/EPFL-VILAB/MultiMAE/mask2former/modeling/transformer_decoder/position_encoding.py +++ /dev/null @@ -1,64 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# # Modified by Bowen Cheng from: https://github.com/facebookresearch/detr/blob/master/models/position_encoding.py -""" -Various positional encodings for the transformer. -""" -import math - -import torch -from torch import nn - - -class PositionEmbeddingSine(nn.Module): - """ - This is a more standard version of the position embedding, very similar to the one - used by the Attention is all you need paper, generalized to work on images. 
- """ - - def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None): - super().__init__() - self.num_pos_feats = num_pos_feats - self.temperature = temperature - self.normalize = normalize - if scale is not None and normalize is False: - raise ValueError("normalize should be True if scale is passed") - if scale is None: - scale = 2 * math.pi - self.scale = scale - - def forward(self, x, mask=None): - if mask is None: - mask = torch.zeros((x.size(0), x.size(2), x.size(3)), device=x.device, dtype=torch.bool) - not_mask = ~mask - y_embed = not_mask.cumsum(1, dtype=torch.float32) - x_embed = not_mask.cumsum(2, dtype=torch.float32) - if self.normalize: - eps = 1e-6 - y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale - x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale - - dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device) - dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats) - - pos_x = x_embed[:, :, :, None] / dim_t - pos_y = y_embed[:, :, :, None] / dim_t - pos_x = torch.stack( - (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4 - ).flatten(3) - pos_y = torch.stack( - (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4 - ).flatten(3) - pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2) - return pos - - def __repr__(self, _repr_indent=4): - head = "Positional encoding " + self.__class__.__name__ - body = [ - "num_pos_feats: {}".format(self.num_pos_feats), - "temperature: {}".format(self.temperature), - "normalize: {}".format(self.normalize), - "scale: {}".format(self.scale), - ] - # _repr_indent = 4 - lines = [head] + [" " * _repr_indent + line for line in body] - return "\n".join(lines) diff --git a/spaces/EPFL-VILAB/MultiMAE/utils/log_images.py b/spaces/EPFL-VILAB/MultiMAE/utils/log_images.py deleted file mode 100644 index 826f29cfb5d29d22044d07c14068f1678a5ae003..0000000000000000000000000000000000000000 --- a/spaces/EPFL-VILAB/MultiMAE/utils/log_images.py +++ /dev/null @@ -1,138 +0,0 @@ -# Copyright (c) EPFL VILAB. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -from typing import Dict, List - -import numpy as np -import torch -import torch.nn.functional as F -import torchvision.transforms as transforms -import wandb - -import utils -from utils.datasets_semseg import (ade_classes, hypersim_classes, - nyu_v2_40_classes) - - -def inv_norm(tensor: torch.Tensor) -> torch.Tensor: - """Inverse of the normalization that was done during pre-processing - """ - inv_normalize = transforms.Normalize( - mean=[-0.485 / 0.229, -0.456 / 0.224, -0.406 / 0.225], - std=[1 / 0.229, 1 / 0.224, 1 / 0.225]) - - return inv_normalize(tensor) - - -@torch.no_grad() -def log_semseg_wandb( - images: torch.Tensor, - preds: List[np.ndarray], - gts: List[np.ndarray], - depth_gts: List[np.ndarray], - dataset_name: str = 'ade20k', - image_count=8, - prefix="" - ): - - if dataset_name == 'ade20k': - classes = ade_classes() - elif dataset_name == 'hypersim': - classes = hypersim_classes() - elif dataset_name == 'nyu': - classes = nyu_v2_40_classes() - else: - raise ValueError(f'Dataset {dataset_name} not supported for logging to wandb.') - - class_labels = {i: cls for i, cls in enumerate(classes)} - class_labels[len(classes)] = "void" - class_labels[utils.SEG_IGNORE_INDEX] = "ignore" - - image_count = min(len(images), image_count) - - images = images[:image_count] - preds = preds[:image_count] - gts = gts[:image_count] - depth_gts = depth_gts[:image_count] if len(depth_gts) > 0 else None - - semseg_images = {} - - for i, (image, pred, gt) in enumerate(zip(images, preds, gts)): - image = inv_norm(image) - pred[gt == utils.SEG_IGNORE_INDEX] = utils.SEG_IGNORE_INDEX - - semseg_image = wandb.Image(image, masks={ - "predictions": { - "mask_data": pred, - "class_labels": class_labels, - }, - "ground_truth": { - "mask_data": gt, - "class_labels": class_labels, - } - }) - - semseg_images[f"{prefix}_{i}"] = semseg_image - - if depth_gts is not None: - semseg_images[f"{prefix}_{i}_depth"] = wandb.Image(depth_gts[i]) - - wandb.log(semseg_images, commit=False) - - -@torch.no_grad() -def log_taskonomy_wandb( - preds: Dict[str, torch.Tensor], - gts: Dict[str, torch.Tensor], - image_count=8, - prefix="" - ): - pred_tasks = list(preds.keys()) - gt_tasks = list(gts.keys()) - if 'mask_valid' in gt_tasks: - gt_tasks.remove('mask_valid') - - image_count = min(len(preds[pred_tasks[0]]), image_count) - - all_images = {} - - for i in range(image_count): - - # Log GTs - for task in gt_tasks: - gt_img = gts[task][i] - if task == 'rgb': - gt_img = inv_norm(gt_img) - if gt_img.shape[0] == 1: - gt_img = gt_img[0] - elif gt_img.shape[0] == 2: - gt_img = F.pad(gt_img, (0,0,0,0,0,1), mode='constant', value=0.0) - - gt_img = wandb.Image(gt_img, caption=f'GT #{i}') - key = f'{prefix}_gt_{task}' - if key not in all_images: - all_images[key] = [gt_img] - else: - all_images[key].append(gt_img) - - # Log preds - for task in pred_tasks: - pred_img = preds[task][i] - if task == 'rgb': - pred_img = inv_norm(pred_img) - if pred_img.shape[0] == 1: - pred_img = pred_img[0] - elif pred_img.shape[0] == 2: - pred_img = F.pad(pred_img, (0,0,0,0,0,1), mode='constant', value=0.0) - - pred_img = wandb.Image(pred_img, caption=f'Pred #{i}') - key = f'{prefix}_pred_{task}' - if key not in all_images: - all_images[key] = [pred_img] - else: - all_images[key].append(pred_img) - - wandb.log(all_images, commit=False) diff --git a/spaces/Ekimetrics/Biomap/biomap/utils_gee.py b/spaces/Ekimetrics/Biomap/biomap/utils_gee.py deleted file mode 100644 index 24603ad7c4552526a1159ca9afac5431a6b6efc6..0000000000000000000000000000000000000000 --- 
a/spaces/Ekimetrics/Biomap/biomap/utils_gee.py +++ /dev/null @@ -1,174 +0,0 @@ -import io -import requests -import ee -import numpy as np -import matplotlib.pyplot as plt -import os -from pathlib import Path -import logging -import json - -#Initialize -service_account = os.environ["SERVICE_ACCOUNT_EE"] -private_key = json.loads(os.environ["PRIVATE_KEY_EE"]) - -with open(os.path.join(os.path.dirname(__file__), '.private-key-2.json'), "w") as ipt: - json.dump(private_key, ipt) - -credentials = ee.ServiceAccountCredentials(service_account, os.path.join(os.path.dirname(__file__), '.private-key-2.json')) -ee.Initialize(credentials) - -def get_image(location, d1, d2): - logging.info(f"getting image for {d1} to {d2} at location {location}") - img = extract_img(location, d1, d2) - - img_test = transform_ee_img( - img, max=0.3 - ) - return img_test - -#delete clouds -def maskS2clouds(image): - qa = image.select('QA60'); - - # // Bits 10 and 11 are clouds and cirrus, respectively. - cloudBitMask = 1 << 10; - cirrusBitMask = 1 << 11; - - # // Both flags should be set to zero, indicating clear conditions. - mask = (qa.bitwiseAnd(cloudBitMask).eq(0))and(qa.bitwiseAnd(cirrusBitMask).eq(0)) - - return image.updateMask(mask).divide(10000); - - -#find ee_img -def extract_ee_img(location,start_date,end_date, width = 0.01 , len = 0.01) : - """Extract the earth engine image - - Args: - location (list[float]): - start_date (str): the start date for finding an image - end_date (str): the end date for finding an image - width (float, optional): _description_. Defaults to 0.01. - len (float, optional): _description_. Defaults to 0.01. - - Returns: - _type_: _description_ - """ - # define the polygone - polygone =[[[float(location[0])-0.01,float(location[1])+0.01], - [float(location[0])-0.01,float(location[1])-0.01], - [float(location[0])+0.01,float(location[1])-0.01], - [float(location[0])+0.01,float(location[1])+0.01], - ]] - - #define the ee geometry - geometry = ee.Geometry.Polygon(polygone, None, False); - - #extract the dataset - dataset = ee.ImageCollection('COPERNICUS/S2_SR_HARMONIZED')\ - .filterDate(start_date, end_date)\ - .filter(ee.Filter.lt('CLOUDY_PIXEL_PERCENTAGE',1))\ - .map(maskS2clouds) - return dataset.mean(), geometry - - - -# Get URL -def get_url(ee_img, geometry, scale=5): - """Get the url of a dataset and a geometry - - Args: - ee_img (ee.ImageCollection: meta data on the image - geometry (ee.Geometry.Polygon): geometry of the desired landscape - scale (int, optional): _description_. Defaults to 5. 
- - Returns: - str: the url to use to ask the server - """ - region = geometry - - # collectionList = ee_img.toList(ee_img.size()) - # collectionSize = collectionList.size().getInfo() - # for i in xrange(collectionSize): - # ee.batch.Export.image.toDrive( - # image = ee.Image(collectionList.get(i)).clip(rectangle), - # fileNamePrefix = 'foo' + str(i + 1), - # dimensions = '128x128').start() - - url = ee_img.getDownloadURL({ - # 'min': 0.0, - # 'max': 0.3, - 'bands': ['B4', 'B3', 'B2'], - 'region' : region, - 'scale' : scale, - 'format' : 'NPY' - }) - - return url - -def extract_np_from_url(url): - """extract a numpy array based on a url - - Args: - url (str): _description_ - - Returns: - numpyarray: response from earth engine as numpy - """ - #get the response from url - response = requests.get(url) - - #transform it into numpy - data = np.load(io.BytesIO(response.content)) - - #transform numpy of tuples to 3D numpy - temp1 = [] - - for x in data: - temp2 = [] - for y in x : - temp2.append([z for z in y]) - temp1.append(temp2) - - data = np.array(temp1) - return data - -#Fonction globale -def extract_img(location,start_date,end_date, width = 0.01 , len = 0.01,scale=5): - """Extract an image of the landscape at the selected longitude and latitude with the selected width and length - - Args: - location (list[float]): [latitude of the center of the landscape, longitude of the center of the landscape] - start_date (str): the start date - end_date (str): _description_ - width (float, optional): _description_. Defaults to 0.01. - len (float, optional): _description_. Defaults to 0.01. - scale (int, optional): _description_. Defaults to 5. - - Returns: - img: image as numpy array - """ - # reversed longitude latitude - location = (location[1], location[0]) - ee_img, geometry = extract_ee_img(location, width,start_date,end_date , len) - url = get_url(ee_img, geometry, scale) - img = extract_np_from_url(url) - - return img - -# transform img from numpy to PIL -def transform_ee_img(img, min = 0, max=0.3): - """Transform an img from numpy to PIL - - Args: - img (numpy array): the original image as a numpy array - min (int, optional): _description_. Defaults to 0. - max (float, optional): _description_. Defaults to 0.3. - - Returns: - img_test: a PIL image - """ - img=np.minimum(img*255/max,np.ones(img.shape)*255) - img=np.uint8((np.rint(img)).astype(int)) - return img \ No newline at end of file diff --git a/spaces/Enutrof/GenreClassifier/app.py b/spaces/Enutrof/GenreClassifier/app.py deleted file mode 100644 index 0f64cfdfd3d0d26771169b709c16fe2601f14c7b..0000000000000000000000000000000000000000 --- a/spaces/Enutrof/GenreClassifier/app.py +++ /dev/null @@ -1,7 +0,0 @@ -import gradio as gr -from inference import * - -iface = gr.Interface(fn=inference, - inputs=gr.inputs.Audio(source="upload", type="filepath"), - outputs="text") -iface.launch() \ No newline at end of file diff --git a/spaces/EswarBilla/EswarGenAiChatbot/app.py b/spaces/EswarBilla/EswarGenAiChatbot/app.py deleted file mode 100644 index a362dcc7d0ddd1eee86961f1bc3db6d894fbd3d5..0000000000000000000000000000000000000000 --- a/spaces/EswarBilla/EswarGenAiChatbot/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """You are a helpful assistant to answer all user queries. 
-{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/FridaZuley/RVC_HFKawaii/go-applio-manager-recode.bat b/spaces/FridaZuley/RVC_HFKawaii/go-applio-manager-recode.bat deleted file mode 100644 index 91b8acfc0c69a356fd5b1d77650b2cd728b1072b..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/go-applio-manager-recode.bat +++ /dev/null @@ -1,322 +0,0 @@ -@echo off -title Applio Installer - -::: _ _ _____ _ -::: /\ | (_) | __ \ | | -::: / \ _ __ _ __ | |_ ___ | |__) |___ ___ ___ __| | ___ -::: / /\ \ | '_ \| '_ \| | |/ _ \ | _ // _ \/ __/ _ \ / _` |/ _ \ -::: / ____ \| |_) | |_) | | | (_) | | | \ \ __/ (_| (_) | (_| | __/ -::: /_/ \_\ .__/| .__/|_|_|\___/ |_| \_\___|\___\___/ \__,_|\___| -::: | | | | -::: |_| |_| -::: -::: - -setlocal -set "branch=applio-recode" -set "runtime=runtime-recode" -set "repoUrl=https://github.com/IAHispano/Applio-RVC-Fork/archive/refs/heads/%branch%.zip" -set "fixesFolder=fixes" -set "localFixesPy=local_fixes.py" -set "principal=%cd%" -set "URL_BASE=https://huggingface.co/lj1995/VoiceConversionWebUI/resolve/main" -set "URL_EXTRA=https://huggingface.co/IAHispano/applio/resolve/main" - -:menu -for /f "delims=: tokens=*" %%A in ('findstr /b ":::" "%~f0"') do @echo(%%A - -echo [1] Reinstall Applio -echo [2] Update Applio -echo [3] Update Applio + Runtime -echo. - -set /p choice=Select an option: -set choice=%choice: =% - -if "%choice%"=="1" ( - cls - echo Starting Applio Reinstaller... - echo. - goto reinstaller - pause - cls - goto menu - -) - -if "%choice%"=="2" ( - cls - echo Starting Applio Updater... - echo. - goto updater - pause - cls - goto menu -) - -if "%choice%"=="3" ( - cls - echo Updating Applio + Runtime... - echo. - goto updaterRuntime - pause - cls - goto menu - -) - -cls -echo Invalid option. Please enter a number from 1 to 3. -echo. -echo Press 'Enter' to access the main menu... -pause>nul -cls -goto menu - -:reinstaller - -echo WARNING: Remember to install Microsoft C++ Build Tools, Redistributable, Python, and Git before continuing. -echo. -echo Step-by-step guide: https://rentry.org/appliolocal -echo Build Tools: https://aka.ms/vs/17/release/vs_BuildTools.exe -echo Redistributable: https://aka.ms/vs/17/release/vc_redist.x64.exe -echo Git: https://github.com/git-for-windows/git/releases/download/v2.42.0.windows.2/Git-2.42.0.2-64-bit.exe -echo Python: Add this route to the windows enviroment variables the user path variable: %principal%\runtime\Scripts -echo. -pause -cls - -echo Downloading ZIP file... -powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }" -echo. - -echo Extracting ZIP file... -powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }" -echo. 
- -echo Copying folder and file structure from subdirectory to main directory... -robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E -echo. - -echo Deleting contents of subdirectory (files and folders)... -rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q -echo. - -echo Cleaning up... -del "%principal%\repo.zip" -echo. -cls - -echo Proceeding to download the models... -echo. - -echo WARNING: At this point, it's recommended to disable antivirus or firewall, as errors might occur when downloading pretrained models. -pause -cls - -echo Downloading models in the assets folder... -cd "assets" -echo. -echo Downloading the "pretrained" folder... -cd "pretrained" -curl -LJO "%URL_BASE%/pretrained/D32k.pth" -curl -LJO "%URL_BASE%/pretrained/D40k.pth" -curl -LJO "%URL_BASE%/pretrained/D48k.pth" -curl -LJO "%URL_BASE%/pretrained/G32k.pth" -curl -LJO "%URL_BASE%/pretrained/G40k.pth" -curl -LJO "%URL_BASE%/pretrained/G48k.pth" -curl -LJO "%URL_BASE%/pretrained/f0D32k.pth" -curl -LJO "%URL_BASE%/pretrained/f0D40k.pth" -curl -LJO "%URL_BASE%/pretrained/f0D48k.pth" -curl -LJO "%URL_BASE%/pretrained/f0G32k.pth" -curl -LJO "%URL_BASE%/pretrained/f0G40k.pth" -curl -LJO "%URL_BASE%/pretrained/f0G48k.pth" -cd ".." -echo. -cls - -echo Downloading the "pretrained_v2" folder... -cd "pretrained_v2" -curl -LJO "%URL_BASE%/pretrained_v2/D32k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/D40k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/D48k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/G32k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/G40k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/G48k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0D32k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0D40k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0D48k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0G32k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0G40k.pth" -curl -LJO "%URL_BASE%/pretrained_v2/f0G48k.pth" -cd ".." -echo. -cls - -echo Downloading the hubert_base.pt file... -cd "hubert" -curl -LJO "%URL_BASE%/hubert_base.pt" -cd ".." -echo. -cls - - -echo Downloading the rmvpe.pt file... -cd "rmvpe" -curl -LJO "%URL_BASE%/rmvpe.pt" -echo. -cls - -echo Downloading the rmvpe.onnx file... -curl -LJO "%URL_BASE%/rmvpe.onnx" -cd ".." -cd ".." -echo. -cls - -echo Downloading the rest of the large files - -echo Downloading the "uvr5_weights" folder... -cd "uvr5_weights" -curl -LJO "%URL_BASE%/uvr5_weights/HP2_all_vocals.pth" -curl -LJO "%URL_BASE%/uvr5_weights/HP3_all_vocals.pth" -curl -LJO "%URL_BASE%/uvr5_weights/HP5_only_main_vocal.pth" -curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoAggressive.pth" -curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoDeReverb.pth" -curl -LJO "%URL_BASE%/uvr5_weights/VR-DeEchoNormal.pth" -cd ".." -echo. -cls - -echo Downloading the ffmpeg.exe file... -curl -LJO "%URL_BASE%/ffmpeg.exe" -echo. -cls - -echo Downloading the ffprobe.exe file... -curl -LJO "%URL_BASE%/ffprobe.exe" -echo. -cls - -echo Downloading the runtime.zip file... -curl -LJO "%URL_EXTRA%/%runtime%.zip" -echo. -cls - -echo Extracting the runtime.zip file, this might take a while... -powershell -Command "Expand-Archive -Path '%runtime%.zip' -DestinationPath '.'" -del %runtime%.zip -echo. -cls - -echo Downloads completed! -echo. - -echo Checking if the local_fixes.py file exists in the Fixes folder... -if exist "%fixesFolder%\%localFixesPy%" ( - echo Running the file... - runtime\python.exe "%fixesFolder%\%localFixesPy%" -) else ( - echo The "%localFixesPy%" file was not found in the "Fixes" folder. -) -echo. - -echo Fixes Applied! -echo. 
- -echo Applio has been reinstalled! -echo. -echo Press 'Enter' to access the main menu... -pause>nul -cls -goto menu - - -:updater - -echo Downloading the ZIP file... -powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }" -echo. - -echo Extracting ZIP file... -powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }" -echo. - -echo Copying folder and file structure from subdirectory to main directory... -robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E -echo. - -echo Deleting contents of the subdirectory (files and folders)... -rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q -echo. - -echo Cleaning up... -del "%principal%\repo.zip" -echo. -cls - -echo Verifying if the local_fixes.py file exists in the Fixes folder... -if exist "%fixesFolder%\%localFixesPy%" ( - echo Running the file... - runtime\python.exe "%fixesFolder%\%localFixesPy%" -) else ( - echo The file "%localFixesPy%" was not found in the "Fixes" folder. -) -echo. - -echo Applio has been updated! -echo. -echo Press 'Enter' to access the main menu... -pause>nul -cls -goto menu - - -:updaterRuntime - -echo Downloading the ZIP file... -powershell -command "& { Invoke-WebRequest -Uri '%repoUrl%' -OutFile '%principal%\repo.zip' }" -echo. - -echo Extracting ZIP file... -powershell -command "& { Add-Type -AssemblyName System.IO.Compression.FileSystem ; [System.IO.Compression.ZipFile]::ExtractToDirectory('%principal%\repo.zip', '%principal%') }" -echo. - -echo Copying folder and file structure from subdirectory to main directory... -robocopy "%principal%\Applio-RVC-Fork-%branch%" "%principal%" /E -echo. - -echo Deleting contents of the subdirectory (files and folders)... -rmdir "%principal%\Applio-RVC-Fork-%branch%" /S /Q -echo. - -echo Cleaning up... -del "%principal%\repo.zip" -echo. -cls - -echo Downloading the runtime.zip file... -curl -LJO "%URL_EXTRA%/%runtime%.zip" -echo. -cls -echo Extracting the runtime.zip file, this might take a while... -powershell -Command "Expand-Archive -Path '%runtime%.zip' -DestinationPath '.'" -del runtime.zip -echo. -cls - -echo Verifying if the local_fixes.py file exists in the Fixes folder... -if exist "%fixesFolder%\%localFixesPy%" ( - echo Running the file... - runtime\python.exe "%fixesFolder%\%localFixesPy%" -) else ( - echo The file "%localFixesPy%" was not found in the "Fixes" folder. -) -echo. - -echo Applio has been updated! -echo. -echo Press 'Enter' to access the main menu... 
-pause>nul -cls -goto menu diff --git a/spaces/GTKJF/SFE/README.md b/spaces/GTKJF/SFE/README.md deleted file mode 100644 index cae22b107c940df82ed5e79ecffff21e9534e426..0000000000000000000000000000000000000000 --- a/spaces/GTKJF/SFE/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Panel Template -emoji: 📈 -colorFrom: gray -colorTo: green -sdk: docker -pinned: false -duplicated_from: Panel-Org/panel-template -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/respace.py b/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/respace.py deleted file mode 100644 index fa0e3972184f83a3bea359f25f53a9e69d691d3a..0000000000000000000000000000000000000000 --- a/spaces/GastonMazzei/escher-inpaint-project/glide_text2im/respace.py +++ /dev/null @@ -1,117 +0,0 @@ -""" -Utilities for changing sampling schedules of a trained model. - -Simplified from: https://github.com/openai/guided-diffusion/blob/main/guided_diffusion/respace.py -""" - -import numpy as np -import torch as th - -from .gaussian_diffusion import GaussianDiffusion - - -def space_timesteps(num_timesteps, section_counts): - """ - Create a list of timesteps to use from an original diffusion process, - given the number of timesteps we want to take from equally-sized portions - of the original process. - - For example, if there's 300 timesteps and the section counts are [10,15,20] - then the first 100 timesteps are strided to be 10 timesteps, the second 100 - are strided to be 15 timesteps, and the final 100 are strided to be 20. - - :param num_timesteps: the number of diffusion steps in the original - process to divide up. - :param section_counts: either a list of numbers, or a string containing - comma-separated numbers, indicating the step count - per section. As a special case, use "ddimN" where N - is a number of steps to use the striding from the - DDIM paper. - :return: a set of diffusion steps from the original process to use. - """ - if isinstance(section_counts, str): - if section_counts.startswith("ddim"): - desired_count = int(section_counts[len("ddim") :]) - for i in range(1, num_timesteps): - if len(range(0, num_timesteps, i)) == desired_count: - return set(range(0, num_timesteps, i)) - raise ValueError(f"cannot create exactly {num_timesteps} steps with an integer stride") - elif section_counts == "fast27": - steps = space_timesteps(num_timesteps, "10,10,3,2,2") - # Help reduce DDIM artifacts from noisiest timesteps. - steps.remove(num_timesteps - 1) - steps.add(num_timesteps - 3) - return steps - section_counts = [int(x) for x in section_counts.split(",")] - size_per = num_timesteps // len(section_counts) - extra = num_timesteps % len(section_counts) - start_idx = 0 - all_steps = [] - for i, section_count in enumerate(section_counts): - size = size_per + (1 if i < extra else 0) - if size < section_count: - raise ValueError(f"cannot divide section of {size} steps into {section_count}") - if section_count <= 1: - frac_stride = 1 - else: - frac_stride = (size - 1) / (section_count - 1) - cur_idx = 0.0 - taken_steps = [] - for _ in range(section_count): - taken_steps.append(start_idx + round(cur_idx)) - cur_idx += frac_stride - all_steps += taken_steps - start_idx += size - return set(all_steps) - - -class SpacedDiffusion(GaussianDiffusion): - """ - A diffusion process which can skip steps in a base diffusion process. 
- - :param use_timesteps: a collection (sequence or set) of timesteps from the - original diffusion process to retain. - :param kwargs: the kwargs to create the base diffusion process. - """ - - def __init__(self, use_timesteps, **kwargs): - self.use_timesteps = set(use_timesteps) - self.timestep_map = [] - self.original_num_steps = len(kwargs["betas"]) - - base_diffusion = GaussianDiffusion(**kwargs) # pylint: disable=missing-kwoa - last_alpha_cumprod = 1.0 - new_betas = [] - for i, alpha_cumprod in enumerate(base_diffusion.alphas_cumprod): - if i in self.use_timesteps: - new_betas.append(1 - alpha_cumprod / last_alpha_cumprod) - last_alpha_cumprod = alpha_cumprod - self.timestep_map.append(i) - kwargs["betas"] = np.array(new_betas) - super().__init__(**kwargs) - - def p_mean_variance(self, model, *args, **kwargs): - return super().p_mean_variance(self._wrap_model(model), *args, **kwargs) - - def condition_mean(self, cond_fn, *args, **kwargs): - return super().condition_mean(self._wrap_model(cond_fn), *args, **kwargs) - - def condition_score(self, cond_fn, *args, **kwargs): - return super().condition_score(self._wrap_model(cond_fn), *args, **kwargs) - - def _wrap_model(self, model): - if isinstance(model, _WrappedModel): - return model - return _WrappedModel(model, self.timestep_map, self.original_num_steps) - - -class _WrappedModel: - def __init__(self, model, timestep_map, original_num_steps): - self.model = model - self.timestep_map = timestep_map - self.original_num_steps = original_num_steps - - def __call__(self, x, ts, **kwargs): - map_tensor = th.tensor(self.timestep_map, device=ts.device, dtype=ts.dtype) - new_ts = map_tensor[ts] - return self.model(x, new_ts, **kwargs) diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/colored_cylinder_in_square.py b/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/colored_cylinder_in_square.py deleted file mode 100644 index be3f01bea7c5d8e3f302d9d92ec0c6193612d78e..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/generated_tasks/colored_cylinder_in_square.py +++ /dev/null @@ -1,44 +0,0 @@ -import numpy as np -from cliport.tasks.task import Task -from cliport.utils import utils - -class ColoredCylinderInSquare(Task): - """Pick up five differently colored cylinder blocks and arrange them inside the square template on the tabletop. Each block should be placed along the corresponding color edge: red, blue, green, yellow, and orange.""" - - def __init__(self): - super().__init__() - self.max_steps = 20 - self.lang_template = "arrange the {color} cylinder along the {color} edge" - self.task_completed_desc = "done arranging cylinders." - self.additional_reset() - - def reset(self, env): - super().reset(env) - - # Add square template. - square_size = (0.3, 0.3, 0.005) # x, y, z dimensions for the asset size - square_pose = self.get_random_pose(env, square_size) - square_urdf = 'square/square-template.urdf' - env.add_object(square_urdf, square_pose, 'fixed') - - # Cylinder colors. - colors = ['red', 'blue', 'green', 'yellow', 'orange'] - - # Add cylinders. - cylinder_size = (0.04, 0.04, 0.08) # x, y, z dimensions for the asset size - cylinder_urdf = 'cylinder/cylinder-template.urdf' - cylinders = [] - for color in colors: - cylinder_pose = self.get_random_pose(env, cylinder_size) - cylinder_id = env.add_object(cylinder_urdf, cylinder_pose, color=utils.COLORS[color]) - cylinders.append(cylinder_id) - - # Associate placement locations for goals. 
- place_pos = [(0.1, 0, 0.04), (-0.1, 0, 0.04), (0, 0.1, 0.04), (0, -0.1, 0.04), (0, 0, 0.04)] - targs = [(utils.apply(square_pose, i), square_pose[1]) for i in place_pos] - - # Goal: each cylinder is placed along the corresponding color edge. - for i, cylinder in enumerate(cylinders): - self.add_goal(objs=[cylinder], matches=np.ones((1, 1)), targ_poses=[targs[i]], replace=False, - rotations=True, metric='pose', params=None, step_max_reward=1 / 5, - language_goal=self.lang_template.format(color=colors[i])) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/free_anchor/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/free_anchor/README.md deleted file mode 100644 index 6d6474c90f1e76f80a0043d35897133ef604ce0a..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/free_anchor/README.md +++ /dev/null @@ -1,27 +0,0 @@ -# FreeAnchor: Learning to Match Anchors for Visual Object Detection - -## Introduction - -[ALGORITHM] - -```latex -@inproceedings{zhang2019freeanchor, - title = {{FreeAnchor}: Learning to Match Anchors for Visual Object Detection}, - author = {Zhang, Xiaosong and Wan, Fang and Liu, Chang and Ji, Rongrong and Ye, Qixiang}, - booktitle = {Neural Information Processing Systems}, - year = {2019} -} -``` - -## Results and Models - -| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download | -|:--------:|:-------:|:-------:|:--------:|:--------------:|:------:|:------:|:--------:| -| R-50 | pytorch | 1x | 4.9 | 18.4 | 38.7 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/free_anchor/retinanet_free_anchor_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/free_anchor/retinanet_free_anchor_r50_fpn_1x_coco/retinanet_free_anchor_r50_fpn_1x_coco_20200130-0f67375f.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/free_anchor/retinanet_free_anchor_r50_fpn_1x_coco/retinanet_free_anchor_r50_fpn_1x_coco_20200130_095625.log.json) | -| R-101 | pytorch | 1x | 6.8 | 14.9 | 40.3 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/free_anchor/retinanet_free_anchor_r101_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/free_anchor/retinanet_free_anchor_r101_fpn_1x_coco/retinanet_free_anchor_r101_fpn_1x_coco_20200130-358324e6.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/free_anchor/retinanet_free_anchor_r101_fpn_1x_coco/retinanet_free_anchor_r101_fpn_1x_coco_20200130_100723.log.json) | -| X-101-32x4d | pytorch | 1x | 8.1 | 11.1 | 41.9 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/free_anchor/retinanet_free_anchor_x101_32x4d_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/free_anchor/retinanet_free_anchor_x101_32x4d_fpn_1x_coco/retinanet_free_anchor_x101_32x4d_fpn_1x_coco_20200130-d4846968.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/free_anchor/retinanet_free_anchor_x101_32x4d_fpn_1x_coco/retinanet_free_anchor_x101_32x4d_fpn_1x_coco_20200130_095627.log.json) | - -**Notes:** - -- We use 8 GPUs with 2 images/GPU. -- For more settings and models, please refer to the [official repo](https://github.com/zhangxiaosong18/FreeAnchor). 
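For reference, a minimal single-GPU training sketch using the mmdetection Python API (not part of the original notes; it assumes an mmdet 2.x / mmcv installation with COCO arranged in the standard `data/coco` layout — the 8-GPU numbers above correspond to the distributed `tools/dist_train.sh` entry point instead):

```python
# Hedged sketch: train the FreeAnchor RetinaNet config on one GPU with mmdet 2.x.
from mmcv import Config
from mmdet.datasets import build_dataset
from mmdet.models import build_detector
from mmdet.apis import train_detector

cfg = Config.fromfile('configs/free_anchor/retinanet_free_anchor_r50_fpn_1x_coco.py')
cfg.work_dir = './work_dirs/retinanet_free_anchor_r50_fpn_1x_coco'  # assumed output dir
cfg.gpu_ids = range(1)

model = build_detector(cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg'))
datasets = [build_dataset(cfg.data.train)]
train_detector(model, datasets, cfg, distributed=False, validate=True)
```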
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn/mask_rcnn_r50_fpn_gn-all_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn/mask_rcnn_r50_fpn_gn-all_2x_coco.py deleted file mode 100644 index 9c85d26d2372ad1ab5490b4ec93dd7484dc9f6f0..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/gn/mask_rcnn_r50_fpn_gn-all_2x_coco.py +++ /dev/null @@ -1,46 +0,0 @@ -_base_ = '../mask_rcnn/mask_rcnn_r50_fpn_1x_coco.py' -norm_cfg = dict(type='GN', num_groups=32, requires_grad=True) -model = dict( - pretrained='open-mmlab://detectron/resnet50_gn', - backbone=dict(norm_cfg=norm_cfg), - neck=dict(norm_cfg=norm_cfg), - roi_head=dict( - bbox_head=dict( - type='Shared4Conv1FCBBoxHead', - conv_out_channels=256, - norm_cfg=norm_cfg), - mask_head=dict(norm_cfg=norm_cfg))) -img_norm_cfg = dict( - mean=[103.530, 116.280, 123.675], std=[1.0, 1.0, 1.0], to_rgb=False) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -# learning policy -lr_config = dict(step=[16, 22]) -runner = dict(type='EpochBasedRunner', max_epochs=24) diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/grids/diffusion/_explorers.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/grids/diffusion/_explorers.py deleted file mode 100644 index 0bf4ca57b63f5f9308bd1178ddbde5d8f06748e5..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/grids/diffusion/_explorers.py +++ /dev/null @@ -1,66 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import treetable as tt - -from .._base_explorers import BaseExplorer - - -class DiffusionExplorer(BaseExplorer): - eval_metrics = ["sisnr", "visqol"] - - def stages(self): - return ["train", "valid", "valid_ema", "evaluate", "evaluate_ema"] - - def get_grid_meta(self): - """Returns the list of Meta information to display for each XP/job. - """ - return [ - tt.leaf("index", align=">"), - tt.leaf("name", wrap=140), - tt.leaf("state"), - tt.leaf("sig", align=">"), - ] - - def get_grid_metrics(self): - """Return the metrics that should be displayed in the tracking table. 
- """ - return [ - tt.group( - "train", - [ - tt.leaf("epoch"), - tt.leaf("loss", ".3%"), - ], - align=">", - ), - tt.group( - "valid", - [ - tt.leaf("loss", ".3%"), - # tt.leaf("loss_0", ".3%"), - ], - align=">", - ), - tt.group( - "valid_ema", - [ - tt.leaf("loss", ".3%"), - # tt.leaf("loss_0", ".3%"), - ], - align=">", - ), - tt.group( - "evaluate", [tt.leaf("rvm", ".4f"), tt.leaf("rvm_0", ".4f"), - tt.leaf("rvm_1", ".4f"), tt.leaf("rvm_2", ".4f"), - tt.leaf("rvm_3", ".4f"), ], align=">" - ), - tt.group( - "evaluate_ema", [tt.leaf("rvm", ".4f"), tt.leaf("rvm_0", ".4f"), - tt.leaf("rvm_1", ".4f"), tt.leaf("rvm_2", ".4f"), - tt.leaf("rvm_3", ".4f")], align=">" - ), - ] diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/grids/musicgen/_explorers.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/grids/musicgen/_explorers.py deleted file mode 100644 index 334836b72559a120feb8a15eef3fe96ce88a4edb..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/grids/musicgen/_explorers.py +++ /dev/null @@ -1,93 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -import treetable as tt - -from .._base_explorers import BaseExplorer - - -class LMExplorer(BaseExplorer): - eval_metrics: tp.List[str] = [] - - def stages(self) -> tp.List[str]: - return ['train', 'valid'] - - def get_grid_metrics(self): - """Return the metrics that should be displayed in the tracking table.""" - return [ - tt.group( - 'train', - [ - tt.leaf('epoch'), - tt.leaf('duration', '.1f'), # duration in minutes - tt.leaf('ping'), - tt.leaf('ce', '.4f'), # cross entropy - tt.leaf("ppl", '.3f'), # perplexity - ], - align='>', - ), - tt.group( - 'valid', - [ - tt.leaf('ce', '.4f'), - tt.leaf('ppl', '.3f'), - tt.leaf('best_ppl', '.3f'), - ], - align='>', - ), - ] - - def process_sheep(self, sheep, history): - parts = super().process_sheep(sheep, history) - - track_by = {'ppl': 'lower'} # values should be in ['lower', 'higher'] - best_metrics = {k: (1 if v == 'lower' else -1) * float('inf') for k, v in track_by.items()} - - def comparator(mode, a, b): - return a < b if mode == 'lower' else a > b - - for metrics in history: - for key, sub in metrics.items(): - for metric in track_by: - # for the validation set, keep track of best metrics (ppl in this example) - # this is so we can conveniently compare metrics between runs in the grid - if key == 'valid' and metric in sub and comparator( - track_by[metric], sub[metric], best_metrics[metric] - ): - best_metrics[metric] = sub[metric] - - if 'valid' in parts: - parts['valid'].update({f'best_{k}': v for k, v in best_metrics.items()}) - return parts - - -class GenerationEvalExplorer(BaseExplorer): - eval_metrics: tp.List[str] = [] - - def stages(self) -> tp.List[str]: - return ['evaluate'] - - def get_grid_metrics(self): - """Return the metrics that should be displayed in the tracking table.""" - return [ - tt.group( - 'evaluate', - [ - tt.leaf('epoch', '.3f'), - tt.leaf('duration', '.1f'), - tt.leaf('ping'), - tt.leaf('ce', '.4f'), - tt.leaf('ppl', '.3f'), - tt.leaf('fad', '.3f'), - tt.leaf('kld', '.3f'), - tt.leaf('text_consistency', '.3f'), - tt.leaf('chroma_cosine', '.3f'), - ], - align='>', - ), - ] diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/losses/__init__.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/losses/__init__.py deleted 
file mode 100644 index d55107b2c11822cab749ed3683cf19020802898a..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/losses/__init__.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""Loss related classes and functions. In particular the loss balancer from -EnCodec, and the usual spectral losses.""" - -# flake8: noqa -from .balancer import Balancer -from .sisnr import SISNR -from .stftloss import ( - LogSTFTMagnitudeLoss, - MRSTFTLoss, - SpectralConvergenceLoss, - STFTLoss -) -from .specloss import ( - MelSpectrogramL1Loss, - MultiScaleMelSpectrogramLoss, -) diff --git a/spaces/GroveStreet/GTA_SOVITS/inference/infer_tool.py b/spaces/GroveStreet/GTA_SOVITS/inference/infer_tool.py deleted file mode 100644 index df81d0ffa449baba56be359dd88f02e5ce82f4f8..0000000000000000000000000000000000000000 --- a/spaces/GroveStreet/GTA_SOVITS/inference/infer_tool.py +++ /dev/null @@ -1,550 +0,0 @@ -import hashlib -import io -import json -import logging -import os -import time -from pathlib import Path -from inference import slicer -import gc - -import librosa -import numpy as np -# import onnxruntime -import soundfile -import torch -import torchaudio - -import cluster -import utils -from models import SynthesizerTrn -import pickle - -from diffusion.unit2mel import load_model_vocoder -import yaml - -logging.getLogger('matplotlib').setLevel(logging.WARNING) - - -def read_temp(file_name): - if not os.path.exists(file_name): - with open(file_name, "w") as f: - f.write(json.dumps({"info": "temp_dict"})) - return {} - else: - try: - with open(file_name, "r") as f: - data = f.read() - data_dict = json.loads(data) - if os.path.getsize(file_name) > 50 * 1024 * 1024: - f_name = file_name.replace("\\", "/").split("/")[-1] - print(f"clean {f_name}") - for wav_hash in list(data_dict.keys()): - if int(time.time()) - int(data_dict[wav_hash]["time"]) > 14 * 24 * 3600: - del data_dict[wav_hash] - except Exception as e: - print(e) - print(f"{file_name} error,auto rebuild file") - data_dict = {"info": "temp_dict"} - return data_dict - - -def write_temp(file_name, data): - with open(file_name, "w") as f: - f.write(json.dumps(data)) - - -def timeit(func): - def run(*args, **kwargs): - t = time.time() - res = func(*args, **kwargs) - print('executing \'%s\' costed %.3fs' % (func.__name__, time.time() - t)) - return res - - return run - - -def format_wav(audio_path): - if Path(audio_path).suffix == '.wav': - return - raw_audio, raw_sample_rate = librosa.load(audio_path, mono=True, sr=None) - soundfile.write(Path(audio_path).with_suffix(".wav"), raw_audio, raw_sample_rate) - - -def get_end_file(dir_path, end): - file_lists = [] - for root, dirs, files in os.walk(dir_path): - files = [f for f in files if f[0] != '.'] - dirs[:] = [d for d in dirs if d[0] != '.'] - for f_file in files: - if f_file.endswith(end): - file_lists.append(os.path.join(root, f_file).replace("\\", "/")) - return file_lists - - -def get_md5(content): - return hashlib.new("md5", content).hexdigest() - - -def fill_a_to_b(a, b): - if len(a) < len(b): - for _ in range(0, len(b) - len(a)): - a.append(a[0]) - - -def mkdir(paths: list): - for path in paths: - if not os.path.exists(path): - os.mkdir(path) - - -def pad_array(arr, target_length): - current_length = arr.shape[0] - if current_length >= target_length: - return arr - else: - 
pad_width = target_length - current_length - pad_left = pad_width // 2 - pad_right = pad_width - pad_left - padded_arr = np.pad(arr, (pad_left, pad_right), 'constant', constant_values=(0, 0)) - return padded_arr - - -def split_list_by_n(list_collection, n, pre=0): - for i in range(0, len(list_collection), n): - yield list_collection[i - pre if i - pre >= 0 else i: i + n] - - -class F0FilterException(Exception): - pass - - -class Svc(object): - def __init__(self, net_g_path, config_path, - device=None, - cluster_model_path="logs/44k/kmeans_10000.pt", - nsf_hifigan_enhance=False, - diffusion_model_path="logs/44k/diffusion/model_0.pt", - diffusion_config_path="configs/diffusion.yaml", - shallow_diffusion=False, - only_diffusion=False, - spk_mix_enable=False, - feature_retrieval=False - ): - self.net_g_path = net_g_path - self.only_diffusion = only_diffusion - self.shallow_diffusion = shallow_diffusion - self.feature_retrieval = feature_retrieval - if device is None: - self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu") - else: - self.dev = torch.device(device) - self.net_g_ms = None - if not self.only_diffusion: - self.hps_ms = utils.get_hparams_from_file(config_path) - self.target_sample = self.hps_ms.data.sampling_rate - self.hop_size = self.hps_ms.data.hop_length - self.spk2id = self.hps_ms.spk - try: - self.vol_embedding = self.hps_ms.model.vol_embedding - except Exception as e: - self.vol_embedding = False - try: - self.speech_encoder = self.hps_ms.model.speech_encoder - except Exception as e: - self.speech_encoder = 'vec768l12' - - self.nsf_hifigan_enhance = nsf_hifigan_enhance - if self.shallow_diffusion or self.only_diffusion: - if os.path.exists(diffusion_model_path) and os.path.exists(diffusion_model_path): - self.diffusion_model, self.vocoder, self.diffusion_args = load_model_vocoder(diffusion_model_path, - self.dev, - config_path=diffusion_config_path) - if self.only_diffusion: - self.target_sample = self.diffusion_args.data.sampling_rate - self.hop_size = self.diffusion_args.data.block_size - self.spk2id = self.diffusion_args.spk - self.speech_encoder = self.diffusion_args.data.encoder - if spk_mix_enable: - self.diffusion_model.init_spkmix(len(self.spk2id)) - else: - print("No diffusion model or config found. 
Shallow diffusion mode will False") - self.shallow_diffusion = self.only_diffusion = False - - # load hubert and model - if not self.only_diffusion: - self.load_model(spk_mix_enable) - self.hubert_model = utils.get_speech_encoder(self.speech_encoder, device=self.dev) - self.volume_extractor = utils.Volume_Extractor(self.hop_size) - else: - self.hubert_model = utils.get_speech_encoder(self.diffusion_args.data.encoder, device=self.dev) - self.volume_extractor = utils.Volume_Extractor(self.diffusion_args.data.block_size) - - if os.path.exists(cluster_model_path): - if self.feature_retrieval: - with open(cluster_model_path, "rb") as f: - self.cluster_model = pickle.load(f) - self.big_npy = None - self.now_spk_id = -1 - else: - self.cluster_model = cluster.get_cluster_model(cluster_model_path) - else: - self.feature_retrieval = False - - if self.shallow_diffusion: self.nsf_hifigan_enhance = False - if self.nsf_hifigan_enhance: - from modules.enhancer import Enhancer - self.enhancer = Enhancer('nsf-hifigan', 'pretrain/nsf_hifigan/model', device=self.dev) - - def load_model(self, spk_mix_enable=False): - # get model configuration - self.net_g_ms = SynthesizerTrn( - self.hps_ms.data.filter_length // 2 + 1, - self.hps_ms.train.segment_size // self.hps_ms.data.hop_length, - **self.hps_ms.model) - _ = utils.load_checkpoint(self.net_g_path, self.net_g_ms, None) - if "half" in self.net_g_path and torch.cuda.is_available(): - _ = self.net_g_ms.half().eval().to(self.dev) - else: - _ = self.net_g_ms.eval().to(self.dev) - if spk_mix_enable: - self.net_g_ms.EnableCharacterMix(len(self.spk2id), self.dev) - - def get_unit_f0(self, wav, tran, cluster_infer_ratio, speaker, f0_filter, f0_predictor, cr_threshold=0.05): - - f0_predictor_object = utils.get_f0_predictor(f0_predictor, hop_length=self.hop_size, - sampling_rate=self.target_sample, device=self.dev, - threshold=cr_threshold) - - f0, uv = f0_predictor_object.compute_f0_uv(wav) - if f0_filter and sum(f0) == 0: - raise F0FilterException("No voice detected") - f0 = torch.FloatTensor(f0).to(self.dev) - uv = torch.FloatTensor(uv).to(self.dev) - - f0 = f0 * 2 ** (tran / 12) - f0 = f0.unsqueeze(0) - uv = uv.unsqueeze(0) - - wav16k = librosa.resample(wav, orig_sr=self.target_sample, target_sr=16000) - wav16k = torch.from_numpy(wav16k).to(self.dev) - c = self.hubert_model.encoder(wav16k) - c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[1]) - - if cluster_infer_ratio != 0: - if self.feature_retrieval: - speaker_id = self.spk2id.get(speaker) - if speaker_id is None: - raise RuntimeError("The name you entered is not in the speaker list!") - if not speaker_id and type(speaker) is int: - if len(self.spk2id.__dict__) >= speaker: - speaker_id = speaker - feature_index = self.cluster_model[speaker_id] - feat_np = c.transpose(0, 1).cpu().numpy() - if self.big_npy is None or self.now_spk_id != speaker_id: - self.big_npy = feature_index.reconstruct_n(0, feature_index.ntotal) - self.now_spk_id = speaker_id - print("starting feature retrieval...") - score, ix = feature_index.search(feat_np, k=8) - weight = np.square(1 / score) - weight /= weight.sum(axis=1, keepdims=True) - npy = np.sum(self.big_npy[ix] * np.expand_dims(weight, axis=2), axis=1) - c = cluster_infer_ratio * npy + (1 - cluster_infer_ratio) * feat_np - c = torch.FloatTensor(c).to(self.dev).transpose(0, 1) - print("end feature retrieval...") - else: - cluster_c = cluster.get_cluster_center_result(self.cluster_model, c.cpu().numpy().T, speaker).T - cluster_c = torch.FloatTensor(cluster_c).to(self.dev) - c = 
cluster_infer_ratio * cluster_c + (1 - cluster_infer_ratio) * c - - c = c.unsqueeze(0) - return c, f0, uv - - def infer(self, speaker, tran, raw_path, - cluster_infer_ratio=0, - auto_predict_f0=False, - noice_scale=0.4, - f0_filter=False, - f0_predictor='pm', - enhancer_adaptive_key=0, - cr_threshold=0.05, - k_step=100, - frame=0, - spk_mix=False, - second_encoding=False, - loudness_envelope_adjustment=1 - ): - wav, sr = librosa.load(raw_path, sr=self.target_sample) - if spk_mix: - c, f0, uv = self.get_unit_f0(wav, tran, 0, None, f0_filter, f0_predictor, cr_threshold=cr_threshold) - n_frames = f0.size(1) - sid = speaker[:, frame:frame + n_frames].transpose(0, 1) - else: - speaker_id = self.spk2id.get(speaker) - if not speaker_id and type(speaker) is int: - if len(self.spk2id.__dict__) >= speaker: - speaker_id = speaker - if speaker_id is None: - raise RuntimeError("The name you entered is not in the speaker list!") - sid = torch.LongTensor([int(speaker_id)]).to(self.dev).unsqueeze(0) - c, f0, uv = self.get_unit_f0(wav, tran, cluster_infer_ratio, speaker, f0_filter, f0_predictor, - cr_threshold=cr_threshold) - n_frames = f0.size(1) - if "half" in self.net_g_path and torch.cuda.is_available(): - c = c.half() - with torch.no_grad(): - start = time.time() - vol = None - if not self.only_diffusion: - vol = self.volume_extractor.extract(torch.FloatTensor(wav).to(self.dev)[None, :])[None, :].to( - self.dev) if self.vol_embedding else None - audio, f0 = self.net_g_ms.infer(c, f0=f0, g=sid, uv=uv, predict_f0=auto_predict_f0, - noice_scale=noice_scale, vol=vol) - audio = audio[0, 0].data.float() - audio_mel = self.vocoder.extract(audio[None, :], self.target_sample) if self.shallow_diffusion else None - else: - audio = torch.FloatTensor(wav).to(self.dev) - audio_mel = None - if self.only_diffusion or self.shallow_diffusion: - vol = self.volume_extractor.extract(audio[None, :])[None, :, None].to(self.dev) if vol == None else vol[ - :, - :, - None] - if self.shallow_diffusion and second_encoding: - audio16k = librosa.resample(audio.detach().cpu().numpy(), orig_sr=self.target_sample, - target_sr=16000) - audio16k = torch.from_numpy(audio16k).to(self.dev) - c = self.hubert_model.encoder(audio16k) - c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[1]) - f0 = f0[:, :, None] - c = c.transpose(-1, -2) - audio_mel = self.diffusion_model( - c, - f0, - vol, - spk_id=sid, - spk_mix_dict=None, - gt_spec=audio_mel, - infer=True, - infer_speedup=self.diffusion_args.infer.speedup, - method=self.diffusion_args.infer.method, - k_step=k_step) - audio = self.vocoder.infer(audio_mel, f0).squeeze() - if self.nsf_hifigan_enhance: - audio, _ = self.enhancer.enhance( - audio[None, :], - self.target_sample, - f0[:, :, None], - self.hps_ms.data.hop_length, - adaptive_key=enhancer_adaptive_key) - if loudness_envelope_adjustment != 1: - audio = utils.change_rms(wav, self.target_sample, audio, self.target_sample, - loudness_envelope_adjustment) - use_time = time.time() - start - print("vits use time:{}".format(use_time)) - return audio, audio.shape[-1], n_frames - - def clear_empty(self): - # clean up vram - torch.cuda.empty_cache() - - def unload_model(self): - # unload model - self.net_g_ms = self.net_g_ms.to("cpu") - del self.net_g_ms - if hasattr(self, "enhancer"): - self.enhancer.enhancer = self.enhancer.enhancer.to("cpu") - del self.enhancer.enhancer - del self.enhancer - gc.collect() - - def slice_inference(self, - raw_audio_path, - spk, - tran, - slice_db, - cluster_infer_ratio, - auto_predict_f0, - noice_scale, - 
pad_seconds=0.5, - clip_seconds=0, - lg_num=0, - lgr_num=0.75, - f0_predictor='pm', - enhancer_adaptive_key=0, - cr_threshold=0.05, - k_step=100, - use_spk_mix=False, - second_encoding=False, - loudness_envelope_adjustment=1 - ): - if use_spk_mix: - if len(self.spk2id) == 1: - spk = self.spk2id.keys()[0] - use_spk_mix = False - wav_path = Path(raw_audio_path).with_suffix('.wav') - chunks = slicer.cut(wav_path, db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks) - per_size = int(clip_seconds * audio_sr) - lg_size = int(lg_num * audio_sr) - lg_size_r = int(lg_size * lgr_num) - lg_size_c_l = (lg_size - lg_size_r) // 2 - lg_size_c_r = lg_size - lg_size_r - lg_size_c_l - lg = np.linspace(0, 1, lg_size_r) if lg_size != 0 else 0 - - if use_spk_mix: - assert len(self.spk2id) == len(spk) - audio_length = 0 - for (slice_tag, data) in audio_data: - aud_length = int(np.ceil(len(data) / audio_sr * self.target_sample)) - if slice_tag: - audio_length += aud_length // self.hop_size - continue - if per_size != 0: - datas = split_list_by_n(data, per_size, lg_size) - else: - datas = [data] - for k, dat in enumerate(datas): - pad_len = int(audio_sr * pad_seconds) - per_length = int(np.ceil(len(dat) / audio_sr * self.target_sample)) - a_length = per_length + 2 * pad_len - audio_length += a_length // self.hop_size - audio_length += len(audio_data) - spk_mix_tensor = torch.zeros(size=(len(spk), audio_length)).to(self.dev) - for i in range(len(spk)): - last_end = None - for mix in spk[i]: - if mix[3] < 0. or mix[2] < 0.: - raise RuntimeError("mix value must higer Than zero!") - begin = int(audio_length * mix[0]) - end = int(audio_length * mix[1]) - length = end - begin - if length <= 0: - raise RuntimeError("begin Must lower Than end!") - step = (mix[3] - mix[2]) / length - if last_end is not None: - if last_end != begin: - raise RuntimeError("[i]EndTime Must Equal [i+1]BeginTime!") - last_end = end - if step == 0.: - spk_mix_data = torch.zeros(length).to(self.dev) + mix[2] - else: - spk_mix_data = torch.arange(mix[2], mix[3], step).to(self.dev) - if (len(spk_mix_data) < length): - num_pad = length - len(spk_mix_data) - spk_mix_data = torch.nn.functional.pad(spk_mix_data, [0, num_pad], mode="reflect").to(self.dev) - spk_mix_tensor[i][begin:end] = spk_mix_data[:length] - - spk_mix_ten = torch.sum(spk_mix_tensor, dim=0).unsqueeze(0).to(self.dev) - # spk_mix_tensor[0][spk_mix_ten<0.001] = 1.0 - for i, x in enumerate(spk_mix_ten[0]): - if x == 0.0: - spk_mix_ten[0][i] = 1.0 - spk_mix_tensor[:, i] = 1.0 / len(spk) - spk_mix_tensor = spk_mix_tensor / spk_mix_ten - if not ((torch.sum(spk_mix_tensor, dim=0) - 1.) 
< 0.0001).all(): - raise RuntimeError("sum(spk_mix_tensor) not equal 1") - spk = spk_mix_tensor - - global_frame = 0 - audio = [] - for (slice_tag, data) in audio_data: - print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - # padd - length = int(np.ceil(len(data) / audio_sr * self.target_sample)) - if slice_tag: - print('jump empty segment') - _audio = np.zeros(length) - audio.extend(list(pad_array(_audio, length))) - global_frame += length // self.hop_size - continue - if per_size != 0: - datas = split_list_by_n(data, per_size, lg_size) - else: - datas = [data] - for k, dat in enumerate(datas): - per_length = int(np.ceil(len(dat) / audio_sr * self.target_sample)) if clip_seconds != 0 else length - if clip_seconds != 0: print(f'###=====segment clip start, {round(len(dat) / audio_sr, 3)}s======') - # padd - pad_len = int(audio_sr * pad_seconds) - dat = np.concatenate([np.zeros([pad_len]), dat, np.zeros([pad_len])]) - raw_path = io.BytesIO() - soundfile.write(raw_path, dat, audio_sr, format="wav") - raw_path.seek(0) - out_audio, out_sr, out_frame = self.infer(spk, tran, raw_path, - cluster_infer_ratio=cluster_infer_ratio, - auto_predict_f0=auto_predict_f0, - noice_scale=noice_scale, - f0_predictor=f0_predictor, - enhancer_adaptive_key=enhancer_adaptive_key, - cr_threshold=cr_threshold, - k_step=k_step, - frame=global_frame, - spk_mix=use_spk_mix, - second_encoding=second_encoding, - loudness_envelope_adjustment=loudness_envelope_adjustment - ) - global_frame += out_frame - _audio = out_audio.cpu().numpy() - pad_len = int(self.target_sample * pad_seconds) - _audio = _audio[pad_len:-pad_len] - _audio = pad_array(_audio, per_length) - if lg_size != 0 and k != 0: - lg1 = audio[-(lg_size_r + lg_size_c_r):-lg_size_c_r] if lgr_num != 1 else audio[-lg_size:] - lg2 = _audio[lg_size_c_l:lg_size_c_l + lg_size_r] if lgr_num != 1 else _audio[0:lg_size] - lg_pre = lg1 * (1 - lg) + lg2 * lg - audio = audio[0:-(lg_size_r + lg_size_c_r)] if lgr_num != 1 else audio[0:-lg_size] - audio.extend(lg_pre) - _audio = _audio[lg_size_c_l + lg_size_r:] if lgr_num != 1 else _audio[lg_size:] - audio.extend(list(_audio)) - return np.array(audio) - - -class RealTimeVC: - def __init__(self): - self.last_chunk = None - self.last_o = None - self.chunk_len = 16000 # chunk length - self.pre_len = 3840 # cross fade length, multiples of 640 - - # Input and output are 1-dimensional numpy waveform arrays - - def process(self, svc_model, speaker_id, f_pitch_change, input_wav_path, - cluster_infer_ratio=0, - auto_predict_f0=False, - noice_scale=0.4, - f0_filter=False): - - import maad - audio, sr = torchaudio.load(input_wav_path) - audio = audio.cpu().numpy()[0] - temp_wav = io.BytesIO() - if self.last_chunk is None: - input_wav_path.seek(0) - - audio, sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path, - cluster_infer_ratio=cluster_infer_ratio, - auto_predict_f0=auto_predict_f0, - noice_scale=noice_scale, - f0_filter=f0_filter) - - audio = audio.cpu().numpy() - self.last_chunk = audio[-self.pre_len:] - self.last_o = audio - return audio[-self.chunk_len:] - else: - audio = np.concatenate([self.last_chunk, audio]) - soundfile.write(temp_wav, audio, sr, format="wav") - temp_wav.seek(0) - - audio, sr = svc_model.infer(speaker_id, f_pitch_change, temp_wav, - cluster_infer_ratio=cluster_infer_ratio, - auto_predict_f0=auto_predict_f0, - noice_scale=noice_scale, - f0_filter=f0_filter) - - audio = audio.cpu().numpy() - ret = maad.util.crossfade(self.last_o, audio, self.pre_len) - self.last_chunk = 
audio[-self.pre_len:] - self.last_o = audio - return ret[self.chunk_len:2 * self.chunk_len] diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/ner_zen2_base_cluener.sh b/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/ner_zen2_base_cluener.sh deleted file mode 100644 index 04b97b5fe5123af3170523dfde0ae008a78b2428..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/ner_zen2_base_cluener.sh +++ /dev/null @@ -1,91 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=zen2_base_cluener # create a short name for your job -#SBATCH --nodes=1 # node count -#SBATCH --ntasks=1 # total number of tasks across all nodes -#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks) -#SBATCH --gres=gpu:1 # number of gpus per node -#SBATCH --mail-type=ALL # send email when job begins, ends or failed etc. -#SBATCH -o /cognitive_comp/ganruyi/experiments/ner_finetune/zen2_base_cluener/%x-%j.log # output and error file name (%x=job name, %j=job id) - - -# export CUDA_VISIBLE_DEVICES='2' -export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions - -MODEL_NAME=zen2_base - -TASK=cluener - -ZERO_STAGE=1 -STRATEGY=deepspeed_stage_${ZERO_STAGE} - -ROOT_DIR=/cognitive_comp/ganruyi/experiments/ner_finetune/${MODEL_NAME}_${TASK} -if [ ! -d ${ROOT_DIR} ];then - mkdir -p ${ROOT_DIR} - echo ${ROOT_DIR} created!!!!!!!!!!!!!! -else - echo ${ROOT_DIR} exist!!!!!!!!!!!!!!! -fi - -DATA_DIR=/cognitive_comp/lujunyu/data_zh/NER_Aligned/CLUENER/ -PRETRAINED_MODEL_PATH=/cognitive_comp/ganruyi/hf_models/zen/zh_zen_base_2.0 - -CHECKPOINT_PATH=${ROOT_DIR}/ckpt/ -OUTPUT_PATH=${ROOT_DIR}/predict.json - -DATA_ARGS="\ - --data_dir $DATA_DIR \ - --train_data train.char.txt \ - --valid_data dev.char.txt \ - --test_data dev.char.txt \ - --train_batchsize 32 \ - --valid_batchsize 16 \ - --max_seq_length 256 \ - --task_name cluener \ - " - -MODEL_ARGS="\ - --learning_rate 3e-5 \ - --weight_decay 0.1 \ - --warmup_ratio 0.01 \ - --markup bio \ - --middle_prefix I- \ - " - -MODEL_CHECKPOINT_ARGS="\ - --monitor val_f1 \ - --save_top_k 3 \ - --mode max \ - --every_n_train_steps 100 \ - --save_weights_only True \ - --dirpath $CHECKPOINT_PATH \ - --filename model-{epoch:02d}-{val_f1:.4f} \ - " - -TRAINER_ARGS="\ - --max_epochs 30 \ - --gpus 1 \ - --check_val_every_n_epoch 1 \ - --val_check_interval 100 \ - --default_root_dir $ROOT_DIR \ - " - - -options=" \ - --pretrained_model_path $PRETRAINED_MODEL_PATH \ - --vocab_file $PRETRAINED_MODEL_PATH/vocab.txt \ - --do_lower_case \ - --output_save_path $OUTPUT_PATH \ - $DATA_ARGS \ - $MODEL_ARGS \ - $MODEL_CHECKPOINT_ARGS \ - $TRAINER_ARGS \ -" -SCRIPT_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/zen2_finetune/fengshen_token_level_ft_task.py -/home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - -# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif -# python3 $SCRIPT_PATH $options -# source activate base -# singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options -# /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/byte_level_bpe/README.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/byte_level_bpe/README.md deleted file mode 100644 index 657092660eae42d20f67647417623b8b8cb7b66c..0000000000000000000000000000000000000000 --- 
a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/byte_level_bpe/README.md +++ /dev/null @@ -1,88 +0,0 @@ -# Neural Machine Translation with Byte-Level Subwords - -https://arxiv.org/abs/1909.03341 - -We provide an implementation of byte-level byte-pair encoding (BBPE), taking IWSLT 2017 Fr-En translation as -example. - -## Data -Get data and generate fairseq binary dataset: -```bash -bash ./get_data.sh -``` - -## Model Training -Train Transformer model with Bi-GRU embedding contextualization (implemented in `gru_transformer.py`): -```bash -# VOCAB=bytes -# VOCAB=chars -VOCAB=bbpe2048 -# VOCAB=bpe2048 -# VOCAB=bbpe4096 -# VOCAB=bpe4096 -# VOCAB=bpe16384 -``` -```bash -fairseq-train "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \ - --arch gru_transformer --encoder-layers 2 --decoder-layers 2 --dropout 0.3 --share-all-embeddings \ - --optimizer adam --adam-betas '(0.9, 0.98)' \ - --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --log-format 'simple' --log-interval 100 --save-dir "checkpoints/${VOCAB}" \ - --batch-size 100 --max-update 100000 --update-freq 2 -``` - -## Generation -`fairseq-generate` requires bytes (BBPE) decoder to convert byte-level representation back to characters: -```bash -# BPE=--bpe bytes -# BPE=--bpe characters -BPE=--bpe byte_bpe --sentencepiece-model-path data/spm_bbpe2048.model -# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe2048.model -# BPE=--bpe byte_bpe --sentencepiece-model-path data/spm_bbpe4096.model -# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe4096.model -# BPE=--bpe sentencepiece --sentencepiece-model data/spm_bpe16384.model -``` - -```bash -fairseq-generate "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \ - --source-lang fr --gen-subset test --sacrebleu --path "checkpoints/${VOCAB}/checkpoint_last.pt" \ - --tokenizer moses --moses-target-lang en ${BPE} -``` -When using `fairseq-interactive`, bytes (BBPE) encoder/decoder is required to tokenize input data and detokenize model predictions: -```bash -fairseq-interactive "data/bin_${VOCAB}" --task translation --user-dir examples/byte_level_bpe/gru_transformer \ - --path "checkpoints/${VOCAB}/checkpoint_last.pt" --input data/test.fr --tokenizer moses --moses-source-lang fr \ - --moses-target-lang en ${BPE} --buffer-size 1000 --max-tokens 10000 -``` - -## Results -| Vocabulary | Model | BLEU | -|:-------------:|:-------------:|:-------------:| -| Joint BPE 16k ([Kudo, 2018](https://arxiv.org/abs/1804.10959)) | 512d LSTM 2+2 | 33.81 | -| Joint BPE 16k | Transformer base 2+2 (w/ GRU) | 36.64 (36.72) | -| Joint BPE 4k | Transformer base 2+2 (w/ GRU) | 35.49 (36.10) | -| Joint BBPE 4k | Transformer base 2+2 (w/ GRU) | 35.61 (35.82) | -| Joint BPE 2k | Transformer base 2+2 (w/ GRU) | 34.87 (36.13) | -| Joint BBPE 2k | Transformer base 2+2 (w/ GRU) | 34.98 (35.43) | -| Characters | Transformer base 2+2 (w/ GRU) | 31.78 (33.30) | -| Bytes | Transformer base 2+2 (w/ GRU) | 31.57 (33.62) | - - -## Citation -``` -@misc{wang2019neural, - title={Neural Machine Translation with Byte-Level Subwords}, - author={Changhan Wang and Kyunghyun Cho and Jiatao Gu}, - year={2019}, - eprint={1909.03341}, - archivePrefix={arXiv}, - primaryClass={cs.CL} -} -``` - - -## Contact -Changhan Wang ([changhan@fb.com](mailto:changhan@fb.com)), -Kyunghyun Cho ([kyunghyuncho@fb.com](mailto:kyunghyuncho@fb.com)), -Jiatao Gu 
([jgu@fb.com](mailto:jgu@fb.com)) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/criss/sentence_retrieval/encoder_analysis.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/criss/sentence_retrieval/encoder_analysis.py deleted file mode 100644 index b41bfbe38789ba14e6a5ea938c75d761424c00ab..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/criss/sentence_retrieval/encoder_analysis.py +++ /dev/null @@ -1,92 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -import argparse -import glob - -import numpy as np - - -DIM = 1024 - - -def compute_dist(source_embs, target_embs, k=5, return_sim_mat=False): - target_ids = [tid for tid in target_embs] - source_mat = np.stack(source_embs.values(), axis=0) - normalized_source_mat = source_mat / np.linalg.norm( - source_mat, axis=1, keepdims=True - ) - target_mat = np.stack(target_embs.values(), axis=0) - normalized_target_mat = target_mat / np.linalg.norm( - target_mat, axis=1, keepdims=True - ) - sim_mat = normalized_source_mat.dot(normalized_target_mat.T) - if return_sim_mat: - return sim_mat - neighbors_map = {} - for i, sentence_id in enumerate(source_embs): - idx = np.argsort(sim_mat[i, :])[::-1][:k] - neighbors_map[sentence_id] = [target_ids[tid] for tid in idx] - return neighbors_map - - -def load_embeddings(directory, LANGS): - sentence_embeddings = {} - sentence_texts = {} - for lang in LANGS: - sentence_embeddings[lang] = {} - sentence_texts[lang] = {} - lang_dir = f"{directory}/{lang}" - embedding_files = glob.glob(f"{lang_dir}/all_avg_pool.{lang}.*") - for embed_file in embedding_files: - shard_id = embed_file.split(".")[-1] - embeddings = np.fromfile(embed_file, dtype=np.float32) - num_rows = embeddings.shape[0] // DIM - embeddings = embeddings.reshape((num_rows, DIM)) - - with open(f"{lang_dir}/sentences.{lang}.{shard_id}") as sentence_file: - for idx, line in enumerate(sentence_file): - sentence_id, sentence = line.strip().split("\t") - sentence_texts[lang][sentence_id] = sentence - sentence_embeddings[lang][sentence_id] = embeddings[idx, :] - - return sentence_embeddings, sentence_texts - - -def compute_accuracy(directory, LANGS): - sentence_embeddings, sentence_texts = load_embeddings(directory, LANGS) - - top_1_accuracy = {} - - top1_str = " ".join(LANGS) + "\n" - for source_lang in LANGS: - top_1_accuracy[source_lang] = {} - top1_str += f"{source_lang} " - for target_lang in LANGS: - top1 = 0 - top5 = 0 - neighbors_map = compute_dist( - sentence_embeddings[source_lang], sentence_embeddings[target_lang] - ) - for sentence_id, neighbors in neighbors_map.items(): - if sentence_id == neighbors[0]: - top1 += 1 - if sentence_id in neighbors[:5]: - top5 += 1 - n = len(sentence_embeddings[target_lang]) - top1_str += f"{top1/n} " - top1_str += "\n" - - print(top1_str) - print(top1_str, file=open(f"{directory}/accuracy", "w")) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser(description="Analyze encoder outputs") - parser.add_argument("directory", help="Source language corpus") - parser.add_argument("--langs", help="List of langs") - args = parser.parse_args() - langs = args.langs.split(",") - compute_accuracy(args.directory, langs) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_synthesis/preprocessing/denoiser/__init__.py 
b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_synthesis/preprocessing/denoiser/__init__.py deleted file mode 100644 index 6264236915a7269a4d920ee8213004374dd86a9a..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_synthesis/preprocessing/denoiser/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_text_joint_to_text/models/s2t_dualinputxmtransformer.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_text_joint_to_text/models/s2t_dualinputxmtransformer.py deleted file mode 100644 index 50683e6d7c8c0db5b8f019e5f7f5fb8c6dfd9f66..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/speech_text_joint_to_text/models/s2t_dualinputxmtransformer.py +++ /dev/null @@ -1,585 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import copy - -import torch.nn as nn -from fairseq import checkpoint_utils -from fairseq import utils -from fairseq.data.data_utils import lengths_to_padding_mask -from fairseq.models import ( - register_model, - register_model_architecture, - FairseqEncoder, -) -from fairseq.models.speech_to_text import XMTransformerModel, Wav2VecEncoderWithAdaptor -from fairseq.models.speech_to_text.xm_transformer import ( - set_default_adaptor_args, - set_default_w2v_encoder_args, -) -from fairseq.models.transformer import TransformerEncoder, TransformerDecoder -from fairseq.models.wav2vec import TransformerSentenceEncoderLayer -from fairseq.utils import safe_hasattr - -from .s2t_dualinputtransformer import ( - DualInputS2TTransformerModel, - TransformerMultiInputDecoder, - DualInputEncoder, -) - - -class TransformerSentenceEncoderLayerStd(TransformerSentenceEncoderLayer): - def __init__(self, sent_enc_layer): - super(TransformerSentenceEncoderLayer, self).__init__() - self.embedding_dim = sent_enc_layer.embedding_dim - self.dropout = sent_enc_layer.dropout - self.activation_dropout = sent_enc_layer.activation_dropout - - # Initialize blocks - self.activation_fn = sent_enc_layer.activation_fn - self.self_attn = sent_enc_layer.self_attn - - self.dropout1 = sent_enc_layer.dropout1 - self.dropout2 = sent_enc_layer.dropout2 - self.dropout3 = sent_enc_layer.dropout3 - - self.layer_norm_first = sent_enc_layer.layer_norm_first - - # layer norm associated with the self attention layer - self.self_attn_layer_norm = sent_enc_layer.self_attn_layer_norm - self.fc1 = sent_enc_layer.fc1 - self.fc2 = sent_enc_layer.fc2 - - # layer norm associated with the position wise feed-forward NN - self.final_layer_norm = sent_enc_layer.final_layer_norm - - def forward( - self, - x, - self_attn_mask=None, - self_attn_padding_mask=None, - need_weights=None, - att_args=None, - ): - x, attn = super().forward( - x, self_attn_mask, self_attn_padding_mask, need_weights, att_args - ) - return x - - -# TODO retire SharedEncoder -class SharedEncoder(FairseqEncoder): - def __init__(self, wav2vec_enc, mbart_enc, adaptor, shared_layers): - super().__init__(None) - self.w2v_encoder = wav2vec_enc - self.shared_layers = self.w2v_encoder.w2v_model.encoder.layers[-shared_layers:] - 
self.w2v_encoder.w2v_model.encoder.layers = ( - self.w2v_encoder.w2v_model.encoder.layers[:-shared_layers] - ) - self.adaptor = adaptor - if self.shared_layers[-1].layer_norm_first: - self.final_layer_norm = mbart_enc.layer_norm - else: - mbart_enc.layer_norm = None - self.final_layer_norm = None - shared_layer_from = len(mbart_enc.layers) - shared_layers - if shared_layer_from < 0: - shared_layer_from = 0 - for layer_id, layer in enumerate(self.shared_layers): - mbart_enc.layers[ - shared_layer_from + layer_id - ] = TransformerSentenceEncoderLayerStd(layer) - - def forward(self, src_tokens, src_lengths=None, **kwargs): - padding_mask = lengths_to_padding_mask(src_lengths) - if not padding_mask.any(): - padding_mask = None - - out = self.w2v_encoder.forward(src_tokens, padding_mask, tbc=True) - x = out["encoder_out"] - enc_padding_mask = None - if out["encoder_padding_mask"] is not None: - enc_padding_mask = out["encoder_padding_mask"].transpose( - 0, 1 - ) # T X B --> B X T - - x, enc_padding_mask = self.adaptor(x, enc_padding_mask) - for layer in self.shared_layers: - x, _ = layer(x, enc_padding_mask) - if self.final_layer_norm is not None: - x = self.final_layer_norm(x) - - return { - "encoder_out": [x], # T x B x C - "encoder_padding_mask": [enc_padding_mask] - if enc_padding_mask is not None - else [], # B x T - "encoder_embedding": [], # B x T x C - "encoder_states": [], # List[T x B x C] - "src_tokens": [], - "src_lengths": [], - } - - -class StackedWav2VecEncoderWithAdaptor(FairseqEncoder): - def __init__( - self, - wav2vec_enc, - mbart_enc_layers, - mbart_layer_norm, - adaptor, - drop_w2v_layers=0, - ): - super().__init__(None) - self.w2v_encoder = wav2vec_enc - self.adaptor = adaptor - self.mbart_encoder_layers = mbart_enc_layers - self.final_layer_norm = mbart_layer_norm - if drop_w2v_layers > 0: - self.w2v_encoder.w2v_model.encoder.layers = ( - self.w2v_encoder.w2v_model.encoder.layers[:-drop_w2v_layers] - ) - - def forward(self, src_tokens, src_lengths=None, return_all_hiddens=False, **kwargs): - padding_mask = lengths_to_padding_mask(src_lengths) - if not padding_mask.any(): - padding_mask = None - - out = self.w2v_encoder.forward(src_tokens, padding_mask, tbc=True) - x = out["encoder_out"] - enc_padding_mask = None - if out["encoder_padding_mask"] is not None: - enc_padding_mask = out["encoder_padding_mask"].transpose( - 0, 1 - ) # T X B --> B X T - - x, enc_padding_mask = self.adaptor(x, enc_padding_mask) - encoder_states = [] - for layer in self.mbart_encoder_layers: - x = layer(x, enc_padding_mask) - if return_all_hiddens: - encoder_states.append(x) - if self.final_layer_norm is not None: - x = self.final_layer_norm(x) - - return { - "encoder_out": [x], # T x B x C - "encoder_padding_mask": [enc_padding_mask] - if enc_padding_mask is not None - else [], # B x T - "encoder_embedding": [], # B x T x C - "encoder_states": encoder_states, # List[T x B x C] - "src_tokens": [], - "src_lengths": [], - } - - def reorder_encoder_out(self, encoder_out, new_order): - new_encoder_out = ( - [] - if len(encoder_out["encoder_out"]) == 0 - else [x.index_select(1, new_order) for x in encoder_out["encoder_out"]] - ) - - new_encoder_padding_mask = ( - [] - if len(encoder_out["encoder_padding_mask"]) == 0 - else [ - x.index_select(0, new_order) - for x in encoder_out["encoder_padding_mask"] - ] - ) - - new_encoder_embedding = ( - [] - if len(encoder_out["encoder_embedding"]) == 0 - else [ - x.index_select(0, new_order) for x in encoder_out["encoder_embedding"] - ] - ) - - encoder_states = 
encoder_out["encoder_states"] - if len(encoder_states) > 0: - for idx, state in enumerate(encoder_states): - encoder_states[idx] = state.index_select(1, new_order) - - return { - "encoder_out": new_encoder_out, # T x B x C - "encoder_padding_mask": new_encoder_padding_mask, # B x T - "encoder_embedding": new_encoder_embedding, # B x T x C - "encoder_states": encoder_states, # List[T x B x C] - "src_tokens": [], # B x T - "src_lengths": [], # B x 1 - } - - -# Note: -# dual input transformer: -# encoder: wav2vec for speech + mbart encoder for text -# decoder: mbart decoder for text -@register_model("dual_input_xm_transformer") -class DualInputXMTransformerModel(DualInputS2TTransformerModel): - def __init__(self, encoder, decoder): - super().__init__(encoder, decoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - # wav2vec encoder - Wav2VecEncoderWithAdaptor.add_args(parser) - # add_decoder_args(parser) - # mbart Transformer - parser.add_argument( - "--activation-fn", - type=str, - default="relu", - choices=utils.get_available_activation_fns(), - help="activation function to use", - ) - - parser.add_argument( - "--mbart-dropout", type=float, metavar="D", help="dropout probability" - ) - parser.add_argument( - "--mbart-attention-dropout", - type=float, - metavar="D", - help="dropout probability for attention weights", - ) - parser.add_argument( - "--mbart-activation-dropout", - type=float, - metavar="D", - help="dropout probability after activation in FFN.", - ) - - parser.add_argument( - "--encoder-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension", - ) - parser.add_argument( - "--encoder-ffn-embed-dim", - type=int, - metavar="N", - help="encoder embedding dimension for FFN", - ) - parser.add_argument( - "--encoder-layers", type=int, metavar="N", help="num encoder layers" - ) - parser.add_argument( - "--encoder-attention-heads", - type=int, - metavar="N", - help="num encoder attention heads", - ) - parser.add_argument( - "--encoder-normalize-before", - action="store_true", - help="apply layernorm before each encoder block", - ) - - parser.add_argument( - "--decoder-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension", - ) - parser.add_argument( - "--decoder-ffn-embed-dim", - type=int, - metavar="N", - help="decoder embedding dimension for FFN", - ) - parser.add_argument( - "--decoder-layers", type=int, metavar="N", help="num decoder layers" - ) - parser.add_argument( - "--decoder-attention-heads", - type=int, - metavar="N", - help="num decoder attention heads", - ) - parser.add_argument( - "--decoder-normalize-before", - action="store_true", - help="apply layernorm before each decoder block", - ) - parser.add_argument( - "--layernorm-embedding", - action="store_true", - help="add layernorm to embedding", - ) - parser.add_argument( - "--no-scale-embedding", - action="store_true", - help="if True, dont scale embeddings", - ) - parser.add_argument( - "--load-pretrained-mbart-from", - type=str, - metavar="STR", - help="model to take text encoder decoder weights from (for initialization)", - ) - # parser.add_argument("--finetune-w2v-params", type=str, metavar="STR", - # help="comma-separated param strings to finetune.") - parser.add_argument( - "--finetune-mbart-decoder-params", - type=str, - metavar="STR", - help="comma-separated param strings to finetune.", - ) - parser.add_argument( - "--finetune-mbart-encoder-params", - type=str, - metavar="STR", - help="comma-separated param strings to 
finetune.", - ) - parser.add_argument( - "--skip-encoder-projection", - action="store_true", - help="skip the projection layer in encoder", - ) - - parser.add_argument( - "--enc-grad-mult", - type=float, - metavar="V", - default=1.0, - help="multiply enc1 and enc2 gradient by V", - ) - parser.add_argument( - "--enc2-along-grad-mult", - type=float, - metavar="V", - default=1.0, - help="multiply enc2 gradient by V if only enc2 is used", - ) - parser.add_argument( - "--text-input-cost-ratio", - type=float, - default=1.0, - metavar="V", - help="text input cost ratio relative to speech input cost", - ) - parser.add_argument( - "--stack-w2v-mbart-encoder", - action="store_true", - help="stack w2v and mbart encoder", - ) - parser.add_argument( - "--stack-w2v-mbart-nonorm-encoder", - action="store_true", - help="stack w2v and mbart encoder", - ) - parser.add_argument( - "--no-final-norm-decoder", action="store_true", help="no layer norm" - ) - parser.add_argument( - "--drop-w2v-layers", - type=int, - default=0, - metavar="N", - help="drop w2v encoder layers", - ) - - parser.add_argument( - "--share-w2v-text-encoder", - action="store_true", - help="share w2v encoder layers with text encoder", - ) - parser.add_argument( - "--shared-w2v-layers", - type=int, - default=0, - metavar="N", - help="shared encoder layers from w2v encoder", - ) - - @classmethod - def build_encoder(cls, args, task): - _args = copy.deepcopy(args) - _args.dropout = args.mbart_dropout - _args.attention_dropout = args.mbart_attention_dropout - _args.activation_dropout = args.mbart_activation_dropout - _args.max_source_positions = 1024 - enc_emb = nn.Embedding( - len(task.src_dict), _args.encoder_embed_dim, task.src_dict.pad() - ) - text_encoder = TransformerEncoder(_args, task.src_dict, enc_emb) - spch_encoder = Wav2VecEncoderWithAdaptor(args) - if getattr(args, "load_pretrained_mbart_from", None): - text_encoder = checkpoint_utils.load_pretrained_component_from_model( - component=text_encoder, checkpoint=args.load_pretrained_mbart_from - ) - if getattr(args, "stack_w2v_mbart_encoder", False): - assert getattr(args, "share_w2v_text_encoder", False) is False - spch_encoder = StackedWav2VecEncoderWithAdaptor( - spch_encoder.w2v_encoder, - text_encoder.layers, - text_encoder.layer_norm, - spch_encoder.adaptor, - args.drop_w2v_layers, - ) - elif getattr(args, "stack_w2v_mbart_nonorm_encoder", False): - text_encoder.layer_norm = None - spch_encoder = StackedWav2VecEncoderWithAdaptor( - spch_encoder.w2v_encoder, - text_encoder.layers, - text_encoder.layer_norm, - spch_encoder.adaptor, - args.drop_w2v_layers, - ) - elif getattr(args, "share_w2v_text_encoder", False): - spch_encoder = SharedEncoder( - spch_encoder.w2v_encoder, - text_encoder, - spch_encoder.adaptor, - args.shared_w2v_layers, - ) - - for k, p in spch_encoder.named_parameters(): - # Freeze pretrained models by default - if safe_hasattr( - args, "finetune_w2v_params" - ) and XMTransformerModel.finetune_params(args.finetune_w2v_params, k): - p.requires_grad = True - else: - p.requires_grad = False - for k, p in text_encoder.named_parameters(): - # Freeze pretrained models by default - if safe_hasattr( - args, "finetune_mbart_encoder_params" - ) and XMTransformerModel.finetune_params( - args.finetune_mbart_encoder_params, k - ): - p.requires_grad = True - else: - p.requires_grad = False - cross_attentive_loss_before_last_layer = ( - 0 if getattr(args, "attentive_cost_regularization", 0.0) > 0.0 else -1 - ) - encoder = DualInputEncoder( - args, - spch_encoder, - text_encoder, 
- task.src_dict, - cross_attentive_loss_before_last_layer, - ) - return encoder - - @classmethod - def build_decoder(cls, args, task): - _args = copy.deepcopy(args) - _args.dropout = args.mbart_dropout - _args.attention_dropout = args.mbart_attention_dropout - _args.activation_dropout = args.mbart_activation_dropout - _args.max_target_positions = 1024 - dec_emb = nn.Embedding( - len(task.tgt_dict), _args.encoder_embed_dim, task.tgt_dict.pad() - ) - decoder = TransformerDecoder(_args, task.tgt_dict, dec_emb) - if getattr(args, "load_pretrained_mbart_from", None): - decoder = checkpoint_utils.load_pretrained_component_from_model( - component=decoder, checkpoint=args.load_pretrained_mbart_from - ) - if getattr(args, "no_final_norm_decoder", False): - decoder.layer_norm = None - for k, p in decoder.named_parameters(): - # Freeze pretrained models by default - if safe_hasattr( - args, "finetune_mbart_decoder_params" - ) and XMTransformerModel.finetune_params( - args.finetune_mbart_decoder_params, k - ): - p.requires_grad = True - else: - p.requires_grad = False - - compute_cross_attentive_loss = ( - True if getattr(args, "attentive_cost_regularization", 0.0) > 0.0 else False - ) - cross_attentive_loss_without_norm = getattr( - args, "attentive_cost_without_normalize", False - ) - cross_attentive_loss_reverse = ( - False # getattr(args, "attentive_cost_reverse", False) - ) - decoder = TransformerMultiInputDecoder( - dictionary=task.target_dictionary, - spch_decoder=decoder, - text_decoder=decoder, - compute_cross_attentive_loss=compute_cross_attentive_loss, - cross_attentive_loss_with_norm=True - if not cross_attentive_loss_without_norm - else False, - cross_attentive_loss_reverse=cross_attentive_loss_reverse, - ) - return decoder - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - # make sure that all args are properly defaulted - # (in case there are any new ones) - dualinputxmtransformer_base(args) - - encoder = cls.build_encoder(args, task) - decoder = cls.build_decoder(args, task) - return cls(encoder, decoder) - - -@register_model_architecture("dual_input_xm_transformer", "dualinputxmtransformer_base") -def dualinputxmtransformer_base(args): - # wav2vec encoder - set_default_w2v_encoder_args(args) - set_default_adaptor_args(args) - - # mbart model - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr( - args, "encoder_ffn_embed_dim", 4 * args.encoder_embed_dim - ) - args.encoder_layers = getattr(args, "encoder_layers", 12) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True) - args.encoder_layerdrop = getattr(args, "encoder_layerdrop", 0) - args.encoder_learned_pos = getattr(args, "encoder_learned_pos", True) - - args.decoder_embed_path = getattr(args, "decoder_embed_path", None) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4 * 1024) - args.decoder_layers = getattr(args, "decoder_layers", 12) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", True) - args.decoder_learned_pos = getattr(args, "decoder_learned_pos", True) - args.decoder_layerdrop = getattr(args, "decoder_layerdrop", 0.0) - - args.adaptive_input = getattr(args, "adaptive_input", False) - - args.mbart_attention_dropout = getattr(args, 
"mbart_attention_dropout", 0.0) - args.mbart_activation_dropout = getattr(args, "mbart_activation_dropout", 0.0) - args.mbart_dropout = getattr(args, "mbart_dropout", 0.1) - args.adaptive_softmax_cutoff = getattr(args, "adaptive_softmax_cutoff", None) - args.adaptive_softmax_dropout = getattr(args, "adaptive_softmax_dropout", 0) - args.share_decoder_input_output_embed = getattr( - args, "share_decoder_input_output_embed", True - ) - args.no_token_positional_embeddings = getattr( - args, "no_token_positional_embeddings", False - ) - - args.decoder_output_dim = getattr( - args, "decoder_output_dim", args.decoder_embed_dim - ) - args.decoder_input_dim = getattr(args, "decoder_input_dim", args.decoder_embed_dim) - - args.no_scale_embedding = getattr(args, "no_scale_embedding", False) - args.quant_noise_pq = getattr(args, "quant_noise_pq", 0) - args.layernorm_embedding = getattr(args, "layernorm_embedding", True) - - args.activation_fn = getattr(args, "activation_fn", "gelu") - args.pooler_activation_fn = getattr(args, "pooler_activation_fn", "tanh") - args.pooler_dropout = getattr(args, "pooler_dropout", 0.0) diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/shorten_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/shorten_dataset.py deleted file mode 100644 index 6ebb5d88feb3f29d1512a0873df304915d051209..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/data/shorten_dataset.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -from fairseq.data import data_utils - -from . 
import BaseWrapperDataset - - -class TruncateDataset(BaseWrapperDataset): - """Truncate a sequence by returning the first truncation_length tokens""" - - def __init__(self, dataset, truncation_length): - super().__init__(dataset) - assert truncation_length is not None - self.truncation_length = truncation_length - self.dataset = dataset - - def __getitem__(self, index): - item = self.dataset[index] - item_len = item.size(0) - if item_len > self.truncation_length: - item = item[: self.truncation_length] - return item - - @property - def sizes(self): - return np.minimum(self.dataset.sizes, self.truncation_length) - - def __len__(self): - return len(self.dataset) - - -class RandomCropDataset(TruncateDataset): - """Truncate a sequence by returning a random crop of truncation_length tokens""" - - def __init__(self, dataset, truncation_length, seed=1): - super().__init__(dataset, truncation_length) - self.seed = seed - self.epoch = 0 - - @property - def can_reuse_epoch_itr_across_epochs(self): - return True # only the crop changes, not item sizes - - def set_epoch(self, epoch, **unused): - super().set_epoch(epoch) - self.epoch = epoch - - def __getitem__(self, index): - with data_utils.numpy_seed(self.seed, self.epoch, index): - item = self.dataset[index] - item_len = item.size(0) - excess = item_len - self.truncation_length - if excess > 0: - start_idx = np.random.randint(0, excess) - item = item[start_idx : start_idx + self.truncation_length] - return item - - -def maybe_shorten_dataset( - dataset, - split, - shorten_data_split_list, - shorten_method, - tokens_per_sample, - seed, -): - truncate_split = ( - split in shorten_data_split_list.split(",") or len(shorten_data_split_list) == 0 - ) - if shorten_method == "truncate" and truncate_split: - dataset = TruncateDataset(dataset, tokens_per_sample) - elif shorten_method == "random_crop" and truncate_split: - dataset = RandomCropDataset(dataset, tokens_per_sample, seed) - return dataset diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/README.md b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/README.md deleted file mode 100644 index 02892bc9dd4344e550596d238e2b71870cfc7dd3..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/README.md +++ /dev/null @@ -1,220 +0,0 @@ -# vakyansh-tts -Text to Speech for Indic languages - -## 1. Installation and Setup for training - -Clone repo -Note : for multspeaker glow-tts training use branch [multispeaker](https://github.com/Open-Speech-EkStep/vakyansh-tts/tree/multispeaker) -``` -git clone https://github.com/Open-Speech-EkStep/vakyansh-tts -``` -Build conda virtual environment -``` -cd ./vakyansh-tts -conda create --name python=3.7 -conda activate -pip install -r requirements.txt -``` -Install [apex](https://github.com/NVIDIA/apex); commit: 37cdaf4 for Mixed-precision training - -Note : used only for glow-tts -``` -cd .. -git clone https://github.com/NVIDIA/apex -cd apex -git checkout 37cdaf4 -pip install -v --disable-pip-version-check --no-cache-dir ./ -cd ../vakyansh-tts -``` -Build Monotonic Alignment Search Code (Cython) - -Note : used only for glow-tts -``` -bash install.sh -``` - -## 2. Data Resampling - -The data format should have a folder containing all the .wav files for glow-tts and a text file containing filenames with their sentences. 
- -Directory structure: - -langauge_folder_name -``` -language_folder_name -|-- ./wav/*.wav -|-- ./text_file_name.txt -``` -The format for text_file_name.txt (Text file is only needed for glow-tts training) - -``` -( audio1.wav "Sentence1." ) -( audio2.wav "Sentence2." ) -``` - -To resample the .wav files to 22050 sample rate, change the following parameters in the vakyansh-tts/scripts/data/resample.sh - -``` -input_wav_path : absolute path to wav file folder in vakyansh_tts/data/ -output_wav_path : absolute path to vakyansh_tts/data/resampled_wav_folder_name -output_sample_rate : 22050 (or any other desired sample rate) -``` - -To run: -```bash -cd scripts/data/ -bash resample.sh -``` - - -## 3. Spectogram Training (glow-tts) - -### 3.1 Data Preparation - - -To prepare the data edit the vakyansh-tts/scripts/glow/prepare_data.sh file and change the following parameters -``` -input_text_path : absolute path to vakyansh_tts/data/text_file_name.txt -input_wav_path : absolute path to vakyansh_tts/data/resampled_wav_folder_name -gender : female or male voice -``` -To run: -```bash -cd scripts/glow/ -bash prepare_data.sh -``` -### 3.2 Training glow-tts - -To start the spectogram-training edit the vakyansh-tts/scripts/glow/train_glow.sh file and change the following parameter: -``` -gender : female or male voice -``` -Make sure that the gender is same as that of the prepare_data.sh file - -To start the training, run: -```bash -cd scripts/glow/ -bash train_glow.sh -``` -## 4. Vocoder Training (hifi-gan) - -### 4.1 Data Preparation - -To prepare the data edit the vakyansh-tts/scripts/hifi/prepare_data.sh file and change the following parameters -``` -input_wav_path : absolute path to vakyansh_tts/data/resampled_wav_folder_name -gender : female or male voice -``` -To run: -```bash -cd scripts/hifi/ -bash prepare_data.sh -``` -### 4.2 Training hifi-gan - -To start the spectogram-training edit the vakyansh-tts/scripts/hifi/train_hifi.sh file and change the following parameter: -``` -gender : female or male voice -``` -Make sure that the gender is same as that of the prepare_data.sh file - -To start the training, run: -```bash -cd scripts/hifi/ -bash train_hifi.sh -``` - -## 5. Inference - -### 5.1 Using Gradio - -To use the gradio link edit the following parameters in the vakyansh-tts/scripts/inference/gradio.sh file: -``` -gender : female or male voice -device : cpu or cuda -lang : langauge code -``` - -To run: -```bash -cd scripts/inference/ -bash gradio.sh -``` -### 5.2 Using fast API -To use the fast api link edit the parameters in the vakyansh-tts/scripts/inference/api.sh file similar to section 5.1 - -To run: -```bash -cd scripts/inference/ -bash api.sh -``` - -### 5.3 Direct Inference using text -To infer, edit the parameters in the vakyansh-tts/scripts/inference/infer.sh file similar to section 5.1 and set the text to the text variable - -To run: -```bash -cd scripts/inference/ -bash infer.sh -``` - -To configure other parameters there is a version that runs the advanced inference as well. Additional Parameters: -``` -noise_scale : can vary from 0 to 1 for noise factor -length_scale : can vary from 0 to 2 for changing the speed of the generated audio -transliteration : whether to switch on/off transliteration. 1: ON, 0: OFF -number_conversion : whether to switch on/off number to words conversion. 1: ON, 0: OFF -split_sentences : whether to switch on/off splitting of sentences. 
1: ON, 0: OFF -``` -To run: -``` -cd scripts/inference/ -bash advanced_infer.sh -``` - -### 5.4 Installation of tts_infer package - -In tts_infer package, we currently have two components: - - 1. Transliteration (AI4bharat's open sourced models) (Languages supported: {'hi', 'gu', 'mr', 'bn', 'te', 'ta', 'kn', 'pa', 'gom', 'mai', 'ml', 'sd', 'si', 'ur'} ) - - 2. Num to Word (Languages supported: {'en', 'hi', 'gu', 'mr', 'bn', 'te', 'ta', 'kn', 'or', 'pa'} ) -``` -git clone https://github.com/Open-Speech-EkStep/vakyansh-tts -cd vakyansh-tts -bash install.sh -python setup.py bdist_wheel -pip install -e . -cd tts_infer -gsutil -m cp -r gs://vakyaansh-open-models/translit_models . -``` - -Usage: Refer to example file in tts_infer/ -``` -from tts_infer.tts import TextToMel, MelToWav -from tts_infer.transliterate import XlitEngine -from tts_infer.num_to_word_on_sent import normalize_nums - -import re -from scipy.io.wavfile import write - -text_to_mel = TextToMel(glow_model_dir='/path/to/glow-tts/checkpoint/dir', device='cuda') -mel_to_wav = MelToWav(hifi_model_dir='/path/to/hifi/checkpoint/dir', device='cuda') - -def translit(text, lang): - reg = re.compile(r'[a-zA-Z]') - engine = XlitEngine(lang) - words = [engine.translit_word(word, topk=1)[lang][0] if reg.match(word) else word for word in text.split()] - updated_sent = ' '.join(words) - return updated_sent - -def run_tts(text, lang): - text = text.replace('।', '.') # only for hindi models - text_num_to_word = normalize_nums(text, lang) # converting numbers to words in lang - text_num_to_word_and_transliterated = translit(text_num_to_word, lang) # transliterating english words to lang - - mel = text_to_mel.generate_mel(text_num_to_word_and_transliterated) - audio, sr = mel_to_wav.generate_wav(mel) - write(filename='temp.wav', rate=sr, data=audio) # for saving wav file, if needed - return (sr, audio) -``` diff --git a/spaces/Hashom132/stabilityai-stable-diffusion-2/app.py b/spaces/Hashom132/stabilityai-stable-diffusion-2/app.py deleted file mode 100644 index d2782cea00b1bfcd22df7c204d9e52a6baf46ac2..0000000000000000000000000000000000000000 --- a/spaces/Hashom132/stabilityai-stable-diffusion-2/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/stabilityai/stable-diffusion-2").launch() \ No newline at end of file diff --git a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/test_text_len.py b/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/test_text_len.py deleted file mode 100644 index 77ad9d3adc4fabb6b6eee099a60b9793cef2dfa2..0000000000000000000000000000000000000000 --- a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/test_text_len.py +++ /dev/null @@ -1,204 +0,0 @@ -# Copyright 2021 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
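# Overview (descriptive comment added for clarity; not part of the original file):
# this script assembles a reduced Gradio app for the Data Measurements Tool that
# loads or prepares only the text-length statistics for the selected dataset /
# config / split / text feature (via DatasetStatisticsCacheClass and
# load_or_prepare_widgets) and renders the single TextLengths widget.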
- -import argparse -import ast -import gradio as gr -from os.path import isdir -from data_measurements.dataset_statistics import DatasetStatisticsCacheClass as dmt_cls -import utils -from utils import dataset_utils -from utils import gradio_utils as gr_utils -import widgets -import app as ap -from app import load_or_prepare_widgets - - -logs = utils.prepare_logging(__file__) - -# Utility for sidebar description and selection of the dataset -DATASET_NAME_TO_DICT = dataset_utils.get_dataset_info_dicts() - - -def get_load_prepare_list(dstats): - """ - # Get load_or_prepare functions for the measurements we will display - """ - # Measurement calculation: - # Add any additional modules and their load-prepare function here. - load_prepare_list = [ - ("text_lengths", dstats.load_or_prepare_text_lengths), - ] - - return load_prepare_list - - -def get_ui_widgets(): - """Get the widgets that will be displayed in the UI.""" - return [ - widgets.TextLengths(),] - - -def get_widgets(): - """ - # A measurement widget requires 2 things: - # - A load or prepare function - # - A display function - # We define these in two separate functions get_load_prepare_list and get_ui_widgets; - # any widget can be added by modifying both functions and the rest of the app logic will work. - # get_load_prepare_list is a function since it requires a DatasetStatisticsCacheClass which will - # not be created until dataset and config values are selected in the ui - """ - return get_load_prepare_list, get_ui_widgets() - - -def get_title(dstats): - title_str = f"### Showing: {dstats.dset_name} - {dstats.dset_config} - {dstats.split_name} - {'-'.join(dstats.text_field)}" - logs.info("showing header") - return title_str - - -def display_initial_UI(): - """Displays the header in the UI""" - # Extract the selected arguments - dataset_args = gr_utils.sidebar_selection(DATASET_NAME_TO_DICT) - return dataset_args - - - - -def show_column(dstats, display_list, show_perplexities, column_id=""): - """ - Function for displaying the elements in the streamlit app. - Args: - dstats (class): The dataset_statistics.py DatasetStatisticsCacheClass - display_list (list): List of tuples for (widget_name, widget_display_function) - show_perplexities (Bool): Whether perplexities should be loaded and displayed for this dataset - column_id (str): Which column of the dataset the analysis is done on [DEPRECATED for v1] - """ - - # start showing stuff - gr_utils.expander_header(dstats, DATASET_NAME_TO_DICT) - for widget_tuple in display_list: - widget_type = widget_tuple[0] - widget_fn = widget_tuple[1] - logs.info("showing %s." % widget_type) - try: - widget_fn(dstats, column_id) - except Exception as e: - logs.warning("Jk jk jk. There was an issue with %s:" % widget_type) - logs.exception(e) - # TODO: Fix how this is a weird outlier. - if show_perplexities: - gr_utils.expander_text_perplexities(dstats, column_id) - logs.info("Have finished displaying the widgets.") - - -def create_demo(live: bool, pull_cache_from_hub: bool): - with gr.Blocks() as demo: - state = gr.State() - with gr.Row(): - with gr.Column(scale=1): - dataset_args = display_initial_UI() - get_load_prepare_list_fn, widget_list = get_widgets() - # # TODO: Make this less of a weird outlier. 
- # Doesn't do anything right now - show_perplexities = gr.Checkbox(label="Show text perplexities") - with gr.Column(scale=4): - gr.Markdown("# Data Measurements Tool") - title = gr.Markdown() - for widget in widget_list: - widget.render() - # when UI upates, call the new text --> parse to teh TTi function - def update_ui(dataset: str, config: str, split: str, feature: str): - feature = ast.literal_eval(feature) - label_field, label_names = gr_utils.get_label_names(dataset, config, DATASET_NAME_TO_DICT) - dstats = dmt_cls(dset_name=dataset, dset_config=config, split_name=split, text_field=feature, - label_field=label_field, label_names=label_names, use_cache=True) - load_prepare_list = get_load_prepare_list_fn(dstats) - dstats = load_or_prepare_widgets(dstats, load_prepare_list, show_perplexities=False, - live=live, pull_cache_from_hub=pull_cache_from_hub) - output = {title: get_title(dstats), state: dstats} - for widget in widget_list: - output.update(widget.update(dstats)) - return output - - def update_dataset(dataset: str): - new_values = gr_utils.update_dataset(dataset, DATASET_NAME_TO_DICT) - config = new_values[0][1] - feature = new_values[1][1] - split = new_values[2][1] - new_dropdown = { - dataset_args["text_field"]: gr.Dropdown.update(choices=new_values[1][0], value=feature), - dataset_args["split_name"]: gr.Dropdown.update(choices=new_values[2][0], value=split), - } - return new_dropdown - - def update_config(dataset: str, config: str): - new_values = gr_utils.update_config(dataset, config, DATASET_NAME_TO_DICT) - - feature = new_values[0][1] - split = new_values[1][1] - new_dropdown = { - dataset_args["text_field"]: gr.Dropdown.update(choices=new_values[0][0], value=feature), - dataset_args["split_name"]: gr.Dropdown.update(choices=new_values[1][0], value=split) - } - return new_dropdown - - measurements = [comp for output in widget_list for comp in output.output_components] - demo.load(update_ui, - inputs=[dataset_args["dset_name"], dataset_args["dset_config"], dataset_args["split_name"], dataset_args["text_field"]], - outputs=[title, state] + measurements) - print(dataset_args["text_field"]) - for widget in widget_list: - widget.add_events(state) - - dataset_args["dset_name"].change(update_dataset, - inputs=[dataset_args["dset_name"]], - outputs=[dataset_args["dset_config"], - dataset_args["split_name"], dataset_args["text_field"], - title, state] + measurements) - - dataset_args["dset_config"].change(update_config, - inputs=[dataset_args["dset_name"], dataset_args["dset_config"]], - outputs=[dataset_args["split_name"], dataset_args["text_field"], - title, state] + measurements) - - dataset_args["calculate_btn"].click(update_ui, - inputs=[dataset_args["dset_name"], dataset_args["dset_config"], - dataset_args["split_name"], dataset_args["text_field"]], - outputs=[title, state] + measurements) - return demo - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument( - "--live", default=False, required=False, action="store_true", help="Flag to specify that this is not running live.") - parser.add_argument( - "--pull_cache_from_hub", default=False, required=False, action="store_true", help="Flag to specify whether to look in the hub for measurements caches. 
If you are using this option, you must have HUB_CACHE_ORGANIZATION= and HF_TOKEN= on separate lines in a file named .env at the root of this repo.") - arguments = parser.parse_args() - live = arguments.live - pull_cache_from_hub = arguments.pull_cache_from_hub - - # Create and initialize the demo - dataset_args = display_initial_UI() - demo = create_demo(live, pull_cache_from_hub) - print("this is the cureenrt TEXT:") - print(dataset_args["text_field"]) - - demo.launch() - -if __name__ == "__main__": - main() diff --git a/spaces/HugoDzz/super-godot-galaxy/static/smg/index.html b/spaces/HugoDzz/super-godot-galaxy/static/smg/index.html deleted file mode 100644 index 221664ad7b1306dc83bc68b640ae9f2927e46f47..0000000000000000000000000000000000000000 --- a/spaces/HugoDzz/super-godot-galaxy/static/smg/index.html +++ /dev/null @@ -1,248 +0,0 @@ - - - - - - Super Godot Galaxy - - - - - - - - HTML5 canvas appears to be unsupported in the current browser.
    - Please try updating or use a different browser. -
    -
    - - - -
    - - - - - - diff --git a/spaces/ICML2022/OFA/fairseq/examples/latent_depth/latent_depth_src/__init__.py b/spaces/ICML2022/OFA/fairseq/examples/latent_depth/latent_depth_src/__init__.py deleted file mode 100644 index c5fa76039ff98c18d3c14b5f4a8f73ffe644de11..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/latent_depth/latent_depth_src/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import multilingual_translation_latent_depth # noqa -from .loss import latent_depth # noqa -from .models import latent_multilingual_transformer # noqa -from .modules import latent_layers # noqa diff --git a/spaces/ICML2022/resefa/third_party/stylegan3_official_ops/custom_ops.py b/spaces/ICML2022/resefa/third_party/stylegan3_official_ops/custom_ops.py deleted file mode 100644 index c5853ac187e6e3ae522b0ef1aabefc7b188f7083..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/resefa/third_party/stylegan3_official_ops/custom_ops.py +++ /dev/null @@ -1,191 +0,0 @@ -# python3.7 - -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Utility functions to setup customized operators. - -Please refer to https://github.com/NVlabs/stylegan3 -""" - -# pylint: disable=line-too-long -# pylint: disable=multiple-statements -# pylint: disable=missing-function-docstring -# pylint: disable=useless-suppression -# pylint: disable=inconsistent-quotes - -import glob -import hashlib -import importlib -import os -import re -import shutil -import uuid - -import torch -import torch.utils.cpp_extension - -#---------------------------------------------------------------------------- -# Global options. - -verbosity = 'none' # Verbosity level: 'none', 'brief', 'full' - -#---------------------------------------------------------------------------- -# Internal helper funcs. - -def _find_compiler_bindir(): - patterns = [ - 'C:/Program Files (x86)/Microsoft Visual Studio/*/Professional/VC/Tools/MSVC/*/bin/Hostx64/x64', - 'C:/Program Files (x86)/Microsoft Visual Studio/*/BuildTools/VC/Tools/MSVC/*/bin/Hostx64/x64', - 'C:/Program Files (x86)/Microsoft Visual Studio/*/Community/VC/Tools/MSVC/*/bin/Hostx64/x64', - 'C:/Program Files (x86)/Microsoft Visual Studio */vc/bin', - ] - for pattern in patterns: - matches = sorted(glob.glob(pattern)) - if len(matches): - return matches[-1] - return None - -def _find_compiler_bindir_posix(): - patterns = [ - '/usr/local/cuda/bin' - ] - for pattern in patterns: - matches = sorted(glob.glob(pattern)) - if len(matches): - return matches[-1] - return None - -#---------------------------------------------------------------------------- - -def _get_mangled_gpu_name(): - name = torch.cuda.get_device_name().lower() - out = [] - for c in name: - if re.match('[a-z0-9_-]+', c): - out.append(c) - else: - out.append('-') - return ''.join(out) - -#---------------------------------------------------------------------------- -# Main entry point for compiling and loading C++/CUDA plugins. 
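# Usage sketch (comment added for clarity; not part of the original file, and the
# module/file names below are illustrative): a custom op is typically loaded
# lazily through get_plugin() defined below, e.g.
#
#   _plugin = get_plugin(
#       module_name='example_op_plugin',              # hypothetical plugin name
#       sources=['example_op.cpp', 'example_op.cu'],  # hypothetical sources
#       headers=['example_op.h'],
#       source_dir=os.path.dirname(__file__),
#       extra_cuda_cflags=['--use_fast_math'],        # forwarded to cpp_extension.load
#   )
#
# get_plugin() compiles the C++/CUDA sources with torch.utils.cpp_extension.load,
# caches the imported module in _cached_plugins, and returns it on later calls.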
- -_cached_plugins = dict() - -def get_plugin(module_name, sources, headers=None, source_dir=None, **build_kwargs): - assert verbosity in ['none', 'brief', 'full'] - if headers is None: - headers = [] - if source_dir is not None: - sources = [os.path.join(source_dir, fname) for fname in sources] - headers = [os.path.join(source_dir, fname) for fname in headers] - - # Already cached? - if module_name in _cached_plugins: - return _cached_plugins[module_name] - - # Print status. - if verbosity == 'full': - print(f'Setting up PyTorch plugin "{module_name}"...') - elif verbosity == 'brief': - print(f'Setting up PyTorch plugin "{module_name}"... ', end='', flush=True) - verbose_build = (verbosity == 'full') - - # Compile and load. - try: # pylint: disable=too-many-nested-blocks - # Make sure we can find the necessary compiler binaries. - if os.name == 'nt' and os.system("where cl.exe >nul 2>nul") != 0: - compiler_bindir = _find_compiler_bindir() - if compiler_bindir is None: - raise RuntimeError(f'Could not find MSVC/GCC/CLANG installation on this computer. Check _find_compiler_bindir() in "{__file__}".') - os.environ['PATH'] += ';' + compiler_bindir - - elif os.name == 'posix': - compiler_bindir = _find_compiler_bindir_posix() - if compiler_bindir is None: - raise RuntimeError(f'Could not find NVCC installation on this computer. Check _find_compiler_bindir_posix() in "{__file__}".') - os.environ['PATH'] += ';' + compiler_bindir - - # Some containers set TORCH_CUDA_ARCH_LIST to a list that can either - # break the build or unnecessarily restrict what's available to nvcc. - # Unset it to let nvcc decide based on what's available on the - # machine. - os.environ['TORCH_CUDA_ARCH_LIST'] = '' - - # Incremental build md5sum trickery. Copies all the input source files - # into a cached build directory under a combined md5 digest of the input - # source files. Copying is done only if the combined digest has changed. - # This keeps input file timestamps and filenames the same as in previous - # extension builds, allowing for fast incremental rebuilds. - # - # This optimization is done only in case all the source files reside in - # a single directory (just for simplicity) and if the TORCH_EXTENSIONS_DIR - # environment variable is set (we take this as a signal that the user - # actually cares about this.) - # - # EDIT: We now do it regardless of TORCH_EXTENSIONS_DIR, in order to work - # around the *.cu dependency bug in ninja config. - # - all_source_files = sorted(sources + headers) - all_source_dirs = set(os.path.dirname(fname) for fname in all_source_files) - if len(all_source_dirs) == 1: # and ('TORCH_EXTENSIONS_DIR' in os.environ): - - # Compute combined hash digest for all source files. - hash_md5 = hashlib.md5() - for src in all_source_files: - with open(src, 'rb') as f: - hash_md5.update(f.read()) - - # Select cached build directory name. - source_digest = hash_md5.hexdigest() - build_top_dir = torch.utils.cpp_extension._get_build_directory(module_name, verbose=verbose_build) # pylint: disable=protected-access - cached_build_dir = os.path.join(build_top_dir, f'{source_digest}-{_get_mangled_gpu_name()}') - - if not os.path.isdir(cached_build_dir): - tmpdir = f'{build_top_dir}/srctmp-{uuid.uuid4().hex}' - os.makedirs(tmpdir) - for src in all_source_files: - shutil.copyfile(src, os.path.join(tmpdir, os.path.basename(src))) - try: - os.replace(tmpdir, cached_build_dir) # atomic - except OSError: - # source directory already exists, delete tmpdir and its contents. 
- shutil.rmtree(tmpdir) - if not os.path.isdir(cached_build_dir): raise - - # Compile. - cached_sources = [os.path.join(cached_build_dir, os.path.basename(fname)) for fname in sources] - torch.utils.cpp_extension.load(name=module_name, build_directory=cached_build_dir, - verbose=verbose_build, sources=cached_sources, **build_kwargs) - else: - torch.utils.cpp_extension.load(name=module_name, verbose=verbose_build, sources=sources, **build_kwargs) - - # Load. - module = importlib.import_module(module_name) - - except: - if verbosity == 'brief': - print('Failed!') - raise - - # Print status and add to cache dict. - if verbosity == 'full': - print(f'Done setting up PyTorch plugin "{module_name}".') - elif verbosity == 'brief': - print('Done.') - _cached_plugins[module_name] = module - return module - -#---------------------------------------------------------------------------- - -# pylint: enable=line-too-long -# pylint: enable=multiple-statements -# pylint: enable=missing-function-docstring -# pylint: enable=useless-suppression -# pylint: enable=inconsistent-quotes diff --git a/spaces/Ikaros521/moe-tts/transforms.py b/spaces/Ikaros521/moe-tts/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/Ikaros521/moe-tts/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], 
logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - 
+ input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/Illumotion/Koboldcpp/examples/gptneox-wip/cmpnct_gpt2bpe.hpp b/spaces/Illumotion/Koboldcpp/examples/gptneox-wip/cmpnct_gpt2bpe.hpp deleted file mode 100644 index 9d433f4b1acf01019344e66ce9eea59e7ed7d299..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/examples/gptneox-wip/cmpnct_gpt2bpe.hpp +++ /dev/null @@ -1,1133 +0,0 @@ -#ifndef CMPNCT_GPT2BPE -#define CMPNCT_GPT2BPE - -#include -#include -#include -#include -#include -#include -#include -#include -#include - - -// Unicode GPT2 Byte Pair Encoding Tokenizer -// Adapted from https://github.com/cmp-nct/ggllm.cpp [MIT License] -// Removed loading of merges from HF json and parts made for a specific vocab - - -//----------------- -// Unicode library (from cmpnct_unicode.cpp) -//----------------- - -// Minimal library for high performance handling and categorization of UTF8 strings and characters -// Using std::string - -enum CNCTCharType { - DIGIT, // a numerical char in any language - LETTER, // a letter in any language - WHITESPACE, // any form of whitespace - ACCENT_MARK, // letter modifiers like ´ in é - PUNCTUATION, // punctuation including brackets - SYMBOL, // math, currency, other symbols - CONTROL, // control characters - MIXED, // a mix of the above - UNIDENTIFIED // something more exotic like emoji or separators -}; - -struct CNCTUnicode; - -struct CNCTString { - std::string str; - size_t utf8_chars; - - CNCTCharType char_type=UNIDENTIFIED; - bool is_sequential=false; - - size_t seq_offset_bytes=0; - size_t seq_offset_utf8_chars=0; - - bool operator==(const std::string &other) const; - bool operator==(const char other) const; - bool operator==(const CNCTString &other) const; - CNCTString &operator+=(const std::string &other); - CNCTString &operator+=(const char other); - friend CNCTString operator+(CNCTString lhs, const std::string &rhs); - friend CNCTString operator+(CNCTString lhs, const char rhs); - CNCTString& operator+=(const CNCTString& other); - friend CNCTString operator+(CNCTString lhs, const CNCTString& rhs); -}; - -struct CNCTUnicode { - static bool check_code_range(int c, const std::vector>& ranges); - static CNCTCharType get_code_type(int c); - static CNCTCharType get_code_type(const std::string &utf8_char); - static int utf8_len(const char c); - static int strlen_utf8(std::string src); - static std::vector split_utf8(const std::string &src); - static std::vector split_utf8_enhanced(const std::string &src); - static CNCTCharType string_identify(const std::string& str); - static bool string_test(const std::string& str, CNCTCharType chartype); -}; - -static const std::vector> digit_ranges = { -{0x30, 0x39}, {0xB2, 0xB3}, {0xB9, 0xB9}, {0x660, 0x669}, {0x6F0, 
0x6F9}, {0x7C0, 0x7C9}, {0x966, 0x96F}, {0x9E6, 0x9EF}, {0xA66, 0xA6F}, {0xAE6, 0xAEF}, {0xB66, 0xB6F}, {0xBE6, 0xBEF}, {0xC66, 0xC6F}, -{0xCE6, 0xCEF}, {0xD66, 0xD6F}, {0xDE6, 0xDEF}, {0xE50, 0xE59}, {0xED0, 0xED9}, {0xF20, 0xF29}, {0x1040, 0x1049}, {0x1090, 0x1099}, {0x1369, 0x1371}, {0x17E0, 0x17E9}, {0x1810, 0x1819}, {0x1946, 0x194F}, -{0x19D0, 0x19DA}, {0x1A80, 0x1A89}, {0x1A90, 0x1A99}, {0x1B50, 0x1B59}, {0x1BB0, 0x1BB9}, {0x1C40, 0x1C49}, {0x1C50, 0x1C59}, {0x2070, 0x2070}, {0x2074, 0x2079}, {0x2080, 0x2089}, {0x2460, 0x2468}, -{0x2474, 0x247C}, {0x2488, 0x2490}, {0x24EA, 0x24EA}, {0x24F5, 0x24FD}, {0x24FF, 0x24FF}, {0x2776, 0x277E}, {0x2780, 0x2788}, {0x278A, 0x2792}, {0xA620, 0xA629}, {0xA8D0, 0xA8D9}, {0xA900, 0xA909}, -{0xA9D0, 0xA9D9}, {0xA9F0, 0xA9F9}, {0xAA50, 0xAA59}, {0xABF0, 0xABF9}, {0xFF10, 0xFF19}, {0x104A0, 0x104A9}, {0x10A40, 0x10A43}, {0x10D30, 0x10D39}, {0x10E60, 0x10E68}, {0x11052, 0x1105A}, -{0x11066, 0x1106F}, {0x110F0, 0x110F9}, {0x11136, 0x1113F}, {0x111D0, 0x111D9}, {0x112F0, 0x112F9}, {0x11450, 0x11459}, {0x114D0, 0x114D9}, {0x11650, 0x11659}, {0x116C0, 0x116C9}, {0x11730, 0x11739}, -{0x118E0, 0x118E9}, {0x11950, 0x11959}, {0x11C50, 0x11C59}, {0x11D50, 0x11D59}, {0x11DA0, 0x11DA9}, {0x16A60, 0x16A69}, {0x16B50, 0x16B59}, {0x1D7CE, 0x1D7FF}, {0x1E140, 0x1E149}, {0x1E2F0, 0x1E2F9}, -{0x1E950, 0x1E959}, {0x1F100, 0x1F10A}, {0x1FBF0, 0x1FBF9}, -}; - -static const std::vector> letter_ranges = { -{0x41, 0x5A}, {0x61, 0x7A}, {0xAA, 0xAA}, {0xB5, 0xB5}, {0xBA, 0xBA}, {0xC0, 0xD6}, {0xD8, 0xF6}, {0xF8, 0x2C1}, {0x2C6, 0x2D1}, {0x2E0, 0x2E4}, {0x2EC, 0x2EC}, {0x2EE, 0x2EE}, {0x370, 0x374}, -{0x376, 0x377}, {0x37A, 0x37D}, {0x37F, 0x37F}, {0x386, 0x386}, {0x388, 0x38A}, {0x38C, 0x38C}, {0x38E, 0x3A1}, {0x3A3, 0x3F5}, {0x3F7, 0x481}, {0x48A, 0x52F}, {0x531, 0x556}, {0x559, 0x559}, -{0x560, 0x588}, {0x5D0, 0x5EA}, {0x5EF, 0x5F2}, {0x620, 0x64A}, {0x66E, 0x66F}, {0x671, 0x6D3}, {0x6D5, 0x6D5}, {0x6E5, 0x6E6}, {0x6EE, 0x6EF}, {0x6FA, 0x6FC}, {0x6FF, 0x6FF}, {0x710, 0x710}, -{0x712, 0x72F}, {0x74D, 0x7A5}, {0x7B1, 0x7B1}, {0x7CA, 0x7EA}, {0x7F4, 0x7F5}, {0x7FA, 0x7FA}, {0x800, 0x815}, {0x81A, 0x81A}, {0x824, 0x824}, {0x828, 0x828}, {0x840, 0x858}, {0x860, 0x86A}, -{0x8A0, 0x8B4}, {0x8B6, 0x8C7}, {0x904, 0x939}, {0x93D, 0x93D}, {0x950, 0x950}, {0x958, 0x961}, {0x971, 0x980}, {0x985, 0x98C}, {0x98F, 0x990}, {0x993, 0x9A8}, {0x9AA, 0x9B0}, {0x9B2, 0x9B2}, -{0x9B6, 0x9B9}, {0x9BD, 0x9BD}, {0x9CE, 0x9CE}, {0x9DC, 0x9DD}, {0x9DF, 0x9E1}, {0x9F0, 0x9F1}, {0x9FC, 0x9FC}, {0xA05, 0xA0A}, {0xA0F, 0xA10}, {0xA13, 0xA28}, {0xA2A, 0xA30}, {0xA32, 0xA33}, -{0xA35, 0xA36}, {0xA38, 0xA39}, {0xA59, 0xA5C}, {0xA5E, 0xA5E}, {0xA72, 0xA74}, {0xA85, 0xA8D}, {0xA8F, 0xA91}, {0xA93, 0xAA8}, {0xAAA, 0xAB0}, {0xAB2, 0xAB3}, {0xAB5, 0xAB9}, {0xABD, 0xABD}, -{0xAD0, 0xAD0}, {0xAE0, 0xAE1}, {0xAF9, 0xAF9}, {0xB05, 0xB0C}, {0xB0F, 0xB10}, {0xB13, 0xB28}, {0xB2A, 0xB30}, {0xB32, 0xB33}, {0xB35, 0xB39}, {0xB3D, 0xB3D}, {0xB5C, 0xB5D}, {0xB5F, 0xB61}, -{0xB71, 0xB71}, {0xB83, 0xB83}, {0xB85, 0xB8A}, {0xB8E, 0xB90}, {0xB92, 0xB95}, {0xB99, 0xB9A}, {0xB9C, 0xB9C}, {0xB9E, 0xB9F}, {0xBA3, 0xBA4}, {0xBA8, 0xBAA}, {0xBAE, 0xBB9}, {0xBD0, 0xBD0}, -{0xC05, 0xC0C}, {0xC0E, 0xC10}, {0xC12, 0xC28}, {0xC2A, 0xC39}, {0xC3D, 0xC3D}, {0xC58, 0xC5A}, {0xC60, 0xC61}, {0xC80, 0xC80}, {0xC85, 0xC8C}, {0xC8E, 0xC90}, {0xC92, 0xCA8}, {0xCAA, 0xCB3}, -{0xCB5, 0xCB9}, {0xCBD, 0xCBD}, {0xCDE, 0xCDE}, {0xCE0, 0xCE1}, {0xCF1, 0xCF2}, {0xD04, 0xD0C}, {0xD0E, 0xD10}, {0xD12, 0xD3A}, {0xD3D, 0xD3D}, {0xD4E, 0xD4E}, {0xD54, 0xD56}, 
{0xD5F, 0xD61}, -{0xD7A, 0xD7F}, {0xD85, 0xD96}, {0xD9A, 0xDB1}, {0xDB3, 0xDBB}, {0xDBD, 0xDBD}, {0xDC0, 0xDC6}, {0xE01, 0xE30}, {0xE32, 0xE33}, {0xE40, 0xE46}, {0xE81, 0xE82}, {0xE84, 0xE84}, {0xE86, 0xE8A}, -{0xE8C, 0xEA3}, {0xEA5, 0xEA5}, {0xEA7, 0xEB0}, {0xEB2, 0xEB3}, {0xEBD, 0xEBD}, {0xEC0, 0xEC4}, {0xEC6, 0xEC6}, {0xEDC, 0xEDF}, {0xF00, 0xF00}, {0xF40, 0xF47}, {0xF49, 0xF6C}, {0xF88, 0xF8C}, -{0x1000, 0x102A}, {0x103F, 0x103F}, {0x1050, 0x1055}, {0x105A, 0x105D}, {0x1061, 0x1061}, {0x1065, 0x1066}, {0x106E, 0x1070}, {0x1075, 0x1081}, {0x108E, 0x108E}, {0x10A0, 0x10C5}, {0x10C7, 0x10C7}, -{0x10CD, 0x10CD}, {0x10D0, 0x10FA}, {0x10FC, 0x1248}, {0x124A, 0x124D}, {0x1250, 0x1256}, {0x1258, 0x1258}, {0x125A, 0x125D}, {0x1260, 0x1288}, {0x128A, 0x128D}, {0x1290, 0x12B0}, {0x12B2, 0x12B5}, -{0x12B8, 0x12BE}, {0x12C0, 0x12C0}, {0x12C2, 0x12C5}, {0x12C8, 0x12D6}, {0x12D8, 0x1310}, {0x1312, 0x1315}, {0x1318, 0x135A}, {0x1380, 0x138F}, {0x13A0, 0x13F5}, {0x13F8, 0x13FD}, {0x1401, 0x166C}, -{0x166F, 0x167F}, {0x1681, 0x169A}, {0x16A0, 0x16EA}, {0x16F1, 0x16F8}, {0x1700, 0x170C}, {0x170E, 0x1711}, {0x1720, 0x1731}, {0x1740, 0x1751}, {0x1760, 0x176C}, {0x176E, 0x1770}, {0x1780, 0x17B3}, -{0x17D7, 0x17D7}, {0x17DC, 0x17DC}, {0x1820, 0x1878}, {0x1880, 0x1884}, {0x1887, 0x18A8}, {0x18AA, 0x18AA}, {0x18B0, 0x18F5}, {0x1900, 0x191E}, {0x1950, 0x196D}, {0x1970, 0x1974}, {0x1980, 0x19AB}, -{0x19B0, 0x19C9}, {0x1A00, 0x1A16}, {0x1A20, 0x1A54}, {0x1AA7, 0x1AA7}, {0x1B05, 0x1B33}, {0x1B45, 0x1B4B}, {0x1B83, 0x1BA0}, {0x1BAE, 0x1BAF}, {0x1BBA, 0x1BE5}, {0x1C00, 0x1C23}, {0x1C4D, 0x1C4F}, -{0x1C5A, 0x1C7D}, {0x1C80, 0x1C88}, {0x1C90, 0x1CBA}, {0x1CBD, 0x1CBF}, {0x1CE9, 0x1CEC}, {0x1CEE, 0x1CF3}, {0x1CF5, 0x1CF6}, {0x1CFA, 0x1CFA}, {0x1D00, 0x1DBF}, {0x1E00, 0x1F15}, {0x1F18, 0x1F1D}, -{0x1F20, 0x1F45}, {0x1F48, 0x1F4D}, {0x1F50, 0x1F57}, {0x1F59, 0x1F59}, {0x1F5B, 0x1F5B}, {0x1F5D, 0x1F5D}, {0x1F5F, 0x1F7D}, {0x1F80, 0x1FB4}, {0x1FB6, 0x1FBC}, {0x1FBE, 0x1FBE}, {0x1FC2, 0x1FC4}, -{0x1FC6, 0x1FCC}, {0x1FD0, 0x1FD3}, {0x1FD6, 0x1FDB}, {0x1FE0, 0x1FEC}, {0x1FF2, 0x1FF4}, {0x1FF6, 0x1FFC}, {0x2071, 0x2071}, {0x207F, 0x207F}, {0x2090, 0x209C}, {0x2102, 0x2102}, {0x2107, 0x2107}, -{0x210A, 0x2113}, {0x2115, 0x2115}, {0x2119, 0x211D}, {0x2124, 0x2124}, {0x2126, 0x2126}, {0x2128, 0x2128}, {0x212A, 0x212D}, {0x212F, 0x2139}, {0x213C, 0x213F}, {0x2145, 0x2149}, {0x214E, 0x214E}, -{0x2183, 0x2184}, {0x2C00, 0x2C2E}, {0x2C30, 0x2C5E}, {0x2C60, 0x2CE4}, {0x2CEB, 0x2CEE}, {0x2CF2, 0x2CF3}, {0x2D00, 0x2D25}, {0x2D27, 0x2D27}, {0x2D2D, 0x2D2D}, {0x2D30, 0x2D67}, {0x2D6F, 0x2D6F}, -{0x2D80, 0x2D96}, {0x2DA0, 0x2DA6}, {0x2DA8, 0x2DAE}, {0x2DB0, 0x2DB6}, {0x2DB8, 0x2DBE}, {0x2DC0, 0x2DC6}, {0x2DC8, 0x2DCE}, {0x2DD0, 0x2DD6}, {0x2DD8, 0x2DDE}, {0x2E2F, 0x2E2F}, {0x3005, 0x3006}, -{0x3031, 0x3035}, {0x303B, 0x303C}, {0x3041, 0x3096}, {0x309D, 0x309F}, {0x30A1, 0x30FA}, {0x30FC, 0x30FF}, {0x3105, 0x312F}, {0x3131, 0x318E}, {0x31A0, 0x31BF}, {0x31F0, 0x31FF}, {0x3400, 0x4DBF}, -{0x4E00, 0x9FFC}, {0xA000, 0xA48C}, {0xA4D0, 0xA4FD}, {0xA500, 0xA60C}, {0xA610, 0xA61F}, {0xA62A, 0xA62B}, {0xA640, 0xA66E}, {0xA67F, 0xA69D}, {0xA6A0, 0xA6E5}, {0xA717, 0xA71F}, {0xA722, 0xA788}, -{0xA78B, 0xA7BF}, {0xA7C2, 0xA7CA}, {0xA7F5, 0xA801}, {0xA803, 0xA805}, {0xA807, 0xA80A}, {0xA80C, 0xA822}, {0xA840, 0xA873}, {0xA882, 0xA8B3}, {0xA8F2, 0xA8F7}, {0xA8FB, 0xA8FB}, {0xA8FD, 0xA8FE}, -{0xA90A, 0xA925}, {0xA930, 0xA946}, {0xA960, 0xA97C}, {0xA984, 0xA9B2}, {0xA9CF, 0xA9CF}, {0xA9E0, 0xA9E4}, {0xA9E6, 0xA9EF}, {0xA9FA, 0xA9FE}, {0xAA00, 0xAA28}, 
{0xAA40, 0xAA42}, {0xAA44, 0xAA4B}, -{0xAA60, 0xAA76}, {0xAA7A, 0xAA7A}, {0xAA7E, 0xAAAF}, {0xAAB1, 0xAAB1}, {0xAAB5, 0xAAB6}, {0xAAB9, 0xAABD}, {0xAAC0, 0xAAC0}, {0xAAC2, 0xAAC2}, {0xAADB, 0xAADD}, {0xAAE0, 0xAAEA}, {0xAAF2, 0xAAF4}, -{0xAB01, 0xAB06}, {0xAB09, 0xAB0E}, {0xAB11, 0xAB16}, {0xAB20, 0xAB26}, {0xAB28, 0xAB2E}, {0xAB30, 0xAB5A}, {0xAB5C, 0xAB69}, {0xAB70, 0xABE2}, {0xAC00, 0xD7A3}, {0xD7B0, 0xD7C6}, {0xD7CB, 0xD7FB}, -{0xF900, 0xFA6D}, {0xFA70, 0xFAD9}, {0xFB00, 0xFB06}, {0xFB13, 0xFB17}, {0xFB1D, 0xFB1D}, {0xFB1F, 0xFB28}, {0xFB2A, 0xFB36}, {0xFB38, 0xFB3C}, {0xFB3E, 0xFB3E}, {0xFB40, 0xFB41}, {0xFB43, 0xFB44}, -{0xFB46, 0xFBB1}, {0xFBD3, 0xFD3D}, {0xFD50, 0xFD8F}, {0xFD92, 0xFDC7}, {0xFDF0, 0xFDFB}, {0xFE70, 0xFE74}, {0xFE76, 0xFEFC}, {0xFF21, 0xFF3A}, {0xFF41, 0xFF5A}, {0xFF66, 0xFFBE}, {0xFFC2, 0xFFC7}, -{0xFFCA, 0xFFCF}, {0xFFD2, 0xFFD7}, {0xFFDA, 0xFFDC}, {0x10000, 0x1000B}, {0x1000D, 0x10026}, {0x10028, 0x1003A}, {0x1003C, 0x1003D}, {0x1003F, 0x1004D}, {0x10050, 0x1005D}, {0x10080, 0x100FA}, -{0x10280, 0x1029C}, {0x102A0, 0x102D0}, {0x10300, 0x1031F}, {0x1032D, 0x10340}, {0x10342, 0x10349}, {0x10350, 0x10375}, {0x10380, 0x1039D}, {0x103A0, 0x103C3}, {0x103C8, 0x103CF}, {0x10400, 0x1049D}, -{0x104B0, 0x104D3}, {0x104D8, 0x104FB}, {0x10500, 0x10527}, {0x10530, 0x10563}, {0x10600, 0x10736}, {0x10740, 0x10755}, {0x10760, 0x10767}, {0x10800, 0x10805}, {0x10808, 0x10808}, {0x1080A, 0x10835}, -{0x10837, 0x10838}, {0x1083C, 0x1083C}, {0x1083F, 0x10855}, {0x10860, 0x10876}, {0x10880, 0x1089E}, {0x108E0, 0x108F2}, {0x108F4, 0x108F5}, {0x10900, 0x10915}, {0x10920, 0x10939}, {0x10980, 0x109B7}, -{0x109BE, 0x109BF}, {0x10A00, 0x10A00}, {0x10A10, 0x10A13}, {0x10A15, 0x10A17}, {0x10A19, 0x10A35}, {0x10A60, 0x10A7C}, {0x10A80, 0x10A9C}, {0x10AC0, 0x10AC7}, {0x10AC9, 0x10AE4}, {0x10B00, 0x10B35}, -{0x10B40, 0x10B55}, {0x10B60, 0x10B72}, {0x10B80, 0x10B91}, {0x10C00, 0x10C48}, {0x10C80, 0x10CB2}, {0x10CC0, 0x10CF2}, {0x10D00, 0x10D23}, {0x10E80, 0x10EA9}, {0x10EB0, 0x10EB1}, {0x10F00, 0x10F1C}, -{0x10F27, 0x10F27}, {0x10F30, 0x10F45}, {0x10FB0, 0x10FC4}, {0x10FE0, 0x10FF6}, {0x11003, 0x11037}, {0x11083, 0x110AF}, {0x110D0, 0x110E8}, {0x11103, 0x11126}, {0x11144, 0x11144}, {0x11147, 0x11147}, -{0x11150, 0x11172}, {0x11176, 0x11176}, {0x11183, 0x111B2}, {0x111C1, 0x111C4}, {0x111DA, 0x111DA}, {0x111DC, 0x111DC}, {0x11200, 0x11211}, {0x11213, 0x1122B}, {0x11280, 0x11286}, {0x11288, 0x11288}, -{0x1128A, 0x1128D}, {0x1128F, 0x1129D}, {0x1129F, 0x112A8}, {0x112B0, 0x112DE}, {0x11305, 0x1130C}, {0x1130F, 0x11310}, {0x11313, 0x11328}, {0x1132A, 0x11330}, {0x11332, 0x11333}, {0x11335, 0x11339}, -{0x1133D, 0x1133D}, {0x11350, 0x11350}, {0x1135D, 0x11361}, {0x11400, 0x11434}, {0x11447, 0x1144A}, {0x1145F, 0x11461}, {0x11480, 0x114AF}, {0x114C4, 0x114C5}, {0x114C7, 0x114C7}, {0x11580, 0x115AE}, -{0x115D8, 0x115DB}, {0x11600, 0x1162F}, {0x11644, 0x11644}, {0x11680, 0x116AA}, {0x116B8, 0x116B8}, {0x11700, 0x1171A}, {0x11800, 0x1182B}, {0x118A0, 0x118DF}, {0x118FF, 0x11906}, {0x11909, 0x11909}, -{0x1190C, 0x11913}, {0x11915, 0x11916}, {0x11918, 0x1192F}, {0x1193F, 0x1193F}, {0x11941, 0x11941}, {0x119A0, 0x119A7}, {0x119AA, 0x119D0}, {0x119E1, 0x119E1}, {0x119E3, 0x119E3}, {0x11A00, 0x11A00}, -{0x11A0B, 0x11A32}, {0x11A3A, 0x11A3A}, {0x11A50, 0x11A50}, {0x11A5C, 0x11A89}, {0x11A9D, 0x11A9D}, {0x11AC0, 0x11AF8}, {0x11C00, 0x11C08}, {0x11C0A, 0x11C2E}, {0x11C40, 0x11C40}, {0x11C72, 0x11C8F}, -{0x11D00, 0x11D06}, {0x11D08, 0x11D09}, {0x11D0B, 0x11D30}, {0x11D46, 0x11D46}, {0x11D60, 0x11D65}, {0x11D67, 
0x11D68}, {0x11D6A, 0x11D89}, {0x11D98, 0x11D98}, {0x11EE0, 0x11EF2}, {0x11FB0, 0x11FB0}, -{0x12000, 0x12399}, {0x12480, 0x12543}, {0x13000, 0x1342E}, {0x14400, 0x14646}, {0x16800, 0x16A38}, {0x16A40, 0x16A5E}, {0x16AD0, 0x16AED}, {0x16B00, 0x16B2F}, {0x16B40, 0x16B43}, {0x16B63, 0x16B77}, -{0x16B7D, 0x16B8F}, {0x16E40, 0x16E7F}, {0x16F00, 0x16F4A}, {0x16F50, 0x16F50}, {0x16F93, 0x16F9F}, {0x16FE0, 0x16FE1}, {0x16FE3, 0x16FE3}, {0x17000, 0x187F7}, {0x18800, 0x18CD5}, {0x18D00, 0x18D08}, -{0x1B000, 0x1B11E}, {0x1B150, 0x1B152}, {0x1B164, 0x1B167}, {0x1B170, 0x1B2FB}, {0x1BC00, 0x1BC6A}, {0x1BC70, 0x1BC7C}, {0x1BC80, 0x1BC88}, {0x1BC90, 0x1BC99}, {0x1D400, 0x1D454}, {0x1D456, 0x1D49C}, -{0x1D49E, 0x1D49F}, {0x1D4A2, 0x1D4A2}, {0x1D4A5, 0x1D4A6}, {0x1D4A9, 0x1D4AC}, {0x1D4AE, 0x1D4B9}, {0x1D4BB, 0x1D4BB}, {0x1D4BD, 0x1D4C3}, {0x1D4C5, 0x1D505}, {0x1D507, 0x1D50A}, {0x1D50D, 0x1D514}, -{0x1D516, 0x1D51C}, {0x1D51E, 0x1D539}, {0x1D53B, 0x1D53E}, {0x1D540, 0x1D544}, {0x1D546, 0x1D546}, {0x1D54A, 0x1D550}, {0x1D552, 0x1D6A5}, {0x1D6A8, 0x1D6C0}, {0x1D6C2, 0x1D6DA}, {0x1D6DC, 0x1D6FA}, -{0x1D6FC, 0x1D714}, {0x1D716, 0x1D734}, {0x1D736, 0x1D74E}, {0x1D750, 0x1D76E}, {0x1D770, 0x1D788}, {0x1D78A, 0x1D7A8}, {0x1D7AA, 0x1D7C2}, {0x1D7C4, 0x1D7CB}, {0x1E100, 0x1E12C}, {0x1E137, 0x1E13D}, -{0x1E14E, 0x1E14E}, {0x1E2C0, 0x1E2EB}, {0x1E800, 0x1E8C4}, {0x1E900, 0x1E943}, {0x1E94B, 0x1E94B}, {0x1EE00, 0x1EE03}, {0x1EE05, 0x1EE1F}, {0x1EE21, 0x1EE22}, {0x1EE24, 0x1EE24}, {0x1EE27, 0x1EE27}, -{0x1EE29, 0x1EE32}, {0x1EE34, 0x1EE37}, {0x1EE39, 0x1EE39}, {0x1EE3B, 0x1EE3B}, {0x1EE42, 0x1EE42}, {0x1EE47, 0x1EE47}, {0x1EE49, 0x1EE49}, {0x1EE4B, 0x1EE4B}, {0x1EE4D, 0x1EE4F}, {0x1EE51, 0x1EE52}, -{0x1EE54, 0x1EE54}, {0x1EE57, 0x1EE57}, {0x1EE59, 0x1EE59}, {0x1EE5B, 0x1EE5B}, {0x1EE5D, 0x1EE5D}, {0x1EE5F, 0x1EE5F}, {0x1EE61, 0x1EE62}, {0x1EE64, 0x1EE64}, {0x1EE67, 0x1EE6A}, {0x1EE6C, 0x1EE72}, -{0x1EE74, 0x1EE77}, {0x1EE79, 0x1EE7C}, {0x1EE7E, 0x1EE7E}, {0x1EE80, 0x1EE89}, {0x1EE8B, 0x1EE9B}, {0x1EEA1, 0x1EEA3}, {0x1EEA5, 0x1EEA9}, {0x1EEAB, 0x1EEBB}, {0x20000, 0x2A6DD}, {0x2A700, 0x2B734}, -{0x2B740, 0x2B81D}, {0x2B820, 0x2CEA1}, {0x2CEB0, 0x2EBE0}, {0x2F800, 0x2FA1D}, {0x30000, 0x3134A}, -}; - -static const std::vector> whitespace_ranges = { -{0x9, 0xD}, {0x1C, 0x20}, {0x85, 0x85}, {0xA0, 0xA0}, {0x1680, 0x1680}, {0x2000, 0x200A}, {0x2028, 0x2029}, {0x202F, 0x202F}, {0x205F, 0x205F}, {0x3000, 0x3000}, -}; - -static const std::vector> accent_mark_ranges = { -{0x300, 0x36F}, {0x483, 0x489}, {0x591, 0x5BD}, {0x5BF, 0x5BF}, {0x5C1, 0x5C2}, {0x5C4, 0x5C5}, {0x5C7, 0x5C7}, {0x610, 0x61A}, {0x64B, 0x65F}, {0x670, 0x670}, {0x6D6, 0x6DC}, {0x6DF, 0x6E4}, -{0x6E7, 0x6E8}, {0x6EA, 0x6ED}, {0x711, 0x711}, {0x730, 0x74A}, {0x7A6, 0x7B0}, {0x7EB, 0x7F3}, {0x7FD, 0x7FD}, {0x816, 0x819}, {0x81B, 0x823}, {0x825, 0x827}, {0x829, 0x82D}, {0x859, 0x85B}, -{0x8D3, 0x8E1}, {0x8E3, 0x903}, {0x93A, 0x93C}, {0x93E, 0x94F}, {0x951, 0x957}, {0x962, 0x963}, {0x981, 0x983}, {0x9BC, 0x9BC}, {0x9BE, 0x9C4}, {0x9C7, 0x9C8}, {0x9CB, 0x9CD}, {0x9D7, 0x9D7}, -{0x9E2, 0x9E3}, {0x9FE, 0x9FE}, {0xA01, 0xA03}, {0xA3C, 0xA3C}, {0xA3E, 0xA42}, {0xA47, 0xA48}, {0xA4B, 0xA4D}, {0xA51, 0xA51}, {0xA70, 0xA71}, {0xA75, 0xA75}, {0xA81, 0xA83}, {0xABC, 0xABC}, -{0xABE, 0xAC5}, {0xAC7, 0xAC9}, {0xACB, 0xACD}, {0xAE2, 0xAE3}, {0xAFA, 0xAFF}, {0xB01, 0xB03}, {0xB3C, 0xB3C}, {0xB3E, 0xB44}, {0xB47, 0xB48}, {0xB4B, 0xB4D}, {0xB55, 0xB57}, {0xB62, 0xB63}, -{0xB82, 0xB82}, {0xBBE, 0xBC2}, {0xBC6, 0xBC8}, {0xBCA, 0xBCD}, {0xBD7, 0xBD7}, {0xC00, 0xC04}, {0xC3E, 0xC44}, 
{0xC46, 0xC48}, {0xC4A, 0xC4D}, {0xC55, 0xC56}, {0xC62, 0xC63}, {0xC81, 0xC83}, -{0xCBC, 0xCBC}, {0xCBE, 0xCC4}, {0xCC6, 0xCC8}, {0xCCA, 0xCCD}, {0xCD5, 0xCD6}, {0xCE2, 0xCE3}, {0xD00, 0xD03}, {0xD3B, 0xD3C}, {0xD3E, 0xD44}, {0xD46, 0xD48}, {0xD4A, 0xD4D}, {0xD57, 0xD57}, -{0xD62, 0xD63}, {0xD81, 0xD83}, {0xDCA, 0xDCA}, {0xDCF, 0xDD4}, {0xDD6, 0xDD6}, {0xDD8, 0xDDF}, {0xDF2, 0xDF3}, {0xE31, 0xE31}, {0xE34, 0xE3A}, {0xE47, 0xE4E}, {0xEB1, 0xEB1}, {0xEB4, 0xEBC}, -{0xEC8, 0xECD}, {0xF18, 0xF19}, {0xF35, 0xF35}, {0xF37, 0xF37}, {0xF39, 0xF39}, {0xF3E, 0xF3F}, {0xF71, 0xF84}, {0xF86, 0xF87}, {0xF8D, 0xF97}, {0xF99, 0xFBC}, {0xFC6, 0xFC6}, {0x102B, 0x103E}, -{0x1056, 0x1059}, {0x105E, 0x1060}, {0x1062, 0x1064}, {0x1067, 0x106D}, {0x1071, 0x1074}, {0x1082, 0x108D}, {0x108F, 0x108F}, {0x109A, 0x109D}, {0x135D, 0x135F}, {0x1712, 0x1714}, {0x1732, 0x1734}, -{0x1752, 0x1753}, {0x1772, 0x1773}, {0x17B4, 0x17D3}, {0x17DD, 0x17DD}, {0x180B, 0x180D}, {0x1885, 0x1886}, {0x18A9, 0x18A9}, {0x1920, 0x192B}, {0x1930, 0x193B}, {0x1A17, 0x1A1B}, {0x1A55, 0x1A5E}, -{0x1A60, 0x1A7C}, {0x1A7F, 0x1A7F}, {0x1AB0, 0x1AC0}, {0x1B00, 0x1B04}, {0x1B34, 0x1B44}, {0x1B6B, 0x1B73}, {0x1B80, 0x1B82}, {0x1BA1, 0x1BAD}, {0x1BE6, 0x1BF3}, {0x1C24, 0x1C37}, {0x1CD0, 0x1CD2}, -{0x1CD4, 0x1CE8}, {0x1CED, 0x1CED}, {0x1CF4, 0x1CF4}, {0x1CF7, 0x1CF9}, {0x1DC0, 0x1DF9}, {0x1DFB, 0x1DFF}, {0x20D0, 0x20F0}, {0x2CEF, 0x2CF1}, {0x2D7F, 0x2D7F}, {0x2DE0, 0x2DFF}, {0x302A, 0x302F}, -{0x3099, 0x309A}, {0xA66F, 0xA672}, {0xA674, 0xA67D}, {0xA69E, 0xA69F}, {0xA6F0, 0xA6F1}, {0xA802, 0xA802}, {0xA806, 0xA806}, {0xA80B, 0xA80B}, {0xA823, 0xA827}, {0xA82C, 0xA82C}, {0xA880, 0xA881}, -{0xA8B4, 0xA8C5}, {0xA8E0, 0xA8F1}, {0xA8FF, 0xA8FF}, {0xA926, 0xA92D}, {0xA947, 0xA953}, {0xA980, 0xA983}, {0xA9B3, 0xA9C0}, {0xA9E5, 0xA9E5}, {0xAA29, 0xAA36}, {0xAA43, 0xAA43}, {0xAA4C, 0xAA4D}, -{0xAA7B, 0xAA7D}, {0xAAB0, 0xAAB0}, {0xAAB2, 0xAAB4}, {0xAAB7, 0xAAB8}, {0xAABE, 0xAABF}, {0xAAC1, 0xAAC1}, {0xAAEB, 0xAAEF}, {0xAAF5, 0xAAF6}, {0xABE3, 0xABEA}, {0xABEC, 0xABED}, {0xFB1E, 0xFB1E}, -{0xFE00, 0xFE0F}, {0xFE20, 0xFE2F}, {0x101FD, 0x101FD}, {0x102E0, 0x102E0}, {0x10376, 0x1037A}, {0x10A01, 0x10A03}, {0x10A05, 0x10A06}, {0x10A0C, 0x10A0F}, {0x10A38, 0x10A3A}, {0x10A3F, 0x10A3F}, -{0x10AE5, 0x10AE6}, {0x10D24, 0x10D27}, {0x10EAB, 0x10EAC}, {0x10F46, 0x10F50}, {0x11000, 0x11002}, {0x11038, 0x11046}, {0x1107F, 0x11082}, {0x110B0, 0x110BA}, {0x11100, 0x11102}, {0x11127, 0x11134}, -{0x11145, 0x11146}, {0x11173, 0x11173}, {0x11180, 0x11182}, {0x111B3, 0x111C0}, {0x111C9, 0x111CC}, {0x111CE, 0x111CF}, {0x1122C, 0x11237}, {0x1123E, 0x1123E}, {0x112DF, 0x112EA}, {0x11300, 0x11303}, -{0x1133B, 0x1133C}, {0x1133E, 0x11344}, {0x11347, 0x11348}, {0x1134B, 0x1134D}, {0x11357, 0x11357}, {0x11362, 0x11363}, {0x11366, 0x1136C}, {0x11370, 0x11374}, {0x11435, 0x11446}, {0x1145E, 0x1145E}, -{0x114B0, 0x114C3}, {0x115AF, 0x115B5}, {0x115B8, 0x115C0}, {0x115DC, 0x115DD}, {0x11630, 0x11640}, {0x116AB, 0x116B7}, {0x1171D, 0x1172B}, {0x1182C, 0x1183A}, {0x11930, 0x11935}, {0x11937, 0x11938}, -{0x1193B, 0x1193E}, {0x11940, 0x11940}, {0x11942, 0x11943}, {0x119D1, 0x119D7}, {0x119DA, 0x119E0}, {0x119E4, 0x119E4}, {0x11A01, 0x11A0A}, {0x11A33, 0x11A39}, {0x11A3B, 0x11A3E}, {0x11A47, 0x11A47}, -{0x11A51, 0x11A5B}, {0x11A8A, 0x11A99}, {0x11C2F, 0x11C36}, {0x11C38, 0x11C3F}, {0x11C92, 0x11CA7}, {0x11CA9, 0x11CB6}, {0x11D31, 0x11D36}, {0x11D3A, 0x11D3A}, {0x11D3C, 0x11D3D}, {0x11D3F, 0x11D45}, -{0x11D47, 0x11D47}, {0x11D8A, 0x11D8E}, {0x11D90, 0x11D91}, {0x11D93, 0x11D97}, {0x11EF3, 
0x11EF6}, {0x16AF0, 0x16AF4}, {0x16B30, 0x16B36}, {0x16F4F, 0x16F4F}, {0x16F51, 0x16F87}, {0x16F8F, 0x16F92}, -{0x16FE4, 0x16FE4}, {0x16FF0, 0x16FF1}, {0x1BC9D, 0x1BC9E}, {0x1D165, 0x1D169}, {0x1D16D, 0x1D172}, {0x1D17B, 0x1D182}, {0x1D185, 0x1D18B}, {0x1D1AA, 0x1D1AD}, {0x1D242, 0x1D244}, {0x1DA00, 0x1DA36}, -{0x1DA3B, 0x1DA6C}, {0x1DA75, 0x1DA75}, {0x1DA84, 0x1DA84}, {0x1DA9B, 0x1DA9F}, {0x1DAA1, 0x1DAAF}, {0x1E000, 0x1E006}, {0x1E008, 0x1E018}, {0x1E01B, 0x1E021}, {0x1E023, 0x1E024}, {0x1E026, 0x1E02A}, -{0x1E130, 0x1E136}, {0x1E2EC, 0x1E2EF}, {0x1E8D0, 0x1E8D6}, {0x1E944, 0x1E94A}, {0xE0100, 0xE01EF}, -}; - -static const std::vector> punctuation_ranges = { -{0x21, 0x23}, {0x25, 0x2A}, {0x2C, 0x2F}, {0x3A, 0x3B}, {0x3F, 0x40}, {0x5B, 0x5D}, {0x5F, 0x5F}, {0x7B, 0x7B}, {0x7D, 0x7D}, {0xA1, 0xA1}, {0xA7, 0xA7}, {0xAB, 0xAB}, {0xB6, 0xB7}, {0xBB, 0xBB}, -{0xBF, 0xBF}, {0x37E, 0x37E}, {0x387, 0x387}, {0x55A, 0x55F}, {0x589, 0x58A}, {0x5BE, 0x5BE}, {0x5C0, 0x5C0}, {0x5C3, 0x5C3}, {0x5C6, 0x5C6}, {0x5F3, 0x5F4}, {0x609, 0x60A}, {0x60C, 0x60D}, -{0x61B, 0x61B}, {0x61E, 0x61F}, {0x66A, 0x66D}, {0x6D4, 0x6D4}, {0x700, 0x70D}, {0x7F7, 0x7F9}, {0x830, 0x83E}, {0x85E, 0x85E}, {0x964, 0x965}, {0x970, 0x970}, {0x9FD, 0x9FD}, {0xA76, 0xA76}, -{0xAF0, 0xAF0}, {0xC77, 0xC77}, {0xC84, 0xC84}, {0xDF4, 0xDF4}, {0xE4F, 0xE4F}, {0xE5A, 0xE5B}, {0xF04, 0xF12}, {0xF14, 0xF14}, {0xF3A, 0xF3D}, {0xF85, 0xF85}, {0xFD0, 0xFD4}, {0xFD9, 0xFDA}, -{0x104A, 0x104F}, {0x10FB, 0x10FB}, {0x1360, 0x1368}, {0x1400, 0x1400}, {0x166E, 0x166E}, {0x169B, 0x169C}, {0x16EB, 0x16ED}, {0x1735, 0x1736}, {0x17D4, 0x17D6}, {0x17D8, 0x17DA}, {0x1800, 0x180A}, -{0x1944, 0x1945}, {0x1A1E, 0x1A1F}, {0x1AA0, 0x1AA6}, {0x1AA8, 0x1AAD}, {0x1B5A, 0x1B60}, {0x1BFC, 0x1BFF}, {0x1C3B, 0x1C3F}, {0x1C7E, 0x1C7F}, {0x1CC0, 0x1CC7}, {0x1CD3, 0x1CD3}, {0x2010, 0x2027}, -{0x2030, 0x2043}, {0x2045, 0x2051}, {0x2053, 0x205E}, {0x207D, 0x207E}, {0x208D, 0x208E}, {0x2308, 0x230B}, {0x2329, 0x232A}, {0x2768, 0x2775}, {0x27C5, 0x27C6}, {0x27E6, 0x27EF}, {0x2983, 0x2998}, -{0x29D8, 0x29DB}, {0x29FC, 0x29FD}, {0x2CF9, 0x2CFC}, {0x2CFE, 0x2CFF}, {0x2D70, 0x2D70}, {0x2E00, 0x2E2E}, {0x2E30, 0x2E4F}, {0x2E52, 0x2E52}, {0x3001, 0x3003}, {0x3008, 0x3011}, {0x3014, 0x301F}, -{0x3030, 0x3030}, {0x303D, 0x303D}, {0x30A0, 0x30A0}, {0x30FB, 0x30FB}, {0xA4FE, 0xA4FF}, {0xA60D, 0xA60F}, {0xA673, 0xA673}, {0xA67E, 0xA67E}, {0xA6F2, 0xA6F7}, {0xA874, 0xA877}, {0xA8CE, 0xA8CF}, -{0xA8F8, 0xA8FA}, {0xA8FC, 0xA8FC}, {0xA92E, 0xA92F}, {0xA95F, 0xA95F}, {0xA9C1, 0xA9CD}, {0xA9DE, 0xA9DF}, {0xAA5C, 0xAA5F}, {0xAADE, 0xAADF}, {0xAAF0, 0xAAF1}, {0xABEB, 0xABEB}, {0xFD3E, 0xFD3F}, -{0xFE10, 0xFE19}, {0xFE30, 0xFE52}, {0xFE54, 0xFE61}, {0xFE63, 0xFE63}, {0xFE68, 0xFE68}, {0xFE6A, 0xFE6B}, {0xFF01, 0xFF03}, {0xFF05, 0xFF0A}, {0xFF0C, 0xFF0F}, {0xFF1A, 0xFF1B}, {0xFF1F, 0xFF20}, -{0xFF3B, 0xFF3D}, {0xFF3F, 0xFF3F}, {0xFF5B, 0xFF5B}, {0xFF5D, 0xFF5D}, {0xFF5F, 0xFF65}, {0x10100, 0x10102}, {0x1039F, 0x1039F}, {0x103D0, 0x103D0}, {0x1056F, 0x1056F}, {0x10857, 0x10857}, -{0x1091F, 0x1091F}, {0x1093F, 0x1093F}, {0x10A50, 0x10A58}, {0x10A7F, 0x10A7F}, {0x10AF0, 0x10AF6}, {0x10B39, 0x10B3F}, {0x10B99, 0x10B9C}, {0x10EAD, 0x10EAD}, {0x10F55, 0x10F59}, {0x11047, 0x1104D}, -{0x110BB, 0x110BC}, {0x110BE, 0x110C1}, {0x11140, 0x11143}, {0x11174, 0x11175}, {0x111C5, 0x111C8}, {0x111CD, 0x111CD}, {0x111DB, 0x111DB}, {0x111DD, 0x111DF}, {0x11238, 0x1123D}, {0x112A9, 0x112A9}, -{0x1144B, 0x1144F}, {0x1145A, 0x1145B}, {0x1145D, 0x1145D}, {0x114C6, 0x114C6}, {0x115C1, 0x115D7}, {0x11641, 0x11643}, 
{0x11660, 0x1166C}, {0x1173C, 0x1173E}, {0x1183B, 0x1183B}, {0x11944, 0x11946}, -{0x119E2, 0x119E2}, {0x11A3F, 0x11A46}, {0x11A9A, 0x11A9C}, {0x11A9E, 0x11AA2}, {0x11C41, 0x11C45}, {0x11C70, 0x11C71}, {0x11EF7, 0x11EF8}, {0x11FFF, 0x11FFF}, {0x12470, 0x12474}, {0x16A6E, 0x16A6F}, -{0x16AF5, 0x16AF5}, {0x16B37, 0x16B3B}, {0x16B44, 0x16B44}, {0x16E97, 0x16E9A}, {0x16FE2, 0x16FE2}, {0x1BC9F, 0x1BC9F}, {0x1DA87, 0x1DA8B}, {0x1E95E, 0x1E95F}, -}; - -static const std::vector> symbol_ranges = { -{0x24, 0x24}, {0x2B, 0x2B}, {0x3C, 0x3E}, {0x5E, 0x5E}, {0x60, 0x60}, {0x7C, 0x7C}, {0x7E, 0x7E}, {0xA2, 0xA6}, {0xA8, 0xA9}, {0xAC, 0xAC}, {0xAE, 0xB1}, {0xB4, 0xB4}, {0xB8, 0xB8}, {0xD7, 0xD7}, -{0xF7, 0xF7}, {0x2C2, 0x2C5}, {0x2D2, 0x2DF}, {0x2E5, 0x2EB}, {0x2ED, 0x2ED}, {0x2EF, 0x2FF}, {0x375, 0x375}, {0x384, 0x385}, {0x3F6, 0x3F6}, {0x482, 0x482}, {0x58D, 0x58F}, {0x606, 0x608}, -{0x60B, 0x60B}, {0x60E, 0x60F}, {0x6DE, 0x6DE}, {0x6E9, 0x6E9}, {0x6FD, 0x6FE}, {0x7F6, 0x7F6}, {0x7FE, 0x7FF}, {0x9F2, 0x9F3}, {0x9FA, 0x9FB}, {0xAF1, 0xAF1}, {0xB70, 0xB70}, {0xBF3, 0xBFA}, -{0xC7F, 0xC7F}, {0xD4F, 0xD4F}, {0xD79, 0xD79}, {0xE3F, 0xE3F}, {0xF01, 0xF03}, {0xF13, 0xF13}, {0xF15, 0xF17}, {0xF1A, 0xF1F}, {0xF34, 0xF34}, {0xF36, 0xF36}, {0xF38, 0xF38}, {0xFBE, 0xFC5}, -{0xFC7, 0xFCC}, {0xFCE, 0xFCF}, {0xFD5, 0xFD8}, {0x109E, 0x109F}, {0x1390, 0x1399}, {0x166D, 0x166D}, {0x17DB, 0x17DB}, {0x1940, 0x1940}, {0x19DE, 0x19FF}, {0x1B61, 0x1B6A}, {0x1B74, 0x1B7C}, -{0x1FBD, 0x1FBD}, {0x1FBF, 0x1FC1}, {0x1FCD, 0x1FCF}, {0x1FDD, 0x1FDF}, {0x1FED, 0x1FEF}, {0x1FFD, 0x1FFE}, {0x2044, 0x2044}, {0x2052, 0x2052}, {0x207A, 0x207C}, {0x208A, 0x208C}, {0x20A0, 0x20BF}, -{0x2100, 0x2101}, {0x2103, 0x2106}, {0x2108, 0x2109}, {0x2114, 0x2114}, {0x2116, 0x2118}, {0x211E, 0x2123}, {0x2125, 0x2125}, {0x2127, 0x2127}, {0x2129, 0x2129}, {0x212E, 0x212E}, {0x213A, 0x213B}, -{0x2140, 0x2144}, {0x214A, 0x214D}, {0x214F, 0x214F}, {0x218A, 0x218B}, {0x2190, 0x2307}, {0x230C, 0x2328}, {0x232B, 0x2426}, {0x2440, 0x244A}, {0x249C, 0x24E9}, {0x2500, 0x2767}, {0x2794, 0x27C4}, -{0x27C7, 0x27E5}, {0x27F0, 0x2982}, {0x2999, 0x29D7}, {0x29DC, 0x29FB}, {0x29FE, 0x2B73}, {0x2B76, 0x2B95}, {0x2B97, 0x2BFF}, {0x2CE5, 0x2CEA}, {0x2E50, 0x2E51}, {0x2E80, 0x2E99}, {0x2E9B, 0x2EF3}, -{0x2F00, 0x2FD5}, {0x2FF0, 0x2FFB}, {0x3004, 0x3004}, {0x3012, 0x3013}, {0x3020, 0x3020}, {0x3036, 0x3037}, {0x303E, 0x303F}, {0x309B, 0x309C}, {0x3190, 0x3191}, {0x3196, 0x319F}, {0x31C0, 0x31E3}, -{0x3200, 0x321E}, {0x322A, 0x3247}, {0x3250, 0x3250}, {0x3260, 0x327F}, {0x328A, 0x32B0}, {0x32C0, 0x33FF}, {0x4DC0, 0x4DFF}, {0xA490, 0xA4C6}, {0xA700, 0xA716}, {0xA720, 0xA721}, {0xA789, 0xA78A}, -{0xA828, 0xA82B}, {0xA836, 0xA839}, {0xAA77, 0xAA79}, {0xAB5B, 0xAB5B}, {0xAB6A, 0xAB6B}, {0xFB29, 0xFB29}, {0xFBB2, 0xFBC1}, {0xFDFC, 0xFDFD}, {0xFE62, 0xFE62}, {0xFE64, 0xFE66}, {0xFE69, 0xFE69}, -{0xFF04, 0xFF04}, {0xFF0B, 0xFF0B}, {0xFF1C, 0xFF1E}, {0xFF3E, 0xFF3E}, {0xFF40, 0xFF40}, {0xFF5C, 0xFF5C}, {0xFF5E, 0xFF5E}, {0xFFE0, 0xFFE6}, {0xFFE8, 0xFFEE}, {0xFFFC, 0xFFFD}, {0x10137, 0x1013F}, -{0x10179, 0x10189}, {0x1018C, 0x1018E}, {0x10190, 0x1019C}, {0x101A0, 0x101A0}, {0x101D0, 0x101FC}, {0x10877, 0x10878}, {0x10AC8, 0x10AC8}, {0x1173F, 0x1173F}, {0x11FD5, 0x11FF1}, {0x16B3C, 0x16B3F}, -{0x16B45, 0x16B45}, {0x1BC9C, 0x1BC9C}, {0x1D000, 0x1D0F5}, {0x1D100, 0x1D126}, {0x1D129, 0x1D164}, {0x1D16A, 0x1D16C}, {0x1D183, 0x1D184}, {0x1D18C, 0x1D1A9}, {0x1D1AE, 0x1D1E8}, {0x1D200, 0x1D241}, -{0x1D245, 0x1D245}, {0x1D300, 0x1D356}, {0x1D6C1, 0x1D6C1}, {0x1D6DB, 0x1D6DB}, {0x1D6FB, 
0x1D6FB}, {0x1D715, 0x1D715}, {0x1D735, 0x1D735}, {0x1D74F, 0x1D74F}, {0x1D76F, 0x1D76F}, {0x1D789, 0x1D789}, -{0x1D7A9, 0x1D7A9}, {0x1D7C3, 0x1D7C3}, {0x1D800, 0x1D9FF}, {0x1DA37, 0x1DA3A}, {0x1DA6D, 0x1DA74}, {0x1DA76, 0x1DA83}, {0x1DA85, 0x1DA86}, {0x1E14F, 0x1E14F}, {0x1E2FF, 0x1E2FF}, {0x1ECAC, 0x1ECAC}, -{0x1ECB0, 0x1ECB0}, {0x1ED2E, 0x1ED2E}, {0x1EEF0, 0x1EEF1}, {0x1F000, 0x1F02B}, {0x1F030, 0x1F093}, {0x1F0A0, 0x1F0AE}, {0x1F0B1, 0x1F0BF}, {0x1F0C1, 0x1F0CF}, {0x1F0D1, 0x1F0F5}, {0x1F10D, 0x1F1AD}, -{0x1F1E6, 0x1F202}, {0x1F210, 0x1F23B}, {0x1F240, 0x1F248}, {0x1F250, 0x1F251}, {0x1F260, 0x1F265}, {0x1F300, 0x1F6D7}, {0x1F6E0, 0x1F6EC}, {0x1F6F0, 0x1F6FC}, {0x1F700, 0x1F773}, {0x1F780, 0x1F7D8}, -{0x1F7E0, 0x1F7EB}, {0x1F800, 0x1F80B}, {0x1F810, 0x1F847}, {0x1F850, 0x1F859}, {0x1F860, 0x1F887}, {0x1F890, 0x1F8AD}, {0x1F8B0, 0x1F8B1}, {0x1F900, 0x1F978}, {0x1F97A, 0x1F9CB}, {0x1F9CD, 0x1FA53}, -{0x1FA60, 0x1FA6D}, {0x1FA70, 0x1FA74}, {0x1FA78, 0x1FA7A}, {0x1FA80, 0x1FA86}, {0x1FA90, 0x1FAA8}, {0x1FAB0, 0x1FAB6}, {0x1FAC0, 0x1FAC2}, {0x1FAD0, 0x1FAD6}, {0x1FB00, 0x1FB92}, {0x1FB94, 0x1FBCA}, -}; - -static const std::vector> control_ranges = { -{0x0, 0x8}, {0xE, 0x1B}, {0x7F, 0x84}, {0x86, 0x9F}, {0xAD, 0xAD}, {0x378, 0x379}, {0x380, 0x383}, {0x38B, 0x38B}, {0x38D, 0x38D}, {0x3A2, 0x3A2}, {0x530, 0x530}, {0x557, 0x558}, {0x58B, 0x58C}, -{0x590, 0x590}, {0x5C8, 0x5CF}, {0x5EB, 0x5EE}, {0x5F5, 0x605}, {0x61C, 0x61D}, {0x6DD, 0x6DD}, {0x70E, 0x70F}, {0x74B, 0x74C}, {0x7B2, 0x7BF}, {0x7FB, 0x7FC}, {0x82E, 0x82F}, {0x83F, 0x83F}, -{0x85C, 0x85D}, {0x85F, 0x85F}, {0x86B, 0x89F}, {0x8B5, 0x8B5}, {0x8C8, 0x8D2}, {0x8E2, 0x8E2}, {0x984, 0x984}, {0x98D, 0x98E}, {0x991, 0x992}, {0x9A9, 0x9A9}, {0x9B1, 0x9B1}, {0x9B3, 0x9B5}, -{0x9BA, 0x9BB}, {0x9C5, 0x9C6}, {0x9C9, 0x9CA}, {0x9CF, 0x9D6}, {0x9D8, 0x9DB}, {0x9DE, 0x9DE}, {0x9E4, 0x9E5}, {0x9FF, 0xA00}, {0xA04, 0xA04}, {0xA0B, 0xA0E}, {0xA11, 0xA12}, {0xA29, 0xA29}, -{0xA31, 0xA31}, {0xA34, 0xA34}, {0xA37, 0xA37}, {0xA3A, 0xA3B}, {0xA3D, 0xA3D}, {0xA43, 0xA46}, {0xA49, 0xA4A}, {0xA4E, 0xA50}, {0xA52, 0xA58}, {0xA5D, 0xA5D}, {0xA5F, 0xA65}, {0xA77, 0xA80}, -{0xA84, 0xA84}, {0xA8E, 0xA8E}, {0xA92, 0xA92}, {0xAA9, 0xAA9}, {0xAB1, 0xAB1}, {0xAB4, 0xAB4}, {0xABA, 0xABB}, {0xAC6, 0xAC6}, {0xACA, 0xACA}, {0xACE, 0xACF}, {0xAD1, 0xADF}, {0xAE4, 0xAE5}, -{0xAF2, 0xAF8}, {0xB00, 0xB00}, {0xB04, 0xB04}, {0xB0D, 0xB0E}, {0xB11, 0xB12}, {0xB29, 0xB29}, {0xB31, 0xB31}, {0xB34, 0xB34}, {0xB3A, 0xB3B}, {0xB45, 0xB46}, {0xB49, 0xB4A}, {0xB4E, 0xB54}, -{0xB58, 0xB5B}, {0xB5E, 0xB5E}, {0xB64, 0xB65}, {0xB78, 0xB81}, {0xB84, 0xB84}, {0xB8B, 0xB8D}, {0xB91, 0xB91}, {0xB96, 0xB98}, {0xB9B, 0xB9B}, {0xB9D, 0xB9D}, {0xBA0, 0xBA2}, {0xBA5, 0xBA7}, -{0xBAB, 0xBAD}, {0xBBA, 0xBBD}, {0xBC3, 0xBC5}, {0xBC9, 0xBC9}, {0xBCE, 0xBCF}, {0xBD1, 0xBD6}, {0xBD8, 0xBE5}, {0xBFB, 0xBFF}, {0xC0D, 0xC0D}, {0xC11, 0xC11}, {0xC29, 0xC29}, {0xC3A, 0xC3C}, -{0xC45, 0xC45}, {0xC49, 0xC49}, {0xC4E, 0xC54}, {0xC57, 0xC57}, {0xC5B, 0xC5F}, {0xC64, 0xC65}, {0xC70, 0xC76}, {0xC8D, 0xC8D}, {0xC91, 0xC91}, {0xCA9, 0xCA9}, {0xCB4, 0xCB4}, {0xCBA, 0xCBB}, -{0xCC5, 0xCC5}, {0xCC9, 0xCC9}, {0xCCE, 0xCD4}, {0xCD7, 0xCDD}, {0xCDF, 0xCDF}, {0xCE4, 0xCE5}, {0xCF0, 0xCF0}, {0xCF3, 0xCFF}, {0xD0D, 0xD0D}, {0xD11, 0xD11}, {0xD45, 0xD45}, {0xD49, 0xD49}, -{0xD50, 0xD53}, {0xD64, 0xD65}, {0xD80, 0xD80}, {0xD84, 0xD84}, {0xD97, 0xD99}, {0xDB2, 0xDB2}, {0xDBC, 0xDBC}, {0xDBE, 0xDBF}, {0xDC7, 0xDC9}, {0xDCB, 0xDCE}, {0xDD5, 0xDD5}, {0xDD7, 0xDD7}, -{0xDE0, 0xDE5}, {0xDF0, 0xDF1}, {0xDF5, 0xE00}, {0xE3B, 0xE3E}, 
{0xE5C, 0xE80}, {0xE83, 0xE83}, {0xE85, 0xE85}, {0xE8B, 0xE8B}, {0xEA4, 0xEA4}, {0xEA6, 0xEA6}, {0xEBE, 0xEBF}, {0xEC5, 0xEC5}, -{0xEC7, 0xEC7}, {0xECE, 0xECF}, {0xEDA, 0xEDB}, {0xEE0, 0xEFF}, {0xF48, 0xF48}, {0xF6D, 0xF70}, {0xF98, 0xF98}, {0xFBD, 0xFBD}, {0xFCD, 0xFCD}, {0xFDB, 0xFFF}, {0x10C6, 0x10C6}, {0x10C8, 0x10CC}, -{0x10CE, 0x10CF}, {0x1249, 0x1249}, {0x124E, 0x124F}, {0x1257, 0x1257}, {0x1259, 0x1259}, {0x125E, 0x125F}, {0x1289, 0x1289}, {0x128E, 0x128F}, {0x12B1, 0x12B1}, {0x12B6, 0x12B7}, {0x12BF, 0x12BF}, -{0x12C1, 0x12C1}, {0x12C6, 0x12C7}, {0x12D7, 0x12D7}, {0x1311, 0x1311}, {0x1316, 0x1317}, {0x135B, 0x135C}, {0x137D, 0x137F}, {0x139A, 0x139F}, {0x13F6, 0x13F7}, {0x13FE, 0x13FF}, {0x169D, 0x169F}, -{0x16F9, 0x16FF}, {0x170D, 0x170D}, {0x1715, 0x171F}, {0x1737, 0x173F}, {0x1754, 0x175F}, {0x176D, 0x176D}, {0x1771, 0x1771}, {0x1774, 0x177F}, {0x17DE, 0x17DF}, {0x17EA, 0x17EF}, {0x17FA, 0x17FF}, -{0x180E, 0x180F}, {0x181A, 0x181F}, {0x1879, 0x187F}, {0x18AB, 0x18AF}, {0x18F6, 0x18FF}, {0x191F, 0x191F}, {0x192C, 0x192F}, {0x193C, 0x193F}, {0x1941, 0x1943}, {0x196E, 0x196F}, {0x1975, 0x197F}, -{0x19AC, 0x19AF}, {0x19CA, 0x19CF}, {0x19DB, 0x19DD}, {0x1A1C, 0x1A1D}, {0x1A5F, 0x1A5F}, {0x1A7D, 0x1A7E}, {0x1A8A, 0x1A8F}, {0x1A9A, 0x1A9F}, {0x1AAE, 0x1AAF}, {0x1AC1, 0x1AFF}, {0x1B4C, 0x1B4F}, -{0x1B7D, 0x1B7F}, {0x1BF4, 0x1BFB}, {0x1C38, 0x1C3A}, {0x1C4A, 0x1C4C}, {0x1C89, 0x1C8F}, {0x1CBB, 0x1CBC}, {0x1CC8, 0x1CCF}, {0x1CFB, 0x1CFF}, {0x1DFA, 0x1DFA}, {0x1F16, 0x1F17}, {0x1F1E, 0x1F1F}, -{0x1F46, 0x1F47}, {0x1F4E, 0x1F4F}, {0x1F58, 0x1F58}, {0x1F5A, 0x1F5A}, {0x1F5C, 0x1F5C}, {0x1F5E, 0x1F5E}, {0x1F7E, 0x1F7F}, {0x1FB5, 0x1FB5}, {0x1FC5, 0x1FC5}, {0x1FD4, 0x1FD5}, {0x1FDC, 0x1FDC}, -{0x1FF0, 0x1FF1}, {0x1FF5, 0x1FF5}, {0x1FFF, 0x1FFF}, {0x200B, 0x200F}, {0x202A, 0x202E}, {0x2060, 0x206F}, {0x2072, 0x2073}, {0x208F, 0x208F}, {0x209D, 0x209F}, {0x20C0, 0x20CF}, {0x20F1, 0x20FF}, -{0x218C, 0x218F}, {0x2427, 0x243F}, {0x244B, 0x245F}, {0x2B74, 0x2B75}, {0x2B96, 0x2B96}, {0x2C2F, 0x2C2F}, {0x2C5F, 0x2C5F}, {0x2CF4, 0x2CF8}, {0x2D26, 0x2D26}, {0x2D28, 0x2D2C}, {0x2D2E, 0x2D2F}, -{0x2D68, 0x2D6E}, {0x2D71, 0x2D7E}, {0x2D97, 0x2D9F}, {0x2DA7, 0x2DA7}, {0x2DAF, 0x2DAF}, {0x2DB7, 0x2DB7}, {0x2DBF, 0x2DBF}, {0x2DC7, 0x2DC7}, {0x2DCF, 0x2DCF}, {0x2DD7, 0x2DD7}, {0x2DDF, 0x2DDF}, -{0x2E53, 0x2E7F}, {0x2E9A, 0x2E9A}, {0x2EF4, 0x2EFF}, {0x2FD6, 0x2FEF}, {0x2FFC, 0x2FFF}, {0x3040, 0x3040}, {0x3097, 0x3098}, {0x3100, 0x3104}, {0x3130, 0x3130}, {0x318F, 0x318F}, {0x31E4, 0x31EF}, -{0x321F, 0x321F}, {0x9FFD, 0x9FFF}, {0xA48D, 0xA48F}, {0xA4C7, 0xA4CF}, {0xA62C, 0xA63F}, {0xA6F8, 0xA6FF}, {0xA7C0, 0xA7C1}, {0xA7CB, 0xA7F4}, {0xA82D, 0xA82F}, {0xA83A, 0xA83F}, {0xA878, 0xA87F}, -{0xA8C6, 0xA8CD}, {0xA8DA, 0xA8DF}, {0xA954, 0xA95E}, {0xA97D, 0xA97F}, {0xA9CE, 0xA9CE}, {0xA9DA, 0xA9DD}, {0xA9FF, 0xA9FF}, {0xAA37, 0xAA3F}, {0xAA4E, 0xAA4F}, {0xAA5A, 0xAA5B}, {0xAAC3, 0xAADA}, -{0xAAF7, 0xAB00}, {0xAB07, 0xAB08}, {0xAB0F, 0xAB10}, {0xAB17, 0xAB1F}, {0xAB27, 0xAB27}, {0xAB2F, 0xAB2F}, {0xAB6C, 0xAB6F}, {0xABEE, 0xABEF}, {0xABFA, 0xABFF}, {0xD7A4, 0xD7AF}, {0xD7C7, 0xD7CA}, -{0xD7FC, 0xF8FF}, {0xFA6E, 0xFA6F}, {0xFADA, 0xFAFF}, {0xFB07, 0xFB12}, {0xFB18, 0xFB1C}, {0xFB37, 0xFB37}, {0xFB3D, 0xFB3D}, {0xFB3F, 0xFB3F}, {0xFB42, 0xFB42}, {0xFB45, 0xFB45}, {0xFBC2, 0xFBD2}, -{0xFD40, 0xFD4F}, {0xFD90, 0xFD91}, {0xFDC8, 0xFDEF}, {0xFDFE, 0xFDFF}, {0xFE1A, 0xFE1F}, {0xFE53, 0xFE53}, {0xFE67, 0xFE67}, {0xFE6C, 0xFE6F}, {0xFE75, 0xFE75}, {0xFEFD, 0xFF00}, {0xFFBF, 0xFFC1}, -{0xFFC8, 0xFFC9}, {0xFFD0, 0xFFD1}, {0xFFD8, 
0xFFD9}, {0xFFDD, 0xFFDF}, {0xFFE7, 0xFFE7}, {0xFFEF, 0xFFFB}, {0xFFFE, 0xFFFF}, {0x1000C, 0x1000C}, {0x10027, 0x10027}, {0x1003B, 0x1003B}, -{0x1003E, 0x1003E}, {0x1004E, 0x1004F}, {0x1005E, 0x1007F}, {0x100FB, 0x100FF}, {0x10103, 0x10106}, {0x10134, 0x10136}, {0x1018F, 0x1018F}, {0x1019D, 0x1019F}, {0x101A1, 0x101CF}, {0x101FE, 0x1027F}, -{0x1029D, 0x1029F}, {0x102D1, 0x102DF}, {0x102FC, 0x102FF}, {0x10324, 0x1032C}, {0x1034B, 0x1034F}, {0x1037B, 0x1037F}, {0x1039E, 0x1039E}, {0x103C4, 0x103C7}, {0x103D6, 0x103FF}, {0x1049E, 0x1049F}, -{0x104AA, 0x104AF}, {0x104D4, 0x104D7}, {0x104FC, 0x104FF}, {0x10528, 0x1052F}, {0x10564, 0x1056E}, {0x10570, 0x105FF}, {0x10737, 0x1073F}, {0x10756, 0x1075F}, {0x10768, 0x107FF}, {0x10806, 0x10807}, -{0x10809, 0x10809}, {0x10836, 0x10836}, {0x10839, 0x1083B}, {0x1083D, 0x1083E}, {0x10856, 0x10856}, {0x1089F, 0x108A6}, {0x108B0, 0x108DF}, {0x108F3, 0x108F3}, {0x108F6, 0x108FA}, {0x1091C, 0x1091E}, -{0x1093A, 0x1093E}, {0x10940, 0x1097F}, {0x109B8, 0x109BB}, {0x109D0, 0x109D1}, {0x10A04, 0x10A04}, {0x10A07, 0x10A0B}, {0x10A14, 0x10A14}, {0x10A18, 0x10A18}, {0x10A36, 0x10A37}, {0x10A3B, 0x10A3E}, -{0x10A49, 0x10A4F}, {0x10A59, 0x10A5F}, {0x10AA0, 0x10ABF}, {0x10AE7, 0x10AEA}, {0x10AF7, 0x10AFF}, {0x10B36, 0x10B38}, {0x10B56, 0x10B57}, {0x10B73, 0x10B77}, {0x10B92, 0x10B98}, {0x10B9D, 0x10BA8}, -{0x10BB0, 0x10BFF}, {0x10C49, 0x10C7F}, {0x10CB3, 0x10CBF}, {0x10CF3, 0x10CF9}, {0x10D28, 0x10D2F}, {0x10D3A, 0x10E5F}, {0x10E7F, 0x10E7F}, {0x10EAA, 0x10EAA}, {0x10EAE, 0x10EAF}, {0x10EB2, 0x10EFF}, -{0x10F28, 0x10F2F}, {0x10F5A, 0x10FAF}, {0x10FCC, 0x10FDF}, {0x10FF7, 0x10FFF}, {0x1104E, 0x11051}, {0x11070, 0x1107E}, {0x110BD, 0x110BD}, {0x110C2, 0x110CF}, {0x110E9, 0x110EF}, {0x110FA, 0x110FF}, -{0x11135, 0x11135}, {0x11148, 0x1114F}, {0x11177, 0x1117F}, {0x111E0, 0x111E0}, {0x111F5, 0x111FF}, {0x11212, 0x11212}, {0x1123F, 0x1127F}, {0x11287, 0x11287}, {0x11289, 0x11289}, {0x1128E, 0x1128E}, -{0x1129E, 0x1129E}, {0x112AA, 0x112AF}, {0x112EB, 0x112EF}, {0x112FA, 0x112FF}, {0x11304, 0x11304}, {0x1130D, 0x1130E}, {0x11311, 0x11312}, {0x11329, 0x11329}, {0x11331, 0x11331}, {0x11334, 0x11334}, -{0x1133A, 0x1133A}, {0x11345, 0x11346}, {0x11349, 0x1134A}, {0x1134E, 0x1134F}, {0x11351, 0x11356}, {0x11358, 0x1135C}, {0x11364, 0x11365}, {0x1136D, 0x1136F}, {0x11375, 0x113FF}, {0x1145C, 0x1145C}, -{0x11462, 0x1147F}, {0x114C8, 0x114CF}, {0x114DA, 0x1157F}, {0x115B6, 0x115B7}, {0x115DE, 0x115FF}, {0x11645, 0x1164F}, {0x1165A, 0x1165F}, {0x1166D, 0x1167F}, {0x116B9, 0x116BF}, {0x116CA, 0x116FF}, -{0x1171B, 0x1171C}, {0x1172C, 0x1172F}, {0x11740, 0x117FF}, {0x1183C, 0x1189F}, {0x118F3, 0x118FE}, {0x11907, 0x11908}, {0x1190A, 0x1190B}, {0x11914, 0x11914}, {0x11917, 0x11917}, {0x11936, 0x11936}, -{0x11939, 0x1193A}, {0x11947, 0x1194F}, {0x1195A, 0x1199F}, {0x119A8, 0x119A9}, {0x119D8, 0x119D9}, {0x119E5, 0x119FF}, {0x11A48, 0x11A4F}, {0x11AA3, 0x11ABF}, {0x11AF9, 0x11BFF}, {0x11C09, 0x11C09}, -{0x11C37, 0x11C37}, {0x11C46, 0x11C4F}, {0x11C6D, 0x11C6F}, {0x11C90, 0x11C91}, {0x11CA8, 0x11CA8}, {0x11CB7, 0x11CFF}, {0x11D07, 0x11D07}, {0x11D0A, 0x11D0A}, {0x11D37, 0x11D39}, {0x11D3B, 0x11D3B}, -{0x11D3E, 0x11D3E}, {0x11D48, 0x11D4F}, {0x11D5A, 0x11D5F}, {0x11D66, 0x11D66}, {0x11D69, 0x11D69}, {0x11D8F, 0x11D8F}, {0x11D92, 0x11D92}, {0x11D99, 0x11D9F}, {0x11DAA, 0x11EDF}, {0x11EF9, 0x11FAF}, -{0x11FB1, 0x11FBF}, {0x11FF2, 0x11FFE}, {0x1239A, 0x123FF}, {0x1246F, 0x1246F}, {0x12475, 0x1247F}, {0x12544, 0x12FFF}, {0x1342F, 0x143FF}, {0x14647, 0x167FF}, {0x16A39, 0x16A3F}, {0x16A5F, 
0x16A5F}, -{0x16A6A, 0x16A6D}, {0x16A70, 0x16ACF}, {0x16AEE, 0x16AEF}, {0x16AF6, 0x16AFF}, {0x16B46, 0x16B4F}, {0x16B5A, 0x16B5A}, {0x16B62, 0x16B62}, {0x16B78, 0x16B7C}, {0x16B90, 0x16E3F}, {0x16E9B, 0x16EFF}, -{0x16F4B, 0x16F4E}, {0x16F88, 0x16F8E}, {0x16FA0, 0x16FDF}, {0x16FE5, 0x16FEF}, {0x16FF2, 0x16FFF}, {0x187F8, 0x187FF}, {0x18CD6, 0x18CFF}, {0x18D09, 0x1AFFF}, {0x1B11F, 0x1B14F}, {0x1B153, 0x1B163}, -{0x1B168, 0x1B16F}, {0x1B2FC, 0x1BBFF}, {0x1BC6B, 0x1BC6F}, {0x1BC7D, 0x1BC7F}, {0x1BC89, 0x1BC8F}, {0x1BC9A, 0x1BC9B}, {0x1BCA0, 0x1CFFF}, {0x1D0F6, 0x1D0FF}, {0x1D127, 0x1D128}, {0x1D173, 0x1D17A}, -{0x1D1E9, 0x1D1FF}, {0x1D246, 0x1D2DF}, {0x1D2F4, 0x1D2FF}, {0x1D357, 0x1D35F}, {0x1D379, 0x1D3FF}, {0x1D455, 0x1D455}, {0x1D49D, 0x1D49D}, {0x1D4A0, 0x1D4A1}, {0x1D4A3, 0x1D4A4}, {0x1D4A7, 0x1D4A8}, -{0x1D4AD, 0x1D4AD}, {0x1D4BA, 0x1D4BA}, {0x1D4BC, 0x1D4BC}, {0x1D4C4, 0x1D4C4}, {0x1D506, 0x1D506}, {0x1D50B, 0x1D50C}, {0x1D515, 0x1D515}, {0x1D51D, 0x1D51D}, {0x1D53A, 0x1D53A}, {0x1D53F, 0x1D53F}, -{0x1D545, 0x1D545}, {0x1D547, 0x1D549}, {0x1D551, 0x1D551}, {0x1D6A6, 0x1D6A7}, {0x1D7CC, 0x1D7CD}, {0x1DA8C, 0x1DA9A}, {0x1DAA0, 0x1DAA0}, {0x1DAB0, 0x1DFFF}, {0x1E007, 0x1E007}, {0x1E019, 0x1E01A}, -{0x1E022, 0x1E022}, {0x1E025, 0x1E025}, {0x1E02B, 0x1E0FF}, {0x1E12D, 0x1E12F}, {0x1E13E, 0x1E13F}, {0x1E14A, 0x1E14D}, {0x1E150, 0x1E2BF}, {0x1E2FA, 0x1E2FE}, {0x1E300, 0x1E7FF}, {0x1E8C5, 0x1E8C6}, -{0x1E8D7, 0x1E8FF}, {0x1E94C, 0x1E94F}, {0x1E95A, 0x1E95D}, {0x1E960, 0x1EC70}, {0x1ECB5, 0x1ED00}, {0x1ED3E, 0x1EDFF}, {0x1EE04, 0x1EE04}, {0x1EE20, 0x1EE20}, {0x1EE23, 0x1EE23}, {0x1EE25, 0x1EE26}, -{0x1EE28, 0x1EE28}, {0x1EE33, 0x1EE33}, {0x1EE38, 0x1EE38}, {0x1EE3A, 0x1EE3A}, {0x1EE3C, 0x1EE41}, {0x1EE43, 0x1EE46}, {0x1EE48, 0x1EE48}, {0x1EE4A, 0x1EE4A}, {0x1EE4C, 0x1EE4C}, {0x1EE50, 0x1EE50}, -{0x1EE53, 0x1EE53}, {0x1EE55, 0x1EE56}, {0x1EE58, 0x1EE58}, {0x1EE5A, 0x1EE5A}, {0x1EE5C, 0x1EE5C}, {0x1EE5E, 0x1EE5E}, {0x1EE60, 0x1EE60}, {0x1EE63, 0x1EE63}, {0x1EE65, 0x1EE66}, {0x1EE6B, 0x1EE6B}, -{0x1EE73, 0x1EE73}, {0x1EE78, 0x1EE78}, {0x1EE7D, 0x1EE7D}, {0x1EE7F, 0x1EE7F}, {0x1EE8A, 0x1EE8A}, {0x1EE9C, 0x1EEA0}, {0x1EEA4, 0x1EEA4}, {0x1EEAA, 0x1EEAA}, {0x1EEBC, 0x1EEEF}, {0x1EEF2, 0x1EFFF}, -{0x1F02C, 0x1F02F}, {0x1F094, 0x1F09F}, {0x1F0AF, 0x1F0B0}, {0x1F0C0, 0x1F0C0}, {0x1F0D0, 0x1F0D0}, {0x1F0F6, 0x1F0FF}, {0x1F1AE, 0x1F1E5}, {0x1F203, 0x1F20F}, {0x1F23C, 0x1F23F}, {0x1F249, 0x1F24F}, -{0x1F252, 0x1F25F}, {0x1F266, 0x1F2FF}, {0x1F6D8, 0x1F6DF}, {0x1F6ED, 0x1F6EF}, {0x1F6FD, 0x1F6FF}, {0x1F774, 0x1F77F}, {0x1F7D9, 0x1F7DF}, {0x1F7EC, 0x1F7FF}, {0x1F80C, 0x1F80F}, {0x1F848, 0x1F84F}, -{0x1F85A, 0x1F85F}, {0x1F888, 0x1F88F}, {0x1F8AE, 0x1F8AF}, {0x1F8B2, 0x1F8FF}, {0x1F979, 0x1F979}, {0x1F9CC, 0x1F9CC}, {0x1FA54, 0x1FA5F}, {0x1FA6E, 0x1FA6F}, {0x1FA75, 0x1FA77}, {0x1FA7B, 0x1FA7F}, -{0x1FA87, 0x1FA8F}, {0x1FAA9, 0x1FAAF}, {0x1FAB7, 0x1FABF}, {0x1FAC3, 0x1FACF}, {0x1FAD7, 0x1FAFF}, {0x1FB93, 0x1FB93}, {0x1FBCB, 0x1FBEF}, {0x1FBFA, 0x1FFFF}, {0x2A6DE, 0x2A6FF}, {0x2B735, 0x2B73F}, -{0x2B81E, 0x2B81F}, {0x2CEA2, 0x2CEAF}, {0x2EBE1, 0x2F7FF}, {0x2FA1E, 0x2FFFF}, {0x3134B, 0xE00FF}, {0xE01F0, 0x10FFFF}, -}; - -//String -bool CNCTString::operator==(const std::string& other) const { - return str.compare(other) == 0; -} -bool CNCTString::operator==(const char other) const { - return str.compare(std::string(1, other)) == 0; -} -bool CNCTString::operator==(const CNCTString& other) const { - return str.compare(other.str) == 0; -} -// + operators -CNCTString& CNCTString::operator+=(const std::string& other) { 
- str += other; - int new_len = CNCTUnicode::strlen_utf8(other); - utf8_chars += new_len; - char_type = CNCTUnicode::string_identify(str); - seq_offset_bytes += other.size(); - seq_offset_utf8_chars += new_len; - return *this; -} - -CNCTString& CNCTString::operator+=(const char other) { - std::string str = std::string(1, other); - *this += str; - return *this; -} - -CNCTString& CNCTString::operator+=(const CNCTString& other) { - str += other.str; - utf8_chars += other.utf8_chars; - char_type = CNCTUnicode::string_identify(str); - seq_offset_bytes += other.str.size(); - seq_offset_utf8_chars += other.utf8_chars; - return *this; -} - -struct CRCompare { - bool operator()(const std::pair& p, int i) { - return p.second < i; - } - bool operator()(int i, const std::pair& p) { - return i < p.first; - } -}; - -// binary search for code range -bool CNCTUnicode::check_code_range(int c, const std::vector> &ranges) { - auto it = std::upper_bound(ranges.begin(), ranges.end(), c, CRCompare()); - if (it != ranges.begin()) { - --it; - } - return c >= it->first && c <= it->second; -} - -// these are binary searches, it takes only a few operations -CNCTCharType CNCTUnicode::get_code_type(int c) { - if (check_code_range(c, letter_ranges)) { - return LETTER; - } - if (check_code_range(c, digit_ranges)) { - return DIGIT; - } - if (check_code_range(c, whitespace_ranges)) { - return WHITESPACE; - } - if (check_code_range(c, punctuation_ranges)) { - return PUNCTUATION; - } - if (check_code_range(c, symbol_ranges)) { - return SYMBOL; - } - if (check_code_range(c, accent_mark_ranges)) { - return ACCENT_MARK; - } - if (check_code_range(c, control_ranges)) { - return CONTROL; - } - return UNIDENTIFIED; -} - -static int utf8_to_unicode(const std::string& utf8_char) { - int c = 0; - int len = (int)utf8_char.size(); - if (len == 1) { - c = utf8_char[0]; - } else if (len == 2) { - c = ((utf8_char[0] & 0x1F) << 6) | (utf8_char[1] & 0x3F); - } else if (len == 3) { - c = ((utf8_char[0] & 0x0F) << 12) | ((utf8_char[1] & 0x3F) << 6) | (utf8_char[2] & 0x3F); - } else if (len == 4) { - c = ((utf8_char[0] & 0x07) << 18) | ((utf8_char[1] & 0x3F) << 12) | ((utf8_char[2] & 0x3F) << 6) | (utf8_char[3] & 0x3F); - } - return c; -} - -CNCTCharType CNCTUnicode::get_code_type(const std::string &utf8_char) { - return get_code_type(utf8_to_unicode(utf8_char)); -} - -int CNCTUnicode::utf8_len(const char c) -{ - if ((c & 0x80) == 0) { - return 1; // ASCII character - } - if ((c & 0xE0) == 0xC0) { - return 2; // 2-byte character - } - if ((c & 0xF0) == 0xE0) { - return 3; // 3-byte character - } - if ((c & 0xF0) == 0xF0) { - return 4; // 4-byte character - } - return 1; // not valid utf8 - // static const uint8_t lookup[] = { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 3, 4 }; - // return lookup[static_cast(c) >> 4]; -} - -int CNCTUnicode::strlen_utf8(const std::string src) { - int len = 0; - for (std::string::const_iterator it = src.begin(); it != src.end(); ++it) { - int char_len = utf8_len(*it); - if (char_len > 1) { - it += char_len - 1; - } - len += 1; - } - return len; -} - -// split a string into unicode strings -std::vector CNCTUnicode::split_utf8(const std::string &src) { - std::vector result; - for (std::string::const_iterator it = src.begin(); it != src.end(); ++it) { - int char_len = utf8_len(*it); - std::string str(it, it + char_len); - result.push_back(str); - if (char_len > 1) { - it += char_len - 1; - } - } - return result; -} - -// split a string into unicode strings (CNCTString) with sequence information -std::vector 
CNCTUnicode::split_utf8_enhanced(const std::string &src) { - std::vector result; - int seq_offset_bytes=0; - int seq_offset_utf8_chars=0; - for (std::string::const_iterator it = src.begin(); it != src.end(); ++it) { - int char_len = utf8_len(*it); - std::string str(it, it + char_len); - CNCTString cnct_str; - cnct_str.seq_offset_bytes = seq_offset_bytes; - cnct_str.seq_offset_utf8_chars = seq_offset_utf8_chars; - cnct_str.str = str; - cnct_str.utf8_chars = 1; - cnct_str.char_type = get_code_type(str); - #if 0 - switch (cnct_str.char_type) - { - case DIGIT: - printf("%s = DIGIT\n", str.c_str()); - break; - case LETTER: - printf("%s = LETTER\n", str.c_str()); - break; - case WHITESPACE: - printf("%s = WHITESPACE\n", str.c_str()); - break; - case PUNCTUATION: - printf("%s = PUNCTUATION\n", str.c_str()); - break; - case UNIDENTIFIED: - printf("%s = UNIDENTIFIED\n", str.c_str()); - break; - case SYMBOL: - printf("%s = SYMBOL\n", str.c_str()); - break; - case CONTROL: - printf("%s = CONTROL\n", str.c_str()); - break; - } - #endif - - result.push_back(cnct_str); - seq_offset_bytes += char_len; - seq_offset_utf8_chars += 1; - if (char_len > 1) { - it += char_len - 1; - } - - } - return result; -} - -// return the type of the string -CNCTCharType CNCTUnicode::string_identify(const std::string &str) { - CNCTCharType result = UNIDENTIFIED; - std::string::const_iterator it = str.begin(); - while (it != str.end()) { - int len = utf8_len(*it); - int c = 0; - for (int i = 0; i < len && it != str.end(); ++i, ++it) { - c = (c << 8) | static_cast(*it); - } - switch (get_code_type(c)) { - case DIGIT: - if (result == UNIDENTIFIED) { - result = DIGIT; - } else if (result != DIGIT) { - return MIXED; - } - break; - case LETTER: - if (result == UNIDENTIFIED) { - result = LETTER; - } else if (result != LETTER) { - return MIXED; - } - break; - case WHITESPACE: - if (result == UNIDENTIFIED) { - result = WHITESPACE; - } else if (result != WHITESPACE) { - return MIXED; - } - break; - case PUNCTUATION: - if (result == UNIDENTIFIED) { - result = PUNCTUATION; - } else if (result != PUNCTUATION) { - return MIXED; - } - break; - default: - return MIXED; - break; - } - } - return result; -} - -// verify the content of a string -bool CNCTUnicode::string_test(const std::string &str, CNCTCharType chartype) -{ - std::string::const_iterator it = str.begin(); - while (it != str.end()) { - int len = utf8_len(*it); - int c = 0; - for (int i = 0; i < len && it != str.end(); ++i, ++it) { - c = (c << 8) | static_cast(*it); - } - if (get_code_type(c) != chartype) { - return false; - } - } - return true; -} - -//----------------- -// llama.cpp GPT2 vocab (from libfalcon.cpp) -//----------------- - -std::string replaceAll(std::string str, const std::string& from, const std::string& to) { - size_t start_pos = 0; - while((start_pos = str.find(from, start_pos)) != std::string::npos) { - str.replace(start_pos, from.length(), to); - start_pos += to.length(); // Handles case where 'to' is a substring of 'from' - } - return str; -} - -struct TrieNode { - std::map map; - int32_t Id = -1; -}; - -struct Trie { - TrieNode *root; - - Trie() : root(new TrieNode()) {} - - ~Trie() { - if(root) - deleteTrie(root); - } - - // Move constructor - Trie(Trie&& other) noexcept : root(other.root) { - other.root = nullptr; - } - - // Move assignment operator - Trie& operator=(Trie&& other) noexcept { - if (this != &other) { - if(root) - deleteTrie(root); - root = other.root; - other.root = nullptr; - } - return *this; - } - - void insert(const std::string 
&token, int32_t Id) { - TrieNode* current = root; - for(auto ch : token) { - if(current->map.find(ch) == current->map.end()) { - current->map[ch] = new TrieNode(); - } - current = current->map[ch]; - } - current->Id = Id; - } - - void reset() { - deleteTrie(root); - root = new TrieNode(); - } - -private: - void deleteTrie(TrieNode* node) { - for(auto &it: node->map) { - deleteTrie(it.second); - } - delete node; - } - -}; - -struct gpt2bpe_vocab { - using id = int32_t; - using token = std::string; - - std::map max_token_length; // max length, for each 2byte prefix - std::map, int> bpe_ranks; - std::vector> bpe_merges; - - id special_bos_id = -1; - id special_eos_id = -1; - id special_unk_id = -1; - id special_sep_id = -1; - id special_pad_id = -1; - - id linefeed_id = -1; - - std::unordered_map token_to_id; - std::unordered_map id_to_token; - - Trie trie; // highspeed access to tokens by prefix tree - - // populate trie from map - void populate_trie_from_map() { - trie.reset(); - for (const auto& pair : token_to_id) { - trie.insert(pair.first, pair.second); - if (pair.first.size() >= 2) { - std::string prefix = pair.first.substr(0, 2); - max_token_length[prefix] = std::max(max_token_length[prefix], (uint32_t)pair.first.size()); - } - } - } - // populate token ranks map - int populate_bpe_ranks(std::vector> bpe_merges_) { - for (int i = 0; i < (int)bpe_merges_.size(); i++) { - bpe_ranks.emplace(bpe_merges_[i], i); - } - bpe_merges = bpe_merges_; - return bpe_merges_.size(); - } - - // Trim whitespace characters from the beginning and end of the string - void trim(std::string& str) { - // Remove whitespace characters from the beginning of the string - str.erase(str.begin(), std::find_if(str.begin(), str.end(), [](int ch) { - return !std::isspace(ch); - })); - - // Remove whitespace characters from the end of the string - str.erase(std::find_if(str.rbegin(), str.rend(), [](int ch) { - return !std::isspace(ch); - }).base(), str.end()); - } - - // get max token length available for a prefix of 2 bytes (string at least 2 bytes long) - int get_max_token_length(const std::string& string) const { - if (string.size() < 2) { - return -1; - } - std::string prefix = string.substr(0, 2); - if (max_token_length.find(prefix) == max_token_length.end()) { - return 0; - } - return max_token_length.at(prefix); - } - - // function to find if two tokens match in bpe_rank, return rank or -1 - int find_bpe_rank(const std::string& token1, const std::string& token2) const { - std::string left_token = token1; - std::string right_token = token2; - left_token = replaceAll(left_token, " ", "Ġ"); - left_token = replaceAll(left_token, "\n", "Ċ"); - right_token = replaceAll(right_token, " ", "Ġ"); - right_token = replaceAll(right_token, "\n", "Ċ"); - - auto it = bpe_ranks.find(std::make_pair(left_token, right_token)); - if (it == bpe_ranks.end()) { - return -1; - } - return it->second; - } - - std::pair find_longest_match(const std::string& snippet) const { - TrieNode* current = trie.root; - gpt2bpe_vocab::id last_matched_id = -1; - std::string last_matched_token = ""; - std::string current_token = ""; - for (auto ch : snippet) { - if (current->map.find(ch) == current->map.end()) { - break; - } - current = current->map[ch]; - current_token += ch; - if (current->Id != -1) { - last_matched_id = current->Id; - last_matched_token = current_token; - } - } - return {last_matched_id, last_matched_token}; - } - -}; - - -// -// tokenizer - bpe type, gpt2 tokenization compatible -// - -struct ggllm_bpe_symbol { - using index = int; 
- index prev; - index next; - const char * text; - size_t n; -}; - -static_assert(std::is_trivially_copyable::value, "ggllm_bpe_symbol is not trivially copyable"); - -struct ggllm_bpe_bigram { - struct comparator { - bool operator()(ggllm_bpe_bigram & l, ggllm_bpe_bigram & r) { - return l.rank > r.rank || (l.rank == r.rank && l.left > r.left); - } - }; - - using queue_storage = std::vector; - using queue = std::priority_queue; - ggllm_bpe_symbol::index left; - ggllm_bpe_symbol::index right; - std::string text; - int rank; - size_t size; -}; - -struct gpt2bpe_tokenizer { - gpt2bpe_tokenizer(const gpt2bpe_vocab & vocab, bool g2ws_): vocab_(vocab) { flag_g2ws = g2ws_; } - - void tokenize(const std::string & text, std::vector & output) { - int final_prev_index = -1; - // auto start = ggml_time_us(); - auto word_collection = bpe_gpt2_preprocess(text); - // auto end = ggml_time_us(); - // fprintf(stderr, "%s: preprocessing took %0.3f ms\n", __func__, (end - start) / 1000.0); - - symbols_final.clear(); - - for (auto & word : word_collection) { - work_queue_ = ggllm_bpe_bigram::queue(); - symbols_.clear(); - - int index = 0; - size_t offset = 0; - - while (offset < word.size()) { - ggllm_bpe_symbol sym; - size_t char_len = std::min(word.size() - offset, (size_t) CNCTUnicode::utf8_len(word[offset])); - sym.text = word.c_str() + offset; - sym.n = 1; - sym.n = char_len; - offset += sym.n; - sym.prev = index - 1; - sym.next = offset == word.size() ? -1 : index + 1; - index++; - symbols_.emplace_back(sym); - } - for (size_t i = 1; i < symbols_.size(); ++i) { - add_new_bigram(i - 1, i); - } - - // build token(s) - while (!work_queue_.empty()) { - auto bigram = work_queue_.top(); - work_queue_.pop(); - - auto & left_symbol = symbols_[bigram.left]; - auto & right_symbol = symbols_[bigram.right]; - - if (left_symbol.n == 0 || right_symbol.n == 0) { - continue; - } - std::string left_token = std::string(left_symbol.text, left_symbol.n); - std::string right_token = std::string(right_symbol.text, right_symbol.n); - if (left_token + right_token != bigram.text) { - continue; // Skip this bigram if it's outdated - } - - // merge the right sym into the left one - left_symbol.n += right_symbol.n; - right_symbol.n = 0; - - // remove the right sym from the chain - left_symbol.next = right_symbol.next; - if (right_symbol.next >= 0) { - symbols_[right_symbol.next].prev = bigram.left; - } - - add_new_bigram(left_symbol.prev, bigram.left); // left side of current symbol - add_new_bigram(bigram.left, left_symbol.next); // right side of current symbol - } - - // add the fnished tokens to the final list keeping correct order for next and prev - for (auto & sym : symbols_) { - if (sym.n > 0) { - sym.prev = final_prev_index; - sym.next = -1; - if (final_prev_index != -1) { - symbols_final[final_prev_index].next = symbols_final.size(); - } - symbols_final.emplace_back(sym); - final_prev_index = symbols_final.size() - 1; - } - } - } - - symbols_ = symbols_final; - if (symbols_.size()) - for (int i = 0; i != -1; i = symbols_[i].next) { - auto & symbol = symbols_[i]; - if (symbol.n == 0) { - continue; - } - std::string str = std::string(symbol.text, symbol.n); - std::string str_decoded = decode_token(str); - auto token = vocab_.token_to_id.find(str_decoded); - - if (token == vocab_.token_to_id.end()) { - for (auto j = str_decoded.begin(); j != str_decoded.end(); ++j) { - std::string byte_str(1, *j); - auto token_multibyte = vocab_.token_to_id.find(byte_str); - if (token_multibyte == vocab_.token_to_id.end()) { - 
fprintf(stderr,"ERROR: byte not found in vocab: '%s'\n", byte_str.c_str()); - } - output.push_back((*token_multibyte).second); - } - } else { - output.push_back((*token).second); - } - } - } - -private: - void add_new_bigram(int left, int right) { - if (left == -1 || right == -1) return; - - std::string left_token = std::string(symbols_[left].text, symbols_[left].n); - std::string right_token = std::string(symbols_[right].text, symbols_[right].n); - - int rank_found = -1; - rank_found = vocab_.find_bpe_rank(left_token, right_token); - - if (rank_found < 0) { - return; - } - - ggllm_bpe_bigram bigram; - bigram.left = left; - bigram.right = right; - bigram.rank = rank_found; - bigram.size = left_token.size() + right_token.size(); - bigram.text = left_token + right_token; - work_queue_.push(bigram); - } - - std::unordered_map bytes_to_unicode() { - static std::unordered_map hex_map = { - { 0x21, "\x21" }, { 0x22, "\x22" }, { 0x23, "\x23" }, { 0x24, "\x24" }, { 0x25, "\x25" }, { 0x26, "\x26" }, { 0x27, "\x27" }, { 0x28, "\x28" }, { 0x29, "\x29" }, { 0x2A, "\x2A" }, - { 0x2B, "\x2B" }, { 0x2C, "\x2C" }, { 0x2D, "\x2D" }, { 0x2E, "\x2E" }, { 0x2F, "\x2F" }, { 0x30, "\x30" }, { 0x31, "\x31" }, { 0x32, "\x32" }, { 0x33, "\x33" }, { 0x34, "\x34" }, - { 0x35, "\x35" }, { 0x36, "\x36" }, { 0x37, "\x37" }, { 0x38, "\x38" }, { 0x39, "\x39" }, { 0x3A, "\x3A" }, { 0x3B, "\x3B" }, { 0x3C, "\x3C" }, { 0x3D, "\x3D" }, { 0x3E, "\x3E" }, - { 0x3F, "\x3F" }, { 0x40, "\x40" }, { 0x41, "\x41" }, { 0x42, "\x42" }, { 0x43, "\x43" }, { 0x44, "\x44" }, { 0x45, "\x45" }, { 0x46, "\x46" }, { 0x47, "\x47" }, { 0x48, "\x48" }, - { 0x49, "\x49" }, { 0x4A, "\x4A" }, { 0x4B, "\x4B" }, { 0x4C, "\x4C" }, { 0x4D, "\x4D" }, { 0x4E, "\x4E" }, { 0x4F, "\x4F" }, { 0x50, "\x50" }, { 0x51, "\x51" }, { 0x52, "\x52" }, - { 0x53, "\x53" }, { 0x54, "\x54" }, { 0x55, "\x55" }, { 0x56, "\x56" }, { 0x57, "\x57" }, { 0x58, "\x58" }, { 0x59, "\x59" }, { 0x5A, "\x5A" }, { 0x5B, "\x5B" }, { 0x5C, "\x5C" }, - { 0x5D, "\x5D" }, { 0x5E, "\x5E" }, { 0x5F, "\x5F" }, { 0x60, "\x60" }, { 0x61, "\x61" }, { 0x62, "\x62" }, { 0x63, "\x63" }, { 0x64, "\x64" }, { 0x65, "\x65" }, { 0x66, "\x66" }, - { 0x67, "\x67" }, { 0x68, "\x68" }, { 0x69, "\x69" }, { 0x6A, "\x6A" }, { 0x6B, "\x6B" }, { 0x6C, "\x6C" }, { 0x6D, "\x6D" }, { 0x6E, "\x6E" }, { 0x6F, "\x6F" }, { 0x70, "\x70" }, - { 0x71, "\x71" }, { 0x72, "\x72" }, { 0x73, "\x73" }, { 0x74, "\x74" }, { 0x75, "\x75" }, { 0x76, "\x76" }, { 0x77, "\x77" }, { 0x78, "\x78" }, { 0x79, "\x79" }, { 0x7A, "\x7A" }, - { 0x7B, "\x7B" }, { 0x7C, "\x7C" }, { 0x7D, "\x7D" }, { 0x7E, "\x7E" }, { 0xA1, "\xC2\xA1" }, { 0xA2, "\xC2\xA2" }, { 0xA3, "\xC2\xA3" }, { 0xA4, "\xC2\xA4" }, { 0xA5, "\xC2\xA5" }, - { 0xA6, "\xC2\xA6" }, { 0xA7, "\xC2\xA7" }, { 0xA8, "\xC2\xA8" }, { 0xA9, "\xC2\xA9" }, { 0xAA, "\xC2\xAA" }, { 0xAB, "\xC2\xAB" }, { 0xAC, "\xC2\xAC" }, { 0xAE, "\xC2\xAE" }, - { 0xAF, "\xC2\xAF" }, { 0xB0, "\xC2\xB0" }, { 0xB1, "\xC2\xB1" }, { 0xB2, "\xC2\xB2" }, { 0xB3, "\xC2\xB3" }, { 0xB4, "\xC2\xB4" }, { 0xB5, "\xC2\xB5" }, { 0xB6, "\xC2\xB6" }, - { 0xB7, "\xC2\xB7" }, { 0xB8, "\xC2\xB8" }, { 0xB9, "\xC2\xB9" }, { 0xBA, "\xC2\xBA" }, { 0xBB, "\xC2\xBB" }, { 0xBC, "\xC2\xBC" }, { 0xBD, "\xC2\xBD" }, { 0xBE, "\xC2\xBE" }, - { 0xBF, "\xC2\xBF" }, { 0xC0, "\xC3\x80" }, { 0xC1, "\xC3\x81" }, { 0xC2, "\xC3\x82" }, { 0xC3, "\xC3\x83" }, { 0xC4, "\xC3\x84" }, { 0xC5, "\xC3\x85" }, { 0xC6, "\xC3\x86" }, - { 0xC7, "\xC3\x87" }, { 0xC8, "\xC3\x88" }, { 0xC9, "\xC3\x89" }, { 0xCA, "\xC3\x8A" }, { 0xCB, "\xC3\x8B" }, { 0xCC, 
"\xC3\x8C" }, { 0xCD, "\xC3\x8D" }, { 0xCE, "\xC3\x8E" }, - { 0xCF, "\xC3\x8F" }, { 0xD0, "\xC3\x90" }, { 0xD1, "\xC3\x91" }, { 0xD2, "\xC3\x92" }, { 0xD3, "\xC3\x93" }, { 0xD4, "\xC3\x94" }, { 0xD5, "\xC3\x95" }, { 0xD6, "\xC3\x96" }, - { 0xD7, "\xC3\x97" }, { 0xD8, "\xC3\x98" }, { 0xD9, "\xC3\x99" }, { 0xDA, "\xC3\x9A" }, { 0xDB, "\xC3\x9B" }, { 0xDC, "\xC3\x9C" }, { 0xDD, "\xC3\x9D" }, { 0xDE, "\xC3\x9E" }, - { 0xDF, "\xC3\x9F" }, { 0xE0, "\xC3\xA0" }, { 0xE1, "\xC3\xA1" }, { 0xE2, "\xC3\xA2" }, { 0xE3, "\xC3\xA3" }, { 0xE4, "\xC3\xA4" }, { 0xE5, "\xC3\xA5" }, { 0xE6, "\xC3\xA6" }, - { 0xE7, "\xC3\xA7" }, { 0xE8, "\xC3\xA8" }, { 0xE9, "\xC3\xA9" }, { 0xEA, "\xC3\xAA" }, { 0xEB, "\xC3\xAB" }, { 0xEC, "\xC3\xAC" }, { 0xED, "\xC3\xAD" }, { 0xEE, "\xC3\xAE" }, - { 0xEF, "\xC3\xAF" }, { 0xF0, "\xC3\xB0" }, { 0xF1, "\xC3\xB1" }, { 0xF2, "\xC3\xB2" }, { 0xF3, "\xC3\xB3" }, { 0xF4, "\xC3\xB4" }, { 0xF5, "\xC3\xB5" }, { 0xF6, "\xC3\xB6" }, - { 0xF7, "\xC3\xB7" }, { 0xF8, "\xC3\xB8" }, { 0xF9, "\xC3\xB9" }, { 0xFA, "\xC3\xBA" }, { 0xFB, "\xC3\xBB" }, { 0xFC, "\xC3\xBC" }, { 0xFD, "\xC3\xBD" }, { 0xFE, "\xC3\xBE" }, - { 0xFF, "\xC3\xBF" }, { 0x00, "\xC4\x80" }, { 0x01, "\xC4\x81" }, { 0x02, "\xC4\x82" }, { 0x03, "\xC4\x83" }, { 0x04, "\xC4\x84" }, { 0x05, "\xC4\x85" }, { 0x06, "\xC4\x86" }, - { 0x07, "\xC4\x87" }, { 0x08, "\xC4\x88" }, { 0x09, "\xC4\x89" }, { 0x0A, "\xC4\x8A" }, { 0x0B, "\xC4\x8B" }, { 0x0C, "\xC4\x8C" }, { 0x0D, "\xC4\x8D" }, { 0x0E, "\xC4\x8E" }, - { 0x0F, "\xC4\x8F" }, { 0x10, "\xC4\x90" }, { 0x11, "\xC4\x91" }, { 0x12, "\xC4\x92" }, { 0x13, "\xC4\x93" }, { 0x14, "\xC4\x94" }, { 0x15, "\xC4\x95" }, { 0x16, "\xC4\x96" }, - { 0x17, "\xC4\x97" }, { 0x18, "\xC4\x98" }, { 0x19, "\xC4\x99" }, { 0x1A, "\xC4\x9A" }, { 0x1B, "\xC4\x9B" }, { 0x1C, "\xC4\x9C" }, { 0x1D, "\xC4\x9D" }, { 0x1E, "\xC4\x9E" }, - { 0x1F, "\xC4\x9F" }, { 0x20, "\xC4\xA0" }, { 0x7F, "\xC4\xA1" }, { 0x80, "\xC4\xA2" }, { 0x81, "\xC4\xA3" }, { 0x82, "\xC4\xA4" }, { 0x83, "\xC4\xA5" }, { 0x84, "\xC4\xA6" }, - { 0x85, "\xC4\xA7" }, { 0x86, "\xC4\xA8" }, { 0x87, "\xC4\xA9" }, { 0x88, "\xC4\xAA" }, { 0x89, "\xC4\xAB" }, { 0x8A, "\xC4\xAC" }, { 0x8B, "\xC4\xAD" }, { 0x8C, "\xC4\xAE" }, - { 0x8D, "\xC4\xAF" }, { 0x8E, "\xC4\xB0" }, { 0x8F, "\xC4\xB1" }, { 0x90, "\xC4\xB2" }, { 0x91, "\xC4\xB3" }, { 0x92, "\xC4\xB4" }, { 0x93, "\xC4\xB5" }, { 0x94, "\xC4\xB6" }, - { 0x95, "\xC4\xB7" }, { 0x96, "\xC4\xB8" }, { 0x97, "\xC4\xB9" }, { 0x98, "\xC4\xBA" }, { 0x99, "\xC4\xBB" }, { 0x9A, "\xC4\xBC" }, { 0x9B, "\xC4\xBD" }, { 0x9C, "\xC4\xBE" }, - { 0x9D, "\xC4\xBF" }, { 0x9E, "\xC5\x80" }, { 0x9F, "\xC5\x81" }, { 0xA0, "\xC5\x82" }, { 0xAD, "\xC5\x83" } - }; - return hex_map; - } - - std::unordered_map unicode_to_bytes() { - static std::unordered_map hex_map = { - { "\x21", 0x21 }, { "\x22", 0x22 }, { "\x23", 0x23 }, { "\x24", 0x24 }, { "\x25", 0x25 }, { "\x26", 0x26 }, { "\x27", 0x27 }, { "\x28", 0x28 }, { "\x29", 0x29 }, { "\x2A", 0x2A }, - { "\x2B", 0x2B }, { "\x2C", 0x2C }, { "\x2D", 0x2D }, { "\x2E", 0x2E }, { "\x2F", 0x2F }, { "\x30", 0x30 }, { "\x31", 0x31 }, { "\x32", 0x32 }, { "\x33", 0x33 }, { "\x34", 0x34 }, - { "\x35", 0x35 }, { "\x36", 0x36 }, { "\x37", 0x37 }, { "\x38", 0x38 }, { "\x39", 0x39 }, { "\x3A", 0x3A }, { "\x3B", 0x3B }, { "\x3C", 0x3C }, { "\x3D", 0x3D }, { "\x3E", 0x3E }, - { "\x3F", 0x3F }, { "\x40", 0x40 }, { "\x41", 0x41 }, { "\x42", 0x42 }, { "\x43", 0x43 }, { "\x44", 0x44 }, { "\x45", 0x45 }, { "\x46", 0x46 }, { "\x47", 0x47 }, { "\x48", 0x48 }, - { "\x49", 0x49 }, { "\x4A", 0x4A }, { "\x4B", 0x4B 
}, { "\x4C", 0x4C }, { "\x4D", 0x4D }, { "\x4E", 0x4E }, { "\x4F", 0x4F }, { "\x50", 0x50 }, { "\x51", 0x51 }, { "\x52", 0x52 }, - { "\x53", 0x53 }, { "\x54", 0x54 }, { "\x55", 0x55 }, { "\x56", 0x56 }, { "\x57", 0x57 }, { "\x58", 0x58 }, { "\x59", 0x59 }, { "\x5A", 0x5A }, { "\x5B", 0x5B }, { "\x5C", 0x5C }, - { "\x5D", 0x5D }, { "\x5E", 0x5E }, { "\x5F", 0x5F }, { "\x60", 0x60 }, { "\x61", 0x61 }, { "\x62", 0x62 }, { "\x63", 0x63 }, { "\x64", 0x64 }, { "\x65", 0x65 }, { "\x66", 0x66 }, - { "\x67", 0x67 }, { "\x68", 0x68 }, { "\x69", 0x69 }, { "\x6A", 0x6A }, { "\x6B", 0x6B }, { "\x6C", 0x6C }, { "\x6D", 0x6D }, { "\x6E", 0x6E }, { "\x6F", 0x6F }, { "\x70", 0x70 }, - { "\x71", 0x71 }, { "\x72", 0x72 }, { "\x73", 0x73 }, { "\x74", 0x74 }, { "\x75", 0x75 }, { "\x76", 0x76 }, { "\x77", 0x77 }, { "\x78", 0x78 }, { "\x79", 0x79 }, { "\x7A", 0x7A }, - { "\x7B", 0x7B }, { "\x7C", 0x7C }, { "\x7D", 0x7D }, { "\x7E", 0x7E }, { "\xC2\xA1", 0xA1 }, { "\xC2\xA2", 0xA2 }, { "\xC2\xA3", 0xA3 }, { "\xC2\xA4", 0xA4 }, { "\xC2\xA5", 0xA5 }, - { "\xC2\xA6", 0xA6 }, { "\xC2\xA7", 0xA7 }, { "\xC2\xA8", 0xA8 }, { "\xC2\xA9", 0xA9 }, { "\xC2\xAA", 0xAA }, { "\xC2\xAB", 0xAB }, { "\xC2\xAC", 0xAC }, { "\xC2\xAE", 0xAE }, - { "\xC2\xAF", 0xAF }, { "\xC2\xB0", 0xB0 }, { "\xC2\xB1", 0xB1 }, { "\xC2\xB2", 0xB2 }, { "\xC2\xB3", 0xB3 }, { "\xC2\xB4", 0xB4 }, { "\xC2\xB5", 0xB5 }, { "\xC2\xB6", 0xB6 }, - { "\xC2\xB7", 0xB7 }, { "\xC2\xB8", 0xB8 }, { "\xC2\xB9", 0xB9 }, { "\xC2\xBA", 0xBA }, { "\xC2\xBB", 0xBB }, { "\xC2\xBC", 0xBC }, { "\xC2\xBD", 0xBD }, { "\xC2\xBE", 0xBE }, - { "\xC2\xBF", 0xBF }, { "\xC3\x80", 0xC0 }, { "\xC3\x81", 0xC1 }, { "\xC3\x82", 0xC2 }, { "\xC3\x83", 0xC3 }, { "\xC3\x84", 0xC4 }, { "\xC3\x85", 0xC5 }, { "\xC3\x86", 0xC6 }, - { "\xC3\x87", 0xC7 }, { "\xC3\x88", 0xC8 }, { "\xC3\x89", 0xC9 }, { "\xC3\x8A", 0xCA }, { "\xC3\x8B", 0xCB }, { "\xC3\x8C", 0xCC }, { "\xC3\x8D", 0xCD }, { "\xC3\x8E", 0xCE }, - { "\xC3\x8F", 0xCF }, { "\xC3\x90", 0xD0 }, { "\xC3\x91", 0xD1 }, { "\xC3\x92", 0xD2 }, { "\xC3\x93", 0xD3 }, { "\xC3\x94", 0xD4 }, { "\xC3\x95", 0xD5 }, { "\xC3\x96", 0xD6 }, - { "\xC3\x97", 0xD7 }, { "\xC3\x98", 0xD8 }, { "\xC3\x99", 0xD9 }, { "\xC3\x9A", 0xDA }, { "\xC3\x9B", 0xDB }, { "\xC3\x9C", 0xDC }, { "\xC3\x9D", 0xDD }, { "\xC3\x9E", 0xDE }, - { "\xC3\x9F", 0xDF }, { "\xC3\xA0", 0xE0 }, { "\xC3\xA1", 0xE1 }, { "\xC3\xA2", 0xE2 }, { "\xC3\xA3", 0xE3 }, { "\xC3\xA4", 0xE4 }, { "\xC3\xA5", 0xE5 }, { "\xC3\xA6", 0xE6 }, - { "\xC3\xA7", 0xE7 }, { "\xC3\xA8", 0xE8 }, { "\xC3\xA9", 0xE9 }, { "\xC3\xAA", 0xEA }, { "\xC3\xAB", 0xEB }, { "\xC3\xAC", 0xEC }, { "\xC3\xAD", 0xED }, { "\xC3\xAE", 0xEE }, - { "\xC3\xAF", 0xEF }, { "\xC3\xB0", 0xF0 }, { "\xC3\xB1", 0xF1 }, { "\xC3\xB2", 0xF2 }, { "\xC3\xB3", 0xF3 }, { "\xC3\xB4", 0xF4 }, { "\xC3\xB5", 0xF5 }, { "\xC3\xB6", 0xF6 }, - { "\xC3\xB7", 0xF7 }, { "\xC3\xB8", 0xF8 }, { "\xC3\xB9", 0xF9 }, { "\xC3\xBA", 0xFA }, { "\xC3\xBB", 0xFB }, { "\xC3\xBC", 0xFC }, { "\xC3\xBD", 0xFD }, { "\xC3\xBE", 0xFE }, - { "\xC3\xBF", 0xFF }, { "\xC4\x80", 0x00 }, { "\xC4\x81", 0x01 }, { "\xC4\x82", 0x02 }, { "\xC4\x83", 0x03 }, { "\xC4\x84", 0x04 }, { "\xC4\x85", 0x05 }, { "\xC4\x86", 0x06 }, - { "\xC4\x87", 0x07 }, { "\xC4\x88", 0x08 }, { "\xC4\x89", 0x09 }, { "\xC4\x8A", 0x0A }, { "\xC4\x8B", 0x0B }, { "\xC4\x8C", 0x0C }, { "\xC4\x8D", 0x0D }, { "\xC4\x8E", 0x0E }, - { "\xC4\x8F", 0x0F }, { "\xC4\x90", 0x10 }, { "\xC4\x91", 0x11 }, { "\xC4\x92", 0x12 }, { "\xC4\x93", 0x13 }, { "\xC4\x94", 0x14 }, { "\xC4\x95", 0x15 }, { "\xC4\x96", 0x16 }, - { "\xC4\x97", 0x17 
}, { "\xC4\x98", 0x18 }, { "\xC4\x99", 0x19 }, { "\xC4\x9A", 0x1A }, { "\xC4\x9B", 0x1B }, { "\xC4\x9C", 0x1C }, { "\xC4\x9D", 0x1D }, { "\xC4\x9E", 0x1E }, - { "\xC4\x9F", 0x1F }, { "\xC4\xA0", 0x20 }, { "\xC4\xA1", 0x7F }, { "\xC4\xA2", 0x80 }, { "\xC4\xA3", 0x81 }, { "\xC4\xA4", 0x82 }, { "\xC4\xA5", 0x83 }, { "\xC4\xA6", 0x84 }, - { "\xC4\xA7", 0x85 }, { "\xC4\xA8", 0x86 }, { "\xC4\xA9", 0x87 }, { "\xC4\xAA", 0x88 }, { "\xC4\xAB", 0x89 }, { "\xC4\xAC", 0x8A }, { "\xC4\xAD", 0x8B }, { "\xC4\xAE", 0x8C }, - { "\xC4\xAF", 0x8D }, { "\xC4\xB0", 0x8E }, { "\xC4\xB1", 0x8F }, { "\xC4\xB2", 0x90 }, { "\xC4\xB3", 0x91 }, { "\xC4\xB4", 0x92 }, { "\xC4\xB5", 0x93 }, { "\xC4\xB6", 0x94 }, - { "\xC4\xB7", 0x95 }, { "\xC4\xB8", 0x96 }, { "\xC4\xB9", 0x97 }, { "\xC4\xBA", 0x98 }, { "\xC4\xBB", 0x99 }, { "\xC4\xBC", 0x9A }, { "\xC4\xBD", 0x9B }, { "\xC4\xBE", 0x9C }, - { "\xC4\xBF", 0x9D }, { "\xC5\x80", 0x9E }, { "\xC5\x81", 0x9F }, { "\xC5\x82", 0xA0 }, { "\xC5\x83", 0xAD } - }; - return hex_map; - } - - // len must be available - bool inline str_is_equal(const char* str1, const char* str2, size_t len) { - for (size_t i = 0; i < len; ++i) { - if (str1[i] != str2[i]) { - return false; - } - } - return true; - } - - std::vector bpe_gpt2_preprocess(const std::string& text) { - static std::unordered_map< unsigned char, std::string> byte_encoder = bytes_to_unicode(); - std::vector bpe_words; - std::vector bpe_encoded_words; - - std::string token=""; - const char *raw_text_p = text.c_str(); - // GPT2 system regex: 's|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+ - bool collecting_numeric = false; - bool collecting_letter = false; - bool collecting_special = false; - bool collecting_whitespace_lookahead = false; - bool collecting=false; - - std::vector text_utf; - text_utf.reserve(text.size()); - bpe_words.reserve(text.size()); - bpe_encoded_words.reserve(text.size()); - - text_utf = CNCTUnicode::split_utf8_enhanced(text); - - for (int i = 0; i < (int)text_utf.size(); i++) { - const CNCTString &utf_char = text_utf[i]; - bool split_condition = false; - const char *text_pos = raw_text_p + utf_char.seq_offset_bytes; - int bytes_remain = strlen(text_pos); - // forward backward lookups - const CNCTString &utf_char_next = (i+1 < (int)text_utf.size()) ? text_utf[i+1] : CNCTString(); - const CNCTString &utf_char_next_next = (i+2 < (int)text_utf.size()) ? text_utf[i+2] : CNCTString(); - // const CNCTString &utf_char_prev = (i > 0) ? 
text_utf[i-1] : CNCTString(); - - // handling contractions - if (!split_condition && bytes_remain >= 2) { - // 's|'t|'m|'d - if (utf_char == '\'' && (utf_char_next == 's' || utf_char_next == 't' || utf_char_next == 'm' || utf_char_next == 'd')) { - split_condition = true; - } - if (split_condition) { - if (token.size()) { - bpe_words.emplace_back(token); // push previous content as token - } - token = utf_char.str + utf_char_next.str; - bpe_words.emplace_back(token); - token=""; - i++; - continue; - } - } - if (!split_condition && bytes_remain >= 3) { - // 're|'ve|'ll - if (utf_char == '\'' && ( - (utf_char_next == 'r' || utf_char_next_next == 'e') || - (utf_char_next == 'v' || utf_char_next_next == 'e') || - (utf_char_next == 'l' || utf_char_next_next == 'l')) - ) { - split_condition = true; - } - if (split_condition) { - // current token + next token can be defined - if (token.size()) { - bpe_words.emplace_back(token); // push previous content as token - } - token = utf_char.str + utf_char_next.str + utf_char_next_next.str; - bpe_words.emplace_back(token); // the contraction - token=""; - i+=2; - continue; - } - } - - if (!split_condition && !collecting) { - if (utf_char.char_type == CNCTCharType::LETTER || (!token.size() && utf_char==" " && utf_char_next.char_type == CNCTCharType::LETTER)) { - collecting_letter = true; - collecting = true; - } else if (utf_char.char_type == CNCTCharType::DIGIT || (!token.size() && utf_char==" " && utf_char_next.char_type == CNCTCharType::DIGIT)) { - collecting_numeric = true; - collecting = true; - } else if ( - ((utf_char.char_type != CNCTCharType::LETTER && utf_char.char_type != CNCTCharType::DIGIT) && (utf_char.char_type != CNCTCharType::WHITESPACE)) || - (!token.size() && utf_char==" " && utf_char_next.char_type != CNCTCharType::LETTER && utf_char_next.char_type != CNCTCharType::DIGIT && utf_char_next.char_type != CNCTCharType::WHITESPACE) - ) { - collecting_special = true; - collecting = true; - } else if (utf_char.char_type == CNCTCharType::WHITESPACE && utf_char_next.char_type == CNCTCharType::WHITESPACE) { - collecting_whitespace_lookahead = true; - collecting = true; - } else if (utf_char.char_type == CNCTCharType::WHITESPACE) { - split_condition = true; - } - } else if (!split_condition && collecting) { - if (collecting_letter && utf_char.char_type != CNCTCharType::LETTER) { - split_condition = true; - } else if (collecting_numeric && utf_char.char_type != CNCTCharType::DIGIT) { - split_condition = true; - } else if (collecting_special && (utf_char.char_type == CNCTCharType::LETTER || utf_char.char_type == CNCTCharType::DIGIT || utf_char.char_type == CNCTCharType::WHITESPACE)) { - split_condition = true; - } else if (collecting_whitespace_lookahead && utf_char_next.char_type != CNCTCharType::WHITESPACE) { - split_condition = true; - } - } - - if(utf_char_next.str.size() == 0) { - split_condition = true; // final - token += utf_char.str; - } - - if (split_condition) { - if (token.size()) { - bpe_words.emplace_back(token); - } - token = utf_char.str; - collecting = false; - collecting_letter = false; - collecting_numeric = false; - collecting_special = false; - collecting_whitespace_lookahead = false; - } else { - token += utf_char.str; - } - } - - for (std::string& word : bpe_words) { - std::string encoded_token=""; - for (char& c : word) { - encoded_token += byte_encoder[c]; - } - bpe_encoded_words.emplace_back(encoded_token); - } - - return bpe_encoded_words; - } - - // decoder (for one token) - std::string decode_token(const std::string& 
token) { - static std::unordered_map< std::string, unsigned char> byte_decoder = unicode_to_bytes(); - std::string decoded_token=""; - auto unicode_seqeunces = CNCTUnicode::split_utf8(token); - for (auto& unicode_sequence : unicode_seqeunces) { - decoded_token += byte_decoder[unicode_sequence]; - } - - return decoded_token; - } - - const gpt2bpe_vocab & vocab_; - std::vector symbols_; - std::vector symbols_final; - ggllm_bpe_bigram::queue work_queue_; - bool flag_g2ws=false; -}; - -static std::vector gpt2bpe_tokenize(const gpt2bpe_vocab & vocab, const std::string & text, bool bos, bool g2ws ) { - gpt2bpe_tokenizer tokenizer(vocab, g2ws); - std::vector output; - - if (text.empty()) { - return output; - } - - if (bos && vocab.special_bos_id != -1) { - output.push_back(vocab.special_bos_id); - } - - tokenizer.tokenize(text, output); - return output; -} - -#endif // CMPNCT_GPT2BPE diff --git a/spaces/Intoval/privateChatGPT/modules/__init__.py b/spaces/Intoval/privateChatGPT/modules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/JFoz/CoherentControl/model.py b/spaces/JFoz/CoherentControl/model.py deleted file mode 100644 index 5b1ae29226d8fe5af6d91ca225f35e7364b61fbd..0000000000000000000000000000000000000000 --- a/spaces/JFoz/CoherentControl/model.py +++ /dev/null @@ -1,136 +0,0 @@ -from enum import Enum -import gc -import numpy as np -import torch - - - - -import jax -import jax.numpy as jnp -import numpy as np -from flax.jax_utils import replicate -from flax.training.common_utils import shard -from PIL import Image -from diffusers import FlaxStableDiffusionControlNetPipeline, FlaxControlNetModel - - -import utils -import gradio_utils -import os - -from einops import rearrange - -import matplotlib.pyplot as plt - -def create_key(seed=0): - return jax.random.PRNGKey(seed) - -class Model: - def __init__(self, **kwargs): - self.base_controlnet, self.base_controlnet_params = FlaxControlNetModel.from_pretrained( - #"JFoz/dog-cat-pose", dtype=jnp.bfloat16 - "lllyasviel/control_v11p_sd15_openpose", dtype=jnp.bfloat16, from_pt=True - ) - self.pipe, self.params = FlaxStableDiffusionControlNetPipeline.from_pretrained( - "runwayml/stable-diffusion-v1-5", controlnet=self.base_controlnet, revision="flax", dtype=jnp.bfloat16,# from_pt=True, - ) - - def infer_frame(self, frame_id, prompt, negative_prompt, rng, **kwargs): - - print(prompt, frame_id) - - num_samples = 1 - prompt_ids = self.pipe.prepare_text_inputs([prompt[frame_id]]*num_samples) - negative_prompt_ids = self.pipe.prepare_text_inputs([negative_prompt[frame_id]] * num_samples) - processed_image = self.pipe.prepare_image_inputs([kwargs['image'][frame_id]]*num_samples) - - self.params["controlnet"] = self.base_controlnet_params - - - p_params = replicate(self.params) - prompt_ids = shard(prompt_ids) - negative_prompt_ids = shard(negative_prompt_ids) - processed_image = shard(processed_image) - - output = self.pipe( - prompt_ids=prompt_ids, - image=processed_image, - params=p_params, - prng_seed=rng, - num_inference_steps=50, - neg_prompt_ids=negative_prompt_ids, - jit=True, - ).images - - output_images = np.asarray(output.reshape((num_samples,) + output.shape[-3:])) - return output_images - - def inference(self, **kwargs): - - seed = kwargs.pop('seed', 0) - - rng = create_key(0) - rng = jax.random.split(rng, jax.device_count()) - - f = len(kwargs['image']) - print('frames', f) - - - assert 'prompt' in kwargs - prompt = [kwargs.pop('prompt')] * f - 
negative_prompt = [kwargs.pop('negative_prompt', '')] * f - - frames_counter = 0 - - result = [] - for i in range(0, f): - print(f'Processing frame {i + 1} / {f}') - result.append(self.infer_frame(frame_id=i, - prompt=prompt, - negative_prompt=negative_prompt, - rng = rng, - **kwargs)) - frames_counter += 1 - result = np.stack(result, axis=0) - return result - - def process_controlnet_pose(self, - video_path, - prompt, - num_inference_steps=20, - controlnet_conditioning_scale=1.0, - guidance_scale=9.0, - seed=42, - eta=0.0, - resolution=512, - save_path=None): - print("Module Pose") - video_path = gradio_utils.motion_to_video_path(video_path) - - - added_prompt = 'best quality, extremely detailed, HD, ultra-realistic, 8K, HQ, masterpiece, trending on artstation, art, smooth' - negative_prompts = 'longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer difits, cropped, worst quality, low quality, deformed body, bloated, ugly, unrealistic' - - video, fps = utils.prepare_video( - video_path, resolution, False, output_fps=4) - control = utils.pre_process_pose( - video, apply_pose_detect=False) - - print('N frames', len(control)) - f, _, h, w = video.shape - - result = self.inference(image=control, - prompt=prompt + ', ' + added_prompt, - height=h, - width=w, - negative_prompt=negative_prompts, - num_inference_steps=num_inference_steps, - guidance_scale=guidance_scale, - controlnet_conditioning_scale=controlnet_conditioning_scale, - eta=eta, - seed=seed, - output_type='numpy', - ) - return utils.create_gif(result.astype(jnp.float16), fps, path=save_path) - diff --git a/spaces/Jeff2323/ai-comic-factory/src/lib/replaceWhiteWithTransparent.ts b/spaces/Jeff2323/ai-comic-factory/src/lib/replaceWhiteWithTransparent.ts deleted file mode 100644 index cee490fc1a0b19b2192ce86d6c8f9867a3a6a6d9..0000000000000000000000000000000000000000 --- a/spaces/Jeff2323/ai-comic-factory/src/lib/replaceWhiteWithTransparent.ts +++ /dev/null @@ -1,37 +0,0 @@ -export function replaceWhiteWithTransparent(imageBase64: string): Promise { - return new Promise((resolve, reject) => { - const img = new Image(); - img.onload = () => { - const canvas = document.createElement('canvas'); - canvas.width = img.width; - canvas.height = img.height; - - const ctx = canvas.getContext('2d'); - if (!ctx) { - reject('Unable to get canvas 2D context'); - return; - } - - ctx.drawImage(img, 0, 0); - - const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height); - const data = imageData.data; - - for (let i = 0; i < data.length; i += 4) { - if (data[i] === 255 && data[i + 1] === 255 && data[i + 2] === 255) { - data[i + 3] = 0; - } - } - - ctx.putImageData(imageData, 0, 0); - - resolve(canvas.toDataURL()); - }; - - img.onerror = (err) => { - reject(err); - }; - - img.src = imageBase64; - }); -} \ No newline at end of file diff --git a/spaces/Jikiwi/sovits-models/modules/losses.py b/spaces/Jikiwi/sovits-models/modules/losses.py deleted file mode 100644 index cd21799eccde350c3aac0bdd661baf96ed220147..0000000000000000000000000000000000000000 --- a/spaces/Jikiwi/sovits-models/modules/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import modules.commons as commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - 
r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - #print(logs_p) - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/Jishnnu/Emotion-Detection/README.md b/spaces/Jishnnu/Emotion-Detection/README.md deleted file mode 100644 index cea079d8a3c915ab13b572a3687b20a32dde173f..0000000000000000000000000000000000000000 --- a/spaces/Jishnnu/Emotion-Detection/README.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -title: Emotion Detection -emoji: 🌍 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false ---- - -This repository contains all the materials related to Case Study 1 in the book "A Guide to Applied Machine Learning for Biologists". -Use this link to access the dataset : https://www.kaggle.com/datasets/jonathanoheix/face-expression-recognition-dataset - -Please make sure that you change the existing path specification with your custom directory path in the source code. - -CREDITS -1. KISS Institute for Practical Robotics Face Detection Model – Harrcascade File (haarcascade_frontalface_default.xml) -2. Kaggle Dataset – Face Expression Recognition diff --git a/spaces/KaygNas/cut-it/vite.config.ts b/spaces/KaygNas/cut-it/vite.config.ts deleted file mode 100644 index cdfa874196f0a57ba3ce02841b08ae2ea7367f52..0000000000000000000000000000000000000000 --- a/spaces/KaygNas/cut-it/vite.config.ts +++ /dev/null @@ -1,30 +0,0 @@ -import { defineConfig } from 'vite' -import glsl from 'vite-plugin-glsl' - -export default defineConfig(({ command, mode }) => { - return { - resolve: { - alias: { - babylonjs: mode === 'development' ? 
'babylonjs/babylon.max' : 'babylonjs', - }, - }, - build: { - rollupOptions: { - output: [{ - manualChunks: (id) => { - if (id.includes('@babylonjs/core')) - return 'babylonjs-core' - else if (id.includes('@babylonjs/gui')) - return 'babylonjs-gui' - else if (id.includes('@babylonjs/loaders/glTF')) - return 'babylonjs-loaders-glTF' - }, - }], - }, - }, - plugins: [glsl()], - server: { - port: 3000, - }, - } -}) diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/wavernn/inference.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/wavernn/inference.py deleted file mode 100644 index 40cd3054b54b1111a213740b35bc8c50c76930cf..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/vocoder/wavernn/inference.py +++ /dev/null @@ -1,64 +0,0 @@ -from vocoder.wavernn.models.fatchord_version import WaveRNN -from vocoder.wavernn import hparams as hp -import torch - - -_model = None # type: WaveRNN - -def load_model(weights_fpath, verbose=True): - global _model, _device - - if verbose: - print("Building Wave-RNN") - _model = WaveRNN( - rnn_dims=hp.voc_rnn_dims, - fc_dims=hp.voc_fc_dims, - bits=hp.bits, - pad=hp.voc_pad, - upsample_factors=hp.voc_upsample_factors, - feat_dims=hp.num_mels, - compute_dims=hp.voc_compute_dims, - res_out_dims=hp.voc_res_out_dims, - res_blocks=hp.voc_res_blocks, - hop_length=hp.hop_length, - sample_rate=hp.sample_rate, - mode=hp.voc_mode - ) - - if torch.cuda.is_available(): - _model = _model.cuda() - _device = torch.device('cuda') - else: - _device = torch.device('cpu') - - if verbose: - print("Loading model weights at %s" % weights_fpath) - checkpoint = torch.load(weights_fpath, _device) - _model.load_state_dict(checkpoint['model_state']) - _model.eval() - - -def is_loaded(): - return _model is not None - - -def infer_waveform(mel, normalize=True, batched=True, target=8000, overlap=800, - progress_callback=None): - """ - Infers the waveform of a mel spectrogram output by the synthesizer (the format must match - that of the synthesizer!) 
- - :param normalize: - :param batched: - :param target: - :param overlap: - :return: - """ - if _model is None: - raise Exception("Please load Wave-RNN in memory before using it") - - if normalize: - mel = mel / hp.mel_max_abs_value - mel = torch.from_numpy(mel[None, ...]) - wav = _model.generate(mel, batched, target, overlap, hp.mu_law, progress_callback) - return wav, hp.sample_rate diff --git a/spaces/Kuachi/ai-voice/mel_processing.py b/spaces/Kuachi/ai-voice/mel_processing.py deleted file mode 100644 index 3e252e76320522a8a4195a60665168f22769aec2..0000000000000000000000000000000000000000 --- a/spaces/Kuachi/ai-voice/mel_processing.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = 
torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/sparse_rcnn.py b/spaces/KyanChen/RSPrompter/mmdet/models/detectors/sparse_rcnn.py deleted file mode 100644 index 75442a69e472953854ded9fc8c30ac4ab30535d3..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/sparse_rcnn.py +++ /dev/null @@ -1,31 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmdet.registry import MODELS -from mmdet.utils import ConfigType, OptConfigType, OptMultiConfig -from .two_stage import TwoStageDetector - - -@MODELS.register_module() -class SparseRCNN(TwoStageDetector): - r"""Implementation of `Sparse R-CNN: End-to-End Object Detection with - Learnable Proposals `_""" - - def __init__(self, - backbone: ConfigType, - neck: OptConfigType = None, - rpn_head: OptConfigType = None, - roi_head: OptConfigType = None, - train_cfg: OptConfigType = None, - test_cfg: OptConfigType = None, - data_preprocessor: OptConfigType = None, - init_cfg: OptMultiConfig = None) -> None: - super().__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - data_preprocessor=data_preprocessor, - init_cfg=init_cfg) - assert self.with_rpn, 'Sparse R-CNN and QueryInst ' \ - 'do not support external proposals' diff --git a/spaces/LightChen2333/OpenSLU/tools/parse_to_hugging_face.py b/spaces/LightChen2333/OpenSLU/tools/parse_to_hugging_face.py deleted file mode 100644 index 4231d89f997ea42f16cc54b1489ef5b5a42c076d..0000000000000000000000000000000000000000 --- a/spaces/LightChen2333/OpenSLU/tools/parse_to_hugging_face.py +++ /dev/null @@ -1,86 +0,0 @@ -''' -Author: Qiguang Chen -LastEditors: Qiguang Chen -Date: 2023-02-13 10:44:39 -LastEditTime: 2023-02-19 15:45:08 -Description: - -''' - -import argparse -import sys -import os - -sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) - -import dill - -from common.config import Config -from common.model_manager import ModelManager -from transformers import PretrainedConfig, PreTrainedModel, AutoModel, AutoTokenizer, PreTrainedTokenizer - -class PretrainedConfigForSLUToSave(PretrainedConfig): - def __init__(self, **kargs) -> None: - cfg = model_manager.config - kargs["name_or_path"] = cfg.base["name"] - kargs["return_dict"] = False - kargs["is_decoder"] = True - kargs["_id2label"] = {"intent": model_manager.intent_list, "slot": model_manager.slot_list} - kargs["_label2id"] = {"intent": model_manager.intent_dict, "slot": model_manager.slot_dict} - kargs["_num_labels"] = {"intent": len(model_manager.intent_list), "slot": len(model_manager.slot_list)} - kargs["tokenizer_class"] = cfg.base["name"] - kargs["vocab_size"] = model_manager.tokenizer.vocab_size - kargs["model"] = cfg.model - kargs["model"]["decoder"]["intent_classifier"]["intent_label_num"] = len(model_manager.intent_list) - kargs["model"]["decoder"]["slot_classifier"]["slot_label_num"] = len(model_manager.slot_list) - kargs["tokenizer"] = 
cfg.tokenizer - len(model_manager.slot_list) - super().__init__(**kargs) - -class PretrainedModelForSLUToSave(PreTrainedModel): - def __init__(self, config: PretrainedConfig, *inputs, **kwargs) -> None: - super().__init__(config, *inputs, **kwargs) - self.model = model_manager.model - self.config_class = config - - -class PreTrainedTokenizerForSLUToSave(PreTrainedTokenizer): - def __init__(self, **kwargs): - super().__init__(**kwargs) - self.tokenizer = model_manager.tokenizer - - # @overload - def save_vocabulary(self, save_directory: str, filename_prefix = None): - if filename_prefix is not None: - path = os.path.join(save_directory, filename_prefix+"-tokenizer.pkl") - else: - path = os.path.join(save_directory, "tokenizer.pkl") - # tokenizer_name=model_manager.config.tokenizer.get("_tokenizer_name_") - # if tokenizer_name == "word_tokenizer": - # self.tokenizer.save(path) - # else: - # torch.save() - with open(path,'wb') as f: - dill.dump(self.tokenizer,f) - return (path,) - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument('--config_path', '-cp', type=str, required=True) - parser.add_argument('--output_path', '-op', type=str, default="save/temp") - args = parser.parse_args() - config = Config.load_from_yaml(args.config_path) - config.base["train"] = False - config.base["test"] = False - if config.model_manager["load_dir"] is None: - config.model_manager["load_dir"] = config.model_manager["save_dir"] - model_manager = ModelManager(config) - model_manager.load() - model_manager.config.autoload_template() - - pretrained_config = PretrainedConfigForSLUToSave() - pretrained_model= PretrainedModelForSLUToSave(pretrained_config) - pretrained_model.save_pretrained(args.output_path) - - pretrained_tokenizer = PreTrainedTokenizerForSLUToSave() - pretrained_tokenizer.save_pretrained(args.output_path) diff --git a/spaces/Marshalls/testmtd/feature_extraction/madmom/audio/spectrogram.py b/spaces/Marshalls/testmtd/feature_extraction/madmom/audio/spectrogram.py deleted file mode 100644 index 93a5eec2236ed194a730f7aecaf92915ac4bd662..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/feature_extraction/madmom/audio/spectrogram.py +++ /dev/null @@ -1,1486 +0,0 @@ -# encoding: utf-8 -# pylint: disable=no-member -# pylint: disable=invalid-name -# pylint: disable=too-many-arguments -""" -This module contains spectrogram related functionality. - -""" - -from __future__ import absolute_import, division, print_function - -import inspect -import numpy as np - -from ..processors import Processor, SequentialProcessor, BufferProcessor -from .filters import (Filterbank, LogarithmicFilterbank, NUM_BANDS, FMIN, FMAX, - A4, NORM_FILTERS, UNIQUE_FILTERS) - - -def spec(stft): - """ - Computes the magnitudes of the complex Short Time Fourier Transform of a - signal. - - Parameters - ---------- - stft : numpy array - Complex STFT of a signal. - - Returns - ------- - spec : numpy array - Magnitude spectrogram. - - """ - return np.abs(stft) - - -def tuning_frequency(spectrogram, bin_frequencies, num_hist_bins=15, fref=A4): - """ - Determines the tuning frequency of the audio signal based on the given - magnitude spectrogram. - - To determine the tuning frequency, a weighted histogram of relative - deviations of the spectrogram bins towards the closest semitones is built. - - Parameters - ---------- - spectrogram : numpy array - Magnitude spectrogram. - bin_frequencies : numpy array - Frequencies of the spectrogram bins [Hz]. 
- num_hist_bins : int, optional - Number of histogram bins. - fref : float, optional - Reference tuning frequency [Hz]. - - Returns - ------- - tuning_frequency : float - Tuning frequency [Hz]. - - """ - from .filters import hz2midi - # interval of spectral bins from the reference frequency in semitones - semitone_int = hz2midi(bin_frequencies, fref=fref) - # deviation from the next semitone - semitone_dev = semitone_int - np.round(semitone_int) - # np.histogram accepts bin edges, so we need to apply an offset and use 1 - # more bin than given to build a histogram - offset = 0.5 / num_hist_bins - hist_bins = np.linspace(-0.5 - offset, 0.5 + offset, num_hist_bins + 1) - histogram = np.histogram(semitone_dev, weights=np.sum(spectrogram, axis=0), - bins=hist_bins) - # deviation of the bins (centre of the bins) - dev_bins = (histogram[1][:-1] + histogram[1][1:]) / 2. - # dominant deviation - dev = dev_bins[np.argmax(histogram[0])] - # calculate the tuning frequency - return fref * 2. ** (dev / 12.) - - -# magnitude spectrogram of STFT -class Spectrogram(np.ndarray): - """ - A :class:`Spectrogram` represents the magnitude spectrogram of a - :class:`.audio.stft.ShortTimeFourierTransform`. - - Parameters - ---------- - stft : :class:`.audio.stft.ShortTimeFourierTransform` instance - Short Time Fourier Transform. - kwargs : dict, optional - If no :class:`.audio.stft.ShortTimeFourierTransform` instance was - given, one is instantiated with these additional keyword arguments. - - Examples - -------- - Create a :class:`Spectrogram` from a - :class:`.audio.stft.ShortTimeFourierTransform` (or anything it can be - instantiated from: - - >>> spec = Spectrogram('tests/data/audio/sample.wav') - >>> spec # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS - Spectrogram([[ 3.15249, 4.00272, ..., 0.03634, 0.03671], - [ 4.28429, 2.85158, ..., 0.0219 , 0.02227], - ..., - [ 4.92274, 10.27775, ..., 0.00607, 0.00593], - [ 9.22709, 9.6387 , ..., 0.00981, 0.00984]], dtype=float32) - - """ - # pylint: disable=super-on-old-class - # pylint: disable=super-init-not-called - # pylint: disable=attribute-defined-outside-init - - def __init__(self, stft, **kwargs): - # this method is for documentation purposes only - pass - - def __new__(cls, stft, **kwargs): - from .stft import ShortTimeFourierTransform - # check stft type - if isinstance(stft, Spectrogram): - # already a Spectrogram - data = stft - elif isinstance(stft, ShortTimeFourierTransform): - # take the abs of the STFT - data = np.abs(stft) - else: - # try to instantiate a ShortTimeFourierTransform - stft = ShortTimeFourierTransform(stft, **kwargs) - # take the abs of the STFT - data = np.abs(stft) - # cast as Spectrogram - obj = np.asarray(data).view(cls) - # save additional attributes - obj.stft = stft - # return the object - return obj - - def __array_finalize__(self, obj): - if obj is None: - return - # set default values here, also needed for views - self.stft = getattr(obj, 'stft', None) - - @property - def num_frames(self): - """Number of frames.""" - return len(self) - - @property - def num_bins(self): - """Number of bins.""" - return int(self.shape[1]) - - @property - def bin_frequencies(self): - """Bin frequencies.""" - return self.stft.bin_frequencies - - def diff(self, **kwargs): - """ - Return the difference of the magnitude spectrogram. - - Parameters - ---------- - kwargs : dict - Keyword arguments passed to :class:`SpectrogramDifference`. - - Returns - ------- - diff : :class:`SpectrogramDifference` instance - The differences of the magnitude spectrogram. 
- - """ - return SpectrogramDifference(self, **kwargs) - - def filter(self, **kwargs): - """ - Return a filtered version of the magnitude spectrogram. - - Parameters - ---------- - kwargs : dict - Keyword arguments passed to :class:`FilteredSpectrogram`. - - Returns - ------- - filt_spec : :class:`FilteredSpectrogram` instance - Filtered version of the magnitude spectrogram. - - """ - return FilteredSpectrogram(self, **kwargs) - - def log(self, **kwargs): - """ - Return a logarithmically scaled version of the magnitude spectrogram. - - Parameters - ---------- - kwargs : dict - Keyword arguments passed to :class:`LogarithmicSpectrogram`. - - Returns - ------- - log_spec : :class:`LogarithmicSpectrogram` instance - Logarithmically scaled version of the magnitude spectrogram. - - """ - return LogarithmicSpectrogram(self, **kwargs) - - def tuning_frequency(self, **kwargs): - """ - Return the tuning frequency of the audio signal based on peaks of the - spectrogram. - - Parameters - ---------- - kwargs : dict - Keyword arguments passed to :func:`tuning_frequency`. - - Returns - ------- - tuning_frequency : float - Tuning frequency of the spectrogram. - - """ - from scipy.ndimage.filters import maximum_filter - # widen the spectrogram in frequency dimension - max_spec = maximum_filter(self, size=[1, 3]) - # get the peaks of the spectrogram - max_spec = self * (self == max_spec) - # determine the tuning frequency - return tuning_frequency(max_spec, self.bin_frequencies, **kwargs) - - -class SpectrogramProcessor(Processor): - """ - SpectrogramProcessor class. - - """ - def __init__(self, **kwargs): - pass - - def process(self, data, **kwargs): - """ - Create a Spectrogram from the given data. - - Parameters - ---------- - data : numpy array - Data to be processed. - kwargs : dict - Keyword arguments passed to :class:`Spectrogram`. - - Returns - ------- - spec : :class:`Spectrogram` instance - Spectrogram. - - """ - return Spectrogram(data, **kwargs) - - -# filtered spectrogram stuff -FILTERBANK = LogarithmicFilterbank - - -class FilteredSpectrogram(Spectrogram): - """ - FilteredSpectrogram class. - - Parameters - ---------- - spectrogram : :class:`Spectrogram` instance - Spectrogram. - filterbank : :class:`.audio.filters.Filterbank`, optional - Filterbank class or instance; if a class is given (rather than an - instance), one will be created with the given type and parameters. - num_bands : int, optional - Number of filter bands (per octave, depending on the type of the - `filterbank`). - fmin : float, optional - Minimum frequency of the filterbank [Hz]. - fmax : float, optional - Maximum frequency of the filterbank [Hz]. - fref : float, optional - Tuning frequency of the filterbank [Hz]. - norm_filters : bool, optional - Normalize the filter bands of the filterbank to area 1. - unique_filters : bool, optional - Indicate if the filterbank should contain only unique filters, i.e. - remove duplicate filters resulting from insufficient resolution at - low frequencies. - kwargs : dict, optional - If no :class:`Spectrogram` instance was given, one is instantiated - with these additional keyword arguments. - - Examples - -------- - Create a :class:`FilteredSpectrogram` from a :class:`Spectrogram` (or - anything it can be instantiated from. Per default a - :class:`.madmom.audio.filters.LogarithmicFilterbank` with 12 bands per - octave is used. 
- - >>> spec = FilteredSpectrogram('tests/data/audio/sample.wav') - >>> spec # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS - FilteredSpectrogram([[ 5.66156, 6.30141, ..., 0.05426, 0.06461], - [ 8.44266, 8.69582, ..., 0.07703, 0.0902 ], - ..., - [10.04626, 1.12018, ..., 0.0487 , 0.04282], - [ 8.60186, 6.81195, ..., 0.03721, 0.03371]], - dtype=float32) - - The resulting spectrogram has fewer frequency bins, with the centers of - the bins aligned logarithmically (lower frequency bins still have a linear - spacing due to the coarse resolution of the DFT at low frequencies): - - >>> spec.shape - (281, 81) - >>> spec.num_bins - 81 - >>> spec.bin_frequencies # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS - array([ 43.06641, 64.59961, 86.13281, 107.66602, - 129.19922, 150.73242, 172.26562, 193.79883, ..., - 10551.26953, 11175.73242, 11843.26172, 12553.85742, - 13285.98633, 14082.71484, 14922.50977, 15805.37109]) - - The filterbank used to filter the spectrogram is saved as an attribute: - - >>> spec.filterbank # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS - LogarithmicFilterbank([[0., 0., ..., 0., 0.], - [0., 0., ..., 0., 0.], - ..., - [0., 0., ..., 0., 0.], - [0., 0., ..., 0., 0.]], dtype=float32) - >>> spec.filterbank.num_bands - 81 - - The filterbank can be chosen at instantiation time: - - >>> from madmom.audio.filters import MelFilterbank - >>> spec = FilteredSpectrogram('tests/data/audio/sample.wav', \ - filterbank=MelFilterbank, num_bands=40) - >>> type(spec.filterbank) - - >>> spec.shape - (281, 40) - - """ - # pylint: disable=super-on-old-class - # pylint: disable=super-init-not-called - # pylint: disable=attribute-defined-outside-init - - def __init__(self, spectrogram, filterbank=FILTERBANK, num_bands=NUM_BANDS, - fmin=FMIN, fmax=FMAX, fref=A4, norm_filters=NORM_FILTERS, - unique_filters=UNIQUE_FILTERS, **kwargs): - # this method is for documentation purposes only - pass - - def __new__(cls, spectrogram, filterbank=FILTERBANK, num_bands=NUM_BANDS, - fmin=FMIN, fmax=FMAX, fref=A4, norm_filters=NORM_FILTERS, - unique_filters=UNIQUE_FILTERS, **kwargs): - # pylint: disable=unused-argument - # instantiate a Spectrogram if needed - if not isinstance(spectrogram, Spectrogram): - # try to instantiate a Spectrogram object - spectrogram = Spectrogram(spectrogram, **kwargs) - # instantiate a Filterbank if needed - if inspect.isclass(filterbank) and issubclass(filterbank, Filterbank): - # a Filterbank class is given, create a filterbank of this type - filterbank = filterbank(spectrogram.bin_frequencies, - num_bands=num_bands, fmin=fmin, fmax=fmax, - fref=fref, norm_filters=norm_filters, - unique_filters=unique_filters) - if not isinstance(filterbank, Filterbank): - raise TypeError('not a Filterbank type or instance: %s' % - filterbank) - # filter the spectrogram - data = np.dot(spectrogram, filterbank) - # cast as FilteredSpectrogram - obj = np.asarray(data).view(cls) - # save additional attributes - obj.filterbank = filterbank - # and those from the given spectrogram - obj.stft = spectrogram.stft - # return the object - return obj - - def __array_finalize__(self, obj): - if obj is None: - return - # set default values here, also needed for views - self.stft = getattr(obj, 'stft', None) - self.filterbank = getattr(obj, 'filterbank', None) - - @property - def bin_frequencies(self): - """Bin frequencies.""" - # use the center frequencies of the filterbank as bin_frequencies - return self.filterbank.center_frequencies - - -class FilteredSpectrogramProcessor(Processor): - """ - FilteredSpectrogramProcessor 
class. - - Parameters - ---------- - filterbank : :class:`.audio.filters.Filterbank` - Filterbank used to filter a spectrogram. - num_bands : int - Number of bands (per octave). - fmin : float, optional - Minimum frequency of the filterbank [Hz]. - fmax : float, optional - Maximum frequency of the filterbank [Hz]. - fref : float, optional - Tuning frequency of the filterbank [Hz]. - norm_filters : bool, optional - Normalize the filter of the filterbank to area 1. - unique_filters : bool, optional - Indicate if the filterbank should contain only unique filters, i.e. - remove duplicate filters resulting from insufficient resolution at - low frequencies. - - """ - - def __init__(self, filterbank=FILTERBANK, num_bands=NUM_BANDS, fmin=FMIN, - fmax=FMAX, fref=A4, norm_filters=NORM_FILTERS, - unique_filters=UNIQUE_FILTERS, **kwargs): - # pylint: disable=unused-argument - self.filterbank = filterbank - self.num_bands = num_bands - self.fmin = fmin - self.fmax = fmax - self.fref = fref - self.norm_filters = norm_filters - self.unique_filters = unique_filters - - def process(self, data, **kwargs): - """ - Create a FilteredSpectrogram from the given data. - - Parameters - ---------- - data : numpy array - Data to be processed. - kwargs : dict - Keyword arguments passed to :class:`FilteredSpectrogram`. - - Returns - ------- - filt_spec : :class:`FilteredSpectrogram` instance - Filtered spectrogram. - - """ - # update arguments passed to FilteredSpectrogram - args = dict(filterbank=self.filterbank, num_bands=self.num_bands, - fmin=self.fmin, fmax=self.fmax, fref=self.fref, - norm_filters=self.norm_filters, - unique_filters=self.unique_filters) - args.update(kwargs) - # instantiate a FilteredSpectrogram and return it - data = FilteredSpectrogram(data, **args) - # cache the filterbank - self.filterbank = data.filterbank - return data - - -# logarithmic spectrogram stuff -LOG = np.log10 -MUL = 1. -ADD = 1. - - -class LogarithmicSpectrogram(Spectrogram): - """ - LogarithmicSpectrogram class. - - Parameters - ---------- - spectrogram : :class:`Spectrogram` instance - Spectrogram. - log : numpy ufunc, optional - Logarithmic scaling function to apply. - mul : float, optional - Multiply the magnitude spectrogram with this factor before taking - the logarithm. - add : float, optional - Add this value before taking the logarithm of the magnitudes. - kwargs : dict, optional - If no :class:`Spectrogram` instance was given, one is instantiated - with these additional keyword arguments. - - Examples - -------- - Create a :class:`LogarithmicSpectrogram` from a :class:`Spectrogram` (or - anything it can be instantiated from. Per default `np.log10` is used as - the scaling function and a value of 1 is added to avoid negative values. 
- - >>> spec = LogarithmicSpectrogram('tests/data/audio/sample.wav') - >>> spec # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS - LogarithmicSpectrogram([[...]], dtype=float32) - >>> spec.min() - LogarithmicSpectrogram(0., dtype=float32) - - """ - # pylint: disable=super-on-old-class - # pylint: disable=super-init-not-called - # pylint: disable=attribute-defined-outside-init - - def __init__(self, spectrogram, log=LOG, mul=MUL, add=ADD, **kwargs): - # this method is for documentation purposes only - pass - - def __new__(cls, spectrogram, log=LOG, mul=MUL, add=ADD, **kwargs): - # instantiate a Spectrogram if needed - if not isinstance(spectrogram, Spectrogram): - # try to instantiate a Spectrogram object - spectrogram = Spectrogram(spectrogram, **kwargs) - data = spectrogram - else: - # make a copy of the spectrogram - data = spectrogram.copy() - # scale the spectrogram - if mul is not None: - data *= mul - if add is not None: - data += add - if log is not None: - log(data, data) - # cast as FilteredSpectrogram - obj = np.asarray(data).view(cls) - # save additional attributes - obj.mul = mul - obj.add = add - # and those from the given spectrogram - obj.stft = spectrogram.stft - obj.spectrogram = spectrogram - # return the object - return obj - - def __array_finalize__(self, obj): - if obj is None: - return - # set default values here, also needed for views - self.stft = getattr(obj, 'stft', None) - self.spectrogram = getattr(obj, 'spectrogram', None) - self.mul = getattr(obj, 'mul', MUL) - self.add = getattr(obj, 'add', ADD) - - @property - def filterbank(self): - """Filterbank.""" - return self.spectrogram.filterbank - - @property - def bin_frequencies(self): - """Bin frequencies.""" - return self.spectrogram.bin_frequencies - - -class LogarithmicSpectrogramProcessor(Processor): - """ - Logarithmic Spectrogram Processor class. - - Parameters - ---------- - log : numpy ufunc, optional - Loagrithmic scaling function to apply. - mul : float, optional - Multiply the magnitude spectrogram with this factor before taking the - logarithm. - add : float, optional - Add this value before taking the logarithm of the magnitudes. - - """ - - def __init__(self, log=LOG, mul=MUL, add=ADD, **kwargs): - # pylint: disable=unused-argument - self.log = log - self.mul = mul - self.add = add - - def process(self, data, **kwargs): - """ - Perform logarithmic scaling of a spectrogram. - - Parameters - ---------- - data : numpy array - Data to be processed. - kwargs : dict - Keyword arguments passed to :class:`LogarithmicSpectrogram`. - - Returns - ------- - log_spec : :class:`LogarithmicSpectrogram` instance - Logarithmically scaled spectrogram. - - """ - # update arguments passed to LogarithmicSpectrogram - args = dict(log=self.log, mul=self.mul, add=self.add) - args.update(kwargs) - # instantiate a LogarithmicSpectrogram - return LogarithmicSpectrogram(data, **args) - - @staticmethod - def add_arguments(parser, log=None, mul=None, add=None): - """ - Add spectrogram scaling related arguments to an existing parser. - - Parameters - ---------- - parser : argparse parser instance - Existing argparse parser object. - log : bool, optional - Take the logarithm of the spectrogram. - mul : float, optional - Multiply the magnitude spectrogram with this factor before taking - the logarithm. - add : float, optional - Add this value before taking the logarithm of the magnitudes. - - Returns - ------- - argparse argument group - Spectrogram scaling argument parser group. 
- - Notes - ----- - Parameters are included in the group only if they are not 'None'. - - """ - # add log related options to the existing parser - g = parser.add_argument_group('magnitude scaling arguments') - # log - if log is True: - g.add_argument('--linear', dest='log', action='store_const', - const=None, default=LOG, - help='linear magnitudes [default=logarithmic]') - elif log is False: - g.add_argument('--log', action='store_const', - const=LOG, default=None, - help='logarithmic magnitudes [default=linear]') - # mul - if mul is not None: - g.add_argument('--mul', action='store', type=float, - default=mul, help='multiplier (before taking ' - 'the log) [default=%(default).1f]') - # add - if add is not None: - g.add_argument('--add', action='store', type=float, - default=add, help='value added (before taking ' - 'the log) [default=%(default).1f]') - # return the group - return g - - -# logarithmic filtered spectrogram class -class LogarithmicFilteredSpectrogram(LogarithmicSpectrogram, - FilteredSpectrogram): - """ - LogarithmicFilteredSpectrogram class. - - Parameters - ---------- - spectrogram : :class:`FilteredSpectrogram` instance - Filtered spectrogram. - kwargs : dict, optional - If no :class:`FilteredSpectrogram` instance was given, one is - instantiated with these additional keyword arguments and - logarithmically scaled afterwards, i.e. passed to - :class:`LogarithmicSpectrogram`. - - Notes - ----- - For the filtering and scaling parameters, please refer to - :class:`FilteredSpectrogram` and :class:`LogarithmicSpectrogram`. - - See Also - -------- - :class:`FilteredSpectrogram` - :class:`LogarithmicSpectrogram` - - Examples - -------- - Create a :class:`LogarithmicFilteredSpectrogram` from a - :class:`Spectrogram` (or anything it can be instantiated from. This is - mainly a convenience class which first filters the spectrogram and then - scales it logarithmically. 
- - >>> spec = LogarithmicFilteredSpectrogram('tests/data/audio/sample.wav') - >>> spec # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS - LogarithmicFilteredSpectrogram([[0.82358, 0.86341, ..., 0.02295, 0.02719], - [0.97509, 0.98658, ..., 0.03223, 0.0375 ], - ..., - [1.04322, 0.32637, ..., 0.02065, 0.01821], - [0.98236, 0.89276, ..., 0.01587, 0.0144 ]], - dtype=float32) - >>> spec.shape - (281, 81) - >>> spec.filterbank # doctest: +ELLIPSIS - LogarithmicFilterbank([[...]], dtype=float32) - >>> spec.min() # doctest: +ELLIPSIS - LogarithmicFilteredSpectrogram(0.00831, dtype=float32) - - """ - # pylint: disable=super-on-old-class - # pylint: disable=super-init-not-called - # pylint: disable=attribute-defined-outside-init - - def __init__(self, spectrogram, **kwargs): - # this method is for documentation purposes only - pass - - def __new__(cls, spectrogram, **kwargs): - # get the log args - mul = kwargs.pop('mul', MUL) - add = kwargs.pop('add', ADD) - # instantiate a FilteredSpectrogram if needed - if not isinstance(spectrogram, FilteredSpectrogram): - spectrogram = FilteredSpectrogram(spectrogram, **kwargs) - # take the logarithm - data = LogarithmicSpectrogram(spectrogram, mul=mul, add=add, **kwargs) - # cast as LogarithmicFilteredSpectrogram - obj = np.asarray(data).view(cls) - # save additional attributes - obj.mul = data.mul - obj.add = data.add - # and those from the given spectrogram - obj.stft = spectrogram.stft - obj.spectrogram = spectrogram - # return the object - return obj - - @property - def filterbank(self): - """Filterbank.""" - return self.spectrogram.filterbank - - @property - def bin_frequencies(self): - """Bin frequencies.""" - return self.filterbank.center_frequencies - - -class LogarithmicFilteredSpectrogramProcessor(Processor): - """ - Logarithmic Filtered Spectrogram Processor class. - - Parameters - ---------- - filterbank : :class:`.audio.filters.Filterbank` - Filterbank used to filter a spectrogram. - num_bands : int - Number of bands (per octave). - fmin : float, optional - Minimum frequency of the filterbank [Hz]. - fmax : float, optional - Maximum frequency of the filterbank [Hz]. - fref : float, optional - Tuning frequency of the filterbank [Hz]. - norm_filters : bool, optional - Normalize the filter of the filterbank to area 1. - unique_filters : bool, optional - Indicate if the filterbank should contain only unique filters, i.e. - remove duplicate filters resulting from insufficient resolution at - low frequencies. - mul : float, optional - Multiply the magnitude spectrogram with this factor before taking the - logarithm. - add : float, optional - Add this value before taking the logarithm of the magnitudes. - - """ - - def __init__(self, filterbank=FILTERBANK, num_bands=NUM_BANDS, fmin=FMIN, - fmax=FMAX, fref=A4, norm_filters=NORM_FILTERS, - unique_filters=UNIQUE_FILTERS, mul=MUL, add=ADD, **kwargs): - # pylint: disable=unused-argument - self.filterbank = filterbank - self.num_bands = num_bands - self.fmin = fmin - self.fmax = fmax - self.fref = fref - self.norm_filters = norm_filters - self.unique_filters = unique_filters - self.mul = mul - self.add = add - - def process(self, data, **kwargs): - """ - Perform filtering and logarithmic scaling of a spectrogram. - - Parameters - ---------- - data : numpy array - Data to be processed. - kwargs : dict - Keyword arguments passed to - :class:`LogarithmicFilteredSpectrogram`. - - Returns - ------- - log_filt_spec : :class:`LogarithmicFilteredSpectrogram` instance - Logarithmically scaled filtered spectrogram. 
- - """ - # update arguments passed to LogarithmicFilteredSpectrogram - args = dict(filterbank=self.filterbank, num_bands=self.num_bands, - fmin=self.fmin, fmax=self.fmax, fref=self.fref, - norm_filters=self.norm_filters, - unique_filters=self.unique_filters, mul=self.mul, - add=self.add) - args.update(kwargs) - # instantiate a LogarithmicFilteredSpectrogram - data = LogarithmicFilteredSpectrogram(data, **args) - # cache the filterbank - self.filterbank = data.filterbank - return data - - -# spectrogram difference stuff -DIFF_RATIO = 0.5 -DIFF_FRAMES = None -DIFF_MAX_BINS = None -POSITIVE_DIFFS = False - - -def _diff_frames(diff_ratio, hop_size, frame_size, window=np.hanning): - """ - Compute the number of `diff_frames` for the given ratio of overlap. - - Parameters - ---------- - diff_ratio : float - Ratio of overlap of windows of two consecutive STFT frames. - hop_size : int - Samples between two adjacent frames. - frame_size : int - Size of one frames in samples. - window : numpy ufunc or array - Window function. - - Returns - ------- - diff_frames : int - Number of frames to calculate the difference to. - - """ - # calculate the number of diff frames on basis of the diff_ratio - # first sample of the window with a higher magnitude than given ratio - if hasattr(window, '__call__'): - # Note: if only a window function is given (default in audio.stft), - # generate a window of size `frame_size` with the given shape - window = window(frame_size) - sample = np.argmax(window > float(diff_ratio) * max(window)) - diff_samples = len(window) / 2 - sample - # convert to frames, must be at least 1 - return int(max(1, round(diff_samples / hop_size))) - - -class SpectrogramDifference(Spectrogram): - """ - SpectrogramDifference class. - - Parameters - ---------- - spectrogram : :class:`Spectrogram` instance - Spectrogram. - diff_ratio : float, optional - Calculate the difference to the frame at which the window used for the - STFT yields this ratio of the maximum height. - diff_frames : int, optional - Calculate the difference to the `diff_frames`-th previous frame (if - set, this overrides the value calculated from the `diff_ratio`) - diff_max_bins : int, optional - Apply a maximum filter with this width (in bins in frequency dimension) - to the spectrogram the difference is calculated to. - positive_diffs : bool, optional - Keep only the positive differences, i.e. set all diff values < 0 to 0. - keep_dims : bool, optional - Indicate if the dimensions (i.e. shape) of the spectrogram should be - kept. - kwargs : dict, optional - If no :class:`Spectrogram` instance was given, one is instantiated with - these additional keyword arguments. - - Notes - ----- - The first `diff_frames` frames will have a value of 0. - - If `keep_dims` is 'True' the returned difference has the same shape as the - spectrogram. This is needed if the diffs should be stacked on top of it. - If set to 'False', the length will be `diff_frames` frames shorter (mostly - used by the SpectrogramDifferenceProcessor which first buffers that many - frames. - - The SuperFlux algorithm [1]_ uses a maximum filtered spectrogram with 3 - `diff_max_bins` together with a 24 band logarithmic filterbank to calculate - the difference spectrogram with a `diff_ratio` of 0.5. - - The effect of this maximum filter applied to the spectrogram is that the - magnitudes are "widened" in frequency direction, i.e. the following - difference calculation is less sensitive against frequency fluctuations. 
- This effect is exploited to suppress false positive energy fragments - originating from vibrato. - - References - ---------- - .. [1] Sebastian Böck and Gerhard Widmer - "Maximum Filter Vibrato Suppression for Onset Detection" - Proceedings of the 16th International Conference on Digital Audio - Effects (DAFx), 2013. - - Examples - -------- - To obtain the SuperFlux feature as described above first create a filtered - and logarithmically spaced spectrogram: - - >>> spec = LogarithmicFilteredSpectrogram('tests/data/audio/sample.wav', \ - num_bands=24, fps=200) - >>> spec # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS - LogarithmicFilteredSpectrogram([[0.82358, 0.86341, ..., 0.02809, 0.02672], - [0.92514, 0.93211, ..., 0.03607, 0.0317 ], - ..., - [1.03826, 0.767 , ..., 0.01814, 0.01138], - [0.98236, 0.89276, ..., 0.01669, 0.00919]], - dtype=float32) - >>> spec.shape - (561, 140) - - Then use the temporal first order difference and apply a maximum filter - with 3 bands, keeping only the positive differences (i.e. rise in energy): - - >>> superflux = SpectrogramDifference(spec, diff_max_bins=3, \ - positive_diffs=True) - >>> superflux # doctest: +NORMALIZE_WHITESPACE +ELLIPSIS - SpectrogramDifference([[0. , 0. , ..., 0. , 0. ], - [0. , 0. , ..., 0. , 0. ], - ..., - [0.01941, 0. , ..., 0. , 0. ], - [0. , 0. , ..., 0. , 0. ]], dtype=float32) - - """ - # pylint: disable=super-on-old-class - # pylint: disable=super-init-not-called - # pylint: disable=attribute-defined-outside-init - - def __init__(self, spectrogram, diff_ratio=DIFF_RATIO, - diff_frames=DIFF_FRAMES, diff_max_bins=DIFF_MAX_BINS, - positive_diffs=POSITIVE_DIFFS, keep_dims=True, **kwargs): - # this method is for documentation purposes only - pass - - def __new__(cls, spectrogram, diff_ratio=DIFF_RATIO, - diff_frames=DIFF_FRAMES, diff_max_bins=DIFF_MAX_BINS, - positive_diffs=POSITIVE_DIFFS, keep_dims=True, **kwargs): - # instantiate a Spectrogram if needed - if not isinstance(spectrogram, Spectrogram): - # try to instantiate a Spectrogram object - spectrogram = Spectrogram(spectrogram, **kwargs) - - # calculate the number of diff frames to use - if diff_frames is None: - diff_frames = _diff_frames( - diff_ratio, hop_size=spectrogram.stft.frames.hop_size, - frame_size=spectrogram.stft.frames.frame_size, - window=spectrogram.stft.window) - - # apply a maximum filter to diff_spec if needed - if diff_max_bins is not None and diff_max_bins > 1: - from scipy.ndimage.filters import maximum_filter - # widen the spectrogram in frequency dimension - size = (1, int(diff_max_bins)) - diff_spec = maximum_filter(spectrogram, size=size) - else: - diff_spec = spectrogram - - # calculate the diff - if keep_dims: - diff = np.zeros_like(spectrogram) - diff[diff_frames:] = (spectrogram[diff_frames:] - - diff_spec[:-diff_frames]) - else: - diff = spectrogram[diff_frames:] - diff_spec[:-diff_frames] - - # positive differences only? 
- if positive_diffs: - np.maximum(diff, 0, out=diff) - - # cast as FilteredSpectrogram - obj = np.asarray(diff).view(cls) - # save additional attributes - obj.spectrogram = spectrogram - obj.diff_ratio = diff_ratio - obj.diff_frames = diff_frames - obj.diff_max_bins = diff_max_bins - obj.positive_diffs = positive_diffs - # return the object - return obj - - def __array_finalize__(self, obj): - if obj is None: - return - # set default values here, also needed for views - self.diff_ratio = getattr(obj, 'diff_ratio', 0.5) - self.diff_frames = getattr(obj, 'diff_frames', None) - self.diff_max_bins = getattr(obj, 'diff_max_bins', None) - self.positive_diffs = getattr(obj, 'positive_diffs', False) - - @property - def bin_frequencies(self): - """Bin frequencies.""" - return self.spectrogram.bin_frequencies - - def positive_diff(self): - """Positive diff.""" - return np.maximum(self, 0) - - -class SpectrogramDifferenceProcessor(Processor): - """ - Difference Spectrogram Processor class. - - Parameters - ---------- - diff_ratio : float, optional - Calculate the difference to the frame at which the window used for the - STFT yields this ratio of the maximum height. - diff_frames : int, optional - Calculate the difference to the `diff_frames`-th previous frame (if - set, this overrides the value calculated from the `diff_ratio`) - diff_max_bins : int, optional - Apply a maximum filter with this width (in bins in frequency dimension) - to the spectrogram the difference is calculated to. - positive_diffs : bool, optional - Keep only the positive differences, i.e. set all diff values < 0 to 0. - stack_diffs : numpy stacking function, optional - If 'None', only the differences are returned. If set, the diffs are - stacked with the underlying spectrogram data according to the `stack` - function: - - - ``np.vstack`` - the differences and spectrogram are stacked vertically, i.e. in time - direction, - - ``np.hstack`` - the differences and spectrogram are stacked horizontally, i.e. in - frequency direction, - - ``np.dstack`` - the differences and spectrogram are stacked in depth, i.e. return - them as a 3D representation with depth as the third dimension. - - """ - - def __init__(self, diff_ratio=DIFF_RATIO, diff_frames=DIFF_FRAMES, - diff_max_bins=DIFF_MAX_BINS, positive_diffs=POSITIVE_DIFFS, - stack_diffs=None, **kwargs): - # pylint: disable=unused-argument - self.diff_ratio = diff_ratio - self.diff_frames = diff_frames - self.diff_max_bins = diff_max_bins - self.positive_diffs = positive_diffs - self.stack_diffs = stack_diffs - # attributes needed for stateful processing - # Note: do not init the buffer here, since it depends on the data - self._buffer = None - - def __getstate__(self): - # copy everything to a picklable object - state = self.__dict__.copy() - # do not pickle attributes needed for stateful processing - state.pop('_buffer', None) - return state - - def __setstate__(self, state): - # restore pickled instance attributes - self.__dict__.update(state) - # add non-pickled attributes needed for stateful processing - self._buffer = None - - def process(self, data, reset=True, **kwargs): - """ - Perform a temporal difference calculation on the given data. - - Parameters - ---------- - data : numpy array - Data to be processed. - reset : bool, optional - Reset the spectrogram buffer before computing the difference. - kwargs : dict - Keyword arguments passed to :class:`SpectrogramDifference`. - - Returns - ------- - diff : :class:`SpectrogramDifference` instance - Spectrogram difference. 
- - Notes - ----- - If `reset` is 'True', the first `diff_frames` differences will be 0. - - """ - # update arguments passed to SpectrogramDifference - args = dict(diff_ratio=self.diff_ratio, diff_frames=self.diff_frames, - diff_max_bins=self.diff_max_bins, - positive_diffs=self.positive_diffs) - args.update(kwargs) - # calculate the number of diff frames - if self.diff_frames is None: - # Note: use diff_ration from args, not self.diff_ratio - self.diff_frames = _diff_frames( - args['diff_ratio'], frame_size=data.stft.frames.frame_size, - hop_size=data.stft.frames.hop_size, window=data.stft.window) - # init buffer or shift it - if self._buffer is None or reset: - # put diff_frames infs before the data (will be replaced by 0s) - init = np.empty((self.diff_frames, data.shape[1])) - init[:] = np.inf - data = np.insert(data, 0, init, axis=0) - # use the data for the buffer - self._buffer = BufferProcessor(init=data) - else: - # shift buffer by length of data and put new data at end of buffer - data = self._buffer(data) - # compute difference based on this data (reduce 1st dimension) - diff = SpectrogramDifference(data, keep_dims=False, **args) - # set all inf-diffs to 0 - diff[np.isinf(diff)] = 0 - # stack the diff and the data if needed - if self.stack_diffs is None: - return diff - # Note: don't use `data` directly, because it could be a str - # we ave to access diff.spectrogram (i.e. converted data) - return self.stack_diffs((diff.spectrogram[self.diff_frames:], diff)) - - def reset(self): - """Reset the SpectrogramDifferenceProcessor.""" - # reset cached spectrogram data - self._buffer = None - - @staticmethod - def add_arguments(parser, diff=None, diff_ratio=None, diff_frames=None, - diff_max_bins=None, positive_diffs=None): - """ - Add spectrogram difference related arguments to an existing parser. - - Parameters - ---------- - parser : argparse parser instance - Existing argparse parser object. - diff : bool, optional - Take the difference of the spectrogram. - diff_ratio : float, optional - Calculate the difference to the frame at which the window used for - the STFT yields this ratio of the maximum height. - diff_frames : int, optional - Calculate the difference to the `diff_frames`-th previous frame (if - set, this overrides the value calculated from the `diff_ratio`) - diff_max_bins : int, optional - Apply a maximum filter with this width (in bins in frequency - dimension) to the spectrogram the difference is calculated to. - positive_diffs : bool, optional - Keep only the positive differences, i.e. set all diff values < 0 - to 0. - - Returns - ------- - argparse argument group - Spectrogram difference argument parser group. - - Notes - ----- - Parameters are included in the group only if they are not 'None'. - - Only the `diff_frames` parameter behaves differently, it is included - if either the `diff_ratio` is set or a value != 'None' is given. 
- - """ - # add diff related options to the existing parser - g = parser.add_argument_group('spectrogram difference arguments') - # diff - if diff is True: - g.add_argument('--no_diff', dest='diff', action='store_false', - help='use the spectrogram [default=differences ' - 'of the spectrogram]') - elif diff is False: - g.add_argument('--diff', action='store_true', - help='use the differences of the spectrogram ' - '[default=spectrogram]') - # diff ratio - if diff_ratio is not None: - g.add_argument('--diff_ratio', action='store', type=float, - default=diff_ratio, - help='calculate the difference to the frame at ' - 'which the window of the STFT have this ratio ' - 'of the maximum height ' - '[default=%(default).1f]') - # diff frames - if diff_ratio is not None or diff_frames: - g.add_argument('--diff_frames', action='store', type=int, - default=diff_frames, - help='calculate the difference to the N-th previous' - ' frame (this overrides the value calculated ' - 'with `diff_ratio`) [default=%(default)s]') - # positive diffs - if positive_diffs is True: - g.add_argument('--all_diffs', dest='positive_diffs', - action='store_false', - help='keep both positive and negative diffs ' - '[default=only the positive diffs]') - elif positive_diffs is False: - g.add_argument('--positive_diffs', action='store_true', - help='keep only positive diffs ' - '[default=positive and negative diffs]') - # add maximum filter related options to the existing parser - if diff_max_bins is not None: - g.add_argument('--max_bins', action='store', type=int, - dest='diff_max_bins', default=diff_max_bins, - help='apply a maximum filter with this width (in ' - 'frequency bins) [default=%(default)d]') - # return the group - return g - - -class SuperFluxProcessor(SequentialProcessor): - """ - Spectrogram processor which sets the default values suitable for the - SuperFlux algorithm. - - """ - # pylint: disable=too-many-ancestors - - def __init__(self, **kwargs): - from .stft import ShortTimeFourierTransformProcessor - # set the default values (can be overwritten if set) - # we need an un-normalized LogarithmicFilterbank with 24 bands - filterbank = kwargs.pop('filterbank', FILTERBANK) - num_bands = kwargs.pop('num_bands', 24) - norm_filters = kwargs.pop('norm_filters', False) - # we want max filtered diffs - diff_ratio = kwargs.pop('diff_ratio', 0.5) - diff_max_bins = kwargs.pop('diff_max_bins', 3) - positive_diffs = kwargs.pop('positive_diffs', True) - # processing chain - stft = ShortTimeFourierTransformProcessor(**kwargs) - spec = SpectrogramProcessor(**kwargs) - filt = FilteredSpectrogramProcessor(filterbank=filterbank, - num_bands=num_bands, - norm_filters=norm_filters, - **kwargs) - log = LogarithmicSpectrogramProcessor(**kwargs) - diff = SpectrogramDifferenceProcessor(diff_ratio=diff_ratio, - diff_max_bins=diff_max_bins, - positive_diffs=positive_diffs, - **kwargs) - # sequentially process everything - super(SuperFluxProcessor, self).__init__([stft, spec, filt, log, diff]) - - -class MultiBandSpectrogram(FilteredSpectrogram): - """ - MultiBandSpectrogram class. - - Parameters - ---------- - spectrogram : :class:`Spectrogram` instance - Spectrogram. - crossover_frequencies : list or numpy array - List of crossover frequencies at which the `spectrogram` is split - into multiple bands. - fmin : float, optional - Minimum frequency of the filterbank [Hz]. - fmax : float, optional - Maximum frequency of the filterbank [Hz]. - norm_filters : bool, optional - Normalize the filter bands of the filterbank to area 1. 
- unique_filters : bool, optional - Indicate if the filterbank should contain only unique filters, i.e. - remove duplicate filters resulting from insufficient resolution at - low frequencies. - kwargs : dict, optional - If no :class:`Spectrogram` instance was given, one is instantiated - with these additional keyword arguments. - - Notes - ----- - The MultiBandSpectrogram is implemented as a :class:`Spectrogram` which - uses a :class:`.audio.filters.RectangularFilterbank` to combine multiple - frequency bins. - - """ - # pylint: disable=super-on-old-class - # pylint: disable=super-init-not-called - # pylint: disable=attribute-defined-outside-init - - def __init__(self, spectrogram, crossover_frequencies, fmin=FMIN, - fmax=FMAX, norm_filters=NORM_FILTERS, - unique_filters=UNIQUE_FILTERS, **kwargs): - # this method is for documentation purposes only - pass - - def __new__(cls, spectrogram, crossover_frequencies, fmin=FMIN, fmax=FMAX, - norm_filters=NORM_FILTERS, unique_filters=UNIQUE_FILTERS, - **kwargs): - from .filters import RectangularFilterbank - # instantiate a Spectrogram if needed - if not isinstance(spectrogram, Spectrogram): - spectrogram = Spectrogram(spectrogram, **kwargs) - # create a rectangular filterbank - filterbank = RectangularFilterbank(spectrogram.bin_frequencies, - crossover_frequencies, - fmin=fmin, fmax=fmax, - norm_filters=norm_filters, - unique_filters=unique_filters) - # filter the spectrogram - data = np.dot(spectrogram, filterbank) - # cast as FilteredSpectrogram - obj = np.asarray(data).view(cls) - # save additional attributes - obj.spectrogram = spectrogram - obj.filterbank = filterbank - obj.crossover_frequencies = crossover_frequencies - # return the object - return obj - - def __array_finalize__(self, obj): - if obj is None: - return - # set default values here, also needed for views - self.spectrogram = getattr(obj, 'spectrogram', None) - self.filterbank = getattr(obj, 'filterbank', None) - self.crossover_frequencies = getattr(obj, 'crossover_frequencies', - None) - - -class MultiBandSpectrogramProcessor(Processor): - """ - Spectrogram processor which combines the spectrogram magnitudes into - multiple bands. - - Parameters - ---------- - crossover_frequencies : list or numpy array - List of crossover frequencies at which a spectrogram is split into - the individual bands. - fmin : float, optional - Minimum frequency of the filterbank [Hz]. - fmax : float, optional - Maximum frequency of the filterbank [Hz]. - norm_filters : bool, optional - Normalize the filter bands of the filterbank to area 1. - unique_filters : bool, optional - Indicate if the filterbank should contain only unique filters, i.e. - remove duplicate filters resulting from insufficient resolution at - low frequencies. - - """ - - def __init__(self, crossover_frequencies, fmin=FMIN, fmax=FMAX, - norm_filters=NORM_FILTERS, unique_filters=UNIQUE_FILTERS, - **kwargs): - # pylint: disable=unused-argument - self.crossover_frequencies = np.array(crossover_frequencies) - self.fmin = fmin - self.fmax = fmax - self.norm_filters = norm_filters - self.unique_filters = unique_filters - - def process(self, data, **kwargs): - """ - Return the a multi-band representation of the given data. - - Parameters - ---------- - data : numpy array - Data to be processed. - kwargs : dict - Keyword arguments passed to :class:`MultiBandSpectrogram`. - - Returns - ------- - multi_band_spec : :class:`MultiBandSpectrogram` instance - Spectrogram split into multiple bands. 
- - """ - # update arguments passed to MultiBandSpectrogram - args = dict(crossover_frequencies=self.crossover_frequencies, - fmin=self.fmin, fmax=self.fmax, - norm_filters=self.norm_filters, - unique_filters=self.unique_filters) - args.update(kwargs) - # instantiate a MultiBandSpectrogram - return MultiBandSpectrogram(data, **args) - - -class SemitoneBandpassSpectrogram(FilteredSpectrogram): - """ - Construct a semitone spectrogram by using a time domain filterbank of - bandpass filters as described in [1]_. - - Parameters - ---------- - signal : Signal - Signal instance. - fps : float, optional - Frame rate of the spectrogram [Hz]. - fmin : float, optional - Lowest frequency of the spectrogram [Hz]. - fmax : float, optional - Highest frequency of the spectrogram [Hz]. - - References - ---------- - .. [1] Meinard Müller, - "Information retrieval for music and motion", Springer, 2007. - - """ - # pylint: disable=super-on-old-class - # pylint: disable=super-init-not-called - # pylint: disable=attribute-defined-outside-init - - def __init__(self, signal, fps=50., fmin=27.5, fmax=4200.): - # this method is for documentation purposes only - pass - - def __new__(cls, signal, fps=50., fmin=27.5, fmax=4200.): - from scipy.signal import filtfilt - from .filters import SemitoneBandpassFilterbank - from .signal import FramedSignal, Signal, resample - # check if we got a mono Signal - if not isinstance(signal, Signal) or signal.num_channels != 1: - signal = Signal(signal, num_channels=1) - sample_rate = float(signal.sample_rate) - # keep a reference to the original signal - signal_ = signal - # determine how many frames the filtered signal will have - num_frames = np.round(len(signal) * fps / sample_rate) + 1 - # compute the energy of the frames of the bandpass filtered signal - filterbank = SemitoneBandpassFilterbank(fmin=fmin, fmax=fmax) - bands = [] - for filt, band_sample_rate in zip(filterbank.filters, - filterbank.band_sample_rates): - # frames should overlap 50% - frame_size = np.round(2 * band_sample_rate / float(fps)) - # down-sample audio if needed - if band_sample_rate != signal.sample_rate: - signal = resample(signal_, band_sample_rate) - # filter the signal - b, a = filt - filtered_signal = filtfilt(b, a, signal) - # normalise the signal if it has an integer dtype - try: - filtered_signal /= np.iinfo(signal.dtype).max - except ValueError: - pass - # compute the energy of the filtered signal - # Note: 1) the energy of the signal is computed with respect to the - # reference sampling rate as in the MATLAB chroma toolbox - # 2) we do not sum here, but rather after splitting the - # signal into overlapping frames to avoid doubled - # computation due to the overlapping frames - filtered_signal = filtered_signal ** 2 / band_sample_rate * 22050. 
- # split into overlapping frames - frames = FramedSignal(filtered_signal, frame_size=frame_size, - fps=fps, sample_rate=band_sample_rate, - num_frames=num_frames) - # finally sum the energy of all frames - bands.append(np.sum(frames, axis=1)) - # cast as SemitoneBandpassSpectrogram - obj = np.vstack(bands).T.view(cls) - # save additional attributes - obj.filterbank = filterbank - obj.fps = fps - return obj - - def __array_finalize__(self, obj): - if obj is None: - return - # set default values here - self.filterbank = getattr(obj, 'filterbank', None) - self.fps = getattr(obj, 'fps', None) diff --git a/spaces/Marshalls/testmtd/script_train_gcp1.sh b/spaces/Marshalls/testmtd/script_train_gcp1.sh deleted file mode 100644 index 89041ce396887f9b859e59bb0581a8511770933a..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/script_train_gcp1.sh +++ /dev/null @@ -1,48 +0,0 @@ -#!/bin/bash - -#export TPU_IP_ADDRESS=10.104.22.146; -#export TPU_IP_ADDRESS=10.95.66.34; -#export TPU_IP_ADDRESS=10.65.226.162; -#export TPU_IP_ADDRESS=10.21.219.242; -#export TPU_IP_ADDRESS=10.93.151.138; -#export XRT_TPU_CONFIG="tpu_worker;0;$TPU_IP_ADDRESS:8470" -#export TPU_NAME="grpc://$TPU_IP_ADDRESS:8470" -export XRT_TPU_CONFIG="localservice;0;localhost:51011" - -py=python3 - -root_dir=data - -data_dir=${root_dir}/dance_combined2 -#exp=transflower_expmap_cr2 -exp=transglower_expmap_cr -hparams_file=dance_combined/${exp} - - -echo $exp - -$py training/train.py --data_dir=${data_dir} --max_epochs=300\ - --hparams_file=training/hparams/${hparams_file}.yaml \ - --experiment_name=$exp\ - --tpu_cores=8 \ - --workers=$(nproc) \ - #--continue_train \ - #--sync_batchnorm \ - #--optimizer=madgrad \ - #--learning_rate=1e-3 \ - #--batch_size=128 \ - #--use_x_transformers \ - #--use_rotary_pos_emb \ - #--accelerator=ddp \ - #--flow_dist=studentT \ - #--no-use_pos_emb_output \ - #--load_weights_only \ - #--stage2 \ - #--prior_use_x_transformers \ - #--output_lengths="3" \ - #--max_prior_loss_weight=0.01 \ - #--scales="[[16,0]]" \ - #--residual_scales="[[16,0]]" -# --glow_norm_layer="actnorm" \ - #--use_pos_emb_output \ - #--gpus=2 \ diff --git a/spaces/MathysL/AutoGPT4/autogpt/processing/html.py b/spaces/MathysL/AutoGPT4/autogpt/processing/html.py deleted file mode 100644 index 81387b12adab5023150c55f2075ddd40b554f386..0000000000000000000000000000000000000000 --- a/spaces/MathysL/AutoGPT4/autogpt/processing/html.py +++ /dev/null @@ -1,33 +0,0 @@ -"""HTML processing functions""" -from __future__ import annotations - -from bs4 import BeautifulSoup -from requests.compat import urljoin - - -def extract_hyperlinks(soup: BeautifulSoup, base_url: str) -> list[tuple[str, str]]: - """Extract hyperlinks from a BeautifulSoup object - - Args: - soup (BeautifulSoup): The BeautifulSoup object - base_url (str): The base URL - - Returns: - List[Tuple[str, str]]: The extracted hyperlinks - """ - return [ - (link.text, urljoin(base_url, link["href"])) - for link in soup.find_all("a", href=True) - ] - - -def format_hyperlinks(hyperlinks: list[tuple[str, str]]) -> list[str]: - """Format hyperlinks to be displayed to the user - - Args: - hyperlinks (List[Tuple[str, str]]): The hyperlinks to format - - Returns: - List[str]: The formatted hyperlinks - """ - return [f"{link_text} ({link_url})" for link_text, link_url in hyperlinks] diff --git a/spaces/Megareyka/imageRecognition/app.py b/spaces/Megareyka/imageRecognition/app.py deleted file mode 100644 index 
d03c68cb330c6e3cb30033b937d46427eceaa5ab..0000000000000000000000000000000000000000 --- a/spaces/Megareyka/imageRecognition/app.py +++ /dev/null @@ -1,21 +0,0 @@ -from transformers import pipeline -from transformers import ViTFeatureExtractor, ViTForImageClassification -from PIL import Image as img - -import numpy as np -import gradio as gr - -featureextractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224') -model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224') - -def classify(input_img): - filename = input_img - imagearray = input_img - inputs = featureextractor(images = imagearray, return_tensors="pt") - outputs = model(**inputs) - logits = outputs.logits - predicted_class_idx = logits.argmax(-1).item() - return model.config.id2label[predicted_class_idx] - -demo = gr.Interface(fn=classify, inputs="image", outputs="text") -demo.launch() \ No newline at end of file diff --git a/spaces/Menna2211/Text-Image/pages/image-captioning.py b/spaces/Menna2211/Text-Image/pages/image-captioning.py deleted file mode 100644 index 1bf2f583bb722ddfbd38a5cddd53f93a4a49c88b..0000000000000000000000000000000000000000 --- a/spaces/Menna2211/Text-Image/pages/image-captioning.py +++ /dev/null @@ -1,116 +0,0 @@ -from transformers import BertTokenizer -import torch -import time -import streamlit as st -from PIL import Image -import torchvision.transforms as transforms -import requests -from transformers import BlipProcessor, BlipForConditionalGeneration - - -tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') -start_token = tokenizer.convert_tokens_to_ids(tokenizer._cls_token) -end_token = tokenizer.convert_tokens_to_ids(tokenizer._sep_token) -def create_caption_and_mask(start_token, max_length): - caption_template = torch.zeros((1, max_length), dtype=torch.long) - mask_template = torch.ones((1, max_length), dtype=torch.bool) - caption_template[:, 0] = start_token - mask_template[:, 0] = False - return caption_template, mask_template - -caption, cap_mask = create_caption_and_mask(start_token, 128) - -# Model 1 -@st.cache_resource(show_spinner=False ,ttl=3600) -def get_model1(): - processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") - model1 = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base") - return processor, model1 - -processor, model1 =get_model1() - -# Model 2 -@st.cache_resource(show_spinner=False ,ttl=3600) -def get_model2(): - model2 = torch.hub.load('saahiluppal/catr', 'v3', pretrained=True) # you can choose between v1, v2 and v3 - return model2 - -model2 =get_model2() - -st.title("Image Captioning App") -# define the layout of your app -uploaded_file = st.file_uploader("Upload an image", type=["jpg", "jpeg", "png"]) -model = st.selectbox("Select a Model", ["Select a Model","Hugging-Face", "Github"]) -submit_button = st.button("Compute") -if model == "Select a Model" and not submit_button : - st.stop() -elif model == "Select a Model" and submit_button : - st.warning('Warning.....!!,Plz..... Select a Model ', icon="⚠️") - -if model == "Hugging-Face" and submit_button: - if uploaded_file is not None : - # Load the uploaded image - image = Image.open(uploaded_file).convert('RGB') - st.image(image) - # Use the pre-trained model to generate a caption for the uploaded image - progress_text = "Operation in progress. Please wait." 
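# Note on the loop below: every one of the 100 progress-bar ticks re-runs the BLIP
# processor and model1.generate() on the same image, so the caption is recomputed
# ~100 times just to animate the bar. A minimal sketch of a cheaper arrangement,
# reusing only names already defined in this branch, would run inference once and
# keep the loop purely for the bar animation:
#
#     inputs = processor(image, return_tensors="pt")
#     out = model1.generate(**inputs, max_new_tokens=100)
#     for percent_complete in range(100):
#         time.sleep(0.01)
#         bar.progress(percent_complete + 1, text=progress_text)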
- bar = st.progress(0, text=progress_text) - for percent_complete in range(100): - inputs = processor(image, return_tensors="pt") - out = model1.generate(**inputs , max_new_tokens=100) - time.sleep(0.1) - bar.progress(percent_complete + 1, text=progress_text) - - # Display the uploaded image and its generated caption - st.write("Generated Caption:") - st.write(processor.decode(out[0], skip_special_tokens=True)) - time.sleep(5) - st.success('Congratulations..!! task is done ', icon="✅") - st.balloons() - else: - st.error('Error...!!,Plz..... Upload an image' , icon="🚨") - -elif model == "Github" and submit_button: - if uploaded_file is not None : - # Load the uploaded image - im = Image.open(uploaded_file) - st.image(im) - # Preprocess the input image - transform = transforms.Compose([ - transforms.Resize((224, 224)), # Resize the image to 224x224 - transforms.ToTensor(), # Convert the image to a tensor - transforms.Normalize( # Normalize the image - mean=[0.485, 0.456, 0.406], - std=[0.229, 0.224, 0.225])]) - image = transform(im).unsqueeze(0) # Add a batch dimension - #@torch.no_grad() - def evaluate(): - for i in range(128-1): - predictions = model2(image, caption, cap_mask) - predictions = predictions[:, i, :] - predicted_id = torch.argmax(predictions, axis=-1) - - if predicted_id[0] == 102: - return caption - caption[:, i+1] = predicted_id[0] - cap_mask[:, i+1] = False - - return caption - - # Use the pre-trained model to generate a caption for the uploaded image - progress_text = "Operation in progress. Please wait." - bar = st.progress(0, text=progress_text) - for percent_complete in range(100): - output = evaluate() - time.sleep(0.1) - bar.progress(percent_complete + 1, text=progress_text) - - # Display the uploaded image and its generated caption - st.write("Generated Caption:") - result = tokenizer.decode(output[0].tolist(), skip_special_tokens=True) - st.write(result.capitalize()) - time.sleep(5) - st.success('Congratulations...!! task is done ', icon="✅") - st.balloons() - else: - st.error('Error...!!,Plz..... 
Upload an image' , icon="🚨") diff --git a/spaces/MercurialAi/OncoMedleyMini/OncoMedley/GMIC/modules.py b/spaces/MercurialAi/OncoMedleyMini/OncoMedley/GMIC/modules.py deleted file mode 100644 index a096e99fe482fa65f6fc51b22488342a85e19da8..0000000000000000000000000000000000000000 --- a/spaces/MercurialAi/OncoMedleyMini/OncoMedley/GMIC/modules.py +++ /dev/null @@ -1,496 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import numpy as np - -import OncoMedley.GMIC.tools as tools -from torchvision.models.resnet import conv3x3 - -def resolve_norm_layer(planes, norm_class, num_groups): - if norm_class.lower() == "batch": - return nn.BatchNorm2d(planes) - if norm_class.lower() == "group": - return nn.GroupNorm(num_groups, planes) - raise NotImplementedError( - f"norm_class must be batch or group, but {norm_class} was given" - ) - -class BasicBlockV2(nn.Module): - """ - Basic Residual Block of ResNet V2 - """ - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None, norm_class='batch', num_groups=1): - super(BasicBlockV2, self).__init__() - self.relu = nn.ReLU(inplace=True) - - self.bn1 = resolve_norm_layer(inplanes, norm_class, num_groups) - self.conv1 = conv3x3(inplanes, planes, stride=stride) - self.bn2 = resolve_norm_layer(planes, norm_class, num_groups) - self.conv2 = conv3x3(planes, planes, stride=1) - - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - # Phase 1 - out = self.bn1(x) - out = self.relu(out) - if self.downsample is not None: - residual = self.downsample(out) - out = self.conv1(out) - - # Phase 2 - out = self.bn2(out) - out = self.relu(out) - out = self.conv2(out) - - out += residual - - return out - - -class BasicBlockV1(nn.Module): - """ - Basic Residual Block of ResNet V1 - """ - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None, norm_class='batch', num_groups=1): - super(BasicBlockV1, self).__init__() - self.conv1 = conv3x3(inplanes, planes, stride) - self.bn1 = resolve_norm_layer(planes, norm_class, num_groups) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = resolve_norm_layer(planes, norm_class, num_groups) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class ResNetV2(nn.Module): - """ - Adapted fom torchvision ResNet, converted to v2 - """ - def __init__(self, - input_channels, num_filters, - first_layer_kernel_size, first_layer_conv_stride, - blocks_per_layer_list, block_strides_list, block_fn, - first_layer_padding=0, - first_pool_size=None, first_pool_stride=None, first_pool_padding=0, - growth_factor=2, - norm_class='batch', - num_groups=1,): - super(ResNetV2, self).__init__() - self.first_conv = nn.Conv2d( - in_channels=input_channels, out_channels=num_filters, - kernel_size=first_layer_kernel_size, - stride=first_layer_conv_stride, - padding=first_layer_padding, - bias=False, - ) - self.first_pool = nn.MaxPool2d( - kernel_size=first_pool_size, - stride=first_pool_stride, - padding=first_pool_padding, - ) - - self.layer_list = nn.ModuleList() - current_num_filters = num_filters - self.inplanes = num_filters - for i, (num_blocks, stride) in enumerate(zip( - blocks_per_layer_list, block_strides_list)): - 
self.layer_list.append(self._make_layer( - block=block_fn, - planes=current_num_filters, - blocks=num_blocks, - stride=stride, - norm_class=norm_class, - num_groups=num_groups, - )) - current_num_filters *= growth_factor - self.final_bn = resolve_norm_layer( - current_num_filters // growth_factor * block_fn.expansion, - norm_class=norm_class, - num_groups=num_groups, - ) - self.relu = nn.ReLU() - - # Expose attributes for downstream dimension computation - self.num_filters = num_filters - self.growth_factor = growth_factor - - def forward(self, x): - h = self.first_conv(x) - h = self.first_pool(h) - for i, layer in enumerate(self.layer_list): - h = layer(h) - h = self.final_bn(h) - h = self.relu(h) - return h - - def _make_layer(self, block, planes, blocks, stride=1, norm_class='batch', num_groups=1): - downsample = nn.Sequential( - nn.Conv2d(self.inplanes, planes * block.expansion, - kernel_size=1, stride=stride, bias=False), - ) - - layers_ = [ - block(self.inplanes, planes, stride, downsample, norm_class=norm_class, num_groups=num_groups) - ] - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers_.append(block(self.inplanes, planes, norm_class=norm_class, num_groups=num_groups)) - - return nn.Sequential(*layers_) - - -class ResNetV1(nn.Module): - """ - Class that represents a ResNet with classifier sequence removed - """ - def __init__(self, initial_filters, block, layers, input_channels=1): - - self.inplanes = initial_filters - self.num_layers = len(layers) - super(ResNetV1, self).__init__() - - # initial sequence - # the first sequence only has 1 input channel which is different from original ResNet - self.conv1 = nn.Conv2d(input_channels, initial_filters, kernel_size=7, stride=2, padding=3, bias=False) - self.bn1 = nn.BatchNorm2d(initial_filters) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - # residual sequence - for i in range(self.num_layers): - num_filters = initial_filters * pow(2,i) - num_stride = (1 if i == 0 else 2) - setattr(self, 'layer{0}'.format(i+1), self._make_layer(block, num_filters, layers[i], stride=num_stride)) - self.num_filter_last_seq = initial_filters * pow(2, self.num_layers-1) - - # initialization - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.BatchNorm2d): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - - def _make_layer(self, block, planes, blocks, stride=1): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d(self.inplanes, planes * block.expansion, - kernel_size=1, stride=stride, bias=False), - nn.BatchNorm2d(planes * block.expansion), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample)) - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(self.inplanes, planes)) - - return nn.Sequential(*layers) - - def forward(self, x): - # first sequence - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - - # residual sequences - for i in range(self.num_layers): - x = getattr(self, 'layer{0}'.format(i+1))(x) - return x - - -class DownsampleNetworkResNet18V1(ResNetV1): - """ - Downsampling using ResNet V1 - First conv is 7*7, stride 2, padding 3, cut 1/2 resolution - """ - def __init__(self): - super(DownsampleNetworkResNet18V1, self).__init__( - initial_filters=64, - block=BasicBlockV1, - 
layers=[2, 2, 2, 2], - input_channels=3) - - def forward(self, x): - last_feature_map = super(DownsampleNetworkResNet18V1, self).forward(x) - return last_feature_map - - -class AbstractMILUnit: - """ - An abstract class that represents an MIL unit module - """ - def __init__(self, parameters, parent_module): - self.parameters = parameters - self.parent_module = parent_module - - -class PostProcessingStandard(nn.Module): - """ - Unit in Global Network that takes in x_out and produce saliency maps - """ - def __init__(self, parameters): - super(PostProcessingStandard, self).__init__() - # map all filters to output classes - self.gn_conv_last = nn.Conv2d(parameters["post_processing_dim"], - parameters["num_classes"], - (1, 1), bias=False) - - self.saliency_nonlinearity = parameters["saliency_nonlinearity"] - - def forward(self, x_out): - out = self.gn_conv_last(x_out) - - if self.saliency_nonlinearity == "sigmoid": - return torch.sigmoid(out) - elif self.saliency_nonlinearity == "tanh_relu": - return torch.relu(torch.tanh(out)) - else: - raise KeyError(self.saliency_nonlinearity) - - -class GlobalNetwork(AbstractMILUnit): - """ - Implementation of Global Network using ResNet-22 - """ - def __init__(self, parameters, parent_module): - super(GlobalNetwork, self).__init__(parameters, parent_module) - # downsampling-branch - if "use_v1_global" in parameters and parameters["use_v1_global"]: - self.downsampling_branch = DownsampleNetworkResNet18V1() - else: - self.downsampling_branch = ResNetV2(input_channels=1, num_filters=16, - # first conv layer - first_layer_kernel_size=(7,7), first_layer_conv_stride=2, - first_layer_padding=3, - # first pooling layer - first_pool_size=3, first_pool_stride=2, first_pool_padding=0, - # res blocks architecture - blocks_per_layer_list=[2, 2, 2, 2, 2], - block_strides_list=[1, 2, 2, 2, 2], - block_fn=BasicBlockV2, - growth_factor=2, - norm_class=parameters["norm_class"], - num_groups=parameters["num_groups"],) - # post-processing - self.postprocess_module = PostProcessingStandard(parameters) - - def add_layers(self): - self.parent_module.ds_net = self.downsampling_branch - self.parent_module.left_postprocess_net = self.postprocess_module - - def forward(self, x): - # retrieve results from downsampling network at all 4 levels - last_feature_map = self.downsampling_branch.forward(x) - # feed into postprocessing network - cam = self.postprocess_module.forward(last_feature_map) - return last_feature_map, cam - -class TopTPercentAggregationFunctionFlattened(AbstractMILUnit): - """ - An aggregator that uses the SM to compute the y_global. - Use the sum of topK value - """ - - def __init__(self, parameters, parent_module): - super(TopTPercentAggregationFunctionFlattened, self).__init__(parameters, parent_module) - self.percent_t = parameters["percent_t"] - self.parent_module = parent_module - - def forward(self, cam_flattened, num_slices=1): - batch_size, num_class, flattened_size = cam_flattened.size() - - # top_t% pooling in 3D-GMIC applies percentage w.r.t. a single slice of image - top_t = int(round(flattened_size / num_slices * self.percent_t)) - assert flattened_size >= top_t, f"flattened_size: {flattened_size}, top_t: {top_t}" - - selected_area = cam_flattened.topk(top_t, dim=2)[0] - return selected_area.mean(dim=2) - -class RetrieveROIModule3D(AbstractMILUnit): - """ - A Regional Proposal Network instance that computes the locations of the crops - Greedy select crops with largest sums - - Assume the given batch is consisted on one image. 
The slices are provided via the batch dimension. - """ - - def __init__(self, parameters, parent_module, slice_threshold_dist=10): - super(RetrieveROIModule3D, self).__init__(parameters, parent_module) - self.crop_method = "upper_left" - self.num_crops_per_class = parameters["K"] - self.crop_shape = parameters["crop_shape"] - self.use_gpu = parameters["device_type"] == "gpu" - self.half = parameters["half"] - self.slice_threshold_dist = slice_threshold_dist - - def forward(self, x_original, cam_size, h_small): - """ - Function that use the low-res image to determine the position of the high-res crops - :param x_original: N, C, H, W pytorch tensor - :param cam_size: (h, w) - :param h_small: N, C, h_h, w_h pytorch tensor - :return: N, num_classes*k, 2 numpy matrix; returned coordinates are corresponding to x_small - """ - # retrieve parameters - _, _, H, W = x_original.size() - (h, w) = cam_size - N, C, h_h, w_h = h_small.size() - - # make sure that the size of h_small == size of cam_size - assert h_h == h, "h_h!=h" - assert w_h == w, "w_h!=w" - - # adjust crop_shape since crop shape is based on the original image - crop_x_adjusted = int(np.round(self.crop_shape[0] * h / H)) - crop_y_adjusted = int(np.round(self.crop_shape[1] * w / W)) - crop_shape_adjusted = (crop_x_adjusted, crop_y_adjusted) - - # greedily find the box with max sum of weights - current_images = h_small - all_max_position = [] - all_max_slices = [] - all_sampled_slices = [] - # combine channels - max_vals = current_images.view(N, C, -1).max(dim=2, keepdim=True)[0].unsqueeze(-1) - min_vals = current_images.view(N, C, -1).min(dim=2, keepdim=True)[0].unsqueeze(-1) - max_vals = max_vals.max(dim=0, keepdim=False)[0].unsqueeze(0) - min_vals = min_vals.min(dim=0, keepdim=False)[0].unsqueeze(0) - range_vals = max_vals - min_vals - normalize_images = current_images - min_vals - normalize_images = normalize_images / range_vals - - # get combination of normalized images from different classes - current_images = normalize_images.sum(dim=1, keepdim=True) - - num_crops_per_class = self.num_crops_per_class - - for _ in range(int(num_crops_per_class)): - # current_images.shape [N, 1, h, w] - # max_pos here is shape [N, 1, 2] - - # find the greedy crop position from all slices in parallel - # only one of these (the overall, globally-maximum crop position) will be considered at each step - max_values, max_pos = tools.get_max_window_3d(current_images, crop_shape_adjusted, "avg", True) - mask = tools.generate_mask_uplft_3d(current_images, crop_shape_adjusted, max_pos, self.use_gpu, self.half) - - # max_pos and mask are for each slice separately. - # We now further find the global maximum among all slices. 
- _, max_slice_idx = torch.max(max_values, 0) - max_slice_idx_item = max_slice_idx.item() - all_max_slices.append(max_slice_idx_item) - - # calculate the slice indices of the interval for the neighbors around the max_slice_idx - neighbor_min_slice_idx = max(0, max_slice_idx_item-self.slice_threshold_dist) - neighbor_max_slice_idx = min(N, max_slice_idx_item+self.slice_threshold_dist+1) - - # first, for all slices other than the slice where the current max item is found, reset values to 1 - # and then copy mask to the neighboring slices to erase the same location from all neighbors - mask[[i for i in range(N) if i != max_slice_idx_item], :, :] = 1 - mask[neighbor_min_slice_idx:neighbor_max_slice_idx] = mask[max_slice_idx_item] - - # if training, randomly choose one of the neighboring slices - if self.parent_module.training: - all_sampled_slices.append(np.random.randint(neighbor_min_slice_idx, neighbor_max_slice_idx)) - max_pos[neighbor_min_slice_idx:neighbor_max_slice_idx] = max_pos[max_slice_idx_item] - else: - all_sampled_slices.append(max_slice_idx_item) - - all_max_position.append(max_pos) - current_images = current_images * mask - return np.array(all_max_slices).reshape(-1,1), np.array(all_sampled_slices).reshape(-1,1), torch.cat(all_max_position, dim=1).data.cpu().numpy() - - -class LocalNetwork(AbstractMILUnit): - """ - The local network that takes a crop and computes its hidden representation - Use ResNet - """ - def add_layers(self): - """ - Function that add layers to the parent module that implements nn.Module - :return: - """ - self.parent_module.dn_resnet = ResNetV1(64, BasicBlockV1, [2,2,2,2], 3) - - def forward(self, x_crop): - """ - Function that takes in a single crop and return the hidden representation - :param x_crop: (N,C,h,w) - :return: - """ - # forward propagte using ResNet - res = self.parent_module.dn_resnet(x_crop.expand(-1, 3, -1 , -1)) - # global average pooling - res = res.mean(dim=2).mean(dim=2) - return res - - -class AttentionModule(AbstractMILUnit): - """ - The attention module takes multiple hidden representations and compute the attention-weighted average - Use Gated Attention Mechanism in https://arxiv.org/pdf/1802.04712.pdf - """ - def add_layers(self): - """ - Function that add layers to the parent module that implements nn.Module - :return: - """ - # The gated attention mechanism - self.parent_module.mil_attn_V = nn.Linear(512, 128, bias=False) - self.parent_module.mil_attn_U = nn.Linear(512, 128, bias=False) - self.parent_module.mil_attn_w = nn.Linear(128, 1, bias=False) - # classifier - self.parent_module.classifier_linear = nn.Linear(512, 1, bias=False) - - def forward(self, h_crops): - """ - Function that takes in the hidden representations of crops and use attention to generate a single hidden vector - :param h_small: - :param h_crops: - :return: - """ - batch_size, num_crops, h_dim = h_crops.size() - h_crops_reshape = h_crops.view(batch_size * num_crops, h_dim) - # calculate the attn score - attn_projection = torch.sigmoid(self.parent_module.mil_attn_U(h_crops_reshape)) * \ - torch.tanh(self.parent_module.mil_attn_V(h_crops_reshape)) - attn_score = self.parent_module.mil_attn_w(attn_projection) - # use softmax to map score to attention - attn_score_reshape = attn_score.view(batch_size, num_crops) - attn = F.softmax(attn_score_reshape, dim=1) - - # final hidden vector - z_weighted_avg = torch.sum(attn.unsqueeze(-1) * h_crops, 1) - - # map to the final layer - y_crops = self.parent_module.classifier_linear(z_weighted_avg) - return z_weighted_avg, 
attn, y_crops \ No newline at end of file diff --git a/spaces/MetaWabbit/Auto-GPT/run.bat b/spaces/MetaWabbit/Auto-GPT/run.bat deleted file mode 100644 index afbab57a0603a126b04845ec754d1ecf3fdea18d..0000000000000000000000000000000000000000 --- a/spaces/MetaWabbit/Auto-GPT/run.bat +++ /dev/null @@ -1,8 +0,0 @@ -@echo off -python scripts/check_requirements.py requirements.txt -if errorlevel 1 ( - echo Installing missing packages... - pip install -r requirements.txt -) -python -m autogpt %* -pause diff --git a/spaces/MinzChan/ChatGPT-PPT-Generate-With-Azure-OpenAI-API/README.md b/spaces/MinzChan/ChatGPT-PPT-Generate-With-Azure-OpenAI-API/README.md deleted file mode 100644 index 10fd9d2903fdaa24603b8fd2371555060b56745e..0000000000000000000000000000000000000000 --- a/spaces/MinzChan/ChatGPT-PPT-Generate-With-Azure-OpenAI-API/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: ChatGPT PPT Generate -emoji: 🌍 -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false -duplicated_from: Alpaca233/ChatGPT-PPT-Generate ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -form [here](https://github.com/AmNotAGoose/Python-PPTX-ChatGPT-Presentation-Generator) diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/preparers/gatherers/mono_gatherer.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/preparers/gatherers/mono_gatherer.py deleted file mode 100644 index bad35fa2f1a46362ac3e515fbe5281621143118a..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/datasets/preparers/gatherers/mono_gatherer.py +++ /dev/null @@ -1,34 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os.path as osp -from typing import Tuple - -from mmocr.registry import DATA_GATHERERS -from .base import BaseGatherer - - -@DATA_GATHERERS.register_module() -class MonoGatherer(BaseGatherer): - """Gather the dataset file. Specifically for the case that only one - annotation file is needed. For example, - - img_001.jpg \ - img_002.jpg ---> train.json - img_003.jpg / - - Args: - ann_name (str): The name of the annotation file. - """ - - def __init__(self, ann_name: str, **kwargs) -> None: - super().__init__(**kwargs) - - self.ann_name = ann_name - - def __call__(self) -> Tuple[str, str]: - """ - Returns: - tuple(str, str): The directory of the image and the path of - annotation file. - """ - - return (self.img_dir, osp.join(self.ann_dir, self.ann_name)) diff --git a/spaces/NCTCMumbai/NCTC/models/official/recommendation/ncf_input_pipeline.py b/spaces/NCTCMumbai/NCTC/models/official/recommendation/ncf_input_pipeline.py deleted file mode 100644 index 4dab86c43bfde14eb5adfc82e52b30b315060217..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/recommendation/ncf_input_pipeline.py +++ /dev/null @@ -1,200 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== -"""NCF model input pipeline.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import functools - -# pylint: disable=g-bad-import-order -import tensorflow.compat.v2 as tf -# pylint: enable=g-bad-import-order - -from official.recommendation import constants as rconst -from official.recommendation import movielens -from official.recommendation import data_pipeline - -NUM_SHARDS = 16 - - -def create_dataset_from_tf_record_files(input_file_pattern, - pre_batch_size, - batch_size, - is_training=True, - rebatch=False): - """Creates dataset from (tf)records files for training/evaluation.""" - - files = tf.data.Dataset.list_files(input_file_pattern, shuffle=is_training) - - def make_dataset(files_dataset, shard_index): - """Returns dataset for sharded tf record files.""" - if pre_batch_size != batch_size: - raise ValueError("Pre-batch ({}) size is not equal to batch " - "size ({})".format(pre_batch_size, batch_size)) - files_dataset = files_dataset.shard(NUM_SHARDS, shard_index) - dataset = files_dataset.interleave( - tf.data.TFRecordDataset, - num_parallel_calls=tf.data.experimental.AUTOTUNE) - decode_fn = functools.partial( - data_pipeline.DatasetManager.deserialize, - batch_size=pre_batch_size, - is_training=is_training) - dataset = dataset.map( - decode_fn, num_parallel_calls=tf.data.experimental.AUTOTUNE) - return dataset - - dataset = tf.data.Dataset.range(NUM_SHARDS) - map_fn = functools.partial(make_dataset, files) - dataset = dataset.interleave( - map_fn, - cycle_length=NUM_SHARDS, - num_parallel_calls=tf.data.experimental.AUTOTUNE) - - if rebatch: - # A workaround for TPU Pod evaluation dataset. - # TODO (b/162341937) remove once it's fixed. - dataset = dataset.unbatch() - dataset = dataset.batch(pre_batch_size) - - dataset = dataset.prefetch(tf.data.experimental.AUTOTUNE) - return dataset - - -def create_dataset_from_data_producer(producer, params): - """Return dataset online-generating data.""" - - def preprocess_train_input(features, labels): - """Pre-process the training data. - - This is needed because - - The label needs to be extended to be used in the loss fn - - We need the same inputs for training and eval so adding fake inputs - for DUPLICATE_MASK in training data. - - Args: - features: Dictionary of features for training. - labels: Training labels. - - Returns: - Processed training features. - """ - fake_dup_mask = tf.zeros_like(features[movielens.USER_COLUMN]) - features[rconst.DUPLICATE_MASK] = fake_dup_mask - features[rconst.TRAIN_LABEL_KEY] = labels - return features - - train_input_fn = producer.make_input_fn(is_training=True) - train_input_dataset = train_input_fn(params).map(preprocess_train_input) - - def preprocess_eval_input(features): - """Pre-process the eval data. - - This is needed because: - - The label needs to be extended to be used in the loss fn - - We need the same inputs for training and eval so adding fake inputs - for VALID_PT_MASK in eval data. - - Args: - features: Dictionary of features for evaluation. - - Returns: - Processed evaluation features. 
- """ - labels = tf.cast(tf.zeros_like(features[movielens.USER_COLUMN]), tf.bool) - fake_valid_pt_mask = tf.cast( - tf.zeros_like(features[movielens.USER_COLUMN]), tf.bool) - features[rconst.VALID_POINT_MASK] = fake_valid_pt_mask - features[rconst.TRAIN_LABEL_KEY] = labels - return features - - eval_input_fn = producer.make_input_fn(is_training=False) - eval_input_dataset = eval_input_fn(params).map(preprocess_eval_input) - - return train_input_dataset, eval_input_dataset - - -def create_ncf_input_data(params, - producer=None, - input_meta_data=None, - strategy=None): - """Creates NCF training/evaluation dataset. - - Args: - params: Dictionary containing parameters for train/evaluation data. - producer: Instance of BaseDataConstructor that generates data online. Must - not be None when params['train_dataset_path'] or - params['eval_dataset_path'] is not specified. - input_meta_data: A dictionary of input metadata to be used when reading data - from tf record files. Must be specified when params["train_input_dataset"] - is specified. - strategy: Distribution strategy used for distributed training. If specified, - used to assert that evaluation batch size is correctly a multiple of - total number of devices used. - - Returns: - (training dataset, evaluation dataset, train steps per epoch, - eval steps per epoch) - - Raises: - ValueError: If data is being generated online for when using TPU's. - """ - # NCF evaluation metric calculation logic assumes that evaluation data - # sample size are in multiples of (1 + number of negative samples in - # evaluation) for each device. As so, evaluation batch size must be a - # multiple of (number of replicas * (1 + number of negative samples)). - num_devices = strategy.num_replicas_in_sync if strategy else 1 - if (params["eval_batch_size"] % (num_devices * - (1 + rconst.NUM_EVAL_NEGATIVES))): - raise ValueError("Evaluation batch size must be divisible by {} " - "times {}".format(num_devices, - (1 + rconst.NUM_EVAL_NEGATIVES))) - - if params["train_dataset_path"]: - assert params["eval_dataset_path"] - - train_dataset = create_dataset_from_tf_record_files( - params["train_dataset_path"], - input_meta_data["train_prebatch_size"], - params["batch_size"], - is_training=True, - rebatch=False) - - # Re-batch evaluation dataset for TPU Pods. - # TODO (b/162341937) remove once it's fixed. - eval_rebatch = (params["use_tpu"] and strategy.num_replicas_in_sync > 8) - eval_dataset = create_dataset_from_tf_record_files( - params["eval_dataset_path"], - input_meta_data["eval_prebatch_size"], - params["eval_batch_size"], - is_training=False, - rebatch=eval_rebatch) - - num_train_steps = int(input_meta_data["num_train_steps"]) - num_eval_steps = int(input_meta_data["num_eval_steps"]) - else: - if params["use_tpu"]: - raise ValueError("TPU training does not support data producer yet. " - "Use pre-processed data.") - - assert producer - # Start retrieving data from producer. 
- train_dataset, eval_dataset = create_dataset_from_data_producer( - producer, params) - num_train_steps = producer.train_batches_per_epoch - num_eval_steps = producer.eval_batches_per_epoch - - return train_dataset, eval_dataset, num_train_steps, num_eval_steps diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/learning_rates.py b/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/learning_rates.py deleted file mode 100644 index ecc24ffadb073c79f71725b1adcb61cbd83127cd..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/learning_rates.py +++ /dev/null @@ -1,98 +0,0 @@ -# Copyright 2018 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Learning rate schedule.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import functools - -import numpy as np -import tensorflow as tf -from official.modeling.hyperparams import params_dict - - -class StepLearningRateWithLinearWarmup(tf.keras.optimizers.schedules.LearningRateSchedule): - """Class to generate learning rate tensor.""" - - def __init__(self, total_steps, params): - """Creates the step learning rate tensor with linear warmup.""" - super(StepLearningRateWithLinearWarmup, self).__init__() - self._total_steps = total_steps - assert isinstance(params, (dict, params_dict.ParamsDict)) - if isinstance(params, dict): - params = params_dict.ParamsDict(params) - self._params = params - - def __call__(self, global_step): - warmup_lr = self._params.warmup_learning_rate - warmup_steps = self._params.warmup_steps - init_lr = self._params.init_learning_rate - lr_levels = self._params.learning_rate_levels - lr_steps = self._params.learning_rate_steps - linear_warmup = ( - warmup_lr + tf.cast(global_step, dtype=tf.float32) / warmup_steps * - (init_lr - warmup_lr)) - learning_rate = tf.where(global_step < warmup_steps, linear_warmup, init_lr) - - for next_learning_rate, start_step in zip(lr_levels, lr_steps): - learning_rate = tf.where(global_step >= start_step, next_learning_rate, - learning_rate) - return learning_rate - - def get_config(self): - return {'_params': self._params.as_dict()} - - -class CosineLearningRateWithLinearWarmup(tf.keras.optimizers.schedules.LearningRateSchedule): - """Class to generate learning rate tensor.""" - - def __init__(self, total_steps, params): - """Creates the consine learning rate tensor with linear warmup.""" - super(CosineLearningRateWithLinearWarmup, self).__init__() - self._total_steps = total_steps - assert isinstance(params, (dict, params_dict.ParamsDict)) - if isinstance(params, dict): - params = params_dict.ParamsDict(params) - self._params = params - - def __call__(self, global_step): - global_step = tf.cast(global_step, dtype=tf.float32) - warmup_lr = self._params.warmup_learning_rate - warmup_steps = 
self._params.warmup_steps - init_lr = self._params.init_learning_rate - total_steps = self._total_steps - linear_warmup = ( - warmup_lr + global_step / warmup_steps * (init_lr - warmup_lr)) - cosine_learning_rate = ( - init_lr * (tf.cos(np.pi * (global_step - warmup_steps) / - (total_steps - warmup_steps)) + 1.0) / 2.0) - learning_rate = tf.where(global_step < warmup_steps, linear_warmup, - cosine_learning_rate) - return learning_rate - - def get_config(self): - return {'_params': self._params.as_dict()} - - -def learning_rate_generator(total_steps, params): - """The learning rate function generator.""" - if params.type == 'step': - return StepLearningRateWithLinearWarmup(total_steps, params) - elif params.type == 'cosine': - return CosineLearningRateWithLinearWarmup(total_steps, params) - else: - raise ValueError('Unsupported learning rate type: {}.'.format(params.type)) diff --git a/spaces/NeuralInternet/Alpaca-LoRA-Serve/app.py b/spaces/NeuralInternet/Alpaca-LoRA-Serve/app.py deleted file mode 100644 index 0884e9380f2b3397e0f2851da9f1d6e993aa0612..0000000000000000000000000000000000000000 --- a/spaces/NeuralInternet/Alpaca-LoRA-Serve/app.py +++ /dev/null @@ -1,186 +0,0 @@ -from strings import TITLE, ABSTRACT, BOTTOM_LINE -from strings import DEFAULT_EXAMPLES -from strings import SPECIAL_STRS -from styles import PARENT_BLOCK_CSS - -import time -import gradio as gr - -from model import load_model -from gen import get_output_batch, StreamModel -from utils import generate_prompt, post_processes_batch, post_process_stream, get_generation_config, common_post_process - -model, tokenizer = load_model( - base="decapoda-research/llama-13b-hf", - finetuned="chansung/alpaca-lora-13b" -) - -model = StreamModel(model, tokenizer) - -def chat_stream( - context, - instruction, - state_chatbot, -): - # print(instruction) - - # user input should be appropriately formatted (don't be confused by the function name) - instruction_display = common_post_process(instruction) - instruction_prompt = generate_prompt(instruction, state_chatbot, context) - bot_response = model( - instruction_prompt, - max_tokens=256, - temperature=1, - top_p=0.9 - ) - - instruction_display = None if instruction_display == SPECIAL_STRS["continue"] else instruction_display - state_chatbot = state_chatbot + [(instruction_display, None)] - - prev_index = 0 - agg_tokens = "" - cutoff_idx = 0 - for tokens in bot_response: - tokens = tokens.strip() - cur_token = tokens[prev_index:] - - if "#" in cur_token and agg_tokens == "": - cutoff_idx = tokens.find("#") - agg_tokens = tokens[cutoff_idx:] - - if agg_tokens != "": - if len(agg_tokens) < len("### Instruction:") : - agg_tokens = agg_tokens + cur_token - elif len(agg_tokens) >= len("### Instruction:"): - if tokens.find("### Instruction:") > -1: - processed_response, _ = post_process_stream(tokens[:tokens.find("### Instruction:")].strip()) - - state_chatbot[-1] = ( - instruction_display, - processed_response - ) - yield (state_chatbot, state_chatbot, context) - break - else: - agg_tokens = "" - cutoff_idx = 0 - - if agg_tokens == "": - processed_response, to_exit = post_process_stream(tokens) - state_chatbot[-1] = (instruction_display, processed_response) - yield (state_chatbot, state_chatbot, context) - - if to_exit: - break - - prev_index = len(tokens) - - yield ( - state_chatbot, - state_chatbot, - gr.Textbox.update(value=tokens) if instruction_display == SPECIAL_STRS["summarize"] else context - ) - -def chat_batch( - contexts, - instructions, - state_chatbots, -): - state_results = [] - 
ctx_results = [] - - instruct_prompts = [ - generate_prompt(instruct, histories, ctx) - for ctx, instruct, histories in zip(contexts, instructions, state_chatbots) - ] - - bot_responses = get_output_batch( - model, tokenizer, instruct_prompts, generation_config - ) - bot_responses = post_processes_batch(bot_responses) - - for ctx, instruction, bot_response, state_chatbot in zip(contexts, instructions, bot_responses, state_chatbots): - new_state_chatbot = state_chatbot + [('' if instruction == SPECIAL_STRS["continue"] else instruction, bot_response)] - ctx_results.append(gr.Textbox.update(value=bot_response) if instruction == SPECIAL_STRS["summarize"] else ctx) - state_results.append(new_state_chatbot) - - return (state_results, state_results, ctx_results) - -def reset_textbox(): - return gr.Textbox.update(value='') - -with gr.Blocks(css=PARENT_BLOCK_CSS) as demo: - state_chatbot = gr.State([]) - - with gr.Column(elem_id='col_container'): - gr.Markdown(f"## {TITLE}\n\n\n{ABSTRACT}") - - with gr.Accordion("Context Setting", open=False): - context_txtbox = gr.Textbox(placeholder="Surrounding information to AI", label="Enter Context") - hidden_txtbox = gr.Textbox(placeholder="", label="Order", visible=False) - - chatbot = gr.Chatbot(elem_id='chatbot', label="Alpaca-LoRA") - instruction_txtbox = gr.Textbox(placeholder="What do you want to say to AI?", label="Instruction") - send_prompt_btn = gr.Button(value="Send Prompt") - - with gr.Accordion("Helper Buttons", open=False): - gr.Markdown(f"`Continue` lets AI to complete the previous incomplete answers. `Summarize` lets AI to summarize the conversations so far.") - continue_txtbox = gr.Textbox(value=SPECIAL_STRS["continue"], visible=False) - summrize_txtbox = gr.Textbox(value=SPECIAL_STRS["summarize"], visible=False) - - continue_btn = gr.Button(value="Continue") - summarize_btn = gr.Button(value="Summarize") - - gr.Markdown("#### Examples") - for idx, examples in enumerate(DEFAULT_EXAMPLES): - with gr.Accordion(examples["title"], open=False): - gr.Examples( - examples=examples["examples"], - inputs=[ - hidden_txtbox, instruction_txtbox - ], - label=None - ) - - gr.Markdown(f"{BOTTOM_LINE}") - - send_prompt_btn.click( - chat_stream, - [context_txtbox, instruction_txtbox, state_chatbot], - [state_chatbot, chatbot, context_txtbox], - ) - send_prompt_btn.click( - reset_textbox, - [], - [instruction_txtbox], - ) - - continue_btn.click( - chat_stream, - [context_txtbox, continue_txtbox, state_chatbot], - [state_chatbot, chatbot, context_txtbox], - ) - continue_btn.click( - reset_textbox, - [], - [instruction_txtbox], - ) - - summarize_btn.click( - chat_stream, - [context_txtbox, summrize_txtbox, state_chatbot], - [state_chatbot, chatbot, context_txtbox], - ) - summarize_btn.click( - reset_textbox, - [], - [instruction_txtbox], - ) - -demo.queue( - concurrency_count=2, - max_size=100, -).launch( - max_threads=2, - server_name="0.0.0.0", -) \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/distributed/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/tests/distributed/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/OFA-Sys/OFA-vqa/utils/BPE/__init__.py b/spaces/OFA-Sys/OFA-vqa/utils/BPE/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git 
a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/roi_heads/custom_roi_heads.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/roi_heads/custom_roi_heads.py deleted file mode 100644 index 90fadf1a9667cf836223945b22c5147b89ad98a4..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/roi_heads/custom_roi_heads.py +++ /dev/null @@ -1,185 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import numpy as np -import json -import math -import torch -from torch import nn -from torch.autograd.function import Function -from typing import Dict, List, Optional, Tuple, Union - -from detectron2.layers import ShapeSpec -from detectron2.structures import Boxes, Instances, pairwise_iou -from detectron2.utils.events import get_event_storage - -from detectron2.modeling.box_regression import Box2BoxTransform -from detectron2.modeling.roi_heads.fast_rcnn import fast_rcnn_inference -from detectron2.modeling.roi_heads.roi_heads import ROI_HEADS_REGISTRY, StandardROIHeads -from detectron2.modeling.roi_heads.cascade_rcnn import CascadeROIHeads -from detectron2.modeling.roi_heads.box_head import build_box_head -from .custom_fast_rcnn import CustomFastRCNNOutputLayers - - -@ROI_HEADS_REGISTRY.register() -class CustomROIHeads(StandardROIHeads): - @classmethod - def _init_box_head(self, cfg, input_shape): - ret = super()._init_box_head(cfg, input_shape) - del ret['box_predictor'] - ret['box_predictor'] = CustomFastRCNNOutputLayers( - cfg, ret['box_head'].output_shape) - self.debug = cfg.DEBUG - if self.debug: - self.debug_show_name = cfg.DEBUG_SHOW_NAME - self.save_debug = cfg.SAVE_DEBUG - self.vis_thresh = cfg.VIS_THRESH - self.pixel_mean = torch.Tensor(cfg.MODEL.PIXEL_MEAN).to( - torch.device(cfg.MODEL.DEVICE)).view(3, 1, 1) - self.pixel_std = torch.Tensor(cfg.MODEL.PIXEL_STD).to( - torch.device(cfg.MODEL.DEVICE)).view(3, 1, 1) - return ret - - def forward(self, images, features, proposals, targets=None): - """ - enable debug - """ - if not self.debug: - del images - if self.training: - assert targets - proposals = self.label_and_sample_proposals(proposals, targets) - del targets - - if self.training: - losses = self._forward_box(features, proposals) - losses.update(self._forward_mask(features, proposals)) - losses.update(self._forward_keypoint(features, proposals)) - return proposals, losses - else: - pred_instances = self._forward_box(features, proposals) - pred_instances = self.forward_with_given_boxes(features, pred_instances) - if self.debug: - from ..debug import debug_second_stage - denormalizer = lambda x: x * self.pixel_std + self.pixel_mean - debug_second_stage( - [denormalizer(images[0].clone())], - pred_instances, proposals=proposals, - debug_show_name=self.debug_show_name) - return pred_instances, {} - - -@ROI_HEADS_REGISTRY.register() -class CustomCascadeROIHeads(CascadeROIHeads): - @classmethod - def _init_box_head(self, cfg, input_shape): - self.mult_proposal_score = cfg.MODEL.ROI_BOX_HEAD.MULT_PROPOSAL_SCORE - ret = super()._init_box_head(cfg, input_shape) - del ret['box_predictors'] - cascade_bbox_reg_weights = cfg.MODEL.ROI_BOX_CASCADE_HEAD.BBOX_REG_WEIGHTS - box_predictors = [] - for box_head, bbox_reg_weights in zip(ret['box_heads'], cascade_bbox_reg_weights): - box_predictors.append( - CustomFastRCNNOutputLayers( - cfg, 
box_head.output_shape, - box2box_transform=Box2BoxTransform(weights=bbox_reg_weights) - )) - ret['box_predictors'] = box_predictors - self.debug = cfg.DEBUG - if self.debug: - self.debug_show_name = cfg.DEBUG_SHOW_NAME - self.save_debug = cfg.SAVE_DEBUG - self.vis_thresh = cfg.VIS_THRESH - self.pixel_mean = torch.Tensor(cfg.MODEL.PIXEL_MEAN).to( - torch.device(cfg.MODEL.DEVICE)).view(3, 1, 1) - self.pixel_std = torch.Tensor(cfg.MODEL.PIXEL_STD).to( - torch.device(cfg.MODEL.DEVICE)).view(3, 1, 1) - return ret - - - def _forward_box(self, features, proposals, targets=None): - """ - Add mult proposal scores at testing - """ - if (not self.training) and self.mult_proposal_score: - if len(proposals) > 0 and proposals[0].has('scores'): - proposal_scores = [ - p.get('scores') for p in proposals] - else: - proposal_scores = [ - p.get('objectness_logits') for p in proposals] - - features = [features[f] for f in self.box_in_features] - head_outputs = [] # (predictor, predictions, proposals) - prev_pred_boxes = None - image_sizes = [x.image_size for x in proposals] - for k in range(self.num_cascade_stages): - if k > 0: - proposals = self._create_proposals_from_boxes(prev_pred_boxes, image_sizes) - if self.training: - proposals = self._match_and_label_boxes(proposals, k, targets) - predictions = self._run_stage(features, proposals, k) - prev_pred_boxes = self.box_predictor[k].predict_boxes(predictions, proposals) - head_outputs.append((self.box_predictor[k], predictions, proposals)) - - if self.training: - losses = {} - storage = get_event_storage() - for stage, (predictor, predictions, proposals) in enumerate(head_outputs): - with storage.name_scope("stage{}".format(stage)): - stage_losses = predictor.losses(predictions, proposals) - losses.update({k + "_stage{}".format(stage): v for k, v in stage_losses.items()}) - return losses - else: - # Each is a list[Tensor] of length #image. 
Each tensor is Ri x (K+1) - scores_per_stage = [h[0].predict_probs(h[1], h[2]) for h in head_outputs] - scores = [ - sum(list(scores_per_image)) * (1.0 / self.num_cascade_stages) - for scores_per_image in zip(*scores_per_stage) - ] - - if self.mult_proposal_score: - scores = [(s * ps[:, None]) ** 0.5 \ - for s, ps in zip(scores, proposal_scores)] - - predictor, predictions, proposals = head_outputs[-1] - boxes = predictor.predict_boxes(predictions, proposals) - pred_instances, _ = fast_rcnn_inference( - boxes, - scores, - image_sizes, - predictor.test_score_thresh, - predictor.test_nms_thresh, - predictor.test_topk_per_image, - ) - - return pred_instances - - def forward(self, images, features, proposals, targets=None): - ''' - enable debug - ''' - if not self.debug: - del images - if self.training: - proposals = self.label_and_sample_proposals(proposals, targets) - - if self.training: - losses = self._forward_box(features, proposals, targets) - losses.update(self._forward_mask(features, proposals)) - losses.update(self._forward_keypoint(features, proposals)) - return proposals, losses - else: - # import pdb; pdb.set_trace() - pred_instances = self._forward_box(features, proposals) - pred_instances = self.forward_with_given_boxes(features, pred_instances) - if self.debug: - from ..debug import debug_second_stage - denormalizer = lambda x: x * self.pixel_std + self.pixel_mean - debug_second_stage( - [denormalizer(x.clone()) for x in images], - pred_instances, proposals=proposals, - save_debug=self.save_debug, - debug_show_name=self.debug_show_name, - vis_thresh=self.vis_thresh) - return pred_instances, {} - - diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/modeling/test_fast_rcnn.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/modeling/test_fast_rcnn.py deleted file mode 100644 index e29b944bffca1ccbf5b02be59a753f3188d90a4f..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/modeling/test_fast_rcnn.py +++ /dev/null @@ -1,171 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import logging -import unittest -import torch - -from detectron2.layers import ShapeSpec -from detectron2.modeling.box_regression import Box2BoxTransform, Box2BoxTransformRotated -from detectron2.modeling.roi_heads.fast_rcnn import FastRCNNOutputLayers -from detectron2.modeling.roi_heads.rotated_fast_rcnn import RotatedFastRCNNOutputLayers -from detectron2.structures import Boxes, Instances, RotatedBoxes -from detectron2.utils.events import EventStorage - -logger = logging.getLogger(__name__) - - -class FastRCNNTest(unittest.TestCase): - def test_fast_rcnn(self): - torch.manual_seed(132) - - box_head_output_size = 8 - - box_predictor = FastRCNNOutputLayers( - ShapeSpec(channels=box_head_output_size), - box2box_transform=Box2BoxTransform(weights=(10, 10, 5, 5)), - num_classes=5, - ) - feature_pooled = torch.rand(2, box_head_output_size) - predictions = box_predictor(feature_pooled) - - proposal_boxes = torch.tensor([[0.8, 1.1, 3.2, 2.8], [2.3, 2.5, 7, 8]], dtype=torch.float32) - gt_boxes = torch.tensor([[1, 1, 3, 3], [2, 2, 6, 6]], dtype=torch.float32) - proposal = Instances((10, 10)) - proposal.proposal_boxes = Boxes(proposal_boxes) - proposal.gt_boxes = Boxes(gt_boxes) - proposal.gt_classes = torch.tensor([1, 2]) - - with EventStorage(): # capture events in a new storage to discard them - losses = box_predictor.losses(predictions, [proposal]) - - expected_losses = { - "loss_cls": torch.tensor(1.7951188087), - "loss_box_reg": torch.tensor(4.0357131958), - } - for name in expected_losses.keys(): - assert torch.allclose(losses[name], expected_losses[name]) - - def test_fast_rcnn_empty_batch(self, device="cpu"): - box_predictor = FastRCNNOutputLayers( - ShapeSpec(channels=10), - box2box_transform=Box2BoxTransform(weights=(10, 10, 5, 5)), - num_classes=8, - ).to(device=device) - - logits = torch.randn(0, 100, requires_grad=True, device=device) - deltas = torch.randn(0, 4, requires_grad=True, device=device) - losses = box_predictor.losses([logits, deltas], []) - for value in losses.values(): - self.assertTrue(torch.allclose(value, torch.zeros_like(value))) - sum(losses.values()).backward() - self.assertTrue(logits.grad is not None) - self.assertTrue(deltas.grad is not None) - - predictions, _ = box_predictor.inference([logits, deltas], []) - self.assertEqual(len(predictions), 0) - - @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available") - def test_fast_rcnn_empty_batch_cuda(self): - self.test_fast_rcnn_empty_batch(device=torch.device("cuda")) - - def test_fast_rcnn_rotated(self): - torch.manual_seed(132) - box_head_output_size = 8 - - box_predictor = RotatedFastRCNNOutputLayers( - ShapeSpec(channels=box_head_output_size), - box2box_transform=Box2BoxTransformRotated(weights=(10, 10, 5, 5, 1)), - num_classes=5, - ) - feature_pooled = torch.rand(2, box_head_output_size) - predictions = box_predictor(feature_pooled) - proposal_boxes = torch.tensor( - [[2, 1.95, 2.4, 1.7, 0], [4.65, 5.25, 4.7, 5.5, 0]], dtype=torch.float32 - ) - gt_boxes = torch.tensor([[2, 2, 2, 2, 0], [4, 4, 4, 4, 0]], dtype=torch.float32) - proposal = Instances((10, 10)) - proposal.proposal_boxes = RotatedBoxes(proposal_boxes) - proposal.gt_boxes = RotatedBoxes(gt_boxes) - proposal.gt_classes = torch.tensor([1, 2]) - - with EventStorage(): # capture events in a new storage to discard them - losses = box_predictor.losses(predictions, [proposal]) - - # Note: the expected losses are slightly different even if - # the boxes are essentially the same as in the FastRCNNOutput test, because - # bbox_pred in 
FastRCNNOutputLayers have different Linear layers/initialization - # between the two cases. - expected_losses = { - "loss_cls": torch.tensor(1.7920907736), - "loss_box_reg": torch.tensor(4.0410838127), - } - for name in expected_losses.keys(): - assert torch.allclose(losses[name], expected_losses[name]) - - def test_predict_boxes_tracing(self): - class Model(torch.nn.Module): - def __init__(self, output_layer): - super(Model, self).__init__() - self._output_layer = output_layer - - def forward(self, proposal_deltas, proposal_boxes): - instances = Instances((10, 10)) - instances.proposal_boxes = Boxes(proposal_boxes) - return self._output_layer.predict_boxes((None, proposal_deltas), [instances]) - - box_head_output_size = 8 - - box_predictor = FastRCNNOutputLayers( - ShapeSpec(channels=box_head_output_size), - box2box_transform=Box2BoxTransform(weights=(10, 10, 5, 5)), - num_classes=5, - ) - - model = Model(box_predictor) - - from detectron2.export.torchscript_patch import patch_builtin_len - - with torch.no_grad(), patch_builtin_len(): - func = torch.jit.trace(model, (torch.randn(10, 20), torch.randn(10, 4))) - - o = func(torch.randn(10, 20), torch.randn(10, 4)) - self.assertEqual(o[0].shape, (10, 20)) - o = func(torch.randn(5, 20), torch.randn(5, 4)) - self.assertEqual(o[0].shape, (5, 20)) - o = func(torch.randn(20, 20), torch.randn(20, 4)) - self.assertEqual(o[0].shape, (20, 20)) - - def test_predict_probs_tracing(self): - class Model(torch.nn.Module): - def __init__(self, output_layer): - super(Model, self).__init__() - self._output_layer = output_layer - - def forward(self, scores, proposal_boxes): - instances = Instances((10, 10)) - instances.proposal_boxes = Boxes(proposal_boxes) - return self._output_layer.predict_probs((scores, None), [instances]) - - box_head_output_size = 8 - - box_predictor = FastRCNNOutputLayers( - ShapeSpec(channels=box_head_output_size), - box2box_transform=Box2BoxTransform(weights=(10, 10, 5, 5)), - num_classes=5, - ) - - model = Model(box_predictor) - - from detectron2.export.torchscript_patch import patch_builtin_len - - with torch.no_grad(), patch_builtin_len(): - func = torch.jit.trace(model, (torch.randn(10, 6), torch.rand(10, 4))) - o = func(torch.randn(10, 6), torch.randn(10, 4)) - self.assertEqual(o[0].shape, (10, 6)) - o = func(torch.randn(5, 6), torch.randn(5, 4)) - self.assertEqual(o[0].shape, (5, 6)) - o = func(torch.randn(20, 6), torch.randn(20, 4)) - self.assertEqual(o[0].shape, (20, 6)) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/intset.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/intset.go deleted file mode 100644 index 45c674543dac103a4d69d79e47e87e7a3465d251..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/intset.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/bar-line.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/bar-line.go deleted file mode 100644 index f8bffd4faed63ee438a5493c238cec815d410e55..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/bar-line.go and /dev/null differ diff --git a/spaces/PaulHilders/IEAI_CLIPGroundingExplainability/clip_grounding/evaluation/qualitative_results.py 
b/spaces/PaulHilders/IEAI_CLIPGroundingExplainability/clip_grounding/evaluation/qualitative_results.py deleted file mode 100644 index 1c6a4adf006a6e25fc43a37505722fa05b92d391..0000000000000000000000000000000000000000 --- a/spaces/PaulHilders/IEAI_CLIPGroundingExplainability/clip_grounding/evaluation/qualitative_results.py +++ /dev/null @@ -1,93 +0,0 @@ -"""Converts notebook for qualitative results to a python script.""" -import sys -from os.path import join - -from clip_grounding.utils.paths import REPO_PATH -sys.path.append(join(REPO_PATH, "CLIP_explainability/Transformer-MM-Explainability/")) - -import os -import torch -import matplotlib.pyplot as plt -import numpy as np -from matplotlib.patches import Patch -import CLIP.clip as clip -import cv2 -from PIL import Image -from glob import glob -from natsort import natsorted - -from clip_grounding.utils.paths import REPO_PATH -from clip_grounding.utils.io import load_json -from clip_grounding.utils.visualize import set_latex_fonts, show_grid_of_images -from clip_grounding.utils.image import pad_to_square -from clip_grounding.datasets.png_utils import show_images_and_caption -from clip_grounding.datasets.png import ( - PNG, - visualize_item, - overlay_segmask_on_image, - overlay_relevance_map_on_image, - get_text_colors, -) -from clip_grounding.evaluation.clip_on_png import ( - process_entry_image_to_text, - process_entry_text_to_image, - interpret_and_generate, -) - -# load dataset -dataset = PNG(dataset_root=join(REPO_PATH, "data/panoptic_narrative_grounding"), split="val2017") - -# load CLIP model -device = "cuda" if torch.cuda.is_available() else "cpu" -model, preprocess = clip.load("ViT-B/32", device=device, jit=False) - - -def visualize_entry_text_to_image(entry, pad_images=True, figsize=(18, 5)): - test_img, test_texts, orig_image = process_entry_text_to_image(entry, unimodal=False) - outputs = interpret_and_generate(model, test_img, test_texts, orig_image, return_outputs=True, show=False) - relevance_map = outputs[0]["image_relevance"] - - image_with_mask = overlay_segmask_on_image(entry["image"], entry["image_mask"]) - if pad_images: - image_with_mask = pad_to_square(image_with_mask) - - image_with_relevance_map = overlay_relevance_map_on_image(entry["image"], relevance_map) - if pad_images: - image_with_relevance_map = pad_to_square(image_with_relevance_map) - - text_colors = get_text_colors(entry["text"], entry["text_mask"]) - - show_images_and_caption( - [image_with_mask, image_with_relevance_map], - entry["text"], text_colors, figsize=figsize, - image_xlabels=["Ground truth segmentation", "Predicted relevance map"] - ) - - -def create_and_save_gif(filenames, save_path, **kwargs): - import imageio - images = [] - for filename in filenames: - images.append(imageio.imread(filename)) - imageio.mimsave(save_path, images, **kwargs) - - -idx = 100 -instance = dataset[idx] - -instance_dir = join(REPO_PATH, "figures", f"instance-{idx}") -os.makedirs(instance_dir, exist_ok=True) - -for i, entry in enumerate(instance): - del entry["full_caption"] - - visualize_entry_text_to_image(entry, pad_images=False, figsize=(19, 4)) - - save_path = instance_dir - plt.savefig(join(instance_dir, f"viz-{i}.png"), bbox_inches="tight") - - -filenames = natsorted(glob(join(instance_dir, "viz-*.png"))) -save_path = join(REPO_PATH, "media", "sample.gif") - -create_and_save_gif(filenames, save_path, duration=3) diff --git a/spaces/Paulraj916/paulraj916/addVideo.py b/spaces/Paulraj916/paulraj916/addVideo.py deleted file mode 100644 index 
4ea86668faf2f0d714dfd5284fdb0e1d80b1308d..0000000000000000000000000000000000000000 --- a/spaces/Paulraj916/paulraj916/addVideo.py +++ /dev/null @@ -1,45 +0,0 @@ -import os -from bs4 import BeautifulSoup -from urllib.parse import urljoin - -class AddVideo: - def __init__(self, url, output_folder): - self.url = url - self.output_folder = output_folder - - def add_absolute_video_urls(self): - try: - # Read the downloaded HTML file from the output folder - html_file_path = os.path.join(self.output_folder, 'index.html') - - with open(html_file_path, 'r', encoding='utf-8') as file: - html_content = file.read() - - # Create the BeautifulSoup object using the downloaded HTML content - soup = BeautifulSoup(html_content, 'html.parser') - - # Find all video tags - video_tags = soup.find_all('video') - - # Extract video URLs and store them in a list - video_urls = [] - for video_tag in video_tags: - if 'src' in video_tag.attrs: - video_url = video_tag['src'] - absolute_url = urljoin(self.url, video_url) - video_urls.append((video_url, absolute_url)) - - # Replace video URLs in the HTML code with absolute URLs - for video_url, absolute_url in video_urls: - soup.find('video', src=video_url)['src'] = absolute_url - - # Save the updated HTML code to video_updated.html - updated_html_path = os.path.join(self.output_folder, 'index.html') - with open(updated_html_path, 'w', encoding='utf-8') as file: - file.write(str(soup)) - - print(f"Updated HTML code saved to {updated_html_path}") - except FileNotFoundError: - print("HTML file not found. Make sure to run the other scrapers first.") - except Exception as e: - print(f"Failed to update video URLs: {e}") diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/datasets/stare.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/datasets/stare.py deleted file mode 100644 index cbd14e0920e7f6a73baff1432e5a32ccfdb0dfae..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/datasets/stare.py +++ /dev/null @@ -1,27 +0,0 @@ -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class STAREDataset(CustomDataset): - """STARE dataset. - - In segmentation map annotation for STARE, 0 stands for background, which is - included in 2 categories. ``reduce_zero_label`` is fixed to False. The - ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to - '.ah.png'. 
- """ - - CLASSES = ('background', 'vessel') - - PALETTE = [[120, 120, 120], [6, 230, 230]] - - def __init__(self, **kwargs): - super(STAREDataset, self).__init__( - img_suffix='.png', - seg_map_suffix='.ah.png', - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/evaluation/__init__.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/evaluation/__init__.py deleted file mode 100644 index 156eed9099de590919c6cc48b71c3e7efe9628cd..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/evaluation/__init__.py +++ /dev/null @@ -1,56 +0,0 @@ -from maskrcnn_benchmark.data import datasets - -from .coco import coco_evaluation -from .voc import voc_evaluation -from .vg import vg_evaluation -from .box_aug import im_detect_bbox_aug -from .od_to_grounding import od_to_grounding_evaluation - - -def evaluate(dataset, predictions, output_folder, **kwargs): - """evaluate dataset using different methods based on dataset type. - Args: - dataset: Dataset object - predictions(list[BoxList]): each item in the list represents the - prediction results for one image. - output_folder: output folder, to save evaluation files or results. - **kwargs: other args. - Returns: - evaluation result - """ - args = dict( - dataset=dataset, predictions=predictions, output_folder=output_folder, **kwargs - ) - if isinstance(dataset, datasets.COCODataset) or isinstance(dataset, datasets.TSVDataset): - return coco_evaluation(**args) - # elif isinstance(dataset, datasets.VGTSVDataset): - # return vg_evaluation(**args) - elif isinstance(dataset, datasets.PascalVOCDataset): - return voc_evaluation(**args) - elif isinstance(dataset, datasets.CocoDetectionTSV): - return od_to_grounding_evaluation(**args) - elif isinstance(dataset, datasets.LvisDetection): - pass - else: - dataset_name = dataset.__class__.__name__ - raise NotImplementedError("Unsupported dataset type {}.".format(dataset_name)) - - -def evaluate_mdetr(dataset, predictions, output_folder, cfg): - - args = dict( - dataset=dataset, predictions=predictions, output_folder=output_folder, **kwargs - ) - if isinstance(dataset, datasets.COCODataset) or isinstance(dataset, datasets.TSVDataset): - return coco_evaluation(**args) - # elif isinstance(dataset, datasets.VGTSVDataset): - # return vg_evaluation(**args) - elif isinstance(dataset, datasets.PascalVOCDataset): - return voc_evaluation(**args) - elif isinstance(dataset, datasets.CocoDetectionTSV): - return od_to_grounding_evaluation(**args) - elif isinstance(dataset, datasets.LvisDetection): - pass - else: - dataset_name = dataset.__class__.__name__ - raise NotImplementedError("Unsupported dataset type {}.".format(dataset_name)) diff --git a/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/training/README.md b/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/training/README.md deleted file mode 100644 index 2f128762223f56327b614fcbf277c65153c71df4..0000000000000000000000000000000000000000 --- a/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/training/README.md +++ /dev/null @@ -1,100 +0,0 @@ -# ProteinMPNN -To train/retrain ProteinMPNN clone this github repo and install Python>=3.0, PyTorch, Numpy. 
- -The multi-chain training data (16.5 GB, PDB biounits, 2021 August 2) can be downloaded from here: `https://files.ipd.uw.edu/pub/training_sets/pdb_2021aug02.tar.gz`; The small subsample (47 MB) of this data for testing purposes can be downloaded from here: `https://files.ipd.uw.edu/pub/training_sets/pdb_2021aug02_sample.tar.gz` - -``` -Training set for ProteinMPNN curated by Ivan Anishchanko. - -Each PDB entry is represented as a collection of .pt files: - PDBID_CHAINID.pt - contains CHAINID chain from PDBID - PDBID.pt - metadata and information on biological assemblies - -PDBID_CHAINID.pt has the following fields: - seq - amino acid sequence (string) - xyz - atomic coordinates [L,14,3] - mask - boolean mask [L,14] - bfac - temperature factors [L,14] - occ - occupancy [L,14] (is 1 for most atoms, <1 if alternative conformations are present) - -PDBID.pt: - method - experimental method (str) - date - deposition date (str) - resolution - resolution (float) - chains - list of CHAINIDs (there is a corresponding PDBID_CHAINID.pt file for each of these) - tm - pairwise similarity between chains (TM-score,seq.id.,rmsd from TM-align) [num_chains,num_chains,3] - asmb_ids - biounit IDs as in the PDB (list of str) - asmb_details - how the assembly was identified: author, or software, or smth else (list of str) - asmb_method - PISA or smth else (list of str) - - asmb_chains - list of chains which each biounit is composed of (list of str, each str contains comma separated CHAINIDs) - asmb_xformIDX - (one per biounit) xforms to be applied to chains from asmb_chains[IDX], [n,4,4] - [n,:3,:3] - rotation matrices - [n,3,:3] - translation vectors - -list.csv: - CHAINID - chain label, PDBID_CHAINID - DEPOSITION - deposition date - RESOLUTION - structure resolution - HASH - unique 6-digit hash for the sequence - CLUSTER - sequence cluster the chain belongs to (clusters were generated at seqID=30%) - SEQUENCE - reference amino acid sequence - -valid_clusters.txt - clusters used for validation - -test_clusters.txt - clusters used for testing -``` - -Code organization: -* `training.py` - the main script to train the model -* `model_utils.py` - utility functions and classes for the model -* `utils.py` - utility functions and classes for data loading -* `exp_020/` - sample outputs -* `submit_exp_020.sh` - sample SLURM submit script ------------------------------------------------------------------------------------------------------ -Input flags for `training.py`: -``` - argparser.add_argument("--path_for_training_data", type=str, default="my_path/pdb_2021aug02", help="path for loading training data") - argparser.add_argument("--path_for_outputs", type=str, default="./test", help="path for logs and model weights") - argparser.add_argument("--previous_checkpoint", type=str, default="", help="path for previous model weights, e.g. 
file.pt") - argparser.add_argument("--num_epochs", type=int, default=200, help="number of epochs to train for") - argparser.add_argument("--save_model_every_n_epochs", type=int, default=10, help="save model weights every n epochs") - argparser.add_argument("--reload_data_every_n_epochs", type=int, default=2, help="reload training data every n epochs") - argparser.add_argument("--num_examples_per_epoch", type=int, default=1000000, help="number of training example to load for one epoch") - argparser.add_argument("--batch_size", type=int, default=10000, help="number of tokens for one batch") - argparser.add_argument("--max_protein_length", type=int, default=10000, help="maximum length of the protein complext") - argparser.add_argument("--hidden_dim", type=int, default=128, help="hidden model dimension") - argparser.add_argument("--num_encoder_layers", type=int, default=3, help="number of encoder layers") - argparser.add_argument("--num_decoder_layers", type=int, default=3, help="number of decoder layers") - argparser.add_argument("--num_neighbors", type=int, default=48, help="number of neighbors for the sparse graph") - argparser.add_argument("--dropout", type=float, default=0.1, help="dropout level; 0.0 means no dropout") - argparser.add_argument("--backbone_noise", type=float, default=0.2, help="amount of noise added to backbone during training") - argparser.add_argument("--rescut", type=float, default=3.5, help="PDB resolution cutoff") - argparser.add_argument("--debug", type=bool, default=False, help="minimal data loading for debugging") - argparser.add_argument("--gradient_norm", type=float, default=-1.0, help="clip gradient norm, set to negative to omit clipping") - argparser.add_argument("--mixed_precision", type=bool, default=True, help="train with mixed precision") -``` ------------------------------------------------------------------------------------------------------ -For example to make a conda environment to run ProteinMPNN: -* `conda create --name mlfold` - this creates conda environment called `mlfold` -* `source activate mlfold` - this activate environment -* `conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch` - install pytorch following steps from https://pytorch.org/ ------------------------------------------------------------------------------------------------------ -Models provided for the vanilla MPNN were trained with default flags: -* `v_48_002.pt` - `--num_neighbors 48 --backbone_noise 0.02 --num_epochs 150` -* `v_48_010.pt` - `--num_neighbors 48 --backbone_noise 0.10 --num_epochs 150` -* `v_48_020.pt` - `--num_neighbors 48 --backbone_noise 0.20 --num_epochs 150` ------------------------------------------------------------------------------------------------------ -``` -@article{dauparas2022robust, - title={Robust deep learning--based protein sequence design using ProteinMPNN}, - author={Dauparas, Justas and Anishchenko, Ivan and Bennett, Nathaniel and Bai, Hua and Ragotte, Robert J and Milles, Lukas F and Wicky, Basile IM and Courbet, Alexis and de Haas, Rob J and Bethel, Neville and others}, - journal={Science}, - volume={378}, - number={6615}, - pages={49--56}, - year={2022}, - publisher={American Association for the Advancement of Science} -} -``` ------------------------------------------------------------------------------------------------------ diff --git a/spaces/Qiukai/gpt/crazy_functions/test_project/cpp/libJPG/jpge.h b/spaces/Qiukai/gpt/crazy_functions/test_project/cpp/libJPG/jpge.h deleted file mode 100644 index 
a46c805ab80aab491f7f9508b3a008b149866bee..0000000000000000000000000000000000000000 --- a/spaces/Qiukai/gpt/crazy_functions/test_project/cpp/libJPG/jpge.h +++ /dev/null @@ -1,172 +0,0 @@ - -// jpge.h - C++ class for JPEG compression. -// Public domain, Rich Geldreich -// Alex Evans: Added RGBA support, linear memory allocator. -#ifndef JPEG_ENCODER_H -#define JPEG_ENCODER_H - -#include - -namespace jpge -{ - typedef unsigned char uint8; - typedef signed short int16; - typedef signed int int32; - typedef unsigned short uint16; - typedef unsigned int uint32; - typedef unsigned int uint; - - // JPEG chroma subsampling factors. Y_ONLY (grayscale images) and H2V2 (color images) are the most common. - enum subsampling_t { Y_ONLY = 0, H1V1 = 1, H2V1 = 2, H2V2 = 3 }; - - // JPEG compression parameters structure. - struct params - { - inline params() : m_quality(85), m_subsampling(H2V2), m_no_chroma_discrim_flag(false), m_two_pass_flag(false) { } - - inline bool check_valid() const - { - if ((m_quality < 1) || (m_quality > 100)) return false; - if ((uint)m_subsampling > (uint)H2V2) return false; - return true; - } - - // Quality: 1-100, higher is better. Typical values are around 50-95. - int m_quality; - - // m_subsampling: - // 0 = Y (grayscale) only - // 1 = YCbCr, no subsampling (H1V1, YCbCr 1x1x1, 3 blocks per MCU) - // 2 = YCbCr, H2V1 subsampling (YCbCr 2x1x1, 4 blocks per MCU) - // 3 = YCbCr, H2V2 subsampling (YCbCr 4x1x1, 6 blocks per MCU-- very common) - subsampling_t m_subsampling; - - // Disables CbCr discrimination - only intended for testing. - // If true, the Y quantization table is also used for the CbCr channels. - bool m_no_chroma_discrim_flag; - - bool m_two_pass_flag; - }; - - // Writes JPEG image to a file. - // num_channels must be 1 (Y) or 3 (RGB), image pitch must be width*num_channels. - bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params()); - - // Writes JPEG image to memory buffer. - // On entry, buf_size is the size of the output buffer pointed at by pBuf, which should be at least ~1024 bytes. - // If return value is true, buf_size will be set to the size of the compressed data. - bool compress_image_to_jpeg_file_in_memory(void *pBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params()); - - // Output stream abstract class - used by the jpeg_encoder class to write to the output stream. - // put_buf() is generally called with len==JPGE_OUT_BUF_SIZE bytes, but for headers it'll be called with smaller amounts. - class output_stream - { - public: - virtual ~output_stream() { }; - virtual bool put_buf(const void* Pbuf, int64_t len) = 0; - template inline bool put_obj(const T& obj) { return put_buf(&obj, sizeof(T)); } - }; - - // Lower level jpeg_encoder class - useful if more control is needed than the above helper functions. - class jpeg_encoder - { - public: - jpeg_encoder(); - ~jpeg_encoder(); - - // Initializes the compressor. - // pStream: The stream object to use for writing compressed data. - // params - Compression parameters structure, defined above. - // width, height - Image dimensions. - // channels - May be 1, or 3. 1 indicates grayscale, 3 indicates RGB source data. - // Returns false on out of memory or if a stream write fails. 
- bool init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params &comp_params = params()); - - const params &get_params() const { return m_params; } - - // Deinitializes the compressor, freeing any allocated memory. May be called at any time. - void deinit(); - - uint get_total_passes() const { return m_params.m_two_pass_flag ? 2 : 1; } - inline uint get_cur_pass() { return m_pass_num; } - - // Call this method with each source scanline. - // width * src_channels bytes per scanline is expected (RGB or Y format). - // You must call with NULL after all scanlines are processed to finish compression. - // Returns false on out of memory or if a stream write fails. - bool process_scanline(const void* pScanline); - - private: - jpeg_encoder(const jpeg_encoder &); - jpeg_encoder &operator =(const jpeg_encoder &); - - typedef int32 sample_array_t; - - output_stream *m_pStream; - params m_params; - uint8 m_num_components; - uint8 m_comp_h_samp[3], m_comp_v_samp[3]; - int m_image_x, m_image_y, m_image_bpp, m_image_bpl; - int m_image_x_mcu, m_image_y_mcu; - int m_image_bpl_xlt, m_image_bpl_mcu; - int m_mcus_per_row; - int m_mcu_x, m_mcu_y; - uint8 *m_mcu_lines[16]; - uint8 m_mcu_y_ofs; - sample_array_t m_sample_array[64]; - int16 m_coefficient_array[64]; - int32 m_quantization_tables[2][64]; - uint m_huff_codes[4][256]; - uint8 m_huff_code_sizes[4][256]; - uint8 m_huff_bits[4][17]; - uint8 m_huff_val[4][256]; - uint32 m_huff_count[4][256]; - int m_last_dc_val[3]; - enum { JPGE_OUT_BUF_SIZE = 2048 }; - uint8 m_out_buf[JPGE_OUT_BUF_SIZE]; - uint8 *m_pOut_buf; - uint m_out_buf_left; - uint32 m_bit_buffer; - uint m_bits_in; - uint8 m_pass_num; - bool m_all_stream_writes_succeeded; - - void optimize_huffman_table(int table_num, int table_len); - void emit_byte(uint8 i); - void emit_word(uint i); - void emit_marker(int marker); - void emit_jfif_app0(); - void emit_dqt(); - void emit_sof(); - void emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag); - void emit_dhts(); - void emit_sos(); - void emit_markers(); - void compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val); - void compute_quant_table(int32 *dst, int16 *src); - void adjust_quant_table(int32 *dst, int32 *src); - void first_pass_init(); - bool second_pass_init(); - bool jpg_open(int p_x_res, int p_y_res, int src_channels); - void load_block_8_8_grey(int x); - void load_block_8_8(int x, int y, int c); - void load_block_16_8(int x, int c); - void load_block_16_8_8(int x, int c); - void load_quantized_coefficients(int component_num); - void flush_output_buffer(); - void put_bits(uint bits, uint len); - void code_coefficients_pass_one(int component_num); - void code_coefficients_pass_two(int component_num); - void code_block(int component_num); - void process_mcu_row(); - bool terminate_pass_one(); - bool terminate_pass_two(); - bool process_end_of_image(); - void load_mcu(const void* src); - void clear(); - void init(); - }; - -} // namespace jpge - -#endif // JPEG_ENCODER \ No newline at end of file diff --git a/spaces/Rajagopal/ImageBind_zeroshot_demo2/model_card.md b/spaces/Rajagopal/ImageBind_zeroshot_demo2/model_card.md deleted file mode 100644 index c7bb26500b6590b64ffa6350f37be80dc88612d8..0000000000000000000000000000000000000000 --- a/spaces/Rajagopal/ImageBind_zeroshot_demo2/model_card.md +++ /dev/null @@ -1,94 +0,0 @@ -# Model Card for ImageBind - -Multimodal joint embedding model for image/video, text, audio, depth, IMU, and thermal images. 
-Input any of the six modalities and get the same sized embedding that can be used for cross-modal and multimodal tasks. - -# Model Details - -## Model Description - - -Multimodal joint embedding model for image/video, text, audio, depth, IMU, and thermal images - -- **Developed by:** Meta AI -- **Model type:** Multimodal model -- **Language(s) (NLP):** en -- **License:** CC BY-NC-SA 4.0 -- **Resources for more information:** - - [GitHub Repo](https://github.com/facebookresearch/ImageBind) - - -# Uses - - -This model is intended only for research purposes. It provides a joint embedding space for different modalities -- image/video, text, audio, depth, IMU and thermal images. -We hope that these joint embeddings can be used for a variety of different cross-modal research, e.g., cross-modal retrieval and combining embeddings from different modalities. - -## Out-of-Scope Use - - - - -This model is *NOT* intended to be used in any real world application -- commercial or otherwise. -It may produce harmful associations with different inputs. -The model needs to be investigated and likely re-trained on specific data for any such application. -The model is expected to work better on web-based visual data since it was trained on such data. -The text encoder is likely to work only on English language text because of the underlying training datasets. - -# Bias, Risks, and Limitations - - -Open-domain joint embedding models are prone to producing specific biases, e.g., study from [CLIP](https://github.com/openai/CLIP/blob/main/model-card.md#bias-and-fairness). -Since our model uses such models as initialization, it will exhibit such biases too. -Moreover, for learning joint embeddings for other modalities such as audio, thermal, depth, and IMU we leverage datasets that are relatively small. These joint embeddings are thus limited to the concepts present in the datasets. For example, the thermal datasets we used are limited to outdoor street scenes, while the depth datasets are limited to indoor scenes. - - - -# Training Details - -## Training Data - - - -ImageBind uses image-paired data for training -- (image, X) where X is one of text, audio, depth, IMU or thermal data. -In particular, we initialize and freeze the image and text encoders using an OpenCLIP ViT-H encoder. -We train audio embeddings using Audioset, depth embeddings using the SUN RGB-D dataset, IMU using the Ego4D dataset and thermal embeddings using the LLVIP dataset. -We provide the exact training data details in the paper. - - -## Training Procedure - - -Please refer to the research paper and github repo for exact details on this. - -# Evaluation - -## Testing Data, Factors & Metrics - -We evaluate the model on a variety of different classification benchmarks for each modality. -The evaluation details are presented in the paper. -The models performance is measured using standard classification metrics such as accuracy and mAP. - -# Citation - - - -**BibTeX:** -``` -@inproceedings{girdhar2023imagebind, - title={ImageBind: One Embedding Space To Bind Them All}, - author={Girdhar, Rohit and El-Nouby, Alaaeldin and Liu, Zhuang -and Singh, Mannat and Alwala, Kalyan Vasudev and Joulin, Armand and Misra, Ishan}, - booktitle={CVPR}, - year={2023} -} -``` - - -# Model Card Contact - -Please reach out to the authors at: rgirdhar@meta.com imisra@meta.com alaaelnouby@gmail.com - -# How to Get Started with the Model - -Our github repo provides a simple example to extract embeddings from images, audio etc. 
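(Editorial sketch, appended after the deleted ImageBind model card above.) The card's final section points to the GitHub repo for a "simple example to extract embeddings from images, audio etc."; the snippet below is a minimal, hedged illustration of that workflow. It assumes the public facebookresearch/ImageBind package layout (`imagebind.data`, `imagebind.models.imagebind_model`, `ModalityType`, `load_and_transform_*`) as published in that repo; the file paths `dog.jpg` and `bark.wav` are hypothetical placeholders, and the exact API should be checked against the repo rather than taken from this sketch.

```python
# Minimal sketch: extract joint embeddings with ImageBind and compare modalities.
# Assumes the facebookresearch/ImageBind package is installed; module/function
# names mirror its public README and are not guaranteed here.
import torch
from imagebind import data
from imagebind.models import imagebind_model
from imagebind.models.imagebind_model import ModalityType

device = "cuda:0" if torch.cuda.is_available() else "cpu"

# Load the pretrained "huge" checkpoint and put it in eval mode.
model = imagebind_model.imagebind_huge(pretrained=True)
model.eval()
model.to(device)

# Hypothetical local files used only for illustration.
inputs = {
    ModalityType.TEXT: data.load_and_transform_text(["a dog", "a car"], device),
    ModalityType.VISION: data.load_and_transform_vision_data(["dog.jpg"], device),
    ModalityType.AUDIO: data.load_and_transform_audio_data(["bark.wav"], device),
}

with torch.no_grad():
    # Returns a dict mapping each modality to a batch of same-sized embeddings.
    embeddings = model(inputs)

# Cross-modal retrieval score: image embedding vs. text embeddings.
scores = torch.softmax(
    embeddings[ModalityType.VISION] @ embeddings[ModalityType.TEXT].T, dim=-1
)
print(scores)
```

Because every modality lands in the same embedding space, the same dot-product comparison works for any pair (e.g. audio vs. text) without retraining, which is the property the card describes as enabling cross-modal retrieval.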
diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/operations/build/metadata_legacy.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/operations/build/metadata_legacy.py deleted file mode 100644 index e60988d643e007801f79e8718354e7d00c7acf18..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/operations/build/metadata_legacy.py +++ /dev/null @@ -1,74 +0,0 @@ -"""Metadata generation logic for legacy source distributions. -""" - -import logging -import os - -from pip._internal.build_env import BuildEnvironment -from pip._internal.cli.spinners import open_spinner -from pip._internal.exceptions import ( - InstallationError, - InstallationSubprocessError, - MetadataGenerationFailed, -) -from pip._internal.utils.setuptools_build import make_setuptools_egg_info_args -from pip._internal.utils.subprocess import call_subprocess -from pip._internal.utils.temp_dir import TempDirectory - -logger = logging.getLogger(__name__) - - -def _find_egg_info(directory: str) -> str: - """Find an .egg-info subdirectory in `directory`.""" - filenames = [f for f in os.listdir(directory) if f.endswith(".egg-info")] - - if not filenames: - raise InstallationError(f"No .egg-info directory found in {directory}") - - if len(filenames) > 1: - raise InstallationError( - "More than one .egg-info directory found in {}".format(directory) - ) - - return os.path.join(directory, filenames[0]) - - -def generate_metadata( - build_env: BuildEnvironment, - setup_py_path: str, - source_dir: str, - isolated: bool, - details: str, -) -> str: - """Generate metadata using setup.py-based defacto mechanisms. - - Returns the generated metadata directory. - """ - logger.debug( - "Running setup.py (path:%s) egg_info for package %s", - setup_py_path, - details, - ) - - egg_info_dir = TempDirectory(kind="pip-egg-info", globally_managed=True).path - - args = make_setuptools_egg_info_args( - setup_py_path, - egg_info_dir=egg_info_dir, - no_user_config=isolated, - ) - - with build_env: - with open_spinner("Preparing metadata (setup.py)") as spinner: - try: - call_subprocess( - args, - cwd=source_dir, - command_desc="python setup.py egg_info", - spinner=spinner, - ) - except InstallationSubprocessError as error: - raise MetadataGenerationFailed(package_details=details) from error - - # Return the .egg-info directory. - return _find_egg_info(egg_info_dir) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/rule.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/rule.py deleted file mode 100644 index 0b78f7a4ec4a111e35d7fdc7f9744afb696df20e..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/rule.py +++ /dev/null @@ -1,134 +0,0 @@ -from typing import Union - -from .align import AlignMethod -from .cells import cell_len, set_cell_size -from .console import Console, ConsoleOptions, RenderResult -from .jupyter import JupyterMixin -from .measure import Measurement -from .style import Style -from .text import Text - - -class Rule(JupyterMixin): - """A console renderable to draw a horizontal rule (line). - - Args: - title (Union[str, Text], optional): Text to render in the rule. Defaults to "". - characters (str, optional): Character(s) used to draw the line. Defaults to "─". - style (StyleType, optional): Style of Rule. Defaults to "rule.line". 
- end (str, optional): Character at end of Rule. defaults to "\\\\n" - align (str, optional): How to align the title, one of "left", "center", or "right". Defaults to "center". - """ - - def __init__( - self, - title: Union[str, Text] = "", - *, - characters: str = "─", - style: Union[str, Style] = "rule.line", - end: str = "\n", - align: AlignMethod = "center", - ) -> None: - if cell_len(characters) < 1: - raise ValueError( - "'characters' argument must have a cell width of at least 1" - ) - if align not in ("left", "center", "right"): - raise ValueError( - f'invalid value for align, expected "left", "center", "right" (not {align!r})' - ) - self.title = title - self.characters = characters - self.style = style - self.end = end - self.align = align - - def __repr__(self) -> str: - return f"Rule({self.title!r}, {self.characters!r})" - - def __rich_console__( - self, console: Console, options: ConsoleOptions - ) -> RenderResult: - width = options.max_width - - # Python3.6 doesn't have an isascii method on str - isascii = getattr(str, "isascii", None) or ( - lambda s: all(ord(c) < 128 for c in s) - ) - characters = ( - "-" - if (options.ascii_only and not isascii(self.characters)) - else self.characters - ) - - chars_len = cell_len(characters) - if not self.title: - yield self._rule_line(chars_len, width) - return - - if isinstance(self.title, Text): - title_text = self.title - else: - title_text = console.render_str(self.title, style="rule.text") - - title_text.plain = title_text.plain.replace("\n", " ") - title_text.expand_tabs() - - required_space = 4 if self.align == "center" else 2 - truncate_width = max(0, width - required_space) - if not truncate_width: - yield self._rule_line(chars_len, width) - return - - rule_text = Text(end=self.end) - if self.align == "center": - title_text.truncate(truncate_width, overflow="ellipsis") - side_width = (width - cell_len(title_text.plain)) // 2 - left = Text(characters * (side_width // chars_len + 1)) - left.truncate(side_width - 1) - right_length = width - cell_len(left.plain) - cell_len(title_text.plain) - right = Text(characters * (side_width // chars_len + 1)) - right.truncate(right_length) - rule_text.append(left.plain + " ", self.style) - rule_text.append(title_text) - rule_text.append(" " + right.plain, self.style) - elif self.align == "left": - title_text.truncate(truncate_width, overflow="ellipsis") - rule_text.append(title_text) - rule_text.append(" ") - rule_text.append(characters * (width - rule_text.cell_len), self.style) - elif self.align == "right": - title_text.truncate(truncate_width, overflow="ellipsis") - rule_text.append(characters * (width - title_text.cell_len - 1), self.style) - rule_text.append(" ") - rule_text.append(title_text) - - rule_text.plain = set_cell_size(rule_text.plain, width) - yield rule_text - - def _rule_line(self, chars_len: int, width: int) -> Text: - rule_text = Text(self.characters * ((width // chars_len) + 1), self.style) - rule_text.truncate(width) - rule_text.plain = set_cell_size(rule_text.plain, width) - return rule_text - - def __rich_measure__( - self, console: Console, options: ConsoleOptions - ) -> Measurement: - return Measurement(1, 1) - - -if __name__ == "__main__": # pragma: no cover - import sys - - from pip._vendor.rich.console import Console - - try: - text = sys.argv[1] - except IndexError: - text = "Hello, World" - console = Console() - console.print(Rule(title=text)) - - console = Console() - console.print(Rule("foo"), width=4) diff --git 
a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/command/bdist_rpm.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/command/bdist_rpm.py deleted file mode 100644 index 6a50ef34eab60cf005ea604f83eaf6170437032e..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/command/bdist_rpm.py +++ /dev/null @@ -1,615 +0,0 @@ -"""distutils.command.bdist_rpm - -Implements the Distutils 'bdist_rpm' command (create RPM source and binary -distributions).""" - -import subprocess -import sys -import os - -from distutils.core import Command -from distutils.debug import DEBUG -from distutils.file_util import write_file -from distutils.errors import ( - DistutilsOptionError, - DistutilsPlatformError, - DistutilsFileError, - DistutilsExecError, -) -from distutils.sysconfig import get_python_version -from distutils import log - - -class bdist_rpm(Command): - - description = "create an RPM distribution" - - user_options = [ - ('bdist-base=', None, "base directory for creating built distributions"), - ( - 'rpm-base=', - None, - "base directory for creating RPMs (defaults to \"rpm\" under " - "--bdist-base; must be specified for RPM 2)", - ), - ( - 'dist-dir=', - 'd', - "directory to put final RPM files in " "(and .spec files if --spec-only)", - ), - ( - 'python=', - None, - "path to Python interpreter to hard-code in the .spec file " - "(default: \"python\")", - ), - ( - 'fix-python', - None, - "hard-code the exact path to the current Python interpreter in " - "the .spec file", - ), - ('spec-only', None, "only regenerate spec file"), - ('source-only', None, "only generate source RPM"), - ('binary-only', None, "only generate binary RPM"), - ('use-bzip2', None, "use bzip2 instead of gzip to create source distribution"), - # More meta-data: too RPM-specific to put in the setup script, - # but needs to go in the .spec file -- so we make these options - # to "bdist_rpm". The idea is that packagers would put this - # info in setup.cfg, although they are of course free to - # supply it on the command line. - ( - 'distribution-name=', - None, - "name of the (Linux) distribution to which this " - "RPM applies (*not* the name of the module distribution!)", - ), - ('group=', None, "package classification [default: \"Development/Libraries\"]"), - ('release=', None, "RPM release number"), - ('serial=', None, "RPM serial number"), - ( - 'vendor=', - None, - "RPM \"vendor\" (eg. \"Joe Blow \") " - "[default: maintainer or author from setup script]", - ), - ( - 'packager=', - None, - "RPM packager (eg. 
\"Jane Doe \") " "[default: vendor]", - ), - ('doc-files=', None, "list of documentation files (space or comma-separated)"), - ('changelog=', None, "RPM changelog"), - ('icon=', None, "name of icon file"), - ('provides=', None, "capabilities provided by this package"), - ('requires=', None, "capabilities required by this package"), - ('conflicts=', None, "capabilities which conflict with this package"), - ('build-requires=', None, "capabilities required to build this package"), - ('obsoletes=', None, "capabilities made obsolete by this package"), - ('no-autoreq', None, "do not automatically calculate dependencies"), - # Actions to take when building RPM - ('keep-temp', 'k', "don't clean up RPM build directory"), - ('no-keep-temp', None, "clean up RPM build directory [default]"), - ( - 'use-rpm-opt-flags', - None, - "compile with RPM_OPT_FLAGS when building from source RPM", - ), - ('no-rpm-opt-flags', None, "do not pass any RPM CFLAGS to compiler"), - ('rpm3-mode', None, "RPM 3 compatibility mode (default)"), - ('rpm2-mode', None, "RPM 2 compatibility mode"), - # Add the hooks necessary for specifying custom scripts - ('prep-script=', None, "Specify a script for the PREP phase of RPM building"), - ('build-script=', None, "Specify a script for the BUILD phase of RPM building"), - ( - 'pre-install=', - None, - "Specify a script for the pre-INSTALL phase of RPM building", - ), - ( - 'install-script=', - None, - "Specify a script for the INSTALL phase of RPM building", - ), - ( - 'post-install=', - None, - "Specify a script for the post-INSTALL phase of RPM building", - ), - ( - 'pre-uninstall=', - None, - "Specify a script for the pre-UNINSTALL phase of RPM building", - ), - ( - 'post-uninstall=', - None, - "Specify a script for the post-UNINSTALL phase of RPM building", - ), - ('clean-script=', None, "Specify a script for the CLEAN phase of RPM building"), - ( - 'verify-script=', - None, - "Specify a script for the VERIFY phase of the RPM build", - ), - # Allow a packager to explicitly force an architecture - ('force-arch=', None, "Force an architecture onto the RPM build process"), - ('quiet', 'q', "Run the INSTALL phase of RPM building in quiet mode"), - ] - - boolean_options = [ - 'keep-temp', - 'use-rpm-opt-flags', - 'rpm3-mode', - 'no-autoreq', - 'quiet', - ] - - negative_opt = { - 'no-keep-temp': 'keep-temp', - 'no-rpm-opt-flags': 'use-rpm-opt-flags', - 'rpm2-mode': 'rpm3-mode', - } - - def initialize_options(self): - self.bdist_base = None - self.rpm_base = None - self.dist_dir = None - self.python = None - self.fix_python = None - self.spec_only = None - self.binary_only = None - self.source_only = None - self.use_bzip2 = None - - self.distribution_name = None - self.group = None - self.release = None - self.serial = None - self.vendor = None - self.packager = None - self.doc_files = None - self.changelog = None - self.icon = None - - self.prep_script = None - self.build_script = None - self.install_script = None - self.clean_script = None - self.verify_script = None - self.pre_install = None - self.post_install = None - self.pre_uninstall = None - self.post_uninstall = None - self.prep = None - self.provides = None - self.requires = None - self.conflicts = None - self.build_requires = None - self.obsoletes = None - - self.keep_temp = 0 - self.use_rpm_opt_flags = 1 - self.rpm3_mode = 1 - self.no_autoreq = 0 - - self.force_arch = None - self.quiet = 0 - - def finalize_options(self): - self.set_undefined_options('bdist', ('bdist_base', 'bdist_base')) - if self.rpm_base is None: - if 
not self.rpm3_mode: - raise DistutilsOptionError("you must specify --rpm-base in RPM 2 mode") - self.rpm_base = os.path.join(self.bdist_base, "rpm") - - if self.python is None: - if self.fix_python: - self.python = sys.executable - else: - self.python = "python3" - elif self.fix_python: - raise DistutilsOptionError( - "--python and --fix-python are mutually exclusive options" - ) - - if os.name != 'posix': - raise DistutilsPlatformError( - "don't know how to create RPM " "distributions on platform %s" % os.name - ) - if self.binary_only and self.source_only: - raise DistutilsOptionError( - "cannot supply both '--source-only' and '--binary-only'" - ) - - # don't pass CFLAGS to pure python distributions - if not self.distribution.has_ext_modules(): - self.use_rpm_opt_flags = 0 - - self.set_undefined_options('bdist', ('dist_dir', 'dist_dir')) - self.finalize_package_data() - - def finalize_package_data(self): - self.ensure_string('group', "Development/Libraries") - self.ensure_string( - 'vendor', - "%s <%s>" - % (self.distribution.get_contact(), self.distribution.get_contact_email()), - ) - self.ensure_string('packager') - self.ensure_string_list('doc_files') - if isinstance(self.doc_files, list): - for readme in ('README', 'README.txt'): - if os.path.exists(readme) and readme not in self.doc_files: - self.doc_files.append(readme) - - self.ensure_string('release', "1") - self.ensure_string('serial') # should it be an int? - - self.ensure_string('distribution_name') - - self.ensure_string('changelog') - # Format changelog correctly - self.changelog = self._format_changelog(self.changelog) - - self.ensure_filename('icon') - - self.ensure_filename('prep_script') - self.ensure_filename('build_script') - self.ensure_filename('install_script') - self.ensure_filename('clean_script') - self.ensure_filename('verify_script') - self.ensure_filename('pre_install') - self.ensure_filename('post_install') - self.ensure_filename('pre_uninstall') - self.ensure_filename('post_uninstall') - - # XXX don't forget we punted on summaries and descriptions -- they - # should be handled here eventually! - - # Now *this* is some meta-data that belongs in the setup script... - self.ensure_string_list('provides') - self.ensure_string_list('requires') - self.ensure_string_list('conflicts') - self.ensure_string_list('build_requires') - self.ensure_string_list('obsoletes') - - self.ensure_string('force_arch') - - def run(self): # noqa: C901 - if DEBUG: - print("before _get_package_data():") - print("vendor =", self.vendor) - print("packager =", self.packager) - print("doc_files =", self.doc_files) - print("changelog =", self.changelog) - - # make directories - if self.spec_only: - spec_dir = self.dist_dir - self.mkpath(spec_dir) - else: - rpm_dir = {} - for d in ('SOURCES', 'SPECS', 'BUILD', 'RPMS', 'SRPMS'): - rpm_dir[d] = os.path.join(self.rpm_base, d) - self.mkpath(rpm_dir[d]) - spec_dir = rpm_dir['SPECS'] - - # Spec file goes into 'dist_dir' if '--spec-only specified', - # build/rpm. otherwise. - spec_path = os.path.join(spec_dir, "%s.spec" % self.distribution.get_name()) - self.execute( - write_file, (spec_path, self._make_spec_file()), "writing '%s'" % spec_path - ) - - if self.spec_only: # stop if requested - return - - # Make a source distribution and copy to SOURCES directory with - # optional icon. 
- saved_dist_files = self.distribution.dist_files[:] - sdist = self.reinitialize_command('sdist') - if self.use_bzip2: - sdist.formats = ['bztar'] - else: - sdist.formats = ['gztar'] - self.run_command('sdist') - self.distribution.dist_files = saved_dist_files - - source = sdist.get_archive_files()[0] - source_dir = rpm_dir['SOURCES'] - self.copy_file(source, source_dir) - - if self.icon: - if os.path.exists(self.icon): - self.copy_file(self.icon, source_dir) - else: - raise DistutilsFileError("icon file '%s' does not exist" % self.icon) - - # build package - log.info("building RPMs") - rpm_cmd = ['rpmbuild'] - - if self.source_only: # what kind of RPMs? - rpm_cmd.append('-bs') - elif self.binary_only: - rpm_cmd.append('-bb') - else: - rpm_cmd.append('-ba') - rpm_cmd.extend(['--define', '__python %s' % self.python]) - if self.rpm3_mode: - rpm_cmd.extend(['--define', '_topdir %s' % os.path.abspath(self.rpm_base)]) - if not self.keep_temp: - rpm_cmd.append('--clean') - - if self.quiet: - rpm_cmd.append('--quiet') - - rpm_cmd.append(spec_path) - # Determine the binary rpm names that should be built out of this spec - # file - # Note that some of these may not be really built (if the file - # list is empty) - nvr_string = "%{name}-%{version}-%{release}" - src_rpm = nvr_string + ".src.rpm" - non_src_rpm = "%{arch}/" + nvr_string + ".%{arch}.rpm" - q_cmd = r"rpm -q --qf '{} {}\n' --specfile '{}'".format( - src_rpm, - non_src_rpm, - spec_path, - ) - - out = os.popen(q_cmd) - try: - binary_rpms = [] - source_rpm = None - while True: - line = out.readline() - if not line: - break - ell = line.strip().split() - assert len(ell) == 2 - binary_rpms.append(ell[1]) - # The source rpm is named after the first entry in the spec file - if source_rpm is None: - source_rpm = ell[0] - - status = out.close() - if status: - raise DistutilsExecError("Failed to execute: %s" % repr(q_cmd)) - - finally: - out.close() - - self.spawn(rpm_cmd) - - if not self.dry_run: - if self.distribution.has_ext_modules(): - pyversion = get_python_version() - else: - pyversion = 'any' - - if not self.binary_only: - srpm = os.path.join(rpm_dir['SRPMS'], source_rpm) - assert os.path.exists(srpm) - self.move_file(srpm, self.dist_dir) - filename = os.path.join(self.dist_dir, source_rpm) - self.distribution.dist_files.append(('bdist_rpm', pyversion, filename)) - - if not self.source_only: - for rpm in binary_rpms: - rpm = os.path.join(rpm_dir['RPMS'], rpm) - if os.path.exists(rpm): - self.move_file(rpm, self.dist_dir) - filename = os.path.join(self.dist_dir, os.path.basename(rpm)) - self.distribution.dist_files.append( - ('bdist_rpm', pyversion, filename) - ) - - def _dist_path(self, path): - return os.path.join(self.dist_dir, os.path.basename(path)) - - def _make_spec_file(self): # noqa: C901 - """Generate the text of an RPM spec file and return it as a - list of strings (one per line). 
- """ - # definitions and headers - spec_file = [ - '%define name ' + self.distribution.get_name(), - '%define version ' + self.distribution.get_version().replace('-', '_'), - '%define unmangled_version ' + self.distribution.get_version(), - '%define release ' + self.release.replace('-', '_'), - '', - 'Summary: ' + (self.distribution.get_description() or "UNKNOWN"), - ] - - # Workaround for #14443 which affects some RPM based systems such as - # RHEL6 (and probably derivatives) - vendor_hook = subprocess.getoutput('rpm --eval %{__os_install_post}') - # Generate a potential replacement value for __os_install_post (whilst - # normalizing the whitespace to simplify the test for whether the - # invocation of brp-python-bytecompile passes in __python): - vendor_hook = '\n'.join( - [' %s \\' % line.strip() for line in vendor_hook.splitlines()] - ) - problem = "brp-python-bytecompile \\\n" - fixed = "brp-python-bytecompile %{__python} \\\n" - fixed_hook = vendor_hook.replace(problem, fixed) - if fixed_hook != vendor_hook: - spec_file.append('# Workaround for http://bugs.python.org/issue14443') - spec_file.append('%define __os_install_post ' + fixed_hook + '\n') - - # put locale summaries into spec file - # XXX not supported for now (hard to put a dictionary - # in a config file -- arg!) - # for locale in self.summaries.keys(): - # spec_file.append('Summary(%s): %s' % (locale, - # self.summaries[locale])) - - spec_file.extend( - [ - 'Name: %{name}', - 'Version: %{version}', - 'Release: %{release}', - ] - ) - - # XXX yuck! this filename is available from the "sdist" command, - # but only after it has run: and we create the spec file before - # running "sdist", in case of --spec-only. - if self.use_bzip2: - spec_file.append('Source0: %{name}-%{unmangled_version}.tar.bz2') - else: - spec_file.append('Source0: %{name}-%{unmangled_version}.tar.gz') - - spec_file.extend( - [ - 'License: ' + (self.distribution.get_license() or "UNKNOWN"), - 'Group: ' + self.group, - 'BuildRoot: %{_tmppath}/%{name}-%{version}-%{release}-buildroot', - 'Prefix: %{_prefix}', - ] - ) - - if not self.force_arch: - # noarch if no extension modules - if not self.distribution.has_ext_modules(): - spec_file.append('BuildArch: noarch') - else: - spec_file.append('BuildArch: %s' % self.force_arch) - - for field in ( - 'Vendor', - 'Packager', - 'Provides', - 'Requires', - 'Conflicts', - 'Obsoletes', - ): - val = getattr(self, field.lower()) - if isinstance(val, list): - spec_file.append('{}: {}'.format(field, ' '.join(val))) - elif val is not None: - spec_file.append('{}: {}'.format(field, val)) - - if self.distribution.get_url(): - spec_file.append('Url: ' + self.distribution.get_url()) - - if self.distribution_name: - spec_file.append('Distribution: ' + self.distribution_name) - - if self.build_requires: - spec_file.append('BuildRequires: ' + ' '.join(self.build_requires)) - - if self.icon: - spec_file.append('Icon: ' + os.path.basename(self.icon)) - - if self.no_autoreq: - spec_file.append('AutoReq: 0') - - spec_file.extend( - [ - '', - '%description', - self.distribution.get_long_description() or "", - ] - ) - - # put locale descriptions into spec file - # XXX again, suppressed because config file syntax doesn't - # easily support this ;-( - # for locale in self.descriptions.keys(): - # spec_file.extend([ - # '', - # '%description -l ' + locale, - # self.descriptions[locale], - # ]) - - # rpm scripts - # figure out default build script - def_setup_call = "{} {}".format(self.python, os.path.basename(sys.argv[0])) - def_build = "%s 
build" % def_setup_call - if self.use_rpm_opt_flags: - def_build = 'env CFLAGS="$RPM_OPT_FLAGS" ' + def_build - - # insert contents of files - - # XXX this is kind of misleading: user-supplied options are files - # that we open and interpolate into the spec file, but the defaults - # are just text that we drop in as-is. Hmmm. - - install_cmd = ( - '%s install -O1 --root=$RPM_BUILD_ROOT ' '--record=INSTALLED_FILES' - ) % def_setup_call - - script_options = [ - ('prep', 'prep_script', "%setup -n %{name}-%{unmangled_version}"), - ('build', 'build_script', def_build), - ('install', 'install_script', install_cmd), - ('clean', 'clean_script', "rm -rf $RPM_BUILD_ROOT"), - ('verifyscript', 'verify_script', None), - ('pre', 'pre_install', None), - ('post', 'post_install', None), - ('preun', 'pre_uninstall', None), - ('postun', 'post_uninstall', None), - ] - - for (rpm_opt, attr, default) in script_options: - # Insert contents of file referred to, if no file is referred to - # use 'default' as contents of script - val = getattr(self, attr) - if val or default: - spec_file.extend( - [ - '', - '%' + rpm_opt, - ] - ) - if val: - with open(val) as f: - spec_file.extend(f.read().split('\n')) - else: - spec_file.append(default) - - # files section - spec_file.extend( - [ - '', - '%files -f INSTALLED_FILES', - '%defattr(-,root,root)', - ] - ) - - if self.doc_files: - spec_file.append('%doc ' + ' '.join(self.doc_files)) - - if self.changelog: - spec_file.extend( - [ - '', - '%changelog', - ] - ) - spec_file.extend(self.changelog) - - return spec_file - - def _format_changelog(self, changelog): - """Format the changelog correctly and convert it to a list of strings""" - if not changelog: - return changelog - new_changelog = [] - for line in changelog.strip().split('\n'): - line = line.strip() - if line[0] == '*': - new_changelog.extend(['', line]) - elif line[0] == '-': - new_changelog.append(line) - else: - new_changelog.append(' ' + line) - - # strip trailing newline inserted by first changelog entry - if not new_changelog[0]: - del new_changelog[0] - - return new_changelog diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/command/build_ext.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/command/build_ext.py deleted file mode 100644 index 3c6cee7e3644fdbdeeb4b5bcb0124044eb0f50ed..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/command/build_ext.py +++ /dev/null @@ -1,787 +0,0 @@ -"""distutils.command.build_ext - -Implements the Distutils 'build_ext' command, for building extension -modules (currently limited to C extensions, should accommodate C++ -extensions ASAP).""" - -import contextlib -import os -import re -import sys -from distutils.core import Command -from distutils.errors import ( - DistutilsOptionError, - DistutilsSetupError, - CCompilerError, - DistutilsError, - CompileError, - DistutilsPlatformError, -) -from distutils.sysconfig import customize_compiler, get_python_version -from distutils.sysconfig import get_config_h_filename -from distutils.dep_util import newer_group -from distutils.extension import Extension -from distutils.util import get_platform -from distutils import log -from . import py37compat - -from site import USER_BASE - -# An extension name is just a dot-separated list of Python NAMEs (ie. -# the same as a fully-qualified module name). 
-extension_name_re = re.compile(r'^[a-zA-Z_][a-zA-Z_0-9]*(\.[a-zA-Z_][a-zA-Z_0-9]*)*$') - - -def show_compilers(): - from distutils.ccompiler import show_compilers - - show_compilers() - - -class build_ext(Command): - - description = "build C/C++ extensions (compile/link to build directory)" - - # XXX thoughts on how to deal with complex command-line options like - # these, i.e. how to make it so fancy_getopt can suck them off the - # command line and make it look like setup.py defined the appropriate - # lists of tuples of what-have-you. - # - each command needs a callback to process its command-line options - # - Command.__init__() needs access to its share of the whole - # command line (must ultimately come from - # Distribution.parse_command_line()) - # - it then calls the current command class' option-parsing - # callback to deal with weird options like -D, which have to - # parse the option text and churn out some custom data - # structure - # - that data structure (in this case, a list of 2-tuples) - # will then be present in the command object by the time - # we get to finalize_options() (i.e. the constructor - # takes care of both command-line and client options - # in between initialize_options() and finalize_options()) - - sep_by = " (separated by '%s')" % os.pathsep - user_options = [ - ('build-lib=', 'b', "directory for compiled extension modules"), - ('build-temp=', 't', "directory for temporary files (build by-products)"), - ( - 'plat-name=', - 'p', - "platform name to cross-compile for, if supported " - "(default: %s)" % get_platform(), - ), - ( - 'inplace', - 'i', - "ignore build-lib and put compiled extensions into the source " - + "directory alongside your pure Python modules", - ), - ( - 'include-dirs=', - 'I', - "list of directories to search for header files" + sep_by, - ), - ('define=', 'D', "C preprocessor macros to define"), - ('undef=', 'U', "C preprocessor macros to undefine"), - ('libraries=', 'l', "external C libraries to link with"), - ( - 'library-dirs=', - 'L', - "directories to search for external C libraries" + sep_by, - ), - ('rpath=', 'R', "directories to search for shared C libraries at runtime"), - ('link-objects=', 'O', "extra explicit link objects to include in the link"), - ('debug', 'g', "compile/link with debugging information"), - ('force', 'f', "forcibly build everything (ignore file timestamps)"), - ('compiler=', 'c', "specify the compiler type"), - ('parallel=', 'j', "number of parallel build jobs"), - ('swig-cpp', None, "make SWIG create C++ files (default is C)"), - ('swig-opts=', None, "list of SWIG command line options"), - ('swig=', None, "path to the SWIG executable"), - ('user', None, "add user include, library and rpath"), - ] - - boolean_options = ['inplace', 'debug', 'force', 'swig-cpp', 'user'] - - help_options = [ - ('help-compiler', None, "list available compilers", show_compilers), - ] - - def initialize_options(self): - self.extensions = None - self.build_lib = None - self.plat_name = None - self.build_temp = None - self.inplace = 0 - self.package = None - - self.include_dirs = None - self.define = None - self.undef = None - self.libraries = None - self.library_dirs = None - self.rpath = None - self.link_objects = None - self.debug = None - self.force = None - self.compiler = None - self.swig = None - self.swig_cpp = None - self.swig_opts = None - self.user = None - self.parallel = None - - def finalize_options(self): # noqa: C901 - from distutils import sysconfig - - self.set_undefined_options( - 'build', - ('build_lib', 
'build_lib'), - ('build_temp', 'build_temp'), - ('compiler', 'compiler'), - ('debug', 'debug'), - ('force', 'force'), - ('parallel', 'parallel'), - ('plat_name', 'plat_name'), - ) - - if self.package is None: - self.package = self.distribution.ext_package - - self.extensions = self.distribution.ext_modules - - # Make sure Python's include directories (for Python.h, pyconfig.h, - # etc.) are in the include search path. - py_include = sysconfig.get_python_inc() - plat_py_include = sysconfig.get_python_inc(plat_specific=1) - if self.include_dirs is None: - self.include_dirs = self.distribution.include_dirs or [] - if isinstance(self.include_dirs, str): - self.include_dirs = self.include_dirs.split(os.pathsep) - - # If in a virtualenv, add its include directory - # Issue 16116 - if sys.exec_prefix != sys.base_exec_prefix: - self.include_dirs.append(os.path.join(sys.exec_prefix, 'include')) - - # Put the Python "system" include dir at the end, so that - # any local include dirs take precedence. - self.include_dirs.extend(py_include.split(os.path.pathsep)) - if plat_py_include != py_include: - self.include_dirs.extend(plat_py_include.split(os.path.pathsep)) - - self.ensure_string_list('libraries') - self.ensure_string_list('link_objects') - - # Life is easier if we're not forever checking for None, so - # simplify these options to empty lists if unset - if self.libraries is None: - self.libraries = [] - if self.library_dirs is None: - self.library_dirs = [] - elif isinstance(self.library_dirs, str): - self.library_dirs = self.library_dirs.split(os.pathsep) - - if self.rpath is None: - self.rpath = [] - elif isinstance(self.rpath, str): - self.rpath = self.rpath.split(os.pathsep) - - # for extensions under windows use different directories - # for Release and Debug builds. - # also Python's library directory must be appended to library_dirs - if os.name == 'nt': - # the 'libs' directory is for binary installs - we assume that - # must be the *native* platform. But we don't really support - # cross-compiling via a binary install anyway, so we let it go. 
- self.library_dirs.append(os.path.join(sys.exec_prefix, 'libs')) - if sys.base_exec_prefix != sys.prefix: # Issue 16116 - self.library_dirs.append(os.path.join(sys.base_exec_prefix, 'libs')) - if self.debug: - self.build_temp = os.path.join(self.build_temp, "Debug") - else: - self.build_temp = os.path.join(self.build_temp, "Release") - - # Append the source distribution include and library directories, - # this allows distutils on windows to work in the source tree - self.include_dirs.append(os.path.dirname(get_config_h_filename())) - self.library_dirs.append(sys.base_exec_prefix) - - # Use the .lib files for the correct architecture - if self.plat_name == 'win32': - suffix = 'win32' - else: - # win-amd64 - suffix = self.plat_name[4:] - new_lib = os.path.join(sys.exec_prefix, 'PCbuild') - if suffix: - new_lib = os.path.join(new_lib, suffix) - self.library_dirs.append(new_lib) - - # For extensions under Cygwin, Python's library directory must be - # appended to library_dirs - if sys.platform[:6] == 'cygwin': - if not sysconfig.python_build: - # building third party extensions - self.library_dirs.append( - os.path.join( - sys.prefix, "lib", "python" + get_python_version(), "config" - ) - ) - else: - # building python standard extensions - self.library_dirs.append('.') - - # For building extensions with a shared Python library, - # Python's library directory must be appended to library_dirs - # See Issues: #1600860, #4366 - if sysconfig.get_config_var('Py_ENABLE_SHARED'): - if not sysconfig.python_build: - # building third party extensions - self.library_dirs.append(sysconfig.get_config_var('LIBDIR')) - else: - # building python standard extensions - self.library_dirs.append('.') - - # The argument parsing will result in self.define being a string, but - # it has to be a list of 2-tuples. All the preprocessor symbols - # specified by the 'define' option will be set to '1'. Multiple - # symbols can be separated with commas. - - if self.define: - defines = self.define.split(',') - self.define = [(symbol, '1') for symbol in defines] - - # The option for macros to undefine is also a string from the - # option parsing, but has to be a list. Multiple symbols can also - # be separated with commas here. - if self.undef: - self.undef = self.undef.split(',') - - if self.swig_opts is None: - self.swig_opts = [] - else: - self.swig_opts = self.swig_opts.split(' ') - - # Finally add the user include and library directories if requested - if self.user: - user_include = os.path.join(USER_BASE, "include") - user_lib = os.path.join(USER_BASE, "lib") - if os.path.isdir(user_include): - self.include_dirs.append(user_include) - if os.path.isdir(user_lib): - self.library_dirs.append(user_lib) - self.rpath.append(user_lib) - - if isinstance(self.parallel, str): - try: - self.parallel = int(self.parallel) - except ValueError: - raise DistutilsOptionError("parallel should be an integer") - - def run(self): # noqa: C901 - from distutils.ccompiler import new_compiler - - # 'self.extensions', as supplied by setup.py, is a list of - # Extension instances. See the documentation for Extension (in - # distutils.extension) for details. - # - # For backwards compatibility with Distutils 0.8.2 and earlier, we - # also allow the 'extensions' list to be a list of tuples: - # (ext_name, build_info) - # where build_info is a dictionary containing everything that - # Extension instances do except the name, with a few things being - # differently named. We convert these 2-tuples to Extension - # instances as needed. 
- - if not self.extensions: - return - - # If we were asked to build any C/C++ libraries, make sure that the - # directory where we put them is in the library search path for - # linking extensions. - if self.distribution.has_c_libraries(): - build_clib = self.get_finalized_command('build_clib') - self.libraries.extend(build_clib.get_library_names() or []) - self.library_dirs.append(build_clib.build_clib) - - # Setup the CCompiler object that we'll use to do all the - # compiling and linking - self.compiler = new_compiler( - compiler=self.compiler, - verbose=self.verbose, - dry_run=self.dry_run, - force=self.force, - ) - customize_compiler(self.compiler) - # If we are cross-compiling, init the compiler now (if we are not - # cross-compiling, init would not hurt, but people may rely on - # late initialization of compiler even if they shouldn't...) - if os.name == 'nt' and self.plat_name != get_platform(): - self.compiler.initialize(self.plat_name) - - # And make sure that any compile/link-related options (which might - # come from the command-line or from the setup script) are set in - # that CCompiler object -- that way, they automatically apply to - # all compiling and linking done here. - if self.include_dirs is not None: - self.compiler.set_include_dirs(self.include_dirs) - if self.define is not None: - # 'define' option is a list of (name,value) tuples - for (name, value) in self.define: - self.compiler.define_macro(name, value) - if self.undef is not None: - for macro in self.undef: - self.compiler.undefine_macro(macro) - if self.libraries is not None: - self.compiler.set_libraries(self.libraries) - if self.library_dirs is not None: - self.compiler.set_library_dirs(self.library_dirs) - if self.rpath is not None: - self.compiler.set_runtime_library_dirs(self.rpath) - if self.link_objects is not None: - self.compiler.set_link_objects(self.link_objects) - - # Now actually compile and link everything. - self.build_extensions() - - def check_extensions_list(self, extensions): # noqa: C901 - """Ensure that the list of extensions (presumably provided as a - command option 'extensions') is valid, i.e. it is a list of - Extension objects. We also support the old-style list of 2-tuples, - where the tuples are (ext_name, build_info), which are converted to - Extension instances here. - - Raise DistutilsSetupError if the structure is invalid anywhere; - just returns otherwise. - """ - if not isinstance(extensions, list): - raise DistutilsSetupError( - "'ext_modules' option must be a list of Extension instances" - ) - - for i, ext in enumerate(extensions): - if isinstance(ext, Extension): - continue # OK! (assume type-checking done - # by Extension constructor) - - if not isinstance(ext, tuple) or len(ext) != 2: - raise DistutilsSetupError( - "each element of 'ext_modules' option must be an " - "Extension instance or 2-tuple" - ) - - ext_name, build_info = ext - - log.warn( - "old-style (ext_name, build_info) tuple found in " - "ext_modules for extension '%s' " - "-- please convert to Extension instance", - ext_name, - ) - - if not (isinstance(ext_name, str) and extension_name_re.match(ext_name)): - raise DistutilsSetupError( - "first element of each tuple in 'ext_modules' " - "must be the extension name (a string)" - ) - - if not isinstance(build_info, dict): - raise DistutilsSetupError( - "second element of each tuple in 'ext_modules' " - "must be a dictionary (build info)" - ) - - # OK, the (ext_name, build_info) dict is type-safe: convert it - # to an Extension instance. 
- ext = Extension(ext_name, build_info['sources']) - - # Easy stuff: one-to-one mapping from dict elements to - # instance attributes. - for key in ( - 'include_dirs', - 'library_dirs', - 'libraries', - 'extra_objects', - 'extra_compile_args', - 'extra_link_args', - ): - val = build_info.get(key) - if val is not None: - setattr(ext, key, val) - - # Medium-easy stuff: same syntax/semantics, different names. - ext.runtime_library_dirs = build_info.get('rpath') - if 'def_file' in build_info: - log.warn("'def_file' element of build info dict " "no longer supported") - - # Non-trivial stuff: 'macros' split into 'define_macros' - # and 'undef_macros'. - macros = build_info.get('macros') - if macros: - ext.define_macros = [] - ext.undef_macros = [] - for macro in macros: - if not (isinstance(macro, tuple) and len(macro) in (1, 2)): - raise DistutilsSetupError( - "'macros' element of build info dict " - "must be 1- or 2-tuple" - ) - if len(macro) == 1: - ext.undef_macros.append(macro[0]) - elif len(macro) == 2: - ext.define_macros.append(macro) - - extensions[i] = ext - - def get_source_files(self): - self.check_extensions_list(self.extensions) - filenames = [] - - # Wouldn't it be neat if we knew the names of header files too... - for ext in self.extensions: - filenames.extend(ext.sources) - return filenames - - def get_outputs(self): - # Sanity check the 'extensions' list -- can't assume this is being - # done in the same run as a 'build_extensions()' call (in fact, we - # can probably assume that it *isn't*!). - self.check_extensions_list(self.extensions) - - # And build the list of output (built) filenames. Note that this - # ignores the 'inplace' flag, and assumes everything goes in the - # "build" tree. - outputs = [] - for ext in self.extensions: - outputs.append(self.get_ext_fullpath(ext.name)) - return outputs - - def build_extensions(self): - # First, sanity-check the 'extensions' list - self.check_extensions_list(self.extensions) - if self.parallel: - self._build_extensions_parallel() - else: - self._build_extensions_serial() - - def _build_extensions_parallel(self): - workers = self.parallel - if self.parallel is True: - workers = os.cpu_count() # may return None - try: - from concurrent.futures import ThreadPoolExecutor - except ImportError: - workers = None - - if workers is None: - self._build_extensions_serial() - return - - with ThreadPoolExecutor(max_workers=workers) as executor: - futures = [ - executor.submit(self.build_extension, ext) for ext in self.extensions - ] - for ext, fut in zip(self.extensions, futures): - with self._filter_build_errors(ext): - fut.result() - - def _build_extensions_serial(self): - for ext in self.extensions: - with self._filter_build_errors(ext): - self.build_extension(ext) - - @contextlib.contextmanager - def _filter_build_errors(self, ext): - try: - yield - except (CCompilerError, DistutilsError, CompileError) as e: - if not ext.optional: - raise - self.warn('building extension "{}" failed: {}'.format(ext.name, e)) - - def build_extension(self, ext): - sources = ext.sources - if sources is None or not isinstance(sources, (list, tuple)): - raise DistutilsSetupError( - "in 'ext_modules' option (extension '%s'), " - "'sources' must be present and must be " - "a list of source filenames" % ext.name - ) - # sort to make the resulting .so file build reproducible - sources = sorted(sources) - - ext_path = self.get_ext_fullpath(ext.name) - depends = sources + ext.depends - if not (self.force or newer_group(depends, ext_path, 'newer')): - 
log.debug("skipping '%s' extension (up-to-date)", ext.name) - return - else: - log.info("building '%s' extension", ext.name) - - # First, scan the sources for SWIG definition files (.i), run - # SWIG on 'em to create .c files, and modify the sources list - # accordingly. - sources = self.swig_sources(sources, ext) - - # Next, compile the source code to object files. - - # XXX not honouring 'define_macros' or 'undef_macros' -- the - # CCompiler API needs to change to accommodate this, and I - # want to do one thing at a time! - - # Two possible sources for extra compiler arguments: - # - 'extra_compile_args' in Extension object - # - CFLAGS environment variable (not particularly - # elegant, but people seem to expect it and I - # guess it's useful) - # The environment variable should take precedence, and - # any sensible compiler will give precedence to later - # command line args. Hence we combine them in order: - extra_args = ext.extra_compile_args or [] - - macros = ext.define_macros[:] - for undef in ext.undef_macros: - macros.append((undef,)) - - objects = self.compiler.compile( - sources, - output_dir=self.build_temp, - macros=macros, - include_dirs=ext.include_dirs, - debug=self.debug, - extra_postargs=extra_args, - depends=ext.depends, - ) - - # XXX outdated variable, kept here in case third-part code - # needs it. - self._built_objects = objects[:] - - # Now link the object files together into a "shared object" -- - # of course, first we have to figure out all the other things - # that go into the mix. - if ext.extra_objects: - objects.extend(ext.extra_objects) - extra_args = ext.extra_link_args or [] - - # Detect target language, if not provided - language = ext.language or self.compiler.detect_language(sources) - - self.compiler.link_shared_object( - objects, - ext_path, - libraries=self.get_libraries(ext), - library_dirs=ext.library_dirs, - runtime_library_dirs=ext.runtime_library_dirs, - extra_postargs=extra_args, - export_symbols=self.get_export_symbols(ext), - debug=self.debug, - build_temp=self.build_temp, - target_lang=language, - ) - - def swig_sources(self, sources, extension): - """Walk the list of source files in 'sources', looking for SWIG - interface (.i) files. Run SWIG on all that are found, and - return a modified 'sources' list with SWIG source files replaced - by the generated C (or C++) files. - """ - new_sources = [] - swig_sources = [] - swig_targets = {} - - # XXX this drops generated C/C++ files into the source tree, which - # is fine for developers who want to distribute the generated - # source -- but there should be an option to put SWIG output in - # the temp dir. 
- - if self.swig_cpp: - log.warn("--swig-cpp is deprecated - use --swig-opts=-c++") - - if ( - self.swig_cpp - or ('-c++' in self.swig_opts) - or ('-c++' in extension.swig_opts) - ): - target_ext = '.cpp' - else: - target_ext = '.c' - - for source in sources: - (base, ext) = os.path.splitext(source) - if ext == ".i": # SWIG interface file - new_sources.append(base + '_wrap' + target_ext) - swig_sources.append(source) - swig_targets[source] = new_sources[-1] - else: - new_sources.append(source) - - if not swig_sources: - return new_sources - - swig = self.swig or self.find_swig() - swig_cmd = [swig, "-python"] - swig_cmd.extend(self.swig_opts) - if self.swig_cpp: - swig_cmd.append("-c++") - - # Do not override commandline arguments - if not self.swig_opts: - for o in extension.swig_opts: - swig_cmd.append(o) - - for source in swig_sources: - target = swig_targets[source] - log.info("swigging %s to %s", source, target) - self.spawn(swig_cmd + ["-o", target, source]) - - return new_sources - - def find_swig(self): - """Return the name of the SWIG executable. On Unix, this is - just "swig" -- it should be in the PATH. Tries a bit harder on - Windows. - """ - if os.name == "posix": - return "swig" - elif os.name == "nt": - # Look for SWIG in its standard installation directory on - # Windows (or so I presume!). If we find it there, great; - # if not, act like Unix and assume it's in the PATH. - for vers in ("1.3", "1.2", "1.1"): - fn = os.path.join("c:\\swig%s" % vers, "swig.exe") - if os.path.isfile(fn): - return fn - else: - return "swig.exe" - else: - raise DistutilsPlatformError( - "I don't know how to find (much less run) SWIG " - "on platform '%s'" % os.name - ) - - # -- Name generators ----------------------------------------------- - # (extension names, filenames, whatever) - def get_ext_fullpath(self, ext_name): - """Returns the path of the filename for a given extension. - - The file is located in `build_lib` or directly in the package - (inplace option). - """ - fullname = self.get_ext_fullname(ext_name) - modpath = fullname.split('.') - filename = self.get_ext_filename(modpath[-1]) - - if not self.inplace: - # no further work needed - # returning : - # build_dir/package/path/filename - filename = os.path.join(*modpath[:-1] + [filename]) - return os.path.join(self.build_lib, filename) - - # the inplace option requires to find the package directory - # using the build_py command for that - package = '.'.join(modpath[0:-1]) - build_py = self.get_finalized_command('build_py') - package_dir = os.path.abspath(build_py.get_package_dir(package)) - - # returning - # package_dir/filename - return os.path.join(package_dir, filename) - - def get_ext_fullname(self, ext_name): - """Returns the fullname of a given extension name. - - Adds the `package.` prefix""" - if self.package is None: - return ext_name - else: - return self.package + '.' + ext_name - - def get_ext_filename(self, ext_name): - r"""Convert the name of an extension (eg. "foo.bar") into the name - of the file from which it will be loaded (eg. "foo/bar.so", or - "foo\bar.pyd"). - """ - from distutils.sysconfig import get_config_var - - ext_path = ext_name.split('.') - ext_suffix = get_config_var('EXT_SUFFIX') - return os.path.join(*ext_path) + ext_suffix - - def get_export_symbols(self, ext): - """Return the list of symbols that a shared extension has to - export. This either uses 'ext.export_symbols' or, if it's not - provided, "PyInit_" + module_name. 
Only relevant on Windows, where - the .pyd file (DLL) must export the module "PyInit_" function. - """ - name = ext.name.split('.')[-1] - try: - # Unicode module name support as defined in PEP-489 - # https://www.python.org/dev/peps/pep-0489/#export-hook-name - name.encode('ascii') - except UnicodeEncodeError: - suffix = 'U_' + name.encode('punycode').replace(b'-', b'_').decode('ascii') - else: - suffix = "_" + name - - initfunc_name = "PyInit" + suffix - if initfunc_name not in ext.export_symbols: - ext.export_symbols.append(initfunc_name) - return ext.export_symbols - - def get_libraries(self, ext): # noqa: C901 - """Return the list of libraries to link against when building a - shared extension. On most platforms, this is just 'ext.libraries'; - on Windows, we add the Python library (eg. python20.dll). - """ - # The python library is always needed on Windows. For MSVC, this - # is redundant, since the library is mentioned in a pragma in - # pyconfig.h that MSVC groks. The other Windows compilers all seem - # to need it mentioned explicitly, though, so that's what we do. - # Append '_d' to the python import library on debug builds. - if sys.platform == "win32": - from distutils._msvccompiler import MSVCCompiler - - if not isinstance(self.compiler, MSVCCompiler): - template = "python%d%d" - if self.debug: - template = template + '_d' - pythonlib = template % ( - sys.hexversion >> 24, - (sys.hexversion >> 16) & 0xFF, - ) - # don't extend ext.libraries, it may be shared with other - # extensions, it is a reference to the original list - return ext.libraries + [pythonlib] - else: - # On Android only the main executable and LD_PRELOADs are considered - # to be RTLD_GLOBAL, all the dependencies of the main executable - # remain RTLD_LOCAL and so the shared libraries must be linked with - # libpython when python is built with a shared python library (issue - # bpo-21536). - # On Cygwin (and if required, other POSIX-like platforms based on - # Windows like MinGW) it is simply necessary that all symbols in - # shared libraries are resolved at link time. 
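A hedged sketch of the library names this logic resolves to follows; a CPython 3.11 build is assumed purely for illustration, and the stdlib `sysconfig` module is used in place of `distutils.sysconfig` (both expose `get_config_var`).

```python
# Hedged sketch: how get_libraries() derives the Python library name.
# Concrete values depend on the running interpreter; 3.11 is assumed here.
import sys
import sysconfig

# Windows / non-MSVC branch: "python%d%d" built from sys.hexversion,
# e.g. 0x030B00F0 -> major 3, minor 11 -> "python311" ("_d" appended
# for debug builds).
major = sys.hexversion >> 24
minor = (sys.hexversion >> 16) & 0xFF
windows_style = "python%d%d" % (major, minor)

# Shared-libpython branch (Android / Cygwin): "python" + LDVERSION,
# e.g. "python3.11". LDVERSION may be unset on some platforms.
ldversion = sysconfig.get_config_var("LDVERSION")
shared_style = "python" + ldversion if ldversion else None

print(windows_style, shared_style)
```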
- from distutils.sysconfig import get_config_var - - link_libpython = False - if get_config_var('Py_ENABLE_SHARED'): - # A native build on an Android device or on Cygwin - if hasattr(sys, 'getandroidapilevel'): - link_libpython = True - elif sys.platform == 'cygwin': - link_libpython = True - elif '_PYTHON_HOST_PLATFORM' in os.environ: - # We are cross-compiling for one of the relevant platforms - if get_config_var('ANDROID_API_LEVEL') != 0: - link_libpython = True - elif get_config_var('MACHDEP') == 'cygwin': - link_libpython = True - - if link_libpython: - ldversion = get_config_var('LDVERSION') - return ext.libraries + ['python' + ldversion] - - return ext.libraries + py37compat.pythonlib() diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/version.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/version.py deleted file mode 100644 index 95e1869658566aac3060562d8cd5a6b647887d1e..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/version.py +++ /dev/null @@ -1,6 +0,0 @@ -import pkg_resources - -try: - __version__ = pkg_resources.get_distribution('setuptools').version -except Exception: - __version__ = 'unknown' diff --git a/spaces/RatKing243/Test/Dockerfile b/spaces/RatKing243/Test/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/RatKing243/Test/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/Realcat/image-matching-webui/third_party/SGMNet/components/load_component.py b/spaces/Realcat/image-matching-webui/third_party/SGMNet/components/load_component.py deleted file mode 100644 index 1d46389bf64640dc928d08132765b9b4d5e0a8ad..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/SGMNet/components/load_component.py +++ /dev/null @@ -1,56 +0,0 @@ -from . import matchers -from . import readers -from . import evaluators -from . 
import extractors - - -def load_component(compo_name, model_name, config): - if compo_name == "extractor": - component = load_extractor(model_name, config) - elif compo_name == "reader": - component = load_reader(model_name, config) - elif compo_name == "matcher": - component = load_matcher(model_name, config) - elif compo_name == "evaluator": - component = load_evaluator(model_name, config) - else: - raise NotImplementedError - return component - - -def load_extractor(model_name, config): - if model_name == "root": - extractor = extractors.ExtractSIFT(config) - elif model_name == "sp": - extractor = extractors.ExtractSuperpoint(config) - else: - raise NotImplementedError - return extractor - - -def load_matcher(model_name, config): - if model_name == "SGM": - matcher = matchers.GNN_Matcher(config, "SGM") - elif model_name == "SG": - matcher = matchers.GNN_Matcher(config, "SG") - elif model_name == "NN": - matcher = matchers.NN_Matcher(config) - else: - raise NotImplementedError - return matcher - - -def load_reader(model_name, config): - if model_name == "standard": - reader = readers.standard_reader(config) - else: - raise NotImplementedError - return reader - - -def load_evaluator(model_name, config): - if model_name == "AUC": - evaluator = evaluators.auc_eval(config) - elif model_name == "FM": - evaluator = evaluators.FMbench_eval(config) - return evaluator diff --git a/spaces/RedBaron5/PatentSolver/App/bin/TechnologyFinder.py b/spaces/RedBaron5/PatentSolver/App/bin/TechnologyFinder.py deleted file mode 100644 index 4edd348a4d31e266798aa463c0f496fb2c315bd4..0000000000000000000000000000000000000000 --- a/spaces/RedBaron5/PatentSolver/App/bin/TechnologyFinder.py +++ /dev/null @@ -1,68 +0,0 @@ -#!/usr/bin/python3 -# -*- coding: utf-8 -* -import sys -import os -import math -import re - -from App.bin import constants - -from textblob import TextBlob as tb - -class TechnologyFinder(object): - - def __init__(self, corpus): - self.corpus = corpus - - print("Extracting technologies") - - def last_cleansing(self, tech): - tech = str(tech) - tech = re.sub(r'\s?\bcomprises\b', '', tech) - return tech - - def get_technologies(self): - - corpus = self.corpus - - technologies = [] - def tf(word, blob): - return (float)(blob.noun_phrases.count(word)) / (float)(len(blob.noun_phrases)) - - def n_containing(word, bloblist): - return sum(1 for blob in bloblist if word in blob.noun_phrases) - - def idf(word, bloblist): - return math.log(len(bloblist) / (float)(1 + n_containing(word, bloblist))) - - def tfidf(word, blob, bloblist): - return tf(word, blob) * idf(word, bloblist) - - stopwords = open(constants.ASSETS+'stopwords', 'r').read().split('\r\n') - bloblist = [] - filenamelist = [] - - for filepath,patent in corpus.items(): - - filename = os.path.basename(os.path.normpath(filepath)) - #name, extension = filename.split('.') - filenamelist.append(filepath) - - filteredtext = [t for t in patent if t.lower() not in stopwords] - filteredcontent = ''.join(filteredtext) - blob = tb(filteredcontent.lower()) - bloblist.append(blob) - - for i, blob in enumerate(bloblist): - filename = [] - technologies.append(filename) - scores = {word: tfidf(word, blob, bloblist) for word in blob.noun_phrases} - sorted_words = sorted(scores.items(), key=lambda x: x[1], reverse=True) - for word, score in sorted_words[:6]: - word = self.last_cleansing(word) - print("techologies found") - filename.append(word) - - technologies_list = dict(zip(filenamelist, technologies)) - return technologies_list - diff --git 
a/spaces/Rida/Semantic-Segmentation/README.md b/spaces/Rida/Semantic-Segmentation/README.md deleted file mode 100644 index 89a919dde5286e7ab45f69b6436a86893b680119..0000000000000000000000000000000000000000 --- a/spaces/Rida/Semantic-Segmentation/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Semantic Segmentation -emoji: 👁 -colorFrom: yellow -colorTo: purple -sdk: gradio -sdk_version: 3.1.7 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/yolo.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/yolo.py deleted file mode 100644 index 240aab20f857befe25e64114300ebb15a66c6a70..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/detectors/yolo.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (c) 2019 Western Digital Corporation or its affiliates. - -from ..builder import DETECTORS -from .single_stage import SingleStageDetector - - -@DETECTORS.register_module() -class YOLOV3(SingleStageDetector): - - def __init__(self, - backbone, - neck, - bbox_head, - train_cfg=None, - test_cfg=None, - pretrained=None): - super(YOLOV3, self).__init__(backbone, neck, bbox_head, train_cfg, - test_cfg, pretrained) diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/roi_heads/grid_roi_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/roi_heads/grid_roi_head.py deleted file mode 100644 index 4c52c79863ebaf17bd023382c7e5d4c237b4da77..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/roi_heads/grid_roi_head.py +++ /dev/null @@ -1,176 +0,0 @@ -import torch - -from mmdet.core import bbox2result, bbox2roi -from ..builder import HEADS, build_head, build_roi_extractor -from .standard_roi_head import StandardRoIHead - - -@HEADS.register_module() -class GridRoIHead(StandardRoIHead): - """Grid roi head for Grid R-CNN. - - https://arxiv.org/abs/1811.12030 - """ - - def __init__(self, grid_roi_extractor, grid_head, **kwargs): - assert grid_head is not None - super(GridRoIHead, self).__init__(**kwargs) - if grid_roi_extractor is not None: - self.grid_roi_extractor = build_roi_extractor(grid_roi_extractor) - self.share_roi_extractor = False - else: - self.share_roi_extractor = True - self.grid_roi_extractor = self.bbox_roi_extractor - self.grid_head = build_head(grid_head) - - def init_weights(self, pretrained): - """Initialize the weights in head. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. 
- """ - super(GridRoIHead, self).init_weights(pretrained) - self.grid_head.init_weights() - if not self.share_roi_extractor: - self.grid_roi_extractor.init_weights() - - def _random_jitter(self, sampling_results, img_metas, amplitude=0.15): - """Ramdom jitter positive proposals for training.""" - for sampling_result, img_meta in zip(sampling_results, img_metas): - bboxes = sampling_result.pos_bboxes - random_offsets = bboxes.new_empty(bboxes.shape[0], 4).uniform_( - -amplitude, amplitude) - # before jittering - cxcy = (bboxes[:, 2:4] + bboxes[:, :2]) / 2 - wh = (bboxes[:, 2:4] - bboxes[:, :2]).abs() - # after jittering - new_cxcy = cxcy + wh * random_offsets[:, :2] - new_wh = wh * (1 + random_offsets[:, 2:]) - # xywh to xyxy - new_x1y1 = (new_cxcy - new_wh / 2) - new_x2y2 = (new_cxcy + new_wh / 2) - new_bboxes = torch.cat([new_x1y1, new_x2y2], dim=1) - # clip bboxes - max_shape = img_meta['img_shape'] - if max_shape is not None: - new_bboxes[:, 0::2].clamp_(min=0, max=max_shape[1] - 1) - new_bboxes[:, 1::2].clamp_(min=0, max=max_shape[0] - 1) - - sampling_result.pos_bboxes = new_bboxes - return sampling_results - - def forward_dummy(self, x, proposals): - """Dummy forward function.""" - # bbox head - outs = () - rois = bbox2roi([proposals]) - if self.with_bbox: - bbox_results = self._bbox_forward(x, rois) - outs = outs + (bbox_results['cls_score'], - bbox_results['bbox_pred']) - - # grid head - grid_rois = rois[:100] - grid_feats = self.grid_roi_extractor( - x[:self.grid_roi_extractor.num_inputs], grid_rois) - if self.with_shared_head: - grid_feats = self.shared_head(grid_feats) - grid_pred = self.grid_head(grid_feats) - outs = outs + (grid_pred, ) - - # mask head - if self.with_mask: - mask_rois = rois[:100] - mask_results = self._mask_forward(x, mask_rois) - outs = outs + (mask_results['mask_pred'], ) - return outs - - def _bbox_forward_train(self, x, sampling_results, gt_bboxes, gt_labels, - img_metas): - """Run forward function and calculate loss for box head in training.""" - bbox_results = super(GridRoIHead, - self)._bbox_forward_train(x, sampling_results, - gt_bboxes, gt_labels, - img_metas) - - # Grid head forward and loss - sampling_results = self._random_jitter(sampling_results, img_metas) - pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results]) - - # GN in head does not support zero shape input - if pos_rois.shape[0] == 0: - return bbox_results - - grid_feats = self.grid_roi_extractor( - x[:self.grid_roi_extractor.num_inputs], pos_rois) - if self.with_shared_head: - grid_feats = self.shared_head(grid_feats) - # Accelerate training - max_sample_num_grid = self.train_cfg.get('max_num_grid', 192) - sample_idx = torch.randperm( - grid_feats.shape[0])[:min(grid_feats.shape[0], max_sample_num_grid - )] - grid_feats = grid_feats[sample_idx] - - grid_pred = self.grid_head(grid_feats) - - grid_targets = self.grid_head.get_targets(sampling_results, - self.train_cfg) - grid_targets = grid_targets[sample_idx] - - loss_grid = self.grid_head.loss(grid_pred, grid_targets) - - bbox_results['loss_bbox'].update(loss_grid) - return bbox_results - - def simple_test(self, - x, - proposal_list, - img_metas, - proposals=None, - rescale=False): - """Test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' 
- - det_bboxes, det_labels = self.simple_test_bboxes( - x, img_metas, proposal_list, self.test_cfg, rescale=False) - # pack rois into bboxes - grid_rois = bbox2roi([det_bbox[:, :4] for det_bbox in det_bboxes]) - if grid_rois.shape[0] != 0: - grid_feats = self.grid_roi_extractor( - x[:len(self.grid_roi_extractor.featmap_strides)], grid_rois) - self.grid_head.test_mode = True - grid_pred = self.grid_head(grid_feats) - # split batch grid head prediction back to each image - num_roi_per_img = tuple(len(det_bbox) for det_bbox in det_bboxes) - grid_pred = { - k: v.split(num_roi_per_img, 0) - for k, v in grid_pred.items() - } - - # apply bbox post-processing to each image individually - bbox_results = [] - num_imgs = len(det_bboxes) - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - bbox_results.append(grid_rois.new_tensor([])) - else: - det_bbox = self.grid_head.get_bboxes( - det_bboxes[i], grid_pred['fused'][i], [img_metas[i]]) - if rescale: - det_bbox[:, :4] /= img_metas[i]['scale_factor'] - bbox_results.append( - bbox2result(det_bbox, det_labels[i], - self.bbox_head.num_classes)) - else: - bbox_results = [ - grid_rois.new_tensor([]) for _ in range(len(det_bboxes)) - ] - - if not self.with_mask: - return bbox_results - else: - segm_results = self.simple_test_mask( - x, img_metas, det_bboxes, det_labels, rescale=rescale) - return list(zip(bbox_results, segm_results)) diff --git a/spaces/Ryukijano/Ryukijano-controlnet-fill-circle/README.md b/spaces/Ryukijano/Ryukijano-controlnet-fill-circle/README.md deleted file mode 100644 index 49ae439b4550c47675eba78930283a7e0d6a1696..0000000000000000000000000000000000000000 --- a/spaces/Ryukijano/Ryukijano-controlnet-fill-circle/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Ryukijano Controlnet Fill Circle -emoji: 🏢 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SIGGRAPH2022/DCT-Net/source/__init__.py b/spaces/SIGGRAPH2022/DCT-Net/source/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/vectorstores/__init__.py b/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/vectorstores/__init__.py deleted file mode 100644 index c16bacdecae88056f652d51983d1248af1fbdc3a..0000000000000000000000000000000000000000 --- a/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/vectorstores/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -"""Wrappers on top of vector stores.""" -from streamlit_langchain_chat.customized_langchain.vectorstores.faiss import FAISS -from streamlit_langchain_chat.customized_langchain.vectorstores.pinecone import Pinecone - -__all__ = [ - "FAISS", - "Pinecone", -] diff --git a/spaces/Salesforce/EDICT/my_half_diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py b/spaces/Salesforce/EDICT/my_half_diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py deleted file mode 100644 index 05ea84ae0326231fa2ffbd4ad936f8747a9fed2c..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_half_diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py +++ /dev/null @@ -1,309 +0,0 @@ -import inspect -from typing import List, Optional, Union - -import numpy as np 
-import torch - -import PIL -from tqdm.auto import tqdm -from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer - -from ...models import AutoencoderKL, UNet2DConditionModel -from ...pipeline_utils import DiffusionPipeline -from ...schedulers import DDIMScheduler, PNDMScheduler -from ...utils import logging -from . import StableDiffusionPipelineOutput -from .safety_checker import StableDiffusionSafetyChecker - - -logger = logging.get_logger(__name__) - - -def preprocess_image(image): - w, h = image.size - w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 - image = image.resize((w, h), resample=PIL.Image.LANCZOS) - image = np.array(image).astype(np.float32) / 255.0 - image = image[None].transpose(0, 3, 1, 2) - image = torch.from_numpy(image) - return 2.0 * image - 1.0 - - -def preprocess_mask(mask): - mask = mask.convert("L") - w, h = mask.size - w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 - mask = mask.resize((w // 8, h // 8), resample=PIL.Image.NEAREST) - mask = np.array(mask).astype(np.float32) / 255.0 - mask = np.tile(mask, (4, 1, 1)) - mask = mask[None].transpose(0, 1, 2, 3) # what does this step do? - mask = 1 - mask # repaint white, keep black - mask = torch.from_numpy(mask) - return mask - - -class StableDiffusionInpaintPipeline(DiffusionPipeline): - r""" - Pipeline for text-guided image inpainting using Stable Diffusion. *This is an experimental feature*. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latens. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. - safety_checker ([`StableDiffusionSafetyChecker`]): - Classification module that estimates whether generated images could be considered offsensive or harmful. - Please, refer to the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-4) for details. - feature_extractor ([`CLIPFeatureExtractor`]): - Model that extracts features from generated images to be used as inputs for the `safety_checker`. 
- """ - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: Union[DDIMScheduler, PNDMScheduler], - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPFeatureExtractor, - ): - super().__init__() - scheduler = scheduler.set_format("pt") - logger.info("`StableDiffusionInpaintPipeline` is experimental and will very likely change in the future.") - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - - def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"): - r""" - Enable sliced attention computation. - - When this option is enabled, the attention module will split the input tensor in slices, to compute attention - in several steps. This is useful to save some memory in exchange for a small speed decrease. - - Args: - slice_size (`str` or `int`, *optional*, defaults to `"auto"`): - When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If - a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case, - `attention_head_dim` must be a multiple of `slice_size`. - """ - if slice_size == "auto": - # half the attention head size is usually a good trade-off between - # speed and memory - slice_size = self.unet.config.attention_head_dim // 2 - self.unet.set_attention_slice(slice_size) - - def disable_attention_slicing(self): - r""" - Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go - back to computing attention in one step. - """ - # set slice_size = `None` to disable `set_attention_slice` - self.enable_attention_slice(None) - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - init_image: Union[torch.FloatTensor, PIL.Image.Image], - mask_image: Union[torch.FloatTensor, PIL.Image.Image], - strength: float = 0.8, - num_inference_steps: Optional[int] = 50, - guidance_scale: Optional[float] = 7.5, - eta: Optional[float] = 0.0, - generator: Optional[torch.Generator] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - init_image (`torch.FloatTensor` or `PIL.Image.Image`): - `Image`, or tensor representing an image batch, that will be used as the starting point for the - process. This is the image whose masked region will be inpainted. - mask_image (`torch.FloatTensor` or `PIL.Image.Image`): - `Image`, or tensor representing an image batch, to mask `init_image`. White pixels in the mask will be - replaced by noise and therefore repainted, while black pixels will be preserved. The mask image will be - converted to a single channel (luminance) before use. - strength (`float`, *optional*, defaults to 0.8): - Conceptually, indicates how much to inpaint the masked area. Must be between 0 and 1. When `strength` - is 1, the denoising process will be run on the masked area for the full number of iterations specified - in `num_inference_steps`. `init_image` will be used as a reference for the masked area, adding more - noise to that region the larger the `strength`. If `strength` is 0, no inpainting will occur. 
- num_inference_steps (`int`, *optional*, defaults to 50): - The reference number of denoising steps. More denoising steps usually lead to a higher quality image at - the expense of slower inference. This parameter will be modulated by `strength`, as explained above. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator`, *optional*): - A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation - deterministic. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `nd.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. 
- """ - if isinstance(prompt, str): - batch_size = 1 - elif isinstance(prompt, list): - batch_size = len(prompt) - else: - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if strength < 0 or strength > 1: - raise ValueError(f"The value of strength should in [0.0, 1.0] but is {strength}") - - # set timesteps - accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys()) - extra_set_kwargs = {} - offset = 0 - if accepts_offset: - offset = 1 - extra_set_kwargs["offset"] = 1 - - self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs) - - # preprocess image - init_image = preprocess_image(init_image).to(self.device) - - # encode the init image into latents and scale the latents - init_latent_dist = self.vae.encode(init_image.to(self.device)).latent_dist - init_latents = init_latent_dist.sample(generator=generator) - - init_latents = 0.18215 * init_latents - - # Expand init_latents for batch_size - init_latents = torch.cat([init_latents] * batch_size) - init_latents_orig = init_latents - - # preprocess mask - mask = preprocess_mask(mask_image).to(self.device) - mask = torch.cat([mask] * batch_size) - - # check sizes - if not mask.shape == init_latents.shape: - raise ValueError("The mask and init_image should be the same size!") - - # get the original timestep using init_timestep - init_timestep = int(num_inference_steps * strength) + offset - init_timestep = min(init_timestep, num_inference_steps) - timesteps = self.scheduler.timesteps[-init_timestep] - timesteps = torch.tensor([timesteps] * batch_size, dtype=torch.long, device=self.device) - - # add noise to latents using the timesteps - noise = torch.randn(init_latents.shape, generator=generator, device=self.device) - init_latents = self.scheduler.add_noise(init_latents, noise, timesteps) - - # get prompt text embeddings - text_input = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_embeddings = self.text_encoder(text_input.input_ids.to(self.device))[0] - - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - max_length = text_input.input_ids.shape[-1] - uncond_input = self.tokenizer( - [""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt" - ) - uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0] - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) - - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - latents = init_latents - t_start = max(num_inference_steps - init_timestep + offset, 0) - for i, t in tqdm(enumerate(self.scheduler.timesteps[t_start:])): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - - # predict the noise residual - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # masking - init_latents_proper = self.scheduler.add_noise(init_latents_orig, noise, t) - latents = (init_latents_proper * mask) + (latents * (1 - mask)) - - # scale and decode the image latents with vae - latents = 1 / 0.18215 * latents - image = self.vae.decode(latents).sample - - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).numpy() - - # run safety checker - safety_cheker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(self.device) - image, has_nsfw_concept = self.safety_checker(images=image, clip_input=safety_cheker_input.pixel_values) - - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/Sanjar/airi_text_classification/README.md b/spaces/Sanjar/airi_text_classification/README.md deleted file mode 100644 index 4f4c0e2f54ebd7e2674710ba28361f299f9fb1a7..0000000000000000000000000000000000000000 --- a/spaces/Sanjar/airi_text_classification/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Airi Text Classification -emoji: 🌖 -colorFrom: pink -colorTo: pink -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Sapphire-356/Video2MC/common/humaneva_dataset.py b/spaces/Sapphire-356/Video2MC/common/humaneva_dataset.py deleted file mode 100644 index 5dbfe023e75af62a2326a4de5af6675776379ed3..0000000000000000000000000000000000000000 --- a/spaces/Sapphire-356/Video2MC/common/humaneva_dataset.py +++ /dev/null @@ -1,122 +0,0 @@ -# Copyright (c) 2018-present, Facebook, Inc. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
-# - -import copy - -import numpy as np - -from common.mocap_dataset import MocapDataset -from common.skeleton import Skeleton - -humaneva_skeleton = Skeleton(parents=[-1, 0, 1, 2, 3, 1, 5, 6, 0, 8, 9, 0, 11, 12, 1], - joints_left=[2, 3, 4, 8, 9, 10], - joints_right=[5, 6, 7, 11, 12, 13]) - -humaneva_cameras_intrinsic_params = [ - { - 'id': 'C1', - 'res_w': 640, - 'res_h': 480, - 'azimuth': 0, # Only used for visualization - }, - { - 'id': 'C2', - 'res_w': 640, - 'res_h': 480, - 'azimuth': -90, # Only used for visualization - }, - { - 'id': 'C3', - 'res_w': 640, - 'res_h': 480, - 'azimuth': 90, # Only used for visualization - }, -] - -humaneva_cameras_extrinsic_params = { - 'S1': [ - { - 'orientation': [0.424207, -0.4983646, -0.5802981, 0.4847012], - 'translation': [4062.227, 663.2477, 1528.397], - }, - { - 'orientation': [0.6503354, -0.7481602, -0.0919284, 0.0941766], - 'translation': [844.8131, -3805.2092, 1504.9929], - }, - { - 'orientation': [0.0664734, -0.0690535, 0.7416416, -0.6639132], - 'translation': [-797.67377, 3916.3174, 1433.6602], - }, - ], - 'S2': [ - { - 'orientation': [0.4214752, -0.4961493, -0.5838273, 0.4851187], - 'translation': [4112.9121, 626.4929, 1545.2988], - }, - { - 'orientation': [0.6501393, -0.7476588, -0.0954617, 0.0959808], - 'translation': [923.5740, -3877.9243, 1504.5518], - }, - { - 'orientation': [0.0699353, -0.0712403, 0.7421637, -0.662742], - 'translation': [-781.4915, 3838.8853, 1444.9929], - }, - ], - 'S3': [ - { - 'orientation': [0.424207, -0.4983646, -0.5802981, 0.4847012], - 'translation': [4062.2271, 663.2477, 1528.3970], - }, - { - 'orientation': [0.6503354, -0.7481602, -0.0919284, 0.0941766], - 'translation': [844.8131, -3805.2092, 1504.9929], - }, - { - 'orientation': [0.0664734, -0.0690535, 0.7416416, -0.6639132], - 'translation': [-797.6738, 3916.3174, 1433.6602], - }, - ], - 'S4': [ - {}, - {}, - {}, - ], - -} - - -class HumanEvaDataset(MocapDataset): - def __init__(self, path): - super().__init__(fps=60, skeleton=humaneva_skeleton) - - self._cameras = copy.deepcopy(humaneva_cameras_extrinsic_params) - for cameras in self._cameras.values(): - for i, cam in enumerate(cameras): - cam.update(humaneva_cameras_intrinsic_params[i]) - for k, v in cam.items(): - if k not in ['id', 'res_w', 'res_h']: - cam[k] = np.array(v, dtype='float32') - if 'translation' in cam: - cam['translation'] = cam['translation'] / 1000 # mm to meters - - for subject in list(self._cameras.keys()): - data = self._cameras[subject] - del self._cameras[subject] - for prefix in ['Train/', 'Validate/', 'Unlabeled/Train/', 'Unlabeled/Validate/', 'Unlabeled/']: - self._cameras[prefix + subject] = data - - # Load serialized dataset - data = np.load(path)['positions_3d'].item() - - self._data = {} - for subject, actions in data.items(): - self._data[subject] = {} - for action_name, positions in actions.items(): - self._data[subject][action_name] = { - 'positions': positions, - 'cameras': self._cameras[subject], - } diff --git a/spaces/SeViLA/SeViLA/lavis/models/blip_models/blip_vqa.py b/spaces/SeViLA/SeViLA/lavis/models/blip_models/blip_vqa.py deleted file mode 100644 index dd6e4144b8243e251d4c1c6451f88f97ef641a8b..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/models/blip_models/blip_vqa.py +++ /dev/null @@ -1,375 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. 
- SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import torch -import torch.nn.functional as F -from lavis.common.registry import registry -from lavis.models.base_model import tile -from lavis.models.blip_models.blip import BlipBase -from lavis.models.blip_models.blip_outputs import ( - BlipOutput, - BlipIntermediateOutput, -) -from lavis.models.med import XBertEncoder, XBertLMHeadDecoder -from lavis.models.vit import VisionTransformerEncoder - - -@registry.register_model("blip_vqa") -class BlipVQA(BlipBase): - """ - BLIP VQA models. - - Supported model types: - - base: vqa model initialized with pre-trained BLIP base model on 115M image-text pairs after CapFilt; not fine-tuned. - - vqav2: fine-tuned BLIP base model on VQA v2.0 dataset. - - Usage: - >>> from lavis.models import load_model - >>> model = load_model("blip_vqa", "vqav2") - >>> model = load_model("blip_vqa", "okvqa") - >>> model = load_model("blip_vqa", "aokvqa") - """ - - PRETRAINED_MODEL_CONFIG_DICT = { - "vqav2": "configs/models/blip_vqav2.yaml", - "okvqa": "configs/models/blip_vqa_okvqa.yaml", - "aokvqa": "configs/models/blip_vqa_aokvqa.yaml", - } - - def __init__(self, image_encoder, text_encoder, text_decoder, max_txt_len=35): - super().__init__() - self.tokenizer = self.init_tokenizer() - - self.visual_encoder = image_encoder - - self.text_encoder = text_encoder - self.text_decoder = text_decoder - - self.max_txt_len = max_txt_len - - def forward(self, samples): - """ - Args: - samples (dict): A dictionary containing the following keys: - - image (torch.Tensor): A tensor of shape (batch_size, 3, H, W). Default H=480, W=480. - - text_input (list): A list of strings, each string is a question - - answer (list): A list of strings, each string is an answer - - weight (torch.Tensor): A tensor used to weigh each answer in the loss computation. - The shape of the tensor is (sum(n_answers),) - - n_answers (torch.Tensor): A tensor shape (batch_size,) containing the number of answers - for each question in the batch. - - Returns: - A BlipOutput object containing loss and intermediate outputs, - see :class:`lavis.models.blip_outputs.BlipOutput` for more details. - - Examples: - ```python - >>> import torch - >>> from lavis.models import load_model - >>> model = load_model("blip_vqa") - >>> samples = { - ... "image": torch.rand(2, 3, 480, 480), - ... "text_input": ["What is this?", "What is that?"], - ... "answer": ["cat", "cat", "dog"], - ... "weight": torch.tensor([1.0, 1.0, 1.0]), - ... "n_answers": torch.tensor([2, 1]), - ... 
} - >>> output = model(samples) - >>> output.keys() - odict_keys(['intermediate_output', 'loss']) - >>> output.intermediate_output.keys() - odict_keys(['image_embeds', 'encoder_output', 'decoder_output', 'decoder_labels']) - ``` - """ - encoder_output, image_embeds = self.forward_encoder(samples) - loss, decoder_output, decoder_targets = self.forward_decoder( - samples=samples, encoder_out=encoder_output - ) - - return BlipOutput( - loss=loss, - intermediate_output=BlipIntermediateOutput( - image_embeds=image_embeds, - encoder_output=encoder_output, - decoder_output=decoder_output, - decoder_labels=decoder_targets, - ), - ) - - def forward_encoder(self, samples): - questions = samples["text_input"] - questions = self.tokenizer( - questions, - padding="longest", - truncation=True, - max_length=self.max_txt_len, - return_tensors="pt", - ).to(self.device) - questions.input_ids[:, 0] = self.tokenizer.enc_token_id - samples.update({"tokenized_text": questions}) - - image_embeds = self.visual_encoder.forward_features(samples["image"]) - encoder_output = self.text_encoder.forward_automask( - tokenized_text=samples["tokenized_text"], visual_embeds=image_embeds - ) - - return encoder_output, image_embeds - - def forward_decoder(self, samples, encoder_out, **kwargs): - answers = self.tokenizer( - samples["answer"], padding="longest", return_tensors="pt" - ).to(self.device) - answers.input_ids[:, 0] = self.tokenizer.bos_token_id - answer_targets = answers.input_ids.masked_fill( - answers.input_ids == self.tokenizer.pad_token_id, -100 - ) - - question_states = [] - question_atts = [] - - question = samples["tokenized_text"] - question_output = encoder_out - - for b, n in enumerate(samples["n_answers"]): - question_states += [question_output.last_hidden_state[b]] * n - question_atts += [question.attention_mask[b]] * n - - question_states = torch.stack(question_states, dim=0) - question_atts = torch.stack(question_atts, dim=0) - - answer_output = self.text_decoder( - answers.input_ids, - attention_mask=answers.attention_mask, - encoder_hidden_states=question_states, - encoder_attention_mask=question_atts, - labels=answer_targets, - return_dict=True, - reduction="none", - ) - - loss = samples["weight"] * answer_output.loss - bsz = samples["image"].size(0) - - loss = loss.sum() / bsz - - return loss, answer_output, answer_targets - - def predict_answers( - self, - samples, - num_beams=3, - inference_method="rank", - max_len=10, - min_len=1, - num_ans_candidates=128, - answer_list=None, - **kwargs - ): - """ - Args: - samples (dict): A dictionary containing the following keys: - - image (torch.Tensor): A tensor of shape (batch_size, 3, H, W). Default H=480, W=480. - - text_input (str or [str]): String or a list of strings, each string is a question. - The number of questions must be equal to the batch size. If a single string, will be converted to a list of string, with length 1 first. - num_beams (int): Number of beams for beam search. 1 means no beam search. - inference_method (str): Inference method. One of "rank", "generate". - - If "rank", the model will return answers with the highest probability from the answer list. - - If "generate", the model will generate answers. - max_len (int): Maximum length of generated answers. - min_len (int): Minimum length of generated answers. - num_ans_candidates (int): Number of answer candidates, used to filter out answers with low probability. - answer_list (list): A list of strings, each string is an answer. 
- - Returns: - List: A list of strings, each string is an answer. - - Examples: - ```python - >>> from PIL import Image - >>> from lavis.models import load_model_and_preprocess - >>> model, vis_processors, txt_processors = load_model_and_preprocess("blip_vqa", "vqav2") - >>> raw_image = Image.open("docs/data/merlion.png").convert("RGB") - >>> question = "Which city is this photo taken?" - >>> image = vis_processors["eval"](raw_image).unsqueeze(0) - >>> question = txt_processors["eval"](question) - >>> samples = {"image": image, "text_input": [question]} - >>> answers = model.predict_answers(samples) - >>> answers - ['singapore'] - >>> answer_list = ["Singapore", "London", "Palo Alto", "Tokyo"] - >>> answers = model.predict_answers(samples, answer_list=answer_list) - >>> answers - ['Singapore'] - ``` - """ - assert inference_method in [ - "rank", - "generate", - ], "Inference method must be one of 'rank' or 'generate', got {}.".format( - inference_method - ) - - if isinstance(samples["text_input"], str): - samples["text_input"] = [samples["text_input"]] - - assert len(samples["text_input"]) == samples["image"].size( - 0 - ), "The number of questions must be equal to the batch size." - - if inference_method == "generate": - return self._generate_answers( - samples, num_beams=num_beams, max_length=max_len, min_length=min_len - ) - elif inference_method == "rank": - assert answer_list is not None, "answer_list must be provided for ranking" - - num_ans_candidates = min(num_ans_candidates, len(answer_list)) - - return self._rank_answers( - samples, answer_list=answer_list, num_ans_candidates=num_ans_candidates - ) - - def _generate_answers(self, samples, num_beams=3, max_length=10, min_length=1): - encoder_out, _ = self.forward_encoder(samples) - - question_output = encoder_out - - question_states = question_output.last_hidden_state.repeat_interleave( - num_beams, dim=0 - ) - question_atts = torch.ones(question_states.size()[:-1], dtype=torch.long).to( - self.device - ) - - model_kwargs = { - "encoder_hidden_states": question_states, - "encoder_attention_mask": question_atts, - } - - bsz = samples["image"].size(0) - bos_ids = torch.full( - (bsz, 1), fill_value=self.tokenizer.bos_token_id, device=self.device - ) - - outputs = self.text_decoder.generate( - input_ids=bos_ids, - max_length=max_length, - min_length=min_length, - num_beams=num_beams, - eos_token_id=self.tokenizer.sep_token_id, - pad_token_id=self.tokenizer.pad_token_id, - **model_kwargs - ) - - # collect answers - answers = [] - for output in outputs: - answer = self.tokenizer.decode(output, skip_special_tokens=True) - answers.append(answer) - - return answers - - def _rank_answers(self, samples, answer_list, num_ans_candidates): - """ - Generate the first token of answers using decoder and select ${num_ans_candidates} - most probable ones. Then select answers from answer list, which start with the probable tokens. - Lastly, use the selected answers as the ground-truth labels for decoding and calculating LM loss. - Return the answers that minimize the losses as result. 
- - """ - answer_candidates = self.tokenizer( - answer_list, padding="longest", return_tensors="pt" - ).to(self.device) - answer_candidates.input_ids[:, 0] = self.tokenizer.bos_token_id - - answer_ids = answer_candidates.input_ids - answer_atts = answer_candidates.attention_mask - - question_output, _ = self.forward_encoder(samples) - question_states = question_output.last_hidden_state - - tokenized_question = samples["tokenized_text"] - question_atts = tokenized_question.attention_mask - - num_ques = question_states.size(0) - start_ids = answer_ids[0, 0].repeat(num_ques, 1) # bos token - - start_output = self.text_decoder( - start_ids, - encoder_hidden_states=question_states, - encoder_attention_mask=question_atts, - return_dict=True, - reduction="none", - ) - logits = start_output.logits[:, 0, :] # first token's logit - - # topk_probs: top-k probability - # topk_ids: [num_question, k] - answer_first_token = answer_ids[:, 1] - prob_first_token = F.softmax(logits, dim=1).index_select( - dim=1, index=answer_first_token - ) - topk_probs, topk_ids = prob_first_token.topk(num_ans_candidates, dim=1) - - # answer input: [num_question*k, answer_len] - input_ids = [] - input_atts = [] - for b, topk_id in enumerate(topk_ids): - input_ids.append(answer_ids.index_select(dim=0, index=topk_id)) - input_atts.append(answer_atts.index_select(dim=0, index=topk_id)) - input_ids = torch.cat(input_ids, dim=0) - input_atts = torch.cat(input_atts, dim=0) - - targets_ids = input_ids.masked_fill( - input_ids == self.tokenizer.pad_token_id, -100 - ) - - # repeat encoder's output for top-k answers - question_states = tile(question_states, 0, num_ans_candidates) - question_atts = tile(question_atts, 0, num_ans_candidates) - - output = self.text_decoder( - input_ids, - attention_mask=input_atts, - encoder_hidden_states=question_states, - encoder_attention_mask=question_atts, - labels=targets_ids, - return_dict=True, - reduction="none", - ) - - log_probs_sum = -output.loss - log_probs_sum = log_probs_sum.view(num_ques, num_ans_candidates) - - max_topk_ids = log_probs_sum.argmax(dim=1) - max_ids = topk_ids[max_topk_ids >= 0, max_topk_ids] - - answers = [answer_list[max_id] for max_id in max_ids] - - return answers - - @classmethod - def from_config(cls, cfg=None): - image_encoder = VisionTransformerEncoder.from_config(cfg) - - # text encoder + multimodal encoder - text_encoder = XBertEncoder.from_config(cfg) - text_decoder = XBertLMHeadDecoder.from_config(cfg) - - max_txt_len = cfg.get("max_txt_len", 35) - - model = cls( - image_encoder=image_encoder, - text_encoder=text_encoder, - text_decoder=text_decoder, - max_txt_len=max_txt_len, - ) - - model.load_checkpoint_from_config(cfg) - - return model diff --git a/spaces/SeViLA/SeViLA/lavis/models/topk.py b/spaces/SeViLA/SeViLA/lavis/models/topk.py deleted file mode 100644 index 33a78ebc407be1037eb5ba13f61b0500fe22ade7..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/models/topk.py +++ /dev/null @@ -1,339 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -""" -DETR model and criterion classes. 
-""" - -import math -import torch -import copy -import einops -import torch.nn.functional as F -from torch import nn - -from dataclasses import dataclass -from typing import Optional -from enum import IntEnum -from einops import rearrange - -class PerturbedTopK(nn.Module): - def __init__(self, k: int, num_samples: int = 1000): - super(PerturbedTopK, self).__init__() - self.num_samples = num_samples - self.k = k - - def __call__(self, x, sigma): - return PerturbedTopKFunction.apply(x, self.k, self.num_samples, sigma) - - -class PerturbedTopKFunction(torch.autograd.Function): - @staticmethod - def forward(ctx, x, k: int, num_samples: int = 1000, sigma: float = 0.05): - #print('x', x.shape) - b, d = x.shape - # for Gaussian: noise and gradient are the same. - noise = torch.normal(mean=0.0, std=1.0, size=(b, num_samples, d)).to(x.device) - perturbed_x = x[:, None, :] + noise * sigma # b, nS, d - #print('perturbed_x', perturbed_x.shape) - topk_results = torch.topk(perturbed_x, k=k, dim=-1, sorted=False) - #print('topk_results',topk_results) - - indices = topk_results.indices # b, nS, k - indices = torch.sort(indices, dim=-1).values # b, nS, k - # print('indices', indices.shape ,indices[0,0,0]) - - perturbed_output = torch.nn.functional.one_hot(indices, num_classes=d).float() - indicators = perturbed_output.mean(dim=1) # b, k, d - # print('perturbed_output', perturbed_output.shape, perturbed_output[0,indices[0,0,0],0,0]) - - # constants for backward - ctx.k = k - ctx.num_samples = num_samples - ctx.sigma = sigma - - # tensors for backward - ctx.perturbed_output = perturbed_output - ctx.noise = noise - return indicators - - @staticmethod - def backward(ctx, grad_output): - if grad_output is None: - return tuple([None] * 5) - - noise_gradient = ctx.noise - if ctx.sigma <= 1e-20: - b, _, k, d = ctx.perturbed_output.size() - expected_gradient = torch.zeros(b, k, d).to(grad_output.device) - else: - expected_gradient = ( - torch.einsum("bnkd,bnd->bkd", ctx.perturbed_output, noise_gradient) - / ctx.num_samples - / (ctx.sigma) - ) - - grad_input = torch.einsum("bkd,bkd->bd", grad_output, expected_gradient) - - return (grad_input,) + tuple([None] * 5) - -def HardTopK(k, x): - topk_results = torch.topk(x, k=k, dim=-1, sorted=False) - indices = topk_results.indices # b, k - indices = torch.sort(indices, dim=-1).values - return indices - - -def batched_index_select(input, dim, index): - for i in range(1, len(input.shape)): - if i != dim: - index = index.unsqueeze(i) - expanse = list(input.shape) - expanse[0] = -1 - expanse[dim] = -1 - index = index.expand(expanse) - return torch.gather(input, dim, index) - -def extract_frames_from_indices(x, indices): - batch_size, _, n, channels = x.shape - k = indices.shape[-1] - all_frame = x - frames = batched_index_select(all_frame, 1, indices) - frames = frames.contiguous().view(batch_size, k, n, channels) - return frames - - -def extract_frames_from_indicators(x, indicators): - indicators = rearrange(indicators, "b d k -> b k d") - frames = torch.einsum("b k d, b d n c-> b k n c", - indicators, x) - return frames - - -class ModalityEmbeddingsID(IntEnum): - TEXT_QUESTION = 0 - TEXT_EMBEDDING = 1 - TEXT_UNUSED = 2 # ignore - VISUAL_EMBEDDING = 3 - VISUAL_UNUSED = 4 # ignore - -class ModalityEmbeddings(nn.Module): - """ - Provides embeddings that indicate type of modality; for use with multimodal inputs for ATP. See atp.py for usage. 
- """ - def __init__(self, - d_model: int, - use_text_query: bool = False, - use_text_cands: bool = False, - n_cands: int = 5): - """ - Details for each of these arguments are provided in ATPConfig. - """ - super().__init__() - self.d_model = d_model - self.embedding = nn.Embedding(num_embeddings=len(ModalityEmbeddingsID), - embedding_dim=d_model) - - self.use_text_query = use_text_query - self.use_text_cands = use_text_cands - self.n_cands = n_cands if use_text_cands else 0 - self.n_text_feats = 1 if use_text_query else 0 - if use_text_cands: - self.n_text_feats += n_cands - - def forward(self, x, num_frame): - """ - x: torch.tensor of size (L, N, D) - returns modality embeddings for x of size (L, *, D) - """ - L, N, D = x.size() # (sequence_length, batch_size, feature_dim) - num_txt = L - num_frame - - # assemble the IDs for the modality encodings, language inputs then vision inputs - class_ids = [] - if self.use_text_query: - class_ids.extend([ModalityEmbeddingsID.TEXT_QUESTION,] * num_txt) - # if self.use_text_cands: - # class_ids.extend([ModalityEmbeddingsID.TEXT_EMBEDDING,] * self.n_cands) - class_ids.extend([ModalityEmbeddingsID.VISUAL_EMBEDDING,] * num_frame) - - class_ids = torch.tensor( - class_ids, - dtype=torch.long, - device=x.device - ).unsqueeze(-1) - - # return modality embeddings - return self.embedding(class_ids) - -@dataclass -class ATPConfig: - ''' - ATPConfig contains the parameters needed for the ATPSelectorModel (and its ATPEncoder). - ''' - # ATPEncoder params - n_layers: int = 6 - n_heads: int = 4 - d_model: int = 256 - d_input_t: int = 2048 - d_input_v: int = 1408 - d_model_ff: int = 256 - enc_dropout: float = 0.1 - use_text_query: bool = True # at least one use_text_* needs to be true for ATP to be multimodal - use_text_cands: bool = False # ^ see above. (note: if both are false, ATP is vision-only) - n_cands: int = 5 # only relevant when use_text_cands is set to true - # ATPSelector params - use_ste: bool = True # controls type of selector during ATP training; see ATPSelectorModel.forward - sel_dropout: float = 0.0 - d_input: int = 512 # size of the input vision-language embeddings (e.g. CLIP-ViT-B32 is size 512) - - def default_args(cls): - return cls(n_layers = 6, - n_heads = 4, - d_model = 256, - d_input_t = 2048, - d_input_v = 1408, - d_model_ff = 256, - enc_dropout = 0.1, - use_text_query = True, - use_text_cands = False, - n_cands = 5, - use_ste = True, - sel_dropout = 0.0, - d_input = 512) - - @classmethod - def from_args(cls, args): - return cls(n_layers = args.n_layers, - n_heads = args.n_heads, - d_model = args.d_model, - d_model_ff = args.d_model_ff, - enc_dropout = args.enc_dropout, - use_text_query = args.use_text_query, - use_text_cands = args.use_text_cands, - n_cands = args.n_cands, - use_ste = args.use_ste, - sel_dropout = args.sel_dropout, - d_input = args.d_input) - -class ATPEncoder(nn.Module): - """ - The multimodal transformer encoder for the ATP model. For analysis purposes, the ATP encoder - does not use any positional information (no positional encodings + transformer / self-attention) - and is generally kept low-capacity. If the goal is raw accuracy (not analysis), you can relax these constraints. - """ - def __init__(self, config: ATPConfig): - """ - config: ATPConfig with parameters for the (transformer-based, atemporal) encoder for ATP. - See ATPConfig documentation for details. 
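        A rough usage sketch, assuming the ``ATPConfig`` defaults defined above:

            config = ATPConfig()                    # d_model=256, n_layers=6, n_heads=4
            encoder = ATPEncoder(config)
            x = torch.rand(20, 2, config.d_model)   # (L, N, d_model): text tokens then frame tokens
            out = encoder(x, vis_L=8)               # same shape back; no positional encodings used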
- """ - super().__init__() - self.d_model = config.d_model - - self.dropout = nn.Dropout(p=config.enc_dropout) - - - self.modality_encoding = ModalityEmbeddings(d_model=self.d_model, - use_text_query=config.use_text_query, - use_text_cands=config.use_text_cands, - n_cands=config.n_cands) - - atp_encoder_layer = nn.TransformerEncoderLayer( - d_model=self.d_model, - nhead=config.n_heads, - dim_feedforward=config.d_model_ff, - dropout=config.enc_dropout, - activation='relu' - ) - - self.transformer_encoder = nn.TransformerEncoder(atp_encoder_layer, config.n_layers) - - def forward(self, x_inputs: torch.tensor, vis_L): - """ - x_inputs: torch.tensor of shape (L, N, D) - """ - L, N, D = x_inputs.size() # (sequence_length, batch_size, d_model) - assert D == self.d_model, "inputs dimension mismatch" - x_encoded = x_inputs * math.sqrt(self.d_model) - x_encoded += self.modality_encoding(x_encoded, vis_L) - x_encoded = self.dropout(x_encoded) - x_encoded = self.transformer_encoder(x_encoded) - - return x_encoded - -class TopK_Selector(nn.Module): - """ - The Atemporal Probe (ATP) selector model. Takes as input a sequence of image-language - encoding and outputs a (discrete) selection over the input frames, to help analyze - downstream discriminative video-language tasks. - """ - - def __init__(self, config=ATPConfig, num_select=4): - """ - config: ATPConfig with parameters for initializing the ATPSelectorModel (and its encoder). - See ATPConfig documentation for details. - """ - super().__init__() - self.config = config - self.t_embedding = nn.Linear(config.d_input_t, config.d_input) - self.v_embedding = nn.Linear(config.d_input_v, config.d_input) - self.embedding = nn.Linear(config.d_input, config.d_model) - self.atp_encoder = ATPEncoder(config) - self.dropout = nn.Dropout(p=config.sel_dropout) - self.logits = nn.Linear(config.d_model, 1) - self.num_select = num_select - self.sigma = 0.1 - - def forward(self, - x_vis, # [b, t, d] - x_txt, # [b, n, d] - **kwargs): - """ - """ - x_vis_cls = x_vis[:, :, 0, :] # b t n c - N, vis_L, D = x_vis_cls.size() # (batch_size, sequence_length, feature_dimension) - # embed the input sequence to the (smaller) model dimension (d_model) with modality encodings. 
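        # Shape sketch for the projections below (using the ATPConfig defaults above):
        #   x_vis_cls: (b, t, d_input_v=1408) --v_embedding--> (b, t, d_input=512)
        #   x_txt:     (b, n, d_input_t=2048) --t_embedding--> (b, n, d_input=512)
        # Both are concatenated along the sequence axis and projected to d_model=256
        # before entering the ATP encoder.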
- x_vis_cls = self.v_embedding(self.dropout(x_vis_cls)) - x_txt = self.t_embedding(self.dropout(x_txt)) - x_inputs = [] - x_vis_cls = x_vis_cls.permute(1, 0, 2) - x_inputs.append(x_txt.permute(1,0,2)) # (n, b, d) - x_inputs.append(x_vis_cls) - x_inputs = torch.cat(x_inputs, dim=0) - x_encoded = self.embedding(self.dropout(x_inputs)) - x_atp_encoded = self.atp_encoder(x_encoded, vis_L) - x_atp_encoded = x_atp_encoded.permute(1, 0, 2) - x_encoded_v = x_atp_encoded[:, -vis_L: , :] - # obtain selection scores (logits) - x_logits = self.logits(self.dropout(x_encoded_v)).squeeze() - #print('x_logits', x_logits.shape) - - if self.training: - indices = PerturbedTopKFunction.apply(x_logits, self.num_select) - #print('indices', indices.shape) - indices = einops.rearrange(indices, "b k d -> b d k") - - if indices is not None: - qa_frames = extract_frames_from_indicators(x_vis, indices) - else: - raise RuntimeError("Empty indices!") - else: - indices = HardTopK(self.num_select, x_logits) - if indices is not None: - qa_frames = extract_frames_from_indices(x_vis, indices) - else: - raise RuntimeError("Empty indices!") - - - return qa_frames - -if __name__ == "__main__": - selector_config = ATPConfig.default_args - - Selector = TopK_Selector(num_select=4) #.eval() - - x_vis = torch.rand([2, 8, 257, 1408]) - x_txt = torch.rand([2, 68, 2048]) - - out = Selector(x_vis, x_txt) - print(out.shape) - - diff --git a/spaces/ServerX/PorcoDiaz/Applio-RVC-Fork/utils/dependency.py b/spaces/ServerX/PorcoDiaz/Applio-RVC-Fork/utils/dependency.py deleted file mode 100644 index b70338b02d31b1ef455fbac817d418d328db518d..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/Applio-RVC-Fork/utils/dependency.py +++ /dev/null @@ -1,170 +0,0 @@ -import os -import csv -import shutil -import tarfile -import subprocess -from pathlib import Path -from datetime import datetime - -def install_packages_but_jank_af(): - packages = ['build-essential', 'python3-dev', 'ffmpeg', 'aria2'] - pip_packages = ['pip', 'setuptools', 'wheel', 'httpx==0.23.0', 'faiss-gpu', 'fairseq', 'gradio==3.34.0', - 'ffmpeg', 'ffmpeg-python', 'praat-parselmouth', 'pyworld', 'numpy==1.23.5', - 'numba==0.56.4', 'librosa==0.9.2', 'mega.py', 'gdown', 'onnxruntime', 'pyngrok==4.1.12', - 'gTTS', 'elevenlabs', 'wget', 'tensorboardX', 'unidecode', 'huggingface-hub', 'stftpitchshift==1.5.1', - 'yt-dlp', 'pedalboard', 'pathvalidate', 'nltk', 'edge-tts', 'git+https://github.com/suno-ai/bark.git', 'python-dotenv' , 'av'] - - print("Updating and installing system packages...") - for package in packages: - print(f"Installing {package}...") - subprocess.check_call(['apt-get', 'install', '-qq', '-y', package]) - - print("Updating and installing pip packages...") - subprocess.check_call(['pip', 'install', '--upgrade'] + pip_packages) - - print('Packages up to date.') - - -def setup_environment(ForceUpdateDependencies, ForceTemporaryStorage): - # Mounting Google Drive - if not ForceTemporaryStorage: - from google.colab import drive - - if not os.path.exists('/content/drive'): - drive.mount('/content/drive') - else: - print('Drive is already mounted. 
Proceeding...') - - # Function to install dependencies with progress - def install_packages(): - packages = ['build-essential', 'python3-dev', 'ffmpeg', 'aria2'] - pip_packages = ['pip', 'setuptools', 'wheel', 'httpx==0.23.0', 'faiss-gpu', 'fairseq', 'gradio==3.34.0', - 'ffmpeg', 'ffmpeg-python', 'praat-parselmouth', 'pyworld', 'numpy==1.23.5', - 'numba==0.56.4', 'librosa==0.9.2', 'mega.py', 'gdown', 'onnxruntime', 'pyngrok==4.1.12', - 'gTTS', 'elevenlabs', 'wget', 'tensorboardX', 'unidecode', 'huggingface-hub', 'stftpitchshift==1.5.1', - 'yt-dlp', 'pedalboard', 'pathvalidate', 'nltk', 'edge-tts', 'git+https://github.com/suno-ai/bark.git', 'python-dotenv' , 'av'] - - print("Updating and installing system packages...") - for package in packages: - print(f"Installing {package}...") - subprocess.check_call(['apt-get', 'install', '-qq', '-y', package]) - - print("Updating and installing pip packages...") - subprocess.check_call(['pip', 'install', '--upgrade'] + pip_packages) - - - print('Packages up to date.') - - # Function to scan a directory and writes filenames and timestamps - def scan_and_write(base_path, output_file): - with open(output_file, 'w', newline='') as f: - writer = csv.writer(f) - for dirpath, dirs, files in os.walk(base_path): - for filename in files: - fname = os.path.join(dirpath, filename) - try: - mtime = os.path.getmtime(fname) - writer.writerow([fname, mtime]) - except Exception as e: - print(f'Skipping irrelevant nonexistent file {fname}: {str(e)}') - print(f'Finished recording filesystem timestamps to {output_file}.') - - # Function to compare files - def compare_files(old_file, new_file): - old_files = {} - new_files = {} - - with open(old_file, 'r') as f: - reader = csv.reader(f) - old_files = {rows[0]:rows[1] for rows in reader} - - with open(new_file, 'r') as f: - reader = csv.reader(f) - new_files = {rows[0]:rows[1] for rows in reader} - - removed_files = old_files.keys() - new_files.keys() - added_files = new_files.keys() - old_files.keys() - unchanged_files = old_files.keys() & new_files.keys() - - changed_files = {f for f in unchanged_files if old_files[f] != new_files[f]} - - for file in removed_files: - print(f'File has been removed: {file}') - - for file in changed_files: - print(f'File has been updated: {file}') - - return list(added_files) + list(changed_files) - - # Check if CachedRVC.tar.gz exists - if ForceTemporaryStorage: - file_path = '/content/CachedRVC.tar.gz' - else: - file_path = '/content/drive/MyDrive/RVC_Cached/CachedRVC.tar.gz' - - content_file_path = '/content/CachedRVC.tar.gz' - extract_path = '/' - - if not os.path.exists(file_path): - folder_path = os.path.dirname(file_path) - os.makedirs(folder_path, exist_ok=True) - print('No cached dependency install found. Attempting to download GitHub backup..') - - try: - download_url = "https://github.com/kalomaze/QuickMangioFixes/releases/download/release3/CachedRVC.tar.gz" - subprocess.run(["wget", "-O", file_path, download_url]) - print('Download completed successfully!') - except Exception as e: - print('Download failed:', str(e)) - - # Delete the failed download file - if os.path.exists(file_path): - os.remove(file_path) - print('Failed download file deleted. Continuing manual backup..') - - if Path(file_path).exists(): - if ForceTemporaryStorage: - print('Finished downloading CachedRVC.tar.gz.') - else: - print('CachedRVC.tar.gz found on Google Drive. 
Proceeding to copy and extract...') - - # Check if ForceTemporaryStorage is True and skip copying if it is - if ForceTemporaryStorage: - pass - else: - shutil.copy(file_path, content_file_path) - - print('Beginning backup copy operation...') - - with tarfile.open(content_file_path, 'r:gz') as tar: - for member in tar.getmembers(): - target_path = os.path.join(extract_path, member.name) - try: - tar.extract(member, extract_path) - except Exception as e: - print('Failed to extract a file (this isn\'t normal)... forcing an update to compensate') - ForceUpdateDependencies = True - print(f'Extraction of {content_file_path} to {extract_path} completed.') - - if ForceUpdateDependencies: - install_packages() - ForceUpdateDependencies = False - else: - print('CachedRVC.tar.gz not found. Proceeding to create an index of all current files...') - scan_and_write('/usr/', '/content/usr_files.csv') - - install_packages() - - scan_and_write('/usr/', '/content/usr_files_new.csv') - changed_files = compare_files('/content/usr_files.csv', '/content/usr_files_new.csv') - - with tarfile.open('/content/CachedRVC.tar.gz', 'w:gz') as new_tar: - for file in changed_files: - new_tar.add(file) - print(f'Added to tar: {file}') - - os.makedirs('/content/drive/MyDrive/RVC_Cached', exist_ok=True) - shutil.copy('/content/CachedRVC.tar.gz', '/content/drive/MyDrive/RVC_Cached/CachedRVC.tar.gz') - print('Updated CachedRVC.tar.gz copied to Google Drive.') - print('Dependencies fully up to date; future runs should be faster.') - diff --git a/spaces/Shad0ws/imagetomusic/utils.py b/spaces/Shad0ws/imagetomusic/utils.py deleted file mode 100644 index d302528fd6fc9be8d782f78b6c44f4d894147d07..0000000000000000000000000000000000000000 --- a/spaces/Shad0ws/imagetomusic/utils.py +++ /dev/null @@ -1,50 +0,0 @@ -import json -import numpy as np -import httpx - -from constants import MUBERT_TAGS, MUBERT_LICENSE, MUBERT_MODE, MUBERT_TOKEN - - -def get_mubert_tags_embeddings(w2v_model): - return w2v_model.encode(MUBERT_TAGS) - - -def get_pat(email: str): - r = httpx.post('https://api-b2b.mubert.com/v2/GetServiceAccess', - json={ - "method": "GetServiceAccess", - "params": { - "email": email, - "license": MUBERT_LICENSE, - "token": MUBERT_TOKEN, - "mode": MUBERT_MODE, - } - }) - - rdata = json.loads(r.text) - assert rdata['status'] == 1, "probably incorrect e-mail" - pat = rdata['data']['pat'] - return pat - - -def find_similar(em, embeddings, method='cosine'): - scores = [] - for ref in embeddings: - if method == 'cosine': - scores.append(1 - np.dot(ref, em) / (np.linalg.norm(ref) * np.linalg.norm(em))) - if method == 'norm': - scores.append(np.linalg.norm(ref - em)) - return np.array(scores), np.argsort(scores) - - -def get_tags_for_prompts(w2v_model, mubert_tags_embeddings, prompts, top_n=3, debug=False): - prompts_embeddings = w2v_model.encode(prompts) - ret = [] - for i, pe in enumerate(prompts_embeddings): - scores, idxs = find_similar(pe, mubert_tags_embeddings) - top_tags = MUBERT_TAGS[idxs[:top_n]] - top_prob = 1 - scores[idxs[:top_n]] - if debug: - print(f"Prompt: {prompts[i]}\nTags: {', '.join(top_tags)}\nScores: {top_prob}\n\n\n") - ret.append((prompts[i], list(top_tags))) - return ret diff --git a/spaces/Snb-ai/vuia/README.md b/spaces/Snb-ai/vuia/README.md deleted file mode 100644 index cedcd4200e5afd4fd946d7bc66dcd4b48d695279..0000000000000000000000000000000000000000 --- a/spaces/Snb-ai/vuia/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Vuia -emoji: 💻 -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.19.1 
-app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/StarbucksCN/starbucks_doc/core/helper.py b/spaces/StarbucksCN/starbucks_doc/core/helper.py deleted file mode 100644 index c51058636e3f68883f9b514fde10ac494414ffa3..0000000000000000000000000000000000000000 --- a/spaces/StarbucksCN/starbucks_doc/core/helper.py +++ /dev/null @@ -1,31 +0,0 @@ -from core.lifecycle import Lifecycle - - -class LifecycleHelper: - @classmethod - def initialize_if_possible(cls, ls: Lifecycle) -> None: - if isinstance(ls, Lifecycle) and ls.lifecycle_state.can_initialize( - ls.lifecycle_state.phase - ): - ls.initialize() - - @classmethod - def start_if_possible(cls, ls: Lifecycle) -> None: - if isinstance(ls, Lifecycle) and ls.lifecycle_state.can_start( - ls.lifecycle_state.phase - ): - ls.start() - - @classmethod - def stop_if_possible(cls, ls: Lifecycle) -> None: - if isinstance(ls, Lifecycle) and ls.lifecycle_state.can_stop( - ls.lifecycle_state.phase - ): - ls.stop() - - @classmethod - def dispose_if_possible(cls, ls: Lifecycle) -> None: - if isinstance(ls, Lifecycle) and ls.lifecycle_state.can_dispose( - ls.lifecycle_state.phase - ): - ls.dispose() diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/driver/ctypes.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/driver/ctypes.py deleted file mode 100644 index 398eb3066e0ca6bf7684065dd099da2cddc3e59a..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/driver/ctypes.py +++ /dev/null @@ -1,36 +0,0 @@ -import logging -import os - -import clickhouse_connect.driver.dataconv as pydc -import clickhouse_connect.driver.npconv as pync -from clickhouse_connect.driver.buffer import ResponseBuffer -from clickhouse_connect.driver.common import coerce_bool - -logger = logging.getLogger(__name__) - -RespBuffCls = ResponseBuffer -data_conv = pydc -numpy_conv = pync - -if coerce_bool(os.environ.get('CLICKHOUSE_CONNECT_USE_C', True)): - try: - from clickhouse_connect.driverc.buffer import ResponseBuffer as CResponseBuffer - import clickhouse_connect.driverc.dataconv as cdc - - data_conv = cdc - RespBuffCls = CResponseBuffer - logger.info('Successfully imported ClickHouse Connect C data optimizations') - except ImportError as ex: - CResponseBuffer = None - logger.warning('Unable to connect optimized C data functions [%s], falling back to pure Python', - str(ex)) - try: - import clickhouse_connect.driverc.npconv as cnc - - numpy_conv = cnc - logger.debug('Successfully import ClickHouse Connect C/Numpy optimizations') - except ImportError as ex: - logger.debug('Unable to connect ClickHouse Connect C to Numpy API [%s], falling back to pure Python', - str(ex)) -else: - logger.info('ClickHouse Connect C optimizations disabled') diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_utils.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_utils.py deleted file mode 100644 index fc5a8f8ae5bf8b7715abca4cb4dce01959cfa212..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_utils.py +++ /dev/null @@ -1,523 +0,0 @@ -from __future__ import nested_scopes -import traceback -import warnings -from _pydev_bundle import pydev_log -from 
_pydev_bundle._pydev_saved_modules import thread, threading -from _pydev_bundle import _pydev_saved_modules -import signal -import os -import ctypes -from importlib import import_module -from urllib.parse import quote # @UnresolvedImport -import time -import inspect -import sys -from _pydevd_bundle.pydevd_constants import USE_CUSTOM_SYS_CURRENT_FRAMES, IS_PYPY, SUPPORT_GEVENT, \ - GEVENT_SUPPORT_NOT_SET_MSG, GENERATED_LEN_ATTR_NAME, PYDEVD_WARN_SLOW_RESOLVE_TIMEOUT, \ - get_global_debugger - - -def save_main_module(file, module_name): - # patch provided by: Scott Schlesier - when script is run, it does not - # use globals from pydevd: - # This will prevent the pydevd script from contaminating the namespace for the script to be debugged - # pretend pydevd is not the main module, and - # convince the file to be debugged that it was loaded as main - sys.modules[module_name] = sys.modules['__main__'] - sys.modules[module_name].__name__ = module_name - - with warnings.catch_warnings(): - warnings.simplefilter("ignore", category=DeprecationWarning) - warnings.simplefilter("ignore", category=PendingDeprecationWarning) - from imp import new_module - - m = new_module('__main__') - sys.modules['__main__'] = m - if hasattr(sys.modules[module_name], '__loader__'): - m.__loader__ = getattr(sys.modules[module_name], '__loader__') - m.__file__ = file - - return m - - -def is_current_thread_main_thread(): - if hasattr(threading, 'main_thread'): - return threading.current_thread() is threading.main_thread() - else: - return isinstance(threading.current_thread(), threading._MainThread) - - -def get_main_thread(): - if hasattr(threading, 'main_thread'): - return threading.main_thread() - else: - for t in threading.enumerate(): - if isinstance(t, threading._MainThread): - return t - return None - - -def to_number(x): - if is_string(x): - try: - n = float(x) - return n - except ValueError: - pass - - l = x.find('(') - if l != -1: - y = x[0:l - 1] - # print y - try: - n = float(y) - return n - except ValueError: - pass - return None - - -def compare_object_attrs_key(x): - if GENERATED_LEN_ATTR_NAME == x: - as_number = to_number(x) - if as_number is None: - as_number = 99999999 - # len() should appear after other attributes in a list. - return (1, as_number) - else: - return (-1, to_string(x)) - - -def is_string(x): - return isinstance(x, str) - - -def to_string(x): - if isinstance(x, str): - return x - else: - return str(x) - - -def print_exc(): - if traceback: - traceback.print_exc() - - -def quote_smart(s, safe='/'): - return quote(s, safe) - - -def get_clsname_for_code(code, frame): - clsname = None - if len(code.co_varnames) > 0: - # We are checking the first argument of the function - # (`self` or `cls` for methods). 
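        # Illustrative case: for a frame executing Circle.area(self), co_varnames[0]
        # is 'self', f_locals['self'] is a Circle instance, and the clsname resolved
        # below is 'Circle'.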
- first_arg_name = code.co_varnames[0] - if first_arg_name in frame.f_locals: - first_arg_obj = frame.f_locals[first_arg_name] - if inspect.isclass(first_arg_obj): # class method - first_arg_class = first_arg_obj - else: # instance method - if hasattr(first_arg_obj, "__class__"): - first_arg_class = first_arg_obj.__class__ - else: # old style class, fall back on type - first_arg_class = type(first_arg_obj) - func_name = code.co_name - if hasattr(first_arg_class, func_name): - method = getattr(first_arg_class, func_name) - func_code = None - if hasattr(method, 'func_code'): # Python2 - func_code = method.func_code - elif hasattr(method, '__code__'): # Python3 - func_code = method.__code__ - if func_code and func_code == code: - clsname = first_arg_class.__name__ - - return clsname - - -def get_non_pydevd_threads(): - threads = threading.enumerate() - return [t for t in threads if t and not getattr(t, 'is_pydev_daemon_thread', False)] - - -if USE_CUSTOM_SYS_CURRENT_FRAMES and IS_PYPY: - # On PyPy we can use its fake_frames to get the traceback - # (instead of the actual real frames that need the tracing to be correct). - _tid_to_frame_for_dump_threads = sys._current_frames -else: - from _pydevd_bundle.pydevd_constants import _current_frames as _tid_to_frame_for_dump_threads - - -def dump_threads(stream=None, show_pydevd_threads=True): - ''' - Helper to dump thread info. - ''' - if stream is None: - stream = sys.stderr - thread_id_to_name_and_is_pydevd_thread = {} - try: - threading_enumerate = _pydev_saved_modules.pydevd_saved_threading_enumerate - if threading_enumerate is None: - threading_enumerate = threading.enumerate - - for t in threading_enumerate(): - is_pydevd_thread = getattr(t, 'is_pydev_daemon_thread', False) - thread_id_to_name_and_is_pydevd_thread[t.ident] = ( - '%s (daemon: %s, pydevd thread: %s)' % (t.name, t.daemon, is_pydevd_thread), - is_pydevd_thread - ) - except: - pass - - stream.write('===============================================================================\n') - stream.write('Threads running\n') - stream.write('================================= Thread Dump =================================\n') - stream.flush() - - for thread_id, frame in _tid_to_frame_for_dump_threads().items(): - name, is_pydevd_thread = thread_id_to_name_and_is_pydevd_thread.get(thread_id, (thread_id, False)) - if not show_pydevd_threads and is_pydevd_thread: - continue - - stream.write('\n-------------------------------------------------------------------------------\n') - stream.write(" Thread %s" % (name,)) - stream.write('\n\n') - - for i, (filename, lineno, name, line) in enumerate(traceback.extract_stack(frame)): - - stream.write(' File "%s", line %d, in %s\n' % (filename, lineno, name)) - if line: - stream.write(" %s\n" % (line.strip())) - - if i == 0 and 'self' in frame.f_locals: - stream.write(' self: ') - try: - stream.write(str(frame.f_locals['self'])) - except: - stream.write('Unable to get str of: %s' % (type(frame.f_locals['self']),)) - stream.write('\n') - stream.flush() - - stream.write('\n=============================== END Thread Dump ===============================') - stream.flush() - - -def _extract_variable_nested_braces(char_iter): - expression = [] - level = 0 - for c in char_iter: - if c == '{': - level += 1 - if c == '}': - level -= 1 - if level == -1: - return ''.join(expression).strip() - expression.append(c) - raise SyntaxError('Unbalanced braces in expression.') - - -def _extract_expression_list(log_message): - # Note: not using re because of nested braces. 
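    # Illustrative: 'hello {name}, {obj.attr}' -> ('hello %s, %s', ['name', 'obj.attr'])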
- expression = [] - expression_vars = [] - char_iter = iter(log_message) - for c in char_iter: - if c == '{': - expression_var = _extract_variable_nested_braces(char_iter) - if expression_var: - expression.append('%s') - expression_vars.append(expression_var) - else: - expression.append(c) - - expression = ''.join(expression) - return expression, expression_vars - - -def convert_dap_log_message_to_expression(log_message): - try: - expression, expression_vars = _extract_expression_list(log_message) - except SyntaxError: - return repr('Unbalanced braces in: %s' % (log_message)) - if not expression_vars: - return repr(expression) - # Note: use '%' to be compatible with Python 2.6. - return repr(expression) + ' % (' + ', '.join(str(x) for x in expression_vars) + ',)' - - -def notify_about_gevent_if_needed(stream=None): - ''' - When debugging with gevent check that the gevent flag is used if the user uses the gevent - monkey-patching. - - :return bool: - Returns True if a message had to be shown to the user and False otherwise. - ''' - stream = stream if stream is not None else sys.stderr - if not SUPPORT_GEVENT: - gevent_monkey = sys.modules.get('gevent.monkey') - if gevent_monkey is not None: - try: - saved = gevent_monkey.saved - except AttributeError: - pydev_log.exception_once('Error checking for gevent monkey-patching.') - return False - - if saved: - # Note: print to stderr as it may deadlock the debugger. - sys.stderr.write('%s\n' % (GEVENT_SUPPORT_NOT_SET_MSG,)) - return True - - return False - - -def hasattr_checked(obj, name): - try: - getattr(obj, name) - except: - # i.e.: Handle any exception, not only AttributeError. - return False - else: - return True - - -def getattr_checked(obj, name): - try: - return getattr(obj, name) - except: - # i.e.: Handle any exception, not only AttributeError. - return None - - -def dir_checked(obj): - try: - return dir(obj) - except: - return [] - - -def isinstance_checked(obj, cls): - try: - return isinstance(obj, cls) - except: - return False - - -class ScopeRequest(object): - - __slots__ = ['variable_reference', 'scope'] - - def __init__(self, variable_reference, scope): - assert scope in ('globals', 'locals') - self.variable_reference = variable_reference - self.scope = scope - - def __eq__(self, o): - if isinstance(o, ScopeRequest): - return self.variable_reference == o.variable_reference and self.scope == o.scope - - return False - - def __ne__(self, o): - return not self == o - - def __hash__(self): - return hash((self.variable_reference, self.scope)) - - -class DAPGrouper(object): - ''' - Note: this is a helper class to group variables on the debug adapter protocol (DAP). For - the xml protocol the type is just added to each variable and the UI can group/hide it as needed. 
- ''' - - SCOPE_SPECIAL_VARS = 'special variables' - SCOPE_PROTECTED_VARS = 'protected variables' - SCOPE_FUNCTION_VARS = 'function variables' - SCOPE_CLASS_VARS = 'class variables' - - SCOPES_SORTED = [ - SCOPE_SPECIAL_VARS, - SCOPE_PROTECTED_VARS, - SCOPE_FUNCTION_VARS, - SCOPE_CLASS_VARS, - ] - - __slots__ = ['variable_reference', 'scope', 'contents_debug_adapter_protocol'] - - def __init__(self, scope): - self.variable_reference = id(self) - self.scope = scope - self.contents_debug_adapter_protocol = [] - - def get_contents_debug_adapter_protocol(self): - return self.contents_debug_adapter_protocol[:] - - def __eq__(self, o): - if isinstance(o, ScopeRequest): - return self.variable_reference == o.variable_reference and self.scope == o.scope - - return False - - def __ne__(self, o): - return not self == o - - def __hash__(self): - return hash((self.variable_reference, self.scope)) - - def __repr__(self): - return '' - - def __str__(self): - return '' - - -def interrupt_main_thread(main_thread=None): - ''' - Generates a KeyboardInterrupt in the main thread by sending a Ctrl+C - or by calling thread.interrupt_main(). - - :param main_thread: - Needed because Jython needs main_thread._thread.interrupt() to be called. - - Note: if unable to send a Ctrl+C, the KeyboardInterrupt will only be raised - when the next Python instruction is about to be executed (so, it won't interrupt - a sleep(1000)). - ''' - if main_thread is None: - main_thread = threading.main_thread() - - pydev_log.debug('Interrupt main thread.') - called = False - try: - if os.name == 'posix': - # On Linux we can't interrupt 0 as in Windows because it's - # actually owned by a process -- on the good side, signals - # work much better on Linux! - os.kill(os.getpid(), signal.SIGINT) - called = True - - elif os.name == 'nt': - # This generates a Ctrl+C only for the current process and not - # to the process group! - # Note: there doesn't seem to be any public documentation for this - # function (although it seems to be present from Windows Server 2003 SP1 onwards - # according to: https://www.geoffchappell.com/studies/windows/win32/kernel32/api/index.htm) - ctypes.windll.kernel32.CtrlRoutine(0) - - # The code below is deprecated because it actually sends a Ctrl+C - # to the process group, so, if this was a process created without - # passing `CREATE_NEW_PROCESS_GROUP` the signal may be sent to the - # parent process and to sub-processes too (which is not ideal -- - # for instance, when using pytest-xdist, it'll actually stop the - # testing, even when called in the subprocess). - - # if hasattr_checked(signal, 'CTRL_C_EVENT'): - # os.kill(0, signal.CTRL_C_EVENT) - # else: - # # Python 2.6 - # ctypes.windll.kernel32.GenerateConsoleCtrlEvent(0, 0) - called = True - - except: - # If something went wrong, fallback to interrupting when the next - # Python instruction is being called. - pydev_log.exception('Error interrupting main thread (using fallback).') - - if not called: - try: - # In this case, we don't really interrupt a sleep() nor IO operations - # (this makes the KeyboardInterrupt be sent only when the next Python - # instruction is about to be executed). 
- if hasattr(thread, 'interrupt_main'): - thread.interrupt_main() - else: - main_thread._thread.interrupt() # Jython - except: - pydev_log.exception('Error on interrupt main thread fallback.') - - -class Timer(object): - - def __init__(self, min_diff=PYDEVD_WARN_SLOW_RESOLVE_TIMEOUT): - self.min_diff = min_diff - self._curr_time = time.time() - - def print_time(self, msg='Elapsed:'): - old = self._curr_time - new = self._curr_time = time.time() - diff = new - old - if diff >= self.min_diff: - print('%s: %.2fs' % (msg, diff)) - - def _report_slow(self, compute_msg, *args): - old = self._curr_time - new = self._curr_time = time.time() - diff = new - old - if diff >= self.min_diff: - py_db = get_global_debugger() - if py_db is not None: - msg = compute_msg(diff, *args) - py_db.writer.add_command(py_db.cmd_factory.make_warning_message(msg)) - - def report_if_compute_repr_attr_slow(self, attrs_tab_separated, attr_name, attr_type): - self._report_slow(self._compute_repr_slow, attrs_tab_separated, attr_name, attr_type) - - def _compute_repr_slow(self, diff, attrs_tab_separated, attr_name, attr_type): - try: - attr_type = attr_type.__name__ - except: - pass - if attrs_tab_separated: - return ( - 'pydevd warning: Computing repr of %s.%s (%s) was slow (took %.2fs).\n' - 'Customize report timeout by setting the `PYDEVD_WARN_SLOW_RESOLVE_TIMEOUT` environment variable to a higher timeout (default is: %ss)\n' - ) % ( - attrs_tab_separated.replace('\t', '.'), attr_name, attr_type, diff, PYDEVD_WARN_SLOW_RESOLVE_TIMEOUT) - else: - return ( - 'pydevd warning: Computing repr of %s (%s) was slow (took %.2fs)\n' - 'Customize report timeout by setting the `PYDEVD_WARN_SLOW_RESOLVE_TIMEOUT` environment variable to a higher timeout (default is: %ss)\n' - ) % ( - attr_name, attr_type, diff, PYDEVD_WARN_SLOW_RESOLVE_TIMEOUT) - - def report_if_getting_attr_slow(self, cls, attr_name): - self._report_slow(self._compute_get_attr_slow, cls, attr_name) - - def _compute_get_attr_slow(self, diff, cls, attr_name): - try: - cls = cls.__name__ - except: - pass - return ( - 'pydevd warning: Getting attribute %s.%s was slow (took %.2fs)\n' - 'Customize report timeout by setting the `PYDEVD_WARN_SLOW_RESOLVE_TIMEOUT` environment variable to a higher timeout (default is: %ss)\n' - ) % (cls, attr_name, diff, PYDEVD_WARN_SLOW_RESOLVE_TIMEOUT) - - -def import_attr_from_module(import_with_attr_access): - if '.' not in import_with_attr_access: - # We need at least one '.' (we don't support just the module import, we need the attribute access too). - raise ImportError('Unable to import module with attr access: %s' % (import_with_attr_access,)) - - module_name, attr_name = import_with_attr_access.rsplit('.', 1) - - while True: - try: - mod = import_module(module_name) - except ImportError: - if '.' not in module_name: - raise ImportError('Unable to import module with attr access: %s' % (import_with_attr_access,)) - - module_name, new_attr_part = module_name.rsplit('.', 1) - attr_name = new_attr_part + '.' + attr_name - else: - # Ok, we got the base module, now, get the attribute we need. 
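            # Illustrative: for 'threading.Thread.start', importing 'threading.Thread'
            # fails, the loop retries with module 'threading', and this branch then
            # walks the 'Thread.start' attribute chain.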
- try: - for attr in attr_name.split('.'): - mod = getattr(mod, attr) - return mod - except: - raise ImportError('Unable to import module with attr access: %s' % (import_with_attr_access,)) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/formatters/terminal.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/formatters/terminal.py deleted file mode 100644 index abb8770811f6d763433eaa87cf745ee720f1d7c7..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/formatters/terminal.py +++ /dev/null @@ -1,127 +0,0 @@ -""" - pygments.formatters.terminal - ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - - Formatter for terminal output with ANSI sequences. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pip._vendor.pygments.formatter import Formatter -from pip._vendor.pygments.token import Keyword, Name, Comment, String, Error, \ - Number, Operator, Generic, Token, Whitespace -from pip._vendor.pygments.console import ansiformat -from pip._vendor.pygments.util import get_choice_opt - - -__all__ = ['TerminalFormatter'] - - -#: Map token types to a tuple of color values for light and dark -#: backgrounds. -TERMINAL_COLORS = { - Token: ('', ''), - - Whitespace: ('gray', 'brightblack'), - Comment: ('gray', 'brightblack'), - Comment.Preproc: ('cyan', 'brightcyan'), - Keyword: ('blue', 'brightblue'), - Keyword.Type: ('cyan', 'brightcyan'), - Operator.Word: ('magenta', 'brightmagenta'), - Name.Builtin: ('cyan', 'brightcyan'), - Name.Function: ('green', 'brightgreen'), - Name.Namespace: ('_cyan_', '_brightcyan_'), - Name.Class: ('_green_', '_brightgreen_'), - Name.Exception: ('cyan', 'brightcyan'), - Name.Decorator: ('brightblack', 'gray'), - Name.Variable: ('red', 'brightred'), - Name.Constant: ('red', 'brightred'), - Name.Attribute: ('cyan', 'brightcyan'), - Name.Tag: ('brightblue', 'brightblue'), - String: ('yellow', 'yellow'), - Number: ('blue', 'brightblue'), - - Generic.Deleted: ('brightred', 'brightred'), - Generic.Inserted: ('green', 'brightgreen'), - Generic.Heading: ('**', '**'), - Generic.Subheading: ('*magenta*', '*brightmagenta*'), - Generic.Prompt: ('**', '**'), - Generic.Error: ('brightred', 'brightred'), - - Error: ('_brightred_', '_brightred_'), -} - - -class TerminalFormatter(Formatter): - r""" - Format tokens with ANSI color sequences, for output in a text console. - Color sequences are terminated at newlines, so that paging the output - works correctly. - - The `get_style_defs()` method doesn't do anything special since there is - no support for common styles. - - Options accepted: - - `bg` - Set to ``"light"`` or ``"dark"`` depending on the terminal's background - (default: ``"light"``). - - `colorscheme` - A dictionary mapping token types to (lightbg, darkbg) color names or - ``None`` (default: ``None`` = use builtin colorscheme). - - `linenos` - Set to ``True`` to have line numbers on the terminal output as well - (default: ``False`` = no line numbers). 
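    A rough usage sketch (shown with the standalone ``pygments`` distribution
    rather than the vendored import paths)::

        from pygments import highlight
        from pygments.lexers import PythonLexer
        from pygments.formatters import TerminalFormatter

        code = 'def add(a, b):\n    return a + b\n'
        print(highlight(code, PythonLexer(), TerminalFormatter(bg='dark', linenos=True)))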
- """ - name = 'Terminal' - aliases = ['terminal', 'console'] - filenames = [] - - def __init__(self, **options): - Formatter.__init__(self, **options) - self.darkbg = get_choice_opt(options, 'bg', - ['light', 'dark'], 'light') == 'dark' - self.colorscheme = options.get('colorscheme', None) or TERMINAL_COLORS - self.linenos = options.get('linenos', False) - self._lineno = 0 - - def format(self, tokensource, outfile): - return Formatter.format(self, tokensource, outfile) - - def _write_lineno(self, outfile): - self._lineno += 1 - outfile.write("%s%04d: " % (self._lineno != 1 and '\n' or '', self._lineno)) - - def _get_color(self, ttype): - # self.colorscheme is a dict containing usually generic types, so we - # have to walk the tree of dots. The base Token type must be a key, - # even if it's empty string, as in the default above. - colors = self.colorscheme.get(ttype) - while colors is None: - ttype = ttype.parent - colors = self.colorscheme.get(ttype) - return colors[self.darkbg] - - def format_unencoded(self, tokensource, outfile): - if self.linenos: - self._write_lineno(outfile) - - for ttype, value in tokensource: - color = self._get_color(ttype) - - for line in value.splitlines(True): - if color: - outfile.write(ansiformat(color, line.rstrip('\n'))) - else: - outfile.write(line.rstrip('\n')) - if line.endswith('\n'): - if self.linenos: - self._write_lineno(outfile) - else: - outfile.write('\n') - - if self.linenos: - outfile.write("\n") diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/importlib_resources/__init__.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/importlib_resources/__init__.py deleted file mode 100644 index 34e3a9950cc557879af8d797f9382b18a870fb56..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pkg_resources/_vendor/importlib_resources/__init__.py +++ /dev/null @@ -1,36 +0,0 @@ -"""Read resources contained within a package.""" - -from ._common import ( - as_file, - files, - Package, -) - -from ._legacy import ( - contents, - open_binary, - read_binary, - open_text, - read_text, - is_resource, - path, - Resource, -) - -from .abc import ResourceReader - - -__all__ = [ - 'Package', - 'Resource', - 'ResourceReader', - 'as_file', - 'contents', - 'files', - 'is_resource', - 'open_binary', - 'open_text', - 'path', - 'read_binary', - 'read_text', -] diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/command/build_scripts.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/command/build_scripts.py deleted file mode 100644 index ce222f1e52d3e46840602cb9a8b09574a8b07cd8..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/command/build_scripts.py +++ /dev/null @@ -1,172 +0,0 @@ -"""distutils.command.build_scripts - -Implements the Distutils 'build_scripts' command.""" - -import os -import re -from stat import ST_MODE -from distutils import sysconfig -from ..core import Command -from ..dep_util import newer -from ..util import convert_path -from distutils._log import log -import tokenize - -shebang_pattern = re.compile('^#!.*python[0-9.]*([ \t].*)?$') -""" -Pattern matching a Python interpreter indicated in first line of a script. 
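A few illustrative cases for the pattern defined above::

    >>> bool(shebang_pattern.match('#!/usr/bin/env python3'))
    True
    >>> bool(shebang_pattern.match('#!/usr/bin/python2.7 -u'))
    True
    >>> bool(shebang_pattern.match('#!/bin/sh'))
    False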
-""" - -# for Setuptools compatibility -first_line_re = shebang_pattern - - -class build_scripts(Command): - description = "\"build\" scripts (copy and fixup #! line)" - - user_options = [ - ('build-dir=', 'd', "directory to \"build\" (copy) to"), - ('force', 'f', "forcibly build everything (ignore file timestamps"), - ('executable=', 'e', "specify final destination interpreter path"), - ] - - boolean_options = ['force'] - - def initialize_options(self): - self.build_dir = None - self.scripts = None - self.force = None - self.executable = None - - def finalize_options(self): - self.set_undefined_options( - 'build', - ('build_scripts', 'build_dir'), - ('force', 'force'), - ('executable', 'executable'), - ) - self.scripts = self.distribution.scripts - - def get_source_files(self): - return self.scripts - - def run(self): - if not self.scripts: - return - self.copy_scripts() - - def copy_scripts(self): - """ - Copy each script listed in ``self.scripts``. - - If a script is marked as a Python script (first line matches - 'shebang_pattern', i.e. starts with ``#!`` and contains - "python"), then adjust in the copy the first line to refer to - the current Python interpreter. - """ - self.mkpath(self.build_dir) - outfiles = [] - updated_files = [] - for script in self.scripts: - self._copy_script(script, outfiles, updated_files) - - self._change_modes(outfiles) - - return outfiles, updated_files - - def _copy_script(self, script, outfiles, updated_files): # noqa: C901 - shebang_match = None - script = convert_path(script) - outfile = os.path.join(self.build_dir, os.path.basename(script)) - outfiles.append(outfile) - - if not self.force and not newer(script, outfile): - log.debug("not copying %s (up-to-date)", script) - return - - # Always open the file, but ignore failures in dry-run mode - # in order to attempt to copy directly. - try: - f = tokenize.open(script) - except OSError: - if not self.dry_run: - raise - f = None - else: - first_line = f.readline() - if not first_line: - self.warn("%s is an empty file (skipping)" % script) - return - - shebang_match = shebang_pattern.match(first_line) - - updated_files.append(outfile) - if shebang_match: - log.info("copying and adjusting %s -> %s", script, self.build_dir) - if not self.dry_run: - if not sysconfig.python_build: - executable = self.executable - else: - executable = os.path.join( - sysconfig.get_config_var("BINDIR"), - "python%s%s" - % ( - sysconfig.get_config_var("VERSION"), - sysconfig.get_config_var("EXE"), - ), - ) - post_interp = shebang_match.group(1) or '' - shebang = "#!" + executable + post_interp + "\n" - self._validate_shebang(shebang, f.encoding) - with open(outfile, "w", encoding=f.encoding) as outf: - outf.write(shebang) - outf.writelines(f.readlines()) - if f: - f.close() - else: - if f: - f.close() - self.copy_file(script, outfile) - - def _change_modes(self, outfiles): - if os.name != 'posix': - return - - for file in outfiles: - self._change_mode(file) - - def _change_mode(self, file): - if self.dry_run: - log.info("changing mode of %s", file) - return - - oldmode = os.stat(file)[ST_MODE] & 0o7777 - newmode = (oldmode | 0o555) & 0o7777 - if newmode != oldmode: - log.info("changing mode of %s from %o to %o", file, oldmode, newmode) - os.chmod(file, newmode) - - @staticmethod - def _validate_shebang(shebang, encoding): - # Python parser starts to read a script using UTF-8 until - # it gets a #coding:xxx cookie. The shebang has to be the - # first line of a file, the #coding:xxx cookie cannot be - # written before. 
So the shebang has to be encodable to - # UTF-8. - try: - shebang.encode('utf-8') - except UnicodeEncodeError: - raise ValueError( - "The shebang ({!r}) is not encodable " "to utf-8".format(shebang) - ) - - # If the script is encoded to a custom encoding (use a - # #coding:xxx cookie), the shebang has to be encodable to - # the script encoding too. - try: - shebang.encode(encoding) - except UnicodeEncodeError: - raise ValueError( - "The shebang ({!r}) is not encodable " - "to the script encoding ({})".format(shebang, encoding) - ) diff --git a/spaces/Tasendodificilterumnome/Foiounao/README.md b/spaces/Tasendodificilterumnome/Foiounao/README.md deleted file mode 100644 index 7dd83d461c9819a4390c22c58e2b4f0c76c1b5b0..0000000000000000000000000000000000000000 --- a/spaces/Tasendodificilterumnome/Foiounao/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Foiounao -emoji: 👁 -colorFrom: indigo -colorTo: green -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/TencentARC/Caption-Anything/caption_anything/segmenter/base_segmenter.py b/spaces/TencentARC/Caption-Anything/caption_anything/segmenter/base_segmenter.py deleted file mode 100644 index d7aff5111f35f2b3b5fe959b4a41bcfda1a05556..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/Caption-Anything/caption_anything/segmenter/base_segmenter.py +++ /dev/null @@ -1,184 +0,0 @@ -import time -import torch -import cv2 -from PIL import Image, ImageDraw, ImageOps -import numpy as np -from typing import Union -from segment_anything import sam_model_registry, SamPredictor, SamAutomaticMaskGenerator -from caption_anything.utils.utils import prepare_segmenter, seg_model_map, load_image -import matplotlib.pyplot as plt -import PIL - - -class BaseSegmenter: - def __init__(self, device, checkpoint, model_name='huge', reuse_feature=True, model=None, args=None): - print(f"Initializing BaseSegmenter to {device}") - self.device = device - self.torch_dtype = torch.float16 if 'cuda' in device else torch.float32 - self.processor = None - if model is None: - if checkpoint is None: - _, checkpoint = prepare_segmenter(model_name) - self.model = sam_model_registry[seg_model_map[model_name]](checkpoint=checkpoint) - self.checkpoint = checkpoint - self.model.to(device=self.device) - else: - self.model = model - self.reuse_feature = reuse_feature - self.predictor = SamPredictor(self.model) - - sam_generator_keys = ['pred_iou_thresh', 'min_mask_region_area', 'stability_score_thresh', 'box_nms_thresh'] - generator_args = {k:v for k,v in vars(args).items() if k in sam_generator_keys} - self.mask_generator = SamAutomaticMaskGenerator(model=self.model, **generator_args) - self.image_embedding = None - self.image = None - - @torch.no_grad() - def set_image(self, image: Union[np.ndarray, Image.Image, str]): - image = load_image(image, return_type='numpy') - self.image = image - if self.reuse_feature: - self.predictor.set_image(image) - self.image_embedding = self.predictor.get_image_embedding() - print(self.image_embedding.shape) - - @torch.no_grad() - def inference(self, image: Union[np.ndarray, Image.Image, str], control: dict): - """ - SAM inference of image according to control. - Args: - image: str or PIL.Image or np.ndarray - control: dict to control SAM. - prompt_type: - 1. {control['prompt_type'] = ['everything']} to segment everything in the image. - 2. {control['prompt_type'] = ['click', 'box']} to segment according to click and box. - 3. 
{control['prompt_type'] = ['click'] to segment according to click. - 4. {control['prompt_type'] = ['box'] to segment according to box. - input_point: list of [x, y] coordinates of click. - input_label: List of labels for points accordingly, 0 for negative, 1 for positive. - input_box: List of [x1, y1, x2, y2] coordinates of box. - multimask_output: - If true, the model will return three masks. - For ambiguous input prompts (such as a single click), this will often - produce better masks than a single prediction. If only a single - mask is needed, the model's predicted quality score can be used - to select the best mask. For non-ambiguous prompts, such as multiple - input prompts, multimask_output=False can give better results. - Returns: - masks: np.ndarray of shape [num_masks, height, width] - - """ - image = load_image(image, return_type='numpy') - if 'everything' in control['prompt_type']: - masks = self.mask_generator.generate(image) - new_masks = np.concatenate([mask["segmentation"][np.newaxis, :] for mask in masks]) - bbox = np.array([mask["bbox"] for mask in masks]) - area = np.array([mask["area"] for mask in masks]) - return new_masks, bbox, area - else: - if not self.reuse_feature or self.image_embedding is None: - self.set_image(image) - self.predictor.set_image(self.image) - else: - assert self.image_embedding is not None - self.predictor.features = self.image_embedding - - if 'mutimask_output' in control: - masks, scores, logits = self.predictor.predict( - point_coords=np.array(control['input_point']), - point_labels=np.array(control['input_label']), - multimask_output=True, - ) - elif 'input_boxes' in control: - transformed_boxes = self.predictor.transform.apply_boxes_torch( - torch.tensor(control["input_boxes"], device=self.predictor.device), - image.shape[1::-1] # Reverse shape because numpy is (W, H) and function need (H, W) - ) - masks, _, _ = self.predictor.predict_torch( - point_coords=None, - point_labels=None, - boxes=transformed_boxes, - multimask_output=False, - ) - masks = masks.squeeze(1).cpu().numpy() - - else: - input_point = np.array(control['input_point']) if 'click' in control['prompt_type'] else None - input_label = np.array(control['input_label']) if 'click' in control['prompt_type'] else None - input_box = np.array(control['input_box']) if 'box' in control['prompt_type'] else None - - masks, scores, logits = self.predictor.predict( - point_coords=input_point, - point_labels=input_label, - box=input_box, - multimask_output=False, - ) - - if 0 in control['input_label']: - mask_input = logits[np.argmax(scores), :, :] - masks, scores, logits = self.predictor.predict( - point_coords=input_point, - point_labels=input_label, - box=input_box, - mask_input=mask_input[None, :, :], - multimask_output=False, - ) - - return masks - - -if __name__ == "__main__": - image_path = 'segmenter/images/truck.jpg' - prompts = [ - # { - # "prompt_type":["click"], - # "input_point":[[500, 375]], - # "input_label":[1], - # "multimask_output":"True", - # }, - { - "prompt_type": ["click"], - "input_point": [[1000, 600], [1325, 625]], - "input_label": [1, 0], - }, - # { - # "prompt_type":["click", "box"], - # "input_box":[425, 600, 700, 875], - # "input_point":[[575, 750]], - # "input_label": [0] - # }, - # { - # "prompt_type":["box"], - # "input_boxes": [ - # [75, 275, 1725, 850], - # [425, 600, 700, 875], - # [1375, 550, 1650, 800], - # [1240, 675, 1400, 750], - # ] - # }, - # { - # "prompt_type":["everything"] - # }, - ] - - init_time = time.time() - segmenter = BaseSegmenter( - 
device='cuda', - # checkpoint='sam_vit_h_4b8939.pth', - checkpoint='segmenter/sam_vit_h_4b8939.pth', - model_type='vit_h', - reuse_feature=True - ) - print(f'init time: {time.time() - init_time}') - - image_path = 'test_images/img2.jpg' - infer_time = time.time() - for i, prompt in enumerate(prompts): - print(f'{prompt["prompt_type"]} mode') - image = Image.open(image_path) - segmenter.set_image(np.array(image)) - masks = segmenter.inference(np.array(image), prompt) - Image.fromarray(masks[0]).save('seg.png') - print(masks.shape) - - print(f'infer time: {time.time() - infer_time}') diff --git a/spaces/TencentARC/VLog/models/grit_src/grit/modeling/soft_nms.py b/spaces/TencentARC/VLog/models/grit_src/grit/modeling/soft_nms.py deleted file mode 100644 index 6a5aae7c4261191b8e07e0fd25055d8917f7f97d..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/grit/modeling/soft_nms.py +++ /dev/null @@ -1,177 +0,0 @@ -import torch - -from detectron2.structures import Boxes, RotatedBoxes, pairwise_iou, pairwise_iou_rotated - - -def soft_nms(boxes, scores, method, gaussian_sigma, linear_threshold, prune_threshold): - """ - Performs soft non-maximum suppression algorithm on axis aligned boxes - - Args: - boxes (Tensor[N, 5]): - boxes where NMS will be performed. They - are expected to be in (x_ctr, y_ctr, width, height, angle_degrees) format - scores (Tensor[N]): - scores for each one of the boxes - method (str): - one of ['gaussian', 'linear', 'hard'] - see paper for details. users encouraged not to use "hard", as this is the - same nms available elsewhere in detectron2 - gaussian_sigma (float): - parameter for Gaussian penalty function - linear_threshold (float): - iou threshold for applying linear decay. Nt from the paper - re-used as threshold for standard "hard" nms - prune_threshold (float): - boxes with scores below this threshold are pruned at each iteration. - Dramatically reduces computation time. Authors use values in [10e-4, 10e-2] - - Returns: - tuple(Tensor, Tensor): - [0]: int64 tensor with the indices of the elements that have been kept - by Soft NMS, sorted in decreasing order of scores - [1]: float tensor with the re-scored scores of the elements that were kept -""" - return _soft_nms( - Boxes, - pairwise_iou, - boxes, - scores, - method, - gaussian_sigma, - linear_threshold, - prune_threshold, - ) - - -def batched_soft_nms( - boxes, scores, idxs, method, gaussian_sigma, linear_threshold, prune_threshold -): - """ - Performs soft non-maximum suppression in a batched fashion. - - Each index value correspond to a category, and NMS - will not be applied between elements of different categories. - - Args: - boxes (Tensor[N, 4]): - boxes where NMS will be performed. They - are expected to be in (x1, y1, x2, y2) format - scores (Tensor[N]): - scores for each one of the boxes - idxs (Tensor[N]): - indices of the categories for each one of the boxes. - method (str): - one of ['gaussian', 'linear', 'hard'] - see paper for details. users encouraged not to use "hard", as this is the - same nms available elsewhere in detectron2 - gaussian_sigma (float): - parameter for Gaussian penalty function - linear_threshold (float): - iou threshold for applying linear decay. Nt from the paper - re-used as threshold for standard "hard" nms - prune_threshold (float): - boxes with scores below this threshold are pruned at each iteration. - Dramatically reduces computation time. 
Authors use values in [10e-4, 10e-2] - Returns: - tuple(Tensor, Tensor): - [0]: int64 tensor with the indices of the elements that have been kept - by Soft NMS, sorted in decreasing order of scores - [1]: float tensor with the re-scored scores of the elements that were kept - """ - if boxes.numel() == 0: - return ( - torch.empty((0,), dtype=torch.int64, device=boxes.device), - torch.empty((0,), dtype=torch.float32, device=scores.device), - ) - # strategy: in order to perform NMS independently per class. - # we add an offset to all the boxes. The offset is dependent - # only on the class idx, and is large enough so that boxes - # from different classes do not overlap - max_coordinate = boxes.max() - offsets = idxs.to(boxes) * (max_coordinate + 1) - boxes_for_nms = boxes + offsets[:, None] - return soft_nms( - boxes_for_nms, scores, method, gaussian_sigma, linear_threshold, prune_threshold - ) - - -def _soft_nms( - box_class, - pairwise_iou_func, - boxes, - scores, - method, - gaussian_sigma, - linear_threshold, - prune_threshold, -): - """ - Soft non-max suppression algorithm. - - Implementation of [Soft-NMS -- Improving Object Detection With One Line of Codec] - (https://arxiv.org/abs/1704.04503) - - Args: - box_class (cls): one of Box, RotatedBoxes - pairwise_iou_func (func): one of pairwise_iou, pairwise_iou_rotated - boxes (Tensor[N, ?]): - boxes where NMS will be performed - if Boxes, in (x1, y1, x2, y2) format - if RotatedBoxes, in (x_ctr, y_ctr, width, height, angle_degrees) format - scores (Tensor[N]): - scores for each one of the boxes - method (str): - one of ['gaussian', 'linear', 'hard'] - see paper for details. users encouraged not to use "hard", as this is the - same nms available elsewhere in detectron2 - gaussian_sigma (float): - parameter for Gaussian penalty function - linear_threshold (float): - iou threshold for applying linear decay. Nt from the paper - re-used as threshold for standard "hard" nms - prune_threshold (float): - boxes with scores below this threshold are pruned at each iteration. - Dramatically reduces computation time. 
Authors use values in [10e-4, 10e-2] - - Returns: - tuple(Tensor, Tensor): - [0]: int64 tensor with the indices of the elements that have been kept - by Soft NMS, sorted in decreasing order of scores - [1]: float tensor with the re-scored scores of the elements that were kept - """ - boxes = boxes.clone() - scores = scores.clone() - idxs = torch.arange(scores.size()[0]) - - idxs_out = [] - scores_out = [] - - while scores.numel() > 0: - top_idx = torch.argmax(scores) - idxs_out.append(idxs[top_idx].item()) - scores_out.append(scores[top_idx].item()) - - top_box = boxes[top_idx] - ious = pairwise_iou_func(box_class(top_box.unsqueeze(0)), box_class(boxes))[0] - - if method == "linear": - decay = torch.ones_like(ious) - decay_mask = ious > linear_threshold - decay[decay_mask] = 1 - ious[decay_mask] - elif method == "gaussian": - decay = torch.exp(-torch.pow(ious, 2) / gaussian_sigma) - elif method == "hard": # standard NMS - decay = (ious < linear_threshold).float() - else: - raise NotImplementedError("{} soft nms method not implemented.".format(method)) - - scores *= decay - keep = scores > prune_threshold - keep[top_idx] = False - - boxes = boxes[keep] - scores = scores[keep] - idxs = idxs[keep] - - return torch.tensor(idxs_out).to(boxes.device), torch.tensor(scores_out).to(scores.device) \ No newline at end of file diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/config/test_yacs_config.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/config/test_yacs_config.py deleted file mode 100644 index 01dd6955f78e2700ffc10ed723ab1c95df0e5a18..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/config/test_yacs_config.py +++ /dev/null @@ -1,270 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. 
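As a quick, self-contained sketch of the three decay schemes implemented in the soft_nms.py module above (the IoU, score, and parameter values below are hypothetical and chosen only for illustration, not taken from the original repository):

import torch

# One already-kept box overlaps a remaining candidate (score 0.9) with IoU 0.6.
iou = torch.tensor([0.6])
score = torch.tensor([0.9])
linear_threshold, gaussian_sigma = 0.5, 0.5

# "linear": decay to (1 - IoU) once the IoU exceeds the threshold, otherwise keep the score.
linear_decay = torch.where(iou > linear_threshold, 1.0 - iou, torch.ones_like(iou))
# "gaussian": smooth penalty exp(-IoU^2 / sigma) applied to every overlapping box.
gaussian_decay = torch.exp(-iou.pow(2) / gaussian_sigma)
# "hard": classic NMS, the score survives only while the IoU stays below the threshold.
hard_decay = (iou < linear_threshold).float()

print(score * linear_decay)    # tensor([0.3600])
print(score * gaussian_decay)  # roughly tensor([0.4381])
print(score * hard_decay)      # tensor([0.])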
- - -import os -import tempfile -import unittest -import torch -from omegaconf import OmegaConf - -from detectron2 import model_zoo -from detectron2.config import configurable, downgrade_config, get_cfg, upgrade_config -from detectron2.layers import ShapeSpec -from detectron2.modeling import build_model - -_V0_CFG = """ -MODEL: - RPN_HEAD: - NAME: "TEST" -VERSION: 0 -""" - -_V1_CFG = """ -MODEL: - WEIGHT: "/path/to/weight" -""" - - -class TestConfigVersioning(unittest.TestCase): - def test_upgrade_downgrade_consistency(self): - cfg = get_cfg() - # check that custom is preserved - cfg.USER_CUSTOM = 1 - - down = downgrade_config(cfg, to_version=0) - up = upgrade_config(down) - self.assertTrue(up == cfg) - - def _merge_cfg_str(self, cfg, merge_str): - f = tempfile.NamedTemporaryFile(mode="w", suffix=".yaml", delete=False) - try: - f.write(merge_str) - f.close() - cfg.merge_from_file(f.name) - finally: - os.remove(f.name) - return cfg - - def test_auto_upgrade(self): - cfg = get_cfg() - latest_ver = cfg.VERSION - cfg.USER_CUSTOM = 1 - - self._merge_cfg_str(cfg, _V0_CFG) - - self.assertEqual(cfg.MODEL.RPN.HEAD_NAME, "TEST") - self.assertEqual(cfg.VERSION, latest_ver) - - def test_guess_v1(self): - cfg = get_cfg() - latest_ver = cfg.VERSION - self._merge_cfg_str(cfg, _V1_CFG) - self.assertEqual(cfg.VERSION, latest_ver) - - -class _TestClassA(torch.nn.Module): - @configurable - def __init__(self, arg1, arg2, arg3=3): - super().__init__() - self.arg1 = arg1 - self.arg2 = arg2 - self.arg3 = arg3 - assert arg1 == 1 - assert arg2 == 2 - assert arg3 == 3 - - @classmethod - def from_config(cls, cfg): - args = {"arg1": cfg.ARG1, "arg2": cfg.ARG2} - return args - - -class _TestClassB(_TestClassA): - @configurable - def __init__(self, input_shape, arg1, arg2, arg3=3): - """ - Doc of _TestClassB - """ - assert input_shape == "shape" - super().__init__(arg1, arg2, arg3) - - @classmethod - def from_config(cls, cfg, input_shape): # test extra positional arg in from_config - args = {"arg1": cfg.ARG1, "arg2": cfg.ARG2} - args["input_shape"] = input_shape - return args - - -class _LegacySubClass(_TestClassB): - # an old subclass written in cfg style - def __init__(self, cfg, input_shape, arg4=4): - super().__init__(cfg, input_shape) - assert self.arg1 == 1 - assert self.arg2 == 2 - assert self.arg3 == 3 - - -class _NewSubClassNewInit(_TestClassB): - # test new subclass with a new __init__ - @configurable - def __init__(self, input_shape, arg4=4, **kwargs): - super().__init__(input_shape, **kwargs) - assert self.arg1 == 1 - assert self.arg2 == 2 - assert self.arg3 == 3 - - -class _LegacySubClassNotCfg(_TestClassB): - # an old subclass written in cfg style, but argument is not called "cfg" - def __init__(self, config, input_shape): - super().__init__(config, input_shape) - assert self.arg1 == 1 - assert self.arg2 == 2 - assert self.arg3 == 3 - - -class _TestClassC(_TestClassB): - @classmethod - def from_config(cls, cfg, input_shape, **kwargs): # test extra kwarg overwrite - args = {"arg1": cfg.ARG1, "arg2": cfg.ARG2} - args["input_shape"] = input_shape - args.update(kwargs) - return args - - -class _TestClassD(_TestClassA): - @configurable - def __init__(self, input_shape: ShapeSpec, arg1: int, arg2, arg3=3): - assert input_shape == "shape" - super().__init__(arg1, arg2, arg3) - - # _TestClassA.from_config does not have input_shape args. 
- # Test whether input_shape will be forwarded to __init__ - - -@configurable(from_config=lambda cfg, arg2: {"arg1": cfg.ARG1, "arg2": arg2, "arg3": cfg.ARG3}) -def _test_func(arg1, arg2=2, arg3=3, arg4=4): - return arg1, arg2, arg3, arg4 - - -class TestConfigurable(unittest.TestCase): - def testInitWithArgs(self): - _ = _TestClassA(arg1=1, arg2=2, arg3=3) - _ = _TestClassB("shape", arg1=1, arg2=2) - _ = _TestClassC("shape", arg1=1, arg2=2) - _ = _TestClassD("shape", arg1=1, arg2=2, arg3=3) - - def testPatchedAttr(self): - self.assertTrue("Doc" in _TestClassB.__init__.__doc__) - self.assertEqual(_TestClassD.__init__.__annotations__["arg1"], int) - - def testInitWithCfg(self): - cfg = get_cfg() - cfg.ARG1 = 1 - cfg.ARG2 = 2 - cfg.ARG3 = 3 - _ = _TestClassA(cfg) - _ = _TestClassB(cfg, input_shape="shape") - _ = _TestClassC(cfg, input_shape="shape") - _ = _TestClassD(cfg, input_shape="shape") - _ = _LegacySubClass(cfg, input_shape="shape") - _ = _NewSubClassNewInit(cfg, input_shape="shape") - _ = _LegacySubClassNotCfg(cfg, input_shape="shape") - with self.assertRaises(TypeError): - # disallow forwarding positional args to __init__ since it's prone to errors - _ = _TestClassD(cfg, "shape") - - # call with kwargs instead - _ = _TestClassA(cfg=cfg) - _ = _TestClassB(cfg=cfg, input_shape="shape") - _ = _TestClassC(cfg=cfg, input_shape="shape") - _ = _TestClassD(cfg=cfg, input_shape="shape") - _ = _LegacySubClass(cfg=cfg, input_shape="shape") - _ = _NewSubClassNewInit(cfg=cfg, input_shape="shape") - _ = _LegacySubClassNotCfg(config=cfg, input_shape="shape") - - def testInitWithCfgOverwrite(self): - cfg = get_cfg() - cfg.ARG1 = 1 - cfg.ARG2 = 999 # wrong config - with self.assertRaises(AssertionError): - _ = _TestClassA(cfg, arg3=3) - - # overwrite arg2 with correct config later: - _ = _TestClassA(cfg, arg2=2, arg3=3) - _ = _TestClassB(cfg, input_shape="shape", arg2=2, arg3=3) - _ = _TestClassC(cfg, input_shape="shape", arg2=2, arg3=3) - _ = _TestClassD(cfg, input_shape="shape", arg2=2, arg3=3) - - # call with kwargs cfg=cfg instead - _ = _TestClassA(cfg=cfg, arg2=2, arg3=3) - _ = _TestClassB(cfg=cfg, input_shape="shape", arg2=2, arg3=3) - _ = _TestClassC(cfg=cfg, input_shape="shape", arg2=2, arg3=3) - _ = _TestClassD(cfg=cfg, input_shape="shape", arg2=2, arg3=3) - - def testInitWithCfgWrongArgs(self): - cfg = get_cfg() - cfg.ARG1 = 1 - cfg.ARG2 = 2 - with self.assertRaises(TypeError): - _ = _TestClassB(cfg, "shape", not_exist=1) - with self.assertRaises(TypeError): - _ = _TestClassC(cfg, "shape", not_exist=1) - with self.assertRaises(TypeError): - _ = _TestClassD(cfg, "shape", not_exist=1) - - def testBadClass(self): - class _BadClass1: - @configurable - def __init__(self, a=1, b=2): - pass - - class _BadClass2: - @configurable - def __init__(self, a=1, b=2): - pass - - def from_config(self, cfg): # noqa - pass - - class _BadClass3: - @configurable - def __init__(self, a=1, b=2): - pass - - # bad name: must be cfg - @classmethod - def from_config(cls, config): # noqa - pass - - with self.assertRaises(AttributeError): - _ = _BadClass1(a=1) - - with self.assertRaises(TypeError): - _ = _BadClass2(a=1) - - with self.assertRaises(TypeError): - _ = _BadClass3(get_cfg()) - - def testFuncWithCfg(self): - cfg = get_cfg() - cfg.ARG1 = 10 - cfg.ARG3 = 30 - - self.assertEqual(_test_func(1), (1, 2, 3, 4)) - with self.assertRaises(TypeError): - _test_func(cfg) - self.assertEqual(_test_func(cfg, arg2=2), (10, 2, 30, 4)) - self.assertEqual(_test_func(cfg, arg1=100, arg2=20), (100, 20, 30, 4)) - 
self.assertEqual(_test_func(cfg, arg1=100, arg2=20, arg4=40), (100, 20, 30, 40)) - - self.assertTrue(callable(_test_func.from_config)) - - def testOmegaConf(self): - cfg = model_zoo.get_config("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml") - cfg = OmegaConf.create(cfg.dump()) - if not torch.cuda.is_available(): - cfg.MODEL.DEVICE = "cpu" - # test that a model can be built with omegaconf config as well - build_model(cfg) diff --git a/spaces/Theopan/VoiceFixer/app.py b/spaces/Theopan/VoiceFixer/app.py deleted file mode 100644 index 9aeebd5a3134fb40a9b7de333d65aaaf84118f0e..0000000000000000000000000000000000000000 --- a/spaces/Theopan/VoiceFixer/app.py +++ /dev/null @@ -1,24 +0,0 @@ -import os -os.system('pip install gradio==2.3.0a0') -os.system('pip install voicefixer --upgrade') -from voicefixer import VoiceFixer -import gradio as gr -voicefixer = VoiceFixer() -def inference(audio,mode): - voicefixer.restore(input=audio.name, # input wav file path - output="output.wav", # output wav file path - cuda=False, # whether to use gpu acceleration - mode = int(mode)) # You can try out mode 0, 1 to find out the best result - return 'output.wav' - -inputs = [gr.inputs.Audio(type="file", label="Input Audio"),gr.inputs.Radio(choices=['0','1','2'], type="value", default='0', label='mode')] -outputs = gr.outputs.Audio(type="file",label="Output Audio") - - -title = "Voice Fixer" -description = "Gradio demo for VoiceFixer: Toward General Speech Restoration With Neural Vocoder. To use it, simply add your audio, or click one of the examples to load them. Read more at the links below." -article = "

    VoiceFixer: Toward General Speech Restoration With Neural Vocoder | Github Repo

    " - -examples=[['bruce.wav','2']] - -gr.Interface(inference, inputs, outputs, title=title, description=description, article=article, examples=examples, enable_queue=True).launch() \ No newline at end of file diff --git a/spaces/ThirdEyeData/Network_Data_Anomaly/app.py b/spaces/ThirdEyeData/Network_Data_Anomaly/app.py deleted file mode 100644 index 7dc94bd2ae3af519f12cfdde1eca84190256e083..0000000000000000000000000000000000000000 --- a/spaces/ThirdEyeData/Network_Data_Anomaly/app.py +++ /dev/null @@ -1,42 +0,0 @@ -import streamlit as st -import numpy as np -import pandas as pd -import pickle -from PIL import Image - -image = Image.open('pic2.jpg') -st.image(image,caption = 'Network Data Anomaly',width =1000) - -st.title("Network Data Anomaly") -st.write("""An anomaly (also known as an outlier) is when something happens that is outside of the norm, -when it stands out or deviates from what is expected. There are different kinds of anomalies in an e-commerce setting, -they can be product anomaly, conversion anomaly or marketing anomaly. -The model used is Isolation Forest, which is built based on decision trees and is an unsupervised model. -Isolation forests can be used to detect anomaly in high dimensional and large datasets, with no labels. -""") - -with open("./median.pickle", 'rb') as f: - MED = pickle.load(f) -with open("./mad.pickle", 'rb') as g: - MA = pickle.load(g) - -def ZRscore_outlier(packet,med,ma): - z = (0.6745*(packet-med))/ (np.median(ma)) - if np.abs(z) > 3: - return "Outlier" - else: - return "Not an Outlier" - -packet = st.number_input("Packet Number",step=1) -st.header(ZRscore_outlier(packet,MED,MA)) - -st.write(""" -For a detailed description please look through our Documentation -""") - -url = 'https://huggingface.co/spaces/ThirdEyeData/Network_Data_Anomaly/blob/main/README.md' - -st.markdown(f''' - -''', -unsafe_allow_html=True) \ No newline at end of file diff --git a/spaces/ThomasSimonini/Compare-Reinforcement-Learning-Agents/app.py b/spaces/ThomasSimonini/Compare-Reinforcement-Learning-Agents/app.py deleted file mode 100644 index 4126d285265d4a2606e64a920ad92a651faae375..0000000000000000000000000000000000000000 --- a/spaces/ThomasSimonini/Compare-Reinforcement-Learning-Agents/app.py +++ /dev/null @@ -1,89 +0,0 @@ -import gradio as gr -import requests.exceptions -from huggingface_hub import HfApi, hf_hub_download -from huggingface_hub.repocard import metadata_load - -app = gr.Blocks() - -def load_agent(model_id_1, model_id_2): - """ - This function load the agent's video and results - :return: video_path - """ - # Load the metrics - metadata_1 = get_metadata(model_id_1) - - # Get the accuracy - results_1 = parse_metrics_accuracy(metadata_1) - - # Load the video - video_path_1 = hf_hub_download(model_id_1, filename="replay.mp4") - - # Load the metrics - metadata_2 = get_metadata(model_id_2) - - # Get the accuracy - results_2 = parse_metrics_accuracy(metadata_2) - - # Load the video - video_path_2 = hf_hub_download(model_id_2, filename="replay.mp4") - - return model_id_1, video_path_1, results_1, model_id_2, video_path_2, results_2 - -def parse_metrics_accuracy(meta): - if "model-index" not in meta: - return None - result = meta["model-index"][0]["results"] - metrics = result[0]["metrics"] - accuracy = metrics[0]["value"] - return accuracy - -def get_metadata(model_id): - """ - Get the metadata of the model repo - :param model_id: - :return: metadata - """ - try: - readme_path = hf_hub_download(model_id, filename="README.md") - metadata = 
metadata_load(readme_path) - print(metadata) - return metadata - except requests.exceptions.HTTPError: - return None - - - - -with app: - gr.Markdown( - """ - # Compare Deep Reinforcement Learning Agents 🤖 - - Type two models id you want to compare or check examples below. - """) - with gr.Row(): - model1_input = gr.Textbox(label="Model 1") - model2_input = gr.Textbox(label="Model 2") - with gr.Row(): - app_button = gr.Button("Compare models") - with gr.Row(): - with gr.Column(): - model1_name = gr.Markdown() - model1_video_output = gr.Video() - model1_score_output = gr.Textbox(label="Mean Reward +/- Std Reward") - with gr.Column(): - model2_name = gr.Markdown() - model2_video_output = gr.Video() - model2_score_output = gr.Textbox(label="Mean Reward +/- Std Reward") - - app_button.click(load_agent, inputs=[model1_input, model2_input], outputs=[model1_name, model1_video_output, model1_score_output, model2_name, model2_video_output, model2_score_output]) - - examples = gr.Examples(examples=[["sb3/a2c-AntBulletEnv-v0","sb3/ppo-AntBulletEnv-v0"], - ["ThomasSimonini/a2c-AntBulletEnv-v0", "sb3/a2c-AntBulletEnv-v0"], - ["sb3/dqn-SpaceInvadersNoFrameskip-v4", "sb3/a2c-SpaceInvadersNoFrameskip-v4"], - ["ThomasSimonini/ppo-QbertNoFrameskip-v4","sb3/ppo-QbertNoFrameskip-v4"]], - inputs=[model1_input, model2_input]) - - -app.launch() \ No newline at end of file diff --git a/spaces/VickyKira/NASAGPT/g4f/README.md b/spaces/VickyKira/NASAGPT/g4f/README.md deleted file mode 100644 index c2cbfd69dc169e2cb4f8d24104fb12a52b91688d..0000000000000000000000000000000000000000 --- a/spaces/VickyKira/NASAGPT/g4f/README.md +++ /dev/null @@ -1,5 +0,0 @@ -## 🚀 API G4F - -This API is built upon the [gpt4free](https://github.com/xtekky/gpt4free) project. - - diff --git a/spaces/Woocy/541GPT/chatgpt - windows.bat b/spaces/Woocy/541GPT/chatgpt - windows.bat deleted file mode 100644 index 0b78fdc3a559abd692e3a9e9af5e482124d13a99..0000000000000000000000000000000000000000 --- a/spaces/Woocy/541GPT/chatgpt - windows.bat +++ /dev/null @@ -1,14 +0,0 @@ -@echo off -echo Opening ChuanhuChatGPT... - -REM Open powershell via bat -start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py" - -REM The web page can be accessed with delayed start http://127.0.0.1:7860/ -ping -n 5 127.0.0.1>nul - -REM access chargpt via your default browser -start "" "http://127.0.0.1:7860/" - - -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). \ No newline at end of file diff --git a/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/quantization/core_vq.py b/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/quantization/core_vq.py deleted file mode 100644 index e1896bb1788a945a1f7be6369abb255ecf72c7a0..0000000000000000000000000000000000000000 --- a/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/quantization/core_vq.py +++ /dev/null @@ -1,400 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
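A side note on the Compare-Reinforcement-Learning-Agents app above: its parse_metrics_accuracy helper assumes the model card metadata returned by metadata_load carries a "model-index" entry. A minimal sketch of that assumed layout, with an invented repo name and reward string purely for illustration:

# Hypothetical model-card metadata shaped the way parse_metrics_accuracy() reads it;
# the model name and reward values are made up for this example.
metadata = {
    "model-index": [
        {
            "name": "ppo-AntBulletEnv-v0",
            "results": [
                {
                    "metrics": [
                        {"type": "mean_reward", "value": "2500.00 +/- 30.00"}
                    ]
                }
            ],
        }
    ]
}

result = metadata["model-index"][0]["results"]
metrics = result[0]["metrics"]
print(metrics[0]["value"])  # "2500.00 +/- 30.00"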
- -import typing as tp - -from einops import rearrange, repeat -import flashy -import torch -from torch import nn, einsum -import torch.nn.functional as F - - -def exists(val: tp.Optional[tp.Any]) -> bool: - return val is not None - - -def default(val: tp.Any, d: tp.Any) -> tp.Any: - return val if exists(val) else d - - -def l2norm(t): - return F.normalize(t, p=2, dim=-1) - - -def ema_inplace(moving_avg, new, decay: float): - moving_avg.data.mul_(decay).add_(new, alpha=(1 - decay)) - - -def laplace_smoothing(x, n_categories: int, epsilon: float = 1e-5): - return (x + epsilon) / (x.sum() + n_categories * epsilon) - - -def uniform_init(*shape: int): - t = torch.empty(shape) - nn.init.kaiming_uniform_(t) - return t - - -def sample_vectors(samples, num: int): - num_samples, device = samples.shape[0], samples.device - - if num_samples >= num: - indices = torch.randperm(num_samples, device=device)[:num] - else: - indices = torch.randint(0, num_samples, (num,), device=device) - - return samples[indices] - - -def kmeans(samples, num_clusters: int, num_iters: int = 10): - dim, dtype = samples.shape[-1], samples.dtype - - means = sample_vectors(samples, num_clusters) - - for _ in range(num_iters): - diffs = rearrange(samples, "n d -> n () d") - rearrange( - means, "c d -> () c d" - ) - dists = -(diffs ** 2).sum(dim=-1) - - buckets = dists.max(dim=-1).indices - bins = torch.bincount(buckets, minlength=num_clusters) - zero_mask = bins == 0 - bins_min_clamped = bins.masked_fill(zero_mask, 1) - - new_means = buckets.new_zeros(num_clusters, dim, dtype=dtype) - new_means.scatter_add_(0, repeat(buckets, "n -> n d", d=dim), samples) - new_means = new_means / bins_min_clamped[..., None] - - means = torch.where(zero_mask[..., None], means, new_means) - - return means, bins - - -def orthgonal_loss_fn(t): - # eq (2) from https://arxiv.org/abs/2112.00384 - n = t.shape[0] - normed_codes = l2norm(t) - identity = torch.eye(n, device=t.device) - cosine_sim = einsum("i d, j d -> i j", normed_codes, normed_codes) - return ((cosine_sim - identity) ** 2).sum() / (n ** 2) - - -class EuclideanCodebook(nn.Module): - """Codebook with Euclidean distance. - - Args: - dim (int): Dimension. - codebook_size (int): Codebook size. - kmeans_init (bool): Whether to use k-means to initialize the codebooks. - If set to true, run the k-means algorithm on the first training batch and use - the learned centroids as initialization. - kmeans_iters (int): Number of iterations used for k-means algorithm at initialization. - decay (float): Decay for exponential moving average over the codebooks. - epsilon (float): Epsilon value for numerical stability. - threshold_ema_dead_code (int): Threshold for dead code expiration. Replace any codes - that have an exponential moving average cluster size less than the specified threshold with - randomly selected vector from the current batch. 
- """ - def __init__( - self, - dim: int, - codebook_size: int, - kmeans_init: int = False, - kmeans_iters: int = 10, - decay: float = 0.8, - epsilon: float = 1e-5, - threshold_ema_dead_code: int = 2, - ): - super().__init__() - self.decay = decay - init_fn: tp.Union[tp.Callable[..., torch.Tensor], tp.Any] = uniform_init if not kmeans_init else torch.zeros - embed = init_fn(codebook_size, dim) - - self.codebook_size = codebook_size - - self.kmeans_iters = kmeans_iters - self.epsilon = epsilon - self.threshold_ema_dead_code = threshold_ema_dead_code - - self.register_buffer("inited", torch.Tensor([not kmeans_init])) - self.register_buffer("cluster_size", torch.zeros(codebook_size)) - self.register_buffer("embed", embed) - self.register_buffer("embed_avg", embed.clone()) - - @torch.jit.ignore - def init_embed_(self, data): - if self.inited: - return - - embed, cluster_size = kmeans(data, self.codebook_size, self.kmeans_iters) - self.embed.data.copy_(embed) - self.embed_avg.data.copy_(embed.clone()) - self.cluster_size.data.copy_(cluster_size) - self.inited.data.copy_(torch.Tensor([True])) - # Make sure all buffers across workers are in sync after initialization - flashy.distrib.broadcast_tensors(self.buffers()) - - def replace_(self, samples, mask): - modified_codebook = torch.where( - mask[..., None], sample_vectors(samples, self.codebook_size), self.embed - ) - self.embed.data.copy_(modified_codebook) - - def expire_codes_(self, batch_samples): - if self.threshold_ema_dead_code == 0: - return - - expired_codes = self.cluster_size < self.threshold_ema_dead_code - if not torch.any(expired_codes): - return - - batch_samples = rearrange(batch_samples, "... d -> (...) d") - self.replace_(batch_samples, mask=expired_codes) - flashy.distrib.broadcast_tensors(self.buffers()) - - def preprocess(self, x): - x = rearrange(x, "... d -> (...) d") - return x - - def quantize(self, x): - embed = self.embed.t() - dist = -( - x.pow(2).sum(1, keepdim=True) - - 2 * x @ embed - + embed.pow(2).sum(0, keepdim=True) - ) - embed_ind = dist.max(dim=-1).indices - return embed_ind - - def postprocess_emb(self, embed_ind, shape): - return embed_ind.view(*shape[:-1]) - - def dequantize(self, embed_ind): - quantize = F.embedding(embed_ind, self.embed) - return quantize - - def encode(self, x): - shape = x.shape - # pre-process - x = self.preprocess(x) - # quantize - embed_ind = self.quantize(x) - # post-process - embed_ind = self.postprocess_emb(embed_ind, shape) - return embed_ind - - def decode(self, embed_ind): - quantize = self.dequantize(embed_ind) - return quantize - - def forward(self, x): - shape, dtype = x.shape, x.dtype - x = self.preprocess(x) - self.init_embed_(x) - - embed_ind = self.quantize(x) - embed_onehot = F.one_hot(embed_ind, self.codebook_size).type(dtype) - embed_ind = self.postprocess_emb(embed_ind, shape) - quantize = self.dequantize(embed_ind) - - if self.training: - # We do the expiry of code at that point as buffers are in sync - # and all the workers will take the same decision. 
- self.expire_codes_(x) - ema_inplace(self.cluster_size, embed_onehot.sum(0), self.decay) - embed_sum = x.t() @ embed_onehot - ema_inplace(self.embed_avg, embed_sum.t(), self.decay) - cluster_size = ( - laplace_smoothing(self.cluster_size, self.codebook_size, self.epsilon) - * self.cluster_size.sum() - ) - embed_normalized = self.embed_avg / cluster_size.unsqueeze(1) - self.embed.data.copy_(embed_normalized) - - return quantize, embed_ind - - -class VectorQuantization(nn.Module): - """Vector quantization implementation. - Currently supports only euclidean distance. - - Args: - dim (int): Dimension - codebook_size (int): Codebook size - codebook_dim (int): Codebook dimension. If not defined, uses the specified dimension in dim. - decay (float): Decay for exponential moving average over the codebooks. - epsilon (float): Epsilon value for numerical stability. - kmeans_init (bool): Whether to use kmeans to initialize the codebooks. - kmeans_iters (int): Number of iterations used for kmeans initialization. - threshold_ema_dead_code (int): - channels_last (bool): Channels are the last dimension in the input tensors. - commitment_weight (float): Weight for commitment loss. - orthogonal_reg_weight (float): Orthogonal regularization weights. - orthogonal_reg_active_codes_only (bool): Apply orthogonal regularization only on active codes. - orthogonal_reg_max_codes (optional int): Maximum number of codes to consider - for orthogonal regulariation. - threshold_ema_dead_code (int): Threshold for dead code expiration. Replace any codes - that have an exponential moving average cluster size less than the specified threshold with - randomly selected vector from the current batch. - """ - def __init__( - self, - dim: int, - codebook_size: int, - codebook_dim: tp.Optional[int] = None, - decay: float = 0.8, - epsilon: float = 1e-5, - kmeans_init: bool = False, - kmeans_iters: int = 10, - threshold_ema_dead_code: int = 2, - channels_last: bool = False, - commitment_weight: float = 1., - orthogonal_reg_weight: float = 0.0, - orthogonal_reg_active_codes_only: bool = False, - orthogonal_reg_max_codes: tp.Optional[int] = None, - ): - super().__init__() - _codebook_dim: int = default(codebook_dim, dim) - - requires_projection = _codebook_dim != dim - self.project_in = (nn.Linear(dim, _codebook_dim) if requires_projection else nn.Identity()) - self.project_out = (nn.Linear(_codebook_dim, dim) if requires_projection else nn.Identity()) - - self.epsilon = epsilon - self.commitment_weight = commitment_weight - - self.orthogonal_reg_weight = orthogonal_reg_weight - self.orthogonal_reg_active_codes_only = orthogonal_reg_active_codes_only - self.orthogonal_reg_max_codes = orthogonal_reg_max_codes - - self._codebook = EuclideanCodebook(dim=_codebook_dim, codebook_size=codebook_size, - kmeans_init=kmeans_init, kmeans_iters=kmeans_iters, - decay=decay, epsilon=epsilon, - threshold_ema_dead_code=threshold_ema_dead_code) - self.codebook_size = codebook_size - - self.channels_last = channels_last - - @property - def codebook(self): - return self._codebook.embed - - @property - def inited(self): - return self._codebook.inited - - def _preprocess(self, x): - if not self.channels_last: - x = rearrange(x, "b d n -> b n d") - return x - - def _postprocess(self, quantize): - if not self.channels_last: - quantize = rearrange(quantize, "b n d -> b d n") - return quantize - - def encode(self, x): - x = self._preprocess(x) - x = self.project_in(x) - embed_in = self._codebook.encode(x) - return embed_in - - def decode(self, embed_ind): - 
quantize = self._codebook.decode(embed_ind) - quantize = self.project_out(quantize) - quantize = self._postprocess(quantize) - return quantize - - def forward(self, x): - device = x.device - x = self._preprocess(x) - - x = self.project_in(x) - quantize, embed_ind = self._codebook(x) - - if self.training: - quantize = x + (quantize - x).detach() - - loss = torch.tensor([0.0], device=device, requires_grad=self.training) - - if self.training: - if self.commitment_weight > 0: - commit_loss = F.mse_loss(quantize.detach(), x) - loss = loss + commit_loss * self.commitment_weight - - if self.orthogonal_reg_weight > 0: - codebook = self.codebook - - if self.orthogonal_reg_active_codes_only: - # only calculate orthogonal loss for the activated codes for this batch - unique_code_ids = torch.unique(embed_ind) - codebook = codebook[unique_code_ids] - - num_codes = codebook.shape[0] - if exists(self.orthogonal_reg_max_codes) and num_codes > self.orthogonal_reg_max_codes: - rand_ids = torch.randperm(num_codes, device=device)[:self.orthogonal_reg_max_codes] - codebook = codebook[rand_ids] - - orthogonal_reg_loss = orthgonal_loss_fn(codebook) - loss = loss + orthogonal_reg_loss * self.orthogonal_reg_weight - - quantize = self.project_out(quantize) - quantize = self._postprocess(quantize) - - return quantize, embed_ind, loss - - -class ResidualVectorQuantization(nn.Module): - """Residual vector quantization implementation. - - Follows Algorithm 1. in https://arxiv.org/pdf/2107.03312.pdf - """ - def __init__(self, *, num_quantizers, **kwargs): - super().__init__() - self.layers = nn.ModuleList( - [VectorQuantization(**kwargs) for _ in range(num_quantizers)] - ) - - def forward(self, x, n_q: tp.Optional[int] = None): - quantized_out = 0.0 - residual = x - - all_losses = [] - all_indices = [] - - n_q = n_q or len(self.layers) - - for i, layer in enumerate(self.layers[:n_q]): - quantized, indices, loss = layer(residual) - residual = residual - quantized - quantized_out = quantized_out + quantized - all_indices.append(indices) - all_losses.append(loss) - - out_losses, out_indices = map(torch.stack, (all_losses, all_indices)) - return quantized_out, out_indices, out_losses - - def encode(self, x: torch.Tensor, n_q: tp.Optional[int] = None) -> torch.Tensor: - residual = x - all_indices = [] - n_q = n_q or len(self.layers) - for layer in self.layers[:n_q]: - indices = layer.encode(residual) - quantized = layer.decode(indices) - residual = residual - quantized - all_indices.append(indices) - out_indices = torch.stack(all_indices) - return out_indices - - def decode(self, q_indices: torch.Tensor) -> torch.Tensor: - quantized_out = torch.tensor(0.0, device=q_indices.device) - for i, indices in enumerate(q_indices): - layer = self.layers[i] - quantized = layer.decode(indices) - quantized_out = quantized_out + quantized - return quantized_out diff --git a/spaces/XzJosh/Azuma-Bert-VITS2/models.py b/spaces/XzJosh/Azuma-Bert-VITS2/models.py deleted file mode 100644 index d4afe44d883691610c5903e602a3ca245fcb3a5c..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Azuma-Bert-VITS2/models.py +++ /dev/null @@ -1,707 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm - -from commons import init_weights, get_padding -from text import symbols, 
num_tones, num_languages -class DurationDiscriminator(nn.Module): #vits2 - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.dur_proj = nn.Conv1d(1, filter_channels, 1) - - self.pre_out_conv_1 = nn.Conv1d(2*filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.pre_out_norm_1 = modules.LayerNorm(filter_channels) - self.pre_out_conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.pre_out_norm_2 = modules.LayerNorm(filter_channels) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - self.output_layer = nn.Sequential( - nn.Linear(filter_channels, 1), - nn.Sigmoid() - ) - - def forward_probability(self, x, x_mask, dur, g=None): - dur = self.dur_proj(dur) - x = torch.cat([x, dur], dim=1) - x = self.pre_out_conv_1(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_1(x) - x = self.drop(x) - x = self.pre_out_conv_2(x * x_mask) - x = torch.relu(x) - x = self.pre_out_norm_2(x) - x = self.drop(x) - x = x * x_mask - x = x.transpose(1, 2) - output_prob = self.output_layer(x) - return output_prob - - def forward(self, x, x_mask, dur_r, dur_hat, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - - output_probs = [] - for dur in [dur_r, dur_hat]: - output_prob = self.forward_probability(x, x_mask, dur, g) - output_probs.append(output_prob) - - return output_probs - -class TransformerCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - n_flows=4, - gin_channels=0, - share_parameter=False - ): - - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - - self.wn = attentions.FFT(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = self.gin_channels) if share_parameter else None - - for i in range(n_flows): - self.flows.append( - modules.TransformerCouplingLayer(channels, hidden_channels, kernel_size, n_layers, n_heads, p_dropout, filter_channels, mean_only=True, wn_sharing_parameter=self.wn, gin_channels = self.gin_channels)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = 
in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1, 2]) - logq = torch.sum(-0.5 * (math.log(2 * math.pi) + (e_q ** 2)) * x_mask, [1, 2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2 * math.pi) + (z ** 2)) * x_mask, [1, 2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size // 2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) 
- - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=0): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - self.emb = nn.Embedding(len(symbols), hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels ** -0.5) - self.tone_emb = nn.Embedding(num_tones, hidden_channels) - nn.init.normal_(self.tone_emb.weight, 0.0, hidden_channels ** -0.5) - self.language_emb = nn.Embedding(num_languages, hidden_channels) - nn.init.normal_(self.language_emb.weight, 0.0, hidden_channels ** -0.5) - self.bert_proj = nn.Conv1d(1024, hidden_channels, 1) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, tone, language, bert, g=None): - x = (self.emb(x)+ self.tone_emb(tone)+ self.language_emb(language)+self.bert_proj(bert).transpose(1,2)) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask, g=g) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, - gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - 
self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel // (2 ** i), upsample_initial_channel // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) 
- - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - -class ReferenceEncoder(nn.Module): - ''' - inputs --- [N, Ty/r, n_mels*r] mels - outputs --- [N, ref_enc_gru_size] - ''' - - def __init__(self, spec_channels, gin_channels=0): - - super().__init__() - self.spec_channels = spec_channels - ref_enc_filters = [32, 32, 64, 64, 128, 128] - K = len(ref_enc_filters) - filters = [1] + ref_enc_filters - convs = [weight_norm(nn.Conv2d(in_channels=filters[i], - out_channels=filters[i + 1], - kernel_size=(3, 3), - stride=(2, 2), - padding=(1, 1))) for i in range(K)] - self.convs = nn.ModuleList(convs) - # self.wns = nn.ModuleList([weight_norm(num_features=ref_enc_filters[i]) for i in range(K)]) - - out_channels = self.calculate_channels(spec_channels, 3, 2, 1, K) - self.gru = nn.GRU(input_size=ref_enc_filters[-1] * out_channels, - hidden_size=256 // 2, - batch_first=True) - self.proj = nn.Linear(128, gin_channels) - - def forward(self, inputs, mask=None): - N = inputs.size(0) - out = inputs.view(N, 1, -1, self.spec_channels) # [N, 1, Ty, n_freqs] - for conv in self.convs: - out = conv(out) - # out = wn(out) - out = F.relu(out) # [N, 128, Ty//2^K, n_mels//2^K] - - out = out.transpose(1, 2) # [N, Ty//2^K, 128, n_mels//2^K] - T = out.size(1) - N = out.size(0) - out = out.contiguous().view(N, T, -1) # [N, Ty//2^K, 128*n_mels//2^K] - - self.gru.flatten_parameters() - memory, out = self.gru(out) # out --- [1, N, 128] - - return self.proj(out.squeeze(0)) - - def calculate_channels(self, L, kernel_size, stride, pad, n_convs): - for i in range(n_convs): - L = (L - kernel_size + 2 * pad) // stride + 1 - return L - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - 
p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=256, - gin_channels=256, - use_sdp=True, - n_flow_layer = 4, - n_layers_trans_flow = 3, - flow_share_parameter = False, - use_transformer_flow = True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - self.n_layers_trans_flow = n_layers_trans_flow - self.use_spk_conditioned_encoder = kwargs.get("use_spk_conditioned_encoder", True) - self.use_sdp = use_sdp - self.use_noise_scaled_mas = kwargs.get("use_noise_scaled_mas", False) - self.mas_noise_scale_initial = kwargs.get("mas_noise_scale_initial", 0.01) - self.noise_scale_delta = kwargs.get("noise_scale_delta", 2e-6) - self.current_mas_noise_scale = self.mas_noise_scale_initial - if self.use_spk_conditioned_encoder and gin_channels > 0: - self.enc_gin_channels = gin_channels - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - gin_channels=self.enc_gin_channels) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, - upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, - gin_channels=gin_channels) - if use_transformer_flow: - self.flow = TransformerCouplingBlock(inter_channels, hidden_channels, filter_channels, n_heads, n_layers_trans_flow, 5, p_dropout, n_flow_layer, gin_channels=gin_channels,share_parameter= flow_share_parameter) - else: - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, n_flow_layer, gin_channels=gin_channels) - self.sdp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers >= 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - else: - self.ref_enc = ReferenceEncoder(spec_channels, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid, tone, language, bert): - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1) - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), - s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, 
t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - if self.use_noise_scaled_mas: - epsilon = torch.std(neg_cent) * torch.randn_like(neg_cent) * self.current_mas_noise_scale - neg_cent = neg_cent + epsilon - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - - l_length_sdp = self.sdp(x, x_mask, w, g=g) - l_length_sdp = l_length_sdp / torch.sum(x_mask) - - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length_dp = torch.sum((logw - logw_) ** 2, [1, 2]) / torch.sum(x_mask) # for averaging - - l_length = l_length_dp + l_length_sdp - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q), (x, logw, logw_) - - def infer(self, x, x_lengths, sid, tone, language, bert, noise_scale=.667, length_scale=1, noise_scale_w=0.8, max_len=None, sdp_ratio=0,y=None): - #x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert) - # g = self.gst(y) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = self.ref_enc(y.transpose(1,2)).unsqueeze(-1) - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths, tone, language, bert,g=g) - logw = self.sdp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) * (sdp_ratio) + self.dp(x, x_mask, g=g) * (1 - sdp_ratio) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, - 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * y_mask)[:, :, :max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) diff --git a/spaces/XzJosh/Jianmo-Bert-VITS2/server.py b/spaces/XzJosh/Jianmo-Bert-VITS2/server.py deleted file mode 100644 index c736ca4f95fec853950eef6654ef79856beffc0a..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Jianmo-Bert-VITS2/server.py +++ /dev/null @@ -1,123 +0,0 @@ -from flask import Flask, request, Response -from io import BytesIO -import torch -from av import open as avopen - -import commons -import utils -from models import SynthesizerTrn -from text.symbols import symbols -from text import cleaned_text_to_sequence, get_bert -from text.cleaner import clean_text -from scipy.io import wavfile - -# Flask Init -app = Flask(__name__) -app.config['JSON_AS_ASCII'] = False -def get_text(text, language_str, hps): - norm_text, phone, tone, word2ph = clean_text(text, language_str) - print([f"{p}{t}" for p, t in zip(phone, tone)]) - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - 
phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert = get_bert(norm_text, word2ph, language_str) - - assert bert.shape[-1] == len(phone) - - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - - return bert, phone, tone, language - -def infer(text, sdp_ratio, noise_scale, noise_scale_w,length_scale,sid): - bert, phones, tones, lang_ids = get_text(text,"ZH", hps,) - with torch.no_grad(): - x_tst=phones.to(dev).unsqueeze(0) - tones=tones.to(dev).unsqueeze(0) - lang_ids=lang_ids.to(dev).unsqueeze(0) - bert = bert.to(dev).unsqueeze(0) - x_tst_lengths = torch.LongTensor([phones.size(0)]).to(dev) - speakers = torch.LongTensor([hps.data.spk2id[sid]]).to(dev) - audio = net_g.infer(x_tst, x_tst_lengths, speakers, tones, lang_ids,bert, sdp_ratio=sdp_ratio - , noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale)[0][0,0].data.cpu().float().numpy() - return audio - -def replace_punctuation(text, i=2): - punctuation = ",。?!" - for char in punctuation: - text = text.replace(char, char * i) - return text - -def wav2(i, o, format): - inp = avopen(i, 'rb') - out = avopen(o, 'wb', format=format) - if format == "ogg": format = "libvorbis" - - ostream = out.add_stream(format) - - for frame in inp.decode(audio=0): - for p in ostream.encode(frame): out.mux(p) - - for p in ostream.encode(None): out.mux(p) - - out.close() - inp.close() - -# Load Generator -hps = utils.get_hparams_from_file("./configs/config.json") - -dev='cuda' -net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model).to(dev) -_ = net_g.eval() - -_ = utils.load_checkpoint("logs/G_649000.pth", net_g, None,skip_optimizer=True) - -@app.route("/",methods=['GET','POST']) -def main(): - if request.method == 'GET': - try: - speaker = request.args.get('speaker') - text = request.args.get('text').replace("/n","") - sdp_ratio = float(request.args.get("sdp_ratio", 0.2)) - noise = float(request.args.get("noise", 0.5)) - noisew = float(request.args.get("noisew", 0.6)) - length = float(request.args.get("length", 1.2)) - if length >= 2: - return "Too big length" - if len(text) >=200: - return "Too long text" - fmt = request.args.get("format", "wav") - if None in (speaker, text): - return "Missing Parameter" - if fmt not in ("mp3", "wav", "ogg"): - return "Invalid Format" - except: - return "Invalid Parameter" - - with torch.no_grad(): - audio = infer(text, sdp_ratio=sdp_ratio, noise_scale=noise, noise_scale_w=noisew, length_scale=length, sid=speaker) - - with BytesIO() as wav: - wavfile.write(wav, hps.data.sampling_rate, audio) - torch.cuda.empty_cache() - if fmt == "wav": - return Response(wav.getvalue(), mimetype="audio/wav") - wav.seek(0, 0) - with BytesIO() as ofp: - wav2(wav, ofp, fmt) - return Response( - ofp.getvalue(), - mimetype="audio/mpeg" if fmt == "mp3" else "audio/ogg" - ) diff --git a/spaces/Yiqin/ChatVID/model/fastchat/model/make_delta.py b/spaces/Yiqin/ChatVID/model/fastchat/model/make_delta.py deleted file mode 100644 index ebaa2db62e50f7d91ee0d1f9379e704c932b9ec2..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/fastchat/model/make_delta.py +++ /dev/null @@ -1,46 +0,0 @@ -""" -Make the delta weights by subtracting base weights. 
- -Usage: -python3 -m fastchat.model.make_delta --base ~/model_weights/llama-13b --target ~/model_weights/vicuna-13b --delta ~/model_weights/vicuna-13b-delta --hub-repo-id lmsys/vicuna-13b-delta-v1.1 -""" -import argparse - -import torch -from tqdm import tqdm -from transformers import AutoTokenizer, AutoModelForCausalLM - - -def make_delta(base_model_path, target_model_path, delta_path): - print(f"Loading the base model from {base_model_path}") - base = AutoModelForCausalLM.from_pretrained( - base_model_path, torch_dtype=torch.float16, low_cpu_mem_usage=True - ) - - print(f"Loading the target model from {target_model_path}") - target = AutoModelForCausalLM.from_pretrained( - target_model_path, torch_dtype=torch.float16, low_cpu_mem_usage=True - ) - - print("Calculating the delta") - for name, param in tqdm(target.state_dict().items(), desc="Calculating delta"): - assert name in base.state_dict() - param.data -= base.state_dict()[name] - - print(f"Saving the delta to {delta_path}") - if args.hub_repo_id: - kwargs = {"push_to_hub": True, "repo_id": args.hub_repo_id} - else: - kwargs = {} - target.save_pretrained(delta_path, **kwargs) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--base-model-path", type=str, required=True) - parser.add_argument("--target-model-path", type=str, required=True) - parser.add_argument("--delta-path", type=str, required=True) - parser.add_argument("--hub-repo-id", type=str) - args = parser.parse_args() - - make_delta(args.base_model_path, args.target_model_path, args.delta_path) diff --git a/spaces/YouLiXiya/Mobile-SAM/GroundingDINO/README.md b/spaces/YouLiXiya/Mobile-SAM/GroundingDINO/README.md deleted file mode 100644 index b6610df03d409633e572ef49d67a445d35a63967..0000000000000000000000000000000000000000 --- a/spaces/YouLiXiya/Mobile-SAM/GroundingDINO/README.md +++ /dev/null @@ -1,163 +0,0 @@ -# Grounding DINO - ---- - -[![arXiv](https://img.shields.io/badge/arXiv-2303.05499-b31b1b.svg)](https://arxiv.org/abs/2303.05499) -[![YouTube](https://badges.aleen42.com/src/youtube.svg)](https://youtu.be/wxWDt5UiwY8) -[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-grounding-dino.ipynb) -[![YouTube](https://badges.aleen42.com/src/youtube.svg)](https://youtu.be/cMa77r3YrDk) -[![HuggingFace space](https://img.shields.io/badge/🤗-HuggingFace%20Space-cyan.svg)](https://huggingface.co/spaces/ShilongLiu/Grounding_DINO_demo) - -[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/zero-shot-object-detection-on-mscoco)](https://paperswithcode.com/sota/zero-shot-object-detection-on-mscoco?p=grounding-dino-marrying-dino-with-grounded) \ -[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/zero-shot-object-detection-on-odinw)](https://paperswithcode.com/sota/zero-shot-object-detection-on-odinw?p=grounding-dino-marrying-dino-with-grounded) \ -[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/object-detection-on-coco-minival)](https://paperswithcode.com/sota/object-detection-on-coco-minival?p=grounding-dino-marrying-dino-with-grounded) \ 
-[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/grounding-dino-marrying-dino-with-grounded/object-detection-on-coco)](https://paperswithcode.com/sota/object-detection-on-coco?p=grounding-dino-marrying-dino-with-grounded) - - - -Official PyTorch implementation of [Grounding DINO](https://arxiv.org/abs/2303.05499), a stronger open-set object detector. Code is available now! - - -## Highlight - -- **Open-Set Detection.** Detect **everything** with language! -- **High Performancce.** COCO zero-shot **52.5 AP** (training without COCO data!). COCO fine-tune **63.0 AP**. -- **Flexible.** Collaboration with Stable Diffusion for Image Editting. - -## News -[2023/03/28] A YouTube [video](https://youtu.be/cMa77r3YrDk) about Grounding DINO and basic object detection prompt engineering. [[SkalskiP](https://github.com/SkalskiP)] \ -[2023/03/28] Add a [demo](https://huggingface.co/spaces/ShilongLiu/Grounding_DINO_demo) on Hugging Face Space! \ -[2023/03/27] Support CPU-only mode. Now the model can run on machines without GPUs.\ -[2023/03/25] A [demo](https://colab.research.google.com/github/roboflow-ai/notebooks/blob/main/notebooks/zero-shot-object-detection-with-grounding-dino.ipynb) for Grounding DINO is available at Colab. [[SkalskiP](https://github.com/SkalskiP)] \ -[2023/03/22] Code is available Now! - -
- Description (figure: ODinW)
    - - - -## TODO - -- [x] Release inference code and demo. -- [x] Release checkpoints. -- [ ] Grounding DINO with Stable Diffusion and GLIGEN demos. -- [ ] Release training codes. - -## Install - -If you have a CUDA environment, please make sure the environment variable `CUDA_HOME` is set. It will be compiled under CPU-only mode if no CUDA available. - -```bash -pip install -e . -``` - -## Demo - -```bash -CUDA_VISIBLE_DEVICES=6 python demo/inference_on_a_image.py \ - -c /path/to/config \ - -p /path/to/checkpoint \ - -i .asset/cats.png \ - -o "outputs/0" \ - -t "cat ear." \ - [--cpu-only] # open it for cpu mode -``` -See the `demo/inference_on_a_image.py` for more details. - -**Web UI** - -We also provide a demo code to integrate Grounding DINO with Gradio Web UI. See the file `demo/gradio_app.py` for more details. - -## Checkpoints - - - - - - - - - - - - - - - - - - - - - - - - - -
| | name | backbone | Data | box AP on COCO | Checkpoint | Config |
|---|------|----------|------|----------------|------------|--------|
| 1 | GroundingDINO-T | Swin-T | O365, GoldG, Cap4M | 48.4 (zero-shot) / 57.2 (fine-tune) | Github link \| HF link | link |
    - -## Results - -
- COCO Object Detection Results (figure: COCO)
- ODinW Object Detection Results (figure: ODinW)
- Marrying Grounding DINO with Stable Diffusion for Image Editing (figure: GD_SD)
- Marrying Grounding DINO with GLIGEN for more Detailed Image Editing (figure: GD_GLIGEN)
    - -## Model - -Includes: a text backbone, an image backbone, a feature enhancer, a language-guided query selection, and a cross-modality decoder. - -![arch](.asset/arch.png) - - -## Acknowledgement - -Our model is related to [DINO](https://github.com/IDEA-Research/DINO) and [GLIP](https://github.com/microsoft/GLIP). Thanks for their great work! - -We also thank great previous work including DETR, Deformable DETR, SMCA, Conditional DETR, Anchor DETR, Dynamic DETR, DAB-DETR, DN-DETR, etc. More related work are available at [Awesome Detection Transformer](https://github.com/IDEACVR/awesome-detection-transformer). A new toolbox [detrex](https://github.com/IDEA-Research/detrex) is available as well. - -Thanks [Stable Diffusion](https://github.com/Stability-AI/StableDiffusion) and [GLIGEN](https://github.com/gligen/GLIGEN) for their awesome models. - - -## Citation - -If you find our work helpful for your research, please consider citing the following BibTeX entry. - -```bibtex -@inproceedings{ShilongLiu2023GroundingDM, - title={Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection}, - author={Shilong Liu and Zhaoyang Zeng and Tianhe Ren and Feng Li and Hao Zhang and Jie Yang and Chunyuan Li and Jianwei Yang and Hang Su and Jun Zhu and Lei Zhang}, - year={2023} -} -``` - - - - diff --git a/spaces/YouLiXiya/Mobile-SAM/GroundingDINO/groundingdino/datasets/__init__.py b/spaces/YouLiXiya/Mobile-SAM/GroundingDINO/groundingdino/datasets/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Yuliang/ICON/lib/net/FBNet.py b/spaces/Yuliang/ICON/lib/net/FBNet.py deleted file mode 100644 index a4392c0544e259b3407559effad9174723590584..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ICON/lib/net/FBNet.py +++ /dev/null @@ -1,388 +0,0 @@ -''' -Copyright (C) 2019 NVIDIA Corporation. Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu. -BSD License. All rights reserved. - -Redistribution and use in source and binary forms, with or without -modification, are permitted provided that the following conditions are met: - -* Redistributions of source code must retain the above copyright notice, this - list of conditions and the following disclaimer. - -* Redistributions in binary form must reproduce the above copyright notice, - this list of conditions and the following disclaimer in the documentation - and/or other materials provided with the distribution. - -THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL -IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR ANY PARTICULAR PURPOSE. -IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL -DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, -WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING -OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
-''' -import torch -import torch.nn as nn -import functools -import numpy as np -import pytorch_lightning as pl - - -############################################################################### -# Functions -############################################################################### -def weights_init(m): - classname = m.__class__.__name__ - if classname.find('Conv') != -1: - m.weight.data.normal_(0.0, 0.02) - elif classname.find('BatchNorm2d') != -1: - m.weight.data.normal_(1.0, 0.02) - m.bias.data.fill_(0) - - -def get_norm_layer(norm_type='instance'): - if norm_type == 'batch': - norm_layer = functools.partial(nn.BatchNorm2d, affine=True) - elif norm_type == 'instance': - norm_layer = functools.partial(nn.InstanceNorm2d, affine=False) - else: - raise NotImplementedError('normalization layer [%s] is not found' % - norm_type) - return norm_layer - - -def define_G(input_nc, - output_nc, - ngf, - netG, - n_downsample_global=3, - n_blocks_global=9, - n_local_enhancers=1, - n_blocks_local=3, - norm='instance', - gpu_ids=[], - last_op=nn.Tanh()): - norm_layer = get_norm_layer(norm_type=norm) - if netG == 'global': - netG = GlobalGenerator(input_nc, - output_nc, - ngf, - n_downsample_global, - n_blocks_global, - norm_layer, - last_op=last_op) - elif netG == 'local': - netG = LocalEnhancer(input_nc, output_nc, ngf, n_downsample_global, - n_blocks_global, n_local_enhancers, - n_blocks_local, norm_layer) - elif netG == 'encoder': - netG = Encoder(input_nc, output_nc, ngf, n_downsample_global, - norm_layer) - else: - raise ('generator not implemented!') - # print(netG) - if len(gpu_ids) > 0: - assert (torch.cuda.is_available()) - device=torch.device(f"cuda:{gpu_ids[0]}") - netG = netG.to(device) - netG.apply(weights_init) - return netG - - -def print_network(net): - if isinstance(net, list): - net = net[0] - num_params = 0 - for param in net.parameters(): - num_params += param.numel() - print(net) - print('Total number of parameters: %d' % num_params) - - -############################################################################## -# Generator -############################################################################## -class LocalEnhancer(pl.LightningModule): - def __init__(self, - input_nc, - output_nc, - ngf=32, - n_downsample_global=3, - n_blocks_global=9, - n_local_enhancers=1, - n_blocks_local=3, - norm_layer=nn.BatchNorm2d, - padding_type='reflect'): - super(LocalEnhancer, self).__init__() - self.n_local_enhancers = n_local_enhancers - - ###### global generator model ##### - ngf_global = ngf * (2**n_local_enhancers) - model_global = GlobalGenerator(input_nc, output_nc, ngf_global, - n_downsample_global, n_blocks_global, - norm_layer).model - model_global = [model_global[i] for i in range(len(model_global) - 3) - ] # get rid of final convolution layers - self.model = nn.Sequential(*model_global) - - ###### local enhancer layers ##### - for n in range(1, n_local_enhancers + 1): - # downsample - ngf_global = ngf * (2**(n_local_enhancers - n)) - model_downsample = [ - nn.ReflectionPad2d(3), - nn.Conv2d(input_nc, ngf_global, kernel_size=7, padding=0), - norm_layer(ngf_global), - nn.ReLU(True), - nn.Conv2d(ngf_global, - ngf_global * 2, - kernel_size=3, - stride=2, - padding=1), - norm_layer(ngf_global * 2), - nn.ReLU(True) - ] - # residual blocks - model_upsample = [] - for i in range(n_blocks_local): - model_upsample += [ - ResnetBlock(ngf_global * 2, - padding_type=padding_type, - norm_layer=norm_layer) - ] - - # upsample - model_upsample += [ - nn.ConvTranspose2d(ngf_global * 2, - 
ngf_global, - kernel_size=3, - stride=2, - padding=1, - output_padding=1), - norm_layer(ngf_global), - nn.ReLU(True) - ] - - # final convolution - if n == n_local_enhancers: - model_upsample += [ - nn.ReflectionPad2d(3), - nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0), - nn.Tanh() - ] - - setattr(self, 'model' + str(n) + '_1', - nn.Sequential(*model_downsample)) - setattr(self, 'model' + str(n) + '_2', - nn.Sequential(*model_upsample)) - - self.downsample = nn.AvgPool2d(3, - stride=2, - padding=[1, 1], - count_include_pad=False) - - def forward(self, input): - # create input pyramid - input_downsampled = [input] - for i in range(self.n_local_enhancers): - input_downsampled.append(self.downsample(input_downsampled[-1])) - - # output at coarest level - output_prev = self.model(input_downsampled[-1]) - # build up one layer at a time - for n_local_enhancers in range(1, self.n_local_enhancers + 1): - model_downsample = getattr(self, - 'model' + str(n_local_enhancers) + '_1') - model_upsample = getattr(self, - 'model' + str(n_local_enhancers) + '_2') - input_i = input_downsampled[self.n_local_enhancers - - n_local_enhancers] - output_prev = model_upsample( - model_downsample(input_i) + output_prev) - return output_prev - - -class GlobalGenerator(pl.LightningModule): - def __init__(self, - input_nc, - output_nc, - ngf=64, - n_downsampling=3, - n_blocks=9, - norm_layer=nn.BatchNorm2d, - padding_type='reflect', - last_op=nn.Tanh()): - assert (n_blocks >= 0) - super(GlobalGenerator, self).__init__() - activation = nn.ReLU(True) - - model = [ - nn.ReflectionPad2d(3), - nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0), - norm_layer(ngf), activation - ] - # downsample - for i in range(n_downsampling): - mult = 2**i - model += [ - nn.Conv2d(ngf * mult, - ngf * mult * 2, - kernel_size=3, - stride=2, - padding=1), - norm_layer(ngf * mult * 2), activation - ] - - # resnet blocks - mult = 2**n_downsampling - for i in range(n_blocks): - model += [ - ResnetBlock(ngf * mult, - padding_type=padding_type, - activation=activation, - norm_layer=norm_layer) - ] - - # upsample - for i in range(n_downsampling): - mult = 2**(n_downsampling - i) - model += [ - nn.ConvTranspose2d(ngf * mult, - int(ngf * mult / 2), - kernel_size=3, - stride=2, - padding=1, - output_padding=1), - norm_layer(int(ngf * mult / 2)), activation - ] - model += [ - nn.ReflectionPad2d(3), - nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0) - ] - if last_op is not None: - model += [last_op] - self.model = nn.Sequential(*model) - - def forward(self, input): - return self.model(input) - - -# Define a resnet block -class ResnetBlock(pl.LightningModule): - def __init__(self, - dim, - padding_type, - norm_layer, - activation=nn.ReLU(True), - use_dropout=False): - super(ResnetBlock, self).__init__() - self.conv_block = self.build_conv_block(dim, padding_type, norm_layer, - activation, use_dropout) - - def build_conv_block(self, dim, padding_type, norm_layer, activation, - use_dropout): - conv_block = [] - p = 0 - if padding_type == 'reflect': - conv_block += [nn.ReflectionPad2d(1)] - elif padding_type == 'replicate': - conv_block += [nn.ReplicationPad2d(1)] - elif padding_type == 'zero': - p = 1 - else: - raise NotImplementedError('padding [%s] is not implemented' % - padding_type) - - conv_block += [ - nn.Conv2d(dim, dim, kernel_size=3, padding=p), - norm_layer(dim), activation - ] - if use_dropout: - conv_block += [nn.Dropout(0.5)] - - p = 0 - if padding_type == 'reflect': - conv_block += [nn.ReflectionPad2d(1)] - elif padding_type == 
'replicate': - conv_block += [nn.ReplicationPad2d(1)] - elif padding_type == 'zero': - p = 1 - else: - raise NotImplementedError('padding [%s] is not implemented' % - padding_type) - conv_block += [ - nn.Conv2d(dim, dim, kernel_size=3, padding=p), - norm_layer(dim) - ] - - return nn.Sequential(*conv_block) - - def forward(self, x): - out = x + self.conv_block(x) - return out - - -class Encoder(pl.LightningModule): - def __init__(self, - input_nc, - output_nc, - ngf=32, - n_downsampling=4, - norm_layer=nn.BatchNorm2d): - super(Encoder, self).__init__() - self.output_nc = output_nc - - model = [ - nn.ReflectionPad2d(3), - nn.Conv2d(input_nc, ngf, kernel_size=7, padding=0), - norm_layer(ngf), - nn.ReLU(True) - ] - # downsample - for i in range(n_downsampling): - mult = 2**i - model += [ - nn.Conv2d(ngf * mult, - ngf * mult * 2, - kernel_size=3, - stride=2, - padding=1), - norm_layer(ngf * mult * 2), - nn.ReLU(True) - ] - - # upsample - for i in range(n_downsampling): - mult = 2**(n_downsampling - i) - model += [ - nn.ConvTranspose2d(ngf * mult, - int(ngf * mult / 2), - kernel_size=3, - stride=2, - padding=1, - output_padding=1), - norm_layer(int(ngf * mult / 2)), - nn.ReLU(True) - ] - - model += [ - nn.ReflectionPad2d(3), - nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0), - nn.Tanh() - ] - self.model = nn.Sequential(*model) - - def forward(self, input, inst): - outputs = self.model(input) - - # instance-wise average pooling - outputs_mean = outputs.clone() - inst_list = np.unique(inst.cpu().numpy().astype(int)) - for i in inst_list: - for b in range(input.size()[0]): - indices = (inst[b:b + 1] == int(i)).nonzero() # n x 4 - for j in range(self.output_nc): - output_ins = outputs[indices[:, 0] + b, indices[:, 1] + j, - indices[:, 2], indices[:, 3]] - mean_feat = torch.mean(output_ins).expand_as(output_ins) - outputs_mean[indices[:, 0] + b, indices[:, 1] + j, - indices[:, 2], indices[:, 3]] = mean_feat - return outputs_mean diff --git a/spaces/Yunshansongbai/SVC-Nahida/cluster/train_cluster.py b/spaces/Yunshansongbai/SVC-Nahida/cluster/train_cluster.py deleted file mode 100644 index 48506b94ace50d2e955b61d93c12e5911e3b227f..0000000000000000000000000000000000000000 --- a/spaces/Yunshansongbai/SVC-Nahida/cluster/train_cluster.py +++ /dev/null @@ -1,88 +0,0 @@ -import os -from glob import glob -from pathlib import Path -import paddle -import logging -import argparse -import numpy as np -from sklearn.cluster import KMeans, MiniBatchKMeans -import tqdm -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) -import time -import random - -def train_cluster(in_dir, n_clusters, use_minibatch=True, verbose=False): - - logger.info(f"正在从{in_dir}加载特征") - features = [] - nums = 0 - for path in tqdm.tqdm(in_dir.glob("*.soft.pdtensor")): - path = str(path) - features.append(paddle.load(path).squeeze(0).numpy().T) - # print(features[-1].shape) - features = np.concatenate(features, axis=0) - print(nums, features.nbytes/ 1024**2, "MB , 形状:",features.shape, features.dtype) - features = features.astype(np.float32) - logger.info(f"聚类特征的形状:{features.shape}") - t = time.time() - if use_minibatch: - kmeans = MiniBatchKMeans(n_clusters=n_clusters,verbose=verbose, batch_size=4096, max_iter=80).fit(features) - else: - kmeans = KMeans(n_clusters=n_clusters,verbose=verbose).fit(features) - print(time.time()-t, "s") - - x = { - "n_features_in_": kmeans.n_features_in_, - "_n_threads": kmeans._n_threads, - "cluster_centers_": kmeans.cluster_centers_, - } - print("结束") - - return x - - -if __name__ == 
"__main__": - - parser = argparse.ArgumentParser() - parser.add_argument('--dataset', type=Path, default="./dataset/44k", - help='path of training data directory') - parser.add_argument('--output', type=Path, default="logs/44k", - help='path of model output directory') - - args = parser.parse_args() - - checkpoint_dir = args.output - dataset = args.dataset - n_clusters = 10000 - - ckpt = {} - for spk in os.listdir(dataset): - if os.path.isdir(dataset/spk): - print(f"正在给{spk}训练kmeans中……") - in_dir = dataset/spk - x = train_cluster(in_dir, n_clusters, verbose=False) - ckpt[spk] = x - - checkpoint_path = checkpoint_dir / f"kmeans_{n_clusters}.pdparams" - checkpoint_path.parent.mkdir(exist_ok=True, parents=True) - paddle.save( - ckpt, - str(checkpoint_path), - ) - - - # import cluster - # for spk in tqdm.tqdm(os.listdir("dataset")): - # if os.path.isdir(f"dataset/{spk}"): - # print(f"start kmeans inference for {spk}...") - # for feature_path in tqdm.tqdm(glob(f"dataset/{spk}/*.discrete.npy", recursive=True)): - # mel_path = feature_path.replace(".discrete.npy",".mel.npy") - # mel_spectrogram = np.load(mel_path) - # feature_len = mel_spectrogram.shape[-1] - # c = np.load(feature_path) - # c = utils.tools.repeat_expand_2d(torch.FloatTensor(c), feature_len).numpy() - # feature = c.T - # feature_class = cluster.get_cluster_result(feature, spk) - # np.save(feature_path.replace(".discrete.npy", ".discrete_class.npy"), feature_class) - diff --git a/spaces/ZalacDanijel/pujaguja/Dockerfile b/spaces/ZalacDanijel/pujaguja/Dockerfile deleted file mode 100644 index 94ee76a4f45af463ab7f945633c9258172f9cc80..0000000000000000000000000000000000000000 --- a/spaces/ZalacDanijel/pujaguja/Dockerfile +++ /dev/null @@ -1,2 +0,0 @@ -FROM huggingface/autotrain-advanced:latest -CMD autotrain app --port 7860 diff --git a/spaces/ZilliaxOfficial/nyaru-svc-3.0/models.py b/spaces/ZilliaxOfficial/nyaru-svc-3.0/models.py deleted file mode 100644 index bdbce8445304abda792f235a4761b831fd6f4d12..0000000000000000000000000000000000000000 --- a/spaces/ZilliaxOfficial/nyaru-svc-3.0/models.py +++ /dev/null @@ -1,351 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import attentions -import commons -import modules - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding -from vdecoder.hifigan.models import Generator -from utils import f0_to_coarse - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class Encoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - 
hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - # print(x.shape,x_lengths.shape) - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - filter_channels=None, - n_heads=None, - p_dropout=None): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - self.f0_emb = nn.Embedding(256, hidden_channels) - - self.enc_ = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - - def forward(self, x, x_lengths, f0=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = x + self.f0_emb(f0).transpose(1,2) - x = self.enc_(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - - return z, m, logs, x_mask - - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class 
DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class SpeakerEncoder(torch.nn.Module): - def __init__(self, mel_n_channels=80, model_num_layers=3, model_hidden_size=256, model_embedding_size=256): - super(SpeakerEncoder, self).__init__() - self.lstm = nn.LSTM(mel_n_channels, model_hidden_size, model_num_layers, batch_first=True) - self.linear = nn.Linear(model_hidden_size, model_embedding_size) - self.relu = nn.ReLU() - - def forward(self, mels): - self.lstm.flatten_parameters() - _, (hidden, _) = self.lstm(mels) - embeds_raw = self.relu(self.linear(hidden[-1])) - return embeds_raw / torch.norm(embeds_raw, dim=1, keepdim=True) - - def compute_partial_slices(self, total_frames, partial_frames, partial_hop): - mel_slices = [] - for i in range(0, total_frames-partial_frames, partial_hop): - mel_range = torch.arange(i, i+partial_frames) - mel_slices.append(mel_range) - - return mel_slices - - def embed_utterance(self, mel, partial_frames=128, partial_hop=64): - mel_len = mel.size(1) - last_mel = mel[:,-partial_frames:] - - if mel_len > partial_frames: - mel_slices = self.compute_partial_slices(mel_len, partial_frames, partial_hop) - mels = list(mel[:,s] for s in mel_slices) - mels.append(last_mel) - mels = torch.stack(tuple(mels), 0).squeeze(1) - - with torch.no_grad(): - partial_embeds = self(mels) - embed = torch.mean(partial_embeds, axis=0).unsqueeze(0) - #embed = embed / torch.linalg.norm(embed, 2) - else: - with torch.no_grad(): - embed = self(last_mel) - - return embed - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - ssl_dim, - n_speakers, - **kwargs): - - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = 
hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - self.ssl_dim = ssl_dim - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - self.enc_p_ = TextEncoder(ssl_dim, inter_channels, hidden_channels, 5, 1, 16,0, filter_channels, n_heads, p_dropout) - hps = { - "sampling_rate": 32000, - "inter_channels": 192, - "resblock": "1", - "resblock_kernel_sizes": [3, 7, 11], - "resblock_dilation_sizes": [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - "upsample_rates": [10, 8, 2, 2], - "upsample_initial_channel": 512, - "upsample_kernel_sizes": [16, 16, 4, 4], - "gin_channels": 256, - } - self.dec = Generator(h=hps) - self.enc_q = Encoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - def forward(self, c, f0, spec, g=None, mel=None, c_lengths=None, spec_lengths=None): - if c_lengths == None: - c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device) - if spec_lengths == None: - spec_lengths = (torch.ones(spec.size(0)) * spec.size(-1)).to(spec.device) - - g = self.emb_g(g).transpose(1,2) - - z_ptemp, m_p, logs_p, _ = self.enc_p_(c, c_lengths, f0=f0_to_coarse(f0)) - z, m_q, logs_q, spec_mask = self.enc_q(spec, spec_lengths, g=g) - - z_p = self.flow(z, spec_mask, g=g) - z_slice, pitch_slice, ids_slice = commons.rand_slice_segments_with_pitch(z, f0, spec_lengths, self.segment_size) - - # o = self.dec(z_slice, g=g) - o = self.dec(z_slice, g=g, f0=pitch_slice) - - return o, ids_slice, spec_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, c, f0, g=None, mel=None, c_lengths=None): - if c_lengths == None: - c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device) - g = self.emb_g(g).transpose(1,2) - - z_p, m_p, logs_p, c_mask = self.enc_p_(c, c_lengths, f0=f0_to_coarse(f0)) - z = self.flow(z_p, c_mask, g=g, reverse=True) - - o = self.dec(z * c_mask, g=g, f0=f0) - - return o diff --git a/spaces/ZiyadCodes/ArabicGPT/index.js b/spaces/ZiyadCodes/ArabicGPT/index.js deleted file mode 100644 index 3e78455112772a29302751bb8b6ceb43d105e038..0000000000000000000000000000000000000000 --- a/spaces/ZiyadCodes/ArabicGPT/index.js +++ /dev/null @@ -1,233 +0,0 @@ -const stopGeneratingButton = document.querySelector(".stop-generating"); -const newChatButton = document.querySelector(".new-chat-button"); -const loadingText = document.getElementById('loading-text'); -const userInput = document.getElementById("user-input"); -const messages = document.querySelector(".messages"); -const sendBtn = document.getElementById("send-btn"); - -let conversationId = '-1'; -let continueMsg = true; -let loadingAnimation; -let chatHistory = []; -let canSend = true; -let currentMsg; - -async function sendPrompt(prompt) { - canSend = false - addMessage(prompt, false) - chatHistory.push(`{ "role": "user", "content": "${prompt.replace(/\\/g, '\\\\').replace(/\n/g, '\\n').replace(/"/g, '\\"') }" }`); - await fetch('https://arabicgpt-5yb4zfzupa-uc.a.run.app', { method: 'POST', headers: {'conversation-id': conversationId}, 
body: chatHistory.toString() }) - .then(async (response) => { - clearInterval(loadingAnimation); - if (response.status >= 200 && response.status < 400) { - loadingText.style.display = 'none'; - loadingText.textContent = ''; - addMessage('', true); - - if (response.headers.has('conversation-id')) { - conversationId = response.headers.get('conversation-id'); - } - - const d = new TextDecoder('utf8'); - const reader = response.body.getReader(); - while (continueMsg) { - let { value, done } = await reader.read(); - chunk = d.decode(value); - - if (chunk.includes('DONE') || done) break; - - chunk.split('\n\n').slice(0, -1).forEach(subChunk => { - console.log(subChunk); - delta = JSON.parse(subChunk.substring(6)).choices[0].delta; - if (delta.content) { - currentMsg.innerText += delta.content; - messages.scrollTop = messages.scrollHeight; - } - }); - } - - chatHistory.push(`{"role":"assistant", "content":"${currentMsg.innerText.replace(/\n/g, '\\n').replace(/"/g, '\\"')}"}`); - } - else if (response.status != 0) { - const d = new TextDecoder('utf8'); - const reader = response.body.getReader(); - const { value, done } = await reader.read(); - error = JSON.parse(d.decode(value)).error; - console.error(error); - - if (error.type == 'context_length_exceeded') { - loadingText.textContent = 'لقد تم تجاوز الحد الأقصى لطول المحادثة!'; - } - else if (response.status == 429) { - loadingText.textContent = 'لقد تم تجاوز الحد الأقصى لسرعة السؤال، انتظر ٢٠ ثانية ثم حاول مرة أخرى'; - } - else if (response.status == 423) { - const kalemgptAdHolder = document.createElement('div'); - kalemgptAdHolder.style.textAlign = 'center'; - - const homeKalemgptImg = document.createElement('img'); - homeKalemgptImg.src = 'home-kalemgpt.png'; - homeKalemgptImg.className = 'kalemgpt-img'; - homeKalemgptImg.onclick = () => { window.open('https://kalemgpt.com/', '_blank') } - - const chatKalemgptImg = document.createElement('img'); - chatKalemgptImg.src = 'chat-kalemgpt.png'; - chatKalemgptImg.className = 'kalemgpt-img'; - chatKalemgptImg.onclick = () => { window.open('https://kalemgpt.com/chat', '_blank') } - - loadingText.innerHTML = 'نفدت عمليات إكمال الدردشة الخاصة بك ، استخدم kalemgpt.com للحصول على المزيد من الدردشة مجانًا '; - document.getElementById('kalemgpt-text-link').onclick = () => { window.open('https://kalemgpt.com/', '_blank') } - - kalemgptAdHolder.appendChild(homeKalemgptImg); - kalemgptAdHolder.appendChild(chatKalemgptImg); - messages.appendChild(kalemgptAdHolder); - - setTimeout(() => { canSend = false }, 500); - } - else { - loadingText.textContent = `لقد حدث خطأ ما! 
جرب إعادة تحميل الموقع \n ${error.type}`; - } - - loadingText.style.background = 'linear-gradient(135deg, rgb(100, 50, 50), rgb(90, 40, 40))' - messages.scrollTop = messages.scrollHeight; - } - stopGeneratingButton.style.display = 'none'; - }).catch(error => { - console.error(error); - clearInterval(loadingAnimation); - loadingText.textContent = error; - messages.scrollTop = messages.scrollHeight; - stopGeneratingButton.style.display = 'none'; - loadingText.style.background = 'linear-gradient(135deg, rgb(100, 50, 50), rgb(90, 40, 40))'; - }); - - stopGeneratingButton.style.display = 'none'; - continueMsg = true; - currentMsg = null; - canSend = true; -} - -function addMessage(text, isntUser) { - const message = document.createElement("div"); - const textP = document.createElement('p'); - let svg; - - if (isntUser) { - currentMsg = textP; - svg = document.getElementById('bot-svg').cloneNode(true); - message.style.background = '#183d4d' - } else { - svg = document.getElementById('human-svg').cloneNode(true); - } - - message.className = 'message'; - textP.className = 'message-text'; - textP.innerText = text; - svg.style.display = 'block'; - message.appendChild(svg); - message.appendChild(textP); - - messages.insertBefore(message, messages.lastElementChild); -} - -sendBtn.addEventListener("click", () => { - if (canSend && userInput.value) { - sendPrompt(userInput.value) - userInput.style.height = '23px'; - userInput.value = ''; - - loadingText.style.display = 'block'; - loadingText.textContent = 'جاري التحميل'; - stopGeneratingButton.style.display = 'inline-flex'; - loadingText.style.background = 'linear-gradient(135deg, rgb(50, 50, 55), rgb(40, 40, 45))' - messages.scrollTop = messages.scrollHeight; - - let dotCount = 0; - loadingAnimation = setInterval(() => { - dotCount = (dotCount + 1) % 4; - let dots = '.'.repeat(dotCount); - loadingText.textContent = 'جاري التحميل' + dots; - }, 300); - } -}); - -userInput.addEventListener("keydown", (event) => { - if (event.key === "Enter" && !event.shiftKey) { - event.preventDefault(); - sendBtn.dispatchEvent(new Event('click')); - } -}); - -newChatButton.addEventListener("click", (e) => { - e.preventDefault(); - if (canSend){ - chatHistory = []; - currentMsg = null; - messages.innerHTML = ''; - clearInterval(loadingAnimation); - loadingText.textContent = ''; - messages.appendChild(loadingText) - loadingText.style.display = 'none'; - stopGeneratingButton.style.display = 'none'; - loadingText.style.background = 'linear-gradient(135deg, rgb(50, 50, 55), rgb(40, 40, 45))'; - } -}); - -stopGeneratingButton.addEventListener("click", (e) => { - e.preventDefault(); - if (messages.children.length > 1) { - canSend = true; - continueMsg = false; - loadingText.textContent = ''; - clearInterval(loadingAnimation); - loadingText.style.display = 'none'; - stopGeneratingButton.style.display = 'none'; - loadingText.style.background = 'linear-gradient(135deg, rgb(50, 50, 55), rgb(40, 40, 45))'; - chatHistory.push(`{"role":"assistant", "content":"${currentMsg.innerText.replace(/\n/g, '\\n').replace(/"/g, '\\"')}"}`); - currentMsg = null; - } -}); - -userInput.addEventListener("input", (event) => { - sendBtn.setAttribute('can-send', (userInput.value != '' && canSend).toString()) - userInput.style.height = '1px'; - const newHeight = Math.min(userInput.scrollHeight - 19.5, 100); - userInput.style.height = `${newHeight}px`; -}); - -function isMobile() { - return ( - /Android|webOS|iPhone|iPad|iPod|BlackBerry|IEMobile|Opera Mini/i.test( - navigator.userAgent - ) - ); -} - 
-function isMobileSafari() { - return ( - navigator.userAgent.match(/(iPod|iPhone|iPad)/) && - navigator.userAgent.match(/AppleWebKit/) - ); -} - -function estimateBottomMenuHeight() { - if (isMobileSafari()) { - userInput.style.background = 'red'; - const screenHeight = window.innerHeight; - if (screenHeight >= 812) { - return 34; - } else { - return 44; - } - } else if (isMobile()) { - userInput.style.background = 'blue'; - return 204; - } else { - return 0; - } -} - -const bottomMenuHeight = estimateBottomMenuHeight(); -if (bottomMenuHeight > 0) { - chatContainer.style.height = `calc(95vh - ${bottomMenuHeight}px)`; -} \ No newline at end of file diff --git a/spaces/abrar-lohia/text-2-character-anim/VQTrans/visualize/joints2smpl/src/smplify.py b/spaces/abrar-lohia/text-2-character-anim/VQTrans/visualize/joints2smpl/src/smplify.py deleted file mode 100644 index 580efef98dfdcf6e7486b7f5c5436820edfb6c4b..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/VQTrans/visualize/joints2smpl/src/smplify.py +++ /dev/null @@ -1,279 +0,0 @@ -import torch -import os, sys -import pickle -import smplx -import numpy as np - -sys.path.append(os.path.dirname(__file__)) -from customloss import (camera_fitting_loss, - body_fitting_loss, - camera_fitting_loss_3d, - body_fitting_loss_3d, - ) -from prior import MaxMixturePrior -from visualize.joints2smpl.src import config - - - -@torch.no_grad() -def guess_init_3d(model_joints, - j3d, - joints_category="orig"): - """Initialize the camera translation via triangle similarity, by using the torso joints . - :param model_joints: SMPL model with pre joints - :param j3d: 25x3 array of Kinect Joints - :returns: 3D vector corresponding to the estimated camera translation - """ - # get the indexed four - gt_joints = ['RHip', 'LHip', 'RShoulder', 'LShoulder'] - gt_joints_ind = [config.JOINT_MAP[joint] for joint in gt_joints] - - if joints_category=="orig": - joints_ind_category = [config.JOINT_MAP[joint] for joint in gt_joints] - elif joints_category=="AMASS": - joints_ind_category = [config.AMASS_JOINT_MAP[joint] for joint in gt_joints] - else: - print("NO SUCH JOINTS CATEGORY!") - - sum_init_t = (j3d[:, joints_ind_category] - model_joints[:, gt_joints_ind]).sum(dim=1) - init_t = sum_init_t / 4.0 - return init_t - - -# SMPLIfy 3D -class SMPLify3D(): - """Implementation of SMPLify, use 3D joints.""" - - def __init__(self, - smplxmodel, - step_size=1e-2, - batch_size=1, - num_iters=100, - use_collision=False, - use_lbfgs=True, - joints_category="orig", - device=torch.device('cuda:0'), - ): - - # Store options - self.batch_size = batch_size - self.device = device - self.step_size = step_size - - self.num_iters = num_iters - # --- choose optimizer - self.use_lbfgs = use_lbfgs - # GMM pose prior - self.pose_prior = MaxMixturePrior(prior_folder=config.GMM_MODEL_DIR, - num_gaussians=8, - dtype=torch.float32).to(device) - # collision part - self.use_collision = use_collision - if self.use_collision: - self.part_segm_fn = config.Part_Seg_DIR - - # reLoad SMPL-X model - self.smpl = smplxmodel - - self.model_faces = smplxmodel.faces_tensor.view(-1) - - # select joint joint_category - self.joints_category = joints_category - - if joints_category=="orig": - self.smpl_index = config.full_smpl_idx - self.corr_index = config.full_smpl_idx - elif joints_category=="AMASS": - self.smpl_index = config.amass_smpl_idx - self.corr_index = config.amass_idx - else: - self.smpl_index = None - self.corr_index = None - print("NO SUCH JOINTS CATEGORY!") - - # ---- 
get the man function here ------ - def __call__(self, init_pose, init_betas, init_cam_t, j3d, conf_3d=1.0, seq_ind=0): - """Perform body fitting. - Input: - init_pose: SMPL pose estimate - init_betas: SMPL betas estimate - init_cam_t: Camera translation estimate - j3d: joints 3d aka keypoints - conf_3d: confidence for 3d joints - seq_ind: index of the sequence - Returns: - vertices: Vertices of optimized shape - joints: 3D joints of optimized shape - pose: SMPL pose parameters of optimized shape - betas: SMPL beta parameters of optimized shape - camera_translation: Camera translation - """ - - # # # add the mesh inter-section to avoid - search_tree = None - pen_distance = None - filter_faces = None - - if self.use_collision: - from mesh_intersection.bvh_search_tree import BVH - import mesh_intersection.loss as collisions_loss - from mesh_intersection.filter_faces import FilterFaces - - search_tree = BVH(max_collisions=8) - - pen_distance = collisions_loss.DistanceFieldPenetrationLoss( - sigma=0.5, point2plane=False, vectorized=True, penalize_outside=True) - - if self.part_segm_fn: - # Read the part segmentation - part_segm_fn = os.path.expandvars(self.part_segm_fn) - with open(part_segm_fn, 'rb') as faces_parents_file: - face_segm_data = pickle.load(faces_parents_file, encoding='latin1') - faces_segm = face_segm_data['segm'] - faces_parents = face_segm_data['parents'] - # Create the module used to filter invalid collision pairs - filter_faces = FilterFaces( - faces_segm=faces_segm, faces_parents=faces_parents, - ign_part_pairs=None).to(device=self.device) - - - # Split SMPL pose to body pose and global orientation - body_pose = init_pose[:, 3:].detach().clone() - global_orient = init_pose[:, :3].detach().clone() - betas = init_betas.detach().clone() - - # use guess 3d to get the initial - smpl_output = self.smpl(global_orient=global_orient, - body_pose=body_pose, - betas=betas) - model_joints = smpl_output.joints - - init_cam_t = guess_init_3d(model_joints, j3d, self.joints_category).unsqueeze(1).detach() - camera_translation = init_cam_t.clone() - - preserve_pose = init_pose[:, 3:].detach().clone() - # -------------Step 1: Optimize camera translation and body orientation-------- - # Optimize only camera translation and body orientation - body_pose.requires_grad = False - betas.requires_grad = False - global_orient.requires_grad = True - camera_translation.requires_grad = True - - camera_opt_params = [global_orient, camera_translation] - - if self.use_lbfgs: - camera_optimizer = torch.optim.LBFGS(camera_opt_params, max_iter=self.num_iters, - lr=self.step_size, line_search_fn='strong_wolfe') - for i in range(10): - def closure(): - camera_optimizer.zero_grad() - smpl_output = self.smpl(global_orient=global_orient, - body_pose=body_pose, - betas=betas) - model_joints = smpl_output.joints - # print('model_joints', model_joints.shape) - # print('camera_translation', camera_translation.shape) - # print('init_cam_t', init_cam_t.shape) - # print('j3d', j3d.shape) - loss = camera_fitting_loss_3d(model_joints, camera_translation, - init_cam_t, j3d, self.joints_category) - loss.backward() - return loss - - camera_optimizer.step(closure) - else: - camera_optimizer = torch.optim.Adam(camera_opt_params, lr=self.step_size, betas=(0.9, 0.999)) - - for i in range(20): - smpl_output = self.smpl(global_orient=global_orient, - body_pose=body_pose, - betas=betas) - model_joints = smpl_output.joints - - loss = camera_fitting_loss_3d(model_joints[:, self.smpl_index], camera_translation, - init_cam_t, j3d[:, 
self.corr_index], self.joints_category) - camera_optimizer.zero_grad() - loss.backward() - camera_optimizer.step() - - # Fix camera translation after optimizing camera - # --------Step 2: Optimize body joints -------------------------- - # Optimize only the body pose and global orientation of the body - body_pose.requires_grad = True - global_orient.requires_grad = True - camera_translation.requires_grad = True - - # --- if we use the sequence, fix the shape - if seq_ind == 0: - betas.requires_grad = True - body_opt_params = [body_pose, betas, global_orient, camera_translation] - else: - betas.requires_grad = False - body_opt_params = [body_pose, global_orient, camera_translation] - - if self.use_lbfgs: - body_optimizer = torch.optim.LBFGS(body_opt_params, max_iter=self.num_iters, - lr=self.step_size, line_search_fn='strong_wolfe') - for i in range(self.num_iters): - def closure(): - body_optimizer.zero_grad() - smpl_output = self.smpl(global_orient=global_orient, - body_pose=body_pose, - betas=betas) - model_joints = smpl_output.joints - model_vertices = smpl_output.vertices - - loss = body_fitting_loss_3d(body_pose, preserve_pose, betas, model_joints[:, self.smpl_index], camera_translation, - j3d[:, self.corr_index], self.pose_prior, - joints3d_conf=conf_3d, - joint_loss_weight=600.0, - pose_preserve_weight=5.0, - use_collision=self.use_collision, - model_vertices=model_vertices, model_faces=self.model_faces, - search_tree=search_tree, pen_distance=pen_distance, filter_faces=filter_faces) - loss.backward() - return loss - - body_optimizer.step(closure) - else: - body_optimizer = torch.optim.Adam(body_opt_params, lr=self.step_size, betas=(0.9, 0.999)) - - for i in range(self.num_iters): - smpl_output = self.smpl(global_orient=global_orient, - body_pose=body_pose, - betas=betas) - model_joints = smpl_output.joints - model_vertices = smpl_output.vertices - - loss = body_fitting_loss_3d(body_pose, preserve_pose, betas, model_joints[:, self.smpl_index], camera_translation, - j3d[:, self.corr_index], self.pose_prior, - joints3d_conf=conf_3d, - joint_loss_weight=600.0, - use_collision=self.use_collision, - model_vertices=model_vertices, model_faces=self.model_faces, - search_tree=search_tree, pen_distance=pen_distance, filter_faces=filter_faces) - body_optimizer.zero_grad() - loss.backward() - body_optimizer.step() - - # Get final loss value - with torch.no_grad(): - smpl_output = self.smpl(global_orient=global_orient, - body_pose=body_pose, - betas=betas, return_full_pose=True) - model_joints = smpl_output.joints - model_vertices = smpl_output.vertices - - final_loss = body_fitting_loss_3d(body_pose, preserve_pose, betas, model_joints[:, self.smpl_index], camera_translation, - j3d[:, self.corr_index], self.pose_prior, - joints3d_conf=conf_3d, - joint_loss_weight=600.0, - use_collision=self.use_collision, model_vertices=model_vertices, model_faces=self.model_faces, - search_tree=search_tree, pen_distance=pen_distance, filter_faces=filter_faces) - - vertices = smpl_output.vertices.detach() - joints = smpl_output.joints.detach() - pose = torch.cat([global_orient, body_pose], dim=-1).detach() - betas = betas.detach() - - return vertices, joints, pose, betas, camera_translation, final_loss diff --git a/spaces/alamin655/websurfx/src/models/mod.rs b/spaces/alamin655/websurfx/src/models/mod.rs deleted file mode 100644 index 6a7d2353254bc42185e9e790929afa401984c22e..0000000000000000000000000000000000000000 --- a/spaces/alamin655/websurfx/src/models/mod.rs +++ /dev/null @@ -1,8 +0,0 @@ -//! 
This module provides modules which in turn provides various models for aggregrating search -//! results, parsing config file, providing trait to standardize search engine handling code, -//! custom engine error for the search engine, etc. - -pub mod aggregation_models; -pub mod engine_models; -pub mod parser_models; -pub mod server_models; diff --git a/spaces/aliabd/SummerTime/model/third_party/HMNet/DataLoader/infinibatch/docs/infinibatch/index.html b/spaces/aliabd/SummerTime/model/third_party/HMNet/DataLoader/infinibatch/docs/infinibatch/index.html deleted file mode 100644 index b121c03951b6400592ed517bb0b6d8c94ff2b842..0000000000000000000000000000000000000000 --- a/spaces/aliabd/SummerTime/model/third_party/HMNet/DataLoader/infinibatch/docs/infinibatch/index.html +++ /dev/null @@ -1,629 +0,0 @@ - - - - - - -infinibatch API documentation - - - - - - - - - -
    -
    -
    -

    Module infinibatch

    -
    -
    -

    Infinibatch is a library of checkpointable iterators for randomized data loading of massive data sets in deep neural network training.

    -

    Features

    -
      -
    • support for corpora much larger than fit into RAM
    • -
    • hierarchical block+sentence-level randomization over the whole corpus, different randomization in each epoch
    • -
    • only load the data that is needed
    • -
    • very fast start-up time (does not need to read full corpus)
    • -
    • only requires the most basic of data preparation (e.g. no indexing)
    • -
    • for multi-GPU, only load what the respective GPU needs
    • -
    • 100% accurate check-pointing, restore from checkpoint should not read all data up to the checkpoint
    • -
    • support automatic bucketed batching with dynamic batch sizes
    • -
    • pre-fetching thread
    • -
• composable, so as to support complex batching, e.g. negative samples from multiple documents
    • -
    -

    Getting Started

    -

    Infinibatch requires Python 3.5 and has no dependencies. -There is presently no pip package. -To install it, please copy this library into a subfolder in your project:

    -
    cd YOUR_PROJECT_FOLDER
    -git clone <https://msasg.visualstudio.com/DefaultCollection/SDRG/_git/infinibatch>
    -
    -

    or, better, as a submodule reference:

    -
    git submodule add <https://msasg.visualstudio.com/DefaultCollection/SDRG/_git/infinibatch>
    -
    -

    It is now located at infinibatch/infinibatch, e.g. the main import file is infinibatch/infinibatch/__init__.py.

    -

    To import it, you need to add that folder to your PYTHONPATH variable externally, or to sys.path inside the code:

    -
    import sys
    -sys.path.insert(0,'infinibatch')  # note: relative paths are relative to your current dir, not to the python script
    -import infinibatch
    -
    -

    Tutorial

    -

    This little tutorial walks you through the steps of preparing your data and consuming them from Python code as batches.

    -

    Infinibatch Basics: Iterators and Checkpointing

    -

    Infinibatch provides Python iterators -to read your data. -An iterator represents a stream of data that can be retrieved item by item, e.g. via a -for loop or repeatedly calling next() on it.

    -

    Infinibatch is agnostic to the data type of the items, which is determined by a user-supplied file-read function. -In NLP applications, items would typically be tuples of text. In other applications, -they can be images or an audio file with a textual annotation.

    -

    Infinibatch makes it easy to read your data in randomized order, and supports checkpointing, which allows you to restart training exactly where you left off.

    -

    Randomization is done on the fly, which means that it is not necessary to read the entire data set into memory -to be shuffled. Infinibatch implements a hierarchical shuffling algorithm -that only holds a subset of the data in RAM at any point in time.

    -

    Infinibatch iterators are checkpointable. -Checkpointing lets you retrieve the current position (the "checkpoint") in the data stream at any time, so that -later, you can "rewind" to that same position. -The sad reality is that long-running trainings occasionally crash. -To be able to continue a crashed training as if it had not crashed, -save your Infinibatch iterator's checkpoint to disk whenever you save an intermediate model during training. -To restart a crashed training, reset the iterator to the saved checkpoint. -The data reader will now yield the exact same data-item sequence it would have yielded without the crash.
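For concreteness, here is a minimal sketch of that save/restore cycle, using the same chunked_dataset_iterator construction as in the tutorial below. The getstate()/setstate() method names are an assumption based on the checkpointable-iterator interface; adapt them if your copy of the library differs.

```python
import sys, gzip, glob
sys.path.insert(0, 'infinibatch')
from infinibatch import datasets as ds

# same iterator construction as in the tutorial below
data = ds.chunked_dataset_iterator(
    chunk_refs    = glob.glob('corpus_chunks/corpus.*.txt.gz'),
    read_chunk_fn = lambda path: iter(gzip.decompress(open(path, "rb").read())
                                      .decode(encoding='utf-8').splitlines()),
    buffer_size   = 6, seed = 1)

for _ in range(3):
    next(data)                    # consume a few items during "training"

checkpoint = data.getstate()      # assumed API: snapshot of the current position
# ... persist `checkpoint` to disk together with your model weights ...

# after a crash: rebuild the identical iterator (same arguments, same seed),
# then rewind it to the saved position before resuming training
data.setstate(checkpoint)         # assumed API: next(data) now yields the 4th item
```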

    -

    Data Preparation

    -

    Infinibatch has one requirement on your data organization: -To use your data with Infinibatch, it must be split into a large number of small chunks. -A chunk is the smallest unit of data that is loaded from disk into RAM. Infinibatch holds a random subset of chunks in memory -that it randomly draws samples from.

    -

    Below we want to show how such a split can be created. An easy way to split your data into chunks is with the Linux split command.

    -

    In this tutorial, our "corpus" consists of 6 lines of text, where each line is one data item. -To create that corpus, please run this command in a bash shell. It creates a 6-line text file named corpus.txt:

    -
    echo \
    -'Lorem ipsum dolor sit amet,
    -consectetur adipiscing elit,
    -sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
    -Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
    -Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
    -The quick brown fox jumps over the lazy dog.' \
    -> corpus.txt
    -
    -

    Now let us split it into 3 chunks of 2 lines each. Each chunk is stored as a zipped text file. -We will create them inside a new subdirectory called corpus_chunks:

    -
    mkdir corpus_chunks
    -split  --lines 2  --numeric-suffixes                 \
    -       --filter 'gzip > corpus_chunks/$FILE.txt.gz'  \
    -       corpus.txt  corpus.
    -
    -

    This will have created three files: corpus_chunks/corpus.00.txt.gz, corpus_chunks/corpus.01.txt.gz, and corpus_chunks/corpus.02.txt.gz. -To verify whether the data has been split as expected, you can use this command:

    -
    zcat corpus_chunks/corpus.*.txt.gz
    -
    -

    Hint: For large corpora, we recommend replacing gzip by pigz (apt-get install pigz), which runs notably faster via multi-threading.

    -

    Reading Items in Random Order With Infinibatch

    -

We will first show the easiest way to read data with Infinibatch, using the helper function chunked_dataset_iterator(). -This function will create an Infinibatch iterator that yields the content of your data in random order. -Please run the following program:

    -
    import sys, gzip, glob
    -sys.path.insert(0,'infinibatch')
    -from infinibatch import datasets as ds
    -
    -ds = ds.chunked_dataset_iterator(
    -    chunk_refs = glob.glob('corpus_chunks/corpus.*.txt.gz'),
    -    read_chunk_fn = lambda path: iter(gzip.decompress(open(path, "rb")  \
    -                                      .read()).decode(encoding='utf-8') \
    -                                      .splitlines()),
    -    buffer_size = 6, seed = 1)
    -
    -for i in range(10):
    -    print(next(ds))
    -
    -

    You should get output that contains the 6 example lines in randomized order:

    -
    Lorem ipsum dolor sit amet,
    -consectetur adipiscing elit,
    -Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
    -Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
    -The quick brown fox jumps over the lazy dog.
    -sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
    -consectetur adipiscing elit,
    -Lorem ipsum dolor sit amet,
    -The quick brown fox jumps over the lazy dog.
    -sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
    -
    -

    Note: The buffer_size parameter determines how many sentences are read into memory at any given time, -to draw randomized items from. In real settings with corpora of hundreds of millions of text lines, -the buffer_size parameter should be set in the millions. -RAM usage and startup time will be proportional to the buffer size -(but much lower than having to load the entire corpus into RAM).

    -

    Reading Items of Different Lengths in Batches

    -

    For deep learning, we want to group multiple items into batches. -For NLP tasks, items are often lines of text of varying length. -Infinibatch implements an algorithm that randomizes the input sequence and groups it into -batches of approximately the same length (aka bucketing).

    -

    Infinibatch's BucketedReadaheadBatchIterator performs this task. -It implements an algorithm modeled after the Marian toolkit -that preloads a large number of randomized items (typically millions; in this example: 6), -sorts them and groups them into batches of similar length, and then yields -them, in turn, in randomized order.

    -

Here is an example. Note that the BucketedReadaheadBatchIterator accepts -the previous randomized sentence sequence iterator (ds) as the source of items to randomize over. -This is an example of how one forms pipelines of iterators with Infinibatch -(a concept familiar from Python's own itertools). -Once an iterator is passed to another as its source, consider it owned by that other iterator; -it must no longer be accessed by the calling code.

    -
    import sys, gzip, glob
    -sys.path.insert(0,'infinibatch')
    -from infinibatch import datasets as ds
    -from infinibatch import iterators as it
    -
    -ds = ds.chunked_dataset_iterator(
    -    chunk_refs = glob.glob('corpus_chunks/corpus.*.txt.gz'),
    -    read_chunk_fn = lambda path: iter(gzip.decompress(open(path, "rb")  \
    -                                      .read()).decode(encoding='utf-8') \
    -                                      .splitlines()),
    -    buffer_size = 6, seed = 1)
    -
    -bs = it.BucketedReadaheadBatchIterator(
    -    source_iterator = ds,   # note: this is the iterator from above
    -    read_ahead = 6,
    -    key = lambda line: len(line),
    -    batch_size = 2,
    -    seed = 1)
    -
    -for i in range(25):
    -    print(next(bs))
    -
    -

    This code should output something like this:

    -
    ['sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.',
    - 'The quick brown fox jumps over the lazy dog.']
    -['consectetur adipiscing elit,', 'Lorem ipsum dolor sit amet,']
    -['Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.',
    - 'Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.']
    -
    -

    followed by different permutations of the same tuples. -As you can see, the sentences are in random order and grouped in batches of 2 of approximately the same length. -You may notice that there is no variation in how the items get grouped into batches–that -is an artifact of this example, and generally not the case in real use when the data size is much larger -than the batch size.

    -

In NLP, sentence length often varies considerably. As a result, using batches of a fixed number of lines, -as in the example above, will waste GPU RAM and cores. -This is because the number of lines is limited by the longest possible sequence; batches of shorter lines -would leave GPU cycles on the table. -Ideally, one would use batches that have as many lines as fit into GPU RAM, -given the number of tokens of the longest line in the batch. -To support variable batch sizes, Infinibatch allows you to pass a function as the batch_size parameter. -That function will be given the longest item of a batch and should estimate how many items of at most this length can fit.

    -

    In our example, we assume that batches can hold at most 150 tokens. -Please change the above code as follows:

    -
        batch_size = lambda longest_line: 150 // len(longest_line),
    -
    -

    The output looks like this:

    -
    ['consectetur adipiscing elit,', 'Lorem ipsum dolor sit amet,']
    -['Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.']
    -['sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.',
    - 'The quick brown fox jumps over the lazy dog.']
    -['Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.']
    -
    -

Note that shorter sentences got grouped, while longer ones did not because they would exceed the total of 150 characters.

    -

    Reading Batches Into Numpy Arrays

    -

    Lastly, we will need to feed batches into our favorite deep-learning tool. -We will show how to convert the batches of text lines into padded numpy arrays.

    -

    In a typical NLP application, text items would be tokenized, and then each token -would be represented by an index into a unit vocabulary. -For simplicity, in this example each character is its own token, -and each token's numeric unit index is just its ASCII code. -These sequences are then padded to equal length with -1, and converted into a numpy array.

    -

    Please rerun the previous example, but first insert the following code before the final for loop. -This example uses an Infinibatch MapIterator, which applies a user-supplied function or -lambda to each item:

    -
    import numpy as np
    -def collate(lines_batch):
    -    # tokenize all lines in the batch and map to unit ids
    -    ids_batch = [[ord(c) for c in line] for line in lines_batch]
    -    # create a padded numpy array as wide as the longest line,
    -    # where shorter sequences are padded with -1
    -    width = max(len(ids) for ids in ids_batch)
    -    return np.array([ids + [-1] * (width-len(ids)) for ids in ids_batch])
    -
    -bs = it.MapIterator(
    -    source_iterator = bs,
    -    transform = collate)
    -
    -

    This will output batches like this. Note that in batches with multiple sentences, -some entries are padded with -1.

    -
    [[ 99 111 110 115 101  99 116 101 116 117 114  32  97 100 105 112 105 115
    -   99 105 110 103  32 101 108 105 116  44]
    - [ 76 111 114 101 109  32 105 112 115 117 109  32 100 111 108 111 114  32
    -  115 105 116  32  97 109 101 116  44  -1]]
    -[[ 85 116  32 101 110 105 109  32  97 100  32 109 105 110 105 109  32 118
    -  101 110 105  97 109  44  32 113 117 105 115  32 110 111 115 116 114 117
    -  100  32 101 120 101 114  99 105 116  97 116 105 111 110  32 117 108 108
    -   97 109  99 111  32 108  97  98 111 114 105 115  32 110 105 115 105  32
    -  117 116  32  97 108 105 113 117 105 112  32 101 120  32 101  97  32  99
    -  111 109 109 111 100 111  32  99 111 110 115 101 113 117  97 116  46]]
    -[[115 101 100  32 100 111  32 101 105 117 115 109 111 100  32 116 101 109
    -  112 111 114  32 105 110  99 105 100 105 100 117 110 116  32 117 116  32
    -  108  97  98 111 114 101  32 101 116  32 100 111 108 111 114 101  32 109
    -   97 103 110  97  32  97 108 105 113 117  97  46]
    - [ 84 104 101  32 113 117 105  99 107  32  98 114 111 119 110  32 102 111
    -  120  32 106 117 109 112 115  32 111 118 101 114  32 116 104 101  32 108
    -   97 122 121  32 100 111 103  46  -1  -1  -1  -1  -1  -1  -1  -1  -1  -1
    -   -1  -1  -1  -1  -1  -1  -1  -1  -1  -1  -1  -1]]
    -[[ 68 117 105 115  32  97 117 116 101  32 105 114 117 114 101  32 100 111
    -  108 111 114  32 105 110  32 114 101 112 114 101 104 101 110 100 101 114
    -  105 116  32 105 110  32 118 111 108 117 112 116  97 116 101  32 118 101
    -  108 105 116  32 101 115 115 101  32  99 105 108 108 117 109  32 100 111
    -  108 111 114 101  32 101 117  32 102 117 103 105  97 116  32 110 117 108
    -  108  97  32 112  97 114 105  97 116 117 114  46]]
    -
    -

    Where To Go From Here

    -

    The above tutorial showed you the use of the most common iterator type, as created by the -convenience function chunked_dataset_iterator().

    -

Not all real-life scenarios are covered by this function. For example, multi-task learning -scenarios require more complex combinations of data. To create those, you will need -to compose the necessary data reader from the underlying building blocks. -This is described in the documentation of the module infinibatch.iterators.
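As a starting point, the building blocks already used in this tutorial compose freely; the sketch below simply chains them inside one helper. The make_pipeline name and the trivial upper-casing transform are illustrative only, not part of the library, and more specialized iterators from infinibatch.iterators can be slotted into the same chain.

```python
import sys, gzip, glob
sys.path.insert(0, 'infinibatch')
from infinibatch import datasets as ds
from infinibatch import iterators as it

def make_pipeline(chunk_glob, buffer_size=6, read_ahead=6, max_tokens=150, seed=1):
    """Illustrative helper: chain the tutorial's iterators into a single pipeline."""
    # stage 1: randomized stream of lines from the chunked corpus
    source = ds.chunked_dataset_iterator(
        chunk_refs    = glob.glob(chunk_glob),
        read_chunk_fn = lambda path: iter(gzip.decompress(open(path, "rb").read())
                                          .decode(encoding='utf-8').splitlines()),
        buffer_size   = buffer_size, seed = seed)
    # stage 2: bucketed batches with a dynamic, length-dependent batch size
    batches = it.BucketedReadaheadBatchIterator(
        source_iterator = source,
        read_ahead      = read_ahead,
        key             = lambda line: len(line),
        batch_size      = lambda longest: max_tokens // len(longest),
        seed            = seed)
    # stage 3: per-batch post-processing, e.g. the collate() function shown earlier
    return it.MapIterator(source_iterator = batches,
                          transform = lambda batch: [line.upper() for line in batch])

pipeline = make_pipeline('corpus_chunks/corpus.*.txt.gz')
print(next(pipeline))
```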

    -
    - -Expand source code - -
    """
    -Infinibatch is a library of checkpointable iterators for randomized data loading of massive data sets in deep neural network training.
    -
    -
    -## Features
    -
    -  * support for corpora much larger than fit into RAM
    -  * hierarchical block+sentence-level randomization over the whole corpus, different randomization in each epoch
    -  * only load the data that is needed
    -  * very fast start-up time (does not need to read full corpus)
    -  * only requires the most basic of data preparation (e.g. no indexing)
    -  * for multi-GPU, only load what the respective GPU needs
-  * 100% accurate check-pointing; restoring from a checkpoint does not require reading all data up to the checkpoint
    -  * support automatic bucketed batching with dynamic batch sizes
    -  * pre-fetching thread
-  * composable, so as to support complex batching, e.g. negative samples from multiple documents
    -
    -
    -## Getting Started
    -
    -Infinibatch requires Python 3.5 and has no dependencies.
    -There is presently no pip package.
    -To install it, please copy this library into a subfolder in your project:
    -```bash
    -cd YOUR_PROJECT_FOLDER
    -git clone https://msasg.visualstudio.com/DefaultCollection/SDRG/_git/infinibatch
    -```
    -or, better, as a submodule reference:
    -```bash
    -git submodule add https://msasg.visualstudio.com/DefaultCollection/SDRG/_git/infinibatch
    -```
    -It is now located at `infinibatch/infinibatch`, e.g. the main import file is `infinibatch/infinibatch/__init__.py`.
    -
    -To import it, you need to add that folder to your `PYTHONPATH` variable externally, or to `sys.path` inside the code:
    -```python
    -import sys
    -sys.path.insert(0,'infinibatch')  # note: relative paths are relative to your current dir, not to the python script
    -import infinibatch
    -```
    -
    -## Tutorial
    -
    -This little tutorial walks you through the steps of preparing your data and consuming them from Python code as batches.
    -
    -### Infinibatch Basics: Iterators and Checkpointing
    -
    -Infinibatch provides [Python iterators](https://docs.python.org/3.5/glossary.html#term-iterator)
    -to read your data.
    -An iterator represents a stream of data that can be retrieved item by item, e.g. via a
    -`for` loop or repeatedly calling `next()` on it.
    -
    -Infinibatch is agnostic to the data type of the items, which is determined by a user-supplied file-read function.
    -In NLP applications, items would typically be tuples of text. In other applications,
    -they can be images or an audio file with a textual annotation.
    -
    -Infinibatch makes it easy to read your data in randomized order, and supports checkpointing, which allows you to restart training exactly where you left off.
    -
    -Randomization is done _on the fly_, which means that it is not necessary to read the entire data set into memory
    -to be shuffled. Infinibatch implements a hierarchical shuffling algorithm
    -that only holds a subset of the data in RAM at any point in time.
    -
    -Infinibatch iterators are _checkpointable_.
    -Checkpointing lets you retrieve the current position (the "checkpoint") in the data stream at any time, so that
    -later, you can "rewind" to that same position.
    -The sad reality is that long-running trainings occasionally crash.
    -To be able to continue a crashed training as if it had not crashed,
    -save your Infinibatch iterator's checkpoint to disk whenever you save an intermediate model during training.
    -To restart a crashed training, reset the iterator to the saved checkpoint.
    -The data reader will now yield the exact same data-item sequence it would have yielded without the crash.
    -
    -### Data Preparation
    -
    -Infinibatch has one requirement on your data organization:
    -To use your data with Infinibatch, it must be split into a large number of small chunks.
    -A chunk is the smallest unit of data that is loaded from disk into RAM. Infinibatch holds a random subset of chunks in memory
    -that it randomly draws samples from.
    -
    -Below we want to show how such a split can be created. An easy way to split your data into chunks is with the Linux `split` command.
    -
    -In this tutorial, our "corpus" consists of 6 lines of text, where each line is one data item.
    -To create that corpus, please run this command in a bash shell. It creates a 6-line text file named `corpus.txt`:
    -```bash
    -echo \\
    -'Lorem ipsum dolor sit amet,
    -consectetur adipiscing elit,
    -sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
    -Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
    -Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
    -The quick brown fox jumps over the lazy dog.' \\
    -> corpus.txt
    -```
    -Now let us split it into 3 chunks of 2 lines each. Each chunk is stored as a zipped text file.
    -We will create them inside a new subdirectory called `corpus_chunks`:
    -```bash
    -mkdir corpus_chunks
    -split  --lines 2  --numeric-suffixes                 \\
    -       --filter 'gzip > corpus_chunks/$FILE.txt.gz'  \\
    -       corpus.txt  corpus.
    -```
    -This will have created three files: `corpus_chunks/corpus.00.txt.gz`, `corpus_chunks/corpus.01.txt.gz`, and `corpus_chunks/corpus.02.txt.gz`.
    -To verify whether the data has been split as expected, you can use this command:
    -```bash
    -zcat corpus_chunks/corpus.*.txt.gz
    -```
    -
    -Hint: For large corpora, we recommend replacing `gzip` by `pigz` (`apt-get install pigz`), which runs notably faster via multi-threading.
    -
    -### Reading Items in Random Order With Infinibatch
    -
-We will first show the easiest way to read data with Infinibatch, using the helper function `chunked_dataset_iterator()`.
    -This function will create an Infinibatch iterator that yields the content of your data in random order.
-Please run the following program:
    -```python
    -import sys, gzip, glob
    -sys.path.insert(0,'infinibatch')
    -from infinibatch import datasets as ds
    -
    -ds = ds.chunked_dataset_iterator(
    -    chunk_refs = glob.glob('corpus_chunks/corpus.*.txt.gz'),
    -    read_chunk_fn = lambda path: iter(gzip.decompress(open(path, "rb")  \\
    -                                      .read()).decode(encoding='utf-8') \\
    -                                      .splitlines()),
    -    buffer_size = 6, seed = 1)
    -
    -for i in range(10):
    -    print(next(ds))
    -```
    -You should get output that contains the 6 example lines in randomized order:
    -```text
    -Lorem ipsum dolor sit amet,
    -consectetur adipiscing elit,
    -Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
    -Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.
    -The quick brown fox jumps over the lazy dog.
    -sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
    -consectetur adipiscing elit,
    -Lorem ipsum dolor sit amet,
    -The quick brown fox jumps over the lazy dog.
    -sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
    -```
    -Note: The `buffer_size` parameter determines how many sentences are read into memory at any given time,
    -to draw randomized items from. In real settings with corpora of hundreds of millions of text lines,
    -the `buffer_size` parameter should be set in the millions.
    -RAM usage and startup time will be proportional to the buffer size
    -(but much lower than having to load the entire corpus into RAM).
    -
    -### Reading Items of Different Lengths in Batches
    -
    -For deep learning, we want to group multiple items into batches.
    -For NLP tasks, items are often lines of text of varying length.
    -Infinibatch implements an algorithm that randomizes the input sequence and groups it into
    -batches of approximately the same length (aka _bucketing_).
    -
    -Infinibatch's `BucketedReadaheadBatchIterator` performs this task.
    -It implements an algorithm modeled after the [Marian toolkit](https://github.com/marian-nmt/marian)
    -that preloads a large number of randomized items (typically millions; in this example: 6),
    -sorts them and groups them into batches of similar length, and then yields
    -them, in turn, in randomized order.
    -
    -Here is an example. Note that the `BucketedReadaheadBatchIterator` accepts
    -the previous randomized sentence sequence iterator (`ds`) as the source of items to randomize over.
-This is an example of how one forms pipelines of iterators with Infinibatch
    -(a concept familiar from Python's own `itertools`).
-Once an iterator is passed to another as its source, consider it owned by that other iterator;
    -it must no longer be accessed by the calling code.
    -```python
    -import sys, gzip, glob
    -sys.path.insert(0,'infinibatch')
    -from infinibatch import datasets as ds
    -from infinibatch import iterators as it
    -
    -ds = ds.chunked_dataset_iterator(
    -    chunk_refs = glob.glob('corpus_chunks/corpus.*.txt.gz'),
    -    read_chunk_fn = lambda path: iter(gzip.decompress(open(path, "rb")  \\
    -                                      .read()).decode(encoding='utf-8') \\
    -                                      .splitlines()),
    -    buffer_size = 6, seed = 1)
    -
    -bs = it.BucketedReadaheadBatchIterator(
    -    source_iterator = ds,   # note: this is the iterator from above
    -    read_ahead = 6,
    -    key = lambda line: len(line),
    -    batch_size = 2,
    -    seed = 1)
    -
    -for i in range(25):
    -    print(next(bs))
    -```
    -This code should output something like this:
    -```python
    -['sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.',
    - 'The quick brown fox jumps over the lazy dog.']
    -['consectetur adipiscing elit,', 'Lorem ipsum dolor sit amet,']
    -['Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.',
    - 'Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.']
    -```
    -followed by different permutations of the same tuples.
    -As you can see, the sentences are in random order and grouped in batches of 2 of approximately the same length.
    -You may notice that there is no variation in how the items get grouped into batches--that
    -is an artifact of this example, and generally not the case in real use when the data size is much larger
    -than the batch size.
    -
    -In NLP, sentence length often varies considerably. As a result, using batches of a fixed number of lines,
    -as in the example above, will waste GPU RAM and cores.
    -This is because the number of lines is limited by the longest possible sequence; batches of shorter lines
    -would leave GPU cycles on the table.
    -Ideally, one would use batches that have as many lines as fit into GPU RAM,
    -given the number of tokens of the longest line in the batch.
-To support variable batch sizes, Infinibatch allows you to pass a function as the `batch_size` parameter.
    -That function will be given the longest item of a batch and should estimate how many items of at most this length can fit.
    -
    -In our example, we assume that batches can hold at most 150 tokens.
    -Please change the above code as follows:
    -```python
    -    batch_size = lambda longest_line: 150 // len(longest_line),
    -```
    -The output looks like this:
    -```
    -['consectetur adipiscing elit,', 'Lorem ipsum dolor sit amet,']
    -['Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.']
    -['sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.',
    - 'The quick brown fox jumps over the lazy dog.']
    -['Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur.']
    -```
-Note that shorter sentences got grouped, while longer ones did not because they would exceed the total of 150 characters.
    -
    -### Reading Batches Into Numpy Arrays
    -
    -Lastly, we will need to feed batches into our favorite deep-learning tool.
    -We will show how to convert the batches of text lines into padded `numpy` arrays.
    -
    -In a typical NLP application, text items would be tokenized, and then each token
    -would be represented by an index into a unit vocabulary.
    -For simplicity, in this example each character is its own token,
    -and each token's numeric unit index is just its ASCII code.
    -These sequences are then padded to equal length with -1, and converted into a `numpy` array.
    -
    -Please rerun the previous example, but first insert the following code before the final `for` loop.
    -This example uses an Infinibatch `MapIterator`, which applies a user-supplied function or
    -lambda to each item:
    -```python
    -import numpy as np
    -def collate(lines_batch):
    -    # tokenize all lines in the batch and map to unit ids
    -    ids_batch = [[ord(c) for c in line] for line in lines_batch]
    -    # create a padded numpy array as wide as the longest line,
    -    # where shorter sequences are padded with -1
    -    width = max(len(ids) for ids in ids_batch)
    -    return np.array([ids + [-1] * (width-len(ids)) for ids in ids_batch])
    -
    -bs = it.MapIterator(
    -    source_iterator = bs,
    -    transform = collate)
    -```
    -This will output batches like this. Note that in batches with multiple sentences,
    -some entries are padded with `-1`.
    -```python
    -[[ 99 111 110 115 101  99 116 101 116 117 114  32  97 100 105 112 105 115
    -   99 105 110 103  32 101 108 105 116  44]
    - [ 76 111 114 101 109  32 105 112 115 117 109  32 100 111 108 111 114  32
    -  115 105 116  32  97 109 101 116  44  -1]]
    -[[ 85 116  32 101 110 105 109  32  97 100  32 109 105 110 105 109  32 118
    -  101 110 105  97 109  44  32 113 117 105 115  32 110 111 115 116 114 117
    -  100  32 101 120 101 114  99 105 116  97 116 105 111 110  32 117 108 108
    -   97 109  99 111  32 108  97  98 111 114 105 115  32 110 105 115 105  32
    -  117 116  32  97 108 105 113 117 105 112  32 101 120  32 101  97  32  99
    -  111 109 109 111 100 111  32  99 111 110 115 101 113 117  97 116  46]]
    -[[115 101 100  32 100 111  32 101 105 117 115 109 111 100  32 116 101 109
    -  112 111 114  32 105 110  99 105 100 105 100 117 110 116  32 117 116  32
    -  108  97  98 111 114 101  32 101 116  32 100 111 108 111 114 101  32 109
    -   97 103 110  97  32  97 108 105 113 117  97  46]
    - [ 84 104 101  32 113 117 105  99 107  32  98 114 111 119 110  32 102 111
    -  120  32 106 117 109 112 115  32 111 118 101 114  32 116 104 101  32 108
    -   97 122 121  32 100 111 103  46  -1  -1  -1  -1  -1  -1  -1  -1  -1  -1
    -   -1  -1  -1  -1  -1  -1  -1  -1  -1  -1  -1  -1]]
    -[[ 68 117 105 115  32  97 117 116 101  32 105 114 117 114 101  32 100 111
    -  108 111 114  32 105 110  32 114 101 112 114 101 104 101 110 100 101 114
    -  105 116  32 105 110  32 118 111 108 117 112 116  97 116 101  32 118 101
    -  108 105 116  32 101 115 115 101  32  99 105 108 108 117 109  32 100 111
    -  108 111 114 101  32 101 117  32 102 117 103 105  97 116  32 110 117 108
    -  108  97  32 112  97 114 105  97 116 117 114  46]]
    -```
    -
    -## Where To Go From Here
    -
    -The above tutorial showed you the use of the most common iterator type, as created by the
    -convenience function `chunked_dataset_iterator()`.
    -
    -Not all real-life scenarios are covered by this function. For example, multi-task learning
    -scenarios require more complex combinations of data. To create those, you will need
    -to compose the necessary data reader from the underlying building blocks.
-This is described in the documentation of the module `iterators`.
    -"""
    -
    -
    -
    -

    Sub-modules

    -
    -
    infinibatch.closablequeue
    -
    -
    -
    -
    infinibatch.datasets
    -
    -
    -
    -
    infinibatch.iterators
    -
    -

    Overview …

    -
    -
    infinibatch.torch
    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    -
    - -
    - - - - - \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test146/app.py b/spaces/allknowingroger/Image-Models-Test146/app.py deleted file mode 100644 index 5685093b60284ae8875024cc199784862679b916..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test146/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "AndrzejDD/lora-trained-xl-colab-me", - "livingbox/model-test-aug-05-v2", - "AndrzejDD/lora-trained-xl-colab", - "livingbox/model-test-02", - "paras1/dog-images-in-different-backgrounds", - "archith/archit", - "kear24100712/piconai321", - "jalandhar-2004/my-project-jld", - "andrewparkk/train_dreambooth_lora_sdxl_model", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # 
improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test25/app.py b/spaces/allknowingroger/Image-Models-Test25/app.py deleted file mode 100644 index a2803039774b0bb66c2813329bf774708256f0fb..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test25/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "digiplay/unstableDiffusersYamerMIX_v3", - "hiddenbox/dog_emotion", - "digiplay/SoapMix2.5D_v2", - "iamkaikai/amazing-logos", - "Jade1211/textual_inversion_puppy", - "andressrg/textual_inversion_meal_0_100", - "Jade1211/textual_inversion_baby", - "whackett/textual_inversion_cat", - "XERO5000/my-pet-dog-and-cat", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - 
start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/ammarnasr/Sem-GAN-Bird-Image-Generator/models.py b/spaces/ammarnasr/Sem-GAN-Bird-Image-Generator/models.py deleted file mode 100644 index a95008abaa0578e22dd48e2e53c11e0103c5e03a..0000000000000000000000000000000000000000 --- a/spaces/ammarnasr/Sem-GAN-Bird-Image-Generator/models.py +++ /dev/null @@ -1,110 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - - -def upBlock(in_planes, out_planes): - block = nn.Sequential( - nn.Upsample(scale_factor=2, mode='nearest'), - conv3x3(in_planes, out_planes * 2), - nn.BatchNorm2d(out_planes * 2), - GLU()) - return block - - -def conv1x1(in_planes, out_planes): - return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=1, - padding=0, bias=False) - - -def conv3x3(in_planes, out_planes): - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=1, - padding=1, bias=False) - - - -def downBlock(in_planes, out_planes): - block = nn.Sequential( - nn.Conv2d(in_planes, out_planes, 4, 2, 1, bias=False), - nn.BatchNorm2d(out_planes), - nn.LeakyReLU(0.2, inplace=True) - ) - return block - -def Block3x3_leakRelu(in_planes, out_planes): - block = nn.Sequential( - conv3x3(in_planes, out_planes), - nn.BatchNorm2d(out_planes), - nn.LeakyReLU(0.2, inplace=True) - ) - return block - - -class GLU(nn.Module): - def __init__(self): - super(GLU, self).__init__() - - def forward(self, x): - nc = x.size(1) - assert nc % 2 == 0, 'channels dont divide 2!' 
- nc = int(nc / 2) - return x[:, :nc] * F.sigmoid(x[:, nc:]) - - - -class ResBlock(nn.Module): - def __init__(self, channel_num): - super(ResBlock, self).__init__() - self.block = nn.Sequential( - conv3x3(channel_num, channel_num * 2), - nn.BatchNorm2d(channel_num * 2), - GLU(), - conv3x3(channel_num, channel_num), - nn.BatchNorm2d(channel_num)) - - def forward(self, x): - residual = x - out = self.block(x) - out += residual - return out - - -class GlobalAttentionGeneral(nn.Module): - def __init__(self, idf, cdf): - super(GlobalAttentionGeneral, self).__init__() - self.conv_context = conv1x1(cdf, idf) - self.sm = nn.Softmax() - self.mask = None - - def applyMask(self, mask): - self.mask = mask # batch x sourceL - - def forward(self, input, context): - """ - input: batch x idf x ih x iw (queryL=ihxiw) - context: batch x cdf x sourceL - """ - ih, iw = input.size(2), input.size(3) - queryL = ih * iw - batch_size, sourceL = context.size(0), context.size(2) - - target = input.view(batch_size, -1, queryL) - targetT = torch.transpose(target, 1, 2).contiguous() - sourceT = context.unsqueeze(3) - sourceT = self.conv_context(sourceT).squeeze(3) - attn = torch.bmm(targetT, sourceT) - attn = attn.view(batch_size*queryL, sourceL) - if self.mask is not None: - mask = self.mask.repeat(queryL, 1) - attn.data.masked_fill_(mask.data, -float('inf')) - attn = self.sm(attn) # Eq. (2) - attn = attn.view(batch_size, queryL, sourceL) - attn = torch.transpose(attn, 1, 2).contiguous() - - weightedContext = torch.bmm(sourceT, attn) - weightedContext = weightedContext.view(batch_size, -1, ih, iw) - attn = attn.view(batch_size, -1, ih, iw) - - return weightedContext, attn - - \ No newline at end of file diff --git a/spaces/anakin87/fact-checking-rocks/app_utils/backend_utils.py b/spaces/anakin87/fact-checking-rocks/app_utils/backend_utils.py deleted file mode 100644 index 4c295301ed0703c03b2e93a8f91dc6f9e1293e2d..0000000000000000000000000000000000000000 --- a/spaces/anakin87/fact-checking-rocks/app_utils/backend_utils.py +++ /dev/null @@ -1,88 +0,0 @@ -import shutil -from typing import List - -from haystack import Document -from haystack.document_stores import FAISSDocumentStore -from haystack.nodes import EmbeddingRetriever, PromptNode -from haystack.pipelines import Pipeline -import streamlit as st - -from haystack_entailment_checker import EntailmentChecker -from app_utils.config import ( - STATEMENTS_PATH, - INDEX_DIR, - RETRIEVER_MODEL, - RETRIEVER_MODEL_FORMAT, - NLI_MODEL, - PROMPT_MODEL, -) - - -@st.cache_data -def load_statements(): - """Load statements from file""" - with open(STATEMENTS_PATH) as fin: - statements = [ - line.strip() for line in fin.readlines() if not line.startswith("#") - ] - return statements - - -# cached to make index and models load only at start -@st.cache_resource -def start_haystack(): - """ - load document store, retriever, entailment checker and create pipeline - """ - shutil.copy(f"{INDEX_DIR}/faiss_document_store.db", ".") - document_store = FAISSDocumentStore( - faiss_index_path=f"{INDEX_DIR}/my_faiss_index.faiss", - faiss_config_path=f"{INDEX_DIR}/my_faiss_index.json", - ) - print(f"Index size: {document_store.get_document_count()}") - retriever = EmbeddingRetriever( - document_store=document_store, - embedding_model=RETRIEVER_MODEL, - model_format=RETRIEVER_MODEL_FORMAT, - ) - entailment_checker = EntailmentChecker( - model_name_or_path=NLI_MODEL, - use_gpu=False, - entailment_contradiction_threshold=0.5, - ) - - pipe = Pipeline() - pipe.add_node(component=retriever, 
name="retriever", inputs=["Query"]) - pipe.add_node(component=entailment_checker, name="ec", inputs=["retriever"]) - - prompt_node = PromptNode(model_name_or_path=PROMPT_MODEL, max_length=150) - - return pipe, prompt_node - - -pipe, prompt_node = start_haystack() - -# the pipeline is not included as parameter of the following function, -# because it is difficult to cache -@st.cache_resource -def check_statement(statement: str, retriever_top_k: int = 5): - """Run query and verify statement""" - params = {"retriever": {"top_k": retriever_top_k}} - return pipe.run(statement, params=params) - - -@st.cache_resource -def explain_using_llm( - statement: str, documents: List[Document], entailment_or_contradiction: str -) -> str: - """Explain entailment/contradiction, by prompting a LLM""" - premise = " \n".join([doc.content.replace("\n", ". ") for doc in documents]) - if entailment_or_contradiction == "entailment": - verb = "entails" - elif entailment_or_contradiction == "contradiction": - verb = "contradicts" - - prompt = f"Premise: {premise}; Hypothesis: {statement}; Please explain in detail why the Premise {verb} the Hypothesis. Step by step Explanation:" - - print(prompt) - return prompt_node(prompt)[0] diff --git a/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/py3d_tools.py b/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/py3d_tools.py deleted file mode 100644 index 5eb958607c4fd405a06bb67e33963e744fd2306f..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/py3d_tools.py +++ /dev/null @@ -1,1801 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the BSD-style license found in the -# LICENSE file in the root directory of this source tree. - -import sys -import math -import warnings -from typing import List, Optional, Sequence, Tuple, Union, Any - -import numpy as np -import torch -import torch.nn.functional as F - -import copy -import inspect -import torch.nn as nn - -Device = Union[str, torch.device] - -# Default values for rotation and translation matrices. -_R = torch.eye(3)[None] # (1, 3, 3) -_T = torch.zeros(1, 3) # (1, 3) - - -# Provide get_origin and get_args even in Python 3.7. - -if sys.version_info >= (3, 8, 0): - from typing import get_args, get_origin -elif sys.version_info >= (3, 7, 0): - - def get_origin(cls): # pragma: no cover - return getattr(cls, "__origin__", None) - - def get_args(cls): # pragma: no cover - return getattr(cls, "__args__", None) - - -else: - raise ImportError("This module requires Python 3.7+") - -################################################################ -## ██████╗██╗ █████╗ ███████╗███████╗███████╗███████╗ ## -## ██╔════╝██║ ██╔══██╗██╔════╝██╔════╝██╔════╝██╔════╝ ## -## ██║ ██║ ███████║███████╗███████╗█████╗ ███████╗ ## -## ██║ ██║ ██╔══██║╚════██║╚════██║██╔══╝ ╚════██║ ## -## ╚██████╗███████╗██║ ██║███████║███████║███████╗███████║ ## -## ╚═════╝╚══════╝╚═╝ ╚═╝╚══════╝╚══════╝╚══════╝╚══════╝ ## -################################################################ - -class Transform3d: - """ - A Transform3d object encapsulates a batch of N 3D transformations, and knows - how to transform points and normal vectors. Suppose that t is a Transform3d; - then we can do the following: - - .. 
code-block:: python - - N = len(t) - points = torch.randn(N, P, 3) - normals = torch.randn(N, P, 3) - points_transformed = t.transform_points(points) # => (N, P, 3) - normals_transformed = t.transform_normals(normals) # => (N, P, 3) - - - BROADCASTING - Transform3d objects supports broadcasting. Suppose that t1 and tN are - Transform3d objects with len(t1) == 1 and len(tN) == N respectively. Then we - can broadcast transforms like this: - - .. code-block:: python - - t1.transform_points(torch.randn(P, 3)) # => (P, 3) - t1.transform_points(torch.randn(1, P, 3)) # => (1, P, 3) - t1.transform_points(torch.randn(M, P, 3)) # => (M, P, 3) - tN.transform_points(torch.randn(P, 3)) # => (N, P, 3) - tN.transform_points(torch.randn(1, P, 3)) # => (N, P, 3) - - - COMBINING TRANSFORMS - Transform3d objects can be combined in two ways: composing and stacking. - Composing is function composition. Given Transform3d objects t1, t2, t3, - the following all compute the same thing: - - .. code-block:: python - - y1 = t3.transform_points(t2.transform_points(t1.transform_points(x))) - y2 = t1.compose(t2).compose(t3).transform_points(x) - y3 = t1.compose(t2, t3).transform_points(x) - - - Composing transforms should broadcast. - - .. code-block:: python - - if len(t1) == 1 and len(t2) == N, then len(t1.compose(t2)) == N. - - We can also stack a sequence of Transform3d objects, which represents - composition along the batch dimension; then the following should compute the - same thing. - - .. code-block:: python - - N, M = len(tN), len(tM) - xN = torch.randn(N, P, 3) - xM = torch.randn(M, P, 3) - y1 = torch.cat([tN.transform_points(xN), tM.transform_points(xM)], dim=0) - y2 = tN.stack(tM).transform_points(torch.cat([xN, xM], dim=0)) - - BUILDING TRANSFORMS - We provide convenience methods for easily building Transform3d objects - as compositions of basic transforms. - - .. code-block:: python - - # Scale by 0.5, then translate by (1, 2, 3) - t1 = Transform3d().scale(0.5).translate(1, 2, 3) - - # Scale each axis by a different amount, then translate, then scale - t2 = Transform3d().scale(1, 3, 3).translate(2, 3, 1).scale(2.0) - - t3 = t1.compose(t2) - tN = t1.stack(t3, t3) - - - BACKPROP THROUGH TRANSFORMS - When building transforms, we can also parameterize them by Torch tensors; - in this case we can backprop through the construction and application of - Transform objects, so they could be learned via gradient descent or - predicted by a neural network. - - .. code-block:: python - - s1_params = torch.randn(N, requires_grad=True) - t_params = torch.randn(N, 3, requires_grad=True) - s2_params = torch.randn(N, 3, requires_grad=True) - - t = Transform3d().scale(s1_params).translate(t_params).scale(s2_params) - x = torch.randn(N, 3) - y = t.transform_points(x) - loss = compute_loss(y) - loss.backward() - - with torch.no_grad(): - s1_params -= lr * s1_params.grad - t_params -= lr * t_params.grad - s2_params -= lr * s2_params.grad - - CONVENTIONS - We adopt a right-hand coordinate system, meaning that rotation about an axis - with a positive angle results in a counter clockwise rotation. - - This class assumes that transformations are applied on inputs which - are row vectors. The internal representation of the Nx4x4 transformation - matrix is of the form: - - .. code-block:: python - - M = [ - [Rxx, Ryx, Rzx, 0], - [Rxy, Ryy, Rzy, 0], - [Rxz, Ryz, Rzz, 0], - [Tx, Ty, Tz, 1], - ] - - To apply the transformation to points which are row vectors, the M matrix - can be pre multiplied by the points: - - .. 
code-block:: python - - points = [[0, 1, 2]] # (1 x 3) xyz coordinates of a point - transformed_points = points * M - - """ - - def __init__( - self, - dtype: torch.dtype = torch.float32, - device: Device = "cpu", - matrix: Optional[torch.Tensor] = None, - ) -> None: - """ - Args: - dtype: The data type of the transformation matrix. - to be used if `matrix = None`. - device: The device for storing the implemented transformation. - If `matrix != None`, uses the device of input `matrix`. - matrix: A tensor of shape (4, 4) or of shape (minibatch, 4, 4) - representing the 4x4 3D transformation matrix. - If `None`, initializes with identity using - the specified `device` and `dtype`. - """ - - if matrix is None: - self._matrix = torch.eye(4, dtype=dtype, device=device).view(1, 4, 4) - else: - if matrix.ndim not in (2, 3): - raise ValueError('"matrix" has to be a 2- or a 3-dimensional tensor.') - if matrix.shape[-2] != 4 or matrix.shape[-1] != 4: - raise ValueError( - '"matrix" has to be a tensor of shape (minibatch, 4, 4)' - ) - # set dtype and device from matrix - dtype = matrix.dtype - device = matrix.device - self._matrix = matrix.view(-1, 4, 4) - - self._transforms = [] # store transforms to compose - self._lu = None - self.device = make_device(device) - self.dtype = dtype - - def __len__(self) -> int: - return self.get_matrix().shape[0] - - def __getitem__( - self, index: Union[int, List[int], slice, torch.Tensor] - ) -> "Transform3d": - """ - Args: - index: Specifying the index of the transform to retrieve. - Can be an int, slice, list of ints, boolean, long tensor. - Supports negative indices. - - Returns: - Transform3d object with selected transforms. The tensors are not cloned. - """ - if isinstance(index, int): - index = [index] - return self.__class__(matrix=self.get_matrix()[index]) - - def compose(self, *others: "Transform3d") -> "Transform3d": - """ - Return a new Transform3d representing the composition of self with the - given other transforms, which will be stored as an internal list. - - Args: - *others: Any number of Transform3d objects - - Returns: - A new Transform3d with the stored transforms - """ - out = Transform3d(dtype=self.dtype, device=self.device) - out._matrix = self._matrix.clone() - for other in others: - if not isinstance(other, Transform3d): - msg = "Only possible to compose Transform3d objects; got %s" - raise ValueError(msg % type(other)) - out._transforms = self._transforms + list(others) - return out - - def get_matrix(self) -> torch.Tensor: - """ - Return a matrix which is the result of composing this transform - with others stored in self.transforms. Where necessary transforms - are broadcast against each other. - For example, if self.transforms contains transforms t1, t2, and t3, and - given a set of points x, the following should be true: - - .. code-block:: python - - y1 = t1.compose(t2, t3).transform(x) - y2 = t3.transform(t2.transform(t1.transform(x))) - y1.get_matrix() == y2.get_matrix() - - Returns: - A transformation matrix representing the composed inputs. - """ - composed_matrix = self._matrix.clone() - if len(self._transforms) > 0: - for other in self._transforms: - other_matrix = other.get_matrix() - composed_matrix = _broadcast_bmm(composed_matrix, other_matrix) - return composed_matrix - - def _get_matrix_inverse(self) -> torch.Tensor: - """ - Return the inverse of self._matrix. 
- """ - return torch.inverse(self._matrix) - - def inverse(self, invert_composed: bool = False) -> "Transform3d": - """ - Returns a new Transform3d object that represents an inverse of the - current transformation. - - Args: - invert_composed: - - True: First compose the list of stored transformations - and then apply inverse to the result. This is - potentially slower for classes of transformations - with inverses that can be computed efficiently - (e.g. rotations and translations). - - False: Invert the individual stored transformations - independently without composing them. - - Returns: - A new Transform3d object containing the inverse of the original - transformation. - """ - - tinv = Transform3d(dtype=self.dtype, device=self.device) - - if invert_composed: - # first compose then invert - tinv._matrix = torch.inverse(self.get_matrix()) - else: - # self._get_matrix_inverse() implements efficient inverse - # of self._matrix - i_matrix = self._get_matrix_inverse() - - # 2 cases: - if len(self._transforms) > 0: - # a) Either we have a non-empty list of transforms: - # Here we take self._matrix and append its inverse at the - # end of the reverted _transforms list. After composing - # the transformations with get_matrix(), this correctly - # right-multiplies by the inverse of self._matrix - # at the end of the composition. - tinv._transforms = [t.inverse() for t in reversed(self._transforms)] - last = Transform3d(dtype=self.dtype, device=self.device) - last._matrix = i_matrix - tinv._transforms.append(last) - else: - # b) Or there are no stored transformations - # we just set inverted matrix - tinv._matrix = i_matrix - - return tinv - - def stack(self, *others: "Transform3d") -> "Transform3d": - """ - Return a new batched Transform3d representing the batch elements from - self and all the given other transforms all batched together. - - Args: - *others: Any number of Transform3d objects - - Returns: - A new Transform3d. - """ - transforms = [self] + list(others) - matrix = torch.cat([t.get_matrix() for t in transforms], dim=0) - out = Transform3d(dtype=self.dtype, device=self.device) - out._matrix = matrix - return out - - def transform_points(self, points, eps: Optional[float] = None) -> torch.Tensor: - """ - Use this transform to transform a set of 3D points. Assumes row major - ordering of the input points. - - Args: - points: Tensor of shape (P, 3) or (N, P, 3) - eps: If eps!=None, the argument is used to clamp the - last coordinate before performing the final division. - The clamping corresponds to: - last_coord := (last_coord.sign() + (last_coord==0)) * - torch.clamp(last_coord.abs(), eps), - i.e. the last coordinates that are exactly 0 will - be clamped to +eps. 
- - Returns: - points_out: points of shape (N, P, 3) or (P, 3) depending - on the dimensions of the transform - """ - points_batch = points.clone() - if points_batch.dim() == 2: - points_batch = points_batch[None] # (P, 3) -> (1, P, 3) - if points_batch.dim() != 3: - msg = "Expected points to have dim = 2 or dim = 3: got shape %r" - raise ValueError(msg % repr(points.shape)) - - N, P, _3 = points_batch.shape - ones = torch.ones(N, P, 1, dtype=points.dtype, device=points.device) - points_batch = torch.cat([points_batch, ones], dim=2) - - composed_matrix = self.get_matrix() - points_out = _broadcast_bmm(points_batch, composed_matrix) - denom = points_out[..., 3:] # denominator - if eps is not None: - denom_sign = denom.sign() + (denom == 0.0).type_as(denom) - denom = denom_sign * torch.clamp(denom.abs(), eps) - points_out = points_out[..., :3] / denom - - # When transform is (1, 4, 4) and points is (P, 3) return - # points_out of shape (P, 3) - if points_out.shape[0] == 1 and points.dim() == 2: - points_out = points_out.reshape(points.shape) - - return points_out - - def transform_normals(self, normals) -> torch.Tensor: - """ - Use this transform to transform a set of normal vectors. - - Args: - normals: Tensor of shape (P, 3) or (N, P, 3) - - Returns: - normals_out: Tensor of shape (P, 3) or (N, P, 3) depending - on the dimensions of the transform - """ - if normals.dim() not in [2, 3]: - msg = "Expected normals to have dim = 2 or dim = 3: got shape %r" - raise ValueError(msg % (normals.shape,)) - composed_matrix = self.get_matrix() - - # TODO: inverse is bad! Solve a linear system instead - mat = composed_matrix[:, :3, :3] - normals_out = _broadcast_bmm(normals, mat.transpose(1, 2).inverse()) - - # This doesn't pass unit tests. TODO investigate further - # if self._lu is None: - # self._lu = self._matrix[:, :3, :3].transpose(1, 2).lu() - # normals_out = normals.lu_solve(*self._lu) - - # When transform is (1, 4, 4) and normals is (P, 3) return - # normals_out of shape (P, 3) - if normals_out.shape[0] == 1 and normals.dim() == 2: - normals_out = normals_out.reshape(normals.shape) - - return normals_out - - def translate(self, *args, **kwargs) -> "Transform3d": - return self.compose( - Translate(device=self.device, dtype=self.dtype, *args, **kwargs) - ) - - def scale(self, *args, **kwargs) -> "Transform3d": - return self.compose( - Scale(device=self.device, dtype=self.dtype, *args, **kwargs) - ) - - def rotate(self, *args, **kwargs) -> "Transform3d": - return self.compose( - Rotate(device=self.device, dtype=self.dtype, *args, **kwargs) - ) - - def rotate_axis_angle(self, *args, **kwargs) -> "Transform3d": - return self.compose( - RotateAxisAngle(device=self.device, dtype=self.dtype, *args, **kwargs) - ) - - def clone(self) -> "Transform3d": - """ - Deep copy of Transforms object. All internal tensors are cloned - individually. - - Returns: - new Transforms object. - """ - other = Transform3d(dtype=self.dtype, device=self.device) - if self._lu is not None: - other._lu = [elem.clone() for elem in self._lu] - other._matrix = self._matrix.clone() - other._transforms = [t.clone() for t in self._transforms] - return other - - def to( - self, - device: Device, - copy: bool = False, - dtype: Optional[torch.dtype] = None, - ) -> "Transform3d": - """ - Match functionality of torch.Tensor.to() - If copy = True or the self Tensor is on a different device, the - returned tensor is a copy of self with the desired torch.device. 
- If copy = False and the self Tensor already has the correct torch.device, - then self is returned. - - Args: - device: Device (as str or torch.device) for the new tensor. - copy: Boolean indicator whether or not to clone self. Default False. - dtype: If not None, casts the internal tensor variables - to a given torch.dtype. - - Returns: - Transform3d object. - """ - device_ = make_device(device) - dtype_ = self.dtype if dtype is None else dtype - skip_to = self.device == device_ and self.dtype == dtype_ - - if not copy and skip_to: - return self - - other = self.clone() - - if skip_to: - return other - - other.device = device_ - other.dtype = dtype_ - other._matrix = other._matrix.to(device=device_, dtype=dtype_) - other._transforms = [ - t.to(device_, copy=copy, dtype=dtype_) for t in other._transforms - ] - return other - - def cpu(self) -> "Transform3d": - return self.to("cpu") - - def cuda(self) -> "Transform3d": - return self.to("cuda") - -class Translate(Transform3d): - def __init__( - self, - x, - y=None, - z=None, - dtype: torch.dtype = torch.float32, - device: Optional[Device] = None, - ) -> None: - """ - Create a new Transform3d representing 3D translations. - - Option I: Translate(xyz, dtype=torch.float32, device='cpu') - xyz should be a tensor of shape (N, 3) - - Option II: Translate(x, y, z, dtype=torch.float32, device='cpu') - Here x, y, and z will be broadcast against each other and - concatenated to form the translation. Each can be: - - A python scalar - - A torch scalar - - A 1D torch tensor - """ - xyz = _handle_input(x, y, z, dtype, device, "Translate") - super().__init__(device=xyz.device, dtype=dtype) - N = xyz.shape[0] - - mat = torch.eye(4, dtype=dtype, device=self.device) - mat = mat.view(1, 4, 4).repeat(N, 1, 1) - mat[:, 3, :3] = xyz - self._matrix = mat - - def _get_matrix_inverse(self) -> torch.Tensor: - """ - Return the inverse of self._matrix. - """ - inv_mask = self._matrix.new_ones([1, 4, 4]) - inv_mask[0, 3, :3] = -1.0 - i_matrix = self._matrix * inv_mask - return i_matrix - -class Rotate(Transform3d): - def __init__( - self, - R: torch.Tensor, - dtype: torch.dtype = torch.float32, - device: Optional[Device] = None, - orthogonal_tol: float = 1e-5, - ) -> None: - """ - Create a new Transform3d representing 3D rotation using a rotation - matrix as the input. - - Args: - R: a tensor of shape (3, 3) or (N, 3, 3) - orthogonal_tol: tolerance for the test of the orthogonality of R - - """ - device_ = get_device(R, device) - super().__init__(device=device_, dtype=dtype) - if R.dim() == 2: - R = R[None] - if R.shape[-2:] != (3, 3): - msg = "R must have shape (3, 3) or (N, 3, 3); got %s" - raise ValueError(msg % repr(R.shape)) - R = R.to(device=device_, dtype=dtype) - _check_valid_rotation_matrix(R, tol=orthogonal_tol) - N = R.shape[0] - mat = torch.eye(4, dtype=dtype, device=device_) - mat = mat.view(1, 4, 4).repeat(N, 1, 1) - mat[:, :3, :3] = R - self._matrix = mat - - def _get_matrix_inverse(self) -> torch.Tensor: - """ - Return the inverse of self._matrix. - """ - return self._matrix.permute(0, 2, 1).contiguous() - -class TensorAccessor(nn.Module): - """ - A helper class to be used with the __getitem__ method. This can be used for - getting/setting the values for an attribute of a class at one particular - index. This is useful when the attributes of a class are batched tensors - and one element in the batch needs to be modified. 
- """ - - def __init__(self, class_object, index: Union[int, slice]) -> None: - """ - Args: - class_object: this should be an instance of a class which has - attributes which are tensors representing a batch of - values. - index: int/slice, an index indicating the position in the batch. - In __setattr__ and __getattr__ only the value of class - attributes at this index will be accessed. - """ - self.__dict__["class_object"] = class_object - self.__dict__["index"] = index - - def __setattr__(self, name: str, value: Any): - """ - Update the attribute given by `name` to the value given by `value` - at the index specified by `self.index`. - Args: - name: str, name of the attribute. - value: value to set the attribute to. - """ - v = getattr(self.class_object, name) - if not torch.is_tensor(v): - msg = "Can only set values on attributes which are tensors; got %r" - raise AttributeError(msg % type(v)) - - # Convert the attribute to a tensor if it is not a tensor. - if not torch.is_tensor(value): - value = torch.tensor( - value, device=v.device, dtype=v.dtype, requires_grad=v.requires_grad - ) - - # Check the shapes match the existing shape and the shape of the index. - if v.dim() > 1 and value.dim() > 1 and value.shape[1:] != v.shape[1:]: - msg = "Expected value to have shape %r; got %r" - raise ValueError(msg % (v.shape, value.shape)) - if ( - v.dim() == 0 - and isinstance(self.index, slice) - and len(value) != len(self.index) - ): - msg = "Expected value to have len %r; got %r" - raise ValueError(msg % (len(self.index), len(value))) - self.class_object.__dict__[name][self.index] = value - - def __getattr__(self, name: str): - """ - Return the value of the attribute given by "name" on self.class_object - at the index specified in self.index. - Args: - name: string of the attribute name - """ - if hasattr(self.class_object, name): - return self.class_object.__dict__[name][self.index] - else: - msg = "Attribute %s not found on %r" - return AttributeError(msg % (name, self.class_object.__name__)) - -BROADCAST_TYPES = (float, int, list, tuple, torch.Tensor, np.ndarray) - -class TensorProperties(nn.Module): - """ - A mix-in class for storing tensors as properties with helper methods. - """ - - def __init__( - self, - dtype: torch.dtype = torch.float32, - device: Device = "cpu", - **kwargs, - ) -> None: - """ - Args: - dtype: data type to set for the inputs - device: Device (as str or torch.device) - kwargs: any number of keyword arguments. Any arguments which are - of type (float/int/list/tuple/tensor/array) are broadcasted and - other keyword arguments are set as attributes. - """ - super().__init__() - self.device = make_device(device) - self._N = 0 - if kwargs is not None: - - # broadcast all inputs which are float/int/list/tuple/tensor/array - # set as attributes anything else e.g. strings, bools - args_to_broadcast = {} - for k, v in kwargs.items(): - if v is None or isinstance(v, (str, bool)): - setattr(self, k, v) - elif isinstance(v, BROADCAST_TYPES): - args_to_broadcast[k] = v - else: - msg = "Arg %s with type %r is not broadcastable" - warnings.warn(msg % (k, type(v))) - - names = args_to_broadcast.keys() - # convert from type dict.values to tuple - values = tuple(v for v in args_to_broadcast.values()) - - if len(values) > 0: - broadcasted_values = convert_to_tensors_and_broadcast( - *values, device=device - ) - - # Set broadcasted values as attributes on self. 
- for i, n in enumerate(names): - setattr(self, n, broadcasted_values[i]) - if self._N == 0: - self._N = broadcasted_values[i].shape[0] - - def __len__(self) -> int: - return self._N - - def isempty(self) -> bool: - return self._N == 0 - - def __getitem__(self, index: Union[int, slice]) -> TensorAccessor: - """ - Args: - index: an int or slice used to index all the fields. - Returns: - if `index` is an index int/slice return a TensorAccessor class - with getattribute/setattribute methods which return/update the value - at the index in the original class. - """ - if isinstance(index, (int, slice)): - return TensorAccessor(class_object=self, index=index) - - msg = "Expected index of type int or slice; got %r" - raise ValueError(msg % type(index)) - - # pyre-fixme[14]: `to` overrides method defined in `Module` inconsistently. - def to(self, device: Device = "cpu") -> "TensorProperties": - """ - In place operation to move class properties which are tensors to a - specified device. If self has a property "device", update this as well. - """ - device_ = make_device(device) - for k in dir(self): - v = getattr(self, k) - if k == "device": - setattr(self, k, device_) - if torch.is_tensor(v) and v.device != device_: - setattr(self, k, v.to(device_)) - return self - - def cpu(self) -> "TensorProperties": - return self.to("cpu") - - # pyre-fixme[14]: `cuda` overrides method defined in `Module` inconsistently. - def cuda(self, device: Optional[int] = None) -> "TensorProperties": - return self.to(f"cuda:{device}" if device is not None else "cuda") - - def clone(self, other) -> "TensorProperties": - """ - Update the tensor properties of other with the cloned properties of self. - """ - for k in dir(self): - v = getattr(self, k) - if inspect.ismethod(v) or k.startswith("__"): - continue - if torch.is_tensor(v): - v_clone = v.clone() - else: - v_clone = copy.deepcopy(v) - setattr(other, k, v_clone) - return other - - def gather_props(self, batch_idx) -> "TensorProperties": - """ - This is an in place operation to reformat all tensor class attributes - based on a set of given indices using torch.gather. This is useful when - attributes which are batched tensors e.g. shape (N, 3) need to be - multiplied with another tensor which has a different first dimension - e.g. packed vertices of shape (V, 3). - Example - .. code-block:: python - self.specular_color = (N, 3) tensor of specular colors for each mesh - A lighting calculation may use - .. code-block:: python - verts_packed = meshes.verts_packed() # (V, 3) - To multiply these two tensors the batch dimension needs to be the same. - To achieve this we can do - .. code-block:: python - batch_idx = meshes.verts_packed_to_mesh_idx() # (V) - This gives index of the mesh for each vertex in verts_packed. - .. code-block:: python - self.gather_props(batch_idx) - self.specular_color = (V, 3) tensor with the specular color for - each packed vertex. - torch.gather requires the index tensor to have the same shape as the - input tensor so this method takes care of the reshaping of the index - tensor to use with class attributes with arbitrary dimensions. - Args: - batch_idx: shape (B, ...) where `...` represents an arbitrary - number of dimensions - Returns: - self with all properties reshaped. e.g. a property with shape (N, 3) - is transformed to shape (B, 3). - """ - # Iterate through the attributes of the class which are tensors. 
- for k in dir(self): - v = getattr(self, k) - if torch.is_tensor(v): - if v.shape[0] > 1: - # There are different values for each batch element - # so gather these using the batch_idx. - # First clone the input batch_idx tensor before - # modifying it. - _batch_idx = batch_idx.clone() - idx_dims = _batch_idx.shape - tensor_dims = v.shape - if len(idx_dims) > len(tensor_dims): - msg = "batch_idx cannot have more dimensions than %s. " - msg += "got shape %r and %s has shape %r" - raise ValueError(msg % (k, idx_dims, k, tensor_dims)) - if idx_dims != tensor_dims: - # To use torch.gather the index tensor (_batch_idx) has - # to have the same shape as the input tensor. - new_dims = len(tensor_dims) - len(idx_dims) - new_shape = idx_dims + (1,) * new_dims - expand_dims = (-1,) + tensor_dims[1:] - _batch_idx = _batch_idx.view(*new_shape) - _batch_idx = _batch_idx.expand(*expand_dims) - - v = v.gather(0, _batch_idx) - setattr(self, k, v) - return self - -class CamerasBase(TensorProperties): - """ - `CamerasBase` implements a base class for all cameras. - For cameras, there are four different coordinate systems (or spaces) - - World coordinate system: This is the system the object lives - the world. - - Camera view coordinate system: This is the system that has its origin on the camera - and the and the Z-axis perpendicular to the image plane. - In PyTorch3D, we assume that +X points left, and +Y points up and - +Z points out from the image plane. - The transformation from world --> view happens after applying a rotation (R) - and translation (T) - - NDC coordinate system: This is the normalized coordinate system that confines - in a volume the rendered part of the object or scene. Also known as view volume. - For square images, given the PyTorch3D convention, (+1, +1, znear) - is the top left near corner, and (-1, -1, zfar) is the bottom right far - corner of the volume. - The transformation from view --> NDC happens after applying the camera - projection matrix (P) if defined in NDC space. - For non square images, we scale the points such that smallest side - has range [-1, 1] and the largest side has range [-u, u], with u > 1. - - Screen coordinate system: This is another representation of the view volume with - the XY coordinates defined in image space instead of a normalized space. - A better illustration of the coordinate systems can be found in - pytorch3d/docs/notes/cameras.md. - It defines methods that are common to all camera models: - - `get_camera_center` that returns the optical center of the camera in - world coordinates - - `get_world_to_view_transform` which returns a 3D transform from - world coordinates to the camera view coordinates (R, T) - - `get_full_projection_transform` which composes the projection - transform (P) with the world-to-view transform (R, T) - - `transform_points` which takes a set of input points in world coordinates and - projects to the space the camera is defined in (NDC or screen) - - `get_ndc_camera_transform` which defines the transform from screen/NDC to - PyTorch3D's NDC space - - `transform_points_ndc` which takes a set of points in world coordinates and - projects them to PyTorch3D's NDC space - - `transform_points_screen` which takes a set of points in world coordinates and - projects them to screen space - For each new camera, one should implement the `get_projection_transform` - routine that returns the mapping from camera view coordinates to camera - coordinates (NDC or screen). 
- Another useful function that is specific to each camera model is - `unproject_points` which sends points from camera coordinates (NDC or screen) - back to camera view or world coordinates depending on the `world_coordinates` - boolean argument of the function. - """ - - # Used in __getitem__ to index the relevant fields - # When creating a new camera, this should be set in the __init__ - _FIELDS: Tuple[str, ...] = () - - # Names of fields which are a constant property of the whole batch, rather - # than themselves a batch of data. - # When joining objects into a batch, they will have to agree. - _SHARED_FIELDS: Tuple[str, ...] = () - - def get_projection_transform(self): - """ - Calculate the projective transformation matrix. - Args: - **kwargs: parameters for the projection can be passed in as keyword - arguments to override the default values set in `__init__`. - Return: - a `Transform3d` object which represents a batch of projection - matrices of shape (N, 3, 3) - """ - raise NotImplementedError() - - def unproject_points(self, xy_depth: torch.Tensor, **kwargs): - """ - Transform input points from camera coodinates (NDC or screen) - to the world / camera coordinates. - Each of the input points `xy_depth` of shape (..., 3) is - a concatenation of the x, y location and its depth. - For instance, for an input 2D tensor of shape `(num_points, 3)` - `xy_depth` takes the following form: - `xy_depth[i] = [x[i], y[i], depth[i]]`, - for a each point at an index `i`. - The following example demonstrates the relationship between - `transform_points` and `unproject_points`: - .. code-block:: python - cameras = # camera object derived from CamerasBase - xyz = # 3D points of shape (batch_size, num_points, 3) - # transform xyz to the camera view coordinates - xyz_cam = cameras.get_world_to_view_transform().transform_points(xyz) - # extract the depth of each point as the 3rd coord of xyz_cam - depth = xyz_cam[:, :, 2:] - # project the points xyz to the camera - xy = cameras.transform_points(xyz)[:, :, :2] - # append depth to xy - xy_depth = torch.cat((xy, depth), dim=2) - # unproject to the world coordinates - xyz_unproj_world = cameras.unproject_points(xy_depth, world_coordinates=True) - print(torch.allclose(xyz, xyz_unproj_world)) # True - # unproject to the camera coordinates - xyz_unproj = cameras.unproject_points(xy_depth, world_coordinates=False) - print(torch.allclose(xyz_cam, xyz_unproj)) # True - Args: - xy_depth: torch tensor of shape (..., 3). - world_coordinates: If `True`, unprojects the points back to world - coordinates using the camera extrinsics `R` and `T`. - `False` ignores `R` and `T` and unprojects to - the camera view coordinates. - from_ndc: If `False` (default), assumes xy part of input is in - NDC space if self.in_ndc(), otherwise in screen space. If - `True`, assumes xy is in NDC space even if the camera - is defined in screen space. - Returns - new_points: unprojected points with the same shape as `xy_depth`. - """ - raise NotImplementedError() - - def get_camera_center(self, **kwargs) -> torch.Tensor: - """ - Return the 3D location of the camera optical center - in the world coordinates. - Args: - **kwargs: parameters for the camera extrinsics can be passed in - as keyword arguments to override the default values - set in __init__. - Setting T here will update the values set in init as this - value may be needed later on in the rendering pipeline e.g. for - lighting calculations. 
- Returns: - C: a batch of 3D locations of shape (N, 3) denoting - the locations of the center of each camera in the batch. - """ - w2v_trans = self.get_world_to_view_transform(**kwargs) - P = w2v_trans.inverse().get_matrix() - # the camera center is the translation component (the first 3 elements - # of the last row) of the inverted world-to-view - # transform (4x4 RT matrix) - C = P[:, 3, :3] - return C - - def get_world_to_view_transform(self, **kwargs) -> Transform3d: - """ - Return the world-to-view transform. - Args: - **kwargs: parameters for the camera extrinsics can be passed in - as keyword arguments to override the default values - set in __init__. - Setting R and T here will update the values set in init as these - values may be needed later on in the rendering pipeline e.g. for - lighting calculations. - Returns: - A Transform3d object which represents a batch of transforms - of shape (N, 3, 3) - """ - R: torch.Tensor = kwargs.get("R", self.R) - T: torch.Tensor = kwargs.get("T", self.T) - self.R = R # pyre-ignore[16] - self.T = T # pyre-ignore[16] - world_to_view_transform = get_world_to_view_transform(R=R, T=T) - return world_to_view_transform - - def get_full_projection_transform(self, **kwargs) -> Transform3d: - """ - Return the full world-to-camera transform composing the - world-to-view and view-to-camera transforms. - If camera is defined in NDC space, the projected points are in NDC space. - If camera is defined in screen space, the projected points are in screen space. - Args: - **kwargs: parameters for the projection transforms can be passed in - as keyword arguments to override the default values - set in __init__. - Setting R and T here will update the values set in init as these - values may be needed later on in the rendering pipeline e.g. for - lighting calculations. - Returns: - a Transform3d object which represents a batch of transforms - of shape (N, 3, 3) - """ - self.R: torch.Tensor = kwargs.get("R", self.R) # pyre-ignore[16] - self.T: torch.Tensor = kwargs.get("T", self.T) # pyre-ignore[16] - world_to_view_transform = self.get_world_to_view_transform(R=self.R, T=self.T) - view_to_proj_transform = self.get_projection_transform(**kwargs) - return world_to_view_transform.compose(view_to_proj_transform) - - def transform_points( - self, points, eps: Optional[float] = None, **kwargs - ) -> torch.Tensor: - """ - Transform input points from world to camera space with the - projection matrix defined by the camera. - For `CamerasBase.transform_points`, setting `eps > 0` - stabilizes gradients since it leads to avoiding division - by excessively low numbers for points close to the camera plane. - Args: - points: torch tensor of shape (..., 3). - eps: If eps!=None, the argument is used to clamp the - divisor in the homogeneous normalization of the points - transformed to the ndc space. Please see - `transforms.Transform3d.transform_points` for details. - For `CamerasBase.transform_points`, setting `eps > 0` - stabilizes gradients since it leads to avoiding division - by excessively low numbers for points close to the - camera plane. - Returns - new_points: transformed points with the same shape as the input. - """ - world_to_proj_transform = self.get_full_projection_transform(**kwargs) - return world_to_proj_transform.transform_points(points, eps=eps) - - def get_ndc_camera_transform(self, **kwargs) -> Transform3d: - """ - Returns the transform from camera projection space (screen or NDC) to NDC space. 
- For cameras that can be specified in screen space, this transform - allows points to be converted from screen to NDC space. - The default transform scales the points from [0, W]x[0, H] - to [-1, 1]x[-u, u] or [-u, u]x[-1, 1] where u > 1 is the aspect ratio of the image. - This function should be modified per camera definitions if need be, - e.g. for Perspective/Orthographic cameras we provide a custom implementation. - This transform assumes PyTorch3D coordinate system conventions for - both the NDC space and the input points. - This transform interfaces with the PyTorch3D renderer which assumes - input points to the renderer to be in NDC space. - """ - if self.in_ndc(): - return Transform3d(device=self.device, dtype=torch.float32) - else: - # For custom cameras which can be defined in screen space, - # users might might have to implement the screen to NDC transform based - # on the definition of the camera parameters. - # See PerspectiveCameras/OrthographicCameras for an example. - # We don't flip xy because we assume that world points are in - # PyTorch3D coordinates, and thus conversion from screen to ndc - # is a mere scaling from image to [-1, 1] scale. - image_size = kwargs.get("image_size", self.get_image_size()) - return get_screen_to_ndc_transform( - self, with_xyflip=False, image_size=image_size - ) - - def transform_points_ndc( - self, points, eps: Optional[float] = None, **kwargs - ) -> torch.Tensor: - """ - Transforms points from PyTorch3D world/camera space to NDC space. - Input points follow the PyTorch3D coordinate system conventions: +X left, +Y up. - Output points are in NDC space: +X left, +Y up, origin at image center. - Args: - points: torch tensor of shape (..., 3). - eps: If eps!=None, the argument is used to clamp the - divisor in the homogeneous normalization of the points - transformed to the ndc space. Please see - `transforms.Transform3d.transform_points` for details. - For `CamerasBase.transform_points`, setting `eps > 0` - stabilizes gradients since it leads to avoiding division - by excessively low numbers for points close to the - camera plane. - Returns - new_points: transformed points with the same shape as the input. - """ - world_to_ndc_transform = self.get_full_projection_transform(**kwargs) - if not self.in_ndc(): - to_ndc_transform = self.get_ndc_camera_transform(**kwargs) - world_to_ndc_transform = world_to_ndc_transform.compose(to_ndc_transform) - - return world_to_ndc_transform.transform_points(points, eps=eps) - - def transform_points_screen( - self, points, eps: Optional[float] = None, **kwargs - ) -> torch.Tensor: - """ - Transforms points from PyTorch3D world/camera space to screen space. - Input points follow the PyTorch3D coordinate system conventions: +X left, +Y up. - Output points are in screen space: +X right, +Y down, origin at top left corner. - Args: - points: torch tensor of shape (..., 3). - eps: If eps!=None, the argument is used to clamp the - divisor in the homogeneous normalization of the points - transformed to the ndc space. Please see - `transforms.Transform3d.transform_points` for details. - For `CamerasBase.transform_points`, setting `eps > 0` - stabilizes gradients since it leads to avoiding division - by excessively low numbers for points close to the - camera plane. - Returns - new_points: transformed points with the same shape as the input. 
- """ - points_ndc = self.transform_points_ndc(points, eps=eps, **kwargs) - image_size = kwargs.get("image_size", self.get_image_size()) - return get_ndc_to_screen_transform( - self, with_xyflip=True, image_size=image_size - ).transform_points(points_ndc, eps=eps) - - def clone(self): - """ - Returns a copy of `self`. - """ - cam_type = type(self) - other = cam_type(device=self.device) - return super().clone(other) - - def is_perspective(self): - raise NotImplementedError() - - def in_ndc(self): - """ - Specifies whether the camera is defined in NDC space - or in screen (image) space - """ - raise NotImplementedError() - - def get_znear(self): - return self.znear if hasattr(self, "znear") else None - - def get_image_size(self): - """ - Returns the image size, if provided, expected in the form of (height, width) - The image size is used for conversion of projected points to screen coordinates. - """ - return self.image_size if hasattr(self, "image_size") else None - - def __getitem__( - self, index: Union[int, List[int], torch.LongTensor] - ) -> "CamerasBase": - """ - Override for the __getitem__ method in TensorProperties which needs to be - refactored. - Args: - index: an int/list/long tensor used to index all the fields in the cameras given by - self._FIELDS. - Returns: - if `index` is an index int/list/long tensor return an instance of the current - cameras class with only the values at the selected index. - """ - - kwargs = {} - - if not isinstance(index, (int, list, torch.LongTensor, torch.cuda.LongTensor)): - msg = "Invalid index type, expected int, List[int] or torch.LongTensor; got %r" - raise ValueError(msg % type(index)) - - if isinstance(index, int): - index = [index] - - if max(index) >= len(self): - raise ValueError(f"Index {max(index)} is out of bounds for select cameras") - - for field in self._FIELDS: - val = getattr(self, field, None) - if val is None: - continue - - # e.g. "in_ndc" is set as attribute "_in_ndc" on the class - # but provided as "in_ndc" on initialization - if field.startswith("_"): - field = field[1:] - - if isinstance(val, (str, bool)): - kwargs[field] = val - elif isinstance(val, torch.Tensor): - # In the init, all inputs will be converted to - # tensors before setting as attributes - kwargs[field] = val[index] - else: - raise ValueError(f"Field {field} type is not supported for indexing") - - kwargs["device"] = self.device - return self.__class__(**kwargs) - -class FoVPerspectiveCameras(CamerasBase): - """ - A class which stores a batch of parameters to generate a batch of - projection matrices by specifying the field of view. - The definition of the parameters follow the OpenGL perspective camera. - - The extrinsics of the camera (R and T matrices) can also be set in the - initializer or passed in to `get_full_projection_transform` to get - the full transformation from world -> ndc. - - The `transform_points` method calculates the full world -> ndc transform - and then applies it to the input points. - - The transforms can also be returned separately as Transform3d objects. - - * Setting the Aspect Ratio for Non Square Images * - - If the desired output image size is non square (i.e. a tuple of (H, W) where H != W) - the aspect ratio needs special consideration: There are two aspect ratios - to be aware of: - - the aspect ratio of each pixel - - the aspect ratio of the output image - The `aspect_ratio` setting in the FoVPerspectiveCameras sets the - pixel aspect ratio. 
When using this camera with the differentiable rasterizer - be aware that in the rasterizer we assume square pixels, but allow - variable image aspect ratio (i.e rectangle images). - - In most cases you will want to set the camera `aspect_ratio=1.0` - (i.e. square pixels) and only vary the output image dimensions in pixels - for rasterization. - """ - - # For __getitem__ - _FIELDS = ( - "K", - "znear", - "zfar", - "aspect_ratio", - "fov", - "R", - "T", - "degrees", - ) - - _SHARED_FIELDS = ("degrees",) - - def __init__( - self, - znear=1.0, - zfar=100.0, - aspect_ratio=1.0, - fov=60.0, - degrees: bool = True, - R: torch.Tensor = _R, - T: torch.Tensor = _T, - K: Optional[torch.Tensor] = None, - device: Device = "cpu", - ) -> None: - """ - - Args: - znear: near clipping plane of the view frustrum. - zfar: far clipping plane of the view frustrum. - aspect_ratio: aspect ratio of the image pixels. - 1.0 indicates square pixels. - fov: field of view angle of the camera. - degrees: bool, set to True if fov is specified in degrees. - R: Rotation matrix of shape (N, 3, 3) - T: Translation matrix of shape (N, 3) - K: (optional) A calibration matrix of shape (N, 4, 4) - If provided, don't need znear, zfar, fov, aspect_ratio, degrees - device: Device (as str or torch.device) - """ - # The initializer formats all inputs to torch tensors and broadcasts - # all the inputs to have the same batch dimension where necessary. - super().__init__( - device=device, - znear=znear, - zfar=zfar, - aspect_ratio=aspect_ratio, - fov=fov, - R=R, - T=T, - K=K, - ) - - # No need to convert to tensor or broadcast. - self.degrees = degrees - - def compute_projection_matrix( - self, znear, zfar, fov, aspect_ratio, degrees: bool - ) -> torch.Tensor: - """ - Compute the calibration matrix K of shape (N, 4, 4) - - Args: - znear: near clipping plane of the view frustrum. - zfar: far clipping plane of the view frustrum. - fov: field of view angle of the camera. - aspect_ratio: aspect ratio of the image pixels. - 1.0 indicates square pixels. - degrees: bool, set to True if fov is specified in degrees. - - Returns: - torch.FloatTensor of the calibration matrix with shape (N, 4, 4) - """ - K = torch.zeros((self._N, 4, 4), device=self.device, dtype=torch.float32) - ones = torch.ones((self._N), dtype=torch.float32, device=self.device) - if degrees: - fov = (np.pi / 180) * fov - - if not torch.is_tensor(fov): - fov = torch.tensor(fov, device=self.device) - tanHalfFov = torch.tan((fov / 2)) - max_y = tanHalfFov * znear - min_y = -max_y - max_x = max_y * aspect_ratio - min_x = -max_x - - # NOTE: In OpenGL the projection matrix changes the handedness of the - # coordinate frame. i.e the NDC space positive z direction is the - # camera space negative z direction. This is because the sign of the z - # in the projection matrix is set to -1.0. - # In pytorch3d we maintain a right handed coordinate system throughout - # so the so the z sign is 1.0. - z_sign = 1.0 - - K[:, 0, 0] = 2.0 * znear / (max_x - min_x) - K[:, 1, 1] = 2.0 * znear / (max_y - min_y) - K[:, 0, 2] = (max_x + min_x) / (max_x - min_x) - K[:, 1, 2] = (max_y + min_y) / (max_y - min_y) - K[:, 3, 2] = z_sign * ones - - # NOTE: This maps the z coordinate from [0, 1] where z = 0 if the point - # is at the near clipping plane and z = 1 when the point is at the far - # clipping plane. 
- K[:, 2, 2] = z_sign * zfar / (zfar - znear) - K[:, 2, 3] = -(zfar * znear) / (zfar - znear) - - return K - - def get_projection_transform(self, **kwargs) -> Transform3d: - """ - Calculate the perspective projection matrix with a symmetric - viewing frustrum. Use column major order. - The viewing frustrum will be projected into ndc, s.t. - (max_x, max_y) -> (+1, +1) - (min_x, min_y) -> (-1, -1) - - Args: - **kwargs: parameters for the projection can be passed in as keyword - arguments to override the default values set in `__init__`. - - Return: - a Transform3d object which represents a batch of projection - matrices of shape (N, 4, 4) - - .. code-block:: python - - h1 = (max_y + min_y)/(max_y - min_y) - w1 = (max_x + min_x)/(max_x - min_x) - tanhalffov = tan((fov/2)) - s1 = 1/tanhalffov - s2 = 1/(tanhalffov * (aspect_ratio)) - - # To map z to the range [0, 1] use: - f1 = far / (far - near) - f2 = -(far * near) / (far - near) - - # Projection matrix - K = [ - [s1, 0, w1, 0], - [0, s2, h1, 0], - [0, 0, f1, f2], - [0, 0, 1, 0], - ] - """ - K = kwargs.get("K", self.K) - if K is not None: - if K.shape != (self._N, 4, 4): - msg = "Expected K to have shape of (%r, 4, 4)" - raise ValueError(msg % (self._N)) - else: - K = self.compute_projection_matrix( - kwargs.get("znear", self.znear), - kwargs.get("zfar", self.zfar), - kwargs.get("fov", self.fov), - kwargs.get("aspect_ratio", self.aspect_ratio), - kwargs.get("degrees", self.degrees), - ) - - # Transpose the projection matrix as PyTorch3D transforms use row vectors. - transform = Transform3d( - matrix=K.transpose(1, 2).contiguous(), device=self.device - ) - return transform - - def unproject_points( - self, - xy_depth: torch.Tensor, - world_coordinates: bool = True, - scaled_depth_input: bool = False, - **kwargs, - ) -> torch.Tensor: - """>! - FoV cameras further allow for passing depth in world units - (`scaled_depth_input=False`) or in the [0, 1]-normalized units - (`scaled_depth_input=True`) - - Args: - scaled_depth_input: If `True`, assumes the input depth is in - the [0, 1]-normalized units. If `False` the input depth is in - the world units. 
- """ - - # obtain the relevant transformation to ndc - if world_coordinates: - to_ndc_transform = self.get_full_projection_transform() - else: - to_ndc_transform = self.get_projection_transform() - - if scaled_depth_input: - # the input is scaled depth, so we don't have to do anything - xy_sdepth = xy_depth - else: - # parse out important values from the projection matrix - K_matrix = self.get_projection_transform(**kwargs.copy()).get_matrix() - # parse out f1, f2 from K_matrix - unsqueeze_shape = [1] * xy_depth.dim() - unsqueeze_shape[0] = K_matrix.shape[0] - f1 = K_matrix[:, 2, 2].reshape(unsqueeze_shape) - f2 = K_matrix[:, 3, 2].reshape(unsqueeze_shape) - # get the scaled depth - sdepth = (f1 * xy_depth[..., 2:3] + f2) / xy_depth[..., 2:3] - # concatenate xy + scaled depth - xy_sdepth = torch.cat((xy_depth[..., 0:2], sdepth), dim=-1) - - # unproject with inverse of the projection - unprojection_transform = to_ndc_transform.inverse() - return unprojection_transform.transform_points(xy_sdepth) - - def is_perspective(self): - return True - - def in_ndc(self): - return True - -####################################################################################### -## ██████╗ ███████╗███████╗██╗███╗ ██╗██╗████████╗██╗ ██████╗ ███╗ ██╗███████╗ ## -## ██╔══██╗██╔════╝██╔════╝██║████╗ ██║██║╚══██╔══╝██║██╔═══██╗████╗ ██║██╔════╝ ## -## ██║ ██║█████╗ █████╗ ██║██╔██╗ ██║██║ ██║ ██║██║ ██║██╔██╗ ██║███████╗ ## -## ██║ ██║██╔══╝ ██╔══╝ ██║██║╚██╗██║██║ ██║ ██║██║ ██║██║╚██╗██║╚════██║ ## -## ██████╔╝███████╗██║ ██║██║ ╚████║██║ ██║ ██║╚██████╔╝██║ ╚████║███████║ ## -## ╚═════╝ ╚══════╝╚═╝ ╚═╝╚═╝ ╚═══╝╚═╝ ╚═╝ ╚═╝ ╚═════╝ ╚═╝ ╚═══╝╚══════╝ ## -####################################################################################### - -def make_device(device: Device) -> torch.device: - """ - Makes an actual torch.device object from the device specified as - either a string or torch.device object. If the device is `cuda` without - a specific index, the index of the current device is assigned. - Args: - device: Device (as str or torch.device) - Returns: - A matching torch.device object - """ - device = torch.device(device) if isinstance(device, str) else device - if device.type == "cuda" and device.index is None: # pyre-ignore[16] - # If cuda but with no index, then the current cuda device is indicated. - # In that case, we fix to that device - device = torch.device(f"cuda:{torch.cuda.current_device()}") - return device - -def get_device(x, device: Optional[Device] = None) -> torch.device: - """ - Gets the device of the specified variable x if it is a tensor, or - falls back to a default CPU device otherwise. Allows overriding by - providing an explicit device. - Args: - x: a torch.Tensor to get the device from or another type - device: Device (as str or torch.device) to fall back to - Returns: - A matching torch.device object - """ - - # User overrides device - if device is not None: - return make_device(device) - - # Set device based on input tensor - if torch.is_tensor(x): - return x.device - - # Default device is cpu - return torch.device("cpu") - -def _axis_angle_rotation(axis: str, angle: torch.Tensor) -> torch.Tensor: - """ - Return the rotation matrices for one of the rotations about an axis - of which Euler angles describe, for each value of the angle given. - - Args: - axis: Axis label "X" or "Y or "Z". - angle: any shape tensor of Euler angles in radians - - Returns: - Rotation matrices as tensor of shape (..., 3, 3). 
- """ - - cos = torch.cos(angle) - sin = torch.sin(angle) - one = torch.ones_like(angle) - zero = torch.zeros_like(angle) - - if axis == "X": - R_flat = (one, zero, zero, zero, cos, -sin, zero, sin, cos) - elif axis == "Y": - R_flat = (cos, zero, sin, zero, one, zero, -sin, zero, cos) - elif axis == "Z": - R_flat = (cos, -sin, zero, sin, cos, zero, zero, zero, one) - else: - raise ValueError("letter must be either X, Y or Z.") - - return torch.stack(R_flat, -1).reshape(angle.shape + (3, 3)) - -def euler_angles_to_matrix(euler_angles: torch.Tensor, convention: str) -> torch.Tensor: - """ - Convert rotations given as Euler angles in radians to rotation matrices. - - Args: - euler_angles: Euler angles in radians as tensor of shape (..., 3). - convention: Convention string of three uppercase letters from - {"X", "Y", and "Z"}. - - Returns: - Rotation matrices as tensor of shape (..., 3, 3). - """ - if euler_angles.dim() == 0 or euler_angles.shape[-1] != 3: - raise ValueError("Invalid input euler angles.") - if len(convention) != 3: - raise ValueError("Convention must have 3 letters.") - if convention[1] in (convention[0], convention[2]): - raise ValueError(f"Invalid convention {convention}.") - for letter in convention: - if letter not in ("X", "Y", "Z"): - raise ValueError(f"Invalid letter {letter} in convention string.") - matrices = [ - _axis_angle_rotation(c, e) - for c, e in zip(convention, torch.unbind(euler_angles, -1)) - ] - # return functools.reduce(torch.matmul, matrices) - return torch.matmul(torch.matmul(matrices[0], matrices[1]), matrices[2]) - -def _broadcast_bmm(a, b) -> torch.Tensor: - """ - Batch multiply two matrices and broadcast if necessary. - - Args: - a: torch tensor of shape (P, K) or (M, P, K) - b: torch tensor of shape (N, K, K) - - Returns: - a and b broadcast multiplied. The output batch dimension is max(N, M). - - To broadcast transforms across a batch dimension if M != N then - expect that either M = 1 or N = 1. The tensor with batch dimension 1 is - expanded to have shape N or M. - """ - if a.dim() == 2: - a = a[None] - if len(a) != len(b): - if not ((len(a) == 1) or (len(b) == 1)): - msg = "Expected batch dim for bmm to be equal or 1; got %r, %r" - raise ValueError(msg % (a.shape, b.shape)) - if len(a) == 1: - a = a.expand(len(b), -1, -1) - if len(b) == 1: - b = b.expand(len(a), -1, -1) - return a.bmm(b) - -def _safe_det_3x3(t: torch.Tensor): - """ - Fast determinant calculation for a batch of 3x3 matrices. - Note, result of this function might not be the same as `torch.det()`. - The differences might be in the last significant digit. - Args: - t: Tensor of shape (N, 3, 3). - Returns: - Tensor of shape (N) with determinants. - """ - - det = ( - t[..., 0, 0] * (t[..., 1, 1] * t[..., 2, 2] - t[..., 1, 2] * t[..., 2, 1]) - - t[..., 0, 1] * (t[..., 1, 0] * t[..., 2, 2] - t[..., 2, 0] * t[..., 1, 2]) - + t[..., 0, 2] * (t[..., 1, 0] * t[..., 2, 1] - t[..., 2, 0] * t[..., 1, 1]) - ) - - return det - -def get_world_to_view_transform( - R: torch.Tensor = _R, T: torch.Tensor = _T -) -> Transform3d: - """ - This function returns a Transform3d representing the transformation - matrix to go from world space to view space by applying a rotation and - a translation. - PyTorch3D uses the same convention as Hartley & Zisserman. 
- I.e., for camera extrinsic parameters R (rotation) and T (translation), - we map a 3D point `X_world` in world coordinates to - a point `X_cam` in camera coordinates with: - `X_cam = X_world R + T` - Args: - R: (N, 3, 3) matrix representing the rotation. - T: (N, 3) matrix representing the translation. - Returns: - a Transform3d object which represents the composed RT transformation. - """ - # TODO: also support the case where RT is specified as one matrix - # of shape (N, 4, 4). - - if T.shape[0] != R.shape[0]: - msg = "Expected R, T to have the same batch dimension; got %r, %r" - raise ValueError(msg % (R.shape[0], T.shape[0])) - if T.dim() != 2 or T.shape[1:] != (3,): - msg = "Expected T to have shape (N, 3); got %r" - raise ValueError(msg % repr(T.shape)) - if R.dim() != 3 or R.shape[1:] != (3, 3): - msg = "Expected R to have shape (N, 3, 3); got %r" - raise ValueError(msg % repr(R.shape)) - - # Create a Transform3d object - T_ = Translate(T, device=T.device) - R_ = Rotate(R, device=R.device) - return R_.compose(T_) - -def _check_valid_rotation_matrix(R, tol: float = 1e-7) -> None: - """ - Determine if R is a valid rotation matrix by checking it satisfies the - following conditions: - - ``RR^T = I and det(R) = 1`` - - Args: - R: an (N, 3, 3) matrix - - Returns: - None - - Emits a warning if R is an invalid rotation matrix. - """ - N = R.shape[0] - eye = torch.eye(3, dtype=R.dtype, device=R.device) - eye = eye.view(1, 3, 3).expand(N, -1, -1) - orthogonal = torch.allclose(R.bmm(R.transpose(1, 2)), eye, atol=tol) - det_R = _safe_det_3x3(R) - no_distortion = torch.allclose(det_R, torch.ones_like(det_R)) - if not (orthogonal and no_distortion): - msg = "R is not a valid rotation matrix" - warnings.warn(msg) - return - -def format_tensor( - input, - dtype: torch.dtype = torch.float32, - device: Device = "cpu", -) -> torch.Tensor: - """ - Helper function for converting a scalar value to a tensor. - Args: - input: Python scalar, Python list/tuple, torch scalar, 1D torch tensor - dtype: data type for the input - device: Device (as str or torch.device) on which the tensor should be placed. - Returns: - input_vec: torch tensor with optional added batch dimension. - """ - device_ = make_device(device) - if not torch.is_tensor(input): - input = torch.tensor(input, dtype=dtype, device=device_) - elif not input.device.type.startswith('mps'): - input = torch.tensor(input, dtype=torch.float32,device=device_) - - if input.dim() == 0: - input = input.view(1) - - if input.device == device_: - return input - - input = input.to(device=device) - return input - -def convert_to_tensors_and_broadcast( - *args, - dtype: torch.dtype = torch.float32, - device: Device = "cpu", -): - """ - Helper function to handle parsing an arbitrary number of inputs (*args) - which all need to have the same batch dimension. - The output is a list of tensors. - Args: - *args: an arbitrary number of inputs - Each of the values in `args` can be one of the following - - Python scalar - - Torch scalar - - Torch tensor of shape (N, K_i) or (1, K_i) where K_i are - an arbitrary number of dimensions which can vary for each - value in args. In this case each input is broadcast to a - tensor of shape (N, K_i) - dtype: data type to use when creating new tensors. - device: torch device on which the tensors should be placed. 
- Output: - args: A list of tensors of shape (N, K_i) - """ - # Convert all inputs to tensors with a batch dimension - args_1d = [format_tensor(c, dtype, device) for c in args] - - # Find broadcast size - sizes = [c.shape[0] for c in args_1d] - N = max(sizes) - - args_Nd = [] - for c in args_1d: - if c.shape[0] != 1 and c.shape[0] != N: - msg = "Got non-broadcastable sizes %r" % sizes - raise ValueError(msg) - - # Expand broadcast dim and keep non broadcast dims the same size - expand_sizes = (N,) + (-1,) * len(c.shape[1:]) - args_Nd.append(c.expand(*expand_sizes)) - - return args_Nd - -def _handle_coord(c, dtype: torch.dtype, device: torch.device) -> torch.Tensor: - """ - Helper function for _handle_input. - - Args: - c: Python scalar, torch scalar, or 1D torch tensor - - Returns: - c_vec: 1D torch tensor - """ - if not torch.is_tensor(c): - c = torch.tensor(c, dtype=dtype, device=device) - if c.dim() == 0: - c = c.view(1) - if c.device != device or c.dtype != dtype: - c = c.to(device=device, dtype=dtype) - return c - -def _handle_input( - x, - y, - z, - dtype: torch.dtype, - device: Optional[Device], - name: str, - allow_singleton: bool = False, -) -> torch.Tensor: - """ - Helper function to handle parsing logic for building transforms. The output - is always a tensor of shape (N, 3), but there are several types of allowed - input. - - Case I: Single Matrix - In this case x is a tensor of shape (N, 3), and y and z are None. Here just - return x. - - Case II: Vectors and Scalars - In this case each of x, y, and z can be one of the following - - Python scalar - - Torch scalar - - Torch tensor of shape (N, 1) or (1, 1) - In this case x, y and z are broadcast to tensors of shape (N, 1) - and concatenated to a tensor of shape (N, 3) - - Case III: Singleton (only if allow_singleton=True) - In this case y and z are None, and x can be one of the following: - - Python scalar - - Torch scalar - - Torch tensor of shape (N, 1) or (1, 1) - Here x will be duplicated 3 times, and we return a tensor of shape (N, 3) - - Returns: - xyz: Tensor of shape (N, 3) - """ - device_ = get_device(x, device) - # If x is actually a tensor of shape (N, 3) then just return it - if torch.is_tensor(x) and x.dim() == 2: - if x.shape[1] != 3: - msg = "Expected tensor of shape (N, 3); got %r (in %s)" - raise ValueError(msg % (x.shape, name)) - if y is not None or z is not None: - msg = "Expected y and z to be None (in %s)" % name - raise ValueError(msg) - return x.to(device=device_, dtype=dtype) - - if allow_singleton and y is None and z is None: - y = x - z = x - - # Convert all to 1D tensors - xyz = [_handle_coord(c, dtype, device_) for c in [x, y, z]] - - # Broadcast and concatenate - sizes = [c.shape[0] for c in xyz] - N = max(sizes) - for c in xyz: - if c.shape[0] != 1 and c.shape[0] != N: - msg = "Got non-broadcastable sizes %r (in %s)" % (sizes, name) - raise ValueError(msg) - xyz = [c.expand(N) for c in xyz] - xyz = torch.stack(xyz, dim=1) - return xyz diff --git a/spaces/apsys/HSSR/README.md b/spaces/apsys/HSSR/README.md deleted file mode 100644 index 832ce488b3dad00264dadf7721c64f9629d3f28a..0000000000000000000000000000000000000000 --- a/spaces/apsys/HSSR/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -license: apache-2.0 -title: HSSR -sdk: gradio -emoji: ⚡ -colorFrom: indigo -colorTo: pink -pinned: true -app_file: test.py ---- \ No newline at end of file diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/v3/_deprecated.py 
b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/v3/_deprecated.py deleted file mode 100644 index 16a957af8efafdac94cb605d8ecc501942670e7b..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/vegalite/v3/_deprecated.py +++ /dev/null @@ -1,19 +0,0 @@ -from ...utils.deprecation import _deprecate -from . import channels - -# Deprecated classes (see https://github.com/altair-viz/altair/issues/1474). -# TODO: Remove these in Altair 3.2. -Fillopacity = _deprecate(channels.FillOpacity, "Fillopacity") -FillopacityValue = _deprecate(channels.FillOpacityValue, "FillopacityValue") -Strokeopacity = _deprecate(channels.StrokeOpacity, "Strokeopacity") -StrokeopacityValue = _deprecate(channels.StrokeOpacityValue, "StrokeopacityValue") -Strokewidth = _deprecate(channels.StrokeWidth, "Strokewidth") -StrokewidthValue = _deprecate(channels.StrokeWidthValue, "StrokewidthValue") -Xerror = _deprecate(channels.XError, "Xerror") -XerrorValue = _deprecate(channels.XErrorValue, "XerrorValue") -Xerror2 = _deprecate(channels.XError2, "Xerror2") -Xerror2Value = _deprecate(channels.XError2Value, "Xerror2Value") -Yerror = _deprecate(channels.YError, "Yerror") -YerrorValue = _deprecate(channels.YErrorValue, "YerrorValue") -Yerror2 = _deprecate(channels.YError2, "Yerror2") -Yerror2Value = _deprecate(channels.YError2Value, "Yerror2Value") diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/attrs/converters.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/attrs/converters.py deleted file mode 100644 index edfa8d3c16ac8642773651778012a3cd57005d9b..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/attrs/converters.py +++ /dev/null @@ -1,3 +0,0 @@ -# SPDX-License-Identifier: MIT - -from attr.converters import * # noqa diff --git a/spaces/asd123Xiao/kafuu_chino_sovits4.0/resample.py b/spaces/asd123Xiao/kafuu_chino_sovits4.0/resample.py deleted file mode 100644 index 5e96106c9a066e6d73652c544322d029dd98f746..0000000000000000000000000000000000000000 --- a/spaces/asd123Xiao/kafuu_chino_sovits4.0/resample.py +++ /dev/null @@ -1,48 +0,0 @@ -import os -import argparse -import librosa -import numpy as np -from multiprocessing import Pool, cpu_count -from scipy.io import wavfile -from tqdm import tqdm - - -def process(item): - spkdir, wav_name, args = item - # speaker 's5', 'p280', 'p315' are excluded, - speaker = spkdir.replace("\\", "/").split("/")[-1] - wav_path = os.path.join(args.in_dir, speaker, wav_name) - if os.path.exists(wav_path) and '.wav' in wav_path: - os.makedirs(os.path.join(args.out_dir2, speaker), exist_ok=True) - wav, sr = librosa.load(wav_path, None) - wav, _ = librosa.effects.trim(wav, top_db=20) - peak = np.abs(wav).max() - if peak > 1.0: - wav = 0.98 * wav / peak - wav2 = librosa.resample(wav, orig_sr=sr, target_sr=args.sr2) - wav2 /= max(wav2.max(), -wav2.min()) - save_name = wav_name - save_path2 = os.path.join(args.out_dir2, speaker, save_name) - wavfile.write( - save_path2, - args.sr2, - (wav2 * np.iinfo(np.int16).max).astype(np.int16) - ) - - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--sr2", type=int, default=44100, help="sampling rate") - parser.add_argument("--in_dir", type=str, default="./dataset_raw", help="path to source dir") - parser.add_argument("--out_dir2", type=str, default="./dataset/44k", help="path to target dir") - args = parser.parse_args() - processs = cpu_count()-2 if 
cpu_count() >4 else 1 - pool = Pool(processes=processs) - - for speaker in os.listdir(args.in_dir): - spk_dir = os.path.join(args.in_dir, speaker) - if os.path.isdir(spk_dir): - print(spk_dir) - for _ in tqdm(pool.imap_unordered(process, [(spk_dir, i, args) for i in os.listdir(spk_dir) if i.endswith("wav")])): - pass diff --git a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Peng Jiang.html b/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Peng Jiang.html deleted file mode 100644 index 5e3ed88e99ee06278f33d4c8e9102167b6733e36..0000000000000000000000000000000000000000 --- a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Peng Jiang.html +++ /dev/null @@ -1,134 +0,0 @@ - - - - Peng Jiang - - - - -
    -

    Peng Jiang

    - -
    -
    How did you hear about SM?
    • Did some google search
    • wish I had a platform like this when I was starting off

    Brief background
    • Sr DS at Amazon (2 years)
      • AWS and retail side
      • hands-on and roadmapping for his team
    • previously DCG - working with clients doing CV and NLP projects
    • 4-5 years as a DS at an e-commerce startup (very first DS, grew to a team of 6)
    • eng degree in large-scale ML systems

    Mentorship exp
    • always doing some mentoring
    • right now mentoring an economist (doing an internship with Amazon)
    • used to working with a lot of juniors
    • AMZ has an internal platform that does some matching of mentors and mentees

    What do beginners need and how can you help?
    • the switch from academia to industry is tough
      • academics - one best solution, industry - many roads
      • learn how to make trade-offs and compromise
      • No binary delivery, ship early and learn from mistakes
      • a different way of thinking, have to deal with vague goals 
      • refine a problem, test feasibility, 
      • think ahead before making a commitment
    • depends on the needs of the mentee
      • if they are switching careers, focus on tech skills w/ mini project/bootcamp
        • arm you with the right techniques in the job you want to succeed
        • make sure they are setting reasonable goals, and assess what the gap is (tech skills, soft skills - leadership, stakeholder management)
        • LISTEN to what they think they need, then share with them what I THINK they might need
      • mutual agreement on what should be approved
    -
    -
    Questions about SM:
    • How does the process begin?
    • What does the time commitment look like?
    • How do you ensure time is being spent/ do you track time?
    • What are the success criteria of a mentorship?
    -
    - -
    - - - \ No newline at end of file diff --git a/spaces/atharvapawar/Email-Generator-App-Langchain-LLAMA2-LLM/README.md b/spaces/atharvapawar/Email-Generator-App-Langchain-LLAMA2-LLM/README.md deleted file mode 100644 index 1fb13d1155351464d804dbd5d9784907634d883c..0000000000000000000000000000000000000000 --- a/spaces/atharvapawar/Email-Generator-App-Langchain-LLAMA2-LLM/README.md +++ /dev/null @@ -1,15 +0,0 @@ -# Email-Generator-App-Langchain-LLAMA2-LLM -Email Generator using LLAMA 2- The Email Generator is a tool that automatically creates customized emails, saving time and effort in crafting personalized messages. - ---- -title: Llm Space Test -emoji: 👀 -colorFrom: blue -colorTo: green -sdk: streamlit -sdk_version: 1.25.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/Grammar-Styler/app.py b/spaces/awacke1/Grammar-Styler/app.py deleted file mode 100644 index e3a700e6f75af974013101438392ea813d68fa74..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Grammar-Styler/app.py +++ /dev/null @@ -1,193 +0,0 @@ -import streamlit as st -from multiprocessing import Process -from annotated_text import annotated_text -from bs4 import BeautifulSoup -import pandas as pd -import torch -import math -import re -import json -import requests -import spacy -import errant -import time -import os - -def start_server(): - os.system("python3 -m spacy download en_core_web_sm") - os.system("uvicorn GrammarTokenize:app --port 8080 --host 0.0.0.0 --workers 2") - -def load_models(): - if not is_port_in_use(8080): - with st.spinner(text="Loading models, please wait..."): - proc = Process(target=start_server, args=(), daemon=True) - proc.start() - while not is_port_in_use(8080): - time.sleep(1) - st.success("Model server started.") - else: - st.success("Model server already running...") - st.session_state['models_loaded'] = True - -def is_port_in_use(port): - import socket - with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: - return s.connect_ex(('0.0.0.0', port)) == 0 - -if 'models_loaded' not in st.session_state: - st.session_state['models_loaded'] = False - - -def show_highlights(input_text, corrected_sentence): - try: - strikeout = lambda x: '\u0336'.join(x) + '\u0336' - highlight_text = highlight(input_text, corrected_sentence) - color_map = {'d':'#faa', 'a':'#afa', 'c':'#fea'} - tokens = re.split(r'(<[dac]\s.*?<\/[dac]>)', highlight_text) - annotations = [] - for token in tokens: - soup = BeautifulSoup(token, 'html.parser') - tags = soup.findAll() - if tags: - _tag = tags[0].name - _type = tags[0]['type'] - _text = tags[0]['edit'] - _color = color_map[_tag] - - if _tag == 'd': - _text = strikeout(tags[0].text) - - annotations.append((_text, _type, _color)) - else: - annotations.append(token) - annotated_text(*annotations) - except Exception as e: - st.error('Some error occured!' 
+ str(e)) - st.stop() - -def show_edits(input_text, corrected_sentence): - try: - edits = get_edits(input_text, corrected_sentence) - df = pd.DataFrame(edits, columns=['type','original word', 'original start', 'original end', 'correct word', 'correct start', 'correct end']) - df = df.set_index('type') - st.table(df) - except Exception as e: - st.error('Some error occured!') - st.stop() - -def highlight(orig, cor): - edits = _get_edits(orig, cor) - orig_tokens = orig.split() - - ignore_indexes = [] - - for edit in edits: - edit_type = edit[0] - edit_str_start = edit[1] - edit_spos = edit[2] - edit_epos = edit[3] - edit_str_end = edit[4] - - # if no_of_tokens(edit_str_start) > 1 ==> excluding the first token, mark all other tokens for deletion - for i in range(edit_spos+1, edit_epos): - ignore_indexes.append(i) - - if edit_str_start == "": - if edit_spos - 1 >= 0: - new_edit_str = orig_tokens[edit_spos - 1] - edit_spos -= 1 - else: - new_edit_str = orig_tokens[edit_spos + 1] - edit_spos += 1 - if edit_type == "PUNCT": - st = "" + new_edit_str + "" - else: - st = "" + new_edit_str + "" - orig_tokens[edit_spos] = st - elif edit_str_end == "": - st = "" + edit_str_start + "" - orig_tokens[edit_spos] = st - else: - st = "" + edit_str_start + "" - orig_tokens[edit_spos] = st - - for i in sorted(ignore_indexes, reverse=True): - del(orig_tokens[i]) - - return(" ".join(orig_tokens)) - - -def _get_edits(orig, cor): - orig = annotator.parse(orig) - cor = annotator.parse(cor) - alignment = annotator.align(orig, cor) - edits = annotator.merge(alignment) - - if len(edits) == 0: - return [] - - edit_annotations = [] - for e in edits: - e = annotator.classify(e) - edit_annotations.append((e.type[2:], e.o_str, e.o_start, e.o_end, e.c_str, e.c_start, e.c_end)) - - if len(edit_annotations) > 0: - return edit_annotations - else: - return [] - -def get_edits(orig, cor): - return _get_edits(orig, cor) - -def get_correction(input_text): - correct_request = "http://0.0.0.0:8080/correct?input_sentence="+input_text - correct_response = requests.get(correct_request) - correct_json = json.loads(correct_response.text) - scored_corrected_sentence = correct_json["scored_corrected_sentence"] - - corrected_sentence, score = scored_corrected_sentence - st.markdown(f'##### Corrected text:') - st.write('') - st.success(corrected_sentence) - exp1 = st.expander(label='Show highlights', expanded=True) - with exp1: - show_highlights(input_text, corrected_sentence) - exp2 = st.expander(label='Show edits') - with exp2: - show_edits(input_text, corrected_sentence) - - -if __name__ == "__main__": - - st.title('Grammar Styler') - st.subheader('Grammar and sentence structure restyler') - examples = [ - "I looked at the med cabinet and meds are out. 
Can you order me more?", - "Been spendin my whole life jus to her dat song", - "whatdjya think about dat?", - "Lets git sum holesome waves and go surfin" - ] - - if not st.session_state['models_loaded']: - load_models() - - import en_core_web_sm - nlp = en_core_web_sm.load() - annotator = errant.load('en', nlp) - - st.markdown(f'##### Try it now:') - input_text = st.selectbox( - label="Choose an example", - options=examples - ) - st.write("(or)") - input_text = st.text_input( - label="Bring your own sentence", - value=input_text - ) - - if input_text.strip(): - get_correction(input_text) diff --git a/spaces/awacke1/chatGPT/README.md b/spaces/awacke1/chatGPT/README.md deleted file mode 100644 index 799948c169d953914e91d4e1bb867c5670e65ba7..0000000000000000000000000000000000000000 --- a/spaces/awacke1/chatGPT/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ChatGPT -emoji: 📊 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -duplicated_from: yizhangliu/chatGPT ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/azusarang/so-vits-svc-models-ba_P/diffusion/data_loaders.py b/spaces/azusarang/so-vits-svc-models-ba_P/diffusion/data_loaders.py deleted file mode 100644 index bf18572329019d7a8f1df01799eda207c16dd7ff..0000000000000000000000000000000000000000 --- a/spaces/azusarang/so-vits-svc-models-ba_P/diffusion/data_loaders.py +++ /dev/null @@ -1,284 +0,0 @@ -import os -import random -import re -import numpy as np -import librosa -import torch -import random -from utils import repeat_expand_2d -from tqdm import tqdm -from torch.utils.data import Dataset - -def traverse_dir( - root_dir, - extensions, - amount=None, - str_include=None, - str_exclude=None, - is_pure=False, - is_sort=False, - is_ext=True): - - file_list = [] - cnt = 0 - for root, _, files in os.walk(root_dir): - for file in files: - if any([file.endswith(f".{ext}") for ext in extensions]): - # path - mix_path = os.path.join(root, file) - pure_path = mix_path[len(root_dir)+1:] if is_pure else mix_path - - # amount - if (amount is not None) and (cnt == amount): - if is_sort: - file_list.sort() - return file_list - - # check string - if (str_include is not None) and (str_include not in pure_path): - continue - if (str_exclude is not None) and (str_exclude in pure_path): - continue - - if not is_ext: - ext = pure_path.split('.')[-1] - pure_path = pure_path[:-(len(ext)+1)] - file_list.append(pure_path) - cnt += 1 - if is_sort: - file_list.sort() - return file_list - - -def get_data_loaders(args, whole_audio=False): - data_train = AudioDataset( - filelists = args.data.training_files, - waveform_sec=args.data.duration, - hop_size=args.data.block_size, - sample_rate=args.data.sampling_rate, - load_all_data=args.train.cache_all_data, - whole_audio=whole_audio, - extensions=args.data.extensions, - n_spk=args.model.n_spk, - spk=args.spk, - device=args.train.cache_device, - fp16=args.train.cache_fp16, - use_aug=True) - loader_train = torch.utils.data.DataLoader( - data_train , - batch_size=args.train.batch_size if not whole_audio else 1, - shuffle=True, - num_workers=args.train.num_workers if args.train.cache_device=='cpu' else 0, - persistent_workers=(args.train.num_workers > 0) if args.train.cache_device=='cpu' else False, - pin_memory=True if args.train.cache_device=='cpu' else False - ) - data_valid = AudioDataset( - filelists = args.data.validation_files, - waveform_sec=args.data.duration, - 
hop_size=args.data.block_size, - sample_rate=args.data.sampling_rate, - load_all_data=args.train.cache_all_data, - whole_audio=True, - spk=args.spk, - extensions=args.data.extensions, - n_spk=args.model.n_spk) - loader_valid = torch.utils.data.DataLoader( - data_valid, - batch_size=1, - shuffle=False, - num_workers=0, - pin_memory=True - ) - return loader_train, loader_valid - - -class AudioDataset(Dataset): - def __init__( - self, - filelists, - waveform_sec, - hop_size, - sample_rate, - spk, - load_all_data=True, - whole_audio=False, - extensions=['wav'], - n_spk=1, - device='cpu', - fp16=False, - use_aug=False, - ): - super().__init__() - - self.waveform_sec = waveform_sec - self.sample_rate = sample_rate - self.hop_size = hop_size - self.filelists = filelists - self.whole_audio = whole_audio - self.use_aug = use_aug - self.data_buffer={} - self.pitch_aug_dict = {} - # np.load(os.path.join(self.path_root, 'pitch_aug_dict.npy'), allow_pickle=True).item() - if load_all_data: - print('Load all the data filelists:', filelists) - else: - print('Load the f0, volume data filelists:', filelists) - with open(filelists,"r") as f: - self.paths = f.read().splitlines() - for name_ext in tqdm(self.paths, total=len(self.paths)): - name = os.path.splitext(name_ext)[0] - path_audio = name_ext - duration = librosa.get_duration(filename = path_audio, sr = self.sample_rate) - - path_f0 = name_ext + ".f0.npy" - f0,_ = np.load(path_f0,allow_pickle=True) - f0 = torch.from_numpy(np.array(f0,dtype=float)).float().unsqueeze(-1).to(device) - - path_volume = name_ext + ".vol.npy" - volume = np.load(path_volume) - volume = torch.from_numpy(volume).float().unsqueeze(-1).to(device) - - path_augvol = name_ext + ".aug_vol.npy" - aug_vol = np.load(path_augvol) - aug_vol = torch.from_numpy(aug_vol).float().unsqueeze(-1).to(device) - - if n_spk is not None and n_spk > 1: - spk_name = name_ext.split("/")[-2] - spk_id = spk[spk_name] if spk_name in spk else 0 - if spk_id < 0 or spk_id >= n_spk: - raise ValueError(' [x] Muiti-speaker traing error : spk_id must be a positive integer from 0 to n_spk-1 ') - else: - spk_id = 0 - spk_id = torch.LongTensor(np.array([spk_id])).to(device) - - if load_all_data: - ''' - audio, sr = librosa.load(path_audio, sr=self.sample_rate) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio) - audio = torch.from_numpy(audio).to(device) - ''' - path_mel = name_ext + ".mel.npy" - mel = np.load(path_mel) - mel = torch.from_numpy(mel).to(device) - - path_augmel = name_ext + ".aug_mel.npy" - aug_mel,keyshift = np.load(path_augmel, allow_pickle=True) - aug_mel = np.array(aug_mel,dtype=float) - aug_mel = torch.from_numpy(aug_mel).to(device) - self.pitch_aug_dict[name_ext] = keyshift - - path_units = name_ext + ".soft.pt" - units = torch.load(path_units).to(device) - units = units[0] - units = repeat_expand_2d(units,f0.size(0)).transpose(0,1) - - if fp16: - mel = mel.half() - aug_mel = aug_mel.half() - units = units.half() - - self.data_buffer[name_ext] = { - 'duration': duration, - 'mel': mel, - 'aug_mel': aug_mel, - 'units': units, - 'f0': f0, - 'volume': volume, - 'aug_vol': aug_vol, - 'spk_id': spk_id - } - else: - path_augmel = name_ext + ".aug_mel.npy" - aug_mel,keyshift = np.load(path_augmel, allow_pickle=True) - self.pitch_aug_dict[name_ext] = keyshift - self.data_buffer[name_ext] = { - 'duration': duration, - 'f0': f0, - 'volume': volume, - 'aug_vol': aug_vol, - 'spk_id': spk_id - } - - - def __getitem__(self, file_idx): - name_ext = self.paths[file_idx] - data_buffer = 
self.data_buffer[name_ext] - # check duration. if too short, then skip - if data_buffer['duration'] < (self.waveform_sec + 0.1): - return self.__getitem__( (file_idx + 1) % len(self.paths)) - - # get item - return self.get_data(name_ext, data_buffer) - - def get_data(self, name_ext, data_buffer): - name = os.path.splitext(name_ext)[0] - frame_resolution = self.hop_size / self.sample_rate - duration = data_buffer['duration'] - waveform_sec = duration if self.whole_audio else self.waveform_sec - - # load audio - idx_from = 0 if self.whole_audio else random.uniform(0, duration - waveform_sec - 0.1) - start_frame = int(idx_from / frame_resolution) - units_frame_len = int(waveform_sec / frame_resolution) - aug_flag = random.choice([True, False]) and self.use_aug - ''' - audio = data_buffer.get('audio') - if audio is None: - path_audio = os.path.join(self.path_root, 'audio', name) + '.wav' - audio, sr = librosa.load( - path_audio, - sr = self.sample_rate, - offset = start_frame * frame_resolution, - duration = waveform_sec) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio) - # clip audio into N seconds - audio = audio[ : audio.shape[-1] // self.hop_size * self.hop_size] - audio = torch.from_numpy(audio).float() - else: - audio = audio[start_frame * self.hop_size : (start_frame + units_frame_len) * self.hop_size] - ''' - # load mel - mel_key = 'aug_mel' if aug_flag else 'mel' - mel = data_buffer.get(mel_key) - if mel is None: - mel = name_ext + ".mel.npy" - mel = np.load(mel) - mel = mel[start_frame : start_frame + units_frame_len] - mel = torch.from_numpy(mel).float() - else: - mel = mel[start_frame : start_frame + units_frame_len] - - # load f0 - f0 = data_buffer.get('f0') - aug_shift = 0 - if aug_flag: - aug_shift = self.pitch_aug_dict[name_ext] - f0_frames = 2 ** (aug_shift / 12) * f0[start_frame : start_frame + units_frame_len] - - # load units - units = data_buffer.get('units') - if units is None: - path_units = name_ext + ".soft.pt" - units = torch.load(path_units) - units = units[0] - units = repeat_expand_2d(units,f0.size(0)).transpose(0,1) - - units = units[start_frame : start_frame + units_frame_len] - - # load volume - vol_key = 'aug_vol' if aug_flag else 'volume' - volume = data_buffer.get(vol_key) - volume_frames = volume[start_frame : start_frame + units_frame_len] - - # load spk_id - spk_id = data_buffer.get('spk_id') - - # load shift - aug_shift = torch.from_numpy(np.array([[aug_shift]])).float() - - return dict(mel=mel, f0=f0_frames, volume=volume_frames, units=units, spk_id=spk_id, aug_shift=aug_shift, name=name, name_ext=name_ext) - - def __len__(self): - return len(self.paths) \ No newline at end of file diff --git a/spaces/baixing/hackathon_chatbot_simple/README.md b/spaces/baixing/hackathon_chatbot_simple/README.md deleted file mode 100644 index dc6550bfd1a9bcccac35d2e8a83f07b762712d1b..0000000000000000000000000000000000000000 --- a/spaces/baixing/hackathon_chatbot_simple/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: hackathon chatbot simple -emoji: 🐨 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false -license: cc-by-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/accessors/PositionNode.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/accessors/PositionNode.js deleted file mode 100644 index 
f8779567856d11aa0155355ff60f31bfeebeaf61..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/nodes/accessors/PositionNode.js +++ /dev/null @@ -1,136 +0,0 @@ -/** - * @author sunag / http://www.sunag.com.br/ - */ - -import { TempNode } from '../core/TempNode.js'; -import { NodeLib } from '../core/NodeLib.js'; - -function PositionNode( scope ) { - - TempNode.call( this, 'v3' ); - - this.scope = scope || PositionNode.LOCAL; - -} - -PositionNode.LOCAL = 'local'; -PositionNode.WORLD = 'world'; -PositionNode.VIEW = 'view'; -PositionNode.PROJECTION = 'projection'; - -PositionNode.prototype = Object.create( TempNode.prototype ); -PositionNode.prototype.constructor = PositionNode; -PositionNode.prototype.nodeType = "Position"; - -PositionNode.prototype.getType = function ( ) { - - switch ( this.scope ) { - - case PositionNode.PROJECTION: - - return 'v4'; - - } - - return this.type; - -}; - -PositionNode.prototype.getShared = function ( builder ) { - - switch ( this.scope ) { - - case PositionNode.LOCAL: - case PositionNode.WORLD: - - return false; - - } - - return true; - -}; - -PositionNode.prototype.generate = function ( builder, output ) { - - var result; - - switch ( this.scope ) { - - case PositionNode.LOCAL: - - builder.requires.position = true; - - result = builder.isShader( 'vertex' ) ? 'transformed' : 'vPosition'; - - break; - - case PositionNode.WORLD: - - builder.requires.worldPosition = true; - - result = 'vWPosition'; - - break; - - case PositionNode.VIEW: - - result = builder.isShader( 'vertex' ) ? '-mvPosition.xyz' : 'vViewPosition'; - - break; - - case PositionNode.PROJECTION: - - result = builder.isShader( 'vertex' ) ? '( projectionMatrix * modelViewMatrix * vec4( position, 1.0 ) )' : 'vec4( 0.0 )'; - - break; - - } - - return builder.format( result, this.getType( builder ), output ); - -}; - -PositionNode.prototype.copy = function ( source ) { - - TempNode.prototype.copy.call( this, source ); - - this.scope = source.scope; - -}; - -PositionNode.prototype.toJSON = function ( meta ) { - - var data = this.getJSONNode( meta ); - - if ( ! data ) { - - data = this.createJSONNode( meta ); - - data.scope = this.scope; - - } - - return data; - -}; - -NodeLib.addKeyword( 'position', function () { - - return new PositionNode(); - -} ); - -NodeLib.addKeyword( 'worldPosition', function () { - - return new PositionNode( PositionNode.WORLD ); - -} ); - -NodeLib.addKeyword( 'viewPosition', function () { - - return new PositionNode( NormalNode.VIEW ); - -} ); - -export { PositionNode }; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/loaders/Loader.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/loaders/Loader.d.ts deleted file mode 100644 index a0f5fdda5aed737c09504ad5957d3b96fa99bba5..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/loaders/Loader.d.ts +++ /dev/null @@ -1,58 +0,0 @@ -import { Material } from './../materials/Material'; -import { LoaderHandler } from './FileLoader'; - -// Loaders ////////////////////////////////////////////////////////////////////////////////// - -/** - * Base class for implementing loaders. - * - * Events: - * load - * Dispatched when the image has completed loading - * content — loaded image - * - * error - * - * Dispatched when the image can't be loaded - * message — error message - */ -export class Loader { - constructor(); - - /** - * Will be called when load starts. - * The default is a function with empty body. 
- */ - onLoadStart: () => void; - - /** - * Will be called while load progresses. - * The default is a function with empty body. - */ - onLoadProgress: () => void; - - /** - * Will be called when load completes. - * The default is a function with empty body. - */ - onLoadComplete: () => void; - - /** - * default — null. - * If set, assigns the crossOrigin attribute of the image to the value of crossOrigin, prior to starting the load. - */ - crossOrigin: string; - - /** - * @deprecated Use THREE.LoaderUtils.extractUrlBase() instead. - */ - extractUrlBase(url: string): string; - initMaterials(materials: Material[], texturePath: string): Material[]; - createMaterial( - m: Material, - texturePath: string, - crossOrigin?: string - ): boolean; - - static Handlers: LoaderHandler; -} diff --git a/spaces/banana-projects/web3d/node_modules/three/src/math/Cylindrical.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/math/Cylindrical.d.ts deleted file mode 100644 index c32df09e2600b55228eb3272c4ea8dbe9b21c6bd..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/math/Cylindrical.d.ts +++ /dev/null @@ -1,14 +0,0 @@ -import { Vector3 } from './Vector3'; - -export class Cylindrical { - constructor(radius?: number, theta?: number, y?: number); - - radius: number; - theta: number; - y: number; - - clone(): this; - copy(other: Cylindrical): this; - set(radius: number, theta: number, y: number): this; - setFromVector3(vec3: Vector3): this; -} diff --git a/spaces/bankholdup/stylegan_petbreeder/app.py b/spaces/bankholdup/stylegan_petbreeder/app.py deleted file mode 100644 index 56c705515d0c82dfc7e5d43236c0bfd1b650b86e..0000000000000000000000000000000000000000 --- a/spaces/bankholdup/stylegan_petbreeder/app.py +++ /dev/null @@ -1,159 +0,0 @@ -import os -from PIL import Image -import torch -import gradio as gr -torch.backends.cudnn.benchmark = True -import math -import random -import numpy as np -from torch import nn, autograd, optim -from torch.nn import functional as F -from tqdm import tqdm -import lpips -import time - -from copy import deepcopy -import imageio - -import sys -from PIL import Image -import torchvision.transforms as transforms -from argparse import Namespace -from e4e.utils.common import tensor2im -from e4e.models.psp import pSp -from e4e.models.encoders import psp_encoders -from e4e.models.stylegan2.model import Generator -from huggingface_hub import hf_hub_download - -import dlib -from e4e.utils.alignment import align_face - -transform = transforms.Compose([ - transforms.Resize((256, 256)), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) -resize_dims = (256, 256) - -device= 'cpu' -ffhq_model_path = hf_hub_download(repo_id="bankholdup/stylegan_petbreeder", filename="e4e_ffhq512.pt") - -ffhq_ckpt = torch.load(ffhq_model_path, map_location='cpu') -ffhq_latent_avg = ffhq_ckpt['latent_avg'].to(device) -ffhq_opts = ffhq_ckpt['opts'] -ffhq_opts['checkpoint_path'] = ffhq_model_path -ffhq_opts= Namespace(**ffhq_opts) - -ffhq_encoder = psp_encoders.Encoder4Editing(50, 'ir_se', ffhq_opts) -ffhq_e_filt = {k[len('encoder') + 1:]: v for k, v in ffhq_ckpt['state_dict'].items() if k[:len('encoder')] == 'encoder'} -ffhq_encoder.load_state_dict(ffhq_e_filt, strict=True) -ffhq_encoder.eval() -ffhq_encoder.to(device) - -ffhq_decoder = Generator(512, 512, 8, channel_multiplier=2) -ffhq_d_filt = {k[len('decoder') + 1:]: v for k, v in ffhq_ckpt['state_dict'].items() if k[:len('decoder')] == 'decoder'} 
-ffhq_decoder.load_state_dict(ffhq_d_filt, strict=True) -ffhq_decoder.eval() -ffhq_decoder.to(device) - -dog_model_path = hf_hub_download(repo_id="bankholdup/stylegan_petbreeder", filename="e4e_ffhq512_dog.pt") - -dog_ckpt = torch.load(dog_model_path, map_location='cpu') -dog_latent_avg = dog_ckpt['latent_avg'].to(device) -dog_opts = dog_ckpt['opts'] -dog_opts['checkpoint_path'] = dog_model_path -dog_opts= Namespace(**dog_opts) - -dog_encoder = psp_encoders.Encoder4Editing(50, 'ir_se', dog_opts) -dog_e_filt = {k[len('encoder') + 1:]: v for k, v in dog_ckpt['state_dict'].items() if k[:len('encoder')] == 'encoder'} -dog_encoder.load_state_dict(dog_e_filt, strict=True) -dog_encoder.eval() -dog_encoder.to(device) - -dog_decoder = Generator(512, 512, 8, channel_multiplier=2) -dog_d_filt = {k[len('decoder') + 1:]: v for k, v in dog_ckpt['state_dict'].items() if k[:len('decoder')] == 'decoder'} -dog_decoder.load_state_dict(dog_d_filt, strict=True) -dog_decoder.eval() -dog_decoder.to(device) - -cat_model_path = hf_hub_download(repo_id="bankholdup/stylegan_petbreeder", filename="e4e_ffhq512_cat.pt") - -cat_ckpt = torch.load(cat_model_path, map_location='cpu') -cat_latent_avg = cat_ckpt['latent_avg'].to(device) -cat_opts = cat_ckpt['opts'] -cat_opts['checkpoint_path'] = cat_model_path -cat_opts= Namespace(**cat_opts) - -cat_encoder = psp_encoders.Encoder4Editing(50, 'ir_se', cat_opts) -cat_e_filt = {k[len('encoder') + 1:]: v for k, v in cat_ckpt['state_dict'].items() if k[:len('encoder')] == 'encoder'} -cat_encoder.load_state_dict(cat_e_filt, strict=True) -cat_encoder.eval() -cat_encoder.to(device) - -cat_decoder = Generator(512, 512, 8, channel_multiplier=2) -cat_d_filt = {k[len('decoder') + 1:]: v for k, v in cat_ckpt['state_dict'].items() if k[:len('decoder')] == 'decoder'} -cat_decoder.load_state_dict(cat_d_filt, strict=True) -cat_decoder.eval() -cat_decoder.to(device) - -dlib_path = hf_hub_download(repo_id="bankholdup/stylegan_petbreeder", filename="shape_predictor_68_face_landmarks.dat") -predictor = dlib.shape_predictor(dlib_path) - - -def run_alignment(image_path): - aligned_image = align_face(filepath=image_path, predictor=predictor) - print("Aligned image has shape: {}".format(aligned_image.size)) - return aligned_image - - -def gen_im(ffhq_codes, dog_codes, cat_codes, model_type='ffhq'): - if model_type=='ffhq': - imgs, _ = ffhq_decoder([ffhq_codes], input_is_latent=True, randomize_noise=False, return_latents=True) - elif model_type=='Dog': - imgs, _ = dog_decoder([dog_codes], input_is_latent=True, randomize_noise=False, return_latents=True) - elif model_type=='Cat': - imgs, _ = cat_decoder([cat_codes], input_is_latent=True, randomize_noise=False, return_latents=True) - else: - imgs, _ = custom_decoder([custom_codes], input_is_latent=True, randomize_noise=False, return_latents=True) - return tensor2im(imgs[0]) - -def set_seed(rd): - torch.manual_seed(rd) - -def inference(img, model): - random_seed = round(time.time() * 1000) - set_seed(random_seed) - - try: - img.save('out.jpg') - - try: - input_image = run_alignment('out.jpg') - except: - return 'out.jpg' - transformed_image = transform(input_image) - - ffhq_codes = ffhq_encoder(transformed_image.unsqueeze(0).to(device).float()) - ffhq_codes = ffhq_codes + ffhq_latent_avg.repeat(ffhq_codes.shape[0], 1, 1) - - cat_codes = cat_encoder(transformed_image.unsqueeze(0).to(device).float()) - cat_codes = cat_codes + cat_latent_avg.repeat(cat_codes.shape[0], 1, 1) - - dog_codes = dog_encoder(transformed_image.unsqueeze(0).to(device).float()) - 
dog_codes = dog_codes + dog_latent_avg.repeat(dog_codes.shape[0], 1, 1) - - npimage = gen_im(ffhq_codes, dog_codes, cat_codes, model) - - imageio.imwrite('filename.jpeg', npimage) - return 'filename.jpeg' - except: - pass - -title = "PetBreeder v1.1" -description = "Gradio Demo for PetBreeder. Based on [Colab](https://colab.research.google.com/github/tg-bomze/collection-of-notebooks/blob/master/PetBreeder.ipynb) by [@MLArt](https://t.me/MLArt)." - -gr.Interface(inference, -[gr.inputs.Image(type="pil"), -gr.inputs.Dropdown(choices=['Cat','Dog'], type='value', default='Cat', label='Model')], -gr.outputs.Image(type="file"), -title=title, -description=description).launch() diff --git a/spaces/beki/pii-anonymizer/flair_recognizer.py b/spaces/beki/pii-anonymizer/flair_recognizer.py deleted file mode 100644 index 9df87dc13c2a22ac47a11b29cd0e44ef9617ef27..0000000000000000000000000000000000000000 --- a/spaces/beki/pii-anonymizer/flair_recognizer.py +++ /dev/null @@ -1,245 +0,0 @@ -import logging -from typing import Optional, List, Tuple, Set - -from presidio_analyzer import ( - RecognizerResult, - EntityRecognizer, - AnalysisExplanation, -) -from presidio_analyzer.nlp_engine import NlpArtifacts - -try: - from flair.data import Sentence - from flair.models import SequenceTagger -except ImportError: - print("Flair is not installed") - - -logger = logging.getLogger("presidio-analyzer") - - -class FlairRecognizer(EntityRecognizer): - """ - Wrapper for a flair model, if needed to be used within Presidio Analyzer. - :example: - >from presidio_analyzer import AnalyzerEngine, RecognizerRegistry - >flair_recognizer = FlairRecognizer() - >registry = RecognizerRegistry() - >registry.add_recognizer(flair_recognizer) - >analyzer = AnalyzerEngine(registry=registry) - >results = analyzer.analyze( - > "My name is Christopher and I live in Irbid.", - > language="en", - > return_decision_process=True, - >) - >for result in results: - > print(result) - > print(result.analysis_explanation) - """ - - ENTITIES = [ - "LOCATION", - "PERSON", - "NRP", - "GPE", - "ORGANIZATION", - "MAC_ADDRESS", - "US_BANK_NUMBER", - "IMEI", - "TITLE", - "LICENSE_PLATE", - "US_PASSPORT", - "CURRENCY", - "ROUTING_NUMBER", - "US_ITIN", - "US_BANK_NUMBER", - "US_DRIVER_LICENSE", - "AGE", - "PASSWORD", - "SWIFT_CODE", - ] - - DEFAULT_EXPLANATION = "Identified as {} by Flair's Named Entity Recognition" - - CHECK_LABEL_GROUPS = [ - ({"LOCATION"}, {"LOC", "LOCATION", "STREET_ADDRESS", "COORDINATE"}), - ({"PERSON"}, {"PER", "PERSON"}), - ({"NRP"}, {"NORP", "NRP"}), - ({"GPE"}, {"GPE"}), - ({"ORGANIZATION"}, {"ORG"}), - ({"MAC_ADDRESS"}, {"MAC_ADDRESS"}), - ({"US_BANK_NUMBER"}, {"US_BANK_NUMBER"}), - ({"IMEI"}, {"IMEI"}), - ({"TITLE"}, {"TITLE"}), - ({"LICENSE_PLATE"}, {"LICENSE_PLATE"}), - ({"US_PASSPORT"}, {"US_PASSPORT"}), - ({"CURRENCY"}, {"CURRENCY"}), - ({"ROUTING_NUMBER"}, {"ROUTING_NUMBER"}), - ({"AGE"}, {"AGE"}), - ({"CURRENCY"}, {"CURRENCY"}), - ({"SWIFT_CODE"}, {"SWIFT_CODE"}), - ({"US_ITIN"}, {"US_ITIN"}), - ({"US_BANK_NUMBER"}, {"US_BANK_NUMBER"}), - ({"US_DRIVER_LICENSE"}, {"US_DRIVER_LICENSE"}), - ] - - MODEL_LANGUAGES = { - "en":"beki/flair-pii-english-large", - # "en":"flair-trf.pt", - } - - PRESIDIO_EQUIVALENCES = { - "PER": "PERSON", - "LOC": "LOCATION", - "ORG": "ORGANIZATION", - "NROP": "NRP", - "URL": "URL", - "US_ITIN": "US_ITIN", - "US_PASSPORT": "US_PASSPORT", - "IBAN_CODE": "IBAN_CODE", - "IP_ADDRESS": "IP_ADDRESS", - "EMAIL_ADDRESS": "EMAIL", - "US_DRIVER_LICENSE": "US_DRIVER_LICENSE", - "US_BANK_NUMBER": "US_BANK_NUMBER", 
- } - - def __init__( - self, - supported_language: str = "en", - supported_entities: Optional[List[str]] = None, - check_label_groups: Optional[Tuple[Set, Set]] = None, - model: SequenceTagger = None, - ): - self.check_label_groups = ( - check_label_groups if check_label_groups else self.CHECK_LABEL_GROUPS - ) - - supported_entities = supported_entities if supported_entities else self.ENTITIES - self.model = ( - model - if model - else SequenceTagger.load(self.MODEL_LANGUAGES.get(supported_language)) - ) - - super().__init__( - supported_entities=supported_entities, - supported_language=supported_language, - name="Flair Analytics", - ) - - def load(self) -> None: - """Load the model, not used. Model is loaded during initialization.""" - pass - - def get_supported_entities(self) -> List[str]: - """ - Return supported entities by this model. - :return: List of the supported entities. - """ - return self.supported_entities - - # Class to use Flair with Presidio as an external recognizer. - def analyze( - self, text: str, entities: List[str], nlp_artifacts: NlpArtifacts = None - ) -> List[RecognizerResult]: - """ - Analyze text using Text Analytics. - :param text: The text for analysis. - :param entities: Not working properly for this recognizer. - :param nlp_artifacts: Not used by this recognizer. - :param language: Text language. Supported languages in MODEL_LANGUAGES - :return: The list of Presidio RecognizerResult constructed from the recognized - Flair detections. - """ - - results = [] - - sentences = Sentence(text) - self.model.predict(sentences) - - # If there are no specific list of entities, we will look for all of it. - if not entities: - entities = self.supported_entities - - for entity in entities: - if entity not in self.supported_entities: - continue - - for ent in sentences.get_spans("ner"): - if not self.__check_label( - entity, ent.labels[0].value, self.check_label_groups - ): - continue - textual_explanation = self.DEFAULT_EXPLANATION.format( - ent.labels[0].value - ) - explanation = self.build_flair_explanation( - round(ent.score, 2), textual_explanation - ) - flair_result = self._convert_to_recognizer_result(ent, explanation) - - results.append(flair_result) - - return results - - def _convert_to_recognizer_result(self, entity, explanation) -> RecognizerResult: - - entity_type = self.PRESIDIO_EQUIVALENCES.get(entity.tag, entity.tag) - flair_score = round(entity.score, 2) - - flair_results = RecognizerResult( - entity_type=entity_type, - start=entity.start_position, - end=entity.end_position, - score=flair_score, - analysis_explanation=explanation, - ) - - return flair_results - - def build_flair_explanation( - self, original_score: float, explanation: str - ) -> AnalysisExplanation: - """ - Create explanation for why this result was detected. 
- :param original_score: Score given by this recognizer - :param explanation: Explanation string - :return: - """ - explanation = AnalysisExplanation( - recognizer=self.__class__.__name__, - original_score=original_score, - textual_explanation=explanation, - ) - return explanation - - @staticmethod - def __check_label( - entity: str, label: str, check_label_groups: Tuple[Set, Set] - ) -> bool: - return any( - [entity in egrp and label in lgrp for egrp, lgrp in check_label_groups] - ) - - -if __name__ == "__main__": - - from presidio_analyzer import AnalyzerEngine, RecognizerRegistry - - flair_recognizer = ( - FlairRecognizer() - ) # This would download a very large (+2GB) model on the first run - - registry = RecognizerRegistry() - registry.add_recognizer(flair_recognizer) - - analyzer = AnalyzerEngine(registry=registry) - - results = analyzer.analyze( - "{first_name: Moustafa, sale_id: 235234}", - language="en", - return_decision_process=True, - ) - for result in results: - print(result) - print(result.analysis_explanation) diff --git a/spaces/bg6293/neuralmind-bert-base-portuguese-cased/README.md b/spaces/bg6293/neuralmind-bert-base-portuguese-cased/README.md deleted file mode 100644 index b253b59521a11e3bf336928348b5b2a87a08e82c..0000000000000000000000000000000000000000 --- a/spaces/bg6293/neuralmind-bert-base-portuguese-cased/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Neuralmind Bert Base Portuguese Cased -emoji: 🦀 -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/render.py b/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/render.py deleted file mode 100644 index 7a3d141f3e00216b530d05c205c5f94f0ad814ab..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/render.py +++ /dev/null @@ -1,507 +0,0 @@ -import os -import json -import pandas as pd -import cv2 -import numpy as np -from PIL import Image, ImageOps -from .rich import console - -from .generate import generate -from .noise import add_noise -from .animation import sample_from_cv2, sample_to_cv2, anim_frame_warp -from .animation_key_frames import DeformAnimKeys, LooperAnimKeys -from .video_audio_utilities import get_frame_name, get_next_frame -from .depth import DepthModel -from .colors import maintain_colors -from .parseq_adapter import ParseqAnimKeys -from .seed import next_seed -from .blank_frame_reroll import blank_frame_reroll -from .image_sharpening import unsharp_mask -from .load_images import get_mask, load_img, get_mask_from_file -from .hybrid_video import hybrid_generation, hybrid_composite -from .hybrid_video import get_matrix_for_hybrid_motion, get_matrix_for_hybrid_motion_prev, get_flow_for_hybrid_motion, get_flow_for_hybrid_motion_prev, image_transform_ransac, image_transform_optical_flow -from .save_images import save_image -from .composable_masks import compose_mask_with_check -from .settings import get_keys_to_exclude -from .deforum_controlnet import unpack_controlnet_vids, is_controlnet_enabled -# Webui -from modules.shared import opts, cmd_opts, state, sd_model -from modules import lowvram, devices, sd_hijack - -def render_animation(args, anim_args, video_args, parseq_args, loop_args, controlnet_args, animation_prompts, root): - # handle hybrid 
video generation - if anim_args.animation_mode in ['2D','3D']: - if anim_args.hybrid_composite or anim_args.hybrid_motion in ['Affine', 'Perspective', 'Optical Flow']: - args, anim_args, inputfiles = hybrid_generation(args, anim_args, root) - # path required by hybrid functions, even if hybrid_comp_save_extra_frames is False - hybrid_frame_path = os.path.join(args.outdir, 'hybridframes') - - # handle controlnet video input frames generation - if is_controlnet_enabled(controlnet_args): - unpack_controlnet_vids(args, anim_args, video_args, parseq_args, loop_args, controlnet_args, animation_prompts, root) - - # use parseq if manifest is provided - use_parseq = parseq_args.parseq_manifest != None and parseq_args.parseq_manifest.strip() - # expand key frame strings to values - keys = DeformAnimKeys(anim_args) if not use_parseq else ParseqAnimKeys(parseq_args, anim_args) - loopSchedulesAndData = LooperAnimKeys(loop_args, anim_args) - # resume animation - start_frame = 0 - if anim_args.resume_from_timestring: - for tmp in os.listdir(args.outdir): - if ".txt" in tmp : - pass - else: - filename = tmp.split("_") - # don't use saved depth maps to count number of frames - if anim_args.resume_timestring in filename and "depth" not in filename: - start_frame += 1 - #start_frame = start_frame - 1 - - # create output folder for the batch - os.makedirs(args.outdir, exist_ok=True) - print(f"Saving animation frames to:\n{args.outdir}") - - # save settings for the batch - exclude_keys = get_keys_to_exclude('general') - settings_filename = os.path.join(args.outdir, f"{args.timestring}_settings.txt") - with open(settings_filename, "w+", encoding="utf-8") as f: - args.__dict__["prompts"] = animation_prompts - s = {} - for d in [dict(args.__dict__), dict(anim_args.__dict__), dict(parseq_args.__dict__), dict(loop_args.__dict__)]: - for key, value in d.items(): - if key not in exclude_keys: - s[key] = value - json.dump(s, f, ensure_ascii=False, indent=4) - - # resume from timestring - if anim_args.resume_from_timestring: - args.timestring = anim_args.resume_timestring - - # Always enable pseudo-3d with parseq. 
No need for an extra toggle: - # Whether it's used or not in practice is defined by the schedules - if use_parseq: - anim_args.flip_2d_perspective = True - - # expand prompts out to per-frame - if use_parseq: - prompt_series = keys.prompts - else: - prompt_series = pd.Series([np.nan for a in range(anim_args.max_frames)]) - for i, prompt in animation_prompts.items(): - prompt_series[int(i)] = prompt - prompt_series = prompt_series.ffill().bfill() - - # check for video inits - using_vid_init = anim_args.animation_mode == 'Video Input' - - # load depth model for 3D - predict_depths = (anim_args.animation_mode == '3D' and anim_args.use_depth_warping) or anim_args.save_depth_maps - predict_depths = predict_depths or (anim_args.hybrid_composite and anim_args.hybrid_comp_mask_type in ['Depth','Video Depth']) - if predict_depths: - depth_model = DepthModel('cpu' if cmd_opts.lowvram or cmd_opts.medvram else root.device) - depth_model.load_midas(root.models_path, root.half_precision) - if anim_args.midas_weight < 1.0: - depth_model.load_adabins(root.models_path) - # depth-based hybrid composite mask requires saved depth maps - if anim_args.hybrid_composite and anim_args.hybrid_comp_mask_type =='Depth': - anim_args.save_depth_maps = True - else: - depth_model = None - anim_args.save_depth_maps = False - - # state for interpolating between diffusion steps - turbo_steps = 1 if using_vid_init else int(anim_args.diffusion_cadence) - turbo_prev_image, turbo_prev_frame_idx = None, 0 - turbo_next_image, turbo_next_frame_idx = None, 0 - - # resume animation - prev_img = None - color_match_sample = None - if anim_args.resume_from_timestring: - last_frame = start_frame-1 - if turbo_steps > 1: - last_frame -= last_frame%turbo_steps - path = os.path.join(args.outdir,f"{args.timestring}_{last_frame:05}.png") - img = cv2.imread(path) - #img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) # Changed the colors on resume - prev_img = img - if anim_args.color_coherence != 'None': - color_match_sample = img - if turbo_steps > 1: - turbo_next_image, turbo_next_frame_idx = prev_img, last_frame - turbo_prev_image, turbo_prev_frame_idx = turbo_next_image, turbo_next_frame_idx - start_frame = last_frame+turbo_steps - - args.n_samples = 1 - frame_idx = start_frame - - # reset the mask vals as they are overwritten in the compose_mask algorithm - mask_vals = {} - noise_mask_vals = {} - - mask_vals['everywhere'] = Image.new('1', (args.W, args.H), 1) - noise_mask_vals['everywhere'] = Image.new('1', (args.W, args.H), 1) - - mask_image = None - - if args.use_init and args.init_image != None and args.init_image != '': - _, mask_image = load_img(args.init_image, - shape=(args.W, args.H), - use_alpha_as_mask=args.use_alpha_as_mask) - mask_vals['init_mask'] = mask_image - noise_mask_vals['init_mask'] = mask_image - - # Grab the first frame masks since they wont be provided until next frame - if mask_image is None and args.use_mask: - mask_vals['init_mask'] = get_mask(args) - noise_mask_vals['init_mask'] = get_mask(args) # TODO?: add a different default noise mask - - if anim_args.use_mask_video: - mask_vals['video_mask'] = get_mask_from_file(get_next_frame(args.outdir, anim_args.video_mask_path, frame_idx, True), args) - noise_mask_vals['video_mask'] = get_mask_from_file(get_next_frame(args.outdir, anim_args.video_mask_path, frame_idx, True), args) - else: - mask_vals['video_mask'] = None - noise_mask_vals['video_mask'] = None - - #Webui - state.job_count = anim_args.max_frames - - while frame_idx < anim_args.max_frames: - #Webui - state.job = 
f"frame {frame_idx + 1}/{anim_args.max_frames}" - state.job_no = frame_idx + 1 - if state.interrupted: - break - - print(f"\033[36mAnimation frame: \033[0m{frame_idx}/{anim_args.max_frames} ") - - noise = keys.noise_schedule_series[frame_idx] - strength = keys.strength_schedule_series[frame_idx] - scale = keys.cfg_scale_schedule_series[frame_idx] - contrast = keys.contrast_schedule_series[frame_idx] - kernel = int(keys.kernel_schedule_series[frame_idx]) - sigma = keys.sigma_schedule_series[frame_idx] - amount = keys.amount_schedule_series[frame_idx] - threshold = keys.threshold_schedule_series[frame_idx] - hybrid_comp_schedules = { - "alpha": keys.hybrid_comp_alpha_schedule_series[frame_idx], - "mask_blend_alpha": keys.hybrid_comp_mask_blend_alpha_schedule_series[frame_idx], - "mask_contrast": keys.hybrid_comp_mask_contrast_schedule_series[frame_idx], - "mask_auto_contrast_cutoff_low": int(keys.hybrid_comp_mask_auto_contrast_cutoff_low_schedule_series[frame_idx]), - "mask_auto_contrast_cutoff_high": int(keys.hybrid_comp_mask_auto_contrast_cutoff_high_schedule_series[frame_idx]), - } - scheduled_sampler_name = None - scheduled_clipskip = None - mask_seq = None - noise_mask_seq = None - if anim_args.enable_steps_scheduling and keys.steps_schedule_series[frame_idx] is not None: - args.steps = int(keys.steps_schedule_series[frame_idx]) - if anim_args.enable_sampler_scheduling and keys.sampler_schedule_series[frame_idx] is not None: - scheduled_sampler_name = keys.sampler_schedule_series[frame_idx].casefold() - if anim_args.enable_clipskip_scheduling and keys.clipskip_schedule_series[frame_idx] is not None: - scheduled_clipskip = int(keys.clipskip_schedule_series[frame_idx]) - if args.use_mask and keys.mask_schedule_series[frame_idx] is not None: - mask_seq = keys.mask_schedule_series[frame_idx] - if anim_args.use_noise_mask and keys.noise_mask_schedule_series[frame_idx] is not None: - noise_mask_seq = keys.noise_mask_schedule_series[frame_idx] - - if args.use_mask and not anim_args.use_noise_mask: - noise_mask_seq = mask_seq - - depth = None - - if anim_args.animation_mode == '3D' and (cmd_opts.lowvram or cmd_opts.medvram): - # Unload the main checkpoint and load the depth model - lowvram.send_everything_to_cpu() - sd_hijack.model_hijack.undo_hijack(sd_model) - devices.torch_gc() - depth_model.to(root.device) - - # emit in-between frames - if turbo_steps > 1: - tween_frame_start_idx = max(0, frame_idx-turbo_steps) - for tween_frame_idx in range(tween_frame_start_idx, frame_idx): - tween = float(tween_frame_idx - tween_frame_start_idx + 1) / float(frame_idx - tween_frame_start_idx) - print(f" Creating in-between frame: {tween_frame_idx}; tween:{tween:0.2f};") - - advance_prev = turbo_prev_image is not None and tween_frame_idx > turbo_prev_frame_idx - advance_next = tween_frame_idx > turbo_next_frame_idx - - if depth_model is not None: - assert(turbo_next_image is not None) - depth = depth_model.predict(turbo_next_image, anim_args, root.half_precision) - - if advance_prev: - turbo_prev_image, _ = anim_frame_warp(turbo_prev_image, args, anim_args, keys, tween_frame_idx, depth_model, depth=depth, device=root.device, half_precision=root.half_precision) - if advance_next: - turbo_next_image, _ = anim_frame_warp(turbo_next_image, args, anim_args, keys, tween_frame_idx, depth_model, depth=depth, device=root.device, half_precision=root.half_precision) - - # hybrid video motion - warps turbo_prev_image or turbo_next_image to match motion - if tween_frame_idx > 0: - if anim_args.hybrid_motion in 
['Affine', 'Perspective']: - if anim_args.hybrid_motion_use_prev_img: - if advance_prev: - matrix = get_matrix_for_hybrid_motion_prev(tween_frame_idx, (args.W, args.H), inputfiles, turbo_prev_image, anim_args.hybrid_motion) - turbo_prev_image = image_transform_ransac(turbo_prev_image, matrix, anim_args.hybrid_motion, cv2.BORDER_WRAP if anim_args.border == 'wrap' else cv2.BORDER_REPLICATE) - if advance_next: - matrix = get_matrix_for_hybrid_motion_prev(tween_frame_idx, (args.W, args.H), inputfiles, turbo_next_image, anim_args.hybrid_motion) - turbo_next_image = image_transform_ransac(turbo_next_image, matrix, anim_args.hybrid_motion, cv2.BORDER_WRAP if anim_args.border == 'wrap' else cv2.BORDER_REPLICATE) - else: - matrix = get_matrix_for_hybrid_motion(tween_frame_idx-1, (args.W, args.H), inputfiles, anim_args.hybrid_motion) - if advance_prev: - turbo_prev_image = image_transform_ransac(turbo_prev_image, matrix, anim_args.hybrid_motion, cv2.BORDER_WRAP if anim_args.border == 'wrap' else cv2.BORDER_REPLICATE) - if advance_next: - turbo_next_image = image_transform_ransac(turbo_next_image, matrix, anim_args.hybrid_motion, cv2.BORDER_WRAP if anim_args.border == 'wrap' else cv2.BORDER_REPLICATE) - if anim_args.hybrid_motion in ['Optical Flow']: - if anim_args.hybrid_motion_use_prev_img: - if advance_prev: - flow = get_flow_for_hybrid_motion_prev(tween_frame_idx-1, (args.W, args.H), inputfiles, hybrid_frame_path, turbo_prev_image, anim_args.hybrid_flow_method, anim_args.hybrid_comp_save_extra_frames) - turbo_prev_image = image_transform_optical_flow(turbo_prev_image, flow, cv2.BORDER_WRAP if anim_args.border == 'wrap' else cv2.BORDER_REPLICATE) - if advance_next: - flow = get_flow_for_hybrid_motion_prev(tween_frame_idx-1, (args.W, args.H), inputfiles, hybrid_frame_path, turbo_next_image, anim_args.hybrid_flow_method, anim_args.hybrid_comp_save_extra_frames) - turbo_next_image = image_transform_optical_flow(turbo_next_image, flow, cv2.BORDER_WRAP if anim_args.border == 'wrap' else cv2.BORDER_REPLICATE) - else: - flow = get_flow_for_hybrid_motion(tween_frame_idx-1, (args.W, args.H), inputfiles, hybrid_frame_path, anim_args.hybrid_flow_method, anim_args.hybrid_comp_save_extra_frames) - if advance_prev: - turbo_prev_image = image_transform_optical_flow(turbo_prev_image, flow, cv2.BORDER_WRAP if anim_args.border == 'wrap' else cv2.BORDER_REPLICATE) - if advance_next: - turbo_next_image = image_transform_optical_flow(turbo_next_image, flow, cv2.BORDER_WRAP if anim_args.border == 'wrap' else cv2.BORDER_REPLICATE) - - turbo_prev_frame_idx = turbo_next_frame_idx = tween_frame_idx - - if turbo_prev_image is not None and tween < 1.0: - img = turbo_prev_image*(1.0-tween) + turbo_next_image*tween - else: - img = turbo_next_image - - # intercept and override to grayscale - if anim_args.color_force_grayscale: - img = cv2.cvtColor(img.astype(np.uint8), cv2.COLOR_BGR2GRAY) - img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - - filename = f"{args.timestring}_{tween_frame_idx:05}.png" - cv2.imwrite(os.path.join(args.outdir, filename), img) - if anim_args.save_depth_maps: - depth_model.save(os.path.join(args.outdir, f"{args.timestring}_depth_{tween_frame_idx:05}.png"), depth) - if turbo_next_image is not None: - prev_img = turbo_next_image - - # apply transforms to previous frame - if prev_img is not None: - prev_img, depth = anim_frame_warp(prev_img, args, anim_args, keys, frame_idx, depth_model, depth=None, device=root.device, half_precision=root.half_precision) - - # hybrid video motion - warps prev_img to match 
motion, usually to prepare for compositing - if frame_idx > 0: - if anim_args.hybrid_motion in ['Affine', 'Perspective']: - if anim_args.hybrid_motion_use_prev_img: - matrix = get_matrix_for_hybrid_motion_prev(frame_idx, (args.W, args.H), inputfiles, prev_img, anim_args.hybrid_motion) - else: - matrix = get_matrix_for_hybrid_motion(frame_idx-1, (args.W, args.H), inputfiles, anim_args.hybrid_motion) - prev_img = image_transform_ransac(prev_img, matrix, anim_args.hybrid_motion, cv2.BORDER_WRAP if anim_args.border == 'wrap' else cv2.BORDER_REPLICATE) - if anim_args.hybrid_motion in ['Optical Flow']: - if anim_args.hybrid_motion_use_prev_img: - flow = get_flow_for_hybrid_motion_prev(frame_idx-1, (args.W, args.H), inputfiles, hybrid_frame_path, prev_img, anim_args.hybrid_flow_method, anim_args.hybrid_comp_save_extra_frames) - else: - flow = get_flow_for_hybrid_motion(frame_idx-1, (args.W, args.H), inputfiles, hybrid_frame_path, anim_args.hybrid_flow_method, anim_args.hybrid_comp_save_extra_frames) - prev_img = image_transform_optical_flow(prev_img, flow, cv2.BORDER_WRAP if anim_args.border == 'wrap' else cv2.BORDER_REPLICATE) - - # do hybrid video - composites video frame into prev_img (now warped if using motion) - if anim_args.hybrid_composite: - args, prev_img = hybrid_composite(args, anim_args, frame_idx, prev_img, depth_model, hybrid_comp_schedules, root) - - # apply color matching - if anim_args.color_coherence != 'None': - # video color matching - hybrid_available = anim_args.hybrid_composite or anim_args.hybrid_motion in ['Optical Flow', 'Affine', 'Perspective'] - if anim_args.color_coherence == 'Video Input' and hybrid_available: - video_color_coherence_frame = int(frame_idx) % int(anim_args.color_coherence_video_every_N_frames) == 0 - if video_color_coherence_frame: - prev_vid_img = Image.open(os.path.join(args.outdir, 'inputframes', get_frame_name(anim_args.video_init_path) + f"{frame_idx:05}.jpg")) - prev_vid_img = prev_vid_img.resize((args.W, args.H), Image.Resampling.LANCZOS) - color_match_sample = np.asarray(prev_vid_img) - color_match_sample = cv2.cvtColor(color_match_sample, cv2.COLOR_RGB2BGR) - if color_match_sample is None: - color_match_sample = prev_img.copy() - else: - prev_img = maintain_colors(prev_img, color_match_sample, anim_args.color_coherence) - - # intercept and override to grayscale - if anim_args.color_force_grayscale: - prev_img = cv2.cvtColor(prev_img, cv2.COLOR_BGR2GRAY) - prev_img = cv2.cvtColor(prev_img, cv2.COLOR_GRAY2BGR) - - # apply scaling - contrast_image = (prev_img * contrast).round().astype(np.uint8) - # anti-blur - if amount > 0: - contrast_image = unsharp_mask(contrast_image, (kernel, kernel), sigma, amount, threshold, mask_image if args.use_mask else None) - # apply frame noising - if args.use_mask or anim_args.use_noise_mask: - args.noise_mask = compose_mask_with_check(root, args, noise_mask_seq, noise_mask_vals, Image.fromarray(cv2.cvtColor(contrast_image, cv2.COLOR_BGR2RGB))) - noised_image = add_noise(contrast_image, noise, args.seed, anim_args.noise_type, - (anim_args.perlin_w, anim_args.perlin_h, anim_args.perlin_octaves, anim_args.perlin_persistence), - args.noise_mask, args.invert_mask) - - # use transformed previous frame as init for current - args.use_init = True - args.init_sample = Image.fromarray(cv2.cvtColor(noised_image, cv2.COLOR_BGR2RGB)) - args.strength = max(0.0, min(1.0, strength)) - - args.scale = scale - - # Pix2Pix Image CFG Scale - does *nothing* with non pix2pix checkpoints - args.pix2pix_img_cfg_scale = 
float(keys.pix2pix_img_cfg_scale_series[frame_idx]) - - # grab prompt for current frame - args.prompt = prompt_series[frame_idx] - - if args.seed_behavior == 'schedule' or use_parseq: - args.seed = int(keys.seed_schedule_series[frame_idx]) - - if anim_args.enable_checkpoint_scheduling: - args.checkpoint = keys.checkpoint_schedule_series[frame_idx] - else: - args.checkpoint = None - - #SubSeed scheduling - if anim_args.enable_subseed_scheduling: - args.subseed = int(keys.subseed_schedule_series[frame_idx]) - args.subseed_strength = float(keys.subseed_strength_schedule_series[frame_idx]) - - if use_parseq: - args.seed_enable_extras = True - args.subseed = int(keys.subseed_series[frame_idx]) - args.subseed_strength = keys.subseed_strength_series[frame_idx] - - prompt_to_print, *after_neg = args.prompt.strip().split("--neg") - prompt_to_print = prompt_to_print.strip() - after_neg = "".join(after_neg).strip() - - print(f"\033[32mSeed: \033[0m{args.seed}") - print(f"\033[35mPrompt: \033[0m{prompt_to_print}") - if after_neg and after_neg.strip(): - print(f"\033[91mNeg Prompt: \033[0m{after_neg}") - if not using_vid_init: - # print motion table to cli if anim mode = 2D or 3D - if anim_args.animation_mode in ['2D','3D']: - print_render_table(anim_args, keys, frame_idx) - - # grab init image for current frame - elif using_vid_init: - init_frame = get_next_frame(args.outdir, anim_args.video_init_path, frame_idx, False) - print(f"Using video init frame {init_frame}") - args.init_image = init_frame - if anim_args.use_mask_video: - mask_vals['video_mask'] = get_mask_from_file(get_next_frame(args.outdir, anim_args.video_mask_path, frame_idx, True), args) - - if args.use_mask: - args.mask_image = compose_mask_with_check(root, args, mask_seq, mask_vals, args.init_sample) if args.init_sample is not None else None # we need it only after the first frame anyway - - # setting up some arguments for the looper - loop_args.imageStrength = loopSchedulesAndData.image_strength_schedule_series[frame_idx] - loop_args.blendFactorMax = loopSchedulesAndData.blendFactorMax_series[frame_idx] - loop_args.blendFactorSlope = loopSchedulesAndData.blendFactorSlope_series[frame_idx] - loop_args.tweeningFrameSchedule = loopSchedulesAndData.tweening_frames_schedule_series[frame_idx] - loop_args.colorCorrectionFactor = loopSchedulesAndData.color_correction_factor_series[frame_idx] - loop_args.use_looper = loopSchedulesAndData.use_looper - loop_args.imagesToKeyframe = loopSchedulesAndData.imagesToKeyframe - - if scheduled_clipskip is not None: - opts.data["CLIP_stop_at_last_layers"] = scheduled_clipskip - - if anim_args.animation_mode == '3D' and (cmd_opts.lowvram or cmd_opts.medvram): - depth_model.to('cpu') - devices.torch_gc() - lowvram.setup_for_low_vram(sd_model, cmd_opts.medvram) - sd_hijack.model_hijack.hijack(sd_model) - - # sample the diffusion model - image = generate(args, anim_args, loop_args, controlnet_args, root, frame_idx, sampler_name=scheduled_sampler_name) - patience = 10 - - # intercept and override to grayscale - if anim_args.color_force_grayscale: - image = ImageOps.grayscale(image) - image = ImageOps.colorize(image, black ="black", white ="white") - - # reroll blank frame - if not image.getbbox(): - print("Blank frame detected! 
If you don't have the NSFW filter enabled, this may be due to a glitch!") - if args.reroll_blank_frames == 'reroll': - while not image.getbbox(): - print("Rerolling with +1 seed...") - args.seed += 1 - image = generate(args, anim_args, loop_args, controlnet_args, root, frame_idx, sampler_name=scheduled_sampler_name) - patience -= 1 - if patience == 0: - print("Rerolling with +1 seed failed for 10 iterations! Try setting webui's precision to 'full' and if it fails, please report this to the devs! Interrupting...") - state.interrupted = True - state.current_image = image - return - elif args.reroll_blank_frames == 'interrupt': - print("Interrupting to save your eyes...") - state.interrupted = True - state.current_image = image - image = blank_frame_reroll(image, args, root, frame_idx) - if image == None: - return - - opencv_image = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2BGR) - if not using_vid_init: - prev_img = opencv_image - - if turbo_steps > 1: - turbo_prev_image, turbo_prev_frame_idx = turbo_next_image, turbo_next_frame_idx - turbo_next_image, turbo_next_frame_idx = opencv_image, frame_idx - frame_idx += turbo_steps - else: - filename = f"{args.timestring}_{frame_idx:05}.png" - save_image(image, 'PIL', filename, args, video_args, root) - - if anim_args.save_depth_maps: - if cmd_opts.lowvram or cmd_opts.medvram: - lowvram.send_everything_to_cpu() - sd_hijack.model_hijack.undo_hijack(sd_model) - devices.torch_gc() - depth_model.to(root.device) - depth = depth_model.predict(opencv_image, anim_args, root.half_precision) - depth_model.save(os.path.join(args.outdir, f"{args.timestring}_depth_{frame_idx:05}.png"), depth) - if cmd_opts.lowvram or cmd_opts.medvram: - depth_model.to('cpu') - devices.torch_gc() - lowvram.setup_for_low_vram(sd_model, cmd_opts.medvram) - sd_hijack.model_hijack.hijack(sd_model) - frame_idx += 1 - - state.current_image = image - - args.seed = next_seed(args) - -def print_render_table(anim_args, keys, frame_idx): - from rich.table import Table - from rich import box - table = Table(padding=0, box=box.ROUNDED) - field_names = [] - if anim_args.animation_mode == '2D': - short_zoom = round(keys.zoom_series[frame_idx], 6) - field_names += ["Angle", "Zoom"] - field_names += ["Tr X", "Tr Y"] - if anim_args.animation_mode == '3D': - field_names += ["Tr Z", "Ro X", "Ro Y", "Ro Z"] - if anim_args.enable_perspective_flip: - field_names += ["Pf T", "Pf P", "Pf G", "Pf F"] - for field_name in field_names: - table.add_column(field_name, justify="center") - - rows = [] - if anim_args.animation_mode == '2D': - rows += [str(keys.angle_series[frame_idx]),str(short_zoom)] - rows += [str(keys.translation_x_series[frame_idx]),str(keys.translation_y_series[frame_idx])] - if anim_args.animation_mode == '3D': - rows += [str(keys.translation_z_series[frame_idx]),str(keys.rotation_3d_x_series[frame_idx]),str(keys.rotation_3d_y_series[frame_idx]),str(keys.rotation_3d_z_series[frame_idx])] - if anim_args.enable_perspective_flip: - rows +=[str(keys.perspective_flip_theta_series[frame_idx]), str(keys.perspective_flip_phi_series[frame_idx]), str(keys.perspective_flip_gamma_series[frame_idx]), str(keys.perspective_flip_fv_series[frame_idx])] - table.add_row(*rows) - - console.print(table) \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Akumu No Bikutoria Torrent The Ultimate Guide to the Fourth Episode of Gundam Wing.md b/spaces/bioriAsaeru/text-to-voice/Akumu No Bikutoria Torrent The Ultimate Guide to the Fourth Episode of Gundam Wing.md deleted file mode 100644 index 
a99f5c06856aef4bab58eb9ea72aad05609c2a17..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Akumu No Bikutoria Torrent The Ultimate Guide to the Fourth Episode of Gundam Wing.md +++ /dev/null @@ -1,6 +0,0 @@ -

-Akumu No Bikutoria Torrent
-Download File ✏ ✏ ✏ https://urloso.com/2uyOZ4
-
- aaccfb2cb3
-

    diff --git a/spaces/bioriAsaeru/text-to-voice/Alias SpeedForm 2006 Scaricare Keygen TOP 64 Bits IT.md b/spaces/bioriAsaeru/text-to-voice/Alias SpeedForm 2006 Scaricare Keygen TOP 64 Bits IT.md deleted file mode 100644 index 6718d60cd21f691cf0df164fab2ff3248d500236..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Alias SpeedForm 2006 Scaricare Keygen TOP 64 Bits IT.md +++ /dev/null @@ -1,6 +0,0 @@ -

-Alias SpeedForm 2006 scaricare keygen 64 bits IT
-Download File · https://urloso.com/2uyPlA
-
- aaccfb2cb3
-

    diff --git a/spaces/bla/tranny/test.py b/spaces/bla/tranny/test.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/losses/__init__.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/losses/__init__.py deleted file mode 100644 index d55107b2c11822cab749ed3683cf19020802898a..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/losses/__init__.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""Loss related classes and functions. In particular the loss balancer from -EnCodec, and the usual spectral losses.""" - -# flake8: noqa -from .balancer import Balancer -from .sisnr import SISNR -from .stftloss import ( - LogSTFTMagnitudeLoss, - MRSTFTLoss, - SpectralConvergenceLoss, - STFTLoss -) -from .specloss import ( - MelSpectrogramL1Loss, - MultiScaleMelSpectrogramLoss, -) diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/filter.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/filter.py deleted file mode 100644 index 18a856789e390e0a54484db97488e2e869c27ac8..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/DensePose/densepose/modeling/filter.py +++ /dev/null @@ -1,94 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -from typing import List -import torch - -from detectron2.config import CfgNode -from detectron2.structures import Instances -from detectron2.structures.boxes import matched_pairwise_iou - - -class DensePoseDataFilter(object): - def __init__(self, cfg: CfgNode): - self.iou_threshold = cfg.MODEL.ROI_DENSEPOSE_HEAD.FG_IOU_THRESHOLD - self.keep_masks = cfg.MODEL.ROI_DENSEPOSE_HEAD.COARSE_SEGM_TRAINED_BY_MASKS - - @torch.no_grad() - def __call__(self, features: List[torch.Tensor], proposals_with_targets: List[Instances]): - """ - Filters proposals with targets to keep only the ones relevant for - DensePose training - - Args: - features (list[Tensor]): input data as a list of features, - each feature is a tensor. Axis 0 represents the number of - images `N` in the input data; axes 1-3 are channels, - height, and width, which may vary between features - (e.g., if a feature pyramid is used). - proposals_with_targets (list[Instances]): length `N` list of - `Instances`. The i-th `Instances` contains instances - (proposals, GT) for the i-th input image, - Returns: - list[Tensor]: filtered features - list[Instances]: filtered proposals - """ - proposals_filtered = [] - # TODO: the commented out code was supposed to correctly deal with situations - # where no valid DensePose GT is available for certain images. The corresponding - # image features were sliced and proposals were filtered. This led to performance - # deterioration, both in terms of runtime and in terms of evaluation results. 
- # - # feature_mask = torch.ones( - # len(proposals_with_targets), - # dtype=torch.bool, - # device=features[0].device if len(features) > 0 else torch.device("cpu"), - # ) - for i, proposals_per_image in enumerate(proposals_with_targets): - if not proposals_per_image.has("gt_densepose") and ( - not proposals_per_image.has("gt_masks") or not self.keep_masks - ): - # feature_mask[i] = 0 - continue - gt_boxes = proposals_per_image.gt_boxes - est_boxes = proposals_per_image.proposal_boxes - # apply match threshold for densepose head - iou = matched_pairwise_iou(gt_boxes, est_boxes) - iou_select = iou > self.iou_threshold - proposals_per_image = proposals_per_image[iou_select] # pyre-ignore[6] - - N_gt_boxes = len(proposals_per_image.gt_boxes) - assert N_gt_boxes == len(proposals_per_image.proposal_boxes), ( - f"The number of GT boxes {N_gt_boxes} is different from the " - f"number of proposal boxes {len(proposals_per_image.proposal_boxes)}" - ) - # filter out any target without suitable annotation - if self.keep_masks: - gt_masks = ( - proposals_per_image.gt_masks - if hasattr(proposals_per_image, "gt_masks") - else [None] * N_gt_boxes - ) - else: - gt_masks = [None] * N_gt_boxes - gt_densepose = ( - proposals_per_image.gt_densepose - if hasattr(proposals_per_image, "gt_densepose") - else [None] * N_gt_boxes - ) - assert len(gt_masks) == N_gt_boxes - assert len(gt_densepose) == N_gt_boxes - selected_indices = [ - i - for i, (dp_target, mask_target) in enumerate(zip(gt_densepose, gt_masks)) - if (dp_target is not None) or (mask_target is not None) - ] - # if not len(selected_indices): - # feature_mask[i] = 0 - # continue - if len(selected_indices) != N_gt_boxes: - proposals_per_image = proposals_per_image[selected_indices] # pyre-ignore[6] - assert len(proposals_per_image.gt_boxes) == len(proposals_per_image.proposal_boxes) - proposals_filtered.append(proposals_per_image) - # features_filtered = [feature[feature_mask] for feature in features] - # return features_filtered, proposals_filtered - return features, proposals_filtered diff --git a/spaces/ceckenrode/Memory-Chat-Story-Generator-ChatGPT/README.md b/spaces/ceckenrode/Memory-Chat-Story-Generator-ChatGPT/README.md deleted file mode 100644 index f203d1c4c4c0a7b0c9d73bedd1e25905f3778d74..0000000000000000000000000000000000000000 --- a/spaces/ceckenrode/Memory-Chat-Story-Generator-ChatGPT/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Memory Chat Story Generator ChatGPT -emoji: 💻 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/chaozn/fastai_dogs_vs_cats/README.md b/spaces/chaozn/fastai_dogs_vs_cats/README.md deleted file mode 100644 index e131642008fd708c337ac6072307e1fb6e870083..0000000000000000000000000000000000000000 --- a/spaces/chaozn/fastai_dogs_vs_cats/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Fastai Dogs Vs Cats -emoji: 👁 -colorFrom: purple -colorTo: gray -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/charles0519/ChuanhuChatGPT/overwrites.py b/spaces/charles0519/ChuanhuChatGPT/overwrites.py deleted file mode 100644 index a87499a81bb3c23bf34c1faadcc02085567cd447..0000000000000000000000000000000000000000 --- a/spaces/charles0519/ChuanhuChatGPT/overwrites.py +++ 
/dev/null @@ -1,55 +0,0 @@ -from __future__ import annotations -import logging - -from llama_index import Prompt -from typing import List, Tuple -import mdtex2html - -from presets import * -from llama_func import * - - -def compact_text_chunks(self, prompt: Prompt, text_chunks: List[str]) -> List[str]: - logging.debug("Compacting text chunks...🚀🚀🚀") - combined_str = [c.strip() for c in text_chunks if c.strip()] - combined_str = [f"[{index+1}] {c}" for index, c in enumerate(combined_str)] - combined_str = "\n\n".join(combined_str) - # resplit based on self.max_chunk_overlap - text_splitter = self.get_text_splitter_given_prompt(prompt, 1, padding=1) - return text_splitter.split_text(combined_str) - - -def postprocess( - self, y: List[Tuple[str | None, str | None]] -) -> List[Tuple[str | None, str | None]]: - """ - Parameters: - y: List of tuples representing the message and response pairs. Each message and response should be a string, which may be in Markdown format. - Returns: - List of tuples representing the message and response. Each message and response will be a string of HTML. - """ - if y is None or y == []: - return [] - tag_regex = re.compile(r"^<\w+>[^<]+") - if tag_regex.search(y[-1][1]): - y[-1] = (convert_user(y[-1][0]), y[-1][1]) - else: - y[-1] = (convert_user(y[-1][0]), convert_mdtext(y[-1][1])) - return y - -with open("./assets/custom.js", "r", encoding="utf-8") as f, open("./assets/Kelpy-Codos.js", "r", encoding="utf-8") as f2: - customJS = f.read() - kelpyCodos = f2.read() - -def reload_javascript(): - print("Reloading javascript...") - js = f'' - def template_response(*args, **kwargs): - res = GradioTemplateResponseOriginal(*args, **kwargs) - res.body = res.body.replace(b'', f'{js}'.encode("utf8")) - res.init_headers() - return res - - gr.routes.templates.TemplateResponse = template_response - -GradioTemplateResponseOriginal = gr.routes.templates.TemplateResponse \ No newline at end of file diff --git a/spaces/charlesai/CLIP/app.py b/spaces/charlesai/CLIP/app.py deleted file mode 100644 index b0bf2a3595002caddb9c1eebd3d2a5de397e0880..0000000000000000000000000000000000000000 --- a/spaces/charlesai/CLIP/app.py +++ /dev/null @@ -1,44 +0,0 @@ -import gradio as gr -import torch -import clip -from PIL import Image - -print("Getting device...") -device = "cuda" if torch.cuda.is_available() else "cpu" -print("Loading model...") -model, preprocess = clip.load("ViT-B/32", device=device) -print("Loaded model.") - - -def process(image, prompt): - print("Inferring...") - image = preprocess(image).unsqueeze(0).to(device) - print("Image: ", image) - - prompts = prompt.split("\n") - print("Prompts: ", prompts) - text = clip.tokenize(prompts).to(device) - print("Tokens: ", text) - - with torch.no_grad(): - logits_per_image, logits_per_text = model(image, text) - probs = logits_per_image.softmax(dim=-1).cpu() - print("Probs: ", probs) - - return {k: v.item() for (k,v) in zip(prompts, probs[0])} - - -iface = gr.Interface( - fn=process, - inputs=[ - gr.Image(type="pil", label="Image"), - gr.Textbox(lines=5, label="Prompts (newline-separated)"), - ], - outputs="label", - examples=[ - ["dog.jpg", "a photo of a dog\na photo of a cat"], - ["cat.jpg", "a photo of a dog\na photo of a cat"], - ["car.jpg", "a red car on a golf course\na red sports car on a road\na blue sports car\na red family car"] - ] -) -iface.launch() diff --git a/spaces/chendl/compositional_test/transformers/examples/legacy/token-classification/run_chunk.sh 
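A side note on the deleted `spaces/charlesai/CLIP/app.py` above: it scores one image against a list of newline-separated prompts and returns a softmax over the image-text logits. A condensed sketch of the same flow, assuming OpenAI's `clip` package is installed and that a local `dog.jpg` exists (both are assumptions of this example, not of the diff):

```python
import clip          # OpenAI CLIP package (assumed installed)
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)   # same checkpoint the app loads

prompts = ["a photo of a dog", "a photo of a cat"]
image = preprocess(Image.open("dog.jpg")).unsqueeze(0).to(device)  # hypothetical local image
text = clip.tokenize(prompts).to(device)

with torch.no_grad():
    logits_per_image, _ = model(image, text)            # scaled image-text similarities
    probs = logits_per_image.softmax(dim=-1).cpu()[0]   # one probability per prompt

for prompt, p in zip(prompts, probs):
    print(f"{prompt}: {p.item():.3f}")
```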
b/spaces/chendl/compositional_test/transformers/examples/legacy/token-classification/run_chunk.sh deleted file mode 100644 index 13341555b699a45f3c2aed59672d950291f54dd4..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/legacy/token-classification/run_chunk.sh +++ /dev/null @@ -1,37 +0,0 @@ -if ! [ -f ./dev.txt ]; then - echo "Downloading CONLL2003 dev dataset...." - curl -L -o ./dev.txt 'https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/valid.txt' -fi - -if ! [ -f ./test.txt ]; then - echo "Downloading CONLL2003 test dataset...." - curl -L -o ./test.txt 'https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/test.txt' -fi - -if ! [ -f ./train.txt ]; then - echo "Downloading CONLL2003 train dataset...." - curl -L -o ./train.txt 'https://github.com/davidsbatista/NER-datasets/raw/master/CONLL2003/train.txt' -fi - -export MAX_LENGTH=200 -export BERT_MODEL=bert-base-uncased -export OUTPUT_DIR=chunker-model -export BATCH_SIZE=32 -export NUM_EPOCHS=3 -export SAVE_STEPS=750 -export SEED=1 - -python3 run_ner.py \ ---task_type Chunk \ ---data_dir . \ ---model_name_or_path $BERT_MODEL \ ---output_dir $OUTPUT_DIR \ ---max_seq_length $MAX_LENGTH \ ---num_train_epochs $NUM_EPOCHS \ ---per_gpu_train_batch_size $BATCH_SIZE \ ---save_steps $SAVE_STEPS \ ---seed $SEED \ ---do_train \ ---do_eval \ ---do_predict - diff --git a/spaces/chendl/compositional_test/transformers/examples/pytorch/text-generation/run_generation.py b/spaces/chendl/compositional_test/transformers/examples/pytorch/text-generation/run_generation.py deleted file mode 100644 index e0dda0ec0c2fa27b2af741ed80e49fb14c5b9792..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/pytorch/text-generation/run_generation.py +++ /dev/null @@ -1,435 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2018 Google AI, Google Brain and Carnegie Mellon University Authors and the HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
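For context on the `run_chunk.sh` script above: the CoNLL-2003 files it downloads are whitespace-separated columns (token, POS tag, syntactic chunk tag, NER tag), with blank lines between sentences and `-DOCSTART-` lines between documents; the Chunk task consumes the third column. A small reader sketch under that format assumption (`read_conll_chunks` and the path are illustrative helpers, not part of the example scripts):

```python
from typing import List, Tuple

def read_conll_chunks(path: str) -> List[Tuple[List[str], List[str]]]:
    """Return (tokens, chunk_tags) per sentence from a CoNLL-2003 style file."""
    sentences, tokens, tags = [], [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("-DOCSTART-"):
                if tokens:                       # blank line / doc marker ends a sentence
                    sentences.append((tokens, tags))
                    tokens, tags = [], []
                continue
            cols = line.split()
            tokens.append(cols[0])               # surface token
            tags.append(cols[2])                 # syntactic chunk tag (third column)
    if tokens:
        sentences.append((tokens, tags))
    return sentences

# e.g. read_conll_chunks("train.txt")[0] -> first sentence's tokens and chunk tags
```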
-""" Conditional text generation with the auto-regressive models of the library (GPT/GPT-2/CTRL/Transformer-XL/XLNet) -""" - - -import argparse -import logging -from typing import Tuple - -import numpy as np -import torch - -from transformers import ( - CTRLLMHeadModel, - CTRLTokenizer, - GenerationMixin, - GPT2LMHeadModel, - GPT2Tokenizer, - OpenAIGPTLMHeadModel, - OpenAIGPTTokenizer, - TransfoXLLMHeadModel, - TransfoXLTokenizer, - XLMTokenizer, - XLMWithLMHeadModel, - XLNetLMHeadModel, - XLNetTokenizer, -) -from transformers.modeling_outputs import CausalLMOutputWithPast - - -logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, -) -logger = logging.getLogger(__name__) - -MAX_LENGTH = int(10000) # Hardcoded max length to avoid infinite loop - -MODEL_CLASSES = { - "gpt2": (GPT2LMHeadModel, GPT2Tokenizer), - "ctrl": (CTRLLMHeadModel, CTRLTokenizer), - "openai-gpt": (OpenAIGPTLMHeadModel, OpenAIGPTTokenizer), - "xlnet": (XLNetLMHeadModel, XLNetTokenizer), - "transfo-xl": (TransfoXLLMHeadModel, TransfoXLTokenizer), - "xlm": (XLMWithLMHeadModel, XLMTokenizer), -} - -# Padding text to help Transformer-XL and XLNet with short prompts as proposed by Aman Rusia -# in https://github.com/rusiaaman/XLNet-gen#methodology -# and https://medium.com/@amanrusia/xlnet-speaks-comparison-to-gpt-2-ea1a4e9ba39e -PREFIX = """In 1991, the remains of Russian Tsar Nicholas II and his family -(except for Alexei and Maria) are discovered. -The voice of Nicholas's young son, Tsarevich Alexei Nikolaevich, narrates the -remainder of the story. 1883 Western Siberia, -a young Grigori Rasputin is asked by his father and a group of men to perform magic. -Rasputin has a vision and denounces one of the men as a horse thief. Although his -father initially slaps him for making such an accusation, Rasputin watches as the -man is chased outside and beaten. Twenty years later, Rasputin sees a vision of -the Virgin Mary, prompting him to become a priest. Rasputin quickly becomes famous, -with people, even a bishop, begging for his blessing. """ - - -def set_seed(args): - np.random.seed(args.seed) - torch.manual_seed(args.seed) - if args.n_gpu > 0: - torch.cuda.manual_seed_all(args.seed) - - -# -# Functions to prepare models' input -# - - -def prepare_ctrl_input(args, _, tokenizer, prompt_text): - if args.temperature > 0.7: - logger.info("CTRL typically works better with lower temperatures (and lower top_k).") - - encoded_prompt = tokenizer.encode(prompt_text, add_special_tokens=False) - if not any(encoded_prompt[0] == x for x in tokenizer.control_codes.values()): - logger.info("WARNING! You are not starting your generation from a control code so you won't get good results") - return prompt_text - - -def prepare_xlm_input(args, model, tokenizer, prompt_text): - # kwargs = {"language": None, "mask_token_id": None} - - # Set the language - use_lang_emb = hasattr(model.config, "use_lang_emb") and model.config.use_lang_emb - if hasattr(model.config, "lang2id") and use_lang_emb: - available_languages = model.config.lang2id.keys() - if args.xlm_language in available_languages: - language = args.xlm_language - else: - language = None - while language not in available_languages: - language = input("Using XLM. 
Select language in " + str(list(available_languages)) + " >>> ") - - model.config.lang_id = model.config.lang2id[language] - # kwargs["language"] = tokenizer.lang2id[language] - - # TODO fix mask_token_id setup when configurations will be synchronized between models and tokenizers - # XLM masked-language modeling (MLM) models need masked token - # is_xlm_mlm = "mlm" in args.model_name_or_path - # if is_xlm_mlm: - # kwargs["mask_token_id"] = tokenizer.mask_token_id - - return prompt_text - - -def prepare_xlnet_input(args, _, tokenizer, prompt_text): - prefix = args.prefix if args.prefix else args.padding_text if args.padding_text else PREFIX - prompt_text = prefix + prompt_text - return prompt_text - - -def prepare_transfoxl_input(args, _, tokenizer, prompt_text): - prefix = args.prefix if args.prefix else args.padding_text if args.padding_text else PREFIX - prompt_text = prefix + prompt_text - return prompt_text - - -PREPROCESSING_FUNCTIONS = { - "ctrl": prepare_ctrl_input, - "xlm": prepare_xlm_input, - "xlnet": prepare_xlnet_input, - "transfo-xl": prepare_transfoxl_input, -} - - -def adjust_length_to_model(length, max_sequence_length): - if length < 0 and max_sequence_length > 0: - length = max_sequence_length - elif 0 < max_sequence_length < length: - length = max_sequence_length # No generation bigger than model size - elif length < 0: - length = MAX_LENGTH # avoid infinite loop - return length - - -def sparse_model_config(model_config): - embedding_size = None - if hasattr(model_config, "hidden_size"): - embedding_size = model_config.hidden_size - elif hasattr(model_config, "n_embed"): - embedding_size = model_config.n_embed - elif hasattr(model_config, "n_embd"): - embedding_size = model_config.n_embd - - num_head = None - if hasattr(model_config, "num_attention_heads"): - num_head = model_config.num_attention_heads - elif hasattr(model_config, "n_head"): - num_head = model_config.n_head - - if embedding_size is None or num_head is None or num_head == 0: - raise ValueError("Check the model config") - - num_embedding_size_per_head = int(embedding_size / num_head) - num_layer = model_config.n_layer - - return num_layer, num_head, num_embedding_size_per_head - - -def prepare_jit_inputs(inputs, model, tokenizer): - num_batch = len(inputs) - dummy_input = tokenizer.batch_encode_plus(inputs, return_tensors="pt", padding=True) - num_block_layers, num_attention_heads, num_embedding_size_per_head = sparse_model_config(model.config) - if model.config.model_type == "bloom": - past_key_values = tuple( - ( - torch.zeros(int(num_attention_heads * num_batch), num_embedding_size_per_head, 1) - .to(model.config.torch_dtype) - .to(model.device), - torch.zeros(int(num_attention_heads * num_batch), 1, num_embedding_size_per_head) - .to(model.config.torch_dtype) - .to(model.device), - ) - for _ in range(num_block_layers) - ) - else: - past_key_values = tuple( - ( - torch.zeros(num_batch, num_attention_heads, 1, num_embedding_size_per_head) - .to(model.config.torch_dtype) - .to(model.device), - torch.zeros(num_batch, num_attention_heads, 1, num_embedding_size_per_head) - .to(model.config.torch_dtype) - .to(model.device), - ) - for _ in range(num_block_layers) - ) - - dummy_input["attention_mask"] = torch.cat( - [ - torch.zeros(dummy_input["attention_mask"].shape[0], 1).to(dummy_input["attention_mask"].dtype), - dummy_input["attention_mask"], - ], - -1, - ) - - if model.config.use_cache: - jit_inputs = ( - dummy_input["input_ids"].to(model.device), - past_key_values, - 
dummy_input["attention_mask"].to(model.device), - ) - else: - jit_inputs = ( - dummy_input["input_ids"].to(model.device), - dummy_input["attention_mask"].to(model.device), - ) - - return jit_inputs - - -class _ModelFallbackWrapper(GenerationMixin): - __slots__ = ("_optimized", "_default") - - def __init__(self, optimized, default): - self._optimized = optimized - self._default = default - - def __call__(self, *args, **kwargs): - if kwargs["past_key_values"] is None: - return self._default(*args, **kwargs) - trace_graph_inputs = [] - kwargs.pop("position_ids", None) - for k, v in kwargs.items(): - if v is not None and not isinstance(v, bool): - trace_graph_inputs.append(v) - trace_graph_inputs = tuple(trace_graph_inputs) - outputs = self._optimized(*trace_graph_inputs) - lm_logits = outputs[0] - past_key_values = outputs[1] - fixed_output = CausalLMOutputWithPast( - loss=None, - logits=lm_logits, - past_key_values=past_key_values, - hidden_states=None, - attentions=None, - ) - return fixed_output - - def __getattr__(self, item): - return getattr(self._default, item) - - def prepare_inputs_for_generation( - self, input_ids, past_key_values=None, inputs_embeds=None, use_cache=None, **kwargs - ): - return self._default.prepare_inputs_for_generation( - input_ids, past_key_values=past_key_values, inputs_embeds=inputs_embeds, use_cache=use_cache, **kwargs - ) - - def _reorder_cache( - self, past_key_values: Tuple[Tuple[torch.Tensor]], beam_idx: torch.Tensor - ) -> Tuple[Tuple[torch.Tensor]]: - """ - This function is used to re-order the `past_key_values` cache if [`~PretrainedModel.beam_search`] or - [`~PretrainedModel.beam_sample`] is called. This is required to match `past_key_values` with the correct - beam_idx at every generation step. - """ - return self._default._reorder_cache(past_key_values, beam_idx) - - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument( - "--model_type", - default=None, - type=str, - required=True, - help="Model type selected in the list: " + ", ".join(MODEL_CLASSES.keys()), - ) - parser.add_argument( - "--model_name_or_path", - default=None, - type=str, - required=True, - help="Path to pre-trained model or shortcut name selected in the list: " + ", ".join(MODEL_CLASSES.keys()), - ) - - parser.add_argument("--prompt", type=str, default="") - parser.add_argument("--length", type=int, default=20) - parser.add_argument("--stop_token", type=str, default=None, help="Token at which text generation is stopped") - - parser.add_argument( - "--temperature", - type=float, - default=1.0, - help="temperature of 1.0 has no effect, lower tend toward greedy sampling", - ) - parser.add_argument( - "--repetition_penalty", type=float, default=1.0, help="primarily useful for CTRL model; in that case, use 1.2" - ) - parser.add_argument("--k", type=int, default=0) - parser.add_argument("--p", type=float, default=0.9) - - parser.add_argument("--prefix", type=str, default="", help="Text added prior to input.") - parser.add_argument("--padding_text", type=str, default="", help="Deprecated, the use of `--prefix` is preferred.") - parser.add_argument("--xlm_language", type=str, default="", help="Optional language when used with the XLM model.") - - parser.add_argument("--seed", type=int, default=42, help="random seed for initialization") - parser.add_argument("--no_cuda", action="store_true", help="Avoid using CUDA when available") - parser.add_argument("--num_return_sequences", type=int, default=1, help="The number of samples to generate.") - parser.add_argument( - 
"--fp16", - action="store_true", - help="Whether to use 16-bit (mixed) precision (through NVIDIA apex) instead of 32-bit", - ) - parser.add_argument( - "--jit", type=bool, default=False, help="Whether or not to use jit trace to accelerate inference" - ) - args = parser.parse_args() - - args.device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu") - args.n_gpu = 0 if args.no_cuda else torch.cuda.device_count() - - logger.warning(f"device: {args.device}, n_gpu: {args.n_gpu}, 16-bits training: {args.fp16}") - - set_seed(args) - - # Initialize the model and tokenizer - try: - args.model_type = args.model_type.lower() - model_class, tokenizer_class = MODEL_CLASSES[args.model_type] - except KeyError: - raise KeyError("the model {} you specified is not supported. You are welcome to add it and open a PR :)") - - tokenizer = tokenizer_class.from_pretrained(args.model_name_or_path) - if tokenizer.pad_token is None: - tokenizer.pad_token = tokenizer.eos_token - model = model_class.from_pretrained(args.model_name_or_path) - model.to(args.device) - - if args.fp16: - model.half() - - args.length = adjust_length_to_model(args.length, max_sequence_length=model.config.max_position_embeddings) - logger.info(args) - - prompt_text = args.prompt if args.prompt else input("Model prompt >>> ") - - # Different models need different input formatting and/or extra arguments - requires_preprocessing = args.model_type in PREPROCESSING_FUNCTIONS.keys() - if requires_preprocessing: - prepare_input = PREPROCESSING_FUNCTIONS.get(args.model_type) - preprocessed_prompt_text = prepare_input(args, model, tokenizer, prompt_text) - - if model.__class__.__name__ in ["TransfoXLLMHeadModel"]: - tokenizer_kwargs = {"add_space_before_punct_symbol": True} - else: - tokenizer_kwargs = {} - - encoded_prompt = tokenizer.encode( - preprocessed_prompt_text, add_special_tokens=False, return_tensors="pt", **tokenizer_kwargs - ) - else: - prefix = args.prefix if args.prefix else args.padding_text - encoded_prompt = tokenizer.encode(prefix + prompt_text, add_special_tokens=False, return_tensors="pt") - encoded_prompt = encoded_prompt.to(args.device) - - if encoded_prompt.size()[-1] == 0: - input_ids = None - else: - input_ids = encoded_prompt - - if args.jit: - jit_input_texts = ["jit"] - jit_inputs = prepare_jit_inputs(jit_input_texts, model, tokenizer) - torch._C._jit_set_texpr_fuser_enabled(False) - model.config.return_dict = False - traced_model = torch.jit.trace(model, jit_inputs, strict=False) - traced_model = torch.jit.freeze(traced_model.eval()) - traced_model(*jit_inputs) - traced_model(*jit_inputs) - - model = _ModelFallbackWrapper(traced_model, model) - - output_sequences = model.generate( - input_ids=input_ids, - max_length=args.length + len(encoded_prompt[0]), - temperature=args.temperature, - top_k=args.k, - top_p=args.p, - repetition_penalty=args.repetition_penalty, - do_sample=True, - num_return_sequences=args.num_return_sequences, - ) - - # Remove the batch dimension when returning multiple sequences - if len(output_sequences.shape) > 2: - output_sequences.squeeze_() - - generated_sequences = [] - - for generated_sequence_idx, generated_sequence in enumerate(output_sequences): - print(f"=== GENERATED SEQUENCE {generated_sequence_idx + 1} ===") - generated_sequence = generated_sequence.tolist() - - # Decode text - text = tokenizer.decode(generated_sequence, clean_up_tokenization_spaces=True) - - # Remove all text after the stop token - text = text[: text.find(args.stop_token) if 
args.stop_token else None] - - # Add the prompt at the beginning of the sequence. Remove the excess text that was used for pre-processing - total_sequence = ( - prompt_text + text[len(tokenizer.decode(encoded_prompt[0], clean_up_tokenization_spaces=True)) :] - ) - - generated_sequences.append(total_sequence) - print(total_sequence) - - return generated_sequences - - -if __name__ == "__main__": - main() diff --git a/spaces/chilge/Fushimi/utils.py b/spaces/chilge/Fushimi/utils.py deleted file mode 100644 index 3733a75111dc89cefa333b34933ae01623550ea7..0000000000000000000000000000000000000000 --- a/spaces/chilge/Fushimi/utils.py +++ /dev/null @@ -1,338 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess - -import librosa -import numpy as np -import torchaudio -from scipy.io.wavfile import read -import torch -import torchvision -from torch.nn import functional as F -from commons import sequence_mask -from hubert import hubert_model -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - -f0_bin = 256 -f0_max = 1100.0 -f0_min = 50.0 -f0_mel_min = 1127 * np.log(1 + f0_min / 700) -f0_mel_max = 1127 * np.log(1 + f0_max / 700) - -def f0_to_coarse(f0): - is_torch = isinstance(f0, torch.Tensor) - f0_mel = 1127 * (1 + f0 / 700).log() if is_torch else 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1 - - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1 - f0_coarse = (f0_mel + 0.5).long() if is_torch else np.rint(f0_mel).astype(np.int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min()) - return f0_coarse - - -def get_hubert_model(rank=None): - - hubert_soft = hubert_model.hubert_soft("hubert/hubert-soft-0d54a1f4.pt") - if rank is not None: - hubert_soft = hubert_soft.cuda(rank) - return hubert_soft - -def get_hubert_content(hmodel, y=None, path=None): - if path is not None: - source, sr = torchaudio.load(path) - source = torchaudio.functional.resample(source, sr, 16000) - if len(source.shape) == 2 and source.shape[1] >= 2: - source = torch.mean(source, dim=0).unsqueeze(0) - else: - source = y - source = source.unsqueeze(0) - with torch.inference_mode(): - units = hmodel.units(source) - return units.transpose(1,2) - - -def get_content(cmodel, y): - with torch.no_grad(): - c = cmodel.extract_features(y.squeeze(1))[0] - c = c.transpose(1, 2) - return c - - - -def transform(mel, height): # 68-92 - #r = np.random.random() - #rate = r * 0.3 + 0.85 # 0.85-1.15 - #height = int(mel.size(-2) * rate) - tgt = torchvision.transforms.functional.resize(mel, (height, mel.size(-1))) - if height >= mel.size(-2): - return tgt[:, :mel.size(-2), :] - else: - silence = tgt[:,-1:,:].repeat(1,mel.size(-2)-height,1) - silence += torch.randn_like(silence) / 10 - return torch.cat((tgt, silence), 1) - - -def stretch(mel, width): # 0.5-2 - return torchvision.transforms.functional.resize(mel, (mel.size(-2), width)) - - -def load_checkpoint(checkpoint_path, model, optimizer=None): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if iteration is None: - iteration = 1 - if learning_rate is None: - learning_rate = 0.0002 - if optimizer is not None and checkpoint_dict['optimizer'] is not None: - 
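A note on `f0_to_coarse` in the `utils.py` being deleted above: it maps F0 in Hz onto the mel scale via `1127 * ln(1 + f0 / 700)` and linearly quantizes the result into bins 1-255, with unvoiced (0 Hz) frames collapsing to bin 1. A standalone NumPy re-derivation of the same quantization (not the project's code; it also avoids the deprecated `np.int` the original relies on):

```python
import numpy as np

F0_BIN = 256
F0_MIN, F0_MAX = 50.0, 1100.0
MEL_MIN = 1127 * np.log(1 + F0_MIN / 700)
MEL_MAX = 1127 * np.log(1 + F0_MAX / 700)

def f0_to_coarse(f0: np.ndarray) -> np.ndarray:
    """Quantize F0 (Hz) onto 256 mel-spaced bins; unvoiced frames end up in bin 1."""
    mel = 1127 * np.log(1 + f0 / 700)
    voiced = mel > 0
    mel[voiced] = (mel[voiced] - MEL_MIN) * (F0_BIN - 2) / (MEL_MAX - MEL_MIN) + 1
    mel = np.clip(mel, 1, F0_BIN - 1)
    return np.rint(mel).astype(np.int64)   # concrete dtype instead of the deprecated np.int

print(f0_to_coarse(np.array([0.0, 50.0, 220.0, 1100.0])))   # roughly [1 1 60 255]
```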
optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict= {} - for k, v in state_dict.items(): - try: - new_state_dict[k] = saved_state_dict[k] - except: - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - logger.info("Loaded checkpoint '{}' (iteration {})" .format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - # ckptname = checkpoint_path.split(os.sep)[-1] - # newest_step = int(ckptname.split(".")[0].split("_")[1]) - # val_steps = 2000 - # last_ckptname = checkpoint_path.replace(str(newest_step), str(newest_step - val_steps*3)) - # if newest_step >= val_steps*3: - # os.system(f"rm {last_ckptname}") - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate}, checkpoint_path) - - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = 
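`load_checkpoint` above merges a saved `state_dict` into the current model key by key and silently keeps the initialized weights for anything missing; its bare `except` also swallows shape mismatches. A compact sketch of the same tolerant-loading pattern, assuming the checkpoint stores its weights under a `"model"` key as this project does (the model class and path in the usage comment are placeholders):

```python
import logging
import torch

def load_partial_state(model: torch.nn.Module, checkpoint_path: str) -> torch.nn.Module:
    """Copy matching weights from a checkpoint; keep current values for missing keys."""
    saved = torch.load(checkpoint_path, map_location="cpu")["model"]
    current = model.state_dict()
    merged = {}
    for key, value in current.items():
        if key in saved and saved[key].shape == value.shape:
            merged[key] = saved[key]
        else:
            logging.warning("%s not in checkpoint (or shape mismatch); keeping current value", key)
            merged[key] = value
    model.load_state_dict(merged)
    return model

# usage sketch:
# model = load_partial_state(MyModel(), "logs/exp/G_10000.pth")
```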
data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. 
{}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() - diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/test/db/test_base.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/test/db/test_base.py deleted file mode 100644 index 8bfaa1f733a08ef4a2f6ccee69dc44739ff8021c..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/test/db/test_base.py +++ /dev/null @@ -1,42 +0,0 @@ -from chromadb.db.base import ParameterValue, get_sql -import pypika - - -def test_value_params_default() -> None: - t = pypika.Table("foo") - - original_query = ( - pypika.Query.from_(t) - .select(t.a, t.b) - .where(t.a == pypika.Parameter("?")) - .where(t.b == pypika.Parameter("?")) - ) - - value_based_query = ( - pypika.Query.from_(t) - .select(t.a, t.b) - .where(t.a == ParameterValue(42)) - .where(t.b == ParameterValue(43)) - ) - sql, values = get_sql(value_based_query) - assert sql == original_query.get_sql() - assert values == (42, 43) - - -def test_value_params_numeric() -> None: - t = pypika.Table("foo") - original_query = ( - pypika.Query.from_(t) - .select(t.a, t.b) - .where(t.a == pypika.NumericParameter(1)) - .where(t.b == pypika.NumericParameter(2)) - ) - value_based_query = ( - pypika.Query.from_(t) - .select(t.a, t.b) - .where(t.a == ParameterValue(42)) - .where(t.b == ParameterValue(43)) - ) - sql, values = get_sql(value_based_query, formatstr=":{}") - assert sql == original_query.get_sql() - assert values == (42, 43) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/filetype/types/document.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/filetype/types/document.py deleted file mode 100644 index 9f57e98c46cb2f40b33bedf70a9adbae6e2afced..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/filetype/types/document.py +++ /dev/null @@ -1,256 +0,0 @@ -# -*- coding: utf-8 -*- - -from __future__ import absolute_import - -from .base import Type - - -class ZippedDocumentBase(Type): - def match(self, buf): - # start by checking for ZIP local file header signature - idx = self.search_signature(buf, 0, 6000) - if idx != 0: - return - - return self.match_document(buf) - - def match_document(self, buf): - raise NotImplementedError - - 
def compare_bytes(self, buf, subslice, start_offset): - sl = len(subslice) - - if start_offset + sl > len(buf): - return False - - return buf[start_offset:start_offset + sl] == subslice - - def search_signature(self, buf, start, rangeNum): - signature = b"PK\x03\x04" - length = len(buf) - - end = start + rangeNum - end = length if end > length else end - - if start >= end: - return -1 - - try: - return buf.index(signature, start, end) - except ValueError: - return -1 - - -class OpenDocument(ZippedDocumentBase): - def match_document(self, buf): - # Check if first file in archive is the identifying file - if not self.compare_bytes(buf, b"mimetype", 0x1E): - return - - # Check content of mimetype file if it matches current mime - return self.compare_bytes(buf, bytes(self.mime, "ASCII"), 0x26) - - -class OfficeOpenXml(ZippedDocumentBase): - def match_document(self, buf): - # Check if first file in archive is the identifying file - ft = self.match_filename(buf, 0x1E) - if ft: - return ft - - # Otherwise check that the fist file is one of these - if ( - not self.compare_bytes(buf, b"[Content_Types].xml", 0x1E) - and not self.compare_bytes(buf, b"_rels/.rels", 0x1E) - and not self.compare_bytes(buf, b"docProps", 0x1E) - ): - return - - # Loop through next 3 files and check if they match - # NOTE: OpenOffice/Libreoffice orders ZIP entry differently, so check the 4th file - # https://github.com/h2non/filetype/blob/d730d98ad5c990883148485b6fd5adbdd378364a/matchers/document.go#L134 - idx = 0 - for i in range(4): - # Search for next file header - idx = self.search_signature(buf, idx + 4, 6000) - if idx == -1: - return - - # Filename is at file header + 30 - ft = self.match_filename(buf, idx + 30) - if ft: - return ft - - def match_filename(self, buf, offset): - if self.compare_bytes(buf, b"word/", offset): - return ( - self.mime - == "application/vnd.openxmlformats-officedocument.wordprocessingml.document" - ) - if self.compare_bytes(buf, b"ppt/", offset): - return ( - self.mime - == "application/vnd.openxmlformats-officedocument.presentationml.presentation" - ) - if self.compare_bytes(buf, b"xl/", offset): - return ( - self.mime - == "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet" - ) - - -class Doc(Type): - """ - Implements the Microsoft Word (Office 97-2003) document type matcher. - """ - - MIME = "application/msword" - EXTENSION = "doc" - - def __init__(self): - super(Doc, self).__init__(mime=Doc.MIME, extension=Doc.EXTENSION) - - def match(self, buf): - if len(buf) > 515 and buf[0:8] == b"\xD0\xCF\x11\xE0\xA1\xB1\x1A\xE1": - if buf[512:516] == b"\xEC\xA5\xC1\x00": - return True - if ( - len(buf) > 2142 - and b"\x00\x0A\x00\x00\x00MSWordDoc\x00\x10\x00\x00\x00Word.Document.8\x00\xF49\xB2q" - in buf[2075:2142] - ): - return True - - return False - - -class Docx(OfficeOpenXml): - """ - Implements the Microsoft Word OOXML (Office 2007+) document type matcher. - """ - - MIME = "application/vnd.openxmlformats-officedocument.wordprocessingml.document" - EXTENSION = "docx" - - def __init__(self): - super(Docx, self).__init__(mime=Docx.MIME, extension=Docx.EXTENSION) - - -class Odt(OpenDocument): - """ - Implements the OpenDocument Text document type matcher. - """ - - MIME = "application/vnd.oasis.opendocument.text" - EXTENSION = "odt" - - def __init__(self): - super(Odt, self).__init__(mime=Odt.MIME, extension=Odt.EXTENSION) - - -class Xls(Type): - """ - Implements the Microsoft Excel (Office 97-2003) document type matcher. 
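In day-to-day use these matchers are rarely called directly; assuming the `filetype` package they ship with, the usual entry point is `filetype.guess`, which runs the registered matchers and returns the winning type or `None` (the file name below is hypothetical):

```python
import filetype

kind = filetype.guess("slides.pptx")   # hypothetical path; raw bytes are also accepted
if kind is None:
    print("unknown file type")
else:
    print(kind.extension, kind.mime)
    # -> pptx application/vnd.openxmlformats-officedocument.presentationml.presentation
```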
- """ - - MIME = "application/vnd.ms-excel" - EXTENSION = "xls" - - def __init__(self): - super(Xls, self).__init__(mime=Xls.MIME, extension=Xls.EXTENSION) - - def match(self, buf): - if len(buf) > 520 and buf[0:8] == b"\xD0\xCF\x11\xE0\xA1\xB1\x1A\xE1": - if buf[512:516] == b"\xFD\xFF\xFF\xFF" and ( - buf[518] == 0x00 or buf[518] == 0x02 - ): - return True - if buf[512:520] == b"\x09\x08\x10\x00\x00\x06\x05\x00": - return True - if ( - len(buf) > 2095 - and b"\xE2\x00\x00\x00\x5C\x00\x70\x00\x04\x00\x00Calc" - in buf[1568:2095] - ): - return True - - return False - - -class Xlsx(OfficeOpenXml): - """ - Implements the Microsoft Excel OOXML (Office 2007+) document type matcher. - """ - - MIME = "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet" - EXTENSION = "xlsx" - - def __init__(self): - super(Xlsx, self).__init__(mime=Xlsx.MIME, extension=Xlsx.EXTENSION) - - -class Ods(OpenDocument): - """ - Implements the OpenDocument Spreadsheet document type matcher. - """ - - MIME = "application/vnd.oasis.opendocument.spreadsheet" - EXTENSION = "ods" - - def __init__(self): - super(Ods, self).__init__(mime=Ods.MIME, extension=Ods.EXTENSION) - - -class Ppt(Type): - """ - Implements the Microsoft PowerPoint (Office 97-2003) document type matcher. - """ - - MIME = "application/vnd.ms-powerpoint" - EXTENSION = "ppt" - - def __init__(self): - super(Ppt, self).__init__(mime=Ppt.MIME, extension=Ppt.EXTENSION) - - def match(self, buf): - if len(buf) > 524 and buf[0:8] == b"\xD0\xCF\x11\xE0\xA1\xB1\x1A\xE1": - if buf[512:516] == b"\xA0\x46\x1D\xF0": - return True - if buf[512:516] == b"\x00\x6E\x1E\xF0": - return True - if buf[512:516] == b"\x0F\x00\xE8\x03": - return True - if buf[512:516] == b"\xFD\xFF\xFF\xFF" and buf[522:524] == b"\x00\x00": - return True - if ( - len(buf) > 2096 - and buf[2072:2096] - == b"\x00\xB9\x29\xE8\x11\x00\x00\x00MS PowerPoint 97" - ): - return True - - return False - - -class Pptx(OfficeOpenXml): - """ - Implements the Microsoft PowerPoint OOXML (Office 2007+) document type matcher. - """ - - MIME = "application/vnd.openxmlformats-officedocument.presentationml.presentation" - EXTENSION = "pptx" - - def __init__(self): - super(Pptx, self).__init__(mime=Pptx.MIME, extension=Pptx.EXTENSION) - - -class Odp(OpenDocument): - """ - Implements the OpenDocument Presentation document type matcher. - """ - - MIME = "application/vnd.oasis.opendocument.presentation" - EXTENSION = "odp" - - def __init__(self): - super(Odp, self).__init__(mime=Odp.MIME, extension=Odp.EXTENSION) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/fontBuilder.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/fontBuilder.py deleted file mode 100644 index 8f83ea80034c431b39aa38b2fc28b67957c71fb9..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/fontBuilder.py +++ /dev/null @@ -1,993 +0,0 @@ -__all__ = ["FontBuilder"] - -""" -This module is *experimental*, meaning it still may evolve and change. - -The `FontBuilder` class is a convenient helper to construct working TTF or -OTF fonts from scratch. - -Note that the various setup methods cannot be called in arbitrary order, -due to various interdependencies between OpenType tables. Here is an order -that works: - - fb = FontBuilder(...) - fb.setupGlyphOrder(...) - fb.setupCharacterMap(...) - fb.setupGlyf(...) --or-- fb.setupCFF(...) - fb.setupHorizontalMetrics(...) 
- fb.setupHorizontalHeader() - fb.setupNameTable(...) - fb.setupOS2() - fb.addOpenTypeFeatures(...) - fb.setupPost() - fb.save(...) - -Here is how to build a minimal TTF: - -```python -from fontTools.fontBuilder import FontBuilder -from fontTools.pens.ttGlyphPen import TTGlyphPen - - -def drawTestGlyph(pen): - pen.moveTo((100, 100)) - pen.lineTo((100, 1000)) - pen.qCurveTo((200, 900), (400, 900), (500, 1000)) - pen.lineTo((500, 100)) - pen.closePath() - - -fb = FontBuilder(1024, isTTF=True) -fb.setupGlyphOrder([".notdef", ".null", "space", "A", "a"]) -fb.setupCharacterMap({32: "space", 65: "A", 97: "a"}) -advanceWidths = {".notdef": 600, "space": 500, "A": 600, "a": 600, ".null": 0} - -familyName = "HelloTestFont" -styleName = "TotallyNormal" -version = "0.1" - -nameStrings = dict( - familyName=dict(en=familyName, nl="HalloTestFont"), - styleName=dict(en=styleName, nl="TotaalNormaal"), - uniqueFontIdentifier="fontBuilder: " + familyName + "." + styleName, - fullName=familyName + "-" + styleName, - psName=familyName + "-" + styleName, - version="Version " + version, -) - -pen = TTGlyphPen(None) -drawTestGlyph(pen) -glyph = pen.glyph() -glyphs = {".notdef": glyph, "space": glyph, "A": glyph, "a": glyph, ".null": glyph} -fb.setupGlyf(glyphs) -metrics = {} -glyphTable = fb.font["glyf"] -for gn, advanceWidth in advanceWidths.items(): - metrics[gn] = (advanceWidth, glyphTable[gn].xMin) -fb.setupHorizontalMetrics(metrics) -fb.setupHorizontalHeader(ascent=824, descent=-200) -fb.setupNameTable(nameStrings) -fb.setupOS2(sTypoAscender=824, usWinAscent=824, usWinDescent=200) -fb.setupPost() -fb.save("test.ttf") -``` - -And here's how to build a minimal OTF: - -```python -from fontTools.fontBuilder import FontBuilder -from fontTools.pens.t2CharStringPen import T2CharStringPen - - -def drawTestGlyph(pen): - pen.moveTo((100, 100)) - pen.lineTo((100, 1000)) - pen.curveTo((200, 900), (400, 900), (500, 1000)) - pen.lineTo((500, 100)) - pen.closePath() - - -fb = FontBuilder(1024, isTTF=False) -fb.setupGlyphOrder([".notdef", ".null", "space", "A", "a"]) -fb.setupCharacterMap({32: "space", 65: "A", 97: "a"}) -advanceWidths = {".notdef": 600, "space": 500, "A": 600, "a": 600, ".null": 0} - -familyName = "HelloTestFont" -styleName = "TotallyNormal" -version = "0.1" - -nameStrings = dict( - familyName=dict(en=familyName, nl="HalloTestFont"), - styleName=dict(en=styleName, nl="TotaalNormaal"), - uniqueFontIdentifier="fontBuilder: " + familyName + "." 
+ styleName, - fullName=familyName + "-" + styleName, - psName=familyName + "-" + styleName, - version="Version " + version, -) - -pen = T2CharStringPen(600, None) -drawTestGlyph(pen) -charString = pen.getCharString() -charStrings = { - ".notdef": charString, - "space": charString, - "A": charString, - "a": charString, - ".null": charString, -} -fb.setupCFF(nameStrings["psName"], {"FullName": nameStrings["psName"]}, charStrings, {}) -lsb = {gn: cs.calcBounds(None)[0] for gn, cs in charStrings.items()} -metrics = {} -for gn, advanceWidth in advanceWidths.items(): - metrics[gn] = (advanceWidth, lsb[gn]) -fb.setupHorizontalMetrics(metrics) -fb.setupHorizontalHeader(ascent=824, descent=200) -fb.setupNameTable(nameStrings) -fb.setupOS2(sTypoAscender=824, usWinAscent=824, usWinDescent=200) -fb.setupPost() -fb.save("test.otf") -``` -""" - -from .ttLib import TTFont, newTable -from .ttLib.tables._c_m_a_p import cmap_classes -from .ttLib.tables._g_l_y_f import flagCubic -from .ttLib.tables.O_S_2f_2 import Panose -from .misc.timeTools import timestampNow -import struct -from collections import OrderedDict - - -_headDefaults = dict( - tableVersion=1.0, - fontRevision=1.0, - checkSumAdjustment=0, - magicNumber=0x5F0F3CF5, - flags=0x0003, - unitsPerEm=1000, - created=0, - modified=0, - xMin=0, - yMin=0, - xMax=0, - yMax=0, - macStyle=0, - lowestRecPPEM=3, - fontDirectionHint=2, - indexToLocFormat=0, - glyphDataFormat=0, -) - -_maxpDefaultsTTF = dict( - tableVersion=0x00010000, - numGlyphs=0, - maxPoints=0, - maxContours=0, - maxCompositePoints=0, - maxCompositeContours=0, - maxZones=2, - maxTwilightPoints=0, - maxStorage=0, - maxFunctionDefs=0, - maxInstructionDefs=0, - maxStackElements=0, - maxSizeOfInstructions=0, - maxComponentElements=0, - maxComponentDepth=0, -) -_maxpDefaultsOTF = dict( - tableVersion=0x00005000, - numGlyphs=0, -) - -_postDefaults = dict( - formatType=3.0, - italicAngle=0, - underlinePosition=0, - underlineThickness=0, - isFixedPitch=0, - minMemType42=0, - maxMemType42=0, - minMemType1=0, - maxMemType1=0, -) - -_hheaDefaults = dict( - tableVersion=0x00010000, - ascent=0, - descent=0, - lineGap=0, - advanceWidthMax=0, - minLeftSideBearing=0, - minRightSideBearing=0, - xMaxExtent=0, - caretSlopeRise=1, - caretSlopeRun=0, - caretOffset=0, - reserved0=0, - reserved1=0, - reserved2=0, - reserved3=0, - metricDataFormat=0, - numberOfHMetrics=0, -) - -_vheaDefaults = dict( - tableVersion=0x00010000, - ascent=0, - descent=0, - lineGap=0, - advanceHeightMax=0, - minTopSideBearing=0, - minBottomSideBearing=0, - yMaxExtent=0, - caretSlopeRise=0, - caretSlopeRun=0, - reserved0=0, - reserved1=0, - reserved2=0, - reserved3=0, - reserved4=0, - metricDataFormat=0, - numberOfVMetrics=0, -) - -_nameIDs = dict( - copyright=0, - familyName=1, - styleName=2, - uniqueFontIdentifier=3, - fullName=4, - version=5, - psName=6, - trademark=7, - manufacturer=8, - designer=9, - description=10, - vendorURL=11, - designerURL=12, - licenseDescription=13, - licenseInfoURL=14, - # reserved = 15, - typographicFamily=16, - typographicSubfamily=17, - compatibleFullName=18, - sampleText=19, - postScriptCIDFindfontName=20, - wwsFamilyName=21, - wwsSubfamilyName=22, - lightBackgroundPalette=23, - darkBackgroundPalette=24, - variationsPostScriptNamePrefix=25, -) - -# to insert in setupNameTable doc string: -# print("\n".join(("%s (nameID %s)" % (k, v)) for k, v in sorted(_nameIDs.items(), key=lambda x: x[1]))) - -_panoseDefaults = Panose() - -_OS2Defaults = dict( - version=3, - xAvgCharWidth=0, - usWeightClass=400, - 
usWidthClass=5, - fsType=0x0004, # default: Preview & Print embedding - ySubscriptXSize=0, - ySubscriptYSize=0, - ySubscriptXOffset=0, - ySubscriptYOffset=0, - ySuperscriptXSize=0, - ySuperscriptYSize=0, - ySuperscriptXOffset=0, - ySuperscriptYOffset=0, - yStrikeoutSize=0, - yStrikeoutPosition=0, - sFamilyClass=0, - panose=_panoseDefaults, - ulUnicodeRange1=0, - ulUnicodeRange2=0, - ulUnicodeRange3=0, - ulUnicodeRange4=0, - achVendID="????", - fsSelection=0, - usFirstCharIndex=0, - usLastCharIndex=0, - sTypoAscender=0, - sTypoDescender=0, - sTypoLineGap=0, - usWinAscent=0, - usWinDescent=0, - ulCodePageRange1=0, - ulCodePageRange2=0, - sxHeight=0, - sCapHeight=0, - usDefaultChar=0, # .notdef - usBreakChar=32, # space - usMaxContext=0, - usLowerOpticalPointSize=0, - usUpperOpticalPointSize=0, -) - - -class FontBuilder(object): - def __init__(self, unitsPerEm=None, font=None, isTTF=True, glyphDataFormat=0): - """Initialize a FontBuilder instance. - - If the `font` argument is not given, a new `TTFont` will be - constructed, and `unitsPerEm` must be given. If `isTTF` is True, - the font will be a glyf-based TTF; if `isTTF` is False it will be - a CFF-based OTF. - - The `glyphDataFormat` argument corresponds to the `head` table field - that defines the format of the TrueType `glyf` table (default=0). - TrueType glyphs historically can only contain quadratic splines and static - components, but there's a proposal to add support for cubic Bezier curves as well - as variable composites/components at - https://github.com/harfbuzz/boring-expansion-spec/blob/main/glyf1.md - You can experiment with the new features by setting `glyphDataFormat` to 1. - A ValueError is raised if `glyphDataFormat` is left at 0 but glyphs are added - that contain cubic splines or varcomposites. This is to prevent accidentally - creating fonts that are incompatible with existing TrueType implementations. - - If `font` is given, it must be a `TTFont` instance and `unitsPerEm` - must _not_ be given. The `isTTF` and `glyphDataFormat` arguments will be ignored. - """ - if font is None: - self.font = TTFont(recalcTimestamp=False) - self.isTTF = isTTF - now = timestampNow() - assert unitsPerEm is not None - self.setupHead( - unitsPerEm=unitsPerEm, - create=now, - modified=now, - glyphDataFormat=glyphDataFormat, - ) - self.setupMaxp() - else: - assert unitsPerEm is None - self.font = font - self.isTTF = "glyf" in font - - def save(self, file): - """Save the font. The 'file' argument can be either a pathname or a - writable file object. - """ - self.font.save(file) - - def _initTableWithValues(self, tableTag, defaults, values): - table = self.font[tableTag] = newTable(tableTag) - for k, v in defaults.items(): - setattr(table, k, v) - for k, v in values.items(): - setattr(table, k, v) - return table - - def _updateTableWithValues(self, tableTag, values): - table = self.font[tableTag] - for k, v in values.items(): - setattr(table, k, v) - - def setupHead(self, **values): - """Create a new `head` table and initialize it with default values, - which can be overridden by keyword arguments. - """ - self._initTableWithValues("head", _headDefaults, values) - - def updateHead(self, **values): - """Update the head table with the fields and values passed as - keyword arguments. 
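To make the `glyphDataFormat` remark in the constructor docstring above concrete, here is a small hedged sketch of both constructor forms plus a follow-up `updateHead` call (all values are illustrative):

```python
from fontTools.fontBuilder import FontBuilder

# Default: quadratic-only TrueType outlines (glyphDataFormat=0).
fb_classic = FontBuilder(unitsPerEm=1000, isTTF=True)

# Opt in to the experimental format that may carry cubic curves in glyf.
fb_cubic = FontBuilder(unitsPerEm=1000, isTTF=True, glyphDataFormat=1)

# Individual head fields can still be adjusted afterwards.
fb_classic.updateHead(lowestRecPPEM=6)
```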
- """ - self._updateTableWithValues("head", values) - - def setupGlyphOrder(self, glyphOrder): - """Set the glyph order for the font.""" - self.font.setGlyphOrder(glyphOrder) - - def setupCharacterMap(self, cmapping, uvs=None, allowFallback=False): - """Build the `cmap` table for the font. The `cmapping` argument should - be a dict mapping unicode code points as integers to glyph names. - - The `uvs` argument, when passed, must be a list of tuples, describing - Unicode Variation Sequences. These tuples have three elements: - (unicodeValue, variationSelector, glyphName) - `unicodeValue` and `variationSelector` are integer code points. - `glyphName` may be None, to indicate this is the default variation. - Text processors will then use the cmap to find the glyph name. - Each Unicode Variation Sequence should be an officially supported - sequence, but this is not policed. - """ - subTables = [] - highestUnicode = max(cmapping) if cmapping else 0 - if highestUnicode > 0xFFFF: - cmapping_3_1 = dict((k, v) for k, v in cmapping.items() if k < 0x10000) - subTable_3_10 = buildCmapSubTable(cmapping, 12, 3, 10) - subTables.append(subTable_3_10) - else: - cmapping_3_1 = cmapping - format = 4 - subTable_3_1 = buildCmapSubTable(cmapping_3_1, format, 3, 1) - try: - subTable_3_1.compile(self.font) - except struct.error: - # format 4 overflowed, fall back to format 12 - if not allowFallback: - raise ValueError( - "cmap format 4 subtable overflowed; sort glyph order by unicode to fix." - ) - format = 12 - subTable_3_1 = buildCmapSubTable(cmapping_3_1, format, 3, 1) - subTables.append(subTable_3_1) - subTable_0_3 = buildCmapSubTable(cmapping_3_1, format, 0, 3) - subTables.append(subTable_0_3) - - if uvs is not None: - uvsDict = {} - for unicodeValue, variationSelector, glyphName in uvs: - if cmapping.get(unicodeValue) == glyphName: - # this is a default variation - glyphName = None - if variationSelector not in uvsDict: - uvsDict[variationSelector] = [] - uvsDict[variationSelector].append((unicodeValue, glyphName)) - uvsSubTable = buildCmapSubTable({}, 14, 0, 5) - uvsSubTable.uvsDict = uvsDict - subTables.append(uvsSubTable) - - self.font["cmap"] = newTable("cmap") - self.font["cmap"].tableVersion = 0 - self.font["cmap"].tables = subTables - - def setupNameTable(self, nameStrings, windows=True, mac=True): - """Create the `name` table for the font. The `nameStrings` argument must - be a dict, mapping nameIDs or descriptive names for the nameIDs to name - record values. A value is either a string, or a dict, mapping language codes - to strings, to allow localized name table entries. - - By default, both Windows (platformID=3) and Macintosh (platformID=1) name - records are added, unless any of `windows` or `mac` arguments is False. 
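A short sketch of the `uvs` parameter described above, using hypothetical glyph names that would already have to exist in the font's glyph order (`fb` is a `FontBuilder` instance as in the module examples):

```python
# U+845B maps to a default glyph; U+845B U+E0100 selects an alternate,
# and U+845B U+E0101 explicitly keeps the default (glyph name None).
fb.setupCharacterMap(
    {0x845B: "uni845B"},
    uvs=[
        (0x845B, 0xE0100, "uni845B.alt"),
        (0x845B, 0xE0101, None),
    ],
)
```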
- - The following descriptive names are available for nameIDs: - - copyright (nameID 0) - familyName (nameID 1) - styleName (nameID 2) - uniqueFontIdentifier (nameID 3) - fullName (nameID 4) - version (nameID 5) - psName (nameID 6) - trademark (nameID 7) - manufacturer (nameID 8) - designer (nameID 9) - description (nameID 10) - vendorURL (nameID 11) - designerURL (nameID 12) - licenseDescription (nameID 13) - licenseInfoURL (nameID 14) - typographicFamily (nameID 16) - typographicSubfamily (nameID 17) - compatibleFullName (nameID 18) - sampleText (nameID 19) - postScriptCIDFindfontName (nameID 20) - wwsFamilyName (nameID 21) - wwsSubfamilyName (nameID 22) - lightBackgroundPalette (nameID 23) - darkBackgroundPalette (nameID 24) - variationsPostScriptNamePrefix (nameID 25) - """ - nameTable = self.font["name"] = newTable("name") - nameTable.names = [] - - for nameName, nameValue in nameStrings.items(): - if isinstance(nameName, int): - nameID = nameName - else: - nameID = _nameIDs[nameName] - if isinstance(nameValue, str): - nameValue = dict(en=nameValue) - nameTable.addMultilingualName( - nameValue, ttFont=self.font, nameID=nameID, windows=windows, mac=mac - ) - - def setupOS2(self, **values): - """Create a new `OS/2` table and initialize it with default values, - which can be overridden by keyword arguments. - """ - self._initTableWithValues("OS/2", _OS2Defaults, values) - if "xAvgCharWidth" not in values: - assert ( - "hmtx" in self.font - ), "the 'hmtx' table must be setup before the 'OS/2' table" - self.font["OS/2"].recalcAvgCharWidth(self.font) - if not ( - "ulUnicodeRange1" in values - or "ulUnicodeRange2" in values - or "ulUnicodeRange3" in values - or "ulUnicodeRange3" in values - ): - assert ( - "cmap" in self.font - ), "the 'cmap' table must be setup before the 'OS/2' table" - self.font["OS/2"].recalcUnicodeRanges(self.font) - - def setupCFF(self, psName, fontInfo, charStringsDict, privateDict): - from .cffLib import ( - CFFFontSet, - TopDictIndex, - TopDict, - CharStrings, - GlobalSubrsIndex, - PrivateDict, - ) - - assert not self.isTTF - self.font.sfntVersion = "OTTO" - fontSet = CFFFontSet() - fontSet.major = 1 - fontSet.minor = 0 - fontSet.otFont = self.font - fontSet.fontNames = [psName] - fontSet.topDictIndex = TopDictIndex() - - globalSubrs = GlobalSubrsIndex() - fontSet.GlobalSubrs = globalSubrs - private = PrivateDict() - for key, value in privateDict.items(): - setattr(private, key, value) - fdSelect = None - fdArray = None - - topDict = TopDict() - topDict.charset = self.font.getGlyphOrder() - topDict.Private = private - topDict.GlobalSubrs = fontSet.GlobalSubrs - for key, value in fontInfo.items(): - setattr(topDict, key, value) - if "FontMatrix" not in fontInfo: - scale = 1 / self.font["head"].unitsPerEm - topDict.FontMatrix = [scale, 0, 0, scale, 0, 0] - - charStrings = CharStrings( - None, topDict.charset, globalSubrs, private, fdSelect, fdArray - ) - for glyphName, charString in charStringsDict.items(): - charString.private = private - charString.globalSubrs = globalSubrs - charStrings[glyphName] = charString - topDict.CharStrings = charStrings - - fontSet.topDictIndex.append(topDict) - - self.font["CFF "] = newTable("CFF ") - self.font["CFF "].cff = fontSet - - def setupCFF2(self, charStringsDict, fdArrayList=None, regions=None): - from .cffLib import ( - CFFFontSet, - TopDictIndex, - TopDict, - CharStrings, - GlobalSubrsIndex, - PrivateDict, - FDArrayIndex, - FontDict, - ) - - assert not self.isTTF - self.font.sfntVersion = "OTTO" - fontSet = CFFFontSet() - 
fontSet.major = 2 - fontSet.minor = 0 - - cff2GetGlyphOrder = self.font.getGlyphOrder - fontSet.topDictIndex = TopDictIndex(None, cff2GetGlyphOrder, None) - - globalSubrs = GlobalSubrsIndex() - fontSet.GlobalSubrs = globalSubrs - - if fdArrayList is None: - fdArrayList = [{}] - fdSelect = None - fdArray = FDArrayIndex() - fdArray.strings = None - fdArray.GlobalSubrs = globalSubrs - for privateDict in fdArrayList: - fontDict = FontDict() - fontDict.setCFF2(True) - private = PrivateDict() - for key, value in privateDict.items(): - setattr(private, key, value) - fontDict.Private = private - fdArray.append(fontDict) - - topDict = TopDict() - topDict.cff2GetGlyphOrder = cff2GetGlyphOrder - topDict.FDArray = fdArray - scale = 1 / self.font["head"].unitsPerEm - topDict.FontMatrix = [scale, 0, 0, scale, 0, 0] - - private = fdArray[0].Private - charStrings = CharStrings(None, None, globalSubrs, private, fdSelect, fdArray) - for glyphName, charString in charStringsDict.items(): - charString.private = private - charString.globalSubrs = globalSubrs - charStrings[glyphName] = charString - topDict.CharStrings = charStrings - - fontSet.topDictIndex.append(topDict) - - self.font["CFF2"] = newTable("CFF2") - self.font["CFF2"].cff = fontSet - - if regions: - self.setupCFF2Regions(regions) - - def setupCFF2Regions(self, regions): - from .varLib.builder import buildVarRegionList, buildVarData, buildVarStore - from .cffLib import VarStoreData - - assert "fvar" in self.font, "fvar must to be set up first" - assert "CFF2" in self.font, "CFF2 must to be set up first" - axisTags = [a.axisTag for a in self.font["fvar"].axes] - varRegionList = buildVarRegionList(regions, axisTags) - varData = buildVarData(list(range(len(regions))), None, optimize=False) - varStore = buildVarStore(varRegionList, [varData]) - vstore = VarStoreData(otVarStore=varStore) - topDict = self.font["CFF2"].cff.topDictIndex[0] - topDict.VarStore = vstore - for fontDict in topDict.FDArray: - fontDict.Private.vstore = vstore - - def setupGlyf(self, glyphs, calcGlyphBounds=True, validateGlyphFormat=True): - """Create the `glyf` table from a dict, that maps glyph names - to `fontTools.ttLib.tables._g_l_y_f.Glyph` objects, for example - as made by `fontTools.pens.ttGlyphPen.TTGlyphPen`. - - If `calcGlyphBounds` is True, the bounds of all glyphs will be - calculated. Only pass False if your glyph objects already have - their bounding box values set. - - If `validateGlyphFormat` is True, raise ValueError if any of the glyphs contains - cubic curves or is a variable composite but head.glyphDataFormat=0. - Set it to False to skip the check if you know in advance all the glyphs are - compatible with the specified glyphDataFormat. - """ - assert self.isTTF - - if validateGlyphFormat and self.font["head"].glyphDataFormat == 0: - for name, g in glyphs.items(): - if g.isVarComposite(): - raise ValueError( - f"Glyph {name!r} is a variable composite, but glyphDataFormat=0" - ) - elif g.numberOfContours > 0 and any(f & flagCubic for f in g.flags): - raise ValueError( - f"Glyph {name!r} has cubic Bezier outlines, but glyphDataFormat=0; " - "either convert to quadratics with cu2qu or set glyphDataFormat=1." - ) - - self.font["loca"] = newTable("loca") - self.font["glyf"] = newTable("glyf") - self.font["glyf"].glyphs = glyphs - if hasattr(self.font, "glyphOrder"): - self.font["glyf"].glyphOrder = self.font.glyphOrder - if calcGlyphBounds: - self.calcGlyphBounds() - - def setupFvar(self, axes, instances): - """Adds an font variations table to the font. 
- - Args: - axes (list): See below. - instances (list): See below. - - ``axes`` should be a list of axes, with each axis either supplied as - a py:class:`.designspaceLib.AxisDescriptor` object, or a tuple in the - format ```tupletag, minValue, defaultValue, maxValue, name``. - The ``name`` is either a string, or a dict, mapping language codes - to strings, to allow localized name table entries. - - ```instances`` should be a list of instances, with each instance either - supplied as a py:class:`.designspaceLib.InstanceDescriptor` object, or a - dict with keys ``location`` (mapping of axis tags to float values), - ``stylename`` and (optionally) ``postscriptfontname``. - The ``stylename`` is either a string, or a dict, mapping language codes - to strings, to allow localized name table entries. - """ - - addFvar(self.font, axes, instances) - - def setupAvar(self, axes, mappings=None): - """Adds an axis variations table to the font. - - Args: - axes (list): A list of py:class:`.designspaceLib.AxisDescriptor` objects. - """ - from .varLib import _add_avar - - if "fvar" not in self.font: - raise KeyError("'fvar' table is missing; can't add 'avar'.") - - axisTags = [axis.axisTag for axis in self.font["fvar"].axes] - axes = OrderedDict(enumerate(axes)) # Only values are used - _add_avar(self.font, axes, mappings, axisTags) - - def setupGvar(self, variations): - gvar = self.font["gvar"] = newTable("gvar") - gvar.version = 1 - gvar.reserved = 0 - gvar.variations = variations - - def calcGlyphBounds(self): - """Calculate the bounding boxes of all glyphs in the `glyf` table. - This is usually not called explicitly by client code. - """ - glyphTable = self.font["glyf"] - for glyph in glyphTable.glyphs.values(): - glyph.recalcBounds(glyphTable) - - def setupHorizontalMetrics(self, metrics): - """Create a new `hmtx` table, for horizontal metrics. - - The `metrics` argument must be a dict, mapping glyph names to - `(width, leftSidebearing)` tuples. - """ - self.setupMetrics("hmtx", metrics) - - def setupVerticalMetrics(self, metrics): - """Create a new `vmtx` table, for horizontal metrics. - - The `metrics` argument must be a dict, mapping glyph names to - `(height, topSidebearing)` tuples. - """ - self.setupMetrics("vmtx", metrics) - - def setupMetrics(self, tableTag, metrics): - """See `setupHorizontalMetrics()` and `setupVerticalMetrics()`.""" - assert tableTag in ("hmtx", "vmtx") - mtxTable = self.font[tableTag] = newTable(tableTag) - roundedMetrics = {} - for gn in metrics: - w, lsb = metrics[gn] - roundedMetrics[gn] = int(round(w)), int(round(lsb)) - mtxTable.metrics = roundedMetrics - - def setupHorizontalHeader(self, **values): - """Create a new `hhea` table initialize it with default values, - which can be overridden by keyword arguments. - """ - self._initTableWithValues("hhea", _hheaDefaults, values) - - def setupVerticalHeader(self, **values): - """Create a new `vhea` table initialize it with default values, - which can be overridden by keyword arguments. - """ - self._initTableWithValues("vhea", _vheaDefaults, values) - - def setupVerticalOrigins(self, verticalOrigins, defaultVerticalOrigin=None): - """Create a new `VORG` table. The `verticalOrigins` argument must be - a dict, mapping glyph names to vertical origin values. - - The `defaultVerticalOrigin` argument should be the most common vertical - origin value. If omitted, this value will be derived from the actual - values in the `verticalOrigins` argument. 
- """ - if defaultVerticalOrigin is None: - # find the most frequent vorg value - bag = {} - for gn in verticalOrigins: - vorg = verticalOrigins[gn] - if vorg not in bag: - bag[vorg] = 1 - else: - bag[vorg] += 1 - defaultVerticalOrigin = sorted( - bag, key=lambda vorg: bag[vorg], reverse=True - )[0] - self._initTableWithValues( - "VORG", - {}, - dict(VOriginRecords={}, defaultVertOriginY=defaultVerticalOrigin), - ) - vorgTable = self.font["VORG"] - vorgTable.majorVersion = 1 - vorgTable.minorVersion = 0 - for gn in verticalOrigins: - vorgTable[gn] = verticalOrigins[gn] - - def setupPost(self, keepGlyphNames=True, **values): - """Create a new `post` table and initialize it with default values, - which can be overridden by keyword arguments. - """ - isCFF2 = "CFF2" in self.font - postTable = self._initTableWithValues("post", _postDefaults, values) - if (self.isTTF or isCFF2) and keepGlyphNames: - postTable.formatType = 2.0 - postTable.extraNames = [] - postTable.mapping = {} - else: - postTable.formatType = 3.0 - - def setupMaxp(self): - """Create a new `maxp` table. This is called implicitly by FontBuilder - itself and is usually not called by client code. - """ - if self.isTTF: - defaults = _maxpDefaultsTTF - else: - defaults = _maxpDefaultsOTF - self._initTableWithValues("maxp", defaults, {}) - - def setupDummyDSIG(self): - """This adds an empty DSIG table to the font to make some MS applications - happy. This does not properly sign the font. - """ - values = dict( - ulVersion=1, - usFlag=0, - usNumSigs=0, - signatureRecords=[], - ) - self._initTableWithValues("DSIG", {}, values) - - def addOpenTypeFeatures(self, features, filename=None, tables=None, debug=False): - """Add OpenType features to the font from a string containing - Feature File syntax. - - The `filename` argument is used in error messages and to determine - where to look for "include" files. - - The optional `tables` argument can be a list of OTL tables tags to - build, allowing the caller to only build selected OTL tables. See - `fontTools.feaLib` for details. - - The optional `debug` argument controls whether to add source debugging - information to the font in the `Debg` table. - """ - from .feaLib.builder import addOpenTypeFeaturesFromString - - addOpenTypeFeaturesFromString( - self.font, features, filename=filename, tables=tables, debug=debug - ) - - def addFeatureVariations(self, conditionalSubstitutions, featureTag="rvrn"): - """Add conditional substitutions to a Variable Font. - - See `fontTools.varLib.featureVars.addFeatureVariations`. - """ - from .varLib import featureVars - - if "fvar" not in self.font: - raise KeyError("'fvar' table is missing; can't add FeatureVariations.") - - featureVars.addFeatureVariations( - self.font, conditionalSubstitutions, featureTag=featureTag - ) - - def setupCOLR( - self, - colorLayers, - version=None, - varStore=None, - varIndexMap=None, - clipBoxes=None, - allowLayerReuse=True, - ): - """Build new COLR table using color layers dictionary. - - Cf. `fontTools.colorLib.builder.buildCOLR`. - """ - from fontTools.colorLib.builder import buildCOLR - - glyphMap = self.font.getReverseGlyphMap() - self.font["COLR"] = buildCOLR( - colorLayers, - version=version, - glyphMap=glyphMap, - varStore=varStore, - varIndexMap=varIndexMap, - clipBoxes=clipBoxes, - allowLayerReuse=allowLayerReuse, - ) - - def setupCPAL( - self, - palettes, - paletteTypes=None, - paletteLabels=None, - paletteEntryLabels=None, - ): - """Build new CPAL table using list of palettes. 
- - Optionally build CPAL v1 table using paletteTypes, paletteLabels and - paletteEntryLabels. - - Cf. `fontTools.colorLib.builder.buildCPAL`. - """ - from fontTools.colorLib.builder import buildCPAL - - self.font["CPAL"] = buildCPAL( - palettes, - paletteTypes=paletteTypes, - paletteLabels=paletteLabels, - paletteEntryLabels=paletteEntryLabels, - nameTable=self.font.get("name"), - ) - - def setupStat(self, axes, locations=None, elidedFallbackName=2): - """Build a new 'STAT' table. - - See `fontTools.otlLib.builder.buildStatTable` for details about - the arguments. - """ - from .otlLib.builder import buildStatTable - - buildStatTable(self.font, axes, locations, elidedFallbackName) - - -def buildCmapSubTable(cmapping, format, platformID, platEncID): - subTable = cmap_classes[format](format) - subTable.cmap = cmapping - subTable.platformID = platformID - subTable.platEncID = platEncID - subTable.language = 0 - return subTable - - -def addFvar(font, axes, instances): - from .ttLib.tables._f_v_a_r import Axis, NamedInstance - - assert axes - - fvar = newTable("fvar") - nameTable = font["name"] - - for axis_def in axes: - axis = Axis() - - if isinstance(axis_def, tuple): - ( - axis.axisTag, - axis.minValue, - axis.defaultValue, - axis.maxValue, - name, - ) = axis_def - else: - (axis.axisTag, axis.minValue, axis.defaultValue, axis.maxValue, name) = ( - axis_def.tag, - axis_def.minimum, - axis_def.default, - axis_def.maximum, - axis_def.name, - ) - if axis_def.hidden: - axis.flags = 0x0001 # HIDDEN_AXIS - - if isinstance(name, str): - name = dict(en=name) - - axis.axisNameID = nameTable.addMultilingualName(name, ttFont=font) - fvar.axes.append(axis) - - for instance in instances: - if isinstance(instance, dict): - coordinates = instance["location"] - name = instance["stylename"] - psname = instance.get("postscriptfontname") - else: - coordinates = instance.location - name = instance.localisedStyleName or instance.styleName - psname = instance.postScriptFontName - - if isinstance(name, str): - name = dict(en=name) - - inst = NamedInstance() - inst.subfamilyNameID = nameTable.addMultilingualName(name, ttFont=font) - if psname is not None: - inst.postscriptNameID = nameTable.addName(psname) - inst.coordinates = coordinates - fvar.instances.append(inst) - - font["fvar"] = fvar diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/merge/base.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/merge/base.py deleted file mode 100644 index 37f9097ab2595413066cebd102fdf697280a93bb..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/merge/base.py +++ /dev/null @@ -1,81 +0,0 @@ -# Copyright 2013 Google, Inc. All Rights Reserved. -# -# Google Author(s): Behdad Esfahbod, Roozbeh Pournader - -from fontTools.ttLib.tables.DefaultTable import DefaultTable -import logging - - -log = logging.getLogger("fontTools.merge") - - -def add_method(*clazzes, **kwargs): - """Returns a decorator function that adds a new method to one or - more classes.""" - allowDefault = kwargs.get("allowDefaultTable", False) - - def wrapper(method): - done = [] - for clazz in clazzes: - if clazz in done: - continue # Support multiple names of a clazz - done.append(clazz) - assert allowDefault or clazz != DefaultTable, "Oops, table class not found." - assert ( - method.__name__ not in clazz.__dict__ - ), "Oops, class '%s' has method '%s'." 
% (clazz.__name__, method.__name__) - setattr(clazz, method.__name__, method) - return None - - return wrapper - - -def mergeObjects(lst): - lst = [item for item in lst if item is not NotImplemented] - if not lst: - return NotImplemented - lst = [item for item in lst if item is not None] - if not lst: - return None - - clazz = lst[0].__class__ - assert all(type(item) == clazz for item in lst), lst - - logic = clazz.mergeMap - returnTable = clazz() - returnDict = {} - - allKeys = set.union(set(), *(vars(table).keys() for table in lst)) - for key in allKeys: - try: - mergeLogic = logic[key] - except KeyError: - try: - mergeLogic = logic["*"] - except KeyError: - raise Exception( - "Don't know how to merge key %s of class %s" % (key, clazz.__name__) - ) - if mergeLogic is NotImplemented: - continue - value = mergeLogic(getattr(table, key, NotImplemented) for table in lst) - if value is not NotImplemented: - returnDict[key] = value - - returnTable.__dict__ = returnDict - - return returnTable - - -@add_method(DefaultTable, allowDefaultTable=True) -def merge(self, m, tables): - if not hasattr(self, "mergeMap"): - log.info("Don't know how to merge '%s'.", self.tableTag) - return NotImplemented - - logic = self.mergeMap - - if isinstance(logic, dict): - return m.mergeObjects(self, self.mergeMap, tables) - else: - return logic(tables) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/any_pb2.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/any_pb2.py deleted file mode 100644 index b0bfaf6351a6b1fe736c8809283f7c1909e36cba..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/any_pb2.py +++ /dev/null @@ -1,27 +0,0 @@ -# -*- coding: utf-8 -*- -# Generated by the protocol buffer compiler. DO NOT EDIT! -# source: google/protobuf/any.proto -"""Generated protocol buffer code.""" -from google.protobuf import descriptor as _descriptor -from google.protobuf import descriptor_pool as _descriptor_pool -from google.protobuf import symbol_database as _symbol_database -from google.protobuf.internal import builder as _builder -# @@protoc_insertion_point(imports) - -_sym_db = _symbol_database.Default() - - - - -DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x19google/protobuf/any.proto\x12\x0fgoogle.protobuf\"6\n\x03\x41ny\x12\x19\n\x08type_url\x18\x01 \x01(\tR\x07typeUrl\x12\x14\n\x05value\x18\x02 \x01(\x0cR\x05valueBv\n\x13\x63om.google.protobufB\x08\x41nyProtoP\x01Z,google.golang.org/protobuf/types/known/anypb\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3') - -_globals = globals() -_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals) -_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.any_pb2', _globals) -if _descriptor._USE_C_DESCRIPTORS == False: - - DESCRIPTOR._options = None - DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\010AnyProtoP\001Z,google.golang.org/protobuf/types/known/anypb\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes' - _globals['_ANY']._serialized_start=46 - _globals['_ANY']._serialized_end=100 -# @@protoc_insertion_point(module_scope) diff --git a/spaces/cihyFjudo/fairness-paper-search/LG G5s Bang Olufsen Audio Module May Not Reach U.S. REPACK.md b/spaces/cihyFjudo/fairness-paper-search/LG G5s Bang Olufsen Audio Module May Not Reach U.S. 
REPACK.md deleted file mode 100644 index 63e263ccc5d3b2e8673af9657bf79bf2100b105f..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/LG G5s Bang Olufsen Audio Module May Not Reach U.S. REPACK.md +++ /dev/null @@ -1,19 +0,0 @@ -
    -

Just like the LG CAM Plus, the B&O PLAY audio module snaps straight into the bottom of the G5, featuring its own dedicated 3.5mm jack, along with the same USB Type-C connector and bottom speaker grille that you find on the regular G5. As an immediate bonus, this second jack means that you can conveniently plug your headphones into the bottom of the phone. Interestingly, the regular LG G5 headphone jack continues to function while the external DAC is plugged in, although obviously without any of the benefits offered by the modular component.

    -

    We can only assume there has been a hold up getting FCC certification for the module or that there has been a disruption in the supply chain. Either way, if you were planning on ordering an LG G5 specifically so you could enjoy the superior audio the Hi-Fi DAC affords, you might want to hold off a little.

    -

    LG G5’s Bang Olufsen audio module may not reach U.S.


Download Zip https://tinurli.com/2uwkCV



    -

    Inside that module you'll find the standard USB-C port and loud speaker you expect to find on any G5 module (we have no indication that this speaker's any better than the default speaker), along with a new 3.5mm headphone jack powered by an integrated digital-to-audio converter.

    -

    You'll be doing this a lot if you want to swap between your G5's additional "Friend" modules, and the process can involve a certain amount of terror. See, swapping Friends sometimes involves removing the G5's 2,800mAh battery from one module and snapping it onto another. The best way I've found is the "removing a Band-Aid" approach -- a quick, decisive jerk while holding the battery in your left hand and the module in your right. Don't think about it -- just do it. It took me a good 15 minutes to figure out the process, because I was so worried I'd break something, but so far I've managed to avoid destroying either of our review loaners. Still, I'm curious about how long these things will last before some poor piece of plastic snaps. Beyond that, I'm frustrated that the G5 doesn't have some tiny auxiliary battery inside so that it doesn't have to restart every time you want to start using the camera grip or the audio DAC.

    -

    Meanwhile, the G5 has a single speaker wedged into its bottom, and it's one of the better ones through which I've listened to My Brother, My Brother and Me lately. It's surprisingly loud and does a fine job keeping the soundstage clear. You'll need some headphones (and maybe one of LG's HiFi audio modules) for long-term listening, but the G5 is a more capable audio machine out of the box than you might expect.

    -

    Then there's the Hi-Fi Plus module, which LG developed in tandem with Bang & Olufsen (and which doesn't seem to be coming to the US). Audiophiles will appreciate the fact that it upscales just about any audio -- be it from Spotify, YouTube, whatever -- to 32-bit quality. I'm currently on the hunt for the perfect earbuds, but none of the in-ears I tried with the Hi-Fi Plus sounded dramatically better than before. Best-case scenario, a track I had listened to hundreds of times in the past felt a little deeper. Other times, songs just sounded different. Not better, not worse, just different. In fairness, music buffs with more elaborate rigs will probably get more use out of this DAC than I did, especially since you can hook it up to other audio devices with an included cable.

    -

    The LG G5 is an Android smartphone developed by LG Electronics as part of the LG G series. It was announced during Mobile World Congress as the successor to the 2015 LG G4.[3][4][5] The G5 is distinguished from its predecessors by its aluminum chassis and a focus on modularity. Its lower housing, which holds the user-replaceable battery, can be slid from the bottom of the device, and replaced by alternative add-on modules that provide additional functions. Two modules are available: a camera grip, and a high-fidelity audio module with DAC. A lower-spec variation, dubbed the LG G5 SE, is available in some markets.

    -

    So, does the LG Hi-Fi make your audio sound "better"? I mean, if you like the way it sounds from the module better than the headphone jack, subjectively speaking, it could. But it's worth noting that a third-party music app with a few effects and an EQ could probably get you pretty close to doing the same thing, and with the knowledge that you are actually actively shaping the audio in a way that differs from the source. You could also turn it on and off (the Hi-Fi DAC offers no such feature - it is utterly opaque).

    -

    -

    Admittedly, whether you'll actually want to carry around extra modules is another matter entirely. What's more, there are currently only two modules available for the G5, and it doesn't look like LG's going to be following them up with more anytime soon. There's the £80 Cam Plus, a camera grip that adds physical buttons and a zoom wheel, and the £150 Hi-Fi Plus, a Bang & Olufsen-made portable Hi-Fi DAC with a built-in amplifier that supports 32-bit 384KHz high-definition audio and B&O Play, but that's it.

    -

The LG G5 was a Mobile World Congress highlight as the company showed it was daring to innovate. The company behind the digital logic inside the LG Hi-Fi Plus audio module had a word with us to show us the difference in quality.

    -

The LG G5 Hi-Fi Plus audio module for the G5 phone, with its Bang & Olufsen speakers, includes the ES9028 SABRE Digital to Analog Converter (DAC) and SABRE 9602 Headphone Amplifier, selected to provide a superior Hi-Fi audio experience. This is technology that usually finds its way into higher-end home amplifiers and receivers, and LG has seen the demand for high-resolution audio and chosen to offer the SABRE to its audiophile customers.

    -

    ESS's audiophile technology supports the industry's most popular high resolution and lossless audio formats including FLAC, ALAC, AIFF and WAV. The LG HiFi Plus supports 32-bit, 384KHz high definition audio playback and can be used either as a module with the LG G5 or as a separate HiFi DAC by connecting to any smartphone or PC.

    -

    You then snap the battery into place on the module, and slide the module into the phone. You need to give it a good bang to lock it in, as I discovered the hard way when testing the CAM PLUS module and it kept slipping out in my bag and making the phone turn off.

    -

    The LG Hi-Fi Plus is a module developed in partnership with B&O Play, offering a 32-bit DAC and amp, designed to give you an audiophile experience from your handset. Again, you simply swap the bottom module of the phone for the Hi-Fi Pro module and you'll be ready for better quality audio. It will cost £149.

    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Skogg System Workout Schedule PDF The Best Way to Use Kettlebells for Your Goals.md b/spaces/cihyFjudo/fairness-paper-search/Skogg System Workout Schedule PDF The Best Way to Use Kettlebells for Your Goals.md deleted file mode 100644 index dc3792b25704f79d6024460b4a39d212f6a23fe9..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Skogg System Workout Schedule PDF The Best Way to Use Kettlebells for Your Goals.md +++ /dev/null @@ -1,7 +0,0 @@ -
    -

    With a kettlebell in hand, and this training plan as your guide, you can quickly transform your body and improve upon your current level of fitness. You can use this free six-week training plan to workout at home, on your schedule, no fancy gym equipment or pricey membership required.

    -

    During weeks 1-3 you will train three days during the week. Ideally, this will take place on Monday, Wednesday, Friday. However, you are free to structure your training days to accommodate your schedule.

    -

    skogg system workout schedule pdf


    Download Zip > https://tinurli.com/2uwj4T



    -

    During weeks 5 and 6 you will train four days during the week. Ideally, this will take place on Monday, Tuesday, Thursday and Friday. Again, you are free to structure your training days to accommodate your schedule.

    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Steven Slate Drums Mac Torrentl The Ultimate Solution for Drummers and Producers on Mac.md b/spaces/cihyFjudo/fairness-paper-search/Steven Slate Drums Mac Torrentl The Ultimate Solution for Drummers and Producers on Mac.md deleted file mode 100644 index 4d3c13581370b3dcc44c0010444cdaee16f731c5..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Steven Slate Drums Mac Torrentl The Ultimate Solution for Drummers and Producers on Mac.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Steven Slate Drums Mac Torrentl


    Download Zip 🗸 https://tinurli.com/2uwjW5



    -
-
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/The Ultimate Checklist for the Application for Temporary Resident Visa for Mexico.md b/spaces/cihyFjudo/fairness-paper-search/The Ultimate Checklist for the Application for Temporary Resident Visa for Mexico.md deleted file mode 100644 index 926b7571e315bf053547e36c69157a183c94d6d7..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/The Ultimate Checklist for the Application for Temporary Resident Visa for Mexico.md +++ /dev/null @@ -1,28 +0,0 @@ -
    -

Keep in mind: different embassy offices may have their own requirements when it comes to the visa application process. This is why it is important to contact them or visit their website to learn about the specific requirements, opening hours, and so on.

    -

    The fee for a Mexico visa is around $36. However, the visa fee may change depending on the country in which you submit your application, as does the payment method. Some embassies may require you to pay the fee upfront via bank transfer while others ask you to pay in cash.

    -

    application for temporary resident visa for mexico


Download Zip https://tinurli.com/2uwkxo



    -

Citizenship benefits: visa-free or visa-on-arrival travel to 142 countries, including the Schengen Area, Canada and Japan; the right to live and work in Mexico at all times, with all the rights associated with Mercosur membership; low-cost citizenship; access to educational and healthcare benefits.
Permanent residency benefits: eligibility for citizenship in a short period of time; the right to live and work in Mexico without any time limit; visa-free travel across Mercosur; access to the healthcare and education system.
Temporary residency benefits: warm climate; low cost of living; a low-cost program; visa-free travel across Mercosur; eventual eligibility for permanent residency.

    -

    Our qualified lawyers in Mexico will handle all paperwork, and guide you through each step of the process to ensure that it is as fast, efficient and pain-free as possible.

    First, our lawyers will conduct a due diligence check on you to ensure that you are eligible and meet the criteria to apply for the visa, and that your application will be approved. Then we will send you the application forms and provide assistance to complete the questionnaires, translate and legalize documents.

    If required, we will introduce you to banks in Mexico in order to open a bank account and transfer the required monthly income.
    You may undergo biometric procedures and submit the application with its supporting documents to a Mexican consulate or embassy.

Once the visa application is approved, you will be granted a temporary resident visa. You must travel to Mexico and, within 30 days of your arrival, apply for a temporary resident permit at the National Migration Institute (INM). The whole process may take 1-3 months.

    For further information on the Mexican temporary resident visa and detailed procedures, contact us for a free private consultation.

    -

    Near the end of the four consecutive years holding temporary residency, you can apply to exchange your Residente Temporal permit for a Residente Permanente permit. The transfer from temporary to permanent residency is undertaken at your nearest immigration office in Mexico with an application procedure and payment of the processing fees.

    -

    If you need assistance with your Mexico residency application, renewals, or regularization procedures, our Mexico Immigration Assistance Service provides consulting, advice, and practical support that assists you through the entire residency application or renewal process, including visa exchanges, regularization procedures, and troubleshooting.

    -

    Hi there! Thanks for the amazing information. I have a couple of questions. My boyfriend and I want to apply for temporary resident visa so we can live in Queretaro. I qualify for solvency but he does not. If I add him to my bank accounts so they are now joint, will we be able to qualify that way? Also if we obtain the temporary residency CARD in Cozumel, is it valid to live in Queretaro? How long do I have to change the address and how do I go about it? Thanks!

    -

    Hello-
    If I am seeking employment in Mexico, but unsure if I will obtain it, do I jeopardize my chances of getting a visa to work in Mexico by getting a temporary visa as well? Said another way, I want to keep my options open- work remotely in Mexico (via a temporary visa) AND apply for jobs teaching at international schools in Mexico. Can I apply for the temporary visa or will that cause issues if I get a job and need a work visa? Thank you.

    -

    -

    There are two approaches to this, similar to the previous type of temporary visa. You can qualify by showing an account with at least $20,200 US dollars at the end of every month for the last 12 months, or by showing an account with a monthly income which has a minimum average balance of $2,000 US dollars for the last 6 months.
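To make those two routes concrete, here is a minimal sketch in Python using only the dollar figures quoted in the paragraph above; the actual thresholds vary by consulate and change over time, and the constant and function names are purely illustrative, not part of any official procedure.

```python
# Figures as quoted in the passage above; real requirements vary by consulate and year.
SAVINGS_THRESHOLD_USD = 20_200   # balance at the end of each of the last 12 months
INCOME_THRESHOLD_USD = 2_000     # minimum average monthly balance over the last 6 months

def qualifies_for_permanent_residency(monthly_balances_12m, monthly_average_6m):
    """Return True if either route described in the article is satisfied."""
    savings_route = len(monthly_balances_12m) >= 12 and all(
        balance >= SAVINGS_THRESHOLD_USD for balance in monthly_balances_12m[-12:]
    )
    income_route = monthly_average_6m >= INCOME_THRESHOLD_USD
    return savings_route or income_route

# Example: 12 end-of-month balances above the savings threshold qualify on their own.
print(qualifies_for_permanent_residency([21_000] * 12, 1_500))  # True (savings route)
```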

    -

    Whether you are a student or a teacher, a farm worker or a businessperson, a refugee or a temporary resident, if you are an immigrant and you need to do business with Social Security, you have come to the right place.

    -

    Mexico has two types of residency visas, Temporary Resident (Residente Temporal) and Permanent Resident (Residente Permanente). These are actual visas for which you get a stamp (sticker) on your passport and, eventually, a resident card. They both have minimum financial requirements that must be met. See your local consulate's website for the specific amounts required. For your initial resident visa, the process must be started at a Mexican consulate outside of Mexico (except in special circumstances).

    -

    You'll want to visit the website of the consulate you'll be using to see how they do the resident visa process. Some (very few) take walk-ins, some require an appointment to be made by phone, some require you to make your appointment via email or online. For those that require an appointment, an appointment needs to be made for each person applying. If you're applying as a couple, you'll need two appointments.

    -

    When deciding which type of visa to get, temporary or permanent, there are a few things to consider. One is the obvious difference in the financial requirements needed to get the visa. If your income or savings/investments is not high enough to qualify for the permanent visa, you get the temporary visa instead.

    -

    Consider too, that with a temporary resident visa you must apply and get permission if you're going to be working (earning income) within Mexico. If you're working online and not earning money from a Mexican source, you don't need permission to work. However, if you are earning money from a Mexican source, you will need permission.

    -

    A temporary residency visa is good for one to four years. Typically, you'll receive one year from INM with your first visa and then you can renew for one to three years more. This varies by office, with some INM offices only allowing yearly renewals and others allowing you to renew for three years. After a maximum of four years, you must either leave the country or switch to permanent residency.

    -

    The minimum financial requirements for a temporary residency visa are either an income of 300 times the Mexican minimum wage (as of 2022, $172.87 pesos per day) for the previous six months, or, if using savings/investments, it's 5,000 times the Mexican minimum wage in your bank accounts for the previous 12 months. This means if you are using income, you need to show proof of earning an income of at least $51,861 pesos per month for the previous six months. For savings/investments, you must have had a balance in your account of at least $864,350 pesos for the previous 12 months. You can find the specific amount for your currency by dividing this by the peso exchange rate at the time you're applying. This amount varies a little by consulate which is why you'll want to visit the consulate's website to see what that specific consulate is requiring. Consulate "shopping" is allowed although some consulates will only serve people that live in the area.
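As a quick sanity check of the arithmetic above, the sketch below reproduces the quoted peso figures from the 2022 daily minimum wage; the multipliers and the wage come from the paragraph itself, and the 17 MXN-per-USD exchange rate is only a hypothetical example, not a published rate.

```python
DAILY_MINIMUM_WAGE_MXN = 172.87  # 2022 figure quoted above

monthly_income_required = 300 * DAILY_MINIMUM_WAGE_MXN     # 51,861.0 MXN per month
savings_balance_required = 5_000 * DAILY_MINIMUM_WAGE_MXN  # 864,350.0 MXN

# To express either figure in another currency, divide by the exchange rate at the
# time of application; 17 MXN per USD here is a hypothetical rate for illustration.
print(round(monthly_income_required / 17, 2))   # ~3050.65 USD per month
print(round(savings_balance_required / 17, 2))  # ~50844.12 USD
```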

    -

    Mexican permanent residency is just that, permanent. There is no need to renew it. Permanent residency has higher financial requirements than temporary residency. Many consulates will not give out permanent residency visas, except for people who have pension income. Don't argue with the consulate. Simply come in on a temporary visa and switch to permanent either at renewal time (the financial requirements must be met) or after four years (no further proof of income is required).

    -

    When you enter Mexico after getting your residency visa, you will get an FMM ("tourist visa") if you are entering at a port of entry still issuing paper FMMs. When you are filling this out, write either RESIDENTE TEMPORAL (temporary residency) or RESIDENTE PERMANENTE (permanent residency), whichever applies, across the top in big block letters. Use whatever address you will be staying at. Mark the purpose of your trip as "other." Show the immigration agent the visa in your passport. This is very important as you DO NOT want to enter Mexico as a tourist, which cancels the visa you just paid for! The agent will know what to do. He/she will give you 30 days on the FMM and mark the CANJE ("exchange") box. You will turn in this FMM with your initial application to finalize your residency with INM.

    -

Holders of the visa may bring their spouses and dependents. They will also be considered tax residents of Mexico if they stay in the country for more than 183 days out of the year.

    -

    The time has come for your visa appointment. Make sure you have all of your documents and completed application ready to bring with you. There will also be a fee of $48, payable in cash or money order.

    -

    Ensure that you have all the documentation you need for your visa application and allow sufficient time for processing a new visa. The documentation you may need for a new visa includes, but is not limited to the following:

    -

    No, not without advance permission. If you depart the United States with a pending Form I-485, you have abandoned your application unless you receive permission in advance from USCIS to return to the United States. We call this Advance Parole. Additionally, CBP may also consider you ineligible to return to the United States as an F-1 student because your application to change status to that of a permanent resident is evidence of intent to immigrate, which is inconsistent with nonimmigrant student status.

    -

    If you exit the United States and apply for a visa, you cannot return to the United States until DoS issues you a new visa. This could require a lengthy stay. If DoS denies your visa application, you will not be able to return to the United States as a student.

    -
    -
    \ No newline at end of file diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/bezierTools.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/bezierTools.py deleted file mode 100644 index 7772a4bf8588d2723f2435c7a2ba56ce47a71cf1..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/misc/bezierTools.py +++ /dev/null @@ -1,1474 +0,0 @@ -# -*- coding: utf-8 -*- -"""fontTools.misc.bezierTools.py -- tools for working with Bezier path segments. -""" - -from fontTools.misc.arrayTools import calcBounds, sectRect, rectArea -from fontTools.misc.transform import Identity -import math -from collections import namedtuple - -try: - import cython - - COMPILED = cython.compiled -except (AttributeError, ImportError): - # if cython not installed, use mock module with no-op decorators and types - from fontTools.misc import cython - - COMPILED = False - - -Intersection = namedtuple("Intersection", ["pt", "t1", "t2"]) - - -__all__ = [ - "approximateCubicArcLength", - "approximateCubicArcLengthC", - "approximateQuadraticArcLength", - "approximateQuadraticArcLengthC", - "calcCubicArcLength", - "calcCubicArcLengthC", - "calcQuadraticArcLength", - "calcQuadraticArcLengthC", - "calcCubicBounds", - "calcQuadraticBounds", - "splitLine", - "splitQuadratic", - "splitCubic", - "splitQuadraticAtT", - "splitCubicAtT", - "splitCubicAtTC", - "splitCubicIntoTwoAtTC", - "solveQuadratic", - "solveCubic", - "quadraticPointAtT", - "cubicPointAtT", - "cubicPointAtTC", - "linePointAtT", - "segmentPointAtT", - "lineLineIntersections", - "curveLineIntersections", - "curveCurveIntersections", - "segmentSegmentIntersections", -] - - -def calcCubicArcLength(pt1, pt2, pt3, pt4, tolerance=0.005): - """Calculates the arc length for a cubic Bezier segment. - - Whereas :func:`approximateCubicArcLength` approximates the length, this - function calculates it by "measuring", recursively dividing the curve - until the divided segments are shorter than ``tolerance``. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples. - tolerance: Controls the precision of the calcuation. - - Returns: - Arc length value. - """ - return calcCubicArcLengthC( - complex(*pt1), complex(*pt2), complex(*pt3), complex(*pt4), tolerance - ) - - -def _split_cubic_into_two(p0, p1, p2, p3): - mid = (p0 + 3 * (p1 + p2) + p3) * 0.125 - deriv3 = (p3 + p2 - p1 - p0) * 0.125 - return ( - (p0, (p0 + p1) * 0.5, mid - deriv3, mid), - (mid, mid + deriv3, (p2 + p3) * 0.5, p3), - ) - - -@cython.returns(cython.double) -@cython.locals( - p0=cython.complex, - p1=cython.complex, - p2=cython.complex, - p3=cython.complex, -) -@cython.locals(mult=cython.double, arch=cython.double, box=cython.double) -def _calcCubicArcLengthCRecurse(mult, p0, p1, p2, p3): - arch = abs(p0 - p3) - box = abs(p0 - p1) + abs(p1 - p2) + abs(p2 - p3) - if arch * mult >= box: - return (arch + box) * 0.5 - else: - one, two = _split_cubic_into_two(p0, p1, p2, p3) - return _calcCubicArcLengthCRecurse(mult, *one) + _calcCubicArcLengthCRecurse( - mult, *two - ) - - -@cython.returns(cython.double) -@cython.locals( - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, - pt4=cython.complex, -) -@cython.locals( - tolerance=cython.double, - mult=cython.double, -) -def calcCubicArcLengthC(pt1, pt2, pt3, pt4, tolerance=0.005): - """Calculates the arc length for a cubic Bezier segment. 
- - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as complex numbers. - tolerance: Controls the precision of the calcuation. - - Returns: - Arc length value. - """ - mult = 1.0 + 1.5 * tolerance # The 1.5 is a empirical hack; no math - return _calcCubicArcLengthCRecurse(mult, pt1, pt2, pt3, pt4) - - -epsilonDigits = 6 -epsilon = 1e-10 - - -@cython.cfunc -@cython.inline -@cython.returns(cython.double) -@cython.locals(v1=cython.complex, v2=cython.complex) -def _dot(v1, v2): - return (v1 * v2.conjugate()).real - - -@cython.cfunc -@cython.inline -@cython.returns(cython.double) -@cython.locals(x=cython.complex) -def _intSecAtan(x): - # In : sympy.integrate(sp.sec(sp.atan(x))) - # Out: x*sqrt(x**2 + 1)/2 + asinh(x)/2 - return x * math.sqrt(x**2 + 1) / 2 + math.asinh(x) / 2 - - -def calcQuadraticArcLength(pt1, pt2, pt3): - """Calculates the arc length for a quadratic Bezier segment. - - Args: - pt1: Start point of the Bezier as 2D tuple. - pt2: Handle point of the Bezier as 2D tuple. - pt3: End point of the Bezier as 2D tuple. - - Returns: - Arc length value. - - Example:: - - >>> calcQuadraticArcLength((0, 0), (0, 0), (0, 0)) # empty segment - 0.0 - >>> calcQuadraticArcLength((0, 0), (50, 0), (80, 0)) # collinear points - 80.0 - >>> calcQuadraticArcLength((0, 0), (0, 50), (0, 80)) # collinear points vertical - 80.0 - >>> calcQuadraticArcLength((0, 0), (50, 20), (100, 40)) # collinear points - 107.70329614269008 - >>> calcQuadraticArcLength((0, 0), (0, 100), (100, 0)) - 154.02976155645263 - >>> calcQuadraticArcLength((0, 0), (0, 50), (100, 0)) - 120.21581243984076 - >>> calcQuadraticArcLength((0, 0), (50, -10), (80, 50)) - 102.53273816445825 - >>> calcQuadraticArcLength((0, 0), (40, 0), (-40, 0)) # collinear points, control point outside - 66.66666666666667 - >>> calcQuadraticArcLength((0, 0), (40, 0), (0, 0)) # collinear points, looping back - 40.0 - """ - return calcQuadraticArcLengthC(complex(*pt1), complex(*pt2), complex(*pt3)) - - -@cython.returns(cython.double) -@cython.locals( - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, - d0=cython.complex, - d1=cython.complex, - d=cython.complex, - n=cython.complex, -) -@cython.locals( - scale=cython.double, - origDist=cython.double, - a=cython.double, - b=cython.double, - x0=cython.double, - x1=cython.double, - Len=cython.double, -) -def calcQuadraticArcLengthC(pt1, pt2, pt3): - """Calculates the arc length for a quadratic Bezier segment. - - Args: - pt1: Start point of the Bezier as a complex number. - pt2: Handle point of the Bezier as a complex number. - pt3: End point of the Bezier as a complex number. - - Returns: - Arc length value. - """ - # Analytical solution to the length of a quadratic bezier. - # Documentation: https://github.com/fonttools/fonttools/issues/3055 - d0 = pt2 - pt1 - d1 = pt3 - pt2 - d = d1 - d0 - n = d * 1j - scale = abs(n) - if scale == 0.0: - return abs(pt3 - pt1) - origDist = _dot(n, d0) - if abs(origDist) < epsilon: - if _dot(d0, d1) >= 0: - return abs(pt3 - pt1) - a, b = abs(d0), abs(d1) - return (a * a + b * b) / (a + b) - x0 = _dot(d, d0) / origDist - x1 = _dot(d, d1) / origDist - Len = abs(2 * (_intSecAtan(x1) - _intSecAtan(x0)) * origDist / (scale * (x1 - x0))) - return Len - - -def approximateQuadraticArcLength(pt1, pt2, pt3): - """Calculates the arc length for a quadratic Bezier segment. - - Uses Gauss-Legendre quadrature for a branch-free approximation. - See :func:`calcQuadraticArcLength` for a slower but more accurate result. - - Args: - pt1: Start point of the Bezier as 2D tuple. 
- pt2: Handle point of the Bezier as 2D tuple. - pt3: End point of the Bezier as 2D tuple. - - Returns: - Approximate arc length value. - """ - return approximateQuadraticArcLengthC(complex(*pt1), complex(*pt2), complex(*pt3)) - - -@cython.returns(cython.double) -@cython.locals( - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, -) -@cython.locals( - v0=cython.double, - v1=cython.double, - v2=cython.double, -) -def approximateQuadraticArcLengthC(pt1, pt2, pt3): - """Calculates the arc length for a quadratic Bezier segment. - - Uses Gauss-Legendre quadrature for a branch-free approximation. - See :func:`calcQuadraticArcLength` for a slower but more accurate result. - - Args: - pt1: Start point of the Bezier as a complex number. - pt2: Handle point of the Bezier as a complex number. - pt3: End point of the Bezier as a complex number. - - Returns: - Approximate arc length value. - """ - # This, essentially, approximates the length-of-derivative function - # to be integrated with the best-matching fifth-degree polynomial - # approximation of it. - # - # https://en.wikipedia.org/wiki/Gaussian_quadrature#Gauss.E2.80.93Legendre_quadrature - - # abs(BezierCurveC[2].diff(t).subs({t:T})) for T in sorted(.5, .5±sqrt(3/5)/2), - # weighted 5/18, 8/18, 5/18 respectively. - v0 = abs( - -0.492943519233745 * pt1 + 0.430331482911935 * pt2 + 0.0626120363218102 * pt3 - ) - v1 = abs(pt3 - pt1) * 0.4444444444444444 - v2 = abs( - -0.0626120363218102 * pt1 - 0.430331482911935 * pt2 + 0.492943519233745 * pt3 - ) - - return v0 + v1 + v2 - - -def calcQuadraticBounds(pt1, pt2, pt3): - """Calculates the bounding rectangle for a quadratic Bezier segment. - - Args: - pt1: Start point of the Bezier as a 2D tuple. - pt2: Handle point of the Bezier as a 2D tuple. - pt3: End point of the Bezier as a 2D tuple. - - Returns: - A four-item tuple representing the bounding rectangle ``(xMin, yMin, xMax, yMax)``. - - Example:: - - >>> calcQuadraticBounds((0, 0), (50, 100), (100, 0)) - (0, 0, 100, 50.0) - >>> calcQuadraticBounds((0, 0), (100, 0), (100, 100)) - (0.0, 0.0, 100, 100) - """ - (ax, ay), (bx, by), (cx, cy) = calcQuadraticParameters(pt1, pt2, pt3) - ax2 = ax * 2.0 - ay2 = ay * 2.0 - roots = [] - if ax2 != 0: - roots.append(-bx / ax2) - if ay2 != 0: - roots.append(-by / ay2) - points = [ - (ax * t * t + bx * t + cx, ay * t * t + by * t + cy) - for t in roots - if 0 <= t < 1 - ] + [pt1, pt3] - return calcBounds(points) - - -def approximateCubicArcLength(pt1, pt2, pt3, pt4): - """Approximates the arc length for a cubic Bezier segment. - - Uses Gauss-Lobatto quadrature with n=5 points to approximate arc length. - See :func:`calcCubicArcLength` for a slower but more accurate result. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples. - - Returns: - Arc length value. - - Example:: - - >>> approximateCubicArcLength((0, 0), (25, 100), (75, 100), (100, 0)) - 190.04332968932817 - >>> approximateCubicArcLength((0, 0), (50, 0), (100, 50), (100, 100)) - 154.8852074945903 - >>> approximateCubicArcLength((0, 0), (50, 0), (100, 0), (150, 0)) # line; exact result should be 150. - 149.99999999999991 - >>> approximateCubicArcLength((0, 0), (50, 0), (100, 0), (-50, 0)) # cusp; exact result should be 150. 
- 136.9267662156362 - >>> approximateCubicArcLength((0, 0), (50, 0), (100, -50), (-50, 0)) # cusp - 154.80848416537057 - """ - return approximateCubicArcLengthC( - complex(*pt1), complex(*pt2), complex(*pt3), complex(*pt4) - ) - - -@cython.returns(cython.double) -@cython.locals( - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, - pt4=cython.complex, -) -@cython.locals( - v0=cython.double, - v1=cython.double, - v2=cython.double, - v3=cython.double, - v4=cython.double, -) -def approximateCubicArcLengthC(pt1, pt2, pt3, pt4): - """Approximates the arc length for a cubic Bezier segment. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as complex numbers. - - Returns: - Arc length value. - """ - # This, essentially, approximates the length-of-derivative function - # to be integrated with the best-matching seventh-degree polynomial - # approximation of it. - # - # https://en.wikipedia.org/wiki/Gaussian_quadrature#Gauss.E2.80.93Lobatto_rules - - # abs(BezierCurveC[3].diff(t).subs({t:T})) for T in sorted(0, .5±(3/7)**.5/2, .5, 1), - # weighted 1/20, 49/180, 32/90, 49/180, 1/20 respectively. - v0 = abs(pt2 - pt1) * 0.15 - v1 = abs( - -0.558983582205757 * pt1 - + 0.325650248872424 * pt2 - + 0.208983582205757 * pt3 - + 0.024349751127576 * pt4 - ) - v2 = abs(pt4 - pt1 + pt3 - pt2) * 0.26666666666666666 - v3 = abs( - -0.024349751127576 * pt1 - - 0.208983582205757 * pt2 - - 0.325650248872424 * pt3 - + 0.558983582205757 * pt4 - ) - v4 = abs(pt4 - pt3) * 0.15 - - return v0 + v1 + v2 + v3 + v4 - - -def calcCubicBounds(pt1, pt2, pt3, pt4): - """Calculates the bounding rectangle for a quadratic Bezier segment. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples. - - Returns: - A four-item tuple representing the bounding rectangle ``(xMin, yMin, xMax, yMax)``. - - Example:: - - >>> calcCubicBounds((0, 0), (25, 100), (75, 100), (100, 0)) - (0, 0, 100, 75.0) - >>> calcCubicBounds((0, 0), (50, 0), (100, 50), (100, 100)) - (0.0, 0.0, 100, 100) - >>> print("%f %f %f %f" % calcCubicBounds((50, 0), (0, 100), (100, 100), (50, 0))) - 35.566243 0.000000 64.433757 75.000000 - """ - (ax, ay), (bx, by), (cx, cy), (dx, dy) = calcCubicParameters(pt1, pt2, pt3, pt4) - # calc first derivative - ax3 = ax * 3.0 - ay3 = ay * 3.0 - bx2 = bx * 2.0 - by2 = by * 2.0 - xRoots = [t for t in solveQuadratic(ax3, bx2, cx) if 0 <= t < 1] - yRoots = [t for t in solveQuadratic(ay3, by2, cy) if 0 <= t < 1] - roots = xRoots + yRoots - - points = [ - ( - ax * t * t * t + bx * t * t + cx * t + dx, - ay * t * t * t + by * t * t + cy * t + dy, - ) - for t in roots - ] + [pt1, pt4] - return calcBounds(points) - - -def splitLine(pt1, pt2, where, isHorizontal): - """Split a line at a given coordinate. - - Args: - pt1: Start point of line as 2D tuple. - pt2: End point of line as 2D tuple. - where: Position at which to split the line. - isHorizontal: Direction of the ray splitting the line. If true, - ``where`` is interpreted as a Y coordinate; if false, then - ``where`` is interpreted as an X coordinate. - - Returns: - A list of two line segments (each line segment being two 2D tuples) - if the line was successfully split, or a list containing the original - line. 
- - Example:: - - >>> printSegments(splitLine((0, 0), (100, 100), 50, True)) - ((0, 0), (50, 50)) - ((50, 50), (100, 100)) - >>> printSegments(splitLine((0, 0), (100, 100), 100, True)) - ((0, 0), (100, 100)) - >>> printSegments(splitLine((0, 0), (100, 100), 0, True)) - ((0, 0), (0, 0)) - ((0, 0), (100, 100)) - >>> printSegments(splitLine((0, 0), (100, 100), 0, False)) - ((0, 0), (0, 0)) - ((0, 0), (100, 100)) - >>> printSegments(splitLine((100, 0), (0, 0), 50, False)) - ((100, 0), (50, 0)) - ((50, 0), (0, 0)) - >>> printSegments(splitLine((0, 100), (0, 0), 50, True)) - ((0, 100), (0, 50)) - ((0, 50), (0, 0)) - """ - pt1x, pt1y = pt1 - pt2x, pt2y = pt2 - - ax = pt2x - pt1x - ay = pt2y - pt1y - - bx = pt1x - by = pt1y - - a = (ax, ay)[isHorizontal] - - if a == 0: - return [(pt1, pt2)] - t = (where - (bx, by)[isHorizontal]) / a - if 0 <= t < 1: - midPt = ax * t + bx, ay * t + by - return [(pt1, midPt), (midPt, pt2)] - else: - return [(pt1, pt2)] - - -def splitQuadratic(pt1, pt2, pt3, where, isHorizontal): - """Split a quadratic Bezier curve at a given coordinate. - - Args: - pt1,pt2,pt3: Control points of the Bezier as 2D tuples. - where: Position at which to split the curve. - isHorizontal: Direction of the ray splitting the curve. If true, - ``where`` is interpreted as a Y coordinate; if false, then - ``where`` is interpreted as an X coordinate. - - Returns: - A list of two curve segments (each curve segment being three 2D tuples) - if the curve was successfully split, or a list containing the original - curve. - - Example:: - - >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 150, False)) - ((0, 0), (50, 100), (100, 0)) - >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 50, False)) - ((0, 0), (25, 50), (50, 50)) - ((50, 50), (75, 50), (100, 0)) - >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 25, False)) - ((0, 0), (12.5, 25), (25, 37.5)) - ((25, 37.5), (62.5, 75), (100, 0)) - >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 25, True)) - ((0, 0), (7.32233, 14.6447), (14.6447, 25)) - ((14.6447, 25), (50, 75), (85.3553, 25)) - ((85.3553, 25), (92.6777, 14.6447), (100, -7.10543e-15)) - >>> # XXX I'm not at all sure if the following behavior is desirable: - >>> printSegments(splitQuadratic((0, 0), (50, 100), (100, 0), 50, True)) - ((0, 0), (25, 50), (50, 50)) - ((50, 50), (50, 50), (50, 50)) - ((50, 50), (75, 50), (100, 0)) - """ - a, b, c = calcQuadraticParameters(pt1, pt2, pt3) - solutions = solveQuadratic( - a[isHorizontal], b[isHorizontal], c[isHorizontal] - where - ) - solutions = sorted(t for t in solutions if 0 <= t < 1) - if not solutions: - return [(pt1, pt2, pt3)] - return _splitQuadraticAtT(a, b, c, *solutions) - - -def splitCubic(pt1, pt2, pt3, pt4, where, isHorizontal): - """Split a cubic Bezier curve at a given coordinate. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples. - where: Position at which to split the curve. - isHorizontal: Direction of the ray splitting the curve. If true, - ``where`` is interpreted as a Y coordinate; if false, then - ``where`` is interpreted as an X coordinate. - - Returns: - A list of two curve segments (each curve segment being four 2D tuples) - if the curve was successfully split, or a list containing the original - curve. 
- - Example:: - - >>> printSegments(splitCubic((0, 0), (25, 100), (75, 100), (100, 0), 150, False)) - ((0, 0), (25, 100), (75, 100), (100, 0)) - >>> printSegments(splitCubic((0, 0), (25, 100), (75, 100), (100, 0), 50, False)) - ((0, 0), (12.5, 50), (31.25, 75), (50, 75)) - ((50, 75), (68.75, 75), (87.5, 50), (100, 0)) - >>> printSegments(splitCubic((0, 0), (25, 100), (75, 100), (100, 0), 25, True)) - ((0, 0), (2.29379, 9.17517), (4.79804, 17.5085), (7.47414, 25)) - ((7.47414, 25), (31.2886, 91.6667), (68.7114, 91.6667), (92.5259, 25)) - ((92.5259, 25), (95.202, 17.5085), (97.7062, 9.17517), (100, 1.77636e-15)) - """ - a, b, c, d = calcCubicParameters(pt1, pt2, pt3, pt4) - solutions = solveCubic( - a[isHorizontal], b[isHorizontal], c[isHorizontal], d[isHorizontal] - where - ) - solutions = sorted(t for t in solutions if 0 <= t < 1) - if not solutions: - return [(pt1, pt2, pt3, pt4)] - return _splitCubicAtT(a, b, c, d, *solutions) - - -def splitQuadraticAtT(pt1, pt2, pt3, *ts): - """Split a quadratic Bezier curve at one or more values of t. - - Args: - pt1,pt2,pt3: Control points of the Bezier as 2D tuples. - *ts: Positions at which to split the curve. - - Returns: - A list of curve segments (each curve segment being three 2D tuples). - - Examples:: - - >>> printSegments(splitQuadraticAtT((0, 0), (50, 100), (100, 0), 0.5)) - ((0, 0), (25, 50), (50, 50)) - ((50, 50), (75, 50), (100, 0)) - >>> printSegments(splitQuadraticAtT((0, 0), (50, 100), (100, 0), 0.5, 0.75)) - ((0, 0), (25, 50), (50, 50)) - ((50, 50), (62.5, 50), (75, 37.5)) - ((75, 37.5), (87.5, 25), (100, 0)) - """ - a, b, c = calcQuadraticParameters(pt1, pt2, pt3) - return _splitQuadraticAtT(a, b, c, *ts) - - -def splitCubicAtT(pt1, pt2, pt3, pt4, *ts): - """Split a cubic Bezier curve at one or more values of t. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as 2D tuples. - *ts: Positions at which to split the curve. - - Returns: - A list of curve segments (each curve segment being four 2D tuples). - - Examples:: - - >>> printSegments(splitCubicAtT((0, 0), (25, 100), (75, 100), (100, 0), 0.5)) - ((0, 0), (12.5, 50), (31.25, 75), (50, 75)) - ((50, 75), (68.75, 75), (87.5, 50), (100, 0)) - >>> printSegments(splitCubicAtT((0, 0), (25, 100), (75, 100), (100, 0), 0.5, 0.75)) - ((0, 0), (12.5, 50), (31.25, 75), (50, 75)) - ((50, 75), (59.375, 75), (68.75, 68.75), (77.3438, 56.25)) - ((77.3438, 56.25), (85.9375, 43.75), (93.75, 25), (100, 0)) - """ - a, b, c, d = calcCubicParameters(pt1, pt2, pt3, pt4) - return _splitCubicAtT(a, b, c, d, *ts) - - -@cython.locals( - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, - pt4=cython.complex, - a=cython.complex, - b=cython.complex, - c=cython.complex, - d=cython.complex, -) -def splitCubicAtTC(pt1, pt2, pt3, pt4, *ts): - """Split a cubic Bezier curve at one or more values of t. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as complex numbers.. - *ts: Positions at which to split the curve. - - Yields: - Curve segments (each curve segment being four complex numbers). 
- """ - a, b, c, d = calcCubicParametersC(pt1, pt2, pt3, pt4) - yield from _splitCubicAtTC(a, b, c, d, *ts) - - -@cython.returns(cython.complex) -@cython.locals( - t=cython.double, - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, - pt4=cython.complex, - pointAtT=cython.complex, - off1=cython.complex, - off2=cython.complex, -) -@cython.locals( - t2=cython.double, _1_t=cython.double, _1_t_2=cython.double, _2_t_1_t=cython.double -) -def splitCubicIntoTwoAtTC(pt1, pt2, pt3, pt4, t): - """Split a cubic Bezier curve at t. - - Args: - pt1,pt2,pt3,pt4: Control points of the Bezier as complex numbers. - t: Position at which to split the curve. - - Returns: - A tuple of two curve segments (each curve segment being four complex numbers). - """ - t2 = t * t - _1_t = 1 - t - _1_t_2 = _1_t * _1_t - _2_t_1_t = 2 * t * _1_t - pointAtT = ( - _1_t_2 * _1_t * pt1 + 3 * (_1_t_2 * t * pt2 + _1_t * t2 * pt3) + t2 * t * pt4 - ) - off1 = _1_t_2 * pt1 + _2_t_1_t * pt2 + t2 * pt3 - off2 = _1_t_2 * pt2 + _2_t_1_t * pt3 + t2 * pt4 - - pt2 = pt1 + (pt2 - pt1) * t - pt3 = pt4 + (pt3 - pt4) * _1_t - - return ((pt1, pt2, off1, pointAtT), (pointAtT, off2, pt3, pt4)) - - -def _splitQuadraticAtT(a, b, c, *ts): - ts = list(ts) - segments = [] - ts.insert(0, 0.0) - ts.append(1.0) - ax, ay = a - bx, by = b - cx, cy = c - for i in range(len(ts) - 1): - t1 = ts[i] - t2 = ts[i + 1] - delta = t2 - t1 - # calc new a, b and c - delta_2 = delta * delta - a1x = ax * delta_2 - a1y = ay * delta_2 - b1x = (2 * ax * t1 + bx) * delta - b1y = (2 * ay * t1 + by) * delta - t1_2 = t1 * t1 - c1x = ax * t1_2 + bx * t1 + cx - c1y = ay * t1_2 + by * t1 + cy - - pt1, pt2, pt3 = calcQuadraticPoints((a1x, a1y), (b1x, b1y), (c1x, c1y)) - segments.append((pt1, pt2, pt3)) - return segments - - -def _splitCubicAtT(a, b, c, d, *ts): - ts = list(ts) - ts.insert(0, 0.0) - ts.append(1.0) - segments = [] - ax, ay = a - bx, by = b - cx, cy = c - dx, dy = d - for i in range(len(ts) - 1): - t1 = ts[i] - t2 = ts[i + 1] - delta = t2 - t1 - - delta_2 = delta * delta - delta_3 = delta * delta_2 - t1_2 = t1 * t1 - t1_3 = t1 * t1_2 - - # calc new a, b, c and d - a1x = ax * delta_3 - a1y = ay * delta_3 - b1x = (3 * ax * t1 + bx) * delta_2 - b1y = (3 * ay * t1 + by) * delta_2 - c1x = (2 * bx * t1 + cx + 3 * ax * t1_2) * delta - c1y = (2 * by * t1 + cy + 3 * ay * t1_2) * delta - d1x = ax * t1_3 + bx * t1_2 + cx * t1 + dx - d1y = ay * t1_3 + by * t1_2 + cy * t1 + dy - pt1, pt2, pt3, pt4 = calcCubicPoints( - (a1x, a1y), (b1x, b1y), (c1x, c1y), (d1x, d1y) - ) - segments.append((pt1, pt2, pt3, pt4)) - return segments - - -@cython.locals( - a=cython.complex, - b=cython.complex, - c=cython.complex, - d=cython.complex, - t1=cython.double, - t2=cython.double, - delta=cython.double, - delta_2=cython.double, - delta_3=cython.double, - a1=cython.complex, - b1=cython.complex, - c1=cython.complex, - d1=cython.complex, -) -def _splitCubicAtTC(a, b, c, d, *ts): - ts = list(ts) - ts.insert(0, 0.0) - ts.append(1.0) - for i in range(len(ts) - 1): - t1 = ts[i] - t2 = ts[i + 1] - delta = t2 - t1 - - delta_2 = delta * delta - delta_3 = delta * delta_2 - t1_2 = t1 * t1 - t1_3 = t1 * t1_2 - - # calc new a, b, c and d - a1 = a * delta_3 - b1 = (3 * a * t1 + b) * delta_2 - c1 = (2 * b * t1 + c + 3 * a * t1_2) * delta - d1 = a * t1_3 + b * t1_2 + c * t1 + d - pt1, pt2, pt3, pt4 = calcCubicPointsC(a1, b1, c1, d1) - yield (pt1, pt2, pt3, pt4) - - -# -# Equation solvers. 
-# - -from math import sqrt, acos, cos, pi - - -def solveQuadratic(a, b, c, sqrt=sqrt): - """Solve a quadratic equation. - - Solves *a*x*x + b*x + c = 0* where a, b and c are real. - - Args: - a: coefficient of *x²* - b: coefficient of *x* - c: constant term - - Returns: - A list of roots. Note that the returned list is neither guaranteed to - be sorted nor to contain unique values! - """ - if abs(a) < epsilon: - if abs(b) < epsilon: - # We have a non-equation; therefore, we have no valid solution - roots = [] - else: - # We have a linear equation with 1 root. - roots = [-c / b] - else: - # We have a true quadratic equation. Apply the quadratic formula to find two roots. - DD = b * b - 4.0 * a * c - if DD >= 0.0: - rDD = sqrt(DD) - roots = [(-b + rDD) / 2.0 / a, (-b - rDD) / 2.0 / a] - else: - # complex roots, ignore - roots = [] - return roots - - -def solveCubic(a, b, c, d): - """Solve a cubic equation. - - Solves *a*x*x*x + b*x*x + c*x + d = 0* where a, b, c and d are real. - - Args: - a: coefficient of *x³* - b: coefficient of *x²* - c: coefficient of *x* - d: constant term - - Returns: - A list of roots. Note that the returned list is neither guaranteed to - be sorted nor to contain unique values! - - Examples:: - - >>> solveCubic(1, 1, -6, 0) - [-3.0, -0.0, 2.0] - >>> solveCubic(-10.0, -9.0, 48.0, -29.0) - [-2.9, 1.0, 1.0] - >>> solveCubic(-9.875, -9.0, 47.625, -28.75) - [-2.911392, 1.0, 1.0] - >>> solveCubic(1.0, -4.5, 6.75, -3.375) - [1.5, 1.5, 1.5] - >>> solveCubic(-12.0, 18.0, -9.0, 1.50023651123) - [0.5, 0.5, 0.5] - >>> solveCubic( - ... 9.0, 0.0, 0.0, -7.62939453125e-05 - ... ) == [-0.0, -0.0, -0.0] - True - """ - # - # adapted from: - # CUBIC.C - Solve a cubic polynomial - # public domain by Ross Cottrell - # found at: http://www.strangecreations.com/library/snippets/Cubic.C - # - if abs(a) < epsilon: - # don't just test for zero; for very small values of 'a' solveCubic() - # returns unreliable results, so we fall back to quad. - return solveQuadratic(b, c, d) - a = float(a) - a1 = b / a - a2 = c / a - a3 = d / a - - Q = (a1 * a1 - 3.0 * a2) / 9.0 - R = (2.0 * a1 * a1 * a1 - 9.0 * a1 * a2 + 27.0 * a3) / 54.0 - - R2 = R * R - Q3 = Q * Q * Q - R2 = 0 if R2 < epsilon else R2 - Q3 = 0 if abs(Q3) < epsilon else Q3 - - R2_Q3 = R2 - Q3 - - if R2 == 0.0 and Q3 == 0.0: - x = round(-a1 / 3.0, epsilonDigits) - return [x, x, x] - elif R2_Q3 <= epsilon * 0.5: - # The epsilon * .5 above ensures that Q3 is not zero. 
- theta = acos(max(min(R / sqrt(Q3), 1.0), -1.0)) - rQ2 = -2.0 * sqrt(Q) - a1_3 = a1 / 3.0 - x0 = rQ2 * cos(theta / 3.0) - a1_3 - x1 = rQ2 * cos((theta + 2.0 * pi) / 3.0) - a1_3 - x2 = rQ2 * cos((theta + 4.0 * pi) / 3.0) - a1_3 - x0, x1, x2 = sorted([x0, x1, x2]) - # Merge roots that are close-enough - if x1 - x0 < epsilon and x2 - x1 < epsilon: - x0 = x1 = x2 = round((x0 + x1 + x2) / 3.0, epsilonDigits) - elif x1 - x0 < epsilon: - x0 = x1 = round((x0 + x1) / 2.0, epsilonDigits) - x2 = round(x2, epsilonDigits) - elif x2 - x1 < epsilon: - x0 = round(x0, epsilonDigits) - x1 = x2 = round((x1 + x2) / 2.0, epsilonDigits) - else: - x0 = round(x0, epsilonDigits) - x1 = round(x1, epsilonDigits) - x2 = round(x2, epsilonDigits) - return [x0, x1, x2] - else: - x = pow(sqrt(R2_Q3) + abs(R), 1 / 3.0) - x = x + Q / x - if R >= 0.0: - x = -x - x = round(x - a1 / 3.0, epsilonDigits) - return [x] - - -# -# Conversion routines for points to parameters and vice versa -# - - -def calcQuadraticParameters(pt1, pt2, pt3): - x2, y2 = pt2 - x3, y3 = pt3 - cx, cy = pt1 - bx = (x2 - cx) * 2.0 - by = (y2 - cy) * 2.0 - ax = x3 - cx - bx - ay = y3 - cy - by - return (ax, ay), (bx, by), (cx, cy) - - -def calcCubicParameters(pt1, pt2, pt3, pt4): - x2, y2 = pt2 - x3, y3 = pt3 - x4, y4 = pt4 - dx, dy = pt1 - cx = (x2 - dx) * 3.0 - cy = (y2 - dy) * 3.0 - bx = (x3 - x2) * 3.0 - cx - by = (y3 - y2) * 3.0 - cy - ax = x4 - dx - cx - bx - ay = y4 - dy - cy - by - return (ax, ay), (bx, by), (cx, cy), (dx, dy) - - -@cython.cfunc -@cython.inline -@cython.locals( - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, - pt4=cython.complex, - a=cython.complex, - b=cython.complex, - c=cython.complex, -) -def calcCubicParametersC(pt1, pt2, pt3, pt4): - c = (pt2 - pt1) * 3.0 - b = (pt3 - pt2) * 3.0 - c - a = pt4 - pt1 - c - b - return (a, b, c, pt1) - - -def calcQuadraticPoints(a, b, c): - ax, ay = a - bx, by = b - cx, cy = c - x1 = cx - y1 = cy - x2 = (bx * 0.5) + cx - y2 = (by * 0.5) + cy - x3 = ax + bx + cx - y3 = ay + by + cy - return (x1, y1), (x2, y2), (x3, y3) - - -def calcCubicPoints(a, b, c, d): - ax, ay = a - bx, by = b - cx, cy = c - dx, dy = d - x1 = dx - y1 = dy - x2 = (cx / 3.0) + dx - y2 = (cy / 3.0) + dy - x3 = (bx + cx) / 3.0 + x2 - y3 = (by + cy) / 3.0 + y2 - x4 = ax + dx + cx + bx - y4 = ay + dy + cy + by - return (x1, y1), (x2, y2), (x3, y3), (x4, y4) - - -@cython.cfunc -@cython.inline -@cython.locals( - a=cython.complex, - b=cython.complex, - c=cython.complex, - d=cython.complex, - p2=cython.complex, - p3=cython.complex, - p4=cython.complex, -) -def calcCubicPointsC(a, b, c, d): - p2 = c * (1 / 3) + d - p3 = (b + c) * (1 / 3) + p2 - p4 = a + b + c + d - return (d, p2, p3, p4) - - -# -# Point at time -# - - -def linePointAtT(pt1, pt2, t): - """Finds the point at time `t` on a line. - - Args: - pt1, pt2: Coordinates of the line as 2D tuples. - t: The time along the line. - - Returns: - A 2D tuple with the coordinates of the point. - """ - return ((pt1[0] * (1 - t) + pt2[0] * t), (pt1[1] * (1 - t) + pt2[1] * t)) - - -def quadraticPointAtT(pt1, pt2, pt3, t): - """Finds the point at time `t` on a quadratic curve. - - Args: - pt1, pt2, pt3: Coordinates of the curve as 2D tuples. - t: The time along the curve. - - Returns: - A 2D tuple with the coordinates of the point. 
- """ - x = (1 - t) * (1 - t) * pt1[0] + 2 * (1 - t) * t * pt2[0] + t * t * pt3[0] - y = (1 - t) * (1 - t) * pt1[1] + 2 * (1 - t) * t * pt2[1] + t * t * pt3[1] - return (x, y) - - -def cubicPointAtT(pt1, pt2, pt3, pt4, t): - """Finds the point at time `t` on a cubic curve. - - Args: - pt1, pt2, pt3, pt4: Coordinates of the curve as 2D tuples. - t: The time along the curve. - - Returns: - A 2D tuple with the coordinates of the point. - """ - t2 = t * t - _1_t = 1 - t - _1_t_2 = _1_t * _1_t - x = ( - _1_t_2 * _1_t * pt1[0] - + 3 * (_1_t_2 * t * pt2[0] + _1_t * t2 * pt3[0]) - + t2 * t * pt4[0] - ) - y = ( - _1_t_2 * _1_t * pt1[1] - + 3 * (_1_t_2 * t * pt2[1] + _1_t * t2 * pt3[1]) - + t2 * t * pt4[1] - ) - return (x, y) - - -@cython.returns(cython.complex) -@cython.locals( - t=cython.double, - pt1=cython.complex, - pt2=cython.complex, - pt3=cython.complex, - pt4=cython.complex, -) -@cython.locals(t2=cython.double, _1_t=cython.double, _1_t_2=cython.double) -def cubicPointAtTC(pt1, pt2, pt3, pt4, t): - """Finds the point at time `t` on a cubic curve. - - Args: - pt1, pt2, pt3, pt4: Coordinates of the curve as complex numbers. - t: The time along the curve. - - Returns: - A complex number with the coordinates of the point. - """ - t2 = t * t - _1_t = 1 - t - _1_t_2 = _1_t * _1_t - return _1_t_2 * _1_t * pt1 + 3 * (_1_t_2 * t * pt2 + _1_t * t2 * pt3) + t2 * t * pt4 - - -def segmentPointAtT(seg, t): - if len(seg) == 2: - return linePointAtT(*seg, t) - elif len(seg) == 3: - return quadraticPointAtT(*seg, t) - elif len(seg) == 4: - return cubicPointAtT(*seg, t) - raise ValueError("Unknown curve degree") - - -# -# Intersection finders -# - - -def _line_t_of_pt(s, e, pt): - sx, sy = s - ex, ey = e - px, py = pt - if abs(sx - ex) < epsilon and abs(sy - ey) < epsilon: - # Line is a point! - return -1 - # Use the largest - if abs(sx - ex) > abs(sy - ey): - return (px - sx) / (ex - sx) - else: - return (py - sy) / (ey - sy) - - -def _both_points_are_on_same_side_of_origin(a, b, origin): - xDiff = (a[0] - origin[0]) * (b[0] - origin[0]) - yDiff = (a[1] - origin[1]) * (b[1] - origin[1]) - return not (xDiff <= 0.0 and yDiff <= 0.0) - - -def lineLineIntersections(s1, e1, s2, e2): - """Finds intersections between two line segments. - - Args: - s1, e1: Coordinates of the first line as 2D tuples. - s2, e2: Coordinates of the second line as 2D tuples. - - Returns: - A list of ``Intersection`` objects, each object having ``pt``, ``t1`` - and ``t2`` attributes containing the intersection point, time on first - segment and time on second segment respectively. 
- - Examples:: - - >>> a = lineLineIntersections( (310,389), (453, 222), (289, 251), (447, 367)) - >>> len(a) - 1 - >>> intersection = a[0] - >>> intersection.pt - (374.44882952482897, 313.73458370177315) - >>> (intersection.t1, intersection.t2) - (0.45069111555824465, 0.5408153767394238) - """ - s1x, s1y = s1 - e1x, e1y = e1 - s2x, s2y = s2 - e2x, e2y = e2 - if ( - math.isclose(s2x, e2x) and math.isclose(s1x, e1x) and not math.isclose(s1x, s2x) - ): # Parallel vertical - return [] - if ( - math.isclose(s2y, e2y) and math.isclose(s1y, e1y) and not math.isclose(s1y, s2y) - ): # Parallel horizontal - return [] - if math.isclose(s2x, e2x) and math.isclose(s2y, e2y): # Line segment is tiny - return [] - if math.isclose(s1x, e1x) and math.isclose(s1y, e1y): # Line segment is tiny - return [] - if math.isclose(e1x, s1x): - x = s1x - slope34 = (e2y - s2y) / (e2x - s2x) - y = slope34 * (x - s2x) + s2y - pt = (x, y) - return [ - Intersection( - pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) - ) - ] - if math.isclose(s2x, e2x): - x = s2x - slope12 = (e1y - s1y) / (e1x - s1x) - y = slope12 * (x - s1x) + s1y - pt = (x, y) - return [ - Intersection( - pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) - ) - ] - - slope12 = (e1y - s1y) / (e1x - s1x) - slope34 = (e2y - s2y) / (e2x - s2x) - if math.isclose(slope12, slope34): - return [] - x = (slope12 * s1x - s1y - slope34 * s2x + s2y) / (slope12 - slope34) - y = slope12 * (x - s1x) + s1y - pt = (x, y) - if _both_points_are_on_same_side_of_origin( - pt, e1, s1 - ) and _both_points_are_on_same_side_of_origin(pt, s2, e2): - return [ - Intersection( - pt=pt, t1=_line_t_of_pt(s1, e1, pt), t2=_line_t_of_pt(s2, e2, pt) - ) - ] - return [] - - -def _alignment_transformation(segment): - # Returns a transformation which aligns a segment horizontally at the - # origin. Apply this transformation to curves and root-find to find - # intersections with the segment. - start = segment[0] - end = segment[-1] - angle = math.atan2(end[1] - start[1], end[0] - start[0]) - return Identity.rotate(-angle).translate(-start[0], -start[1]) - - -def _curve_line_intersections_t(curve, line): - aligned_curve = _alignment_transformation(line).transformPoints(curve) - if len(curve) == 3: - a, b, c = calcQuadraticParameters(*aligned_curve) - intersections = solveQuadratic(a[1], b[1], c[1]) - elif len(curve) == 4: - a, b, c, d = calcCubicParameters(*aligned_curve) - intersections = solveCubic(a[1], b[1], c[1], d[1]) - else: - raise ValueError("Unknown curve degree") - return sorted(i for i in intersections if 0.0 <= i <= 1) - - -def curveLineIntersections(curve, line): - """Finds intersections between a curve and a line. - - Args: - curve: List of coordinates of the curve segment as 2D tuples. - line: List of coordinates of the line segment as 2D tuples. - - Returns: - A list of ``Intersection`` objects, each object having ``pt``, ``t1`` - and ``t2`` attributes containing the intersection point, time on first - segment and time on second segment respectively. 
- - Examples:: - >>> curve = [ (100, 240), (30, 60), (210, 230), (160, 30) ] - >>> line = [ (25, 260), (230, 20) ] - >>> intersections = curveLineIntersections(curve, line) - >>> len(intersections) - 3 - >>> intersections[0].pt - (84.9000930760723, 189.87306176459828) - """ - if len(curve) == 3: - pointFinder = quadraticPointAtT - elif len(curve) == 4: - pointFinder = cubicPointAtT - else: - raise ValueError("Unknown curve degree") - intersections = [] - for t in _curve_line_intersections_t(curve, line): - pt = pointFinder(*curve, t) - # Back-project the point onto the line, to avoid problems with - # numerical accuracy in the case of vertical and horizontal lines - line_t = _line_t_of_pt(*line, pt) - pt = linePointAtT(*line, line_t) - intersections.append(Intersection(pt=pt, t1=t, t2=line_t)) - return intersections - - -def _curve_bounds(c): - if len(c) == 3: - return calcQuadraticBounds(*c) - elif len(c) == 4: - return calcCubicBounds(*c) - raise ValueError("Unknown curve degree") - - -def _split_segment_at_t(c, t): - if len(c) == 2: - s, e = c - midpoint = linePointAtT(s, e, t) - return [(s, midpoint), (midpoint, e)] - if len(c) == 3: - return splitQuadraticAtT(*c, t) - elif len(c) == 4: - return splitCubicAtT(*c, t) - raise ValueError("Unknown curve degree") - - -def _curve_curve_intersections_t( - curve1, curve2, precision=1e-3, range1=None, range2=None -): - bounds1 = _curve_bounds(curve1) - bounds2 = _curve_bounds(curve2) - - if not range1: - range1 = (0.0, 1.0) - if not range2: - range2 = (0.0, 1.0) - - # If bounds don't intersect, go home - intersects, _ = sectRect(bounds1, bounds2) - if not intersects: - return [] - - def midpoint(r): - return 0.5 * (r[0] + r[1]) - - # If they do overlap but they're tiny, approximate - if rectArea(bounds1) < precision and rectArea(bounds2) < precision: - return [(midpoint(range1), midpoint(range2))] - - c11, c12 = _split_segment_at_t(curve1, 0.5) - c11_range = (range1[0], midpoint(range1)) - c12_range = (midpoint(range1), range1[1]) - - c21, c22 = _split_segment_at_t(curve2, 0.5) - c21_range = (range2[0], midpoint(range2)) - c22_range = (midpoint(range2), range2[1]) - - found = [] - found.extend( - _curve_curve_intersections_t( - c11, c21, precision, range1=c11_range, range2=c21_range - ) - ) - found.extend( - _curve_curve_intersections_t( - c12, c21, precision, range1=c12_range, range2=c21_range - ) - ) - found.extend( - _curve_curve_intersections_t( - c11, c22, precision, range1=c11_range, range2=c22_range - ) - ) - found.extend( - _curve_curve_intersections_t( - c12, c22, precision, range1=c12_range, range2=c22_range - ) - ) - - unique_key = lambda ts: (int(ts[0] / precision), int(ts[1] / precision)) - seen = set() - unique_values = [] - - for ts in found: - key = unique_key(ts) - if key in seen: - continue - seen.add(key) - unique_values.append(ts) - - return unique_values - - -def curveCurveIntersections(curve1, curve2): - """Finds intersections between a curve and a curve. - - Args: - curve1: List of coordinates of the first curve segment as 2D tuples. - curve2: List of coordinates of the second curve segment as 2D tuples. - - Returns: - A list of ``Intersection`` objects, each object having ``pt``, ``t1`` - and ``t2`` attributes containing the intersection point, time on first - segment and time on second segment respectively. 
- - Examples:: - >>> curve1 = [ (10,100), (90,30), (40,140), (220,220) ] - >>> curve2 = [ (5,150), (180,20), (80,250), (210,190) ] - >>> intersections = curveCurveIntersections(curve1, curve2) - >>> len(intersections) - 3 - >>> intersections[0].pt - (81.7831487395506, 109.88904552375288) - """ - intersection_ts = _curve_curve_intersections_t(curve1, curve2) - return [ - Intersection(pt=segmentPointAtT(curve1, ts[0]), t1=ts[0], t2=ts[1]) - for ts in intersection_ts - ] - - -def segmentSegmentIntersections(seg1, seg2): - """Finds intersections between two segments. - - Args: - seg1: List of coordinates of the first segment as 2D tuples. - seg2: List of coordinates of the second segment as 2D tuples. - - Returns: - A list of ``Intersection`` objects, each object having ``pt``, ``t1`` - and ``t2`` attributes containing the intersection point, time on first - segment and time on second segment respectively. - - Examples:: - >>> curve1 = [ (10,100), (90,30), (40,140), (220,220) ] - >>> curve2 = [ (5,150), (180,20), (80,250), (210,190) ] - >>> intersections = segmentSegmentIntersections(curve1, curve2) - >>> len(intersections) - 3 - >>> intersections[0].pt - (81.7831487395506, 109.88904552375288) - >>> curve3 = [ (100, 240), (30, 60), (210, 230), (160, 30) ] - >>> line = [ (25, 260), (230, 20) ] - >>> intersections = segmentSegmentIntersections(curve3, line) - >>> len(intersections) - 3 - >>> intersections[0].pt - (84.9000930760723, 189.87306176459828) - - """ - # Arrange by degree - swapped = False - if len(seg2) > len(seg1): - seg2, seg1 = seg1, seg2 - swapped = True - if len(seg1) > 2: - if len(seg2) > 2: - intersections = curveCurveIntersections(seg1, seg2) - else: - intersections = curveLineIntersections(seg1, seg2) - elif len(seg1) == 2 and len(seg2) == 2: - intersections = lineLineIntersections(*seg1, *seg2) - else: - raise ValueError("Couldn't work out which intersection function to use") - if not swapped: - return intersections - return [Intersection(pt=i.pt, t1=i.t2, t2=i.t1) for i in intersections] - - -def _segmentrepr(obj): - """ - >>> _segmentrepr([1, [2, 3], [], [[2, [3, 4], [0.1, 2.2]]]]) - '(1, (2, 3), (), ((2, (3, 4), (0.1, 2.2))))' - """ - try: - it = iter(obj) - except TypeError: - return "%g" % obj - else: - return "(%s)" % ", ".join(_segmentrepr(x) for x in it) - - -def printSegments(segments): - """Helper for the doctests, displaying each segment in a list of - segments on a single line as a tuple. 
- """ - for segment in segments: - print(_segmentrepr(segment)) - - -if __name__ == "__main__": - import sys - import doctest - - sys.exit(doctest.testmod().failed) diff --git a/spaces/codertoro/gpt-academic/crazy_functions/test_project/cpp/cppipc/queue.h b/spaces/codertoro/gpt-academic/crazy_functions/test_project/cpp/cppipc/queue.h deleted file mode 100644 index a21f3446e06b5826af7b554c8a7d9c5d80848b62..0000000000000000000000000000000000000000 --- a/spaces/codertoro/gpt-academic/crazy_functions/test_project/cpp/cppipc/queue.h +++ /dev/null @@ -1,216 +0,0 @@ -#pragma once - -#include -#include -#include // [[since C++14]]: std::exchange -#include -#include -#include -#include -#include -#include -#include // assert - -#include "libipc/def.h" -#include "libipc/shm.h" -#include "libipc/rw_lock.h" - -#include "libipc/utility/log.h" -#include "libipc/platform/detail.h" -#include "libipc/circ/elem_def.h" - -namespace ipc { -namespace detail { - -class queue_conn { -protected: - circ::cc_t connected_ = 0; - shm::handle elems_h_; - - template - Elems* open(char const * name) { - if (name == nullptr || name[0] == '\0') { - ipc::error("fail open waiter: name is empty!\n"); - return nullptr; - } - if (!elems_h_.acquire(name, sizeof(Elems))) { - return nullptr; - } - auto elems = static_cast(elems_h_.get()); - if (elems == nullptr) { - ipc::error("fail acquire elems: %s\n", name); - return nullptr; - } - elems->init(); - return elems; - } - - void close() { - elems_h_.release(); - } - -public: - queue_conn() = default; - queue_conn(const queue_conn&) = delete; - queue_conn& operator=(const queue_conn&) = delete; - - bool connected() const noexcept { - return connected_ != 0; - } - - circ::cc_t connected_id() const noexcept { - return connected_; - } - - template - auto connect(Elems* elems) noexcept - /*needs 'optional' here*/ - -> std::tuple().cursor())> { - if (elems == nullptr) return {}; - // if it's already connected, just return - if (connected()) return {connected(), false, 0}; - connected_ = elems->connect_receiver(); - return {connected(), true, elems->cursor()}; - } - - template - bool disconnect(Elems* elems) noexcept { - if (elems == nullptr) return false; - // if it's already disconnected, just return false - if (!connected()) return false; - elems->disconnect_receiver(std::exchange(connected_, 0)); - return true; - } -}; - -template -class queue_base : public queue_conn { - using base_t = queue_conn; - -public: - using elems_t = Elems; - using policy_t = typename elems_t::policy_t; - -protected: - elems_t * elems_ = nullptr; - decltype(std::declval().cursor()) cursor_ = 0; - bool sender_flag_ = false; - -public: - using base_t::base_t; - - queue_base() = default; - - explicit queue_base(char const * name) - : queue_base{} { - elems_ = open(name); - } - - explicit queue_base(elems_t * elems) noexcept - : queue_base{} { - assert(elems != nullptr); - elems_ = elems; - } - - /* not virtual */ ~queue_base() { - base_t::close(); - } - - elems_t * elems() noexcept { return elems_; } - elems_t const * elems() const noexcept { return elems_; } - - bool ready_sending() noexcept { - if (elems_ == nullptr) return false; - return sender_flag_ || (sender_flag_ = elems_->connect_sender()); - } - - void shut_sending() noexcept { - if (elems_ == nullptr) return; - if (!sender_flag_) return; - elems_->disconnect_sender(); - } - - bool connect() noexcept { - auto tp = base_t::connect(elems_); - if (std::get<0>(tp) && std::get<1>(tp)) { - cursor_ = std::get<2>(tp); - return true; - } - return 
std::get<0>(tp); - } - - bool disconnect() noexcept { - return base_t::disconnect(elems_); - } - - std::size_t conn_count() const noexcept { - return (elems_ == nullptr) ? static_cast(invalid_value) : elems_->conn_count(); - } - - bool valid() const noexcept { - return elems_ != nullptr; - } - - bool empty() const noexcept { - return !valid() || (cursor_ == elems_->cursor()); - } - - template - bool push(F&& prep, P&&... params) { - if (elems_ == nullptr) return false; - return elems_->push(this, [&](void* p) { - if (prep(p)) ::new (p) T(std::forward
<P>
    (params)...); - }); - } - - template - bool force_push(F&& prep, P&&... params) { - if (elems_ == nullptr) return false; - return elems_->force_push(this, [&](void* p) { - if (prep(p)) ::new (p) T(std::forward
<P>
    (params)...); - }); - } - - template - bool pop(T& item, F&& out) { - if (elems_ == nullptr) { - return false; - } - return elems_->pop(this, &(this->cursor_), [&item](void* p) { - ::new (&item) T(std::move(*static_cast(p))); - }, std::forward(out)); - } -}; - -} // namespace detail - -template -class queue final : public detail::queue_base> { - using base_t = detail::queue_base>; - -public: - using value_t = T; - - using base_t::base_t; - - template - bool push(P&&... params) { - return base_t::template push(std::forward
<P>
    (params)...); - } - - template - bool force_push(P&&... params) { - return base_t::template force_push(std::forward
<P>
    (params)...); - } - - bool pop(T& item) { - return base_t::pop(item, [](bool) {}); - } - - template - bool pop(T& item, F&& out) { - return base_t::pop(item, std::forward(out)); - } -}; - -} // namespace ipc diff --git a/spaces/coffeeee/nsfw-c0ffees-erotic-story-generator/app.py b/spaces/coffeeee/nsfw-c0ffees-erotic-story-generator/app.py deleted file mode 100644 index 1842b2b0687214ac175a5f3269ef7819e9779fed..0000000000000000000000000000000000000000 --- a/spaces/coffeeee/nsfw-c0ffees-erotic-story-generator/app.py +++ /dev/null @@ -1,98 +0,0 @@ -import gradio as gr - -import nltk -import string -from transformers import GPT2LMHeadModel, GPT2Tokenizer, GenerationConfig, set_seed -import random - -nltk.download('punkt') - -response_length = 200 - -sentence_detector = nltk.data.load('tokenizers/punkt/english.pickle') - -tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium") -tokenizer.truncation_side = 'right' - -# model = GPT2LMHeadModel.from_pretrained('checkpoint-10000') -model = GPT2LMHeadModel.from_pretrained('coffeeee/nsfw-story-generator') -generation_config = GenerationConfig.from_pretrained('gpt2-medium') -generation_config.max_new_tokens = response_length -generation_config.pad_token_id = generation_config.eos_token_id -def generate_response(outputs, new_prompt): - - story_so_far = "\n".join(outputs[:int(1024 / response_length + 1)]) if outputs else "" - - set_seed(random.randint(0, 4000000000)) - inputs = tokenizer.encode(story_so_far + "\n" + new_prompt if story_so_far else new_prompt, - return_tensors='pt', truncation=True, - max_length=1024 - response_length) - - output = model.generate(inputs, do_sample=True, generation_config=generation_config) - - response = clean_paragraph(tokenizer.batch_decode(output)[0][(len(story_so_far) + 1 if story_so_far else 0):]) - outputs.append(response) - return { - user_outputs: outputs, - story: (story_so_far + "\n" if story_so_far else "") + response, - prompt: None - } - -def undo(outputs): - - outputs = outputs[:-1] if outputs else [] - return { - user_outputs: outputs, - story: "\n".join(outputs) if outputs else None - } - -def clean_paragraph(entry): - paragraphs = entry.split('\n') - - for i in range(len(paragraphs)): - split_sentences = nltk.tokenize.sent_tokenize(paragraphs[i], language='english') - if i == len(paragraphs) - 1 and split_sentences[:1][-1] not in string.punctuation: - paragraphs[i] = " ".join(split_sentences[:-1]) - - return capitalize_first_char("\n".join(paragraphs)) - -def reset(): - return { - user_outputs: [], - story: None - } - -def capitalize_first_char(entry): - for i in range(len(entry)): - if entry[i].isalpha(): - return entry[:i] + entry[i].upper() + entry[i + 1:] - return entry - -with gr.Blocks(theme=gr.themes.Default(text_size='lg', font=[gr.themes.GoogleFont("Bitter"), "Arial", "sans-serif"])) as demo: - - placeholder_text = ''' - Disclaimer: everything this model generates is a work of fiction. - Content from this model WILL generate inappropriate and potentially offensive content. - - Use at your own discretion. 
Please respect the Huggingface code of conduct.''' - - story = gr.Textbox(label="Story", interactive=False, lines=20, placeholder=placeholder_text) - story.style(show_copy_button=True) - - user_outputs = gr.State([]) - - prompt = gr.Textbox(label="Prompt", placeholder="Start a new story, or continue your current one!", lines=3, max_lines=3) - - with gr.Row(): - gen_button = gr.Button('Generate') - undo_button = gr.Button("Undo") - res_button = gr.Button("Reset") - - prompt.submit(generate_response, [user_outputs, prompt], [user_outputs, story, prompt], scroll_to_output=True) - gen_button.click(generate_response, [user_outputs, prompt], [user_outputs, story, prompt], scroll_to_output=True) - undo_button.click(undo, user_outputs, [user_outputs, story], scroll_to_output=True) - res_button.click(reset, [], [user_outputs, story], scroll_to_output=True) - -demo.launch(inbrowser=True) - - diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h2645_parse.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h2645_parse.h deleted file mode 100644 index 787ce971ee4dc0d3ce5fa4ca775aedd7dc806499..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h2645_parse.h +++ /dev/null @@ -1,139 +0,0 @@ -/* - * H.264/HEVC common parsing code - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_H2645_PARSE_H -#define AVCODEC_H2645_PARSE_H - -#include - -#include "libavutil/buffer.h" -#include "libavutil/error.h" -#include "libavutil/log.h" -#include "codec_id.h" -#include "get_bits.h" - -#define MAX_MBPAIR_SIZE (256*1024) // a tighter bound could be calculated if someone cares about a few bytes - -typedef struct H2645NAL { - const uint8_t *data; - int size; - - /** - * Size, in bits, of just the data, excluding the stop bit and any trailing - * padding. I.e. what HEVC calls SODB. - */ - int size_bits; - - int raw_size; - const uint8_t *raw_data; - - GetBitContext gb; - - /** - * NAL unit type - */ - int type; - - /** - * H.264 only, nal_ref_idc - */ - int ref_idc; - - /** - * HEVC only, nuh_temporal_id_plus_1 - 1 - */ - int temporal_id; - - /* - * HEVC only, identifier of layer to which nal unit belongs - */ - int nuh_layer_id; - - int skipped_bytes; - int skipped_bytes_pos_size; - int *skipped_bytes_pos; -} H2645NAL; - -typedef struct H2645RBSP { - uint8_t *rbsp_buffer; - AVBufferRef *rbsp_buffer_ref; - int rbsp_buffer_alloc_size; - int rbsp_buffer_size; -} H2645RBSP; - -/* an input packet split into unescaped NAL units */ -typedef struct H2645Packet { - H2645NAL *nals; - H2645RBSP rbsp; - int nb_nals; - int nals_allocated; - unsigned nal_buffer_size; -} H2645Packet; - -/** - * Extract the raw (unescaped) bitstream. 
- */ -int ff_h2645_extract_rbsp(const uint8_t *src, int length, H2645RBSP *rbsp, - H2645NAL *nal, int small_padding); - -/** - * Split an input packet into NAL units. - * - * If data == raw_data holds true for a NAL unit of the returned pkt, then - * said NAL unit does not contain any emulation_prevention_three_byte and - * the data is contained in the input buffer pointed to by buf. - * Otherwise, the unescaped data is part of the rbsp_buffer described by the - * packet's H2645RBSP. - * - * If the packet's rbsp_buffer_ref is not NULL, the underlying AVBuffer must - * own rbsp_buffer. If not and rbsp_buffer is not NULL, use_ref must be 0. - * If use_ref is set, rbsp_buffer will be reference-counted and owned by - * the underlying AVBuffer of rbsp_buffer_ref. - */ -int ff_h2645_packet_split(H2645Packet *pkt, const uint8_t *buf, int length, - void *logctx, int is_nalff, int nal_length_size, - enum AVCodecID codec_id, int small_padding, int use_ref); - -/** - * Free all the allocated memory in the packet. - */ -void ff_h2645_packet_uninit(H2645Packet *pkt); - -static inline int get_nalsize(int nal_length_size, const uint8_t *buf, - int buf_size, int *buf_index, void *logctx) -{ - int i, nalsize = 0; - - if (*buf_index >= buf_size - nal_length_size) { - // the end of the buffer is reached, refill it - return AVERROR(EAGAIN); - } - - for (i = 0; i < nal_length_size; i++) - nalsize = ((unsigned)nalsize << 8) | buf[(*buf_index)++]; - if (nalsize <= 0 || nalsize > buf_size - *buf_index) { - av_log(logctx, AV_LOG_ERROR, - "Invalid NAL unit size (%d > %d).\n", nalsize, buf_size - *buf_index); - return AVERROR_INVALIDDATA; - } - return nalsize; -} - -#endif /* AVCODEC_H2645_PARSE_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/jpeglsdec.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/jpeglsdec.h deleted file mode 100644 index 0cafaba7a488efc0cc81322871f07d2b3fa4dd83..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/jpeglsdec.h +++ /dev/null @@ -1,42 +0,0 @@ -/* - * JPEG-LS decoder - * Copyright (c) 2003 Michael Niedermayer - * Copyright (c) 2006 Konstantin Shishkov - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * JPEG-LS decoder. 
- */ - -#ifndef AVCODEC_JPEGLSDEC_H -#define AVCODEC_JPEGLSDEC_H - -#include "mjpeg.h" -#include "mjpegdec.h" - -/** - * Decode LSE block with initialization parameters - */ -int ff_jpegls_decode_lse(MJpegDecodeContext *s); - -int ff_jpegls_decode_picture(MJpegDecodeContext *s, int near, - int point_transform, int ilv); - -#endif /* AVCODEC_JPEGLSDEC_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/me_cmp_init_mips.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/me_cmp_init_mips.c deleted file mode 100644 index 90b8b912564216916425e8fd3ad97c169f7852fa..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/me_cmp_init_mips.c +++ /dev/null @@ -1,53 +0,0 @@ -/* - * Copyright (c) 2015 Parag Salasakar (Parag.Salasakar@imgtec.com) - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "libavutil/attributes.h" -#include "libavutil/mips/cpu.h" -#include "me_cmp_mips.h" - -av_cold void ff_me_cmp_init_mips(MECmpContext *c, AVCodecContext *avctx) -{ - int cpu_flags = av_get_cpu_flags(); - - if (have_msa(cpu_flags)) { -#if BIT_DEPTH == 8 - c->pix_abs[0][0] = ff_pix_abs16_msa; - c->pix_abs[0][1] = ff_pix_abs16_x2_msa; - c->pix_abs[0][2] = ff_pix_abs16_y2_msa; - c->pix_abs[0][3] = ff_pix_abs16_xy2_msa; - c->pix_abs[1][0] = ff_pix_abs8_msa; - c->pix_abs[1][1] = ff_pix_abs8_x2_msa; - c->pix_abs[1][2] = ff_pix_abs8_y2_msa; - c->pix_abs[1][3] = ff_pix_abs8_xy2_msa; - - c->hadamard8_diff[0] = ff_hadamard8_diff16_msa; - c->hadamard8_diff[1] = ff_hadamard8_diff8x8_msa; - - c->hadamard8_diff[4] = ff_hadamard8_intra16_msa; - c->hadamard8_diff[5] = ff_hadamard8_intra8x8_msa; - - c->sad[0] = ff_pix_abs16_msa; - c->sad[1] = ff_pix_abs8_msa; - c->sse[0] = ff_sse16_msa; - c->sse[1] = ff_sse8_msa; - c->sse[2] = ff_sse4_msa; -#endif - } -} diff --git a/spaces/congsaPfin/Manga-OCR/logs/Car Parking Multiplayer APK 4.8.9.4.4 - Drive and Park in Various Scenarios with Multiplayer Mode and Realistic Physics.md b/spaces/congsaPfin/Manga-OCR/logs/Car Parking Multiplayer APK 4.8.9.4.4 - Drive and Park in Various Scenarios with Multiplayer Mode and Realistic Physics.md deleted file mode 100644 index e84f7a8981488779cd469a67a770a22058777d2e..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Car Parking Multiplayer APK 4.8.9.4.4 - Drive and Park in Various Scenarios with Multiplayer Mode and Realistic Physics.md +++ /dev/null @@ -1,90 +0,0 @@ -
    -

    Car Parking Multiplayer APK 4.8 9.4 1: A Realistic and Fun Driving Simulation Game

    -

Do you love driving and parking cars? Do you want to experience a realistic and fun driving simulation game that offers an exciting multiplayer mode, challenging levels, and a realistic physics engine? If yes, then you should try Car Parking Multiplayer APK 4.8 9.4 1, the latest version of the popular Car Parking Multiplayer game for Android devices.

    -

    Introduction

    -

In this article, we will tell you everything you need to know about Car Parking Multiplayer APK 4.8 9.4 1, including what it is, what its features are, how to download and install it, and how to play it. We will also answer some frequently asked questions about the game at the end of the article.

    -

    car parking multiplayer apk 4.8 9.4 1


    Download Zip ••• https://urlca.com/2uOfqX



    -

    What is Car Parking Multiplayer?

    -

    Car Parking Multiplayer is an engaging and realistic driving and parking simulation game that was developed by olzhass, a Turkish game developer. The game was first released in 2017 and has since gained millions of fans around the world. The game is available for both Android and iOS devices, but in this article, we will focus on the Android version.

    -

    Car Parking Multiplayer APK 4.8 9.4 1 is the latest version of the game that was updated on June 21, 2023. It has a file size of about 300 MB and requires Android 5.0 or higher to run. The game has over 100 million downloads on Google Play Store and has a rating of 4.5 out of 5 stars.

    -

    What are the features of Car Parking Multiplayer?

    -

    Car Parking Multiplayer has many features that make it one of the best driving and parking simulation games on the market. Some of these features are:

    -
      -
    • Multiplayer mode: You can play with your friends or other players online in real time. You can chat with them, exchange cars, race with them, or join them in free roam mode.
    • -
    • Different modes and levels: You can choose from different modes such as classic parking, drift parking, free roam, or racing. You can also choose from different levels of difficulty ranging from easy to hard.
    • -
    • Realistic physics engine: The game has a realistic physics engine that simulates the behavior of real cars. You can feel the weight, speed, acceleration, braking, steering, suspension, and traction of your car.
    • -
    • Various cars and customization options: The game has over 100 cars to choose from, including sports cars, trucks, buses, SUVs, and more. You can also customize your car with different colors, stickers, wheels, spoilers, exhausts, and more.
    • -
    • Open world environment: The game has a large open world environment that you can explore freely. You can drive around different cities, towns, villages, highways, deserts, forests, mountains, and more.
    • -
    • Realistic sounds and graphics: The game has realistic sounds and graphics that enhance the immersion and realism of the game. You can hear the engine sound, horn sound, tire sound, brake sound, etc. You can also see the details of your car, the environment, the weather effects, the shadows, etc.
    • -
    -

How to download and install Car Parking Multiplayer APK 4.8 9.4 1?

    -

    If you want to enjoy the latest version of Car Parking Multiplayer, you need to download and install the APK file on your Android device. Here are the steps to do so:

    -

    Step 1: Download the APK file from a trusted source

    -

    The first step is to download the APK file from a trusted source. You can use the link below to download the file from our website. The file is safe and virus-free, and we have tested it ourselves.

    -

    Download Car Parking Multiplayer APK 4.8 9.4 1 here

    -

    Step 2: Enable unknown sources on your device

    -

    The next step is to enable unknown sources on your device. This will allow you to install apps that are not from the Google Play Store. To do this, follow these steps:

    -


    -
      -
    • Go to your device settings and tap on security or privacy.
    • -
    • Find the option that says unknown sources or install unknown apps and toggle it on.
    • -
    • A warning message will pop up, but you can ignore it and tap on OK.
    • -
    -

    Step 3: Install the APK file and launch the game

    -

The final step is to install the APK file and launch the game. To do this, follow these steps (a command-line alternative using adb is sketched after the list):

    -
      -
    • Locate the downloaded APK file on your device storage and tap on it.
    • -
    • A confirmation message will appear, but you can ignore it and tap on install.
    • -
    • Wait for the installation process to finish and then tap on open.
    • -
    • The game will launch and you can start playing it.
    • -
    -
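If you prefer to install from a computer rather than tapping through the steps above, the small Python sketch below simply shells out to adb. It is only an illustration: it assumes adb is installed on the computer, USB debugging is enabled on the phone, and the APK file name shown here is a placeholder for whatever file you actually downloaded.

    # Hypothetical sketch: sideload a downloaded APK over USB with adb.
    # Assumes adb is on PATH and USB debugging is enabled on the device.
    import subprocess

    apk_path = "car-parking-multiplayer.apk"  # placeholder file name

    # "adb install -r" installs or reinstalls the package, keeping app data.
    result = subprocess.run(["adb", "install", "-r", apk_path],
                            capture_output=True, text=True)
    print(result.stdout or result.stderr)

If the command prints "Success", the game will appear in the app drawer just as it would after the on-device steps.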

    How to play Car Parking Multiplayer?

    -

    Now that you have downloaded and installed Car Parking Multiplayer APK 4.8 9.4 1, you might be wondering how to play it. Here are some tips and tricks to help you out:

    -

    Choose your mode and level

    -

    The first thing you need to do is choose your mode and level. You can access the mode selection screen by tapping on the menu icon on the top left corner of the screen. You can choose from four modes: classic parking, drift parking, free roam, or racing. Each mode has different objectives and challenges.

    -

    You can also choose your level by tapping on the level icon on the top right corner of the screen. You can choose from different levels of difficulty ranging from easy to hard. Each level has different scenarios and parking spots.

    -

    Control your car and park it correctly

    -

    The next thing you need to do is control your car and park it correctly. You can use the virtual buttons on the screen to steer, accelerate, brake, reverse, or change gears. You can also use the tilt option or the steering wheel option if you prefer.

    -

    You need to follow the arrows and signs on the road to find your parking spot. You need to park your car within the marked area without hitting any obstacles or other cars. You need to park your car as fast as possible without damaging it.

    -

    Customize your car and interact with other players

    -

    The last thing you need to do is customize your car and interact with other players. You can access the customization screen by tapping on the garage icon on the bottom left corner of the screen. You can customize your car with different colors, stickers, wheels, spoilers, exhausts, and more.

    -

    You can also interact with other players by tapping on the multiplayer icon on the bottom right corner of the screen. You can chat with them, exchange cars, race with them, or join them in free roam mode.

    -

    Conclusion

    -

Car Parking Multiplayer APK 4.8 9.4 1 is a realistic and fun driving simulation game that offers an exciting multiplayer mode, challenging levels, and a realistic physics engine. You can download and install it easily on your Android device and enjoy driving and parking various cars in different environments.

    -

    Why should you play Car Parking Multiplayer?

    -

    You should play Car Parking Multiplayer because it is a game that will test your driving skills, improve your parking skills, entertain you with its graphics and sounds, and connect you with other players online. It is a game that will keep you hooked for hours and make you feel like a real driver.

    -

    FAQs

    -
      -
    • Q: Is Car Parking Multiplayer free?
    • -
    • A: Yes, Car Parking Multiplayer is free to download and play.
    • Q: How can I get more coins in Car Parking Multiplayer?
    • -
    • A: You can get more coins in Car Parking Multiplayer by completing levels, watching ads, or buying them with real money.
    • -
    • Q: How can I change the camera view in Car Parking Multiplayer?
    • -
    • A: You can change the camera view in Car Parking Multiplayer by tapping on the camera icon on the top center of the screen. You can choose from different views such as first-person, third-person, top-down, or rear-view.
    • -
    • Q: How can I report a bug or a problem in Car Parking Multiplayer?
    • -
• A: You can report a bug or a problem in Car Parking Multiplayer by contacting the developer through their email address: olzhass@gmail.com. You can also leave feedback or a review on the Google Play Store.
    • -
    • Q: How can I update Car Parking Multiplayer to the latest version?
    • -
    • A: You can update Car Parking Multiplayer to the latest version by visiting the Google Play Store and tapping on the update button. You can also download the latest APK file from our website and install it manually.
    • -

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Dumpper Jumpstart for Free A Portable Software for Wireless Network Management and Security.md b/spaces/congsaPfin/Manga-OCR/logs/Download Dumpper Jumpstart for Free A Portable Software for Wireless Network Management and Security.md deleted file mode 100644 index 265c9a2868ee49e5eec2993a490cca06b2dcf9b4..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Dumpper Jumpstart for Free A Portable Software for Wireless Network Management and Security.md +++ /dev/null @@ -1,104 +0,0 @@ - -

    Dumpper Jumpstart Download: How to Hack Wi-Fi Passwords with Ease

    -

    Have you ever wanted to access a Wi-Fi network that is password-protected, but you don't know the password? Maybe you are in a public place, like a cafe or a hotel, and you need to connect to the internet for some reason. Or maybe you are curious about what your neighbors are doing online, and you want to spy on them. Whatever your motive, there is a way to hack Wi-Fi passwords using a software called dumpper jumpstart.

    -

    dumpper jumpstart download


    Download File ○○○ https://urlca.com/2uOaDU



    -

    Dumpper jumpstart is a free and portable software that allows you to scan and manage wireless networks in Windows. It also incorporates several methods to show and check some security flaws in the WPS protocol, which is used by many routers to set up Wi-Fi connections. By exploiting these flaws, you can crack the WPA or WPA2 password of any Wi-Fi network within minutes.

    -

    In this article, we will show you how to download and install dumpper jumpstart, how to use it to hack Wi-Fi passwords, and how to secure your own Wi-Fi network from hackers. Let's get started!

    -

    How to Download and Install Dumpper Jumpstart

    -

    Before you can use dumpper jumpstart, you need to download and install it on your computer. Here are the steps:

    -
      -
    1. Go to this website and download the latest version of dumpper jumpstart. It is a zip file that contains two files: Dumpper v.91.2.rar and JumpStart + WinPcap.rar.
    2. -
    3. Extract the zip file using a program like WinRAR or 7-Zip. You will get two folders: Dumpper v.91.2 and JumpStart + WinPcap.
    4. -
    5. Open the Dumpper v.91.2 folder and run Dumpper.exe as administrator. This will launch the dumpper jumpstart software.
    6. -
    7. Open the JumpStart + WinPcap folder and run setup.exe as administrator. This will install two programs: JumpStart, which is used to connect to wireless networks, and WinPcap, which is used to capture network packets.
    8. -
    9. Restart your computer after the installation is complete.
    10. -
    -

    Congratulations! You have successfully installed dumpper jumpstart on your computer. Now you can use it to scan and hack Wi-Fi networks.

    -

    How to Use Dumpper Jumpstart to Hack Wi-Fi Passwords

    -

    Now that you have installed dumpper jumpstart, you can use it to hack Wi-Fi passwords. Here are the steps:

    -
      -
    1. Run Dumpper.exe as administrator and go to the WPS tab. This tab shows you all the wireless networks that are within range of your computer.
    2. -
    3. Select a network that has WPS enabled and has a green lock icon next to it. This means that the network is vulnerable to WPS attacks.
    4. -
    5. Click on the Scan button at the bottom right corner of the window. This will scan the network for possible WPS pins that can be used to connect to it.
    6. -
    7. Wait for the scan to finish. You will see a list of WPS pins with their corresponding probabilities of success.
    8. -
    9. Select a pin that has a high probability of success (preferably above 90%) and click on the JumpStart button at the bottom right corner of the window. This will launch JumpStart and try to connect to the network using that pin.
    10. -
    11. If successful, JumpStart will show you a message saying "Connected!" and display the network name and password. You can copy the password and use it to access the network from any device.
    12. -
    -

    That's it! You have successfully hacked a Wi-Fi password using dumpper jumpstart. But how can you prevent others from doing the same to your own Wi-Fi network? Let's find out.

    -


    -

    How to Secure Your Wi-Fi Network from Hackers

    -

    Hacking Wi-Fi passwords using dumpper jumpstart is easy, but it also exposes the weaknesses of the WPS protocol. WPS, which stands for Wi-Fi Protected Setup, is a feature that allows users to connect to a Wi-Fi network by pressing a button on the router or entering a pin code. However, this feature also makes it easier for hackers to break into the network by guessing or brute-forcing the pin code.

    -

    Therefore, the best way to secure your Wi-Fi network from hackers is to disable WPS on your router. Here are the steps:

    -
      -
    1. Log in to your router's web interface by typing its IP address (usually 192.168.1.1 or 192.168.0.1) in your browser's address bar.
    2. -
    3. Enter your username and password (usually admin and admin) to access the settings.
    4. -
    5. Go to the Wireless or Wi-Fi section and look for the WPS option.
    6. -
    7. Turn off or disable the WPS option and save the changes.
    8. -
    -

    By disabling WPS, you will prevent hackers from using dumpper jumpstart or other similar tools to hack your Wi-Fi password. However, this is not enough to ensure complete security. You also need to follow some other best practices and tips for Wi-Fi security, such as:

    -
      -
    • Use a strong and unique password for your Wi-Fi network. Avoid using common or easy-to-guess passwords, such as 12345678, password, or your name. Use a combination of uppercase and lowercase letters, numbers, and symbols.
    • -
    • Change your Wi-Fi password regularly, at least once every six months. This will prevent hackers from using old passwords that they may have obtained from previous attacks.
    • -
    • Use a secure encryption method for your Wi-Fi network, such as WPA2 or WPA3. Avoid using outdated or weak encryption methods, such as WEP or WPA.
    • -
    • Hide your Wi-Fi network name (SSID) from public view. This will make it harder for hackers to find and target your network. You can do this by disabling the broadcast SSID option on your router's settings.
    • -
    • Limit the number of devices that can connect to your Wi-Fi network. This will reduce the chances of unauthorized access and bandwidth consumption. You can do this by enabling the MAC address filtering option on your router's settings and adding only the devices that you trust.
    • -
    -

    By following these tips, you will make your Wi-Fi network more secure and less vulnerable to hacking attacks. You will also enjoy a faster and smoother internet experience.
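To make the first tip concrete, here is a minimal Python sketch (not part of the original article) that generates a random passphrase you could set as your WPA2 or WPA3 key; the 20-character length and the symbol set are arbitrary choices, so adjust them to whatever your router accepts.

    # Minimal sketch: generate a random 20-character Wi-Fi passphrase.
    # Length and alphabet are arbitrary; trim the symbols if your router
    # rejects some of them.
    import secrets
    import string

    alphabet = string.ascii_letters + string.digits + "-_.!@#"
    passphrase = "".join(secrets.choice(alphabet) for _ in range(20))
    print(passphrase)

Store the result in a password manager rather than reusing it anywhere else.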

    -

    Conclusion

    -

    Dumpper jumpstart is a powerful software that allows you to hack Wi-Fi passwords with ease. However, it also exposes the flaws of the WPS protocol and the risks of using unsecured Wi-Fi networks. Therefore, you should use dumpper jumpstart responsibly and ethically, and only for educational purposes. You should also secure your own Wi-Fi network from hackers by disabling WPS and following some other best practices and tips for Wi-Fi security.

    -

    We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. And if you liked this article, please share it with your friends and family who might be interested in learning more about dumpper jumpstart and Wi-Fi hacking.

    -

    FAQs

    -

    Here are some frequently asked questions about dumpper jumpstart and Wi-Fi hacking:

    -

    Is dumpper jumpstart legal?

    -

    Dumpper jumpstart is legal as long as you use it for educational purposes and with permission from the owner of the Wi-Fi network that you are hacking. However, if you use it for malicious purposes or without permission from the owner of the Wi-Fi network that you are hacking, you are breaking the law and could face legal consequences.

    -

    Is dumpper jumpstart safe?

    -

    Dumpper jumpstart is safe as long as you download it from a trusted source and scan it with an antivirus program before running it. However, if you download it from an untrusted source or run it without scanning it with an antivirus program, you could expose your computer to malware or viruses that could harm your system or steal your data.

    -

    Does dumpper jumpstart work on all Wi-Fi networks?

    -

    Dumpper jumpstart works on most Wi-Fi networks that have WPS enabled and have a green lock icon next to them. However, it does not work on Wi-Fi networks that have WPS disabled or have a red lock icon next to them. These networks are more secure and require a different method to hack them.

    -

    How long does it take to hack a Wi-Fi password using dumpper jumpstart?

    -

    The time it takes to hack a Wi-Fi password using dumpper jumpstart depends on several factors, such as the strength of the password, the number of WPS pins available, and the speed of your computer and internet connection. In general, it can take anywhere from a few seconds to a few minutes to hack a Wi-Fi password using dumpper jumpstart.

    -

    Can I use dumpper jumpstart on my smartphone or tablet?

    -

    No, you cannot use dumpper jumpstart on your smartphone or tablet. Dumpper jumpstart is only compatible with Windows operating systems and requires a computer with a wireless adapter that supports monitor mode and packet injection. However, there are other apps that you can use on your smartphone or tablet to hack Wi-Fi passwords, such as WPS WPA Tester, AndroDumpper, or WiFi Warden.

    -

    Can I use dumpper jumpstart to hack other types of passwords?

    -

    No, you cannot use dumpper jumpstart to hack other types of passwords. Dumpper jumpstart is only designed to hack Wi-Fi passwords using the WPS protocol. It cannot hack passwords for websites, email accounts, social media accounts, or other online services.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download FIFA World Cup 2022 Mobile Game for Free and Relive the Tournament.md b/spaces/congsaPfin/Manga-OCR/logs/Download FIFA World Cup 2022 Mobile Game for Free and Relive the Tournament.md deleted file mode 100644 index a1af0c834828d32d3edd2fc634224918b368d879..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download FIFA World Cup 2022 Mobile Game for Free and Relive the Tournament.md +++ /dev/null @@ -1,185 +0,0 @@ -
    -

    How to Download FIFA Game Free

    -

    If you are a fan of soccer, you might have heard of FIFA game, one of the most popular and realistic soccer video games in the world. FIFA game is developed by EA Sports and has been released annually since 1993. The latest version, FIFA 22, was launched on October 1, 2021, and features groundbreaking new HyperMotion gameplay technology, new modes, and more.

    -

    download fifa game free


    Download Zip ✯✯✯ https://urlca.com/2uO7l1



    -

    But how can you download FIFA game free and enjoy the ultimate soccer experience on your PC or mobile device? In this article, we will show you what is FIFA game, what are its features and system requirements, and where to download it free. Let's get started!

    -

    What is FIFA Game?

    -

    FIFA game is a soccer simulation game that lets you play as your favorite teams and players from over 30 leagues, including the Premier League, La Liga, Bundesliga, Serie A, Ligue 1, and more. You can also play as national teams from over 200 countries, including the 32 qualified teams for the FIFA World Cup 2022™ in Qatar.

    -

    FIFA game has various modes that cater to different preferences and skill levels. You can play solo or online with friends in Career Mode, Ultimate Team, Volta Football, Pro Clubs, and more. You can also relive the world's greatest soccer tournament in the FIFA World Cup™ mode, where you can replay the official tournament brackets with any of the qualified nations.

    -


    -

    Features of FIFA Game

    -

    FIFA game is powered by football and features many innovations and improvements across every mode in the game. Here are some of the features that make FIFA game stand out:

    -
      -
    • HyperMotion gameplay technology: This is a new feature that is only available on PlayStation 5, Xbox Series X|S, and Stadia. It uses advanced machine learning and real-time motion capture of 22 professional players to create more realistic animations, movements, and behaviors on the pitch.
    • -
    • New attacking tactics: This feature allows you to customize your offensive strategy and create more chances to score. You can choose from different formations, styles, and instructions for your team.
    • -
    • Rewritten goalkeepers: This feature enhances the intelligence and reactions of the goalkeepers, making them more reliable and consistent. You can also customize your goalkeeper's kit and appearance.
    • -
    • Career mode: This mode lets you create your own club or player and lead them to glory. You can customize your club's name, logo, kit, stadium, and more. You can also enjoy an overhauled player career experience that gives you more ways to progress, achieve, and immerse yourself in your pro's journey.
    • -
    • Volta football: This mode brings back the street football style of play with more flair and creativity. You can play in various locations around the world, such as London, Paris, Dubai, and Sydney. You can also customize your avatar's appearance, skills, and gear.
    • -
    • Ultimate team: This mode lets you build your dream team with players from past and present. You can collect player items, trade them on the transfer market, and compete in various online and offline modes. You can also enjoy new features such as FUT Heroes, Division Rivals, FUT Champions, and more.
    • -
    • Pro clubs: This mode lets you create or join a club with up to 10 friends and play online matches against other clubs. You can customize your club's name, kit, badge, and more. You can also control your virtual pro's development and growth.
    • -
    -

    System Requirements for FIFA Game

    -

    Before you download FIFA game free, you need to make sure that your PC or mobile device meets the minimum or recommended system requirements for the game. Here are the system requirements for FIFA game according to the official EA Sports website:

    - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    OSCPURAMGraphics CardHard DriveOnline Connection
    FIFA 22 Minimum PC Specs
    64-bit Windows 10Athlon X4 880K @4GHz or Core i3-6100 @3.7GHz or equivalent8 GBRadeon HD 7850 2GB or GeForce GTX 660 2GB or equivalentAt least 50 GB of free space512 KBPS or faster internet speed
    FIFA 22 Recommended PC Specs
    64-bit Windows 10FX 8150 @3.6GHz or Core i5-3550 @3.40GHz or equivalent8 GBRadeon R9 270X or GeForce GTX 670 or equivalentAt least 50 GB of free spaceBroadband connection recommended
    FIFA Mobile Minimum Specs
    Android 6.0 or iOS 12.0 or higherN/AN/AN/AN/AN/A
    FIFA Mobile Recommended Specs
    Android 8.0 or iOS 13.0 or higherN/AN/AN/AN/AN/A
    -

    Where to Download FIFA Game Free?

    -

    Now that you know what is FIFA game and what are its system requirements, you might be wondering where to download it free. Well, there are two ways to do that: FIFA Mobile and FIFA 22. Let's take a look at each of them.

    -

    FIFA Mobile

    -

    FIFA Mobile is a free-to-play version of FIFA game that is designed for mobile devices. It has many of the same features and modes as the PC and console versions, but with some differences and limitations. For example, FIFA Mobile has a smaller file size, simpler controls, and faster matches. It also has some exclusive modes, such as Team of the Week, VS Attack, and Legacy Team.

    -

    To download FIFA Mobile free, you need to have an Android or iOS device that meets the minimum or recommended specs. Here are the steps to install FIFA Mobile on your device:

    -

    How to Install FIFA Mobile on Android Devices

    -
      -
    1. Go to the Google Play Store and search for FIFA Mobile.
    2. -
    3. Tap on the Install button and wait for the download to finish.
    4. -
    5. Open the app and follow the on-screen instructions to set up your account and preferences.
    6. -
    7. Enjoy playing FIFA Mobile!
    8. -
    -

    How to Install FIFA Mobile on iOS Devices

    -
      -
    1. Go to the App Store and search for FIFA Mobile.
    2. -
    3. Tap on the Get button and wait for the download to finish.
    4. -
    5. Open the app and follow the on-screen instructions to set up your account and preferences.
    6. -
    7. Enjoy playing FIFA Mobile!
    8. -
    -

    FIFA 22

    -

    FIFA 22 is the latest and most advanced version of FIFA game that is available for PC, PlayStation 4, PlayStation 5, Xbox One, Xbox Series X|S, and Stadia. It has all the features and modes that we mentioned earlier, plus some more. For example, FIFA 22 has a new Career Mode story mode, a new FUT Champions mode, and a new Create a Club mode.

    -

    To download FIFA 22 free, you need to have a PC or console that meets the minimum or recommended specs. You also need to have one of these three options: EA Play subscription, Origin Access Premier subscription, or pre-order bonus. Here are the steps to get FIFA 22 free with each option:

    -

    How to Get FIFA 22 with EA Play Subscription

    -
      -
    1. EA Play is a subscription service that gives you access to a collection of EA games, including FIFA 22. It costs $4.99 per month or $29.99 per year.
    2. -
    3. To get EA Play, go to the official website and choose your platform (PC, PlayStation, Xbox, or Steam).
    4. -
    5. Sign up for an account and choose your payment method.
    6. -
    7. Download the EA Play app on your device and log in with your account.
    8. -
    9. Browse the library of games and find FIFA 22.
    10. -
    11. Download and install FIFA 22 on your device.
    12. -
    13. Enjoy playing FIFA 22!
    14. -
    -

    How to Get FIFA 22 with Origin Access Premier Subscription

    -
      -
    1. Origin Access Premier is a subscription service that gives you access to the full versions of EA games, including FIFA 22. It costs $14.99 per month or $99.99 per year.
    2. -
    3. To get Origin Access Premier, go to the official website and choose your platform (PC or Steam).
    4. -
    5. Sign up for an account and choose your payment method.
    6. -
    7. Download the Origin app on your PC and log in with your account.
    8. -
    9. Browse the library of games and find FIFA 22.
    10. -
    11. Download and install FIFA 22 on your PC.
    12. -
    13. Enjoy playing FIFA 22!
    14. -
    -

    How to Get FIFA 22 with Pre-Order Bonus

    -
      -
    1. If you pre-order FIFA 22 before October 1, 2021, you can get some exclusive bonuses, such as a FUT Heroes player item, a FUT Ambassador loan player item, a Career Mode homegrown talent, and more.
    2. -
    3. To pre-order FIFA 22, go to the official website and choose your platform (PC, PlayStation, Xbox, or Stadia).
    4. -
    5. Select the edition of FIFA 22 that you want to buy (Standard Edition, Ultimate Edition, or Legacy Edition).
    6. -
    7. Choose your payment method and confirm your order.
    8. -
    9. Wait for the release date of FIFA 22 (October 1, 2021) and download the game on your device.
    10. -
    11. Enjoy playing FIFA 22 with your pre-order bonuses!
    12. -
    -

    Conclusion

    -

    In this article, we have shown you how to download FIFA game free and enjoy the ultimate soccer experience on your PC or mobile device. We have explained what is FIFA game, what are its features and system requirements, and where to download it free. We have also given you three options to get FIFA 22 free: FIFA Mobile, EA Play subscription, Origin Access Premier subscription, or pre-order bonus.

    -

    We hope that this article has been helpful and informative for you. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy gaming!

    -

    Summary of the Article

    -

    This article is about how to download FIFA game free. It covers the following points:

    -
      -
    • FIFA game is a soccer simulation game that has various modes and features.
    • -
    • FIFA game has different system requirements depending on the platform and version.
    • -
    • FIFA Mobile is a free-to-play version of FIFA game that is designed for mobile devices.
    • -
    • FIFA 22 is the latest and most advanced version of FIFA game that is available for PC and consoles.
    • -
    • You can get FIFA 22 free with EA Play subscription, Origin Access Premier subscription, or pre-order bonus.
    • -
    -

    FAQs

    -

    Here are some frequently asked questions about how to download FIFA game free:

    -
    1. Q: Can I play FIFA game offline?
    2. A: Yes, you can play some modes of FIFA game offline, such as Career Mode, Ultimate Team (single-player), Volta Football (single-player), and Pro Clubs (single-player). However, you need an online connection to access some features and updates of the game.

      -
    3. Q: Can I play FIFA game cross-platform?
    4. A: No, you cannot play FIFA game cross-platform. You can only play with other players who have the same platform and version of the game as you.

      -
    5. Q: How can I update FIFA game?
    6. A: You can update FIFA game automatically or manually depending on your device settings. You need an online connection to download and install the updates. The updates may include new features, modes, players, kits, stadiums, and more.

      -
    7. Q: How can I contact EA Sports for support?
    8. A: You can contact EA Sports for support by visiting their official website and choosing your platform and game. You can also find answers to common issues and problems in their help center and community forums.

      -
    9. Q: How can I give feedback or suggestions for FIFA game?
    10. A: You can give feedback or suggestions for FIFA game by visiting their official website and filling out a survey. You can also join their social media channels and share your opinions and ideas with other players and developers.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Dr. Driving A Driving Game that Drives You Crazy on Google Play.md b/spaces/congsaPfin/Manga-OCR/logs/Dr. Driving A Driving Game that Drives You Crazy on Google Play.md deleted file mode 100644 index 647b0fda89343dd5878f6760a154c96924684b8e..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Dr. Driving A Driving Game that Drives You Crazy on Google Play.md +++ /dev/null @@ -1,92 +0,0 @@ - -

    Dr. Driving: A Fun and Realistic Driving Simulation Game

    -

    If you are looking for a driving game that is not about racing or speed, but rather about driving well, finding parking, and managing traffic, then you should try Dr. Driving. Dr. Driving is a mobile simulation game from SUD Inc. that has been downloaded over 500 million times from the Google Play Store. In this article, we will tell you why you should download Dr. Driving from the play store, what features it offers, how to install it on your device, and some tips and tricks for playing it.

    -

    Features of Dr. Driving

    -

    Stunning graphics and smooth gameplay

    -

    Dr. Driving is not your typical driving game that has flashy cars and unrealistic physics. Instead, it has realistic cars and environments that look like real life. The graphics are crisp and clear, and the animations are smooth and fluid. You can see the details of the streets, buildings, traffic lights, pedestrians, and other vehicles as you drive around the city. The game also runs smoothly on most devices, without any lag or glitches.

    -

    dr driving download play store


    Download Zip ✑ ✑ ✑ https://urlca.com/2uOf4f



    -

    Various modes and missions to challenge your driving skills

    -

    Dr. Driving has different modes and missions that test your driving skills in different ways. You can choose from modes like Highway, Speed Parking, Broken Brake, Fuel Efficiency, VIP Escort, Truck Delivery, Lane Change, Drift Mode, and more. Each mode has its own objectives and challenges that require you to drive carefully, accurately, and efficiently. For example, in Speed Parking mode, you have to park your car in a designated spot within a time limit, without hitting any obstacles or other cars. In Broken Brake mode, you have to drive with a faulty brake system that makes your car stop randomly.

    -

    Online multiplayer and leaderboards to compete with other players

    -

    If you want to spice up your driving experience, you can play online multiplayer mode in Dr. Driving. You can sign in with your Google account to play online multiplayer mode. You can challenge other players from around the world in real-time races or tournaments. You can also see your ranking on the global or regional leaderboards and compare your scores with other players.

    -

    How to Download and Install Dr. Driving from the Play Store

    -

    Step 1: Open the Play Store app on your device

    -

    To download Dr. Driving from the play store, you need to have a device that runs on Android OS version 4.1 or higher. Once you have that, open the Play Store app on your device; you can find it on your home screen or in your app drawer.

    Step 2: Search for Dr. Driving or use this link

    -

    Once you open the Play Store app, you can search for Dr. Driving by typing its name in the search bar. You can also use this link to go directly to the game's page on the Play Store. You will see the game's icon, name, rating, and description on the page.

    -

    Step 3: Tap on Install and wait for the download to finish

    -

    To install Dr. Driving on your device, you need to tap on the Install button on the game's page. The game is about 13 MB in size, so it will not take long to download. You will see a progress bar that shows how much of the game has been downloaded. You will also need to accept the permissions that the game requires, such as access to your device's storage, network, and vibration.

    -

    Step 4: Launch the game and enjoy driving

    -

    After the download and installation are complete, you can launch the game by tapping on the Open button on the game's page. You can also find the game's icon on your device's home screen or app drawer. When you launch the game, you will see a splash screen with the game's logo and name. Then, you will see the main menu where you can choose your mode, car, settings, and more. You can also sign in with your Google account to play online multiplayer mode and save your progress. Now, you are ready to enjoy driving in Dr. Driving.

    -

    Tips and Tricks for Playing Dr. Driving

    -

    Choose the right car for each mission

    -

    Dr. Driving has a variety of cars that you can choose from, each with different attributes such as speed, acceleration, handling, and braking. You can unlock more cars by completing missions or buying them with coins or coupons. You can also customize your car's color and appearance. However, not all cars are suitable for all missions. Some missions require you to drive fast, while others require you to drive carefully or efficiently. Therefore, you should choose the right car for each mission based on its attributes and objectives.

    -

    -

    Upgrade your car with coins and coupons

    -

    As you play Dr. Driving, you will earn coins and coupons that you can use to upgrade your car's performance and features. You can upgrade your car's engine, tires, brakes, suspension, and turbo with coins. You can also buy special features such as nitro boost, auto brake, auto fuel, and auto repair with coupons. Upgrading your car will make it faster, smoother, and more durable. However, upgrading your car will also increase its fuel consumption and repair cost, so you should balance your upgrades wisely.

    -

    Follow the traffic rules and avoid collisions

    -

    Dr. Driving is not a racing game where you can drive recklessly and crash into other cars or objects. Instead, it is a simulation game where you have to follow the traffic rules and avoid collisions. If you break the traffic rules or cause collisions, you will lose points and money. You will also damage your car and have to pay for repairs. Therefore, you should drive carefully and responsibly in Dr. Driving. You should obey the speed limit, stop at red lights, signal before turning or changing lanes, yield to pedestrians and other vehicles, and park properly.

    -

    Use the tilt or touch controls according to your preference

    -

    Dr. Driving has two types of controls that you can use to steer your car: tilt or touch. You can choose your preferred control type in the settings menu of the game. Tilt control means that you tilt your device left or right to steer your car accordingly. Touch control means that you tap on the left or right side of the screen to steer your car accordingly. Both control types have their advantages and disadvantages depending on your personal preference and comfort level.

    -

    Connect with your Google account to play online and save your progress

    -

    If you want to play online multiplayer mode in Dr. Driving or save your progress across different devices, you need to connect with your Google account in the game. You can sign in with your Google account by tapping on the Google Play Games icon on the main menu of the game. By signing in with your Google account, you can access online multiplayer mode where you can challenge other players from around the world in real-time races or tournaments. You can also see your ranking on the global or regional leaderboards and compare your scores with other players. Moreover, by signing in with your Google account, you can save your progress in the cloud and sync it across different devices that have Dr. Driving installed.

    -

    Conclusion

    -


    Dr. Driving is a fun and realistic driving simulation game that you can download from the play store and enjoy on your mobile device. It has stunning graphics, smooth gameplay, various modes and missions, online multiplayer and leaderboards, and a lot of cars and features to choose from. It also tests your driving skills and challenges you to drive well, find parking, and manage traffic. If you are looking for a driving game that is not about racing or speed, but rather about driving well, then you should download Dr. Driving from the play store today.

    -

    FAQs

    -

    Q1: Is Dr. Driving free to play?

    -

    A1: Yes, Dr. Driving is free to play and download from the play store. However, it contains ads and in-app purchases that you can disable or buy with real money if you want.

    -

    Q2: How can I get more coins and coupons in Dr. Driving?

    -

    A2: You can get more coins and coupons in Dr. Driving by completing missions, winning races or tournaments, watching ads, or buying them with real money.

    -

    Q3: What is the difference between Dr. Driving and Dr. Driving 2?

    -

    A3: Dr. Driving 2 is the sequel to Dr. Driving that has more features, modes, cars, graphics, and challenges than the original game. However, both games are similar in terms of gameplay and concept.

    -

    Q4: How can I play Dr. Driving on my PC or laptop?

    -

    A4: You can play Dr. Driving on your PC or laptop by using an Android emulator such as BlueStacks or NoxPlayer that allows you to run Android apps on your computer.

    -

    Q5: How can I contact the developer of Dr. Driving?

    -

    A5: You can contact the developer of Dr. Driving by sending an email to sudinc@naver.com or visiting their website at http://www.sudinc.net/.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Instagram se photo video aur reels kaise download kare.md b/spaces/congsaPfin/Manga-OCR/logs/Instagram se photo video aur reels kaise download kare.md deleted file mode 100644 index 081e58792bc2b6dc3b18db47d6062531f6dc1ff3..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Instagram se photo video aur reels kaise download kare.md +++ /dev/null @@ -1,101 +0,0 @@ - -

    Instagram Kaise Download Kare: A Complete Guide

    -

    Instagram is one of the most popular social media platforms in the world, with over one billion monthly active users. It allows you to create and share your photos, videos, stories, reels, and more with your friends and followers. You can also discover new content and people based on your interests, shop for products, chat with others, and have fun.

    -

    instagram kaise download kare


    Download Filehttps://urlca.com/2uOfGO



    -

    But how do you download Instagram on your device? In this article, we will show you how to download Instagram on Android, iOS, and PC or laptop. We will also share some of the features, benefits, and tips of using Instagram. So, let's get started!

    -

    What is Instagram and why you should use it

    -

    Instagram is a photo and video-sharing social networking service owned by Meta Platforms, formerly known as Facebook. It was launched in 2010 and has since grown to become one of the most influential and engaging platforms in the world.

    -

    There are many reasons why you should use Instagram, whether you are an individual, a creator, or a business. Here are some of them:

    -

    Instagram features and benefits

    -
      -
    • Find like-minded users who share your interests and passions. You can follow your favorite accounts, celebrities, influencers, brands, and hashtags. You can also explore new content and creators based on your preferences.
    • -
    • Market your content to a large and active audience. You can showcase your creativity, personality, products, or services to millions of potential customers. You can also use free tools like analytics, filters, stickers, and more to enhance your posts.
    • -
    • Make money by promoting your products or services, or collaborating with brands. You can sell your products directly on Instagram through shopping features. You can also earn money by partnering with brands that align with your niche and audience.
    • -
    • Share photos and videos that showcase your creativity and personality. You can post photos and videos to your feed or stories that reflect your style and mood. You can also create short and entertaining videos with reels, or go live with your followers.
    • -
    • Brand your business and build trust and loyalty with your customers. You can use your bio, profile picture, highlights, and posts to convey your brand identity and message. You can also interact with your customers through likes, comments, messages, polls, quizzes, and more.
    • -
    -

    Instagram tips and tricks

    -
      -
    • Add and manage multiple accounts from the same device. You can switch between your personal and business accounts easily by holding down your profile picture in the navigation bar.
    • -
    • See all the posts you've liked. You can go to Settings > Your Activity > Interactions > Likes to see the last 300 posts you've liked.
    • -

      How to download Instagram on your device

      -

      Now that you know what Instagram is and why you should use it, let's see how you can download it on your device. Instagram is available for free on Android, iOS, and PC or laptop. Here are the steps to download Instagram on each of these devices:

      -

      How to download Instagram on Android

      -

      If you have an Android device, such as a smartphone or tablet, you can download Instagram from the Google Play Store. Here are the steps to do so:

      -


      -

      Step 1: Go to Google Play Store

      -

      On your Android device, open the Google Play Store app. You can find it on your home screen or in your app drawer.

      -

      Step 2: Search for Instagram

      -

      In the Google Play Store app, tap on the search bar at the top and type "Instagram". You will see a list of results. Tap on the one that says "Instagram" and has the logo of a camera.

      -

      Step 3: Tap on Install

      -

      On the Instagram app page, tap on the green button that says "Install". This will start downloading and installing the app on your device. You may need to accept some permissions and terms of service before proceeding.

      -

      Step 4: Open the app and sign up or log in

      -

      Once the app is installed, you can open it by tapping on the icon on your home screen or in your app drawer. You will see a welcome screen that asks you to sign up or log in. If you already have an Instagram account, you can log in with your username and password. If you don't have an account, you can sign up with your email address, phone number, or Facebook account. You will also need to create a username and password for your account.

      -

      How to download Instagram on iOS

      -

      If you have an iOS device, such as an iPhone or iPad, you can download Instagram from the App Store. Here are the steps to do so:

      -

      Step 1: Go to App Store

      -

      On your iOS device, open the App Store app. You can find it on your home screen.

      -

      Step 2: Search for Instagram

      -

      In the App Store app, tap on the search icon at the bottom right and type "Instagram". You will see a list of results. Tap on the one that says "Instagram" and has the logo of a camera.

      -

      Step 3: Tap on Get

      -

      On the Instagram app page, tap on the blue button that says "Get". This will start downloading and installing the app on your device. You may need to enter your Apple ID password or use Touch ID or Face ID before proceeding.

      -

      Step 4: Open the app and sign up or log in

      -

      Once the app is installed, you can open it by tapping on the icon on your home screen. You will see a welcome screen that asks you to sign up or log in. If you already have an Instagram account, you can log in with your username and password. If you don't have an account, you can sign up with your email address, phone number, or Facebook account. You will also need to create a username and password for your account.

      -

      How to download Instagram on PC or laptop

      -

      If you have a PC or laptop, you can use Instagram either through its web version or through its app from Microsoft Store. Here are the steps to do so:

      -

      Step 1: Go to https://www.instagram.com/

      -

      On your PC or laptop, open a web browser and go to https://www.instagram.com/. This is the official website of Instagram where you can access its web version.

      -

      Step 2: Sign up or log in with your account

      -

      On the website, you will see a screen that asks you to sign up or log in with your account. If you already have an Instagram account, you can log in with your username and password. If you don't have an account, you can sign up with your email address, phone number, or Facebook account. You will also need to create a username and password for your account.

      -

      Step 3: Use the web version of Instagram or download the app from Microsoft Store

      -

      The web version lets you browse your feed, like, comment, send messages, and more. However, some features, such as stories, reels, and live, are not available on the web version. If you want to use these features, you can download the app from Microsoft Store. To do so, follow these steps:

      -

      Step 1: Go to Microsoft Store

      -

      On your PC or laptop, open the Microsoft Store app. You can find it on your Start menu or taskbar.

      -

      Step 2: Search for Instagram

      -

      In the Microsoft Store app, click on the search icon at the top right and type "Instagram". You will see a list of results. Click on the one that says "Instagram" and has the logo of a camera.

      -

      Step 3: Click on Get

      -

      On the Instagram app page, click on the blue button that says "Get". This will start downloading and installing the app on your device. You may need to sign in with your Microsoft account before proceeding.

      -

      Step 4: Open the app and sign up or log in

      -

      Once the app is installed, you can open it by clicking on the icon on your Start menu or taskbar. You will see a welcome screen that asks you to sign up or log in. If you already have an Instagram account, you can log in with your username and password. If you don't have an account, you can sign up with your email address, phone number, or Facebook account. You will also need to create a username and password for your account.

      -

      Conclusion and FAQs

      -

      In this article, we have shown you how to download Instagram on Android, iOS, and PC or laptop. We have also shared some of the features, benefits, and tips of using Instagram. We hope you found this article helpful and informative.

      -

      If you have any questions or doubts about downloading Instagram, you can check out these FAQs:

      -

      FAQs

      -
        -
      • Q: Is Instagram free to use?
      • -
      • A: Yes, Instagram is free to use. However, some features may require in-app purchases or subscriptions.
      • -
      • Q: How can I update Instagram to the latest version?
      • -
      • A: You can update Instagram to the latest version by going to Google Play Store, App Store, or Microsoft Store and checking for updates. Alternatively, you can enable automatic updates for Instagram in your device settings.
      • -
      • Q: How can I delete my Instagram account?
      • -
      • A: You can delete your Instagram account by going to Settings > Help > Help Center > Managing Your Account > Delete Your Account. You will need to log in with your account and follow the instructions.
      • -
      • Q: How can I contact Instagram support?
      • -
    A: You can contact Instagram support by going to Settings > Help > Report a Problem. You can also visit https://help.instagram.com/ for more help and resources.
      • -
      • Q: How can I download my data from Instagram?
      • -
      • A: You can download your data from Instagram by going to Settings > Security > Download Data. You will need to enter your email address and password and wait for a link to download your data.
      • -

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Mediafire Spaceflight Simulator MOD APK - Explore the Solar System with Unlimited Content.md b/spaces/congsaPfin/Manga-OCR/logs/Mediafire Spaceflight Simulator MOD APK - Explore the Solar System with Unlimited Content.md deleted file mode 100644 index 514748a942d51a3bb68b71b75fae4df3263feba7..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Mediafire Spaceflight Simulator MOD APK - Explore the Solar System with Unlimited Content.md +++ /dev/null @@ -1,98 +0,0 @@ - -

      Spaceflight Simulator Full APK Mediafıre: How to Download and Play the Ultimate Space Game

      -

      Do you dream of exploring the vastness of space, building your own rockets, and visiting other planets and moons? If so, you will love Spaceflight Simulator, a realistic and fun game that lets you do all that and more. In this article, we will show you how to download Spaceflight Simulator full APK mediafıre, a modded version of the game that gives you access to all the features and content for free. We will also give you some tips and tricks on how to play the game and make the most out of your space adventures.

      -

      spaceflight simulator full apk mediafıre


      DOWNLOADhttps://urlca.com/2uO5Qm



      -

      What is Spaceflight Simulator?

      -

      Spaceflight Simulator is a game that simulates space flight in a 2D environment. It was developed by Stefo Mai Morojna, an independent game developer from Romania. The game was released in 2017 for Android and iOS devices, and in 2022 for Windows PC via Steam. The game has received overwhelmingly positive reviews from critics and players alike, who praised its realism, creativity, and replay value.

      -

      Features of Spaceflight Simulator

      -

      Spaceflight Simulator has many features that make it an engaging and immersive game. Some of these features are:

      -
        -
      • You can build your own rockets from scratch, using various parts such as engines, fuel tanks, capsules, landing legs, solar panels, and more.
      • -
      • You can launch your rockets from different locations on Earth, such as Cape Canaveral, Baikonur, Vandenberg, and others.
      • -
      • You can explore a realistic solar system that includes all the planets and their moons, as well as some asteroids and comets.
      • -
      • You can perform various maneuvers such as orbiting, landing, docking, reentry, and more.
      • -
      • You can deploy payloads such as satellites, rovers, probes, and space stations.
      • -
      • You can customize your rockets with different colors and skins.
      • -
      • You can save and load your rockets and flights.
      • -
      • You can share your rockets and flights with other players online.
      • -
      -

      Benefits of Spaceflight Simulator Mod APK

      -

      Spaceflight Simulator is a free-to-play game, but it has some limitations and in-app purchases that may affect your gameplay experience. For example, you can only use a limited number of parts to build your rockets, you can only launch from one location on Earth, you can only visit a few planets and moons, and you have to watch ads to unlock some features. However, there is a way to bypass these restrictions and enjoy the full potential of the game. That is by downloading Spaceflight Simulator full APK mediafıre.

      -

      How to Download Spaceflight Simulator Full APK Mediafıre

      -

      If you want to download Spaceflight Simulator full APK mediafıre, you will need to follow these simple steps:

      -

      Step 1: Go to the Mediafıre link

      -

      The first thing you need to do is to go to the Mediafıre link where the APK file is hosted. You can find the link in the description of this video or in the comment section below. Alternatively, you can search for "Spaceflight Simulator full APK mediafıre" on Google or any other search engine and look for a reliable source. Once you find the link, click on it and you will be redirected to the Mediafıre website.

      -


      -

      Step 2: Download the APK file

      -

      Once you are on the Mediafıre website, you will see a green button that says "Download". Click on it and wait for a few seconds until the download starts. The APK file is about 50 MB in size, so it should not take too long to download. You can check the progress of the download on your notification bar or in your browser.

      -

      Step 3: Install the APK file

      -

      After the download is complete, you will need to install the APK file on your device. To do this, you will need to enable the installation of apps from unknown sources on your device settings. This is a security measure that prevents malicious apps from harming your device. To enable this option, go to your device settings, then security, then unknown sources, and toggle it on. Alternatively, you can also tap on the downloaded APK file and it will prompt you to enable this option.

      -

      Once you have enabled this option, you can proceed to install the APK file. Tap on the downloaded APK file and it will open a window that shows you the permissions that the app requires. Review them carefully and then tap on "Install". Wait for a few seconds until the installation is complete.

      -

      Step 4: Launch the game and enjoy

      -

      Now that you have installed Spaceflight Simulator full APK mediafıre, you can launch the game and enjoy all its features and content for free. You will see a new icon on your home screen or app drawer that says "Spaceflight Simulator". Tap on it and it will open the game. You will be greeted by a welcome screen that shows you some basic information about the game. Tap on "Start" and you will be taken to the main menu of the game.

      -

      From here, you can choose to play in sandbox mode or career mode, access your saved rockets and flights, customize your settings, and more. You can also tap on the "More" button to see more options such as sharing your rockets and flights online, joining the online community, watching tutorials, and more.

      -

      Tips and Tricks for Playing Spaceflight Simulator

      -

      Spaceflight Simulator is a game that requires some skill and knowledge to master. It is not a simple arcade game where you just press buttons and watch things happen. It is a realistic simulation game where you have to design, build, launch, control, and land your rockets. It is also a creative game where you can experiment with different parts, configurations, planets, and scenarios. To help you get started and improve your gameplay experience, here are some tips and tricks that you should know:

      -

      Tip 1: Use the sandbox mode to experiment with different rockets and planets

      -

      The sandbox mode is where you can unleash your creativity and imagination. In this mode, you have access to all the parts, locations, planets, and moons in the game. You can build any rocket you want, from simple rockets to complex space stations. You can also launch your rockets from any location on Earth or from any planet or moon in the solar system. You can also change the time of day, gravity, atmosphere, and other parameters of each planet or moon.

      -

      The sandbox mode is a great way to learn how different parts work, how different planets affect your flight, how different maneuvers are performed, and how different scenarios are possible. You can also use this mode to test your rockets before launching them in career mode or online.

      -

      Tip 2: Learn how to orbit, land, and dock your spacecraft

      -

    To orbit a planet, you need to build up enough sideways speed that your rocket keeps falling around it instead of back onto it. To land on a planet or moon, you need to slow down your rocket and align it with the surface. To dock with another spacecraft, you need to match its orbit and rendezvous with it. These maneuvers require precise timing, thrust, and direction. You can use the map view and the maneuver nodes to plan your flights and see the effects of your actions. You can also use the autopilot feature to assist you with some of these maneuvers.
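    If you are curious about the physics behind orbiting, the speed needed for a circular orbit follows v = sqrt(mu / r). The short sketch below plugs in Earth-like example values; the game may use its own scaled numbers, so treat it as a rough illustration rather than in-game data.

```python
# Rough circular-orbit estimate: v = sqrt(mu / r).
# Earth-like example values; the game's planets may be scaled differently.
import math

MU_EARTH = 3.986e14      # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6_371_000.0    # m, mean Earth radius
altitude = 400_000.0     # m, a typical low-orbit altitude

r = R_EARTH + altitude
v = math.sqrt(MU_EARTH / r)
period = 2 * math.pi * r / v

print(f"Circular orbit speed : {v / 1000:.2f} km/s")        # about 7.7 km/s
print(f"Orbital period       : {period / 60:.1f} minutes")  # about 92 minutes
```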

      -

      Tip 3: Use the map view and the maneuver nodes to plan your flights

      -

      The map view is a useful tool that shows you the orbits of your spacecraft and other celestial bodies. You can access it by tapping on the globe icon on the top right corner of the screen. In the map view, you can see your current position, velocity, altitude, apoapsis, periapsis, inclination, and other orbital parameters. You can also zoom in and out, drag the screen, and change the focus of the camera.

      -

      The maneuver nodes are another useful tool that allows you to plan your flights ahead of time. You can access them by tapping on your orbit in the map view. A maneuver node is a point on your orbit where you can change your velocity by applying a certain amount of thrust in a certain direction. By doing this, you can change your orbit to achieve different goals, such as reaching a higher or lower altitude, changing your inclination, escaping or entering a planet's sphere of influence, or intercepting another spacecraft.

      -

      To create a maneuver node, tap on your orbit and drag the prograde, retrograde, normal, antinormal, radial in, or radial out icons. These icons represent the different directions that you can apply thrust. As you drag them, you will see a dotted line that shows you the new orbit that you will get after performing the maneuver. You will also see some information such as the delta-v required, the time until the maneuver, and the estimated burn time.

      -

      To execute a maneuver node, you need to align your spacecraft with the blue marker on the navball and start burning at the right time. The game will show you a countdown timer and a green bar that indicates how much thrust you need to apply. You can also use the autopilot feature to execute the maneuver node automatically.
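    To get a feel for what a maneuver-node plan is estimating, here is a rough sketch of the classic two-burn (Hohmann) transfer between two circular orbits. The gravitational parameter and radii below are example Earth values, not numbers taken from the game.

```python
# Hohmann transfer: two burns to move between circular orbits r1 and r2.
# Example numbers only (Earth-like low orbit up to a high orbit).
import math

MU = 3.986e14        # m^3/s^2, gravitational parameter (example value)
r1 = 6_771_000.0     # m, starting circular orbit (~400 km altitude)
r2 = 42_164_000.0    # m, target circular orbit (geostationary radius)

v1 = math.sqrt(MU / r1)                          # speed in the starting orbit
v2 = math.sqrt(MU / r2)                          # speed in the target orbit
dv1 = v1 * (math.sqrt(2 * r2 / (r1 + r2)) - 1)   # burn 1: raise the apoapsis
dv2 = v2 * (1 - math.sqrt(2 * r1 / (r1 + r2)))   # burn 2: circularize at r2

print(f"Burn 1: {dv1:.0f} m/s, Burn 2: {dv2:.0f} m/s")
print(f"Total delta-v: {dv1 + dv2:.0f} m/s")     # roughly 3,900 m/s here
```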

      -

      Tip 4: Adjust your thrust, fuel, and mass to optimize your performance

      -

      One of the most important factors that affect your space flight is your thrust-to-weight ratio (TWR). This is a measure of how much thrust your rocket can produce compared to how much it weighs. A higher TWR means that your rocket can accelerate faster and reach higher speeds. A lower TWR means that your rocket can barely lift off or struggle to overcome gravity.

      -

      To increase your TWR, you can do several things such as using more powerful engines, using less fuel, or reducing your mass. However, these actions also have trade-offs such as increasing your cost, reducing your range, or limiting your payload capacity. Therefore, you need to find a balance between these factors depending on your mission objectives.

      -

      Another way to optimize your performance is to use staging. Staging is a technique where you separate parts of your rocket that are no longer needed during flight. For example, you can jettison empty fuel tanks or boosters that are only useful for liftoff. By doing this, you can reduce your mass and increase your TWR without sacrificing fuel or payload.
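    Both ideas in this tip are easy to put into numbers. The sketch below uses made-up thrust and mass figures purely for illustration (they are not stats from any in-game part): it shows how burning off or dropping mass raises your TWR, and how much delta-v that propellant buys according to the rocket equation.

```python
# TWR and staging with made-up example figures (not real in-game part stats).
import math

G0 = 9.81  # m/s^2, surface gravity used for both TWR and specific impulse

def twr(thrust_newtons: float, mass_kg: float) -> float:
    """Thrust-to-weight ratio; a rocket needs more than 1.0 to lift off."""
    return thrust_newtons / (mass_kg * G0)

thrust = 600_000.0      # N, combined engine thrust (example)
mass_full = 45_000.0    # kg, rocket with full tanks
mass_empty = 30_000.0   # kg, same rocket with tanks empty

print(f"TWR with full tanks : {twr(thrust, mass_full):.2f}")   # about 1.36
print(f"TWR with empty tanks: {twr(thrust, mass_empty):.2f}")  # about 2.04

# Tsiolkovsky rocket equation: delta-v = Isp * g0 * ln(m_full / m_empty)
isp = 300.0             # s, example specific impulse
delta_v = isp * G0 * math.log(mass_full / mass_empty)
print(f"Delta-v from that propellant: {delta_v:.0f} m/s")      # about 1,190 m/s
```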

      -

      Tip 5: Join the online community and share your creations and discoveries

      -

      Spaceflight Simulator is not only a game but also a platform where you can express your creativity and curiosity. You can share your rockets and flights with other players online and see what they have made as well. You can also join the online community and interact with other space enthusiasts. You can ask questions, give feedback, exchange tips, join challenges, participate in events, and more.

      -

      To share your rockets and flights online, you need to tap on the "More" button on the main menu and then tap on "Share". You will see two options: "Share Rocket" and "Share Flight". Tap on either one and you will be able to upload your rocket or flight to the online database. You will also be able to add a name, description, tags, and screenshots for your rocket or flight.

      -

    To see what other players have shared, you need to tap on the "More" button on the main menu and then tap on "Online". You will see two tabs: "Rockets" and "Flights". Tap on either one and you will be able to browse, download, rate, and comment on other players' rockets or flights. You can also use the search function to find rockets or flights by name, tag, or rating.

      -

      To join the online community, you need to tap on the "More" button on the main menu and then tap on "Community". You will see several options such as "Forum", "Discord", "Reddit", "YouTube", and "Twitter". Tap on any of them and you will be redirected to the corresponding website or app where you can interact with other players and fans of Spaceflight Simulator. You can also tap on the "Feedback" option to send your suggestions, bug reports, or compliments to the developer.

      -

      Conclusion

      -

      Spaceflight Simulator is a game that simulates space flight in a 2D environment. It is a realistic and fun game that lets you build your own rockets, explore the solar system, and perform various maneuvers. It is also a creative and educational game that allows you to experiment with different parts, configurations, planets, and scenarios. It is a game that appeals to both casual and hardcore gamers, as well as to space enthusiasts and aspiring astronauts.

      -

      If you want to enjoy the full potential of the game, you should download Spaceflight Simulator full APK mediafıre, a modded version of the game that gives you access to all the features and content for free. You can download it easily by following the steps we have shown you in this article. You can also improve your gameplay experience by following the tips and tricks we have given you in this article. And finally, you can join the online community and share your creations and discoveries with other players online.

      -

      We hope that this article has helped you learn more about Spaceflight Simulator and how to download Spaceflight Simulator full APK mediafıre. We hope that you have fun playing this game and exploring the wonders of space. Thank you for reading and happy spaceflight!

      -

      FAQs

      -

      Here are some frequently asked questions about Spaceflight Simulator and Spaceflight Simulator full APK mediafıre:

      -
        -
      1. Is Spaceflight Simulator safe to play?
      2. -

        Yes, Spaceflight Simulator is safe to play. It does not contain any viruses, malware, or spyware. It does not require any special permissions or access to your device. It does not collect or share any personal information or data. It is a legitimate game that is available on official app stores such as Google Play Store, Apple App Store, and Steam.

        -
      3. Is Spaceflight Simulator full APK mediafıre safe to download?
      4. -

        Yes, Spaceflight Simulator full APK mediafıre is safe to download. It does not contain any viruses, malware, or spyware. It does not require any special permissions or access to your device. It does not collect or share any personal information or data. It is a modded version of the game that is hosted on Mediafıre, a reputable file hosting service.

        -
      5. Is Spaceflight Simulator realistic?
      6. -

        Yes, Spaceflight Simulator is realistic. It uses real physics and math to simulate space flight in a 2D environment. It uses real data and models for the solar system and its planets and moons. It uses real units and measurements for distance, time, speed, mass, force, etc. It uses real rocket parts and components for building your rockets. It uses real orbital mechanics and maneuvers for controlling your rockets.

        -
      7. Is Spaceflight Simulator educational?
      8. -

        Yes, Spaceflight Simulator is educational. It teaches you about space flight in a fun and interactive way. It helps you learn about rocket science, engineering, astronomy, physics, math, geography, history, and more. It sparks your curiosity and creativity about space exploration and discovery.

        -
      9. Is Spaceflight Simulator multiplayer?
      10. -

        No, Spaceflight Simulator is not multiplayer. It is a single-player game that does not support online or offline co-op or versus modes. However, it does have an online feature that allows you to share your rockets and flights with other players online and see what they have made as well. You can also join the online community and interact with other players and fans of Spaceflight Simulator.

        -

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Real Football 2012 APK The Best Way to Enjoy Konamis Soccer on Your Mobile.md b/spaces/congsaPfin/Manga-OCR/logs/Real Football 2012 APK The Best Way to Enjoy Konamis Soccer on Your Mobile.md deleted file mode 100644 index b8cfdcc630108af9ec9ad96ea5fcac1c265d77ad..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Real Football 2012 APK The Best Way to Enjoy Konamis Soccer on Your Mobile.md +++ /dev/null @@ -1,171 +0,0 @@ -
      -

      Real Football 2012 Konami APK: A Review

      -

      If you are a fan of soccer games, you may have heard of Real Football 2012 Konami APK, a popular mobile game that lets you experience the thrill of playing soccer on your Android device. But what is this game all about, and how does it compare to other soccer games on the market? In this article, we will review Real Football 2012 Konami APK, its features, pros and cons, and how it stacks up against other soccer games. We will also answer some frequently asked questions about the game, and give you some tips and tricks to improve your skills. Let's get started!

      -

      real football 2012 konami apk


      Download ►►►►► https://urlca.com/2uO5tG



      -

      What is Real Football 2012 Konami APK?

      -

      A brief introduction to the game and its features

      -

Real Football 2012 Konami APK is a mobile soccer simulation game developed by Gameloft, a leading developer of mobile games. The game was released in 2011 as a sequel to Real Football 2011, and it features improved graphics, gameplay, and content. The game allows you to play as your favorite soccer teams and players from around the world, in various leagues, cups, and tournaments. You can also create your own custom team and player, and customize their appearance, skills, and attributes. The game offers a realistic soccer experience, with lifelike animations, physics, and sounds. You can also enjoy various game modes and challenges, such as quick match, exhibition, league or cup mode, super challenge mode (similar to master league), free kick challenge mode, online match challenge mode (where you can play against other players online), and Facebook mode (where you can share your achievements and challenge your friends).

      -

      How to download and install the game on your Android device

      -

      To download and install Real Football 2012 Konami APK on your Android device, you need to follow these steps:

      -
        -
      1. Go to or or on your web browser.
      2. -
      3. Download the APK file of the game (about 30 MB) to your device.
      4. -
      5. Enable unknown sources on your device settings (if not already enabled).
      6. -
      7. Locate the downloaded APK file on your device storage and tap on it.
      8. -
      9. Follow the instructions on the screen to install the game.
      10. -
      11. Launch the game and enjoy!
      12. -
      -

      What are the pros and cons of Real Football 2012 Konami APK?

      -

      The pros of the game

      -

      Real Football 2012 Konami APK has many advantages that make it a great choice for soccer fans. Some of the pros of the game are:

      -

      -
        -
      • Realistic graphics and animations: The game boasts high-quality graphics and animations that create a realistic and immersive soccer experience. The players, stadiums, crowds, and weather effects are all well-designed and detailed. The game also uses motion capture technology to capture the movements and expressions of real soccer players, such as Cristiano Ronaldo, Lionel Messi, Neymar, and more.
      • -
      • Various game modes and challenges: The game offers a variety of game modes and challenges that suit different preferences and skill levels. You can play a quick match, an exhibition, a league or cup mode, a super challenge mode, a free kick challenge mode, an online match challenge mode, or a Facebook mode. Each mode has its own objectives, rewards, and difficulties. You can also unlock new teams, players, stadiums, and items as you progress in the game.
      • -
      • Online multiplayer and social features: The game allows you to play online with other players from around the world, or with your friends on Facebook. You can compete in online tournaments, leagues, and leaderboards, and chat with other players. You can also share your achievements, stats, and screenshots on Facebook, and challenge your friends to beat your scores.
      • -
      -

      The cons of the game

      -

      Real Football 2012 Konami APK is not without its flaws, however. Some of the cons of the game are:

      -
        -
      • Requires a lot of storage space and internet connection: The game requires about 1 GB of storage space on your device, which may be too much for some users. The game also requires a stable internet connection to play online and access some features, such as Facebook mode. This may cause some lagging or crashing issues for some users.
      • -
      • Some bugs and glitches may occur: The game is not perfect, and some users have reported some bugs and glitches that affect the gameplay. For example, some users have experienced problems with the controls, the sound, the graphics, or the loading time. Some users have also encountered errors or crashes when trying to play online or access Facebook mode.
      • -
      • Not compatible with some devices and regions: The game is not compatible with all Android devices and regions. Some users have reported that the game does not work on their devices or in their regions. For example, some users have said that the game does not support their screen resolution or language. Some users have also said that the game is not available in their country or region.
      • -
      -

      How does Real Football 2012 Konami APK compare to other soccer games?

      -

      A comparison table of Real Football 2012 Konami APK, PES 2012 APK, and FIFA 12 APK

      -

      Real Football 2012 Konami APK is not the only soccer game on the market. There are other popular soccer games that you may want to try out, such as PES 2012 APK and FIFA 12 APK. How do these games compare to Real Football 2012 Konami APK? Here is a comparison table that shows some of the main features and differences among the three games:

      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Feature | Real Football 2012 Konami APK | PES 2012 APK | FIFA 12 APK |
| --- | --- | --- | --- |
| Developer | Gameloft | Konami | EA Sports |
| Size | About 1 GB | About 200 MB | About 1.5 GB |
| Graphics | High-quality graphics and animations; motion capture technology; realistic weather effects | Good graphics and animations; smooth gameplay; realistic player movements | Excellent graphics and animations; HD quality; realistic player faces and expressions |
| Gameplay | Realistic soccer simulation; various game modes and challenges; online multiplayer and social features | Arcade-style soccer gameplay; improved AI and controls; team play mode; online multiplayer | Authentic soccer gameplay; enhanced physics engine; career mode; online multiplayer |
| Content | Hundreds of teams and players from around the world; customizable teams and players; unlockable items | Hundreds of teams and players from around the world; official licenses from UEFA and Copa Santander Libertadores; editable teams and players | Over 500 teams and 15,000 players from around the world; official licenses from FIFA and EA Sports; authentic teams and players |
      -

      A summary of the main differences and similarities among the three games

      -

      As you can see from the table, Real Football 2012 Konami APK, PES 2012 APK, and FIFA 12 APK have some similarities and differences in terms of features, graphics, gameplay, and content. Here is a summary of the main points:

      -
        -
      • All three games are soccer simulation games that allow you to play as your favorite teams and players from around the world, in various modes and challenges.
      • -
      • All three games have good graphics and animations, but FIFA 12 APK has the best graphics and HD quality, followed by Real Football 2012 Konami APK, and then PES 2012 APK.
      • -
      • All three games have realistic gameplay, but FIFA 12 APK has the most authentic gameplay and physics engine, followed by Real Football 2012 Konami APK, and then PES 2012 APK.
      • -
      • All three games have a lot of content, but FIFA 12 APK has the most content and official licenses, followed by PES 2012 APK, and then Real Football 2012 Konami APK.
      • -
      • All three games have online multiplayer features, but Real Football 2012 Konami APK has the most social features and Facebook integration, followed by PES 2012 APK, and then FIFA 12 APK.
      • -
      • All three games require a lot of storage space on your device, but FIFA 12 APK requires the most space (1.5 GB), followed by Real Football 2012 Konami APK (1 GB), and then PES 2012 APK (200 MB).
      • -
      -

      Conclusion

      -

      A recap of the main points of the article

      -

      In conclusion, Real Football 2012 Konami APK is a great soccer game that offers a realistic and immersive soccer experience on your Android device. The game has high-quality graphics and animations, various game modes and challenges, online multiplayer and social features, and hundreds of teams and players from around the world. The game also allows you to customize your own team and player, and unlock new items as you progress. The game has some drawbacks, such as requiring a lot of storage space and internet connection, having some bugs and glitches, and not being compatible with some devices and regions. However, these issues are not major enough to ruin the overall enjoyment of the game. The game is also comparable to other soccer games on the market, such as PES 2012 APK and FIFA 12 APK, but it has its own strengths and weaknesses that make it unique.

      -

      A recommendation for the readers who are interested in playing Real Football 2012 Konami APK

      -

      If you are interested in playing Real Football 2012 Konami APK, we recommend that you give it a try. The game is free to download and play on your Android device, and it will provide you with hours of fun and excitement. You can download the game from or or , or scan the QR code below. You can also check out the official website of the game for more information and updates. We hope you enjoy playing Real Football 2012 Konami APK!

      - QR code for Real Football 2012 Konami APK -

      FAQs

      -

      Q1. Is Real Football 2012 Konami APK free to play?

      -

      A1. Yes, Real Football 2012 Konami APK is free to play on your Android device. However, the game may contain some in-app purchases or ads that require real money.

      -

      Q2. What are the minimum requirements to play Real Football 2012 Konami APK?

      -

      A2. The minimum requirements to play Real Football 2012 Konami APK are:

      -
        -
      • An Android device with Android version 4.0 or higher.
      • -
      • At least 1 GB of free storage space on your device.
      • -
      • A stable internet connection to play online and access some features.
      • -
      -

      Q3. How can I update Real Football 2012 Konami APK to the latest version?

      -

      A3. You can update Real Football 2012 Konami APK to the latest version by following these steps:

      -
        -
      1. Go to or or on your web browser.
      2. -
      3. Download the latest APK file of the game (about 30 MB) to your device.
      4. -
      5. Enable unknown sources on your device settings (if not already enabled).
      6. -
      7. Locate the downloaded APK file on your device storage and tap on it.
      8. -
      9. Follow the instructions on the screen to install the update.
      10. -
      11. Launch the game and enjoy!
      12. -
      -

      Q4. How can I contact the developers of Real Football 2012 Konami APK for feedback or support?

      -

      A4. You can contact the developers of Real Football 2012 Konami APK for feedback or support by using one of these methods:

      -
        -
      • Email: You can send an email to with your feedback or queries.
      • -
      • Website: You can visit the official website of the game and fill out the contact form or check out the FAQ section.
      • -
      • Facebook: You can follow the official Facebook page of the game and leave a comment or message.
      • -
      -

      Q5. What are some tips and tricks to improve my skills in Real Football 2012 Konami APK?

      -

      A5. Here are some tips and tricks to improve your skills in Real Football 2012 Konami APK:

      -
        -
      • Practice: The best way to improve your skills is to practice regularly and try out different game modes and challenges. You can also watch tutorials and tips videos on YouTube or other platforms.
      • -
      • Customize: You can customize your own team and player, and adjust their skills and attributes according to your preferences and play style. You can also unlock new items and equipment that can enhance your performance.
      • -
      • Strategize: You can use different strategies and tactics to win matches, such as changing formations, switching players, passing, shooting, dribbling, defending, etc. You can also use special moves and skills that can give you an edge over your opponents.
      • -
      • Compete: You can compete with other players online or with your friends on Facebook, and learn from their strengths and weaknesses. You can also join online tournaments, leagues, and leaderboards, and challenge yourself to reach higher ranks and rewards.
      • -

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Guide to Download GTA 5 on PC PS4 and Xbox One.md b/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Guide to Download GTA 5 on PC PS4 and Xbox One.md deleted file mode 100644 index 9553e62c2c6d37c581096548de644c937b278c77..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/The Ultimate Guide to Download GTA 5 on PC PS4 and Xbox One.md +++ /dev/null @@ -1,136 +0,0 @@ - -

      Easy Way to Download GTA 5

      -

      If you are a fan of action-adventure games, you have probably heard of Grand Theft Auto V, or GTA 5 for short. GTA 5 is one of the most popular and successful games of all time, with over 150 million copies sold worldwide. But how can you download GTA 5 on your PC and enjoy its amazing features? In this article, we will show you the easy way to download GTA 5 and have fun with this incredible game.

      -

      easy way to download gta 5


      DOWNLOAD > https://urlca.com/2uOe9l



      -

      What is GTA 5?

      -

      GTA 5 is the latest installment in the Grand Theft Auto series, developed by Rockstar Games. It was released in 2013 for PlayStation 3 and Xbox 360, and later for PlayStation 4, Xbox One, and PC. GTA 5 is set in the fictional state of San Andreas, which is based on Southern California. The game follows the lives of three protagonists: Michael, a retired bank robber; Franklin, a street hustler; and Trevor, a psychopathic criminal. The game allows you to switch between these characters at any time, and explore the vast open world of Los Santos and Blaine County.

      -

      GTA 5 offers a variety of activities and missions for you to complete, such as heists, races, shootouts, robberies, assassinations, and more. You can also customize your characters, vehicles, weapons, and properties, and interact with other characters and objects in the world. The game also features a realistic physics engine, dynamic weather system, day-night cycle, and radio stations with licensed music.

      -

      Why download GTA 5?

      -

      GTA 5 is not only a fun and entertaining game, but also a masterpiece of technical achievement. If you download GTA 5 on your PC, you can enjoy several benefits that make the game even better than on consoles. Here are some of them:

      -

      High-resolution graphics and performance

      -

      GTA 5 for PC offers players the option to explore the award-winning world of Los Santos and Blaine County in resolutions of up to 4k and beyond, as well as the chance to experience the game running at 60 frames per second. This means that you can see every detail of the game's stunning graphics, from the reflections on the cars to the shadows on the buildings. You can also adjust the graphics settings to suit your preferences and system capabilities.

      -

      Enhanced gameplay and customization options

      -

      GTA 5 for PC also gives you more control over your gameplay experience. You can choose between different control schemes, such as keyboard and mouse, gamepad, or both. You can also use mods to modify or add new content to the game, such as new vehicles, weapons, missions, characters, maps, and more. Mods are created by other players and can be downloaded from various websites or platforms. However, be careful when using mods online, as they may violate the game's terms of service or cause compatibility issues.

      -

      How to download GTA 5 for free on PC
      -GTA 5 download size and system requirements
      -GTA 5 Steam download guide and tips
      -GTA 5 Epic Games Store download and installation
      -GTA 5 Rockstar Games Launcher download and activation
      -How to download GTA 5 on PS4 and PS5
      -How to download GTA 5 on Xbox One and Xbox Series X/S
      -GTA 5 download link and serial key
      -GTA 5 download error and how to fix it
      -GTA 5 download speed and how to increase it
      -How to download GTA 5 mods and install them
      -GTA 5 online download and how to play with friends
      -GTA 5 mobile download for Android and iOS
      -GTA 5 free download for laptop and PC
      -GTA 5 full version download with crack
      -How to download GTA 5 updates and patches
      -GTA 5 premium edition download and what's included
      -GTA 5 digital download vs physical copy
      -GTA 5 direct download from official website
      -GTA 5 torrent download and how to avoid viruses
      -How to download GTA 5 faster and save bandwidth
      -GTA 5 compressed download and how to extract it
      -GTA 5 demo download and how to play it
      -GTA 5 beta download and how to join it
      -GTA 5 original download and how to verify it
      -How to download GTA 5 in parts and combine them
      -GTA 5 highly compressed download for low-end PC
      -GTA 5 latest version download and how to update it
      -GTA 5 offline download and how to play without internet
      -GTA 5 setup download and how to run it
      -How to download GTA 5 from Steam library
      -GTA 5 PC game download with all DLCs
      -GTA 5 PS3 and PS2 download and compatibility
      -GTA 5 Xbox 360 download and backward compatibility
      -GTA 5 apk + obb download for Android devices
      -GTA 5 iso file download for PSP and PS Vita
      -GTA 5 zip file download for Windows and Mac OS
      -How to resume GTA 5 download if interrupted or paused
      -How to transfer GTA 5 download from one device to another
      -How to delete GTA 5 download and uninstall the game

      -

      Online multiplayer mode and community support

      -

      Another reason to download GTA 5 is to play GTA Online, the online multiplayer mode of the game. GTA Online allows you to create your own character and join up to 30 other players in a shared world. You can cooperate or compete with other players in various modes, such as races, deathmatches, heists, missions, freemode events, and more. You can also join or create crews with other players, chat with them via voice or text, and share your creations with them.

      -

      GTA Online

      GTA Online is constantly updated with new content and features, such as new vehicles, weapons, clothing, missions, events, and more. You can also access the Rockstar Games Social Club, a platform that allows you to track your progress, stats, achievements, and rewards in the game. You can also join the GTA Online community and participate in forums, contests, live streams, and more.

      -

      How to download GTA 5?

      -

      Now that you know why you should download GTA 5, you may be wondering how to do it. Well, it's not that hard, but you need to follow some steps and meet some requirements. Here are the things you need to do:

      -

      Check your system specifications and disk space

      -

      Before you download GTA 5, you need to make sure that your PC can run the game smoothly. GTA 5 is a demanding game that requires a lot of resources and power. Here are the minimum and recommended system specifications for GTA 5:

      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Minimum | Recommended |
| --- | --- |
| OS: Windows 10 64 Bit, Windows 8.1 64 Bit, Windows 8 64 Bit, Windows 7 64 Bit Service Pack 1 | OS: Windows 10 64 Bit |
| Processor: Intel Core 2 Quad CPU Q6600 @ 2.40GHz (4 CPUs) / AMD Phenom 9850 Quad-Core Processor (4 CPUs) @ 2.5GHz | Processor: Intel Core i5 3470 @ 3.2GHz (4 CPUs) / AMD X8 FX-8350 @ 4GHz (8 CPUs) |
| Memory: 4GB | Memory: 8GB |
| Video Card: NVIDIA GeForce 9800 GT 1GB / AMD Radeon HD 4870 1GB (DX 10, 10.1, 11) | Video Card: NVIDIA GeForce GTX 660 2GB / AMD Radeon HD 7870 2GB |
| Sound Card: DirectX compatible | Sound Card: DirectX compatible |
| HDD Space: 72GB | HDD Space: 72GB |
| DVD Drive | DVD Drive |
      -

      You can check your system specifications by going to the Control Panel > System and Security > System on your PC. You can also use a tool like Speccy or CPU-Z to get more detailed information about your hardware.

      -

      You also need to make sure that you have enough disk space to download and install the game. GTA 5 requires about 72 GB of free space on your hard drive. You can check your disk space by going to the File Explorer > This PC on your PC.
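
If you prefer to check these numbers with a script rather than clicking through Windows dialogs, a small Python sketch along the following lines works on most systems. It assumes the third-party psutil package is installed, and it only compares against the approximate figures quoted above (72 GB of disk space, 8 GB of recommended RAM); treat it as a rough helper, not an official requirements checker.

```python
# Rough pre-download check for GTA 5 -- requires psutil (pip install psutil).
import platform
import psutil

REQUIRED_FREE_GB = 72  # approximate install size quoted for GTA 5
RECOMMENDED_RAM_GB = 8

ram_gb = psutil.virtual_memory().total / 1024**3
free_gb = psutil.disk_usage("C:\\" if platform.system() == "Windows" else "/").free / 1024**3

print(f"OS:        {platform.system()} {platform.release()}")
print(f"CPU:       {platform.processor()}")
print(f"RAM:       {ram_gb:.1f} GB "
      f"({'OK' if ram_gb >= RECOMMENDED_RAM_GB else 'below the recommended 8 GB'})")
print(f"Free disk: {free_gb:.1f} GB "
      f"({'OK' if free_gb >= REQUIRED_FREE_GB else 'not enough free space for GTA 5'})")
```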

      -

      Choose a platform and purchase the game

      -

      The next step is to choose a platform where you can download and play GTA 5 on your PC. There are several options available, such as Steam, Epic Games Store, Rockstar Games Launcher, or physical discs. Each platform has its own advantages and disadvantages, such as price, availability, features, support, and more. You can compare them and choose the one that suits you best.

      -

      Once you have chosen a platform, you need to purchase the game from it. You can either buy the game online or from a local store. The price of the game may vary depending on the platform and the edition of the game. The standard edition of GTA 5 costs about $29.99 on Steam and Epic Games Store, while the premium edition costs about $34.99 on both platforms . The premium edition includes the Criminal Enterprise Starter Pack for GTA Online, which gives you access to additional content and cash worth over $10 million . The Rockstar Games Launcher offers both editions of GTA 5 for the same prices as Steam and Epic Games Store. However, if you buy the game from the Rockstar Games Launcher, you will also get a free copy of GTA San Andreas, another classic game from the series.

      -

      If you prefer to buy a physical copy of GTA 5, you can also do that from a local store or an online retailer. However, you will still need to download and install the game launcher and files from the internet, which may take longer than downloading the game directly from a platform. You will also need a DVD drive to insert the discs into your PC.

      -

      Download and install the game launcher and files

      -

      After you have purchased the game, you need to download and install the game launcher and files on your PC. The game launcher is a program that allows you to access, manage, and play the game. The game files are the data that contain the game's content and features. Depending on the platform you chose, the process of downloading and installing the game launcher and files may differ slightly. Here are the general steps you need to follow:

      -
        -
      1. Open the platform's website or app on your PC and log in with your account.
      2. -
      3. Find GTA 5 in the platform's library or store and click on it.
      4. -
      5. Click on the download or install button and follow the instructions on the screen.
      6. -
      7. Choose a location on your PC where you want to save the game launcher and files.
      8. -
      9. Wait for the download and installation to complete. This may take several hours depending on your internet speed and disk space.
      10. -
      11. Once the download and installation are done, you can launch the game from the platform's website or app, or from a shortcut on your desktop.
      12. -
      -

      Launch the game and enjoy

      -

      Congratulations! You have successfully downloaded GTA 5 on your PC. Now you can launch the game and start playing. Here are some tips to help you enjoy the game:

      -
        -
      • Before you start playing, make sure to adjust the graphics settings to optimize your performance and quality. You can do this by going to the Settings > Graphics menu in the game.
      • -
      • If you want to play GTA Online, you need to create a Rockstar Games Social Club account and link it to your platform account. You can do this by going to the Social Club website or by following the prompts in the game.
      • -
      • If you encounter any problems or issues with the game, such as crashes, bugs, errors, or glitches, you can contact the platform's customer support or visit the Rockstar Games Support website for help.
      • -
      • If you want to learn more about the game, such as its story, characters, missions, activities, secrets, tips, tricks, and more, you can visit the GTA Wiki website or watch some videos on YouTube.
      • -
      -

      Conclusion

      -

      GTA 5 is an amazing game that offers endless hours of fun and entertainment. If you want to download GTA 5 on your PC, you just need to follow some simple steps and meet some requirements. You need to check your system specifications and disk space, choose a platform and purchase the game, download and install the game launcher and files, and launch the game and enjoy. We hope this article helped you learn how to download GTA 5 easily. Now go ahead and play GTA 5 on your PC!

      -

      Frequently Asked Questions

      -

      Here are some common questions that people ask about downloading GTA 5:

      -
        -
      1. How long does it take to download GTA 5?
      2. -

The time it takes to download GTA 5 depends on several factors, such as your internet speed, your platform, and the edition of the game. Since the full install is roughly 72 GB, your connection speed matters most; on average, it may take between 4 and 8 hours to download GTA 5 on PC.

        -
      3. How much does GTA 5 cost?
      4. -

        The price of GTA 5 varies depending on the platform and edition of the game. The standard edition of GTA 5 costs about $29.99 on Steam and Epic Games Store, while the premium edition costs about $34.99 on both platforms. The premium edition includes the Criminal Enterprise Starter Pack for GTA Online, which gives you access to additional content and cash worth over $10 million. The Rockstar Games Launcher offers both editions of GTA 5 for the same prices as Steam and Epic Games Store. However, if you buy the game from the Rockstar Games Launcher, you will also get a free copy of GTA San Andreas, another classic game from the series. If you buy a physical copy of GTA 5, you may find different prices depending on the retailer and the availability of the game.

        -
      5. Can I play GTA 5 offline?
      6. -

        Yes, you can play GTA 5 offline, but only the single-player mode. You need to have an internet connection to play GTA Online, the online multiplayer mode of the game. You also need to have an internet connection to download and install the game launcher and files, as well as to update the game with new patches and content.

        -
      7. Can I play GTA 5 on Mac?
      8. -

        No, GTA 5 is not officially supported on Mac. However, there are some ways to play GTA 5 on Mac, such as using Boot Camp, Parallels Desktop, or GeForce Now. These methods involve installing Windows or streaming the game from a cloud service on your Mac. However, these methods may not work well or at all, and may require additional costs and technical skills. Therefore, we do not recommend playing GTA 5 on Mac.

        -
      9. Is GTA 5 safe to download?
      10. -

        Yes, GTA 5 is safe to download, as long as you download it from a legitimate and trusted platform, such as Steam, Epic Games Store, Rockstar Games Launcher, or physical discs. These platforms ensure that the game is free of viruses, malware, or other harmful software. However, be careful when downloading mods or other files from unknown or unverified sources, as they may contain malicious code or damage your game files.

        -

      197e85843d
      -
      -
      \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/7am Arivu Video Songs HD 1080p Blu-ray Download Sites Listen to the Catchy Tunes and Lyrics of a Fusion Music Album on Player FM.md b/spaces/contluForse/HuggingGPT/assets/7am Arivu Video Songs HD 1080p Blu-ray Download Sites Listen to the Catchy Tunes and Lyrics of a Fusion Music Album on Player FM.md deleted file mode 100644 index 7811e497f0825eca1f7a1523aa6f8792c45153f0..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/7am Arivu Video Songs HD 1080p Blu-ray Download Sites Listen to the Catchy Tunes and Lyrics of a Fusion Music Album on Player FM.md +++ /dev/null @@ -1,6 +0,0 @@ -

      7am arivu video songs hd 1080p blu-ray download sites


      Download Filehttps://ssurll.com/2uzvSE



      - - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/cooelf/Multimodal-CoT/timm/scheduler/tanh_lr.py b/spaces/cooelf/Multimodal-CoT/timm/scheduler/tanh_lr.py deleted file mode 100644 index 8cc338bb1df7a564d9207b32ab0f59cdf1ef4c59..0000000000000000000000000000000000000000 --- a/spaces/cooelf/Multimodal-CoT/timm/scheduler/tanh_lr.py +++ /dev/null @@ -1,120 +0,0 @@ -""" TanH Scheduler - -TanH schedule with warmup, cycle/restarts, noise. - -Hacked together by / Copyright 2020 Ross Wightman -""" -import logging -import math -import numpy as np -import torch - -from .scheduler import Scheduler - - -_logger = logging.getLogger(__name__) - - -class TanhLRScheduler(Scheduler): - """ - Hyberbolic-Tangent decay with restarts. - This is described in the paper https://arxiv.org/abs/1806.01593 - """ - - def __init__(self, - optimizer: torch.optim.Optimizer, - t_initial: int, - lb: float = -6., - ub: float = 4., - t_mul: float = 1., - lr_min: float = 0., - decay_rate: float = 1., - warmup_t=0, - warmup_lr_init=0, - warmup_prefix=False, - cycle_limit=0, - t_in_epochs=True, - noise_range_t=None, - noise_pct=0.67, - noise_std=1.0, - noise_seed=42, - initialize=True) -> None: - super().__init__( - optimizer, param_group_field="lr", - noise_range_t=noise_range_t, noise_pct=noise_pct, noise_std=noise_std, noise_seed=noise_seed, - initialize=initialize) - - assert t_initial > 0 - assert lr_min >= 0 - assert lb < ub - assert cycle_limit >= 0 - assert warmup_t >= 0 - assert warmup_lr_init >= 0 - self.lb = lb - self.ub = ub - self.t_initial = t_initial - self.t_mul = t_mul - self.lr_min = lr_min - self.decay_rate = decay_rate - self.cycle_limit = cycle_limit - self.warmup_t = warmup_t - self.warmup_lr_init = warmup_lr_init - self.warmup_prefix = warmup_prefix - self.t_in_epochs = t_in_epochs - if self.warmup_t: - t_v = self.base_values if self.warmup_prefix else self._get_lr(self.warmup_t) - self.warmup_steps = [(v - warmup_lr_init) / self.warmup_t for v in t_v] - super().update_groups(self.warmup_lr_init) - else: - self.warmup_steps = [1 for _ in self.base_values] - - def _get_lr(self, t): - if t < self.warmup_t: - lrs = [self.warmup_lr_init + t * s for s in self.warmup_steps] - else: - if self.warmup_prefix: - t = t - self.warmup_t - - if self.t_mul != 1: - i = math.floor(math.log(1 - t / self.t_initial * (1 - self.t_mul), self.t_mul)) - t_i = self.t_mul ** i * self.t_initial - t_curr = t - (1 - self.t_mul ** i) / (1 - self.t_mul) * self.t_initial - else: - i = t // self.t_initial - t_i = self.t_initial - t_curr = t - (self.t_initial * i) - - if self.cycle_limit == 0 or (self.cycle_limit > 0 and i < self.cycle_limit): - gamma = self.decay_rate ** i - lr_min = self.lr_min * gamma - lr_max_values = [v * gamma for v in self.base_values] - - tr = t_curr / t_i - lrs = [ - lr_min + 0.5 * (lr_max - lr_min) * (1 - math.tanh(self.lb * (1. 
- tr) + self.ub * tr)) - for lr_max in lr_max_values - ] - else: - lrs = [self.lr_min * (self.decay_rate ** self.cycle_limit) for _ in self.base_values] - return lrs - - def get_epoch_values(self, epoch: int): - if self.t_in_epochs: - return self._get_lr(epoch) - else: - return None - - def get_update_values(self, num_updates: int): - if not self.t_in_epochs: - return self._get_lr(num_updates) - else: - return None - - def get_cycle_length(self, cycles=0): - if not cycles: - cycles = self.cycle_limit - cycles = max(1, cycles) - if self.t_mul == 1.0: - return self.t_initial * cycles - else: - return int(math.floor(-self.t_initial * (self.t_mul ** cycles - 1) / (1 - self.t_mul))) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/meta_arch/panoptic_fpn.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/meta_arch/panoptic_fpn.py deleted file mode 100644 index 1ca5f19a0ce0099a49aad8bb6b659355c4f6e200..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/meta_arch/panoptic_fpn.py +++ /dev/null @@ -1,269 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -import logging -from typing import Dict, List -import torch -from torch import nn - -from annotator.oneformer.detectron2.config import configurable -from annotator.oneformer.detectron2.structures import ImageList - -from ..postprocessing import detector_postprocess, sem_seg_postprocess -from .build import META_ARCH_REGISTRY -from .rcnn import GeneralizedRCNN -from .semantic_seg import build_sem_seg_head - -__all__ = ["PanopticFPN"] - - -@META_ARCH_REGISTRY.register() -class PanopticFPN(GeneralizedRCNN): - """ - Implement the paper :paper:`PanopticFPN`. - """ - - @configurable - def __init__( - self, - *, - sem_seg_head: nn.Module, - combine_overlap_thresh: float = 0.5, - combine_stuff_area_thresh: float = 4096, - combine_instances_score_thresh: float = 0.5, - **kwargs, - ): - """ - NOTE: this interface is experimental. - - Args: - sem_seg_head: a module for the semantic segmentation head. - combine_overlap_thresh: combine masks into one instances if - they have enough overlap - combine_stuff_area_thresh: ignore stuff areas smaller than this threshold - combine_instances_score_thresh: ignore instances whose score is - smaller than this threshold - - Other arguments are the same as :class:`GeneralizedRCNN`. - """ - super().__init__(**kwargs) - self.sem_seg_head = sem_seg_head - # options when combining instance & semantic outputs - self.combine_overlap_thresh = combine_overlap_thresh - self.combine_stuff_area_thresh = combine_stuff_area_thresh - self.combine_instances_score_thresh = combine_instances_score_thresh - - @classmethod - def from_config(cls, cfg): - ret = super().from_config(cfg) - ret.update( - { - "combine_overlap_thresh": cfg.MODEL.PANOPTIC_FPN.COMBINE.OVERLAP_THRESH, - "combine_stuff_area_thresh": cfg.MODEL.PANOPTIC_FPN.COMBINE.STUFF_AREA_LIMIT, - "combine_instances_score_thresh": cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH, # noqa - } - ) - ret["sem_seg_head"] = build_sem_seg_head(cfg, ret["backbone"].output_shape()) - logger = logging.getLogger(__name__) - if not cfg.MODEL.PANOPTIC_FPN.COMBINE.ENABLED: - logger.warning( - "PANOPTIC_FPN.COMBINED.ENABLED is no longer used. " - " model.inference(do_postprocess=) should be used to toggle postprocessing." 
- ) - if cfg.MODEL.PANOPTIC_FPN.INSTANCE_LOSS_WEIGHT != 1.0: - w = cfg.MODEL.PANOPTIC_FPN.INSTANCE_LOSS_WEIGHT - logger.warning( - "PANOPTIC_FPN.INSTANCE_LOSS_WEIGHT should be replaced by weights on each ROI head." - ) - - def update_weight(x): - if isinstance(x, dict): - return {k: v * w for k, v in x.items()} - else: - return x * w - - roi_heads = ret["roi_heads"] - roi_heads.box_predictor.loss_weight = update_weight(roi_heads.box_predictor.loss_weight) - roi_heads.mask_head.loss_weight = update_weight(roi_heads.mask_head.loss_weight) - return ret - - def forward(self, batched_inputs): - """ - Args: - batched_inputs: a list, batched outputs of :class:`DatasetMapper`. - Each item in the list contains the inputs for one image. - - For now, each item in the list is a dict that contains: - - * "image": Tensor, image in (C, H, W) format. - * "instances": Instances - * "sem_seg": semantic segmentation ground truth. - * Other information that's included in the original dicts, such as: - "height", "width" (int): the output resolution of the model, used in inference. - See :meth:`postprocess` for details. - - Returns: - list[dict]: - each dict has the results for one image. The dict contains the following keys: - - * "instances": see :meth:`GeneralizedRCNN.forward` for its format. - * "sem_seg": see :meth:`SemanticSegmentor.forward` for its format. - * "panoptic_seg": See the return value of - :func:`combine_semantic_and_instance_outputs` for its format. - """ - if not self.training: - return self.inference(batched_inputs) - images = self.preprocess_image(batched_inputs) - features = self.backbone(images.tensor) - - assert "sem_seg" in batched_inputs[0] - gt_sem_seg = [x["sem_seg"].to(self.device) for x in batched_inputs] - gt_sem_seg = ImageList.from_tensors( - gt_sem_seg, - self.backbone.size_divisibility, - self.sem_seg_head.ignore_value, - self.backbone.padding_constraints, - ).tensor - sem_seg_results, sem_seg_losses = self.sem_seg_head(features, gt_sem_seg) - - gt_instances = [x["instances"].to(self.device) for x in batched_inputs] - proposals, proposal_losses = self.proposal_generator(images, features, gt_instances) - detector_results, detector_losses = self.roi_heads( - images, features, proposals, gt_instances - ) - - losses = sem_seg_losses - losses.update(proposal_losses) - losses.update(detector_losses) - return losses - - def inference(self, batched_inputs: List[Dict[str, torch.Tensor]], do_postprocess: bool = True): - """ - Run inference on the given inputs. - - Args: - batched_inputs (list[dict]): same as in :meth:`forward` - do_postprocess (bool): whether to apply post-processing on the outputs. - - Returns: - When do_postprocess=True, see docs in :meth:`forward`. - Otherwise, returns a (list[Instances], list[Tensor]) that contains - the raw detector outputs, and raw semantic segmentation outputs. 
- """ - images = self.preprocess_image(batched_inputs) - features = self.backbone(images.tensor) - sem_seg_results, sem_seg_losses = self.sem_seg_head(features, None) - proposals, _ = self.proposal_generator(images, features, None) - detector_results, _ = self.roi_heads(images, features, proposals, None) - - if do_postprocess: - processed_results = [] - for sem_seg_result, detector_result, input_per_image, image_size in zip( - sem_seg_results, detector_results, batched_inputs, images.image_sizes - ): - height = input_per_image.get("height", image_size[0]) - width = input_per_image.get("width", image_size[1]) - sem_seg_r = sem_seg_postprocess(sem_seg_result, image_size, height, width) - detector_r = detector_postprocess(detector_result, height, width) - - processed_results.append({"sem_seg": sem_seg_r, "instances": detector_r}) - - panoptic_r = combine_semantic_and_instance_outputs( - detector_r, - sem_seg_r.argmax(dim=0), - self.combine_overlap_thresh, - self.combine_stuff_area_thresh, - self.combine_instances_score_thresh, - ) - processed_results[-1]["panoptic_seg"] = panoptic_r - return processed_results - else: - return detector_results, sem_seg_results - - -def combine_semantic_and_instance_outputs( - instance_results, - semantic_results, - overlap_threshold, - stuff_area_thresh, - instances_score_thresh, -): - """ - Implement a simple combining logic following - "combine_semantic_and_instance_predictions.py" in panopticapi - to produce panoptic segmentation outputs. - - Args: - instance_results: output of :func:`detector_postprocess`. - semantic_results: an (H, W) tensor, each element is the contiguous semantic - category id - - Returns: - panoptic_seg (Tensor): of shape (height, width) where the values are ids for each segment. - segments_info (list[dict]): Describe each segment in `panoptic_seg`. - Each dict contains keys "id", "category_id", "isthing". 
- """ - panoptic_seg = torch.zeros_like(semantic_results, dtype=torch.int32) - - # sort instance outputs by scores - sorted_inds = torch.argsort(-instance_results.scores) - - current_segment_id = 0 - segments_info = [] - - instance_masks = instance_results.pred_masks.to(dtype=torch.bool, device=panoptic_seg.device) - - # Add instances one-by-one, check for overlaps with existing ones - for inst_id in sorted_inds: - score = instance_results.scores[inst_id].item() - if score < instances_score_thresh: - break - mask = instance_masks[inst_id] # H,W - mask_area = mask.sum().item() - - if mask_area == 0: - continue - - intersect = (mask > 0) & (panoptic_seg > 0) - intersect_area = intersect.sum().item() - - if intersect_area * 1.0 / mask_area > overlap_threshold: - continue - - if intersect_area > 0: - mask = mask & (panoptic_seg == 0) - - current_segment_id += 1 - panoptic_seg[mask] = current_segment_id - segments_info.append( - { - "id": current_segment_id, - "isthing": True, - "score": score, - "category_id": instance_results.pred_classes[inst_id].item(), - "instance_id": inst_id.item(), - } - ) - - # Add semantic results to remaining empty areas - semantic_labels = torch.unique(semantic_results).cpu().tolist() - for semantic_label in semantic_labels: - if semantic_label == 0: # 0 is a special "thing" class - continue - mask = (semantic_results == semantic_label) & (panoptic_seg == 0) - mask_area = mask.sum().item() - if mask_area < stuff_area_thresh: - continue - - current_segment_id += 1 - panoptic_seg[mask] = current_segment_id - segments_info.append( - { - "id": current_segment_id, - "isthing": False, - "category_id": semantic_label, - "area": mask_area, - } - ) - - return panoptic_seg, segments_info diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/transforms.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/transforms.py deleted file mode 100644 index 350cbc11662633ad7f8968eb10be2e7de6e384e9..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/zoe/zoedepth/models/base_models/midas_repo/midas/transforms.py +++ /dev/null @@ -1,234 +0,0 @@ -import numpy as np -import cv2 -import math - - -def apply_min_size(sample, size, image_interpolation_method=cv2.INTER_AREA): - """Rezise the sample to ensure the given size. Keeps aspect ratio. - - Args: - sample (dict): sample - size (tuple): image size - - Returns: - tuple: new size - """ - shape = list(sample["disparity"].shape) - - if shape[0] >= size[0] and shape[1] >= size[1]: - return sample - - scale = [0, 0] - scale[0] = size[0] / shape[0] - scale[1] = size[1] / shape[1] - - scale = max(scale) - - shape[0] = math.ceil(scale * shape[0]) - shape[1] = math.ceil(scale * shape[1]) - - # resize - sample["image"] = cv2.resize( - sample["image"], tuple(shape[::-1]), interpolation=image_interpolation_method - ) - - sample["disparity"] = cv2.resize( - sample["disparity"], tuple(shape[::-1]), interpolation=cv2.INTER_NEAREST - ) - sample["mask"] = cv2.resize( - sample["mask"].astype(np.float32), - tuple(shape[::-1]), - interpolation=cv2.INTER_NEAREST, - ) - sample["mask"] = sample["mask"].astype(bool) - - return tuple(shape) - - -class Resize(object): - """Resize sample to given size (width, height). 
- """ - - def __init__( - self, - width, - height, - resize_target=True, - keep_aspect_ratio=False, - ensure_multiple_of=1, - resize_method="lower_bound", - image_interpolation_method=cv2.INTER_AREA, - ): - """Init. - - Args: - width (int): desired output width - height (int): desired output height - resize_target (bool, optional): - True: Resize the full sample (image, mask, target). - False: Resize image only. - Defaults to True. - keep_aspect_ratio (bool, optional): - True: Keep the aspect ratio of the input sample. - Output sample might not have the given width and height, and - resize behaviour depends on the parameter 'resize_method'. - Defaults to False. - ensure_multiple_of (int, optional): - Output width and height is constrained to be multiple of this parameter. - Defaults to 1. - resize_method (str, optional): - "lower_bound": Output will be at least as large as the given size. - "upper_bound": Output will be at max as large as the given size. (Output size might be smaller than given size.) - "minimal": Scale as least as possible. (Output size might be smaller than given size.) - Defaults to "lower_bound". - """ - self.__width = width - self.__height = height - - self.__resize_target = resize_target - self.__keep_aspect_ratio = keep_aspect_ratio - self.__multiple_of = ensure_multiple_of - self.__resize_method = resize_method - self.__image_interpolation_method = image_interpolation_method - - def constrain_to_multiple_of(self, x, min_val=0, max_val=None): - y = (np.round(x / self.__multiple_of) * self.__multiple_of).astype(int) - - if max_val is not None and y > max_val: - y = (np.floor(x / self.__multiple_of) * self.__multiple_of).astype(int) - - if y < min_val: - y = (np.ceil(x / self.__multiple_of) * self.__multiple_of).astype(int) - - return y - - def get_size(self, width, height): - # determine new height and width - scale_height = self.__height / height - scale_width = self.__width / width - - if self.__keep_aspect_ratio: - if self.__resize_method == "lower_bound": - # scale such that output size is lower bound - if scale_width > scale_height: - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - elif self.__resize_method == "upper_bound": - # scale such that output size is upper bound - if scale_width < scale_height: - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - elif self.__resize_method == "minimal": - # scale as least as possbile - if abs(1 - scale_width) < abs(1 - scale_height): - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - else: - raise ValueError( - f"resize_method {self.__resize_method} not implemented" - ) - - if self.__resize_method == "lower_bound": - new_height = self.constrain_to_multiple_of( - scale_height * height, min_val=self.__height - ) - new_width = self.constrain_to_multiple_of( - scale_width * width, min_val=self.__width - ) - elif self.__resize_method == "upper_bound": - new_height = self.constrain_to_multiple_of( - scale_height * height, max_val=self.__height - ) - new_width = self.constrain_to_multiple_of( - scale_width * width, max_val=self.__width - ) - elif self.__resize_method == "minimal": - new_height = self.constrain_to_multiple_of(scale_height * height) - new_width = self.constrain_to_multiple_of(scale_width * width) - else: - raise ValueError(f"resize_method {self.__resize_method} not implemented") - - return (new_width, new_height) - - def __call__(self, sample): - width, height = self.get_size( - 
sample["image"].shape[1], sample["image"].shape[0] - ) - - # resize sample - sample["image"] = cv2.resize( - sample["image"], - (width, height), - interpolation=self.__image_interpolation_method, - ) - - if self.__resize_target: - if "disparity" in sample: - sample["disparity"] = cv2.resize( - sample["disparity"], - (width, height), - interpolation=cv2.INTER_NEAREST, - ) - - if "depth" in sample: - sample["depth"] = cv2.resize( - sample["depth"], (width, height), interpolation=cv2.INTER_NEAREST - ) - - sample["mask"] = cv2.resize( - sample["mask"].astype(np.float32), - (width, height), - interpolation=cv2.INTER_NEAREST, - ) - sample["mask"] = sample["mask"].astype(bool) - - return sample - - -class NormalizeImage(object): - """Normlize image by given mean and std. - """ - - def __init__(self, mean, std): - self.__mean = mean - self.__std = std - - def __call__(self, sample): - sample["image"] = (sample["image"] - self.__mean) / self.__std - - return sample - - -class PrepareForNet(object): - """Prepare sample for usage as network input. - """ - - def __init__(self): - pass - - def __call__(self, sample): - image = np.transpose(sample["image"], (2, 0, 1)) - sample["image"] = np.ascontiguousarray(image).astype(np.float32) - - if "mask" in sample: - sample["mask"] = sample["mask"].astype(np.float32) - sample["mask"] = np.ascontiguousarray(sample["mask"]) - - if "disparity" in sample: - disparity = sample["disparity"].astype(np.float32) - sample["disparity"] = np.ascontiguousarray(disparity) - - if "depth" in sample: - depth = sample["depth"].astype(np.float32) - sample["depth"] = np.ascontiguousarray(depth) - - return sample diff --git a/spaces/cozyanduofen/bingo/src/components/ui/input.tsx b/spaces/cozyanduofen/bingo/src/components/ui/input.tsx deleted file mode 100644 index 684a857f3d769b78818fb13de1abaebfb09ca79c..0000000000000000000000000000000000000000 --- a/spaces/cozyanduofen/bingo/src/components/ui/input.tsx +++ /dev/null @@ -1,25 +0,0 @@ -import * as React from 'react' - -import { cn } from '@/lib/utils' - -export interface InputProps - extends React.InputHTMLAttributes {} - -const Input = React.forwardRef( - ({ className, type, ...props }, ref) => { - return ( - - ) - } -) -Input.displayName = 'Input' - -export { Input } diff --git a/spaces/cscan/CodeFormer/CodeFormer/basicsr/ops/upfirdn2d/upfirdn2d.py b/spaces/cscan/CodeFormer/CodeFormer/basicsr/ops/upfirdn2d/upfirdn2d.py deleted file mode 100644 index 667f96e1ded35d48f163f37e21d1ed8ff191aac3..0000000000000000000000000000000000000000 --- a/spaces/cscan/CodeFormer/CodeFormer/basicsr/ops/upfirdn2d/upfirdn2d.py +++ /dev/null @@ -1,186 +0,0 @@ -# modify from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/upfirdn2d.py # noqa:E501 - -import torch -from torch.autograd import Function -from torch.nn import functional as F - -try: - from . 
import upfirdn2d_ext -except ImportError: - import os - BASICSR_JIT = os.getenv('BASICSR_JIT') - if BASICSR_JIT == 'True': - from torch.utils.cpp_extension import load - module_path = os.path.dirname(__file__) - upfirdn2d_ext = load( - 'upfirdn2d', - sources=[ - os.path.join(module_path, 'src', 'upfirdn2d.cpp'), - os.path.join(module_path, 'src', 'upfirdn2d_kernel.cu'), - ], - ) - - -class UpFirDn2dBackward(Function): - - @staticmethod - def forward(ctx, grad_output, kernel, grad_kernel, up, down, pad, g_pad, in_size, out_size): - - up_x, up_y = up - down_x, down_y = down - g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1 = g_pad - - grad_output = grad_output.reshape(-1, out_size[0], out_size[1], 1) - - grad_input = upfirdn2d_ext.upfirdn2d( - grad_output, - grad_kernel, - down_x, - down_y, - up_x, - up_y, - g_pad_x0, - g_pad_x1, - g_pad_y0, - g_pad_y1, - ) - grad_input = grad_input.view(in_size[0], in_size[1], in_size[2], in_size[3]) - - ctx.save_for_backward(kernel) - - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - ctx.up_x = up_x - ctx.up_y = up_y - ctx.down_x = down_x - ctx.down_y = down_y - ctx.pad_x0 = pad_x0 - ctx.pad_x1 = pad_x1 - ctx.pad_y0 = pad_y0 - ctx.pad_y1 = pad_y1 - ctx.in_size = in_size - ctx.out_size = out_size - - return grad_input - - @staticmethod - def backward(ctx, gradgrad_input): - kernel, = ctx.saved_tensors - - gradgrad_input = gradgrad_input.reshape(-1, ctx.in_size[2], ctx.in_size[3], 1) - - gradgrad_out = upfirdn2d_ext.upfirdn2d( - gradgrad_input, - kernel, - ctx.up_x, - ctx.up_y, - ctx.down_x, - ctx.down_y, - ctx.pad_x0, - ctx.pad_x1, - ctx.pad_y0, - ctx.pad_y1, - ) - # gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.out_size[0], - # ctx.out_size[1], ctx.in_size[3]) - gradgrad_out = gradgrad_out.view(ctx.in_size[0], ctx.in_size[1], ctx.out_size[0], ctx.out_size[1]) - - return gradgrad_out, None, None, None, None, None, None, None, None - - -class UpFirDn2d(Function): - - @staticmethod - def forward(ctx, input, kernel, up, down, pad): - up_x, up_y = up - down_x, down_y = down - pad_x0, pad_x1, pad_y0, pad_y1 = pad - - kernel_h, kernel_w = kernel.shape - batch, channel, in_h, in_w = input.shape - ctx.in_size = input.shape - - input = input.reshape(-1, in_h, in_w, 1) - - ctx.save_for_backward(kernel, torch.flip(kernel, [0, 1])) - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - ctx.out_size = (out_h, out_w) - - ctx.up = (up_x, up_y) - ctx.down = (down_x, down_y) - ctx.pad = (pad_x0, pad_x1, pad_y0, pad_y1) - - g_pad_x0 = kernel_w - pad_x0 - 1 - g_pad_y0 = kernel_h - pad_y0 - 1 - g_pad_x1 = in_w * up_x - out_w * down_x + pad_x0 - up_x + 1 - g_pad_y1 = in_h * up_y - out_h * down_y + pad_y0 - up_y + 1 - - ctx.g_pad = (g_pad_x0, g_pad_x1, g_pad_y0, g_pad_y1) - - out = upfirdn2d_ext.upfirdn2d(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1) - # out = out.view(major, out_h, out_w, minor) - out = out.view(-1, channel, out_h, out_w) - - return out - - @staticmethod - def backward(ctx, grad_output): - kernel, grad_kernel = ctx.saved_tensors - - grad_input = UpFirDn2dBackward.apply( - grad_output, - kernel, - grad_kernel, - ctx.up, - ctx.down, - ctx.pad, - ctx.g_pad, - ctx.in_size, - ctx.out_size, - ) - - return grad_input, None, None, None, None - - -def upfirdn2d(input, kernel, up=1, down=1, pad=(0, 0)): - if input.device.type == 'cpu': - out = upfirdn2d_native(input, kernel, up, up, down, down, pad[0], pad[1], pad[0], pad[1]) - else: - out = UpFirDn2d.apply(input, 
kernel, (up, up), (down, down), (pad[0], pad[1], pad[0], pad[1])) - - return out - - -def upfirdn2d_native(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1): - _, channel, in_h, in_w = input.shape - input = input.reshape(-1, in_h, in_w, 1) - - _, in_h, in_w, minor = input.shape - kernel_h, kernel_w = kernel.shape - - out = input.view(-1, in_h, 1, in_w, 1, minor) - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) - out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad(out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)]) - out = out[:, max(-pad_y0, 0):out.shape[1] - max(-pad_y1, 0), max(-pad_x0, 0):out.shape[2] - max(-pad_x1, 0), :, ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape([-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1]) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - out = out[:, ::down_y, ::down_x, :] - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h) // down_y + 1 - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w) // down_x + 1 - - return out.view(-1, channel, out_h, out_w) diff --git a/spaces/cymic/Waifu_Diffusion_Webui/scripts/outpainting_mk_2.py b/spaces/cymic/Waifu_Diffusion_Webui/scripts/outpainting_mk_2.py deleted file mode 100644 index c5d8972a42872a5941e91c52707bd6e20c44fd72..0000000000000000000000000000000000000000 --- a/spaces/cymic/Waifu_Diffusion_Webui/scripts/outpainting_mk_2.py +++ /dev/null @@ -1,262 +0,0 @@ -import math - -import numpy as np -import skimage - -import modules.scripts as scripts -import gradio as gr -from PIL import Image, ImageDraw - -from modules import images, processing, devices -from modules.processing import Processed, process_images -from modules.shared import opts, cmd_opts, state - - -# this function is taken from https://github.com/parlance-zz/g-diffuser-bot -def get_matched_noise(_np_src_image, np_mask_rgb, noise_q=1, color_variation=0.05): - # helper fft routines that keep ortho normalization and auto-shift before and after fft - def _fft2(data): - if data.ndim > 2: # has channels - out_fft = np.zeros((data.shape[0], data.shape[1], data.shape[2]), dtype=np.complex128) - for c in range(data.shape[2]): - c_data = data[:, :, c] - out_fft[:, :, c] = np.fft.fft2(np.fft.fftshift(c_data), norm="ortho") - out_fft[:, :, c] = np.fft.ifftshift(out_fft[:, :, c]) - else: # one channel - out_fft = np.zeros((data.shape[0], data.shape[1]), dtype=np.complex128) - out_fft[:, :] = np.fft.fft2(np.fft.fftshift(data), norm="ortho") - out_fft[:, :] = np.fft.ifftshift(out_fft[:, :]) - - return out_fft - - def _ifft2(data): - if data.ndim > 2: # has channels - out_ifft = np.zeros((data.shape[0], data.shape[1], data.shape[2]), dtype=np.complex128) - for c in range(data.shape[2]): - c_data = data[:, :, c] - out_ifft[:, :, c] = np.fft.ifft2(np.fft.fftshift(c_data), norm="ortho") - out_ifft[:, :, c] = np.fft.ifftshift(out_ifft[:, :, c]) - else: # one channel - out_ifft = np.zeros((data.shape[0], data.shape[1]), dtype=np.complex128) - out_ifft[:, :] = np.fft.ifft2(np.fft.fftshift(data), norm="ortho") - out_ifft[:, :] = np.fft.ifftshift(out_ifft[:, :]) - - return out_ifft - - def _get_gaussian_window(width, height, std=3.14, mode=0): - window_scale_x = float(width / min(width, height)) - window_scale_y = float(height / min(width, height)) - - window = 
np.zeros((width, height)) - x = (np.arange(width) / width * 2. - 1.) * window_scale_x - for y in range(height): - fy = (y / height * 2. - 1.) * window_scale_y - if mode == 0: - window[:, y] = np.exp(-(x ** 2 + fy ** 2) * std) - else: - window[:, y] = (1 / ((x ** 2 + 1.) * (fy ** 2 + 1.))) ** (std / 3.14) # hey wait a minute that's not gaussian - - return window - - def _get_masked_window_rgb(np_mask_grey, hardness=1.): - np_mask_rgb = np.zeros((np_mask_grey.shape[0], np_mask_grey.shape[1], 3)) - if hardness != 1.: - hardened = np_mask_grey[:] ** hardness - else: - hardened = np_mask_grey[:] - for c in range(3): - np_mask_rgb[:, :, c] = hardened[:] - return np_mask_rgb - - width = _np_src_image.shape[0] - height = _np_src_image.shape[1] - num_channels = _np_src_image.shape[2] - - np_src_image = _np_src_image[:] * (1. - np_mask_rgb) - np_mask_grey = (np.sum(np_mask_rgb, axis=2) / 3.) - img_mask = np_mask_grey > 1e-6 - ref_mask = np_mask_grey < 1e-3 - - windowed_image = _np_src_image * (1. - _get_masked_window_rgb(np_mask_grey)) - windowed_image /= np.max(windowed_image) - windowed_image += np.average(_np_src_image) * np_mask_rgb # / (1.-np.average(np_mask_rgb)) # rather than leave the masked area black, we get better results from fft by filling the average unmasked color - - src_fft = _fft2(windowed_image) # get feature statistics from masked src img - src_dist = np.absolute(src_fft) - src_phase = src_fft / src_dist - - # create a generator with a static seed to make outpainting deterministic / only follow global seed - rng = np.random.default_rng(0) - - noise_window = _get_gaussian_window(width, height, mode=1) # start with simple gaussian noise - noise_rgb = rng.random((width, height, num_channels)) - noise_grey = (np.sum(noise_rgb, axis=2) / 3.) - noise_rgb *= color_variation # the colorfulness of the starting noise is blended to greyscale with a parameter - for c in range(num_channels): - noise_rgb[:, :, c] += (1. - color_variation) * noise_grey - - noise_fft = _fft2(noise_rgb) - for c in range(num_channels): - noise_fft[:, :, c] *= noise_window - noise_rgb = np.real(_ifft2(noise_fft)) - shaped_noise_fft = _fft2(noise_rgb) - shaped_noise_fft[:, :, :] = np.absolute(shaped_noise_fft[:, :, :]) ** 2 * (src_dist ** noise_q) * src_phase # perform the actual shaping - - brightness_variation = 0. # color_variation # todo: temporarily tieing brightness variation to color variation for now - contrast_adjusted_np_src = _np_src_image[:] * (brightness_variation + 1.) - brightness_variation * 2. - - # scikit-image is used for histogram matching, very convenient! - shaped_noise = np.real(_ifft2(shaped_noise_fft)) - shaped_noise -= np.min(shaped_noise) - shaped_noise /= np.max(shaped_noise) - shaped_noise[img_mask, :] = skimage.exposure.match_histograms(shaped_noise[img_mask, :] ** 1., contrast_adjusted_np_src[ref_mask, :], channel_axis=1) - shaped_noise = _np_src_image[:] * (1. - np_mask_rgb) + shaped_noise * np_mask_rgb - - matched_noise = shaped_noise[:] - - return np.clip(matched_noise, 0., 1.) - - - -class Script(scripts.Script): - def title(self): - return "Outpainting mk2" - - def show(self, is_img2img): - return is_img2img - - def ui(self, is_img2img): - if not is_img2img: - return None - - info = gr.HTML("

      Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8
      ") - - pixels = gr.Slider(label="Pixels to expand", minimum=8, maximum=256, step=8, value=128) - mask_blur = gr.Slider(label='Mask blur', minimum=0, maximum=64, step=1, value=8, visible=False) - direction = gr.CheckboxGroup(label="Outpainting direction", choices=['left', 'right', 'up', 'down'], value=['left', 'right', 'up', 'down']) - noise_q = gr.Slider(label="Fall-off exponent (lower=higher detail)", minimum=0.0, maximum=4.0, step=0.01, value=1.0) - color_variation = gr.Slider(label="Color variation", minimum=0.0, maximum=1.0, step=0.01, value=0.05) - - return [info, pixels, mask_blur, direction, noise_q, color_variation] - - def run(self, p, _, pixels, mask_blur, direction, noise_q, color_variation): - initial_seed_and_info = [None, None] - - process_width = p.width - process_height = p.height - - p.mask_blur = mask_blur*4 - p.inpaint_full_res = False - p.inpainting_fill = 1 - p.do_not_save_samples = True - p.do_not_save_grid = True - - left = pixels if "left" in direction else 0 - right = pixels if "right" in direction else 0 - up = pixels if "up" in direction else 0 - down = pixels if "down" in direction else 0 - - init_img = p.init_images[0] - target_w = math.ceil((init_img.width + left + right) / 64) * 64 - target_h = math.ceil((init_img.height + up + down) / 64) * 64 - - if left > 0: - left = left * (target_w - init_img.width) // (left + right) - - if right > 0: - right = target_w - init_img.width - left - - if up > 0: - up = up * (target_h - init_img.height) // (up + down) - - if down > 0: - down = target_h - init_img.height - up - - init_image = p.init_images[0] - - state.job_count = (1 if left > 0 else 0) + (1 if right > 0 else 0) + (1 if up > 0 else 0) + (1 if down > 0 else 0) - - def expand(init, expand_pixels, is_left=False, is_right=False, is_top=False, is_bottom=False): - is_horiz = is_left or is_right - is_vert = is_top or is_bottom - pixels_horiz = expand_pixels if is_horiz else 0 - pixels_vert = expand_pixels if is_vert else 0 - - res_w = init.width + pixels_horiz - res_h = init.height + pixels_vert - process_res_w = math.ceil(res_w / 64) * 64 - process_res_h = math.ceil(res_h / 64) * 64 - - img = Image.new("RGB", (process_res_w, process_res_h)) - img.paste(init, (pixels_horiz if is_left else 0, pixels_vert if is_top else 0)) - mask = Image.new("RGB", (process_res_w, process_res_h), "white") - draw = ImageDraw.Draw(mask) - draw.rectangle(( - expand_pixels + mask_blur if is_left else 0, - expand_pixels + mask_blur if is_top else 0, - mask.width - expand_pixels - mask_blur if is_right else res_w, - mask.height - expand_pixels - mask_blur if is_bottom else res_h, - ), fill="black") - - np_image = (np.asarray(img) / 255.0).astype(np.float64) - np_mask = (np.asarray(mask) / 255.0).astype(np.float64) - noised = get_matched_noise(np_image, np_mask, noise_q, color_variation) - out = Image.fromarray(np.clip(noised * 255., 0., 255.).astype(np.uint8), mode="RGB") - - target_width = min(process_width, init.width + pixels_horiz) if is_horiz else img.width - target_height = min(process_height, init.height + pixels_vert) if is_vert else img.height - - crop_region = ( - 0 if is_left else out.width - target_width, - 0 if is_top else out.height - target_height, - target_width if is_left else out.width, - target_height if is_top else out.height, - ) - - image_to_process = out.crop(crop_region) - mask = mask.crop(crop_region) - - p.width = target_width if is_horiz else img.width - p.height = target_height if is_vert else img.height - p.init_images = [image_to_process] - p.image_mask = 
mask - - latent_mask = Image.new("RGB", (p.width, p.height), "white") - draw = ImageDraw.Draw(latent_mask) - draw.rectangle(( - expand_pixels + mask_blur * 2 if is_left else 0, - expand_pixels + mask_blur * 2 if is_top else 0, - mask.width - expand_pixels - mask_blur * 2 if is_right else res_w, - mask.height - expand_pixels - mask_blur * 2 if is_bottom else res_h, - ), fill="black") - p.latent_mask = latent_mask - - proc = process_images(p) - proc_img = proc.images[0] - - if initial_seed_and_info[0] is None: - initial_seed_and_info[0] = proc.seed - initial_seed_and_info[1] = proc.info - - out.paste(proc_img, (0 if is_left else out.width - proc_img.width, 0 if is_top else out.height - proc_img.height)) - out = out.crop((0, 0, res_w, res_h)) - return out - - img = init_image - - if left > 0: - img = expand(img, left, is_left=True) - if right > 0: - img = expand(img, right, is_right=True) - if up > 0: - img = expand(img, up, is_top=True) - if down > 0: - img = expand(img, down, is_bottom=True) - - res = Processed(p, [img], initial_seed_and_info[0], initial_seed_and_info[1]) - - if opts.samples_save: - images.save_image(img, p.outpath_samples, "", res.seed, p.prompt, opts.grid_format, info=res.info, p=p) - - return res - diff --git a/spaces/cynika/taffy/resample.py b/spaces/cynika/taffy/resample.py deleted file mode 100644 index fabae4afbb330cccad1681b7941a63547c93c640..0000000000000000000000000000000000000000 --- a/spaces/cynika/taffy/resample.py +++ /dev/null @@ -1,47 +0,0 @@ -import os -import argparse -import librosa -import numpy as np -from multiprocessing import Pool, cpu_count -from scipy.io import wavfile -from tqdm import tqdm - - -def process(item): - spkdir, wav_name, args = item - # speaker 's5', 'p280', 'p315' are excluded, - speaker = spkdir.split(os.sep)[-1] - wav_path = os.path.join(args.in_dir, speaker, wav_name) - if os.path.exists(wav_path) and '.wav' in wav_path: - os.makedirs(os.path.join(args.out_dir2, speaker), exist_ok=True) - wav, sr = librosa.load(wav_path, None) - wav, _ = librosa.effects.trim(wav, top_db=20) - peak = np.abs(wav).max() - if peak > 1.0: - wav = 0.98 * wav / peak - wav2 = librosa.resample(wav, orig_sr=sr, target_sr=args.sr2) - save_name = wav_name - save_path2 = os.path.join(args.out_dir2, speaker, save_name) - wavfile.write( - save_path2, - args.sr2, - (wav2 * np.iinfo(np.int16).max).astype(np.int16) - ) - - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--sr2", type=int, default=32000, help="sampling rate") - parser.add_argument("--in_dir", type=str, default="./dataset_raw", help="path to source dir") - parser.add_argument("--out_dir2", type=str, default="./dataset/32k", help="path to target dir") - args = parser.parse_args() - processs = cpu_count()-2 if cpu_count() >4 else 1 - pool = Pool(processes=processs) - - for speaker in os.listdir(args.in_dir): - spk_dir = os.path.join(args.in_dir, speaker) - if os.path.isdir(spk_dir): - print(spk_dir) - for _ in tqdm(pool.imap_unordered(process, [(spk_dir, i, args) for i in os.listdir(spk_dir) if i.endswith("wav")])): - pass diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/vegalite/v5/schema/mixins.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/vegalite/v5/schema/mixins.py deleted file mode 100644 index 569daefb8f3f00c519d350de98e542c7562db1b6..0000000000000000000000000000000000000000 --- 
a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/vegalite/v5/schema/mixins.py +++ /dev/null @@ -1,1292 +0,0 @@ -# The contents of this file are automatically written by -# tools/generate_schema_wrapper.py. Do not modify directly. -import sys - -from . import core -from altair.utils import use_signature -from altair.utils.schemapi import Undefined - -if sys.version_info >= (3, 11): - from typing import Self -else: - from typing_extensions import Self - - -class MarkMethodMixin: - """A mixin class that defines mark methods""" - - def mark_arc(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, dir=Undefined, - discreteBandSize=Undefined, dx=Undefined, dy=Undefined, ellipsis=Undefined, - fill=Undefined, fillOpacity=Undefined, filled=Undefined, font=Undefined, - fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, height=Undefined, - href=Undefined, innerRadius=Undefined, interpolate=Undefined, invalid=Undefined, - limit=Undefined, line=Undefined, lineBreak=Undefined, lineHeight=Undefined, - opacity=Undefined, order=Undefined, orient=Undefined, outerRadius=Undefined, - padAngle=Undefined, point=Undefined, radius=Undefined, radius2=Undefined, - radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, size=Undefined, - smooth=Undefined, stroke=Undefined, strokeCap=Undefined, strokeDash=Undefined, - strokeDashOffset=Undefined, strokeJoin=Undefined, strokeMiterLimit=Undefined, - strokeOffset=Undefined, strokeOpacity=Undefined, strokeWidth=Undefined, - style=Undefined, tension=Undefined, text=Undefined, theta=Undefined, theta2=Undefined, - theta2Offset=Undefined, thetaOffset=Undefined, thickness=Undefined, - timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, tooltip=Undefined, - url=Undefined, width=Undefined, x=Undefined, x2=Undefined, x2Offset=Undefined, - xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, yOffset=Undefined, - **kwds) -> Self: - """Set the chart's mark to 'arc' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - 
radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="arc", **kwds) - else: - copy.mark = "arc" - return copy - - def mark_area(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'area' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - 
height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="area", **kwds) - else: - copy.mark = "area" - return copy - - def mark_bar(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, dir=Undefined, - discreteBandSize=Undefined, dx=Undefined, dy=Undefined, ellipsis=Undefined, - fill=Undefined, fillOpacity=Undefined, filled=Undefined, font=Undefined, - fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, height=Undefined, - href=Undefined, innerRadius=Undefined, interpolate=Undefined, invalid=Undefined, - limit=Undefined, line=Undefined, lineBreak=Undefined, lineHeight=Undefined, - opacity=Undefined, order=Undefined, orient=Undefined, outerRadius=Undefined, - padAngle=Undefined, point=Undefined, radius=Undefined, radius2=Undefined, - radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, size=Undefined, - smooth=Undefined, stroke=Undefined, strokeCap=Undefined, strokeDash=Undefined, - strokeDashOffset=Undefined, strokeJoin=Undefined, strokeMiterLimit=Undefined, - strokeOffset=Undefined, strokeOpacity=Undefined, strokeWidth=Undefined, - style=Undefined, tension=Undefined, text=Undefined, theta=Undefined, theta2=Undefined, - theta2Offset=Undefined, thetaOffset=Undefined, thickness=Undefined, - timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, tooltip=Undefined, - url=Undefined, width=Undefined, x=Undefined, x2=Undefined, x2Offset=Undefined, - xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, yOffset=Undefined, - **kwds) -> Self: - """Set the chart's mark to 'bar' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - 
cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="bar", **kwds) - else: - copy.mark = "bar" - return copy - - def mark_image(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'image' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, 
bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="image", **kwds) - else: - copy.mark = "image" - return copy - - def mark_line(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - 
x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'line' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="line", **kwds) - else: - copy.mark = "line" - return copy - - def mark_point(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, 
style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'point' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="point", **kwds) - else: - copy.mark = "point" - return copy - - def mark_rect(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, 
radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'rect' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="rect", **kwds) - else: - copy.mark = "rect" - return copy - - def mark_rule(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, 
innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'rule' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="rule", **kwds) - else: - copy.mark = "rule" - return copy - - def mark_text(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, 
cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'text' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="text", **kwds) - else: - copy.mark = "text" - return copy - - def mark_tick(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - 
baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'tick' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val 
is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="tick", **kwds) - else: - copy.mark = "tick" - return copy - - def mark_trail(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'trail' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - 
thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="trail", **kwds) - else: - copy.mark = "trail" - return copy - - def mark_circle(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, - y2Offset=Undefined, yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'circle' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, 
strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="circle", **kwds) - else: - copy.mark = "circle" - return copy - - def mark_square(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, - y2Offset=Undefined, yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'square' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, 
limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="square", **kwds) - else: - copy.mark = "square" - return copy - - def mark_geoshape(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, - shape=Undefined, size=Undefined, smooth=Undefined, stroke=Undefined, - strokeCap=Undefined, strokeDash=Undefined, strokeDashOffset=Undefined, - strokeJoin=Undefined, strokeMiterLimit=Undefined, strokeOffset=Undefined, - strokeOpacity=Undefined, strokeWidth=Undefined, style=Undefined, - tension=Undefined, text=Undefined, theta=Undefined, theta2=Undefined, - theta2Offset=Undefined, thetaOffset=Undefined, thickness=Undefined, - timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, tooltip=Undefined, - url=Undefined, width=Undefined, x=Undefined, x2=Undefined, x2Offset=Undefined, - xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'geoshape' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, 
description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - opacity=opacity, order=order, orient=orient, outerRadius=outerRadius, - padAngle=padAngle, point=point, radius=radius, radius2=radius2, - radius2Offset=radius2Offset, radiusOffset=radiusOffset, shape=shape, size=size, - smooth=smooth, stroke=stroke, strokeCap=strokeCap, strokeDash=strokeDash, - strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="geoshape", **kwds) - else: - copy.mark = "geoshape" - return copy - - def mark_boxplot(self, box=Undefined, clip=Undefined, color=Undefined, extent=Undefined, - invalid=Undefined, median=Undefined, opacity=Undefined, orient=Undefined, - outliers=Undefined, rule=Undefined, size=Undefined, ticks=Undefined, **kwds) -> Self: - """Set the chart's mark to 'boxplot' (see :class:`BoxPlotDef`) - """ - kwds = dict(box=box, clip=clip, color=color, extent=extent, invalid=invalid, median=median, - opacity=opacity, orient=orient, outliers=outliers, rule=rule, size=size, - ticks=ticks, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.BoxPlotDef(type="boxplot", **kwds) - else: - copy.mark = "boxplot" - return copy - - def mark_errorbar(self, clip=Undefined, color=Undefined, extent=Undefined, opacity=Undefined, - orient=Undefined, rule=Undefined, size=Undefined, thickness=Undefined, - ticks=Undefined, **kwds) -> Self: - """Set the chart's mark to 'errorbar' (see :class:`ErrorBarDef`) - """ - kwds = dict(clip=clip, color=color, extent=extent, opacity=opacity, orient=orient, rule=rule, - size=size, thickness=thickness, ticks=ticks, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.ErrorBarDef(type="errorbar", **kwds) - else: - copy.mark = "errorbar" - return copy - - def mark_errorband(self, band=Undefined, borders=Undefined, clip=Undefined, color=Undefined, - extent=Undefined, interpolate=Undefined, opacity=Undefined, orient=Undefined, - tension=Undefined, **kwds) -> Self: - """Set the chart's mark to 'errorband' (see :class:`ErrorBandDef`) - """ - kwds = dict(band=band, borders=borders, clip=clip, color=color, extent=extent, - interpolate=interpolate, opacity=opacity, orient=orient, tension=tension, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.ErrorBandDef(type="errorband", **kwds) - else: - copy.mark = "errorband" - return copy - - -class ConfigMethodMixin: - """A mixin class that defines config methods""" - - @use_signature(core.Config) - def configure(self, *args, 
**kwargs) -> Self: - copy = self.copy(deep=False) - copy.config = core.Config(*args, **kwargs) - return copy - - @use_signature(core.RectConfig) - def configure_arc(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["arc"] = core.RectConfig(*args, **kwargs) - return copy - - @use_signature(core.AreaConfig) - def configure_area(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["area"] = core.AreaConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axis(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axis"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisBand(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisBand"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisBottom(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisBottom"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisDiscrete(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisDiscrete"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisLeft(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisLeft"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisPoint(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisPoint"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisQuantitative(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisQuantitative"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisRight(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisRight"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisTemporal(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisTemporal"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisTop(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisTop"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisX(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisX"] = core.AxisConfig(*args, **kwargs) - return 
copy - - @use_signature(core.AxisConfig) - def configure_axisXBand(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisXBand"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisXDiscrete(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisXDiscrete"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisXPoint(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisXPoint"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisXQuantitative(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisXQuantitative"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisXTemporal(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisXTemporal"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisY(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisY"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisYBand(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisYBand"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisYDiscrete(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisYDiscrete"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisYPoint(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisYPoint"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisYQuantitative(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisYQuantitative"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisYTemporal(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisYTemporal"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.BarConfig) - def configure_bar(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["bar"] = core.BarConfig(*args, **kwargs) - return copy - - @use_signature(core.BoxPlotConfig) - def configure_boxplot(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["boxplot"] = core.BoxPlotConfig(*args, **kwargs) - return copy - - 
@use_signature(core.MarkConfig) - def configure_circle(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["circle"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.CompositionConfig) - def configure_concat(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["concat"] = core.CompositionConfig(*args, **kwargs) - return copy - - @use_signature(core.ErrorBandConfig) - def configure_errorband(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["errorband"] = core.ErrorBandConfig(*args, **kwargs) - return copy - - @use_signature(core.ErrorBarConfig) - def configure_errorbar(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["errorbar"] = core.ErrorBarConfig(*args, **kwargs) - return copy - - @use_signature(core.CompositionConfig) - def configure_facet(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["facet"] = core.CompositionConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_geoshape(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["geoshape"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.HeaderConfig) - def configure_header(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["header"] = core.HeaderConfig(*args, **kwargs) - return copy - - @use_signature(core.HeaderConfig) - def configure_headerColumn(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["headerColumn"] = core.HeaderConfig(*args, **kwargs) - return copy - - @use_signature(core.HeaderConfig) - def configure_headerFacet(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["headerFacet"] = core.HeaderConfig(*args, **kwargs) - return copy - - @use_signature(core.HeaderConfig) - def configure_headerRow(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["headerRow"] = core.HeaderConfig(*args, **kwargs) - return copy - - @use_signature(core.RectConfig) - def configure_image(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["image"] = core.RectConfig(*args, **kwargs) - return copy - - @use_signature(core.LegendConfig) - def configure_legend(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["legend"] = core.LegendConfig(*args, **kwargs) - return copy - - @use_signature(core.LineConfig) - def configure_line(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["line"] = core.LineConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_mark(self, 
*args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["mark"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_point(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["point"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.ProjectionConfig) - def configure_projection(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["projection"] = core.ProjectionConfig(*args, **kwargs) - return copy - - @use_signature(core.RangeConfig) - def configure_range(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["range"] = core.RangeConfig(*args, **kwargs) - return copy - - @use_signature(core.RectConfig) - def configure_rect(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["rect"] = core.RectConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_rule(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["rule"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.ScaleConfig) - def configure_scale(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["scale"] = core.ScaleConfig(*args, **kwargs) - return copy - - @use_signature(core.SelectionConfig) - def configure_selection(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["selection"] = core.SelectionConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_square(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["square"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_text(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["text"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.TickConfig) - def configure_tick(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["tick"] = core.TickConfig(*args, **kwargs) - return copy - - @use_signature(core.TitleConfig) - def configure_title(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["title"] = core.TitleConfig(*args, **kwargs) - return copy - - @use_signature(core.LineConfig) - def configure_trail(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["trail"] = core.LineConfig(*args, **kwargs) - return copy - - @use_signature(core.ViewConfig) - def configure_view(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["view"] = 
core.ViewConfig(*args, **kwargs) - return copy \ No newline at end of file diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/attr/setters.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/attr/setters.py deleted file mode 100644 index 12ed6750df35b96e2ccde24a9752dca22929188d..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/attr/setters.py +++ /dev/null @@ -1,73 +0,0 @@ -# SPDX-License-Identifier: MIT - -""" -Commonly used hooks for on_setattr. -""" - - -from . import _config -from .exceptions import FrozenAttributeError - - -def pipe(*setters): - """ - Run all *setters* and return the return value of the last one. - - .. versionadded:: 20.1.0 - """ - - def wrapped_pipe(instance, attrib, new_value): - rv = new_value - - for setter in setters: - rv = setter(instance, attrib, rv) - - return rv - - return wrapped_pipe - - -def frozen(_, __, ___): - """ - Prevent an attribute to be modified. - - .. versionadded:: 20.1.0 - """ - raise FrozenAttributeError() - - -def validate(instance, attrib, new_value): - """ - Run *attrib*'s validator on *new_value* if it has one. - - .. versionadded:: 20.1.0 - """ - if _config._run_validators is False: - return new_value - - v = attrib.validator - if not v: - return new_value - - v(instance, attrib, new_value) - - return new_value - - -def convert(instance, attrib, new_value): - """ - Run *attrib*'s converter -- if it has one -- on *new_value* and return the - result. - - .. versionadded:: 20.1.0 - """ - c = attrib.converter - if c: - return c(new_value) - - return new_value - - -# Sentinel for disabling class-wide *on_setattr* hooks for certain attributes. -# autodata stopped working, so the docstring is inlined in the API docs. -NO_OP = object() diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/otTraverse.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/otTraverse.py deleted file mode 100644 index bf22dcfdb500cd50525fce749562384a82b1cb0f..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/otTraverse.py +++ /dev/null @@ -1,161 +0,0 @@ -"""Methods for traversing trees of otData-driven OpenType tables.""" -from collections import deque -from typing import Callable, Deque, Iterable, List, Optional, Tuple -from .otBase import BaseTable - - -__all__ = [ - "bfs_base_table", - "dfs_base_table", - "SubTablePath", -] - - -class SubTablePath(Tuple[BaseTable.SubTableEntry, ...]): - def __str__(self) -> str: - path_parts = [] - for entry in self: - path_part = entry.name - if entry.index is not None: - path_part += f"[{entry.index}]" - path_parts.append(path_part) - return ".".join(path_parts) - - -# Given f(current frontier, new entries) add new entries to frontier -AddToFrontierFn = Callable[[Deque[SubTablePath], List[SubTablePath]], None] - - -def dfs_base_table( - root: BaseTable, - root_accessor: Optional[str] = None, - skip_root: bool = False, - predicate: Optional[Callable[[SubTablePath], bool]] = None, - iter_subtables_fn: Optional[ - Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]] - ] = None, -) -> Iterable[SubTablePath]: - """Depth-first search tree of BaseTables. - - Args: - root (BaseTable): the root of the tree. 
- root_accessor (Optional[str]): attribute name for the root table, if any (mostly - useful for debugging). - skip_root (Optional[bool]): if True, the root itself is not visited, only its - children. - predicate (Optional[Callable[[SubTablePath], bool]]): function to filter out - paths. If True, the path is yielded and its subtables are added to the - queue. If False, the path is skipped and its subtables are not traversed. - iter_subtables_fn (Optional[Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]]]): - function to iterate over subtables of a table. If None, the default - BaseTable.iterSubTables() is used. - - Yields: - SubTablePath: tuples of BaseTable.SubTableEntry(name, table, index) namedtuples - for each of the nodes in the tree. The last entry in a path is the current - subtable, whereas preceding ones refer to its parent tables all the way up to - the root. - """ - yield from _traverse_ot_data( - root, - root_accessor, - skip_root, - predicate, - lambda frontier, new: frontier.extendleft(reversed(new)), - iter_subtables_fn, - ) - - -def bfs_base_table( - root: BaseTable, - root_accessor: Optional[str] = None, - skip_root: bool = False, - predicate: Optional[Callable[[SubTablePath], bool]] = None, - iter_subtables_fn: Optional[ - Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]] - ] = None, -) -> Iterable[SubTablePath]: - """Breadth-first search tree of BaseTables. - - Args: - the root of the tree. - root_accessor (Optional[str]): attribute name for the root table, if any (mostly - useful for debugging). - skip_root (Optional[bool]): if True, the root itself is not visited, only its - children. - predicate (Optional[Callable[[SubTablePath], bool]]): function to filter out - paths. If True, the path is yielded and its subtables are added to the - queue. If False, the path is skipped and its subtables are not traversed. - iter_subtables_fn (Optional[Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]]]): - function to iterate over subtables of a table. If None, the default - BaseTable.iterSubTables() is used. - - Yields: - SubTablePath: tuples of BaseTable.SubTableEntry(name, table, index) namedtuples - for each of the nodes in the tree. The last entry in a path is the current - subtable, whereas preceding ones refer to its parent tables all the way up to - the root. - """ - yield from _traverse_ot_data( - root, - root_accessor, - skip_root, - predicate, - lambda frontier, new: frontier.extend(new), - iter_subtables_fn, - ) - - -def _traverse_ot_data( - root: BaseTable, - root_accessor: Optional[str], - skip_root: bool, - predicate: Optional[Callable[[SubTablePath], bool]], - add_to_frontier_fn: AddToFrontierFn, - iter_subtables_fn: Optional[ - Callable[[BaseTable], Iterable[BaseTable.SubTableEntry]] - ] = None, -) -> Iterable[SubTablePath]: - # no visited because general otData cannot cycle (forward-offset only) - if root_accessor is None: - root_accessor = type(root).__name__ - - if predicate is None: - - def predicate(path): - return True - - if iter_subtables_fn is None: - - def iter_subtables_fn(table): - return table.iterSubTables() - - frontier: Deque[SubTablePath] = deque() - - root_entry = BaseTable.SubTableEntry(root_accessor, root) - if not skip_root: - frontier.append((root_entry,)) - else: - add_to_frontier_fn( - frontier, - [ - (root_entry, subtable_entry) - for subtable_entry in iter_subtables_fn(root) - ], - ) - - while frontier: - # path is (value, attr_name) tuples. 
attr_name is attr of parent to get value - path = frontier.popleft() - current = path[-1].value - - if not predicate(path): - continue - - yield SubTablePath(path) - - new_entries = [ - path + (subtable_entry,) for subtable_entry in iter_subtables_fn(current) - ] - - add_to_frontier_fn(frontier, new_entries) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/unicodedata/Blocks.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/unicodedata/Blocks.py deleted file mode 100644 index b35c93d9b6fa563d1ba5ec162dd5e06d867d033a..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/unicodedata/Blocks.py +++ /dev/null @@ -1,779 +0,0 @@ -# -*- coding: utf-8 -*- -# -# NOTE: This file was auto-generated with MetaTools/buildUCD.py. -# Source: https://unicode.org/Public/UNIDATA/Blocks.txt -# License: http://unicode.org/copyright.html#License -# -# Blocks-15.0.0.txt -# Date: 2022-01-28, 20:58:00 GMT [KW] -# © 2022 Unicode®, Inc. -# For terms of use, see https://www.unicode.org/terms_of_use.html -# -# Unicode Character Database -# For documentation, see https://www.unicode.org/reports/tr44/ -# -# Format: -# Start Code..End Code; Block Name - - -RANGES = [ - 0x0000, # .. 0x007F ; Basic Latin - 0x0080, # .. 0x00FF ; Latin-1 Supplement - 0x0100, # .. 0x017F ; Latin Extended-A - 0x0180, # .. 0x024F ; Latin Extended-B - 0x0250, # .. 0x02AF ; IPA Extensions - 0x02B0, # .. 0x02FF ; Spacing Modifier Letters - 0x0300, # .. 0x036F ; Combining Diacritical Marks - 0x0370, # .. 0x03FF ; Greek and Coptic - 0x0400, # .. 0x04FF ; Cyrillic - 0x0500, # .. 0x052F ; Cyrillic Supplement - 0x0530, # .. 0x058F ; Armenian - 0x0590, # .. 0x05FF ; Hebrew - 0x0600, # .. 0x06FF ; Arabic - 0x0700, # .. 0x074F ; Syriac - 0x0750, # .. 0x077F ; Arabic Supplement - 0x0780, # .. 0x07BF ; Thaana - 0x07C0, # .. 0x07FF ; NKo - 0x0800, # .. 0x083F ; Samaritan - 0x0840, # .. 0x085F ; Mandaic - 0x0860, # .. 0x086F ; Syriac Supplement - 0x0870, # .. 0x089F ; Arabic Extended-B - 0x08A0, # .. 0x08FF ; Arabic Extended-A - 0x0900, # .. 0x097F ; Devanagari - 0x0980, # .. 0x09FF ; Bengali - 0x0A00, # .. 0x0A7F ; Gurmukhi - 0x0A80, # .. 0x0AFF ; Gujarati - 0x0B00, # .. 0x0B7F ; Oriya - 0x0B80, # .. 0x0BFF ; Tamil - 0x0C00, # .. 0x0C7F ; Telugu - 0x0C80, # .. 0x0CFF ; Kannada - 0x0D00, # .. 0x0D7F ; Malayalam - 0x0D80, # .. 0x0DFF ; Sinhala - 0x0E00, # .. 0x0E7F ; Thai - 0x0E80, # .. 0x0EFF ; Lao - 0x0F00, # .. 0x0FFF ; Tibetan - 0x1000, # .. 0x109F ; Myanmar - 0x10A0, # .. 0x10FF ; Georgian - 0x1100, # .. 0x11FF ; Hangul Jamo - 0x1200, # .. 0x137F ; Ethiopic - 0x1380, # .. 0x139F ; Ethiopic Supplement - 0x13A0, # .. 0x13FF ; Cherokee - 0x1400, # .. 0x167F ; Unified Canadian Aboriginal Syllabics - 0x1680, # .. 0x169F ; Ogham - 0x16A0, # .. 0x16FF ; Runic - 0x1700, # .. 0x171F ; Tagalog - 0x1720, # .. 0x173F ; Hanunoo - 0x1740, # .. 0x175F ; Buhid - 0x1760, # .. 0x177F ; Tagbanwa - 0x1780, # .. 0x17FF ; Khmer - 0x1800, # .. 0x18AF ; Mongolian - 0x18B0, # .. 0x18FF ; Unified Canadian Aboriginal Syllabics Extended - 0x1900, # .. 0x194F ; Limbu - 0x1950, # .. 0x197F ; Tai Le - 0x1980, # .. 0x19DF ; New Tai Lue - 0x19E0, # .. 0x19FF ; Khmer Symbols - 0x1A00, # .. 0x1A1F ; Buginese - 0x1A20, # .. 0x1AAF ; Tai Tham - 0x1AB0, # .. 0x1AFF ; Combining Diacritical Marks Extended - 0x1B00, # .. 0x1B7F ; Balinese - 0x1B80, # .. 0x1BBF ; Sundanese - 0x1BC0, # .. 0x1BFF ; Batak - 0x1C00, # .. 
0x1C4F ; Lepcha - 0x1C50, # .. 0x1C7F ; Ol Chiki - 0x1C80, # .. 0x1C8F ; Cyrillic Extended-C - 0x1C90, # .. 0x1CBF ; Georgian Extended - 0x1CC0, # .. 0x1CCF ; Sundanese Supplement - 0x1CD0, # .. 0x1CFF ; Vedic Extensions - 0x1D00, # .. 0x1D7F ; Phonetic Extensions - 0x1D80, # .. 0x1DBF ; Phonetic Extensions Supplement - 0x1DC0, # .. 0x1DFF ; Combining Diacritical Marks Supplement - 0x1E00, # .. 0x1EFF ; Latin Extended Additional - 0x1F00, # .. 0x1FFF ; Greek Extended - 0x2000, # .. 0x206F ; General Punctuation - 0x2070, # .. 0x209F ; Superscripts and Subscripts - 0x20A0, # .. 0x20CF ; Currency Symbols - 0x20D0, # .. 0x20FF ; Combining Diacritical Marks for Symbols - 0x2100, # .. 0x214F ; Letterlike Symbols - 0x2150, # .. 0x218F ; Number Forms - 0x2190, # .. 0x21FF ; Arrows - 0x2200, # .. 0x22FF ; Mathematical Operators - 0x2300, # .. 0x23FF ; Miscellaneous Technical - 0x2400, # .. 0x243F ; Control Pictures - 0x2440, # .. 0x245F ; Optical Character Recognition - 0x2460, # .. 0x24FF ; Enclosed Alphanumerics - 0x2500, # .. 0x257F ; Box Drawing - 0x2580, # .. 0x259F ; Block Elements - 0x25A0, # .. 0x25FF ; Geometric Shapes - 0x2600, # .. 0x26FF ; Miscellaneous Symbols - 0x2700, # .. 0x27BF ; Dingbats - 0x27C0, # .. 0x27EF ; Miscellaneous Mathematical Symbols-A - 0x27F0, # .. 0x27FF ; Supplemental Arrows-A - 0x2800, # .. 0x28FF ; Braille Patterns - 0x2900, # .. 0x297F ; Supplemental Arrows-B - 0x2980, # .. 0x29FF ; Miscellaneous Mathematical Symbols-B - 0x2A00, # .. 0x2AFF ; Supplemental Mathematical Operators - 0x2B00, # .. 0x2BFF ; Miscellaneous Symbols and Arrows - 0x2C00, # .. 0x2C5F ; Glagolitic - 0x2C60, # .. 0x2C7F ; Latin Extended-C - 0x2C80, # .. 0x2CFF ; Coptic - 0x2D00, # .. 0x2D2F ; Georgian Supplement - 0x2D30, # .. 0x2D7F ; Tifinagh - 0x2D80, # .. 0x2DDF ; Ethiopic Extended - 0x2DE0, # .. 0x2DFF ; Cyrillic Extended-A - 0x2E00, # .. 0x2E7F ; Supplemental Punctuation - 0x2E80, # .. 0x2EFF ; CJK Radicals Supplement - 0x2F00, # .. 0x2FDF ; Kangxi Radicals - 0x2FE0, # .. 0x2FEF ; No_Block - 0x2FF0, # .. 0x2FFF ; Ideographic Description Characters - 0x3000, # .. 0x303F ; CJK Symbols and Punctuation - 0x3040, # .. 0x309F ; Hiragana - 0x30A0, # .. 0x30FF ; Katakana - 0x3100, # .. 0x312F ; Bopomofo - 0x3130, # .. 0x318F ; Hangul Compatibility Jamo - 0x3190, # .. 0x319F ; Kanbun - 0x31A0, # .. 0x31BF ; Bopomofo Extended - 0x31C0, # .. 0x31EF ; CJK Strokes - 0x31F0, # .. 0x31FF ; Katakana Phonetic Extensions - 0x3200, # .. 0x32FF ; Enclosed CJK Letters and Months - 0x3300, # .. 0x33FF ; CJK Compatibility - 0x3400, # .. 0x4DBF ; CJK Unified Ideographs Extension A - 0x4DC0, # .. 0x4DFF ; Yijing Hexagram Symbols - 0x4E00, # .. 0x9FFF ; CJK Unified Ideographs - 0xA000, # .. 0xA48F ; Yi Syllables - 0xA490, # .. 0xA4CF ; Yi Radicals - 0xA4D0, # .. 0xA4FF ; Lisu - 0xA500, # .. 0xA63F ; Vai - 0xA640, # .. 0xA69F ; Cyrillic Extended-B - 0xA6A0, # .. 0xA6FF ; Bamum - 0xA700, # .. 0xA71F ; Modifier Tone Letters - 0xA720, # .. 0xA7FF ; Latin Extended-D - 0xA800, # .. 0xA82F ; Syloti Nagri - 0xA830, # .. 0xA83F ; Common Indic Number Forms - 0xA840, # .. 0xA87F ; Phags-pa - 0xA880, # .. 0xA8DF ; Saurashtra - 0xA8E0, # .. 0xA8FF ; Devanagari Extended - 0xA900, # .. 0xA92F ; Kayah Li - 0xA930, # .. 0xA95F ; Rejang - 0xA960, # .. 0xA97F ; Hangul Jamo Extended-A - 0xA980, # .. 0xA9DF ; Javanese - 0xA9E0, # .. 0xA9FF ; Myanmar Extended-B - 0xAA00, # .. 0xAA5F ; Cham - 0xAA60, # .. 0xAA7F ; Myanmar Extended-A - 0xAA80, # .. 0xAADF ; Tai Viet - 0xAAE0, # .. 0xAAFF ; Meetei Mayek Extensions - 0xAB00, # .. 
0xAB2F ; Ethiopic Extended-A - 0xAB30, # .. 0xAB6F ; Latin Extended-E - 0xAB70, # .. 0xABBF ; Cherokee Supplement - 0xABC0, # .. 0xABFF ; Meetei Mayek - 0xAC00, # .. 0xD7AF ; Hangul Syllables - 0xD7B0, # .. 0xD7FF ; Hangul Jamo Extended-B - 0xD800, # .. 0xDB7F ; High Surrogates - 0xDB80, # .. 0xDBFF ; High Private Use Surrogates - 0xDC00, # .. 0xDFFF ; Low Surrogates - 0xE000, # .. 0xF8FF ; Private Use Area - 0xF900, # .. 0xFAFF ; CJK Compatibility Ideographs - 0xFB00, # .. 0xFB4F ; Alphabetic Presentation Forms - 0xFB50, # .. 0xFDFF ; Arabic Presentation Forms-A - 0xFE00, # .. 0xFE0F ; Variation Selectors - 0xFE10, # .. 0xFE1F ; Vertical Forms - 0xFE20, # .. 0xFE2F ; Combining Half Marks - 0xFE30, # .. 0xFE4F ; CJK Compatibility Forms - 0xFE50, # .. 0xFE6F ; Small Form Variants - 0xFE70, # .. 0xFEFF ; Arabic Presentation Forms-B - 0xFF00, # .. 0xFFEF ; Halfwidth and Fullwidth Forms - 0xFFF0, # .. 0xFFFF ; Specials - 0x10000, # .. 0x1007F ; Linear B Syllabary - 0x10080, # .. 0x100FF ; Linear B Ideograms - 0x10100, # .. 0x1013F ; Aegean Numbers - 0x10140, # .. 0x1018F ; Ancient Greek Numbers - 0x10190, # .. 0x101CF ; Ancient Symbols - 0x101D0, # .. 0x101FF ; Phaistos Disc - 0x10200, # .. 0x1027F ; No_Block - 0x10280, # .. 0x1029F ; Lycian - 0x102A0, # .. 0x102DF ; Carian - 0x102E0, # .. 0x102FF ; Coptic Epact Numbers - 0x10300, # .. 0x1032F ; Old Italic - 0x10330, # .. 0x1034F ; Gothic - 0x10350, # .. 0x1037F ; Old Permic - 0x10380, # .. 0x1039F ; Ugaritic - 0x103A0, # .. 0x103DF ; Old Persian - 0x103E0, # .. 0x103FF ; No_Block - 0x10400, # .. 0x1044F ; Deseret - 0x10450, # .. 0x1047F ; Shavian - 0x10480, # .. 0x104AF ; Osmanya - 0x104B0, # .. 0x104FF ; Osage - 0x10500, # .. 0x1052F ; Elbasan - 0x10530, # .. 0x1056F ; Caucasian Albanian - 0x10570, # .. 0x105BF ; Vithkuqi - 0x105C0, # .. 0x105FF ; No_Block - 0x10600, # .. 0x1077F ; Linear A - 0x10780, # .. 0x107BF ; Latin Extended-F - 0x107C0, # .. 0x107FF ; No_Block - 0x10800, # .. 0x1083F ; Cypriot Syllabary - 0x10840, # .. 0x1085F ; Imperial Aramaic - 0x10860, # .. 0x1087F ; Palmyrene - 0x10880, # .. 0x108AF ; Nabataean - 0x108B0, # .. 0x108DF ; No_Block - 0x108E0, # .. 0x108FF ; Hatran - 0x10900, # .. 0x1091F ; Phoenician - 0x10920, # .. 0x1093F ; Lydian - 0x10940, # .. 0x1097F ; No_Block - 0x10980, # .. 0x1099F ; Meroitic Hieroglyphs - 0x109A0, # .. 0x109FF ; Meroitic Cursive - 0x10A00, # .. 0x10A5F ; Kharoshthi - 0x10A60, # .. 0x10A7F ; Old South Arabian - 0x10A80, # .. 0x10A9F ; Old North Arabian - 0x10AA0, # .. 0x10ABF ; No_Block - 0x10AC0, # .. 0x10AFF ; Manichaean - 0x10B00, # .. 0x10B3F ; Avestan - 0x10B40, # .. 0x10B5F ; Inscriptional Parthian - 0x10B60, # .. 0x10B7F ; Inscriptional Pahlavi - 0x10B80, # .. 0x10BAF ; Psalter Pahlavi - 0x10BB0, # .. 0x10BFF ; No_Block - 0x10C00, # .. 0x10C4F ; Old Turkic - 0x10C50, # .. 0x10C7F ; No_Block - 0x10C80, # .. 0x10CFF ; Old Hungarian - 0x10D00, # .. 0x10D3F ; Hanifi Rohingya - 0x10D40, # .. 0x10E5F ; No_Block - 0x10E60, # .. 0x10E7F ; Rumi Numeral Symbols - 0x10E80, # .. 0x10EBF ; Yezidi - 0x10EC0, # .. 0x10EFF ; Arabic Extended-C - 0x10F00, # .. 0x10F2F ; Old Sogdian - 0x10F30, # .. 0x10F6F ; Sogdian - 0x10F70, # .. 0x10FAF ; Old Uyghur - 0x10FB0, # .. 0x10FDF ; Chorasmian - 0x10FE0, # .. 0x10FFF ; Elymaic - 0x11000, # .. 0x1107F ; Brahmi - 0x11080, # .. 0x110CF ; Kaithi - 0x110D0, # .. 0x110FF ; Sora Sompeng - 0x11100, # .. 0x1114F ; Chakma - 0x11150, # .. 0x1117F ; Mahajani - 0x11180, # .. 0x111DF ; Sharada - 0x111E0, # .. 0x111FF ; Sinhala Archaic Numbers - 0x11200, # .. 
0x1124F ; Khojki - 0x11250, # .. 0x1127F ; No_Block - 0x11280, # .. 0x112AF ; Multani - 0x112B0, # .. 0x112FF ; Khudawadi - 0x11300, # .. 0x1137F ; Grantha - 0x11380, # .. 0x113FF ; No_Block - 0x11400, # .. 0x1147F ; Newa - 0x11480, # .. 0x114DF ; Tirhuta - 0x114E0, # .. 0x1157F ; No_Block - 0x11580, # .. 0x115FF ; Siddham - 0x11600, # .. 0x1165F ; Modi - 0x11660, # .. 0x1167F ; Mongolian Supplement - 0x11680, # .. 0x116CF ; Takri - 0x116D0, # .. 0x116FF ; No_Block - 0x11700, # .. 0x1174F ; Ahom - 0x11750, # .. 0x117FF ; No_Block - 0x11800, # .. 0x1184F ; Dogra - 0x11850, # .. 0x1189F ; No_Block - 0x118A0, # .. 0x118FF ; Warang Citi - 0x11900, # .. 0x1195F ; Dives Akuru - 0x11960, # .. 0x1199F ; No_Block - 0x119A0, # .. 0x119FF ; Nandinagari - 0x11A00, # .. 0x11A4F ; Zanabazar Square - 0x11A50, # .. 0x11AAF ; Soyombo - 0x11AB0, # .. 0x11ABF ; Unified Canadian Aboriginal Syllabics Extended-A - 0x11AC0, # .. 0x11AFF ; Pau Cin Hau - 0x11B00, # .. 0x11B5F ; Devanagari Extended-A - 0x11B60, # .. 0x11BFF ; No_Block - 0x11C00, # .. 0x11C6F ; Bhaiksuki - 0x11C70, # .. 0x11CBF ; Marchen - 0x11CC0, # .. 0x11CFF ; No_Block - 0x11D00, # .. 0x11D5F ; Masaram Gondi - 0x11D60, # .. 0x11DAF ; Gunjala Gondi - 0x11DB0, # .. 0x11EDF ; No_Block - 0x11EE0, # .. 0x11EFF ; Makasar - 0x11F00, # .. 0x11F5F ; Kawi - 0x11F60, # .. 0x11FAF ; No_Block - 0x11FB0, # .. 0x11FBF ; Lisu Supplement - 0x11FC0, # .. 0x11FFF ; Tamil Supplement - 0x12000, # .. 0x123FF ; Cuneiform - 0x12400, # .. 0x1247F ; Cuneiform Numbers and Punctuation - 0x12480, # .. 0x1254F ; Early Dynastic Cuneiform - 0x12550, # .. 0x12F8F ; No_Block - 0x12F90, # .. 0x12FFF ; Cypro-Minoan - 0x13000, # .. 0x1342F ; Egyptian Hieroglyphs - 0x13430, # .. 0x1345F ; Egyptian Hieroglyph Format Controls - 0x13460, # .. 0x143FF ; No_Block - 0x14400, # .. 0x1467F ; Anatolian Hieroglyphs - 0x14680, # .. 0x167FF ; No_Block - 0x16800, # .. 0x16A3F ; Bamum Supplement - 0x16A40, # .. 0x16A6F ; Mro - 0x16A70, # .. 0x16ACF ; Tangsa - 0x16AD0, # .. 0x16AFF ; Bassa Vah - 0x16B00, # .. 0x16B8F ; Pahawh Hmong - 0x16B90, # .. 0x16E3F ; No_Block - 0x16E40, # .. 0x16E9F ; Medefaidrin - 0x16EA0, # .. 0x16EFF ; No_Block - 0x16F00, # .. 0x16F9F ; Miao - 0x16FA0, # .. 0x16FDF ; No_Block - 0x16FE0, # .. 0x16FFF ; Ideographic Symbols and Punctuation - 0x17000, # .. 0x187FF ; Tangut - 0x18800, # .. 0x18AFF ; Tangut Components - 0x18B00, # .. 0x18CFF ; Khitan Small Script - 0x18D00, # .. 0x18D7F ; Tangut Supplement - 0x18D80, # .. 0x1AFEF ; No_Block - 0x1AFF0, # .. 0x1AFFF ; Kana Extended-B - 0x1B000, # .. 0x1B0FF ; Kana Supplement - 0x1B100, # .. 0x1B12F ; Kana Extended-A - 0x1B130, # .. 0x1B16F ; Small Kana Extension - 0x1B170, # .. 0x1B2FF ; Nushu - 0x1B300, # .. 0x1BBFF ; No_Block - 0x1BC00, # .. 0x1BC9F ; Duployan - 0x1BCA0, # .. 0x1BCAF ; Shorthand Format Controls - 0x1BCB0, # .. 0x1CEFF ; No_Block - 0x1CF00, # .. 0x1CFCF ; Znamenny Musical Notation - 0x1CFD0, # .. 0x1CFFF ; No_Block - 0x1D000, # .. 0x1D0FF ; Byzantine Musical Symbols - 0x1D100, # .. 0x1D1FF ; Musical Symbols - 0x1D200, # .. 0x1D24F ; Ancient Greek Musical Notation - 0x1D250, # .. 0x1D2BF ; No_Block - 0x1D2C0, # .. 0x1D2DF ; Kaktovik Numerals - 0x1D2E0, # .. 0x1D2FF ; Mayan Numerals - 0x1D300, # .. 0x1D35F ; Tai Xuan Jing Symbols - 0x1D360, # .. 0x1D37F ; Counting Rod Numerals - 0x1D380, # .. 0x1D3FF ; No_Block - 0x1D400, # .. 0x1D7FF ; Mathematical Alphanumeric Symbols - 0x1D800, # .. 0x1DAAF ; Sutton SignWriting - 0x1DAB0, # .. 0x1DEFF ; No_Block - 0x1DF00, # .. 0x1DFFF ; Latin Extended-G - 0x1E000, # .. 
0x1E02F ; Glagolitic Supplement - 0x1E030, # .. 0x1E08F ; Cyrillic Extended-D - 0x1E090, # .. 0x1E0FF ; No_Block - 0x1E100, # .. 0x1E14F ; Nyiakeng Puachue Hmong - 0x1E150, # .. 0x1E28F ; No_Block - 0x1E290, # .. 0x1E2BF ; Toto - 0x1E2C0, # .. 0x1E2FF ; Wancho - 0x1E300, # .. 0x1E4CF ; No_Block - 0x1E4D0, # .. 0x1E4FF ; Nag Mundari - 0x1E500, # .. 0x1E7DF ; No_Block - 0x1E7E0, # .. 0x1E7FF ; Ethiopic Extended-B - 0x1E800, # .. 0x1E8DF ; Mende Kikakui - 0x1E8E0, # .. 0x1E8FF ; No_Block - 0x1E900, # .. 0x1E95F ; Adlam - 0x1E960, # .. 0x1EC6F ; No_Block - 0x1EC70, # .. 0x1ECBF ; Indic Siyaq Numbers - 0x1ECC0, # .. 0x1ECFF ; No_Block - 0x1ED00, # .. 0x1ED4F ; Ottoman Siyaq Numbers - 0x1ED50, # .. 0x1EDFF ; No_Block - 0x1EE00, # .. 0x1EEFF ; Arabic Mathematical Alphabetic Symbols - 0x1EF00, # .. 0x1EFFF ; No_Block - 0x1F000, # .. 0x1F02F ; Mahjong Tiles - 0x1F030, # .. 0x1F09F ; Domino Tiles - 0x1F0A0, # .. 0x1F0FF ; Playing Cards - 0x1F100, # .. 0x1F1FF ; Enclosed Alphanumeric Supplement - 0x1F200, # .. 0x1F2FF ; Enclosed Ideographic Supplement - 0x1F300, # .. 0x1F5FF ; Miscellaneous Symbols and Pictographs - 0x1F600, # .. 0x1F64F ; Emoticons - 0x1F650, # .. 0x1F67F ; Ornamental Dingbats - 0x1F680, # .. 0x1F6FF ; Transport and Map Symbols - 0x1F700, # .. 0x1F77F ; Alchemical Symbols - 0x1F780, # .. 0x1F7FF ; Geometric Shapes Extended - 0x1F800, # .. 0x1F8FF ; Supplemental Arrows-C - 0x1F900, # .. 0x1F9FF ; Supplemental Symbols and Pictographs - 0x1FA00, # .. 0x1FA6F ; Chess Symbols - 0x1FA70, # .. 0x1FAFF ; Symbols and Pictographs Extended-A - 0x1FB00, # .. 0x1FBFF ; Symbols for Legacy Computing - 0x1FC00, # .. 0x1FFFF ; No_Block - 0x20000, # .. 0x2A6DF ; CJK Unified Ideographs Extension B - 0x2A6E0, # .. 0x2A6FF ; No_Block - 0x2A700, # .. 0x2B73F ; CJK Unified Ideographs Extension C - 0x2B740, # .. 0x2B81F ; CJK Unified Ideographs Extension D - 0x2B820, # .. 0x2CEAF ; CJK Unified Ideographs Extension E - 0x2CEB0, # .. 0x2EBEF ; CJK Unified Ideographs Extension F - 0x2EBF0, # .. 0x2F7FF ; No_Block - 0x2F800, # .. 0x2FA1F ; CJK Compatibility Ideographs Supplement - 0x2FA20, # .. 0x2FFFF ; No_Block - 0x30000, # .. 0x3134F ; CJK Unified Ideographs Extension G - 0x31350, # .. 0x323AF ; CJK Unified Ideographs Extension H - 0x323B0, # .. 0xDFFFF ; No_Block - 0xE0000, # .. 0xE007F ; Tags - 0xE0080, # .. 0xE00FF ; No_Block - 0xE0100, # .. 0xE01EF ; Variation Selectors Supplement - 0xE01F0, # .. 0xEFFFF ; No_Block - 0xF0000, # .. 0xFFFFF ; Supplementary Private Use Area-A - 0x100000, # .. 
0x10FFFF ; Supplementary Private Use Area-B -] - -VALUES = [ - "Basic Latin", # 0000..007F - "Latin-1 Supplement", # 0080..00FF - "Latin Extended-A", # 0100..017F - "Latin Extended-B", # 0180..024F - "IPA Extensions", # 0250..02AF - "Spacing Modifier Letters", # 02B0..02FF - "Combining Diacritical Marks", # 0300..036F - "Greek and Coptic", # 0370..03FF - "Cyrillic", # 0400..04FF - "Cyrillic Supplement", # 0500..052F - "Armenian", # 0530..058F - "Hebrew", # 0590..05FF - "Arabic", # 0600..06FF - "Syriac", # 0700..074F - "Arabic Supplement", # 0750..077F - "Thaana", # 0780..07BF - "NKo", # 07C0..07FF - "Samaritan", # 0800..083F - "Mandaic", # 0840..085F - "Syriac Supplement", # 0860..086F - "Arabic Extended-B", # 0870..089F - "Arabic Extended-A", # 08A0..08FF - "Devanagari", # 0900..097F - "Bengali", # 0980..09FF - "Gurmukhi", # 0A00..0A7F - "Gujarati", # 0A80..0AFF - "Oriya", # 0B00..0B7F - "Tamil", # 0B80..0BFF - "Telugu", # 0C00..0C7F - "Kannada", # 0C80..0CFF - "Malayalam", # 0D00..0D7F - "Sinhala", # 0D80..0DFF - "Thai", # 0E00..0E7F - "Lao", # 0E80..0EFF - "Tibetan", # 0F00..0FFF - "Myanmar", # 1000..109F - "Georgian", # 10A0..10FF - "Hangul Jamo", # 1100..11FF - "Ethiopic", # 1200..137F - "Ethiopic Supplement", # 1380..139F - "Cherokee", # 13A0..13FF - "Unified Canadian Aboriginal Syllabics", # 1400..167F - "Ogham", # 1680..169F - "Runic", # 16A0..16FF - "Tagalog", # 1700..171F - "Hanunoo", # 1720..173F - "Buhid", # 1740..175F - "Tagbanwa", # 1760..177F - "Khmer", # 1780..17FF - "Mongolian", # 1800..18AF - "Unified Canadian Aboriginal Syllabics Extended", # 18B0..18FF - "Limbu", # 1900..194F - "Tai Le", # 1950..197F - "New Tai Lue", # 1980..19DF - "Khmer Symbols", # 19E0..19FF - "Buginese", # 1A00..1A1F - "Tai Tham", # 1A20..1AAF - "Combining Diacritical Marks Extended", # 1AB0..1AFF - "Balinese", # 1B00..1B7F - "Sundanese", # 1B80..1BBF - "Batak", # 1BC0..1BFF - "Lepcha", # 1C00..1C4F - "Ol Chiki", # 1C50..1C7F - "Cyrillic Extended-C", # 1C80..1C8F - "Georgian Extended", # 1C90..1CBF - "Sundanese Supplement", # 1CC0..1CCF - "Vedic Extensions", # 1CD0..1CFF - "Phonetic Extensions", # 1D00..1D7F - "Phonetic Extensions Supplement", # 1D80..1DBF - "Combining Diacritical Marks Supplement", # 1DC0..1DFF - "Latin Extended Additional", # 1E00..1EFF - "Greek Extended", # 1F00..1FFF - "General Punctuation", # 2000..206F - "Superscripts and Subscripts", # 2070..209F - "Currency Symbols", # 20A0..20CF - "Combining Diacritical Marks for Symbols", # 20D0..20FF - "Letterlike Symbols", # 2100..214F - "Number Forms", # 2150..218F - "Arrows", # 2190..21FF - "Mathematical Operators", # 2200..22FF - "Miscellaneous Technical", # 2300..23FF - "Control Pictures", # 2400..243F - "Optical Character Recognition", # 2440..245F - "Enclosed Alphanumerics", # 2460..24FF - "Box Drawing", # 2500..257F - "Block Elements", # 2580..259F - "Geometric Shapes", # 25A0..25FF - "Miscellaneous Symbols", # 2600..26FF - "Dingbats", # 2700..27BF - "Miscellaneous Mathematical Symbols-A", # 27C0..27EF - "Supplemental Arrows-A", # 27F0..27FF - "Braille Patterns", # 2800..28FF - "Supplemental Arrows-B", # 2900..297F - "Miscellaneous Mathematical Symbols-B", # 2980..29FF - "Supplemental Mathematical Operators", # 2A00..2AFF - "Miscellaneous Symbols and Arrows", # 2B00..2BFF - "Glagolitic", # 2C00..2C5F - "Latin Extended-C", # 2C60..2C7F - "Coptic", # 2C80..2CFF - "Georgian Supplement", # 2D00..2D2F - "Tifinagh", # 2D30..2D7F - "Ethiopic Extended", # 2D80..2DDF - "Cyrillic Extended-A", # 2DE0..2DFF - "Supplemental Punctuation", # 
2E00..2E7F - "CJK Radicals Supplement", # 2E80..2EFF - "Kangxi Radicals", # 2F00..2FDF - "No_Block", # 2FE0..2FEF - "Ideographic Description Characters", # 2FF0..2FFF - "CJK Symbols and Punctuation", # 3000..303F - "Hiragana", # 3040..309F - "Katakana", # 30A0..30FF - "Bopomofo", # 3100..312F - "Hangul Compatibility Jamo", # 3130..318F - "Kanbun", # 3190..319F - "Bopomofo Extended", # 31A0..31BF - "CJK Strokes", # 31C0..31EF - "Katakana Phonetic Extensions", # 31F0..31FF - "Enclosed CJK Letters and Months", # 3200..32FF - "CJK Compatibility", # 3300..33FF - "CJK Unified Ideographs Extension A", # 3400..4DBF - "Yijing Hexagram Symbols", # 4DC0..4DFF - "CJK Unified Ideographs", # 4E00..9FFF - "Yi Syllables", # A000..A48F - "Yi Radicals", # A490..A4CF - "Lisu", # A4D0..A4FF - "Vai", # A500..A63F - "Cyrillic Extended-B", # A640..A69F - "Bamum", # A6A0..A6FF - "Modifier Tone Letters", # A700..A71F - "Latin Extended-D", # A720..A7FF - "Syloti Nagri", # A800..A82F - "Common Indic Number Forms", # A830..A83F - "Phags-pa", # A840..A87F - "Saurashtra", # A880..A8DF - "Devanagari Extended", # A8E0..A8FF - "Kayah Li", # A900..A92F - "Rejang", # A930..A95F - "Hangul Jamo Extended-A", # A960..A97F - "Javanese", # A980..A9DF - "Myanmar Extended-B", # A9E0..A9FF - "Cham", # AA00..AA5F - "Myanmar Extended-A", # AA60..AA7F - "Tai Viet", # AA80..AADF - "Meetei Mayek Extensions", # AAE0..AAFF - "Ethiopic Extended-A", # AB00..AB2F - "Latin Extended-E", # AB30..AB6F - "Cherokee Supplement", # AB70..ABBF - "Meetei Mayek", # ABC0..ABFF - "Hangul Syllables", # AC00..D7AF - "Hangul Jamo Extended-B", # D7B0..D7FF - "High Surrogates", # D800..DB7F - "High Private Use Surrogates", # DB80..DBFF - "Low Surrogates", # DC00..DFFF - "Private Use Area", # E000..F8FF - "CJK Compatibility Ideographs", # F900..FAFF - "Alphabetic Presentation Forms", # FB00..FB4F - "Arabic Presentation Forms-A", # FB50..FDFF - "Variation Selectors", # FE00..FE0F - "Vertical Forms", # FE10..FE1F - "Combining Half Marks", # FE20..FE2F - "CJK Compatibility Forms", # FE30..FE4F - "Small Form Variants", # FE50..FE6F - "Arabic Presentation Forms-B", # FE70..FEFF - "Halfwidth and Fullwidth Forms", # FF00..FFEF - "Specials", # FFF0..FFFF - "Linear B Syllabary", # 10000..1007F - "Linear B Ideograms", # 10080..100FF - "Aegean Numbers", # 10100..1013F - "Ancient Greek Numbers", # 10140..1018F - "Ancient Symbols", # 10190..101CF - "Phaistos Disc", # 101D0..101FF - "No_Block", # 10200..1027F - "Lycian", # 10280..1029F - "Carian", # 102A0..102DF - "Coptic Epact Numbers", # 102E0..102FF - "Old Italic", # 10300..1032F - "Gothic", # 10330..1034F - "Old Permic", # 10350..1037F - "Ugaritic", # 10380..1039F - "Old Persian", # 103A0..103DF - "No_Block", # 103E0..103FF - "Deseret", # 10400..1044F - "Shavian", # 10450..1047F - "Osmanya", # 10480..104AF - "Osage", # 104B0..104FF - "Elbasan", # 10500..1052F - "Caucasian Albanian", # 10530..1056F - "Vithkuqi", # 10570..105BF - "No_Block", # 105C0..105FF - "Linear A", # 10600..1077F - "Latin Extended-F", # 10780..107BF - "No_Block", # 107C0..107FF - "Cypriot Syllabary", # 10800..1083F - "Imperial Aramaic", # 10840..1085F - "Palmyrene", # 10860..1087F - "Nabataean", # 10880..108AF - "No_Block", # 108B0..108DF - "Hatran", # 108E0..108FF - "Phoenician", # 10900..1091F - "Lydian", # 10920..1093F - "No_Block", # 10940..1097F - "Meroitic Hieroglyphs", # 10980..1099F - "Meroitic Cursive", # 109A0..109FF - "Kharoshthi", # 10A00..10A5F - "Old South Arabian", # 10A60..10A7F - "Old North Arabian", # 10A80..10A9F - "No_Block", # 
10AA0..10ABF - "Manichaean", # 10AC0..10AFF - "Avestan", # 10B00..10B3F - "Inscriptional Parthian", # 10B40..10B5F - "Inscriptional Pahlavi", # 10B60..10B7F - "Psalter Pahlavi", # 10B80..10BAF - "No_Block", # 10BB0..10BFF - "Old Turkic", # 10C00..10C4F - "No_Block", # 10C50..10C7F - "Old Hungarian", # 10C80..10CFF - "Hanifi Rohingya", # 10D00..10D3F - "No_Block", # 10D40..10E5F - "Rumi Numeral Symbols", # 10E60..10E7F - "Yezidi", # 10E80..10EBF - "Arabic Extended-C", # 10EC0..10EFF - "Old Sogdian", # 10F00..10F2F - "Sogdian", # 10F30..10F6F - "Old Uyghur", # 10F70..10FAF - "Chorasmian", # 10FB0..10FDF - "Elymaic", # 10FE0..10FFF - "Brahmi", # 11000..1107F - "Kaithi", # 11080..110CF - "Sora Sompeng", # 110D0..110FF - "Chakma", # 11100..1114F - "Mahajani", # 11150..1117F - "Sharada", # 11180..111DF - "Sinhala Archaic Numbers", # 111E0..111FF - "Khojki", # 11200..1124F - "No_Block", # 11250..1127F - "Multani", # 11280..112AF - "Khudawadi", # 112B0..112FF - "Grantha", # 11300..1137F - "No_Block", # 11380..113FF - "Newa", # 11400..1147F - "Tirhuta", # 11480..114DF - "No_Block", # 114E0..1157F - "Siddham", # 11580..115FF - "Modi", # 11600..1165F - "Mongolian Supplement", # 11660..1167F - "Takri", # 11680..116CF - "No_Block", # 116D0..116FF - "Ahom", # 11700..1174F - "No_Block", # 11750..117FF - "Dogra", # 11800..1184F - "No_Block", # 11850..1189F - "Warang Citi", # 118A0..118FF - "Dives Akuru", # 11900..1195F - "No_Block", # 11960..1199F - "Nandinagari", # 119A0..119FF - "Zanabazar Square", # 11A00..11A4F - "Soyombo", # 11A50..11AAF - "Unified Canadian Aboriginal Syllabics Extended-A", # 11AB0..11ABF - "Pau Cin Hau", # 11AC0..11AFF - "Devanagari Extended-A", # 11B00..11B5F - "No_Block", # 11B60..11BFF - "Bhaiksuki", # 11C00..11C6F - "Marchen", # 11C70..11CBF - "No_Block", # 11CC0..11CFF - "Masaram Gondi", # 11D00..11D5F - "Gunjala Gondi", # 11D60..11DAF - "No_Block", # 11DB0..11EDF - "Makasar", # 11EE0..11EFF - "Kawi", # 11F00..11F5F - "No_Block", # 11F60..11FAF - "Lisu Supplement", # 11FB0..11FBF - "Tamil Supplement", # 11FC0..11FFF - "Cuneiform", # 12000..123FF - "Cuneiform Numbers and Punctuation", # 12400..1247F - "Early Dynastic Cuneiform", # 12480..1254F - "No_Block", # 12550..12F8F - "Cypro-Minoan", # 12F90..12FFF - "Egyptian Hieroglyphs", # 13000..1342F - "Egyptian Hieroglyph Format Controls", # 13430..1345F - "No_Block", # 13460..143FF - "Anatolian Hieroglyphs", # 14400..1467F - "No_Block", # 14680..167FF - "Bamum Supplement", # 16800..16A3F - "Mro", # 16A40..16A6F - "Tangsa", # 16A70..16ACF - "Bassa Vah", # 16AD0..16AFF - "Pahawh Hmong", # 16B00..16B8F - "No_Block", # 16B90..16E3F - "Medefaidrin", # 16E40..16E9F - "No_Block", # 16EA0..16EFF - "Miao", # 16F00..16F9F - "No_Block", # 16FA0..16FDF - "Ideographic Symbols and Punctuation", # 16FE0..16FFF - "Tangut", # 17000..187FF - "Tangut Components", # 18800..18AFF - "Khitan Small Script", # 18B00..18CFF - "Tangut Supplement", # 18D00..18D7F - "No_Block", # 18D80..1AFEF - "Kana Extended-B", # 1AFF0..1AFFF - "Kana Supplement", # 1B000..1B0FF - "Kana Extended-A", # 1B100..1B12F - "Small Kana Extension", # 1B130..1B16F - "Nushu", # 1B170..1B2FF - "No_Block", # 1B300..1BBFF - "Duployan", # 1BC00..1BC9F - "Shorthand Format Controls", # 1BCA0..1BCAF - "No_Block", # 1BCB0..1CEFF - "Znamenny Musical Notation", # 1CF00..1CFCF - "No_Block", # 1CFD0..1CFFF - "Byzantine Musical Symbols", # 1D000..1D0FF - "Musical Symbols", # 1D100..1D1FF - "Ancient Greek Musical Notation", # 1D200..1D24F - "No_Block", # 1D250..1D2BF - "Kaktovik Numerals", # 
1D2C0..1D2DF - "Mayan Numerals", # 1D2E0..1D2FF - "Tai Xuan Jing Symbols", # 1D300..1D35F - "Counting Rod Numerals", # 1D360..1D37F - "No_Block", # 1D380..1D3FF - "Mathematical Alphanumeric Symbols", # 1D400..1D7FF - "Sutton SignWriting", # 1D800..1DAAF - "No_Block", # 1DAB0..1DEFF - "Latin Extended-G", # 1DF00..1DFFF - "Glagolitic Supplement", # 1E000..1E02F - "Cyrillic Extended-D", # 1E030..1E08F - "No_Block", # 1E090..1E0FF - "Nyiakeng Puachue Hmong", # 1E100..1E14F - "No_Block", # 1E150..1E28F - "Toto", # 1E290..1E2BF - "Wancho", # 1E2C0..1E2FF - "No_Block", # 1E300..1E4CF - "Nag Mundari", # 1E4D0..1E4FF - "No_Block", # 1E500..1E7DF - "Ethiopic Extended-B", # 1E7E0..1E7FF - "Mende Kikakui", # 1E800..1E8DF - "No_Block", # 1E8E0..1E8FF - "Adlam", # 1E900..1E95F - "No_Block", # 1E960..1EC6F - "Indic Siyaq Numbers", # 1EC70..1ECBF - "No_Block", # 1ECC0..1ECFF - "Ottoman Siyaq Numbers", # 1ED00..1ED4F - "No_Block", # 1ED50..1EDFF - "Arabic Mathematical Alphabetic Symbols", # 1EE00..1EEFF - "No_Block", # 1EF00..1EFFF - "Mahjong Tiles", # 1F000..1F02F - "Domino Tiles", # 1F030..1F09F - "Playing Cards", # 1F0A0..1F0FF - "Enclosed Alphanumeric Supplement", # 1F100..1F1FF - "Enclosed Ideographic Supplement", # 1F200..1F2FF - "Miscellaneous Symbols and Pictographs", # 1F300..1F5FF - "Emoticons", # 1F600..1F64F - "Ornamental Dingbats", # 1F650..1F67F - "Transport and Map Symbols", # 1F680..1F6FF - "Alchemical Symbols", # 1F700..1F77F - "Geometric Shapes Extended", # 1F780..1F7FF - "Supplemental Arrows-C", # 1F800..1F8FF - "Supplemental Symbols and Pictographs", # 1F900..1F9FF - "Chess Symbols", # 1FA00..1FA6F - "Symbols and Pictographs Extended-A", # 1FA70..1FAFF - "Symbols for Legacy Computing", # 1FB00..1FBFF - "No_Block", # 1FC00..1FFFF - "CJK Unified Ideographs Extension B", # 20000..2A6DF - "No_Block", # 2A6E0..2A6FF - "CJK Unified Ideographs Extension C", # 2A700..2B73F - "CJK Unified Ideographs Extension D", # 2B740..2B81F - "CJK Unified Ideographs Extension E", # 2B820..2CEAF - "CJK Unified Ideographs Extension F", # 2CEB0..2EBEF - "No_Block", # 2EBF0..2F7FF - "CJK Compatibility Ideographs Supplement", # 2F800..2FA1F - "No_Block", # 2FA20..2FFFF - "CJK Unified Ideographs Extension G", # 30000..3134F - "CJK Unified Ideographs Extension H", # 31350..323AF - "No_Block", # 323B0..DFFFF - "Tags", # E0000..E007F - "No_Block", # E0080..E00FF - "Variation Selectors Supplement", # E0100..E01EF - "No_Block", # E01F0..EFFFF - "Supplementary Private Use Area-A", # F0000..FFFFF - "Supplementary Private Use Area-B", # 100000..10FFFF -] diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/implementations/tar.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/implementations/tar.py deleted file mode 100644 index 62bb58f84f2aeefe9927823cb7cb236e65f326e2..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/implementations/tar.py +++ /dev/null @@ -1,123 +0,0 @@ -import logging -import tarfile - -import fsspec -from fsspec.archive import AbstractArchiveFileSystem -from fsspec.compression import compr -from fsspec.utils import infer_compression - -typemap = {b"0": "file", b"5": "directory"} - -logger = logging.getLogger("tar") - - -class TarFileSystem(AbstractArchiveFileSystem): - """Compressed Tar archives as a file-system (read-only) - - Supports the following formats: - tar.gz, tar.bz2, tar.xz - """ - - root_marker = "" - protocol = "tar" - cachable = 
False - - def __init__( - self, - fo="", - index_store=None, - target_options=None, - target_protocol=None, - compression=None, - **kwargs, - ): - super().__init__(**kwargs) - target_options = target_options or {} - - if isinstance(fo, str): - self.of = fsspec.open(fo, protocol=target_protocol, **target_options) - fo = self.of.open() # keep the reference - - # Try to infer compression. - if compression is None: - name = None - - # Try different ways to get hold of the filename. `fo` might either - # be a `fsspec.LocalFileOpener`, an `io.BufferedReader` or an - # `fsspec.AbstractFileSystem` instance. - try: - # Amended io.BufferedReader or similar. - # This uses a "protocol extension" where original filenames are - # propagated to archive-like filesystems in order to let them - # infer the right compression appropriately. - if hasattr(fo, "original"): - name = fo.original - - # fsspec.LocalFileOpener - elif hasattr(fo, "path"): - name = fo.path - - # io.BufferedReader - elif hasattr(fo, "name"): - name = fo.name - - # fsspec.AbstractFileSystem - elif hasattr(fo, "info"): - name = fo.info()["name"] - - except Exception as ex: - logger.warning( - f"Unable to determine file name, not inferring compression: {ex}" - ) - - if name is not None: - compression = infer_compression(name) - logger.info(f"Inferred compression {compression} from file name {name}") - - if compression is not None: - # TODO: tarfile already implements compression with modes like "'r:gz'", - # but then would seek to offset in the file work? - fo = compr[compression](fo) - - self._fo_ref = fo - self.fo = fo # the whole instance is a context - self.tar = tarfile.TarFile(fileobj=self.fo) - self.dir_cache = None - - self.index_store = index_store - self.index = None - self._index() - - def _index(self): - # TODO: load and set saved index, if exists - out = {} - for ti in self.tar: - info = ti.get_info() - info["type"] = typemap.get(info["type"], "file") - name = ti.get_info()["name"].rstrip("/") - out[name] = (info, ti.offset_data) - - self.index = out - # TODO: save index to self.index_store here, if set - - def _get_dirs(self): - if self.dir_cache is not None: - return - - # This enables ls to get directories as children as well as files - self.dir_cache = { - dirname + "/": {"name": dirname + "/", "size": 0, "type": "directory"} - for dirname in self._all_dirnames(self.tar.getnames()) - } - for member in self.tar.getmembers(): - info = member.get_info() - info["type"] = typemap.get(info["type"], "file") - self.dir_cache[info["name"]] = info - - def _open(self, path, mode="rb", **kwargs): - if mode != "rb": - raise ValueError("Read-only filesystem implementation") - details, offset = self.index[path] - if details["type"] != "file": - raise ValueError("Can only handle regular files") - return self.tar.extractfile(path) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jinja2/filters.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jinja2/filters.py deleted file mode 100644 index ed07c4c0e2ae1b6203b3468cda8a303ecf3d7832..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/jinja2/filters.py +++ /dev/null @@ -1,1840 +0,0 @@ -"""Built-in template filters used with the ``|`` operator.""" -import math -import random -import re -import typing -import typing as t -from collections import abc -from itertools import chain -from itertools import groupby - -from markupsafe import escape -from 
markupsafe import Markup -from markupsafe import soft_str - -from .async_utils import async_variant -from .async_utils import auto_aiter -from .async_utils import auto_await -from .async_utils import auto_to_list -from .exceptions import FilterArgumentError -from .runtime import Undefined -from .utils import htmlsafe_json_dumps -from .utils import pass_context -from .utils import pass_environment -from .utils import pass_eval_context -from .utils import pformat -from .utils import url_quote -from .utils import urlize - -if t.TYPE_CHECKING: - import typing_extensions as te - from .environment import Environment - from .nodes import EvalContext - from .runtime import Context - from .sandbox import SandboxedEnvironment # noqa: F401 - - class HasHTML(te.Protocol): - def __html__(self) -> str: - pass - - -F = t.TypeVar("F", bound=t.Callable[..., t.Any]) -K = t.TypeVar("K") -V = t.TypeVar("V") - - -def ignore_case(value: V) -> V: - """For use as a postprocessor for :func:`make_attrgetter`. Converts strings - to lowercase and returns other types as-is.""" - if isinstance(value, str): - return t.cast(V, value.lower()) - - return value - - -def make_attrgetter( - environment: "Environment", - attribute: t.Optional[t.Union[str, int]], - postprocess: t.Optional[t.Callable[[t.Any], t.Any]] = None, - default: t.Optional[t.Any] = None, -) -> t.Callable[[t.Any], t.Any]: - """Returns a callable that looks up the given attribute from a - passed object with the rules of the environment. Dots are allowed - to access attributes of attributes. Integer parts in paths are - looked up as integers. - """ - parts = _prepare_attribute_parts(attribute) - - def attrgetter(item: t.Any) -> t.Any: - for part in parts: - item = environment.getitem(item, part) - - if default is not None and isinstance(item, Undefined): - item = default - - if postprocess is not None: - item = postprocess(item) - - return item - - return attrgetter - - -def make_multi_attrgetter( - environment: "Environment", - attribute: t.Optional[t.Union[str, int]], - postprocess: t.Optional[t.Callable[[t.Any], t.Any]] = None, -) -> t.Callable[[t.Any], t.List[t.Any]]: - """Returns a callable that looks up the given comma separated - attributes from a passed object with the rules of the environment. - Dots are allowed to access attributes of each attribute. Integer - parts in paths are looked up as integers. - - The value returned by the returned callable is a list of extracted - attribute values. - - Examples of attribute: "attr1,attr2", "attr1.inner1.0,attr2.inner2.0", etc. - """ - if isinstance(attribute, str): - split: t.Sequence[t.Union[str, int, None]] = attribute.split(",") - else: - split = [attribute] - - parts = [_prepare_attribute_parts(item) for item in split] - - def attrgetter(item: t.Any) -> t.List[t.Any]: - items = [None] * len(parts) - - for i, attribute_part in enumerate(parts): - item_i = item - - for part in attribute_part: - item_i = environment.getitem(item_i, part) - - if postprocess is not None: - item_i = postprocess(item_i) - - items[i] = item_i - - return items - - return attrgetter - - -def _prepare_attribute_parts( - attr: t.Optional[t.Union[str, int]] -) -> t.List[t.Union[str, int]]: - if attr is None: - return [] - - if isinstance(attr, str): - return [int(x) if x.isdigit() else x for x in attr.split(".")] - - return [attr] - - -def do_forceescape(value: "t.Union[str, HasHTML]") -> Markup: - """Enforce HTML escaping. 
This will probably double escape variables.""" - if hasattr(value, "__html__"): - value = t.cast("HasHTML", value).__html__() - - return escape(str(value)) - - -def do_urlencode( - value: t.Union[str, t.Mapping[str, t.Any], t.Iterable[t.Tuple[str, t.Any]]] -) -> str: - """Quote data for use in a URL path or query using UTF-8. - - Basic wrapper around :func:`urllib.parse.quote` when given a - string, or :func:`urllib.parse.urlencode` for a dict or iterable. - - :param value: Data to quote. A string will be quoted directly. A - dict or iterable of ``(key, value)`` pairs will be joined as a - query string. - - When given a string, "/" is not quoted. HTTP servers treat "/" and - "%2F" equivalently in paths. If you need quoted slashes, use the - ``|replace("/", "%2F")`` filter. - - .. versionadded:: 2.7 - """ - if isinstance(value, str) or not isinstance(value, abc.Iterable): - return url_quote(value) - - if isinstance(value, dict): - items: t.Iterable[t.Tuple[str, t.Any]] = value.items() - else: - items = value # type: ignore - - return "&".join( - f"{url_quote(k, for_qs=True)}={url_quote(v, for_qs=True)}" for k, v in items - ) - - -@pass_eval_context -def do_replace( - eval_ctx: "EvalContext", s: str, old: str, new: str, count: t.Optional[int] = None -) -> str: - """Return a copy of the value with all occurrences of a substring - replaced with a new one. The first argument is the substring - that should be replaced, the second is the replacement string. - If the optional third argument ``count`` is given, only the first - ``count`` occurrences are replaced: - - .. sourcecode:: jinja - - {{ "Hello World"|replace("Hello", "Goodbye") }} - -> Goodbye World - - {{ "aaaaargh"|replace("a", "d'oh, ", 2) }} - -> d'oh, d'oh, aaargh - """ - if count is None: - count = -1 - - if not eval_ctx.autoescape: - return str(s).replace(str(old), str(new), count) - - if ( - hasattr(old, "__html__") - or hasattr(new, "__html__") - and not hasattr(s, "__html__") - ): - s = escape(s) - else: - s = soft_str(s) - - return s.replace(soft_str(old), soft_str(new), count) - - -def do_upper(s: str) -> str: - """Convert a value to uppercase.""" - return soft_str(s).upper() - - -def do_lower(s: str) -> str: - """Convert a value to lowercase.""" - return soft_str(s).lower() - - -def do_items(value: t.Union[t.Mapping[K, V], Undefined]) -> t.Iterator[t.Tuple[K, V]]: - """Return an iterator over the ``(key, value)`` items of a mapping. - - ``x|items`` is the same as ``x.items()``, except if ``x`` is - undefined an empty iterator is returned. - - This filter is useful if you expect the template to be rendered with - an implementation of Jinja in another programming language that does - not have a ``.items()`` method on its mapping type. - - .. code-block:: html+jinja - -
<dl> - {% for key, value in my_dict|items %} - <dt>{{ key }}</dt> - <dd>{{ value }}</dd> - {% endfor %} - </dl>
      - - .. versionadded:: 3.1 - """ - if isinstance(value, Undefined): - return - - if not isinstance(value, abc.Mapping): - raise TypeError("Can only get item pairs from a mapping.") - - yield from value.items() - - -@pass_eval_context -def do_xmlattr( - eval_ctx: "EvalContext", d: t.Mapping[str, t.Any], autospace: bool = True -) -> str: - """Create an SGML/XML attribute string based on the items in a dict. - All values that are neither `none` nor `undefined` are automatically - escaped: - - .. sourcecode:: html+jinja - - - ... -
</ul> - - Results in something like this: - - .. sourcecode:: html - - <ul class="my_list" id="list-42"> - ... - </ul>
    - - As you can see it automatically prepends a space in front of the item - if the filter returned something unless the second parameter is false. - """ - rv = " ".join( - f'{escape(key)}="{escape(value)}"' - for key, value in d.items() - if value is not None and not isinstance(value, Undefined) - ) - - if autospace and rv: - rv = " " + rv - - if eval_ctx.autoescape: - rv = Markup(rv) - - return rv - - -def do_capitalize(s: str) -> str: - """Capitalize a value. The first character will be uppercase, all others - lowercase. - """ - return soft_str(s).capitalize() - - -_word_beginning_split_re = re.compile(r"([-\s({\[<]+)") - - -def do_title(s: str) -> str: - """Return a titlecased version of the value. I.e. words will start with - uppercase letters, all remaining characters are lowercase. - """ - return "".join( - [ - item[0].upper() + item[1:].lower() - for item in _word_beginning_split_re.split(soft_str(s)) - if item - ] - ) - - -def do_dictsort( - value: t.Mapping[K, V], - case_sensitive: bool = False, - by: 'te.Literal["key", "value"]' = "key", - reverse: bool = False, -) -> t.List[t.Tuple[K, V]]: - """Sort a dict and yield (key, value) pairs. Python dicts may not - be in the order you want to display them in, so sort them first. - - .. sourcecode:: jinja - - {% for key, value in mydict|dictsort %} - sort the dict by key, case insensitive - - {% for key, value in mydict|dictsort(reverse=true) %} - sort the dict by key, case insensitive, reverse order - - {% for key, value in mydict|dictsort(true) %} - sort the dict by key, case sensitive - - {% for key, value in mydict|dictsort(false, 'value') %} - sort the dict by value, case insensitive - """ - if by == "key": - pos = 0 - elif by == "value": - pos = 1 - else: - raise FilterArgumentError('You can only sort by either "key" or "value"') - - def sort_func(item: t.Tuple[t.Any, t.Any]) -> t.Any: - value = item[pos] - - if not case_sensitive: - value = ignore_case(value) - - return value - - return sorted(value.items(), key=sort_func, reverse=reverse) - - -@pass_environment -def do_sort( - environment: "Environment", - value: "t.Iterable[V]", - reverse: bool = False, - case_sensitive: bool = False, - attribute: t.Optional[t.Union[str, int]] = None, -) -> "t.List[V]": - """Sort an iterable using Python's :func:`sorted`. - - .. sourcecode:: jinja - - {% for city in cities|sort %} - ... - {% endfor %} - - :param reverse: Sort descending instead of ascending. - :param case_sensitive: When sorting strings, sort upper and lower - case separately. - :param attribute: When sorting objects or dicts, an attribute or - key to sort by. Can use dot notation like ``"address.city"``. - Can be a list of attributes like ``"age,name"``. - - The sort is stable, it does not change the relative order of - elements that compare equal. This makes it is possible to chain - sorts on different attributes and ordering. - - .. sourcecode:: jinja - - {% for user in users|sort(attribute="name") - |sort(reverse=true, attribute="age") %} - ... - {% endfor %} - - As a shortcut to chaining when the direction is the same for all - attributes, pass a comma separate list of attributes. - - .. sourcecode:: jinja - - {% for user in users|sort(attribute="age,name") %} - ... - {% endfor %} - - .. versionchanged:: 2.11.0 - The ``attribute`` parameter can be a comma separated list of - attributes, e.g. ``"age,name"``. - - .. versionchanged:: 2.6 - The ``attribute`` parameter was added. 
- """ - key_func = make_multi_attrgetter( - environment, attribute, postprocess=ignore_case if not case_sensitive else None - ) - return sorted(value, key=key_func, reverse=reverse) - - -@pass_environment -def do_unique( - environment: "Environment", - value: "t.Iterable[V]", - case_sensitive: bool = False, - attribute: t.Optional[t.Union[str, int]] = None, -) -> "t.Iterator[V]": - """Returns a list of unique items from the given iterable. - - .. sourcecode:: jinja - - {{ ['foo', 'bar', 'foobar', 'FooBar']|unique|list }} - -> ['foo', 'bar', 'foobar'] - - The unique items are yielded in the same order as their first occurrence in - the iterable passed to the filter. - - :param case_sensitive: Treat upper and lower case strings as distinct. - :param attribute: Filter objects with unique values for this attribute. - """ - getter = make_attrgetter( - environment, attribute, postprocess=ignore_case if not case_sensitive else None - ) - seen = set() - - for item in value: - key = getter(item) - - if key not in seen: - seen.add(key) - yield item - - -def _min_or_max( - environment: "Environment", - value: "t.Iterable[V]", - func: "t.Callable[..., V]", - case_sensitive: bool, - attribute: t.Optional[t.Union[str, int]], -) -> "t.Union[V, Undefined]": - it = iter(value) - - try: - first = next(it) - except StopIteration: - return environment.undefined("No aggregated item, sequence was empty.") - - key_func = make_attrgetter( - environment, attribute, postprocess=ignore_case if not case_sensitive else None - ) - return func(chain([first], it), key=key_func) - - -@pass_environment -def do_min( - environment: "Environment", - value: "t.Iterable[V]", - case_sensitive: bool = False, - attribute: t.Optional[t.Union[str, int]] = None, -) -> "t.Union[V, Undefined]": - """Return the smallest item from the sequence. - - .. sourcecode:: jinja - - {{ [1, 2, 3]|min }} - -> 1 - - :param case_sensitive: Treat upper and lower case strings as distinct. - :param attribute: Get the object with the min value of this attribute. - """ - return _min_or_max(environment, value, min, case_sensitive, attribute) - - -@pass_environment -def do_max( - environment: "Environment", - value: "t.Iterable[V]", - case_sensitive: bool = False, - attribute: t.Optional[t.Union[str, int]] = None, -) -> "t.Union[V, Undefined]": - """Return the largest item from the sequence. - - .. sourcecode:: jinja - - {{ [1, 2, 3]|max }} - -> 3 - - :param case_sensitive: Treat upper and lower case strings as distinct. - :param attribute: Get the object with the max value of this attribute. - """ - return _min_or_max(environment, value, max, case_sensitive, attribute) - - -def do_default( - value: V, - default_value: V = "", # type: ignore - boolean: bool = False, -) -> V: - """If the value is undefined it will return the passed default value, - otherwise the value of the variable: - - .. sourcecode:: jinja - - {{ my_variable|default('my_variable is not defined') }} - - This will output the value of ``my_variable`` if the variable was - defined, otherwise ``'my_variable is not defined'``. If you want - to use default with variables that evaluate to false you have to - set the second parameter to `true`: - - .. sourcecode:: jinja - - {{ ''|default('the string was empty', true) }} - - .. 
versionchanged:: 2.11 - It's now possible to configure the :class:`~jinja2.Environment` with - :class:`~jinja2.ChainableUndefined` to make the `default` filter work - on nested elements and attributes that may contain undefined values - in the chain without getting an :exc:`~jinja2.UndefinedError`. - """ - if isinstance(value, Undefined) or (boolean and not value): - return default_value - - return value - - -@pass_eval_context -def sync_do_join( - eval_ctx: "EvalContext", - value: t.Iterable, - d: str = "", - attribute: t.Optional[t.Union[str, int]] = None, -) -> str: - """Return a string which is the concatenation of the strings in the - sequence. The separator between elements is an empty string per - default, you can define it with the optional parameter: - - .. sourcecode:: jinja - - {{ [1, 2, 3]|join('|') }} - -> 1|2|3 - - {{ [1, 2, 3]|join }} - -> 123 - - It is also possible to join certain attributes of an object: - - .. sourcecode:: jinja - - {{ users|join(', ', attribute='username') }} - - .. versionadded:: 2.6 - The `attribute` parameter was added. - """ - if attribute is not None: - value = map(make_attrgetter(eval_ctx.environment, attribute), value) - - # no automatic escaping? joining is a lot easier then - if not eval_ctx.autoescape: - return str(d).join(map(str, value)) - - # if the delimiter doesn't have an html representation we check - # if any of the items has. If yes we do a coercion to Markup - if not hasattr(d, "__html__"): - value = list(value) - do_escape = False - - for idx, item in enumerate(value): - if hasattr(item, "__html__"): - do_escape = True - else: - value[idx] = str(item) - - if do_escape: - d = escape(d) - else: - d = str(d) - - return d.join(value) - - # no html involved, to normal joining - return soft_str(d).join(map(soft_str, value)) - - -@async_variant(sync_do_join) # type: ignore -async def do_join( - eval_ctx: "EvalContext", - value: t.Union[t.AsyncIterable, t.Iterable], - d: str = "", - attribute: t.Optional[t.Union[str, int]] = None, -) -> str: - return sync_do_join(eval_ctx, await auto_to_list(value), d, attribute) - - -def do_center(value: str, width: int = 80) -> str: - """Centers the value in a field of a given width.""" - return soft_str(value).center(width) - - -@pass_environment -def sync_do_first( - environment: "Environment", seq: "t.Iterable[V]" -) -> "t.Union[V, Undefined]": - """Return the first item of a sequence.""" - try: - return next(iter(seq)) - except StopIteration: - return environment.undefined("No first item, sequence was empty.") - - -@async_variant(sync_do_first) # type: ignore -async def do_first( - environment: "Environment", seq: "t.Union[t.AsyncIterable[V], t.Iterable[V]]" -) -> "t.Union[V, Undefined]": - try: - return await auto_aiter(seq).__anext__() - except StopAsyncIteration: - return environment.undefined("No first item, sequence was empty.") - - -@pass_environment -def do_last( - environment: "Environment", seq: "t.Reversible[V]" -) -> "t.Union[V, Undefined]": - """Return the last item of a sequence. - - Note: Does not work with generators. You may want to explicitly - convert it to a list: - - .. sourcecode:: jinja - - {{ data | selectattr('name', '==', 'Jinja') | list | last }} - """ - try: - return next(iter(reversed(seq))) - except StopIteration: - return environment.undefined("No last item, sequence was empty.") - - -# No async do_last, it may not be safe in async mode. 
- - -@pass_context -def do_random(context: "Context", seq: "t.Sequence[V]") -> "t.Union[V, Undefined]": - """Return a random item from the sequence.""" - try: - return random.choice(seq) - except IndexError: - return context.environment.undefined("No random item, sequence was empty.") - - -def do_filesizeformat(value: t.Union[str, float, int], binary: bool = False) -> str: - """Format the value like a 'human-readable' file size (i.e. 13 kB, - 4.1 MB, 102 Bytes, etc). Per default decimal prefixes are used (Mega, - Giga, etc.), if the second parameter is set to `True` the binary - prefixes are used (Mebi, Gibi). - """ - bytes = float(value) - base = 1024 if binary else 1000 - prefixes = [ - ("KiB" if binary else "kB"), - ("MiB" if binary else "MB"), - ("GiB" if binary else "GB"), - ("TiB" if binary else "TB"), - ("PiB" if binary else "PB"), - ("EiB" if binary else "EB"), - ("ZiB" if binary else "ZB"), - ("YiB" if binary else "YB"), - ] - - if bytes == 1: - return "1 Byte" - elif bytes < base: - return f"{int(bytes)} Bytes" - else: - for i, prefix in enumerate(prefixes): - unit = base ** (i + 2) - - if bytes < unit: - return f"{base * bytes / unit:.1f} {prefix}" - - return f"{base * bytes / unit:.1f} {prefix}" - - -def do_pprint(value: t.Any) -> str: - """Pretty print a variable. Useful for debugging.""" - return pformat(value) - - -_uri_scheme_re = re.compile(r"^([\w.+-]{2,}:(/){0,2})$") - - -@pass_eval_context -def do_urlize( - eval_ctx: "EvalContext", - value: str, - trim_url_limit: t.Optional[int] = None, - nofollow: bool = False, - target: t.Optional[str] = None, - rel: t.Optional[str] = None, - extra_schemes: t.Optional[t.Iterable[str]] = None, -) -> str: - """Convert URLs in text into clickable links. - - This may not recognize links in some situations. Usually, a more - comprehensive formatter, such as a Markdown library, is a better - choice. - - Works on ``http://``, ``https://``, ``www.``, ``mailto:``, and email - addresses. Links with trailing punctuation (periods, commas, closing - parentheses) and leading punctuation (opening parentheses) are - recognized excluding the punctuation. Email addresses that include - header fields are not recognized (for example, - ``mailto:address@example.com?cc=copy@example.com``). - - :param value: Original text containing URLs to link. - :param trim_url_limit: Shorten displayed URL values to this length. - :param nofollow: Add the ``rel=nofollow`` attribute to links. - :param target: Add the ``target`` attribute to links. - :param rel: Add the ``rel`` attribute to links. - :param extra_schemes: Recognize URLs that start with these schemes - in addition to the default behavior. Defaults to - ``env.policies["urlize.extra_schemes"]``, which defaults to no - extra schemes. - - .. versionchanged:: 3.0 - The ``extra_schemes`` parameter was added. - - .. versionchanged:: 3.0 - Generate ``https://`` links for URLs without a scheme. - - .. versionchanged:: 3.0 - The parsing rules were updated. Recognize email addresses with - or without the ``mailto:`` scheme. Validate IP addresses. Ignore - parentheses and brackets in more cases. - - .. versionchanged:: 2.8 - The ``target`` parameter was added. 
- """ - policies = eval_ctx.environment.policies - rel_parts = set((rel or "").split()) - - if nofollow: - rel_parts.add("nofollow") - - rel_parts.update((policies["urlize.rel"] or "").split()) - rel = " ".join(sorted(rel_parts)) or None - - if target is None: - target = policies["urlize.target"] - - if extra_schemes is None: - extra_schemes = policies["urlize.extra_schemes"] or () - - for scheme in extra_schemes: - if _uri_scheme_re.fullmatch(scheme) is None: - raise FilterArgumentError(f"{scheme!r} is not a valid URI scheme prefix.") - - rv = urlize( - value, - trim_url_limit=trim_url_limit, - rel=rel, - target=target, - extra_schemes=extra_schemes, - ) - - if eval_ctx.autoescape: - rv = Markup(rv) - - return rv - - -def do_indent( - s: str, width: t.Union[int, str] = 4, first: bool = False, blank: bool = False -) -> str: - """Return a copy of the string with each line indented by 4 spaces. The - first line and blank lines are not indented by default. - - :param width: Number of spaces, or a string, to indent by. - :param first: Don't skip indenting the first line. - :param blank: Don't skip indenting empty lines. - - .. versionchanged:: 3.0 - ``width`` can be a string. - - .. versionchanged:: 2.10 - Blank lines are not indented by default. - - Rename the ``indentfirst`` argument to ``first``. - """ - if isinstance(width, str): - indention = width - else: - indention = " " * width - - newline = "\n" - - if isinstance(s, Markup): - indention = Markup(indention) - newline = Markup(newline) - - s += newline # this quirk is necessary for splitlines method - - if blank: - rv = (newline + indention).join(s.splitlines()) - else: - lines = s.splitlines() - rv = lines.pop(0) - - if lines: - rv += newline + newline.join( - indention + line if line else line for line in lines - ) - - if first: - rv = indention + rv - - return rv - - -@pass_environment -def do_truncate( - env: "Environment", - s: str, - length: int = 255, - killwords: bool = False, - end: str = "...", - leeway: t.Optional[int] = None, -) -> str: - """Return a truncated copy of the string. The length is specified - with the first parameter which defaults to ``255``. If the second - parameter is ``true`` the filter will cut the text at length. Otherwise - it will discard the last word. If the text was in fact - truncated it will append an ellipsis sign (``"..."``). If you want a - different ellipsis sign than ``"..."`` you can specify it using the - third parameter. Strings that only exceed the length by the tolerance - margin given in the fourth parameter will not be truncated. - - .. sourcecode:: jinja - - {{ "foo bar baz qux"|truncate(9) }} - -> "foo..." - {{ "foo bar baz qux"|truncate(9, True) }} - -> "foo ba..." - {{ "foo bar baz qux"|truncate(11) }} - -> "foo bar baz qux" - {{ "foo bar baz qux"|truncate(11, False, '...', 0) }} - -> "foo bar..." - - The default leeway on newer Jinja versions is 5 and was 0 before but - can be reconfigured globally. 
- """ - if leeway is None: - leeway = env.policies["truncate.leeway"] - - assert length >= len(end), f"expected length >= {len(end)}, got {length}" - assert leeway >= 0, f"expected leeway >= 0, got {leeway}" - - if len(s) <= length + leeway: - return s - - if killwords: - return s[: length - len(end)] + end - - result = s[: length - len(end)].rsplit(" ", 1)[0] - return result + end - - -@pass_environment -def do_wordwrap( - environment: "Environment", - s: str, - width: int = 79, - break_long_words: bool = True, - wrapstring: t.Optional[str] = None, - break_on_hyphens: bool = True, -) -> str: - """Wrap a string to the given width. Existing newlines are treated - as paragraphs to be wrapped separately. - - :param s: Original text to wrap. - :param width: Maximum length of wrapped lines. - :param break_long_words: If a word is longer than ``width``, break - it across lines. - :param break_on_hyphens: If a word contains hyphens, it may be split - across lines. - :param wrapstring: String to join each wrapped line. Defaults to - :attr:`Environment.newline_sequence`. - - .. versionchanged:: 2.11 - Existing newlines are treated as paragraphs wrapped separately. - - .. versionchanged:: 2.11 - Added the ``break_on_hyphens`` parameter. - - .. versionchanged:: 2.7 - Added the ``wrapstring`` parameter. - """ - import textwrap - - if wrapstring is None: - wrapstring = environment.newline_sequence - - # textwrap.wrap doesn't consider existing newlines when wrapping. - # If the string has a newline before width, wrap will still insert - # a newline at width, resulting in a short line. Instead, split and - # wrap each paragraph individually. - return wrapstring.join( - [ - wrapstring.join( - textwrap.wrap( - line, - width=width, - expand_tabs=False, - replace_whitespace=False, - break_long_words=break_long_words, - break_on_hyphens=break_on_hyphens, - ) - ) - for line in s.splitlines() - ] - ) - - -_word_re = re.compile(r"\w+") - - -def do_wordcount(s: str) -> int: - """Count the words in that string.""" - return len(_word_re.findall(soft_str(s))) - - -def do_int(value: t.Any, default: int = 0, base: int = 10) -> int: - """Convert the value into an integer. If the - conversion doesn't work it will return ``0``. You can - override this default using the first parameter. You - can also override the default base (10) in the second - parameter, which handles input with prefixes such as - 0b, 0o and 0x for bases 2, 8 and 16 respectively. - The base is ignored for decimal numbers and non-string values. - """ - try: - if isinstance(value, str): - return int(value, base) - - return int(value) - except (TypeError, ValueError): - # this quirk is necessary so that "42.23"|int gives 42. - try: - return int(float(value)) - except (TypeError, ValueError): - return default - - -def do_float(value: t.Any, default: float = 0.0) -> float: - """Convert the value into a floating point number. If the - conversion doesn't work it will return ``0.0``. You can - override this default using the first parameter. - """ - try: - return float(value) - except (TypeError, ValueError): - return default - - -def do_format(value: str, *args: t.Any, **kwargs: t.Any) -> str: - """Apply the given values to a `printf-style`_ format string, like - ``string % values``. - - .. sourcecode:: jinja - - {{ "%s, %s!"|format(greeting, name) }} - Hello, World! - - In most cases it should be more convenient and efficient to use the - ``%`` operator or :meth:`str.format`. - - .. code-block:: text - - {{ "%s, %s!" 
% (greeting, name) }} - {{ "{}, {}!".format(greeting, name) }} - - .. _printf-style: https://docs.python.org/library/stdtypes.html - #printf-style-string-formatting - """ - if args and kwargs: - raise FilterArgumentError( - "can't handle positional and keyword arguments at the same time" - ) - - return soft_str(value) % (kwargs or args) - - -def do_trim(value: str, chars: t.Optional[str] = None) -> str: - """Strip leading and trailing characters, by default whitespace.""" - return soft_str(value).strip(chars) - - -def do_striptags(value: "t.Union[str, HasHTML]") -> str: - """Strip SGML/XML tags and replace adjacent whitespace by one space.""" - if hasattr(value, "__html__"): - value = t.cast("HasHTML", value).__html__() - - return Markup(str(value)).striptags() - - -def sync_do_slice( - value: "t.Collection[V]", slices: int, fill_with: "t.Optional[V]" = None -) -> "t.Iterator[t.List[V]]": - """Slice an iterator and return a list of lists containing - those items. Useful if you want to create a div containing - three ul tags that represent columns: - - .. sourcecode:: html+jinja - -
<div class="columnwrapper"> - {%- for column in items|slice(3) %} - <ul class="column-{{ loop.index }}"> - {%- for item in column %} - <li>{{ item }}</li> - {%- endfor %} - </ul> - {%- endfor %} - </div>
    - - If you pass it a second argument it's used to fill missing - values on the last iteration. - """ - seq = list(value) - length = len(seq) - items_per_slice = length // slices - slices_with_extra = length % slices - offset = 0 - - for slice_number in range(slices): - start = offset + slice_number * items_per_slice - - if slice_number < slices_with_extra: - offset += 1 - - end = offset + (slice_number + 1) * items_per_slice - tmp = seq[start:end] - - if fill_with is not None and slice_number >= slices_with_extra: - tmp.append(fill_with) - - yield tmp - - -@async_variant(sync_do_slice) # type: ignore -async def do_slice( - value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]", - slices: int, - fill_with: t.Optional[t.Any] = None, -) -> "t.Iterator[t.List[V]]": - return sync_do_slice(await auto_to_list(value), slices, fill_with) - - -def do_batch( - value: "t.Iterable[V]", linecount: int, fill_with: "t.Optional[V]" = None -) -> "t.Iterator[t.List[V]]": - """ - A filter that batches items. It works pretty much like `slice` - just the other way round. It returns a list of lists with the - given number of items. If you provide a second parameter this - is used to fill up missing items. See this example: - - .. sourcecode:: html+jinja - - - {%- for row in items|batch(3, ' ') %} - - {%- for column in row %} - - {%- endfor %} - - {%- endfor %} -
    {{ column }}
    - """ - tmp: "t.List[V]" = [] - - for item in value: - if len(tmp) == linecount: - yield tmp - tmp = [] - - tmp.append(item) - - if tmp: - if fill_with is not None and len(tmp) < linecount: - tmp += [fill_with] * (linecount - len(tmp)) - - yield tmp - - -def do_round( - value: float, - precision: int = 0, - method: 'te.Literal["common", "ceil", "floor"]' = "common", -) -> float: - """Round the number to a given precision. The first - parameter specifies the precision (default is ``0``), the - second the rounding method: - - - ``'common'`` rounds either up or down - - ``'ceil'`` always rounds up - - ``'floor'`` always rounds down - - If you don't specify a method ``'common'`` is used. - - .. sourcecode:: jinja - - {{ 42.55|round }} - -> 43.0 - {{ 42.55|round(1, 'floor') }} - -> 42.5 - - Note that even if rounded to 0 precision, a float is returned. If - you need a real integer, pipe it through `int`: - - .. sourcecode:: jinja - - {{ 42.55|round|int }} - -> 43 - """ - if method not in {"common", "ceil", "floor"}: - raise FilterArgumentError("method must be common, ceil or floor") - - if method == "common": - return round(value, precision) - - func = getattr(math, method) - return t.cast(float, func(value * (10**precision)) / (10**precision)) - - -class _GroupTuple(t.NamedTuple): - grouper: t.Any - list: t.List - - # Use the regular tuple repr to hide this subclass if users print - # out the value during debugging. - def __repr__(self) -> str: - return tuple.__repr__(self) - - def __str__(self) -> str: - return tuple.__str__(self) - - -@pass_environment -def sync_do_groupby( - environment: "Environment", - value: "t.Iterable[V]", - attribute: t.Union[str, int], - default: t.Optional[t.Any] = None, - case_sensitive: bool = False, -) -> "t.List[_GroupTuple]": - """Group a sequence of objects by an attribute using Python's - :func:`itertools.groupby`. The attribute can use dot notation for - nested access, like ``"address.city"``. Unlike Python's ``groupby``, - the values are sorted first so only one group is returned for each - unique value. - - For example, a list of ``User`` objects with a ``city`` attribute - can be rendered in groups. In this example, ``grouper`` refers to - the ``city`` value of the group. - - .. sourcecode:: html+jinja - -
<ul>{% for city, items in users|groupby("city") %} - <li>{{ city }} - <ul>{% for user in items %} - <li>{{ user.name }}</li> - {% endfor %}</ul> - </li> - {% endfor %}</ul> - - ``groupby`` yields namedtuples of ``(grouper, list)``, which - can be used instead of the tuple unpacking above. ``grouper`` is the - value of the attribute, and ``list`` is the items with that value. - - .. sourcecode:: html+jinja - - <ul>{% for group in users|groupby("city") %} - <li>{{ group.grouper }}: {{ group.list|join(", ") }}</li> - {% endfor %}</ul> - - You can specify a ``default`` value to use if an object in the list - does not have the given attribute. - - .. sourcecode:: jinja - - <ul>{% for city, items in users|groupby("city", default="NY") %} - <li>{{ city }}: {{ items|map(attribute="name")|join(", ") }}</li> - {% endfor %}</ul>
    - - Like the :func:`~jinja-filters.sort` filter, sorting and grouping is - case-insensitive by default. The ``key`` for each group will have - the case of the first item in that group of values. For example, if - a list of users has cities ``["CA", "NY", "ca"]``, the "CA" group - will have two values. This can be disabled by passing - ``case_sensitive=True``. - - .. versionchanged:: 3.1 - Added the ``case_sensitive`` parameter. Sorting and grouping is - case-insensitive by default, matching other filters that do - comparisons. - - .. versionchanged:: 3.0 - Added the ``default`` parameter. - - .. versionchanged:: 2.6 - The attribute supports dot notation for nested access. - """ - expr = make_attrgetter( - environment, - attribute, - postprocess=ignore_case if not case_sensitive else None, - default=default, - ) - out = [ - _GroupTuple(key, list(values)) - for key, values in groupby(sorted(value, key=expr), expr) - ] - - if not case_sensitive: - # Return the real key from the first value instead of the lowercase key. - output_expr = make_attrgetter(environment, attribute, default=default) - out = [_GroupTuple(output_expr(values[0]), values) for _, values in out] - - return out - - -@async_variant(sync_do_groupby) # type: ignore -async def do_groupby( - environment: "Environment", - value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]", - attribute: t.Union[str, int], - default: t.Optional[t.Any] = None, - case_sensitive: bool = False, -) -> "t.List[_GroupTuple]": - expr = make_attrgetter( - environment, - attribute, - postprocess=ignore_case if not case_sensitive else None, - default=default, - ) - out = [ - _GroupTuple(key, await auto_to_list(values)) - for key, values in groupby(sorted(await auto_to_list(value), key=expr), expr) - ] - - if not case_sensitive: - # Return the real key from the first value instead of the lowercase key. - output_expr = make_attrgetter(environment, attribute, default=default) - out = [_GroupTuple(output_expr(values[0]), values) for _, values in out] - - return out - - -@pass_environment -def sync_do_sum( - environment: "Environment", - iterable: "t.Iterable[V]", - attribute: t.Optional[t.Union[str, int]] = None, - start: V = 0, # type: ignore -) -> V: - """Returns the sum of a sequence of numbers plus the value of parameter - 'start' (which defaults to 0). When the sequence is empty it returns - start. - - It is also possible to sum up only certain attributes: - - .. sourcecode:: jinja - - Total: {{ items|sum(attribute='price') }} - - .. versionchanged:: 2.6 - The ``attribute`` parameter was added to allow summing up over - attributes. Also the ``start`` parameter was moved on to the right. - """ - if attribute is not None: - iterable = map(make_attrgetter(environment, attribute), iterable) - - return sum(iterable, start) # type: ignore[no-any-return, call-overload] - - -@async_variant(sync_do_sum) # type: ignore -async def do_sum( - environment: "Environment", - iterable: "t.Union[t.AsyncIterable[V], t.Iterable[V]]", - attribute: t.Optional[t.Union[str, int]] = None, - start: V = 0, # type: ignore -) -> V: - rv = start - - if attribute is not None: - func = make_attrgetter(environment, attribute) - else: - - def func(x: V) -> V: - return x - - async for item in auto_aiter(iterable): - rv += func(item) - - return rv - - -def sync_do_list(value: "t.Iterable[V]") -> "t.List[V]": - """Convert the value into a list. If it was a string the returned list - will be a list of characters. 
- """ - return list(value) - - -@async_variant(sync_do_list) # type: ignore -async def do_list(value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]") -> "t.List[V]": - return await auto_to_list(value) - - -def do_mark_safe(value: str) -> Markup: - """Mark the value as safe which means that in an environment with automatic - escaping enabled this variable will not be escaped. - """ - return Markup(value) - - -def do_mark_unsafe(value: str) -> str: - """Mark a value as unsafe. This is the reverse operation for :func:`safe`.""" - return str(value) - - -@typing.overload -def do_reverse(value: str) -> str: - ... - - -@typing.overload -def do_reverse(value: "t.Iterable[V]") -> "t.Iterable[V]": - ... - - -def do_reverse(value: t.Union[str, t.Iterable[V]]) -> t.Union[str, t.Iterable[V]]: - """Reverse the object or return an iterator that iterates over it the other - way round. - """ - if isinstance(value, str): - return value[::-1] - - try: - return reversed(value) # type: ignore - except TypeError: - try: - rv = list(value) - rv.reverse() - return rv - except TypeError as e: - raise FilterArgumentError("argument must be iterable") from e - - -@pass_environment -def do_attr( - environment: "Environment", obj: t.Any, name: str -) -> t.Union[Undefined, t.Any]: - """Get an attribute of an object. ``foo|attr("bar")`` works like - ``foo.bar`` just that always an attribute is returned and items are not - looked up. - - See :ref:`Notes on subscriptions ` for more details. - """ - try: - name = str(name) - except UnicodeError: - pass - else: - try: - value = getattr(obj, name) - except AttributeError: - pass - else: - if environment.sandboxed: - environment = t.cast("SandboxedEnvironment", environment) - - if not environment.is_safe_attribute(obj, name, value): - return environment.unsafe_undefined(obj, name) - - return value - - return environment.undefined(obj=obj, name=name) - - -@typing.overload -def sync_do_map( - context: "Context", value: t.Iterable, name: str, *args: t.Any, **kwargs: t.Any -) -> t.Iterable: - ... - - -@typing.overload -def sync_do_map( - context: "Context", - value: t.Iterable, - *, - attribute: str = ..., - default: t.Optional[t.Any] = None, -) -> t.Iterable: - ... - - -@pass_context -def sync_do_map( - context: "Context", value: t.Iterable, *args: t.Any, **kwargs: t.Any -) -> t.Iterable: - """Applies a filter on a sequence of objects or looks up an attribute. - This is useful when dealing with lists of objects but you are really - only interested in a certain value of it. - - The basic usage is mapping on an attribute. Imagine you have a list - of users but you are only interested in a list of usernames: - - .. sourcecode:: jinja - - Users on this page: {{ users|map(attribute='username')|join(', ') }} - - You can specify a ``default`` value to use if an object in the list - does not have the given attribute. - - .. sourcecode:: jinja - - {{ users|map(attribute="username", default="Anonymous")|join(", ") }} - - Alternatively you can let it invoke a filter by passing the name of the - filter and the arguments afterwards. A good example would be applying a - text conversion filter on a sequence: - - .. sourcecode:: jinja - - Users on this page: {{ titles|map('lower')|join(', ') }} - - Similar to a generator comprehension such as: - - .. code-block:: python - - (u.username for u in users) - (getattr(u, "username", "Anonymous") for u in users) - (do_lower(x) for x in titles) - - .. versionchanged:: 2.11.0 - Added the ``default`` parameter. - - .. 
versionadded:: 2.7 - """ - if value: - func = prepare_map(context, args, kwargs) - - for item in value: - yield func(item) - - -@typing.overload -def do_map( - context: "Context", - value: t.Union[t.AsyncIterable, t.Iterable], - name: str, - *args: t.Any, - **kwargs: t.Any, -) -> t.Iterable: - ... - - -@typing.overload -def do_map( - context: "Context", - value: t.Union[t.AsyncIterable, t.Iterable], - *, - attribute: str = ..., - default: t.Optional[t.Any] = None, -) -> t.Iterable: - ... - - -@async_variant(sync_do_map) # type: ignore -async def do_map( - context: "Context", - value: t.Union[t.AsyncIterable, t.Iterable], - *args: t.Any, - **kwargs: t.Any, -) -> t.AsyncIterable: - if value: - func = prepare_map(context, args, kwargs) - - async for item in auto_aiter(value): - yield await auto_await(func(item)) - - -@pass_context -def sync_do_select( - context: "Context", value: "t.Iterable[V]", *args: t.Any, **kwargs: t.Any -) -> "t.Iterator[V]": - """Filters a sequence of objects by applying a test to each object, - and only selecting the objects with the test succeeding. - - If no test is specified, each object will be evaluated as a boolean. - - Example usage: - - .. sourcecode:: jinja - - {{ numbers|select("odd") }} - {{ numbers|select("odd") }} - {{ numbers|select("divisibleby", 3) }} - {{ numbers|select("lessthan", 42) }} - {{ strings|select("equalto", "mystring") }} - - Similar to a generator comprehension such as: - - .. code-block:: python - - (n for n in numbers if test_odd(n)) - (n for n in numbers if test_divisibleby(n, 3)) - - .. versionadded:: 2.7 - """ - return select_or_reject(context, value, args, kwargs, lambda x: x, False) - - -@async_variant(sync_do_select) # type: ignore -async def do_select( - context: "Context", - value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]", - *args: t.Any, - **kwargs: t.Any, -) -> "t.AsyncIterator[V]": - return async_select_or_reject(context, value, args, kwargs, lambda x: x, False) - - -@pass_context -def sync_do_reject( - context: "Context", value: "t.Iterable[V]", *args: t.Any, **kwargs: t.Any -) -> "t.Iterator[V]": - """Filters a sequence of objects by applying a test to each object, - and rejecting the objects with the test succeeding. - - If no test is specified, each object will be evaluated as a boolean. - - Example usage: - - .. sourcecode:: jinja - - {{ numbers|reject("odd") }} - - Similar to a generator comprehension such as: - - .. code-block:: python - - (n for n in numbers if not test_odd(n)) - - .. versionadded:: 2.7 - """ - return select_or_reject(context, value, args, kwargs, lambda x: not x, False) - - -@async_variant(sync_do_reject) # type: ignore -async def do_reject( - context: "Context", - value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]", - *args: t.Any, - **kwargs: t.Any, -) -> "t.AsyncIterator[V]": - return async_select_or_reject(context, value, args, kwargs, lambda x: not x, False) - - -@pass_context -def sync_do_selectattr( - context: "Context", value: "t.Iterable[V]", *args: t.Any, **kwargs: t.Any -) -> "t.Iterator[V]": - """Filters a sequence of objects by applying a test to the specified - attribute of each object, and only selecting the objects with the - test succeeding. - - If no test is specified, the attribute's value will be evaluated as - a boolean. - - Example usage: - - .. sourcecode:: jinja - - {{ users|selectattr("is_active") }} - {{ users|selectattr("email", "none") }} - - Similar to a generator comprehension such as: - - .. 
code-block:: python - - (u for user in users if user.is_active) - (u for user in users if test_none(user.email)) - - .. versionadded:: 2.7 - """ - return select_or_reject(context, value, args, kwargs, lambda x: x, True) - - -@async_variant(sync_do_selectattr) # type: ignore -async def do_selectattr( - context: "Context", - value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]", - *args: t.Any, - **kwargs: t.Any, -) -> "t.AsyncIterator[V]": - return async_select_or_reject(context, value, args, kwargs, lambda x: x, True) - - -@pass_context -def sync_do_rejectattr( - context: "Context", value: "t.Iterable[V]", *args: t.Any, **kwargs: t.Any -) -> "t.Iterator[V]": - """Filters a sequence of objects by applying a test to the specified - attribute of each object, and rejecting the objects with the test - succeeding. - - If no test is specified, the attribute's value will be evaluated as - a boolean. - - .. sourcecode:: jinja - - {{ users|rejectattr("is_active") }} - {{ users|rejectattr("email", "none") }} - - Similar to a generator comprehension such as: - - .. code-block:: python - - (u for user in users if not user.is_active) - (u for user in users if not test_none(user.email)) - - .. versionadded:: 2.7 - """ - return select_or_reject(context, value, args, kwargs, lambda x: not x, True) - - -@async_variant(sync_do_rejectattr) # type: ignore -async def do_rejectattr( - context: "Context", - value: "t.Union[t.AsyncIterable[V], t.Iterable[V]]", - *args: t.Any, - **kwargs: t.Any, -) -> "t.AsyncIterator[V]": - return async_select_or_reject(context, value, args, kwargs, lambda x: not x, True) - - -@pass_eval_context -def do_tojson( - eval_ctx: "EvalContext", value: t.Any, indent: t.Optional[int] = None -) -> Markup: - """Serialize an object to a string of JSON, and mark it safe to - render in HTML. This filter is only for use in HTML documents. - - The returned string is safe to render in HTML documents and - `` - - - - - - - - -
    - - - - - - - diff --git a/spaces/marlenezw/audio-driven-animations/MakeItTalk/src/dataset/image_translation/__init__.py b/spaces/marlenezw/audio-driven-animations/MakeItTalk/src/dataset/image_translation/__init__.py deleted file mode 100644 index 7f3999734455352473532ef25cddf059eb5baee3..0000000000000000000000000000000000000000 --- a/spaces/marlenezw/audio-driven-animations/MakeItTalk/src/dataset/image_translation/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -""" - # Copyright 2020 Adobe - # All Rights Reserved. - - # NOTICE: Adobe permits you to use, modify, and distribute this file in - # accordance with the terms of the Adobe license agreement accompanying - # it. - -""" - diff --git a/spaces/masterkram/finance_news_classifier/src/data.py b/spaces/masterkram/finance_news_classifier/src/data.py deleted file mode 100644 index 79daea624cb88bff1e710b8151fac306a85dc93d..0000000000000000000000000000000000000000 --- a/spaces/masterkram/finance_news_classifier/src/data.py +++ /dev/null @@ -1,50 +0,0 @@ -from datasets import load_dataset -import typed_settings as ts -from settings import Dataset, Model -from transformers import RobertaTokenizerFast, AutoConfig - -settings = ts.load(Dataset, appname="dataset", config_files=["src/finetune.toml"]) -model_settings = ts.load(Model, appname="model", config_files=["src/finetune.toml"]) - -tokenizer = RobertaTokenizerFast.from_pretrained(model_settings.name) - - -def tokenize(batch): - """ - This function tokenizes the input text using the RoBERTa tokenizer. - It applies padding and truncation to ensure that all sequences have the same length (256 tokens). - """ - return tokenizer(batch["sentence"], padding=True, truncation=True, max_length=256) - - -def get_id_to_label_map(dataset) -> dict: - class_names = dataset.features["label"].names - id2label = {i: label for i, label in enumerate(class_names)} - return id2label - - -def load_data(): - dataset = load_dataset(settings.id, settings.agreement_levels[settings.level]) - splits = [] - # => 90% train ∪ development and 10% test - dataset = dataset["train"].train_test_split( - test_size=0.1, shuffle=True, seed=42, stratify_by_column="label" - ) - - # => 80% train, 10% development and 10% test - train_dev_split = dataset["train"].train_test_split( - test_size=0.1114, shuffle=True, seed=42, stratify_by_column="label" - ) - - splits.append(train_dev_split["train"]) - splits.append(train_dev_split["test"]) - splits.append(dataset["test"]) - - preprocessed = [ - split.map(tokenize, batched=True, batch_size=len(split)) for split in splits - ] - - for split in preprocessed: - split.set_format("torch", columns=["input_ids", "attention_mask", "label"]) - - return preprocessed, get_id_to_label_map(splits[0]) diff --git a/spaces/menghanxia/ReversibleHalftoning/model/hourglass.py b/spaces/menghanxia/ReversibleHalftoning/model/hourglass.py deleted file mode 100644 index 1d89a7d176b09e00c02f37d3479ec09012cd1980..0000000000000000000000000000000000000000 --- a/spaces/menghanxia/ReversibleHalftoning/model/hourglass.py +++ /dev/null @@ -1,70 +0,0 @@ -import torch.nn as nn -from .base_module import ConvBlock, DownsampleBlock, ResidualBlock, SkipConnection, UpsampleBlock - - -class HourGlass(nn.Module): - def __init__(self, convNum=4, resNum=4, inChannel=6, outChannel=3): - super(HourGlass, self).__init__() - self.inConv = ConvBlock(inChannel, 64, convNum=2) - self.down1 = nn.Sequential(*[DownsampleBlock(64, 128, withConvRelu=False), ConvBlock(128, 128, convNum=2)]) - self.down2 = nn.Sequential( - *[DownsampleBlock(128, 
256, withConvRelu=False), ConvBlock(256, 256, convNum=convNum)]) - self.down3 = nn.Sequential( - *[DownsampleBlock(256, 512, withConvRelu=False), ConvBlock(512, 512, convNum=convNum)]) - self.residual = nn.Sequential(*[ResidualBlock(512) for _ in range(resNum)]) - self.up3 = nn.Sequential(*[UpsampleBlock(512, 256), ConvBlock(256, 256, convNum=convNum)]) - self.skip3 = SkipConnection(256) - self.up2 = nn.Sequential(*[UpsampleBlock(256, 128), ConvBlock(128, 128, convNum=2)]) - self.skip2 = SkipConnection(128) - self.up1 = nn.Sequential(*[UpsampleBlock(128, 64), ConvBlock(64, 64, convNum=2)]) - self.skip1 = SkipConnection(64) - self.outConv = nn.Sequential( - nn.Conv2d(64, 64, kernel_size=3, padding=1), - nn.ReLU(inplace=True), - nn.Conv2d(64, outChannel, kernel_size=1, padding=0) - ) - - def forward(self, x): - f1 = self.inConv(x) - f2 = self.down1(f1) - f3 = self.down2(f2) - f4 = self.down3(f3) - r4 = self.residual(f4) - r3 = self.skip3(self.up3(r4), f3) - r2 = self.skip2(self.up2(r3), f2) - r1 = self.skip1(self.up1(r2), f1) - y = self.outConv(r1) - return y - - -class ResidualHourGlass(nn.Module): - def __init__(self, resNum=4, inChannel=6, outChannel=3): - super(ResidualHourGlass, self).__init__() - self.inConv = nn.Conv2d(inChannel, 64, kernel_size=3, padding=1) - self.residualBefore = nn.Sequential(*[ResidualBlock(64) for _ in range(2)]) - self.down1 = nn.Sequential( - *[DownsampleBlock(64, 128, withConvRelu=False), ConvBlock(128, 128, convNum=2)]) - self.down2 = nn.Sequential( - *[DownsampleBlock(128, 256, withConvRelu=False), ConvBlock(256, 256, convNum=2)]) - self.residual = nn.Sequential(*[ResidualBlock(256) for _ in range(resNum)]) - self.up2 = nn.Sequential(*[UpsampleBlock(256, 128), ConvBlock(128, 128, convNum=2)]) - self.skip2 = SkipConnection(128) - self.up1 = nn.Sequential(*[UpsampleBlock(128, 64), ConvBlock(64, 64, convNum=2)]) - self.skip1 = SkipConnection(64) - self.residualAfter = nn.Sequential(*[ResidualBlock(64) for _ in range(2)]) - self.outConv = nn.Sequential( - nn.Conv2d(64, outChannel, kernel_size=3, padding=1), - nn.Tanh() - ) - - def forward(self, x): - f1 = self.inConv(x) - f1 = self.residualBefore(f1) - f2 = self.down1(f1) - f3 = self.down2(f2) - r3 = self.residual(f3) - r2 = self.skip2(self.up2(r3), f2) - r1 = self.skip1(self.up1(r2), f1) - y = self.residualAfter(r1) - y = self.outConv(y) - return y diff --git a/spaces/merve/Grounding_DINO_demo/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h b/spaces/merve/Grounding_DINO_demo/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h deleted file mode 100644 index b2b88e8c46f19b6db0933163e57ccdb51180f517..0000000000000000000000000000000000000000 --- a/spaces/merve/Grounding_DINO_demo/groundingdino/models/GroundingDINO/csrc/MsDeformAttn/ms_deform_attn_cpu.h +++ /dev/null @@ -1,35 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. 
-* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -#pragma once -#include - -namespace groundingdino { - -at::Tensor -ms_deform_attn_cpu_forward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const int im2col_step); - -std::vector -ms_deform_attn_cpu_backward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const at::Tensor &grad_output, - const int im2col_step); - -} // namespace groundingdino diff --git a/spaces/merve/measuring-fairness/public/third_party/mobilenet@1.0.0.js b/spaces/merve/measuring-fairness/public/third_party/mobilenet@1.0.0.js deleted file mode 100644 index d50ffe68663e1aabfc07faec02e8a3cb41b5dfe5..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/public/third_party/mobilenet@1.0.0.js +++ /dev/null @@ -1,2 +0,0 @@ -// @tensorflow/tfjs-models Copyright 2019 Google -!function(e,a){"object"==typeof exports&&"undefined"!=typeof module?a(exports,require("@tensorflow/tfjs")):"function"==typeof define&&define.amd?define(["exports","@tensorflow/tfjs"],a):a((e=e||self).mobilenet={},e.tf)}(this,function(e,a){"use strict";function r(e,a,r,o){return new(r||(r=Promise))(function(i,t){function n(e){try{l(o.next(e))}catch(e){t(e)}}function s(e){try{l(o.throw(e))}catch(e){t(e)}}function l(e){e.done?i(e.value):new r(function(a){a(e.value)}).then(n,s)}l((o=o.apply(e,a||[])).next())})}function o(e,a){var r,o,i,t,n={label:0,sent:function(){if(1&i[0])throw i[1];return i[1]},trys:[],ops:[]};return t={next:s(0),throw:s(1),return:s(2)},"function"==typeof Symbol&&(t[Symbol.iterator]=function(){return this}),t;function s(t){return function(s){return function(t){if(r)throw new TypeError("Generator is already executing.");for(;n;)try{if(r=1,o&&(i=2&t[0]?o.return:t[0]?o.throw||((i=o.return)&&i.call(o),0):o.next)&&!(i=i.call(o,t[1])).done)return i;switch(o=0,i&&(t=[2&t[0],i.value]),t[0]){case 0:case 1:i=t;break;case 4:return n.label++,{value:t[1],done:!1};case 5:n.label++,o=t[1],t=[0];continue;case 7:t=n.ops.pop(),n.trys.pop();continue;default:if(!(i=(i=n.trys).length>0&&i[i.length-1])&&(6===t[0]||2===t[0])){n=0;continue}if(3===t[0]&&(!i||t[1]>i[0]&&t[1] tag, please also include @tensorflow/tfjs on the page before using this model.");if(r=e.toFixed(2),t=i.toFixed(2),!(r in n))throw new Error("Invalid version of MobileNet. Valid versions are: "+Object.keys(n));if(!(t in n[r]))throw new Error("MobileNet constructed with invalid alpha "+i+". 
Valid multipliers for this version are: "+Object.keys(n[r])+".");return[4,(l=new s(r,t)).load()];case 1:return o.sent(),[2,l]}})})},e.MobileNet=s,Object.defineProperty(e,"__esModule",{value:!0})}); \ No newline at end of file diff --git a/spaces/merve/measuring-fairness/source/anonymization/make-axii.js b/spaces/merve/measuring-fairness/source/anonymization/make-axii.js deleted file mode 100644 index c69b5eba387ec07f01ce2849726fda5461002aef..0000000000000000000000000000000000000000 --- a/spaces/merve/measuring-fairness/source/anonymization/make-axii.js +++ /dev/null @@ -1,86 +0,0 @@ -window.makeAxii = function(){ - - var stateScale = d3.scaleBand().domain(states).range(c.x.range()) - var stateAxis = c.svg.append('g.axis.state.init-hidden') - - var bw = stateScale.bandwidth()/2 - - stateAxis.appendMany('text', states) - .translate(d => [stateScale(d) + bw, c.height + 22]) - .text(d => d) - .at({ - textAnchor: 'middle', - }) - .st({fill: '#444'}) - - stateAxis.appendMany('path', d3.range(ages.length + 1)) - .at({ - d: d => ['M', d*c.width/(ages.length), '0 V', c.height].join(' '), - stroke: '#aaa', - }) - - stateAxis.append('text.bold').text('Home State') - .translate([c.width/2, c.height + 45]) - .at({textAnchor: 'middle'}) - - var ageScale = d3.scaleBand().domain(ages.slice().reverse()).range(c.x.range()) - var ageAxis = c.svg.append('g.axis.age.init-hidden') - - ageAxis.appendMany('text', ages) - .translate(d => [-30, ageScale(d) + bw]) - .text(d => d) - .at({dy: '.33em'}) - .st({fill: '#444'}) - - ageAxis.appendMany('path', d3.range(ages.length + 1)) - .at({ - d: d => ['M 0', d*c.width/(ages.length), 'H', c.width].join(' '), - stroke: '#aaa', - }) - - if (scale == 1){ - ageAxis - .append('g').translate([-43, c.height/2]) - .append('text.bold').text('Age') - .at({textAnchor: 'middle', transform: 'rotate(-90)'}) - } else { - ageAxis - .append('g').translate([-22, 14]) - .append('text.bold').text('Age') - .at({textAnchor: 'middle'}) - } - - var seasonAxis = c.svg.append('g.axis.state.init-hidden').lower() - seasonAxis.appendMany('g', ages) - .translate(d => ageScale(d), 1) - .appendMany('path', d3.range(1, 4)) - .at({ - d: d => ['M 0', d*bw/4*2, 'H', c.width].join(' '), - stroke: '#ddd', - }) - - var headAxis = c.svg.append('g.axis.state.init-hidden') - headAxis.appendMany('text.bold', ['Heads', 'Tails']) - .text(d => d) - .translate((d, i) => [i ? c.width/4*3 + 20 : c.width/4 - 20, 88]) - .at({textAnchor: 'middle'}) - - - var headCaptionAxis = c.svg.append('g.axis.state.init-hidden') - headCaptionAxis.appendMany('text', ['reports plagiarism', 'reports truth']) - .text(d => d) - .translate((d, i) => [i ? 
c.width/4*3 + 20 : c.width/4 - 20, 88 + 15]) - .at({textAnchor: 'middle'}) - .st({fill: '#444'}) - - - return {stateScale, stateAxis, headAxis, headCaptionAxis, ageScale, ageAxis, bw, seasonAxis} -} - - - - - - - -if (window.init) window.init() \ No newline at end of file diff --git a/spaces/mikkoar/marco/src/components/settings.tsx b/spaces/mikkoar/marco/src/components/settings.tsx deleted file mode 100644 index e18aa5b484852bb5d047442a06e7143b6893cb0d..0000000000000000000000000000000000000000 --- a/spaces/mikkoar/marco/src/components/settings.tsx +++ /dev/null @@ -1,141 +0,0 @@ -import { useEffect, useState } from 'react' -import { useAtom } from 'jotai' -import { Switch } from '@headlessui/react' -import { toast } from 'react-hot-toast' -import { hashAtom, voiceAtom } from '@/state' -import { - Dialog, - DialogContent, - DialogDescription, - DialogFooter, - DialogHeader, - DialogTitle -} from '@/components/ui/dialog' -import { Button } from './ui/button' -import { Input } from './ui/input' -import { ChunkKeys, parseCookies, extraCurlFromCookie, randomIP, encodeHeadersToCookie } from '@/lib/utils' -import { ExternalLink } from './external-link' -import { useCopyToClipboard } from '@/lib/hooks/use-copy-to-clipboard' - -export function Settings() { - const { isCopied, copyToClipboard } = useCopyToClipboard({ timeout: 2000 }) - const [loc, setLoc] = useAtom(hashAtom) - const [curlValue, setCurlValue] = useState(extraCurlFromCookie(parseCookies(document.cookie, ChunkKeys))) - const [enableTTS, setEnableTTS] = useAtom(voiceAtom) - - useEffect(() => { - if (isCopied) { - toast.success('复制成功') - } - }, [isCopied]) - - if (loc === 'settings') { - return ( - setLoc('')} modal> - - - 设置你的用户信息 - - 请使用 Edge 浏览器 - - 打开并登录 Bing - - ,然后再打开 - Challenge 接口 - 右键 》检查。打开开发者工具,在网络里面找到 Create 接口 》右键复制》复制为 cURL(bash),粘贴到此处,然后保存。 -
    - 图文示例: - 如何获取 BING_HEADER - - -
    - -
    - setCurlValue(e.target.value)} - /> - - - - - - -
    - ) - } else if (loc === 'voice') { - return ( - setLoc('')} modal> - - - 语音设置 - - 目前仅支持 PC 端 Edge 及 Chrome 浏览器 - - - -
    - 启用语音回答 - setEnableTTS(checked)} - > - - -
    - - - - -
    -
    - ) - } - return null -} diff --git a/spaces/milyiyo/reimagine-it/captioning/models/AttModel.py b/spaces/milyiyo/reimagine-it/captioning/models/AttModel.py deleted file mode 100644 index 3dc4e5b7a78c4affbfba4044ca8c96c30b26e36a..0000000000000000000000000000000000000000 --- a/spaces/milyiyo/reimagine-it/captioning/models/AttModel.py +++ /dev/null @@ -1,969 +0,0 @@ -# This file contains Att2in2, AdaAtt, AdaAttMO, UpDown model - -# AdaAtt is from Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning -# https://arxiv.org/abs/1612.01887 -# AdaAttMO is a modified version with maxout lstm - -# Att2in is from Self-critical Sequence Training for Image Captioning -# https://arxiv.org/abs/1612.00563 -# In this file we only have Att2in2, which is a slightly different version of att2in, -# in which the img feature embedding and word embedding is the same as what in adaatt. - -# UpDown is from Bottom-Up and Top-Down Attention for Image Captioning and VQA -# https://arxiv.org/abs/1707.07998 -# However, it may not be identical to the author's architecture. - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from . import utils -from torch.nn.utils.rnn import PackedSequence, pack_padded_sequence, pad_packed_sequence - -from .CaptionModel import CaptionModel - -bad_endings = ['a','an','the','in','for','at','of','with','before','after','on','upon','near','to','is','are','am'] -bad_endings += ['the'] - -def sort_pack_padded_sequence(input, lengths): - sorted_lengths, indices = torch.sort(lengths, descending=True) - # tmp = pack_padded_sequence(input[indices], sorted_lengths, batch_first=True) - tmp = pack_padded_sequence(input[indices], sorted_lengths.cpu(), batch_first=True) - inv_ix = indices.clone() - inv_ix[indices] = torch.arange(0,len(indices)).type_as(inv_ix) - return tmp, inv_ix - -def pad_unsort_packed_sequence(input, inv_ix): - tmp, _ = pad_packed_sequence(input, batch_first=True) - tmp = tmp[inv_ix] - return tmp - -def pack_wrapper(module, att_feats, att_masks): - if att_masks is not None: - packed, inv_ix = sort_pack_padded_sequence(att_feats, att_masks.data.long().sum(1)) - return pad_unsort_packed_sequence(PackedSequence(module(packed[0]), packed[1]), inv_ix) - else: - return module(att_feats) - -class AttModel(CaptionModel): - def __init__(self, opt): - super(AttModel, self).__init__() - self.vocab_size = opt.vocab_size - self.input_encoding_size = opt.input_encoding_size - #self.rnn_type = opt.rnn_type - self.rnn_size = opt.rnn_size - self.num_layers = opt.num_layers - self.drop_prob_lm = opt.drop_prob_lm - self.seq_length = getattr(opt, 'max_length', 20) or opt.seq_length # maximum sample length - self.fc_feat_size = opt.fc_feat_size - self.att_feat_size = opt.att_feat_size - self.att_hid_size = opt.att_hid_size - - self.bos_idx = getattr(opt, 'bos_idx', 0) - self.eos_idx = getattr(opt, 'eos_idx', 0) - self.pad_idx = getattr(opt, 'pad_idx', 0) - - self.use_bn = getattr(opt, 'use_bn', 0) - - self.ss_prob = 0.0 # Schedule sampling probability - - self.embed = nn.Sequential(nn.Embedding(self.vocab_size + 1, self.input_encoding_size), - nn.ReLU(), - nn.Dropout(self.drop_prob_lm)) - self.fc_embed = nn.Sequential(nn.Linear(self.fc_feat_size, self.rnn_size), - nn.ReLU(), - nn.Dropout(self.drop_prob_lm)) - self.att_embed = nn.Sequential(*( - ((nn.BatchNorm1d(self.att_feat_size),) if self.use_bn else ())+ - 
(nn.Linear(self.att_feat_size, self.rnn_size), - nn.ReLU(), - nn.Dropout(self.drop_prob_lm))+ - ((nn.BatchNorm1d(self.rnn_size),) if self.use_bn==2 else ()))) - - self.logit_layers = getattr(opt, 'logit_layers', 1) - if self.logit_layers == 1: - self.logit = nn.Linear(self.rnn_size, self.vocab_size + 1) - else: - self.logit = [[nn.Linear(self.rnn_size, self.rnn_size), nn.ReLU(), nn.Dropout(0.5)] for _ in range(opt.logit_layers - 1)] - self.logit = nn.Sequential(*(reduce(lambda x,y:x+y, self.logit) + [nn.Linear(self.rnn_size, self.vocab_size + 1)])) - self.ctx2att = nn.Linear(self.rnn_size, self.att_hid_size) - - # For remove bad endding - self.vocab = opt.vocab - self.bad_endings_ix = [int(k) for k,v in self.vocab.items() if v in bad_endings] - - def init_hidden(self, bsz): - weight = self.logit.weight \ - if hasattr(self.logit, "weight") \ - else self.logit[0].weight - return (weight.new_zeros(self.num_layers, bsz, self.rnn_size), - weight.new_zeros(self.num_layers, bsz, self.rnn_size)) - - def clip_att(self, att_feats, att_masks): - # Clip the length of att_masks and att_feats to the maximum length - if att_masks is not None: - max_len = att_masks.data.long().sum(1).max() - att_feats = att_feats[:, :max_len].contiguous() - att_masks = att_masks[:, :max_len].contiguous() - return att_feats, att_masks - - def _prepare_feature(self, fc_feats, att_feats, att_masks): - att_feats, att_masks = self.clip_att(att_feats, att_masks) - - # embed fc and att feats - fc_feats = self.fc_embed(fc_feats) - att_feats = pack_wrapper(self.att_embed, att_feats, att_masks) - - # Project the attention feats first to reduce memory and computation comsumptions. - p_att_feats = self.ctx2att(att_feats) - - return fc_feats, att_feats, p_att_feats, att_masks - - def _forward(self, fc_feats, att_feats, seq, att_masks=None): - batch_size = fc_feats.size(0) - if seq.ndim == 3: # B * seq_per_img * seq_len - seq = seq.reshape(-1, seq.shape[2]) - seq_per_img = seq.shape[0] // batch_size - state = self.init_hidden(batch_size*seq_per_img) - - outputs = fc_feats.new_zeros(batch_size*seq_per_img, seq.size(1), self.vocab_size+1) - - # Prepare the features - p_fc_feats, p_att_feats, pp_att_feats, p_att_masks = self._prepare_feature(fc_feats, att_feats, att_masks) - # pp_att_feats is used for attention, we cache it in advance to reduce computation cost - - if seq_per_img > 1: - p_fc_feats, p_att_feats, pp_att_feats, p_att_masks = utils.repeat_tensors(seq_per_img, - [p_fc_feats, p_att_feats, pp_att_feats, p_att_masks] - ) - - for i in range(seq.size(1)): - if self.training and i >= 1 and self.ss_prob > 0.0: # otherwiste no need to sample - sample_prob = fc_feats.new(batch_size*seq_per_img).uniform_(0, 1) - sample_mask = sample_prob < self.ss_prob - if sample_mask.sum() == 0: - it = seq[:, i].clone() - else: - sample_ind = sample_mask.nonzero().view(-1) - it = seq[:, i].data.clone() - prob_prev = torch.exp(outputs[:, i-1].detach()) # fetch prev distribution: shape Nx(M+1) - it.index_copy_(0, sample_ind, torch.multinomial(prob_prev, 1).view(-1).index_select(0, sample_ind)) - else: - it = seq[:, i].clone() - # break if all the sequences end - if i >= 1 and seq[:, i].sum() == 0: - break - - output, state = self.get_logprobs_state(it, p_fc_feats, p_att_feats, pp_att_feats, p_att_masks, state) - outputs[:, i] = output - - return outputs - - def get_logprobs_state(self, it, fc_feats, att_feats, p_att_feats, att_masks, state, output_logsoftmax=1): - # 'it' contains a word index - xt = self.embed(it) - - output, state = self.core(xt, 
fc_feats, att_feats, p_att_feats, state, att_masks) - if output_logsoftmax: - logprobs = F.log_softmax(self.logit(output), dim=1) - else: - logprobs = self.logit(output) - - return logprobs, state - - def _old_sample_beam(self, fc_feats, att_feats, att_masks=None, opt={}): - beam_size = opt.get('beam_size', 10) - group_size = opt.get('group_size', 1) - sample_n = opt.get('sample_n', 10) - # when sample_n == beam_size then each beam is a sample. - assert sample_n == 1 or sample_n == beam_size // group_size, 'when beam search, sample_n == 1 or beam search' - batch_size = fc_feats.size(0) - - p_fc_feats, p_att_feats, pp_att_feats, p_att_masks = self._prepare_feature(fc_feats, att_feats, att_masks) - - assert beam_size <= self.vocab_size + 1, 'lets assume this for now, otherwise this corner case causes a few headaches down the road. can be dealt with in future if needed' - seq = fc_feats.new_full((batch_size*sample_n, self.seq_length), self.pad_idx, dtype=torch.long) - seqLogprobs = fc_feats.new_zeros(batch_size*sample_n, self.seq_length, self.vocab_size + 1) - # lets process every image independently for now, for simplicity - - self.done_beams = [[] for _ in range(batch_size)] - for k in range(batch_size): - state = self.init_hidden(beam_size) - tmp_fc_feats, tmp_att_feats, tmp_p_att_feats, tmp_att_masks = utils.repeat_tensors(beam_size, - [p_fc_feats[k:k+1], p_att_feats[k:k+1], pp_att_feats[k:k+1], p_att_masks[k:k+1] if att_masks is not None else None] - ) - - for t in range(1): - if t == 0: # input - it = fc_feats.new_full([beam_size], self.bos_idx, dtype=torch.long) - - logprobs, state = self.get_logprobs_state(it, tmp_fc_feats, tmp_att_feats, tmp_p_att_feats, tmp_att_masks, state) - - self.done_beams[k] = self.old_beam_search(state, logprobs, tmp_fc_feats, tmp_att_feats, tmp_p_att_feats, tmp_att_masks, opt=opt) - if sample_n == beam_size: - for _n in range(sample_n): - seq[k*sample_n+_n, :] = self.done_beams[k][_n]['seq'] - seqLogprobs[k*sample_n+_n, :] = self.done_beams[k][_n]['logps'] - else: - seq[k, :] = self.done_beams[k][0]['seq'] # the first beam has highest cumulative score - seqLogprobs[k, :] = self.done_beams[k][0]['logps'] - # return the samples and their log likelihoods - return seq, seqLogprobs - - - def _sample_beam(self, fc_feats, att_feats, att_masks=None, opt={}): - beam_size = opt.get('beam_size', 10) - group_size = opt.get('group_size', 1) - sample_n = opt.get('sample_n', 10) - # when sample_n == beam_size then each beam is a sample. - assert sample_n == 1 or sample_n == beam_size // group_size, 'when beam search, sample_n == 1 or beam search' - batch_size = fc_feats.size(0) - - p_fc_feats, p_att_feats, pp_att_feats, p_att_masks = self._prepare_feature(fc_feats, att_feats, att_masks) - - assert beam_size <= self.vocab_size + 1, 'lets assume this for now, otherwise this corner case causes a few headaches down the road. 
can be dealt with in future if needed' - seq = fc_feats.new_full((batch_size*sample_n, self.seq_length), self.pad_idx, dtype=torch.long) - seqLogprobs = fc_feats.new_zeros(batch_size*sample_n, self.seq_length, self.vocab_size + 1) - # lets process every image independently for now, for simplicity - - self.done_beams = [[] for _ in range(batch_size)] - - state = self.init_hidden(batch_size) - - # first step, feed bos - it = fc_feats.new_full([batch_size], self.bos_idx, dtype=torch.long) - logprobs, state = self.get_logprobs_state(it, p_fc_feats, p_att_feats, pp_att_feats, p_att_masks, state) - - p_fc_feats, p_att_feats, pp_att_feats, p_att_masks = utils.repeat_tensors(beam_size, - [p_fc_feats, p_att_feats, pp_att_feats, p_att_masks] - ) - self.done_beams = self.beam_search(state, logprobs, p_fc_feats, p_att_feats, pp_att_feats, p_att_masks, opt=opt) - for k in range(batch_size): - if sample_n == beam_size: - for _n in range(sample_n): - seq_len = self.done_beams[k][_n]['seq'].shape[0] - seq[k*sample_n+_n, :seq_len] = self.done_beams[k][_n]['seq'] - seqLogprobs[k*sample_n+_n, :seq_len] = self.done_beams[k][_n]['logps'] - else: - seq_len = self.done_beams[k][0]['seq'].shape[0] - seq[k, :seq_len] = self.done_beams[k][0]['seq'] # the first beam has highest cumulative score - seqLogprobs[k, :seq_len] = self.done_beams[k][0]['logps'] - # return the samples and their log likelihoods - return seq, seqLogprobs - - def _sample(self, fc_feats, att_feats, att_masks=None, opt={}): - - sample_method = opt.get('sample_method', 'greedy') - beam_size = opt.get('beam_size', 1) - temperature = opt.get('temperature', 1.0) - sample_n = int(opt.get('sample_n', 1)) - group_size = opt.get('group_size', 1) - output_logsoftmax = opt.get('output_logsoftmax', 1) - decoding_constraint = opt.get('decoding_constraint', 0) - block_trigrams = opt.get('block_trigrams', 0) - remove_bad_endings = opt.get('remove_bad_endings', 0) - if beam_size > 1 and sample_method in ['greedy', 'beam_search']: - return self._sample_beam(fc_feats, att_feats, att_masks, opt) - if group_size > 1: - return self._diverse_sample(fc_feats, att_feats, att_masks, opt) - - batch_size = fc_feats.size(0) - state = self.init_hidden(batch_size*sample_n) - - p_fc_feats, p_att_feats, pp_att_feats, p_att_masks = self._prepare_feature(fc_feats, att_feats, att_masks) - - if sample_n > 1: - p_fc_feats, p_att_feats, pp_att_feats, p_att_masks = utils.repeat_tensors(sample_n, - [p_fc_feats, p_att_feats, pp_att_feats, p_att_masks] - ) - - trigrams = [] # will be a list of batch_size dictionaries - - seq = fc_feats.new_full((batch_size*sample_n, self.seq_length), self.pad_idx, dtype=torch.long) - seqLogprobs = fc_feats.new_zeros(batch_size*sample_n, self.seq_length, self.vocab_size + 1) - for t in range(self.seq_length + 1): - if t == 0: # input - it = fc_feats.new_full([batch_size*sample_n], self.bos_idx, dtype=torch.long) - - logprobs, state = self.get_logprobs_state(it, p_fc_feats, p_att_feats, pp_att_feats, p_att_masks, state, output_logsoftmax=output_logsoftmax) - - if decoding_constraint and t > 0: - tmp = logprobs.new_zeros(logprobs.size()) - tmp.scatter_(1, seq[:,t-1].data.unsqueeze(1), float('-inf')) - logprobs = logprobs + tmp - - if remove_bad_endings and t > 0: - tmp = logprobs.new_zeros(logprobs.size()) - prev_bad = np.isin(seq[:,t-1].data.cpu().numpy(), self.bad_endings_ix) - # Make it impossible to generate bad_endings - tmp[torch.from_numpy(prev_bad.astype('uint8')), 0] = float('-inf') - logprobs = logprobs + tmp - - # Mess with trigrams - # Copy 
from https://github.com/lukemelas/image-paragraph-captioning - if block_trigrams and t >= 3: - # Store trigram generated at last step - prev_two_batch = seq[:,t-3:t-1] - for i in range(batch_size): # = seq.size(0) - prev_two = (prev_two_batch[i][0].item(), prev_two_batch[i][1].item()) - current = seq[i][t-1] - if t == 3: # initialize - trigrams.append({prev_two: [current]}) # {LongTensor: list containing 1 int} - elif t > 3: - if prev_two in trigrams[i]: # add to list - trigrams[i][prev_two].append(current) - else: # create list - trigrams[i][prev_two] = [current] - # Block used trigrams at next step - prev_two_batch = seq[:,t-2:t] - mask = torch.zeros(logprobs.size(), requires_grad=False).to(logprobs.device) # batch_size x vocab_size - for i in range(batch_size): - prev_two = (prev_two_batch[i][0].item(), prev_two_batch[i][1].item()) - if prev_two in trigrams[i]: - for j in trigrams[i][prev_two]: - mask[i,j] += 1 - # Apply mask to log probs - #logprobs = logprobs - (mask * 1e9) - alpha = 2.0 # = 4 - logprobs = logprobs + (mask * -0.693 * alpha) # ln(1/2) * alpha (alpha -> infty works best) - - # sample the next word - if t == self.seq_length: # skip if we achieve maximum length - break - it, sampleLogprobs = self.sample_next_word(logprobs, sample_method, temperature) - - # stop when all finished - if t == 0: - unfinished = it != self.eos_idx - else: - it[~unfinished] = self.pad_idx # This allows eos_idx not being overwritten to 0 - logprobs = logprobs * unfinished.unsqueeze(1).to(logprobs) - unfinished = unfinished & (it != self.eos_idx) - seq[:,t] = it - seqLogprobs[:,t] = logprobs - # quit loop if all sequences have finished - if unfinished.sum() == 0: - break - - return seq, seqLogprobs - - def _diverse_sample(self, fc_feats, att_feats, att_masks=None, opt={}): - - sample_method = opt.get('sample_method', 'greedy') - beam_size = opt.get('beam_size', 1) - temperature = opt.get('temperature', 1.0) - group_size = opt.get('group_size', 1) - diversity_lambda = opt.get('diversity_lambda', 0.5) - decoding_constraint = opt.get('decoding_constraint', 0) - block_trigrams = opt.get('block_trigrams', 0) - remove_bad_endings = opt.get('remove_bad_endings', 0) - - batch_size = fc_feats.size(0) - state = self.init_hidden(batch_size) - - p_fc_feats, p_att_feats, pp_att_feats, p_att_masks = self._prepare_feature(fc_feats, att_feats, att_masks) - - trigrams_table = [[] for _ in range(group_size)] # will be a list of batch_size dictionaries - - seq_table = [fc_feats.new_full((batch_size, self.seq_length), self.pad_idx, dtype=torch.long) for _ in range(group_size)] - seqLogprobs_table = [fc_feats.new_zeros(batch_size, self.seq_length) for _ in range(group_size)] - state_table = [self.init_hidden(batch_size) for _ in range(group_size)] - - for tt in range(self.seq_length + group_size): - for divm in range(group_size): - t = tt - divm - seq = seq_table[divm] - seqLogprobs = seqLogprobs_table[divm] - trigrams = trigrams_table[divm] - if t >= 0 and t <= self.seq_length-1: - if t == 0: # input - it = fc_feats.new_full([batch_size], self.bos_idx, dtype=torch.long) - else: - it = seq[:, t-1] # changed - - logprobs, state_table[divm] = self.get_logprobs_state(it, p_fc_feats, p_att_feats, pp_att_feats, p_att_masks, state_table[divm]) # changed - logprobs = F.log_softmax(logprobs / temperature, dim=-1) - - # Add diversity - if divm > 0: - unaug_logprobs = logprobs.clone() - for prev_choice in range(divm): - prev_decisions = seq_table[prev_choice][:, t] - logprobs[:, prev_decisions] = logprobs[:, prev_decisions] - 
diversity_lambda - - if decoding_constraint and t > 0: - tmp = logprobs.new_zeros(logprobs.size()) - tmp.scatter_(1, seq[:,t-1].data.unsqueeze(1), float('-inf')) - logprobs = logprobs + tmp - - if remove_bad_endings and t > 0: - tmp = logprobs.new_zeros(logprobs.size()) - prev_bad = np.isin(seq[:,t-1].data.cpu().numpy(), self.bad_endings_ix) - # Impossible to generate remove_bad_endings - tmp[torch.from_numpy(prev_bad.astype('uint8')), 0] = float('-inf') - logprobs = logprobs + tmp - - # Mess with trigrams - if block_trigrams and t >= 3: - # Store trigram generated at last step - prev_two_batch = seq[:,t-3:t-1] - for i in range(batch_size): # = seq.size(0) - prev_two = (prev_two_batch[i][0].item(), prev_two_batch[i][1].item()) - current = seq[i][t-1] - if t == 3: # initialize - trigrams.append({prev_two: [current]}) # {LongTensor: list containing 1 int} - elif t > 3: - if prev_two in trigrams[i]: # add to list - trigrams[i][prev_two].append(current) - else: # create list - trigrams[i][prev_two] = [current] - # Block used trigrams at next step - prev_two_batch = seq[:,t-2:t] - mask = torch.zeros(logprobs.size(), requires_grad=False).cuda() # batch_size x vocab_size - for i in range(batch_size): - prev_two = (prev_two_batch[i][0].item(), prev_two_batch[i][1].item()) - if prev_two in trigrams[i]: - for j in trigrams[i][prev_two]: - mask[i,j] += 1 - # Apply mask to log probs - #logprobs = logprobs - (mask * 1e9) - alpha = 2.0 # = 4 - logprobs = logprobs + (mask * -0.693 * alpha) # ln(1/2) * alpha (alpha -> infty works best) - - it, sampleLogprobs = self.sample_next_word(logprobs, sample_method, 1) - - # stop when all finished - if t == 0: - unfinished = it != self.eos_idx - else: - unfinished = (seq[:,t-1] != self.pad_idx) & (seq[:,t-1] != self.eos_idx) - it[~unfinished] = self.pad_idx - unfinished = unfinished & (it != self.eos_idx) # changed - seq[:,t] = it - seqLogprobs[:,t] = sampleLogprobs.view(-1) - - return torch.stack(seq_table, 1).reshape(batch_size * group_size, -1), torch.stack(seqLogprobs_table, 1).reshape(batch_size * group_size, -1) - -class AdaAtt_lstm(nn.Module): - def __init__(self, opt, use_maxout=True): - super(AdaAtt_lstm, self).__init__() - self.input_encoding_size = opt.input_encoding_size - #self.rnn_type = opt.rnn_type - self.rnn_size = opt.rnn_size - self.num_layers = opt.num_layers - self.drop_prob_lm = opt.drop_prob_lm - self.fc_feat_size = opt.fc_feat_size - self.att_feat_size = opt.att_feat_size - self.att_hid_size = opt.att_hid_size - - self.use_maxout = use_maxout - - # Build a LSTM - self.w2h = nn.Linear(self.input_encoding_size, (4+(use_maxout==True)) * self.rnn_size) - self.v2h = nn.Linear(self.rnn_size, (4+(use_maxout==True)) * self.rnn_size) - - self.i2h = nn.ModuleList([nn.Linear(self.rnn_size, (4+(use_maxout==True)) * self.rnn_size) for _ in range(self.num_layers - 1)]) - self.h2h = nn.ModuleList([nn.Linear(self.rnn_size, (4+(use_maxout==True)) * self.rnn_size) for _ in range(self.num_layers)]) - - # Layers for getting the fake region - if self.num_layers == 1: - self.r_w2h = nn.Linear(self.input_encoding_size, self.rnn_size) - self.r_v2h = nn.Linear(self.rnn_size, self.rnn_size) - else: - self.r_i2h = nn.Linear(self.rnn_size, self.rnn_size) - self.r_h2h = nn.Linear(self.rnn_size, self.rnn_size) - - - def forward(self, xt, img_fc, state): - - hs = [] - cs = [] - for L in range(self.num_layers): - # c,h from previous timesteps - prev_h = state[0][L] - prev_c = state[1][L] - # the input to this layer - if L == 0: - x = xt - i2h = self.w2h(x) + 
self.v2h(img_fc) - else: - x = hs[-1] - x = F.dropout(x, self.drop_prob_lm, self.training) - i2h = self.i2h[L-1](x) - - all_input_sums = i2h+self.h2h[L](prev_h) - - sigmoid_chunk = all_input_sums.narrow(1, 0, 3 * self.rnn_size) - sigmoid_chunk = torch.sigmoid(sigmoid_chunk) - # decode the gates - in_gate = sigmoid_chunk.narrow(1, 0, self.rnn_size) - forget_gate = sigmoid_chunk.narrow(1, self.rnn_size, self.rnn_size) - out_gate = sigmoid_chunk.narrow(1, self.rnn_size * 2, self.rnn_size) - # decode the write inputs - if not self.use_maxout: - in_transform = torch.tanh(all_input_sums.narrow(1, 3 * self.rnn_size, self.rnn_size)) - else: - in_transform = all_input_sums.narrow(1, 3 * self.rnn_size, 2 * self.rnn_size) - in_transform = torch.max(\ - in_transform.narrow(1, 0, self.rnn_size), - in_transform.narrow(1, self.rnn_size, self.rnn_size)) - # perform the LSTM update - next_c = forget_gate * prev_c + in_gate * in_transform - # gated cells form the output - tanh_nex_c = torch.tanh(next_c) - next_h = out_gate * tanh_nex_c - if L == self.num_layers-1: - if L == 0: - i2h = self.r_w2h(x) + self.r_v2h(img_fc) - else: - i2h = self.r_i2h(x) - n5 = i2h+self.r_h2h(prev_h) - fake_region = torch.sigmoid(n5) * tanh_nex_c - - cs.append(next_c) - hs.append(next_h) - - # set up the decoder - top_h = hs[-1] - top_h = F.dropout(top_h, self.drop_prob_lm, self.training) - fake_region = F.dropout(fake_region, self.drop_prob_lm, self.training) - - state = (torch.cat([_.unsqueeze(0) for _ in hs], 0), - torch.cat([_.unsqueeze(0) for _ in cs], 0)) - return top_h, fake_region, state - -class AdaAtt_attention(nn.Module): - def __init__(self, opt): - super(AdaAtt_attention, self).__init__() - self.input_encoding_size = opt.input_encoding_size - #self.rnn_type = opt.rnn_type - self.rnn_size = opt.rnn_size - self.drop_prob_lm = opt.drop_prob_lm - self.att_hid_size = opt.att_hid_size - - # fake region embed - self.fr_linear = nn.Sequential( - nn.Linear(self.rnn_size, self.input_encoding_size), - nn.ReLU(), - nn.Dropout(self.drop_prob_lm)) - self.fr_embed = nn.Linear(self.input_encoding_size, self.att_hid_size) - - # h out embed - self.ho_linear = nn.Sequential( - nn.Linear(self.rnn_size, self.input_encoding_size), - nn.Tanh(), - nn.Dropout(self.drop_prob_lm)) - self.ho_embed = nn.Linear(self.input_encoding_size, self.att_hid_size) - - self.alpha_net = nn.Linear(self.att_hid_size, 1) - self.att2h = nn.Linear(self.rnn_size, self.rnn_size) - - def forward(self, h_out, fake_region, conv_feat, conv_feat_embed, att_masks=None): - - # View into three dimensions - att_size = conv_feat.numel() // conv_feat.size(0) // self.rnn_size - conv_feat = conv_feat.view(-1, att_size, self.rnn_size) - conv_feat_embed = conv_feat_embed.view(-1, att_size, self.att_hid_size) - - # view neighbor from bach_size * neighbor_num x rnn_size to bach_size x rnn_size * neighbor_num - fake_region = self.fr_linear(fake_region) - fake_region_embed = self.fr_embed(fake_region) - - h_out_linear = self.ho_linear(h_out) - h_out_embed = self.ho_embed(h_out_linear) - - txt_replicate = h_out_embed.unsqueeze(1).expand(h_out_embed.size(0), att_size + 1, h_out_embed.size(1)) - - img_all = torch.cat([fake_region.view(-1,1,self.input_encoding_size), conv_feat], 1) - img_all_embed = torch.cat([fake_region_embed.view(-1,1,self.input_encoding_size), conv_feat_embed], 1) - - hA = torch.tanh(img_all_embed + txt_replicate) - hA = F.dropout(hA,self.drop_prob_lm, self.training) - - hAflat = self.alpha_net(hA.view(-1, self.att_hid_size)) - PI = F.softmax(hAflat.view(-1, 
att_size + 1), dim=1) - - if att_masks is not None: - att_masks = att_masks.view(-1, att_size) - PI = PI * torch.cat([att_masks[:,:1], att_masks], 1) # assume one one at the first time step. - PI = PI / PI.sum(1, keepdim=True) - - visAtt = torch.bmm(PI.unsqueeze(1), img_all) - visAttdim = visAtt.squeeze(1) - - atten_out = visAttdim + h_out_linear - - h = torch.tanh(self.att2h(atten_out)) - h = F.dropout(h, self.drop_prob_lm, self.training) - return h - -class AdaAttCore(nn.Module): - def __init__(self, opt, use_maxout=False): - super(AdaAttCore, self).__init__() - self.lstm = AdaAtt_lstm(opt, use_maxout) - self.attention = AdaAtt_attention(opt) - - def forward(self, xt, fc_feats, att_feats, p_att_feats, state, att_masks=None): - h_out, p_out, state = self.lstm(xt, fc_feats, state) - atten_out = self.attention(h_out, p_out, att_feats, p_att_feats, att_masks) - return atten_out, state - -class UpDownCore(nn.Module): - def __init__(self, opt, use_maxout=False): - super(UpDownCore, self).__init__() - self.drop_prob_lm = opt.drop_prob_lm - - self.att_lstm = nn.LSTMCell(opt.input_encoding_size + opt.rnn_size * 2, opt.rnn_size) # we, fc, h^2_t-1 - self.lang_lstm = nn.LSTMCell(opt.rnn_size * 2, opt.rnn_size) # h^1_t, \hat v - self.attention = Attention(opt) - - def forward(self, xt, fc_feats, att_feats, p_att_feats, state, att_masks=None): - prev_h = state[0][-1] - att_lstm_input = torch.cat([prev_h, fc_feats, xt], 1) - - h_att, c_att = self.att_lstm(att_lstm_input, (state[0][0], state[1][0])) - - att = self.attention(h_att, att_feats, p_att_feats, att_masks) - - lang_lstm_input = torch.cat([att, h_att], 1) - # lang_lstm_input = torch.cat([att, F.dropout(h_att, self.drop_prob_lm, self.training)], 1) ????? - - h_lang, c_lang = self.lang_lstm(lang_lstm_input, (state[0][1], state[1][1])) - - output = F.dropout(h_lang, self.drop_prob_lm, self.training) - state = (torch.stack([h_att, h_lang]), torch.stack([c_att, c_lang])) - - return output, state - - -############################################################################ -# Notice: -# StackAtt and DenseAtt are models that I randomly designed. -# They are not related to any paper. 
-############################################################################ - -from .FCModel import LSTMCore -class StackAttCore(nn.Module): - def __init__(self, opt, use_maxout=False): - super(StackAttCore, self).__init__() - self.drop_prob_lm = opt.drop_prob_lm - - # self.att0 = Attention(opt) - self.att1 = Attention(opt) - self.att2 = Attention(opt) - - opt_input_encoding_size = opt.input_encoding_size - opt.input_encoding_size = opt.input_encoding_size + opt.rnn_size - self.lstm0 = LSTMCore(opt) # att_feat + word_embedding - opt.input_encoding_size = opt.rnn_size * 2 - self.lstm1 = LSTMCore(opt) - self.lstm2 = LSTMCore(opt) - opt.input_encoding_size = opt_input_encoding_size - - # self.emb1 = nn.Linear(opt.rnn_size, opt.rnn_size) - self.emb2 = nn.Linear(opt.rnn_size, opt.rnn_size) - - def forward(self, xt, fc_feats, att_feats, p_att_feats, state, att_masks=None): - # att_res_0 = self.att0(state[0][-1], att_feats, p_att_feats, att_masks) - h_0, state_0 = self.lstm0(torch.cat([xt,fc_feats],1), [state[0][0:1], state[1][0:1]]) - att_res_1 = self.att1(h_0, att_feats, p_att_feats, att_masks) - h_1, state_1 = self.lstm1(torch.cat([h_0,att_res_1],1), [state[0][1:2], state[1][1:2]]) - att_res_2 = self.att2(h_1 + self.emb2(att_res_1), att_feats, p_att_feats, att_masks) - h_2, state_2 = self.lstm2(torch.cat([h_1,att_res_2],1), [state[0][2:3], state[1][2:3]]) - - return h_2, [torch.cat(_, 0) for _ in zip(state_0, state_1, state_2)] - -class DenseAttCore(nn.Module): - def __init__(self, opt, use_maxout=False): - super(DenseAttCore, self).__init__() - self.drop_prob_lm = opt.drop_prob_lm - - # self.att0 = Attention(opt) - self.att1 = Attention(opt) - self.att2 = Attention(opt) - - opt_input_encoding_size = opt.input_encoding_size - opt.input_encoding_size = opt.input_encoding_size + opt.rnn_size - self.lstm0 = LSTMCore(opt) # att_feat + word_embedding - opt.input_encoding_size = opt.rnn_size * 2 - self.lstm1 = LSTMCore(opt) - self.lstm2 = LSTMCore(opt) - opt.input_encoding_size = opt_input_encoding_size - - # self.emb1 = nn.Linear(opt.rnn_size, opt.rnn_size) - self.emb2 = nn.Linear(opt.rnn_size, opt.rnn_size) - - # fuse h_0 and h_1 - self.fusion1 = nn.Sequential(nn.Linear(opt.rnn_size*2, opt.rnn_size), - nn.ReLU(), - nn.Dropout(opt.drop_prob_lm)) - # fuse h_0, h_1 and h_2 - self.fusion2 = nn.Sequential(nn.Linear(opt.rnn_size*3, opt.rnn_size), - nn.ReLU(), - nn.Dropout(opt.drop_prob_lm)) - - def forward(self, xt, fc_feats, att_feats, p_att_feats, state, att_masks=None): - # att_res_0 = self.att0(state[0][-1], att_feats, p_att_feats, att_masks) - h_0, state_0 = self.lstm0(torch.cat([xt,fc_feats],1), [state[0][0:1], state[1][0:1]]) - att_res_1 = self.att1(h_0, att_feats, p_att_feats, att_masks) - h_1, state_1 = self.lstm1(torch.cat([h_0,att_res_1],1), [state[0][1:2], state[1][1:2]]) - att_res_2 = self.att2(h_1 + self.emb2(att_res_1), att_feats, p_att_feats, att_masks) - h_2, state_2 = self.lstm2(torch.cat([self.fusion1(torch.cat([h_0, h_1], 1)),att_res_2],1), [state[0][2:3], state[1][2:3]]) - - return self.fusion2(torch.cat([h_0, h_1, h_2], 1)), [torch.cat(_, 0) for _ in zip(state_0, state_1, state_2)] - -class Attention(nn.Module): - def __init__(self, opt): - super(Attention, self).__init__() - self.rnn_size = opt.rnn_size - self.att_hid_size = opt.att_hid_size - - self.h2att = nn.Linear(self.rnn_size, self.att_hid_size) - self.alpha_net = nn.Linear(self.att_hid_size, 1) - - def forward(self, h, att_feats, p_att_feats, att_masks=None): - # The p_att_feats here is already projected - att_size = 
att_feats.numel() // att_feats.size(0) // att_feats.size(-1) - att = p_att_feats.view(-1, att_size, self.att_hid_size) - - att_h = self.h2att(h) # batch * att_hid_size - att_h = att_h.unsqueeze(1).expand_as(att) # batch * att_size * att_hid_size - dot = att + att_h # batch * att_size * att_hid_size - dot = torch.tanh(dot) # batch * att_size * att_hid_size - dot = dot.view(-1, self.att_hid_size) # (batch * att_size) * att_hid_size - dot = self.alpha_net(dot) # (batch * att_size) * 1 - dot = dot.view(-1, att_size) # batch * att_size - - weight = F.softmax(dot, dim=1) # batch * att_size - if att_masks is not None: - weight = weight * att_masks.view(-1, att_size).to(weight) - weight = weight / weight.sum(1, keepdim=True) # normalize to 1 - att_feats_ = att_feats.view(-1, att_size, att_feats.size(-1)) # batch * att_size * att_feat_size - att_res = torch.bmm(weight.unsqueeze(1), att_feats_).squeeze(1) # batch * att_feat_size - - return att_res - -class Att2in2Core(nn.Module): - def __init__(self, opt): - super(Att2in2Core, self).__init__() - self.input_encoding_size = opt.input_encoding_size - #self.rnn_type = opt.rnn_type - self.rnn_size = opt.rnn_size - #self.num_layers = opt.num_layers - self.drop_prob_lm = opt.drop_prob_lm - self.fc_feat_size = opt.fc_feat_size - self.att_feat_size = opt.att_feat_size - self.att_hid_size = opt.att_hid_size - - # Build a LSTM - self.a2c = nn.Linear(self.rnn_size, 2 * self.rnn_size) - self.i2h = nn.Linear(self.input_encoding_size, 5 * self.rnn_size) - self.h2h = nn.Linear(self.rnn_size, 5 * self.rnn_size) - self.dropout = nn.Dropout(self.drop_prob_lm) - - self.attention = Attention(opt) - - def forward(self, xt, fc_feats, att_feats, p_att_feats, state, att_masks=None): - att_res = self.attention(state[0][-1], att_feats, p_att_feats, att_masks) - - all_input_sums = self.i2h(xt) + self.h2h(state[0][-1]) - sigmoid_chunk = all_input_sums.narrow(1, 0, 3 * self.rnn_size) - sigmoid_chunk = torch.sigmoid(sigmoid_chunk) - in_gate = sigmoid_chunk.narrow(1, 0, self.rnn_size) - forget_gate = sigmoid_chunk.narrow(1, self.rnn_size, self.rnn_size) - out_gate = sigmoid_chunk.narrow(1, self.rnn_size * 2, self.rnn_size) - - in_transform = all_input_sums.narrow(1, 3 * self.rnn_size, 2 * self.rnn_size) + \ - self.a2c(att_res) - in_transform = torch.max(\ - in_transform.narrow(1, 0, self.rnn_size), - in_transform.narrow(1, self.rnn_size, self.rnn_size)) - next_c = forget_gate * state[1][-1] + in_gate * in_transform - next_h = out_gate * torch.tanh(next_c) - - output = self.dropout(next_h) - state = (next_h.unsqueeze(0), next_c.unsqueeze(0)) - return output, state - -class Att2inCore(Att2in2Core): - def __init__(self, opt): - super(Att2inCore, self).__init__(opt) - del self.a2c - self.a2c = nn.Linear(self.att_feat_size, 2 * self.rnn_size) - -""" -Note this is my attempt to replicate att2all model in self-critical paper. -However, this is not a correct replication actually. Will fix it. 
-""" -class Att2all2Core(nn.Module): - def __init__(self, opt): - super(Att2all2Core, self).__init__() - self.input_encoding_size = opt.input_encoding_size - #self.rnn_type = opt.rnn_type - self.rnn_size = opt.rnn_size - #self.num_layers = opt.num_layers - self.drop_prob_lm = opt.drop_prob_lm - self.fc_feat_size = opt.fc_feat_size - self.att_feat_size = opt.att_feat_size - self.att_hid_size = opt.att_hid_size - - # Build a LSTM - self.a2h = nn.Linear(self.rnn_size, 5 * self.rnn_size) - self.i2h = nn.Linear(self.input_encoding_size, 5 * self.rnn_size) - self.h2h = nn.Linear(self.rnn_size, 5 * self.rnn_size) - self.dropout = nn.Dropout(self.drop_prob_lm) - - self.attention = Attention(opt) - - def forward(self, xt, fc_feats, att_feats, p_att_feats, state, att_masks=None): - att_res = self.attention(state[0][-1], att_feats, p_att_feats, att_masks) - - all_input_sums = self.i2h(xt) + self.h2h(state[0][-1]) + self.a2h(att_res) - sigmoid_chunk = all_input_sums.narrow(1, 0, 3 * self.rnn_size) - sigmoid_chunk = torch.sigmoid(sigmoid_chunk) - in_gate = sigmoid_chunk.narrow(1, 0, self.rnn_size) - forget_gate = sigmoid_chunk.narrow(1, self.rnn_size, self.rnn_size) - out_gate = sigmoid_chunk.narrow(1, self.rnn_size * 2, self.rnn_size) - - in_transform = all_input_sums.narrow(1, 3 * self.rnn_size, 2 * self.rnn_size) - in_transform = torch.max(\ - in_transform.narrow(1, 0, self.rnn_size), - in_transform.narrow(1, self.rnn_size, self.rnn_size)) - next_c = forget_gate * state[1][-1] + in_gate * in_transform - next_h = out_gate * torch.tanh(next_c) - - output = self.dropout(next_h) - state = (next_h.unsqueeze(0), next_c.unsqueeze(0)) - return output, state - -class AdaAttModel(AttModel): - def __init__(self, opt): - super(AdaAttModel, self).__init__(opt) - self.core = AdaAttCore(opt) - -# AdaAtt with maxout lstm -class AdaAttMOModel(AttModel): - def __init__(self, opt): - super(AdaAttMOModel, self).__init__(opt) - self.core = AdaAttCore(opt, True) - -class Att2in2Model(AttModel): - def __init__(self, opt): - super(Att2in2Model, self).__init__(opt) - self.core = Att2in2Core(opt) - delattr(self, 'fc_embed') - self.fc_embed = lambda x : x - -class Att2all2Model(AttModel): - def __init__(self, opt): - super(Att2all2Model, self).__init__(opt) - self.core = Att2all2Core(opt) - delattr(self, 'fc_embed') - self.fc_embed = lambda x : x - -class UpDownModel(AttModel): - def __init__(self, opt): - super(UpDownModel, self).__init__(opt) - self.num_layers = 2 - self.core = UpDownCore(opt) - -class StackAttModel(AttModel): - def __init__(self, opt): - super(StackAttModel, self).__init__(opt) - self.num_layers = 3 - self.core = StackAttCore(opt) - -class DenseAttModel(AttModel): - def __init__(self, opt): - super(DenseAttModel, self).__init__(opt) - self.num_layers = 3 - self.core = DenseAttCore(opt) - -class Att2inModel(AttModel): - def __init__(self, opt): - super(Att2inModel, self).__init__(opt) - del self.embed, self.fc_embed, self.att_embed - self.embed = nn.Embedding(self.vocab_size + 1, self.input_encoding_size) - self.fc_embed = self.att_embed = lambda x: x - del self.ctx2att - self.ctx2att = nn.Linear(self.att_feat_size, self.att_hid_size) - self.core = Att2inCore(opt) - self.init_weights() - - def init_weights(self): - initrange = 0.1 - self.embed.weight.data.uniform_(-initrange, initrange) - self.logit.bias.data.fill_(0) - self.logit.weight.data.uniform_(-initrange, initrange) - - -class NewFCModel(AttModel): - def __init__(self, opt): - super(NewFCModel, self).__init__(opt) - self.fc_embed = 
nn.Linear(self.fc_feat_size, self.input_encoding_size) - self.embed = nn.Embedding(self.vocab_size + 1, self.input_encoding_size) - self._core = LSTMCore(opt) - delattr(self, 'att_embed') - self.att_embed = lambda x : x - delattr(self, 'ctx2att') - self.ctx2att = lambda x: x - - def core(self, xt, fc_feats, att_feats, p_att_feats, state, att_masks): - # Step 0, feed the input image - # if (self.training and state[0].is_leaf) or \ - # (not self.training and state[0].sum() == 0): - # _, state = self._core(fc_feats, state) - # three cases - # normal mle training - # Sample - # beam search (diverse beam search) - # fixed captioning module. - is_first_step = (state[0]==0).all(2).all(0) # size: B - if is_first_step.all(): - _, state = self._core(fc_feats, state) - elif is_first_step.any(): - # This is mostly for diverse beam search I think - new_state = [torch.zeros_like(_) for _ in state] - new_state[0][:, ~is_first_step] = state[0][:, ~is_first_step] - new_state[1][:, ~is_first_step] = state[1][:, ~is_first_step] - _, state = self._core(fc_feats, state) - new_state[0][:, is_first_step] = state[0][:, is_first_step] - new_state[1][:, is_first_step] = state[1][:, is_first_step] - state = new_state - # if (state[0]==0).all(): - # # Let's forget about diverse beam search first - # _, state = self._core(fc_feats, state) - return self._core(xt, state) - - def _prepare_feature(self, fc_feats, att_feats, att_masks): - fc_feats = self.fc_embed(fc_feats) - - return fc_feats, att_feats, att_feats, att_masks - - -class LMModel(AttModel): - def __init__(self, opt): - super(LMModel, self).__init__(opt) - delattr(self, 'fc_embed') - self.fc_embed = lambda x: x.new_zeros(x.shape[0], self.input_encoding_size) - self.embed = nn.Embedding(self.vocab_size + 1, self.input_encoding_size) - self._core = LSTMCore(opt) - delattr(self, 'att_embed') - self.att_embed = lambda x : x - delattr(self, 'ctx2att') - self.ctx2att = lambda x: x - - def core(self, xt, fc_feats, att_feats, p_att_feats, state, att_masks): - if (state[0]==0).all(): - # Let's forget about diverse beam search first - _, state = self._core(fc_feats, state) - return self._core(xt, state) - - def _prepare_feature(self, fc_feats, att_feats, att_masks): - fc_feats = self.fc_embed(fc_feats) - - return fc_feats, None, None, None \ No newline at end of file diff --git a/spaces/mishtert/tracer/meta.py b/spaces/mishtert/tracer/meta.py deleted file mode 100644 index 63ec8ce438073b078a1e8277ecc51ae813a4b9b0..0000000000000000000000000000000000000000 --- a/spaces/mishtert/tracer/meta.py +++ /dev/null @@ -1,35 +0,0 @@ -HEADER_INFO = """""".strip() -SIDEBAR_INFO = """ - -""" - - -CONCEPT_INFO = """ -
    -

    Tracer Data Flow

    -Domain Agnostic Concept -
    -""" - - -CHEF_INFO = """ -

    Welcome to Tracer!

    -

-(your aide to help you find the specific answer within a myriad of textual content in seconds) -

    -""".strip() -PROMPT_BOX = "Add custom ingredients here (separated by `,`): " -STORY = """

Hello everyone 👋, I am Tracer! - -Tracer helps you find the information you are looking for within seconds, in a large text that would otherwise take you minutes or hours to go -through. - -Tracer can be customized to specific needs, looking only for specific information if required, and can build -generative text on the answer - -

    - -
    -""".strip() diff --git a/spaces/mithril-security/blind_chat/src/lib/server/generateFromDefaultEndpoint.ts b/spaces/mithril-security/blind_chat/src/lib/server/generateFromDefaultEndpoint.ts deleted file mode 100644 index 8b16bf80bc70f4f9c179d38e4388798b69e212ca..0000000000000000000000000000000000000000 --- a/spaces/mithril-security/blind_chat/src/lib/server/generateFromDefaultEndpoint.ts +++ /dev/null @@ -1,104 +0,0 @@ -import { defaultModel } from "$lib/server/models"; -import { modelEndpoint } from "./modelEndpoint"; -import { trimSuffix } from "$lib/utils/trimSuffix"; -import { trimPrefix } from "$lib/utils/trimPrefix"; -import { PUBLIC_SEP_TOKEN } from "$lib/constants/publicSepToken"; -import { AwsClient } from "aws4fetch"; - -interface Parameters { - temperature: number; - truncate: number; - max_new_tokens: number; - stop: string[]; -} -export async function generateFromDefaultEndpoint( - prompt: string, - parameters?: Partial -) { - const newParameters = { - ...defaultModel.parameters, - ...parameters, - return_full_text: false, - }; - - const randomEndpoint = modelEndpoint(defaultModel); - - const abortController = new AbortController(); - - let resp: Response; - - if (randomEndpoint.host === "sagemaker") { - const requestParams = JSON.stringify({ - ...newParameters, - inputs: prompt, - }); - - const aws = new AwsClient({ - accessKeyId: randomEndpoint.accessKey, - secretAccessKey: randomEndpoint.secretKey, - sessionToken: randomEndpoint.sessionToken, - service: "sagemaker", - }); - - resp = await aws.fetch(randomEndpoint.url, { - method: "POST", - body: requestParams, - signal: abortController.signal, - headers: { - "Content-Type": "application/json", - }, - }); - } else { - resp = await fetch(randomEndpoint.url, { - headers: { - "Content-Type": "application/json", - Authorization: randomEndpoint.authorization, - }, - method: "POST", - body: JSON.stringify({ - ...newParameters, - inputs: prompt, - }), - signal: abortController.signal, - }); - } - - if (!resp.ok) { - throw new Error(await resp.text()); - } - - if (!resp.body) { - throw new Error("Response body is empty"); - } - - const decoder = new TextDecoder(); - const reader = resp.body.getReader(); - - let isDone = false; - let result = ""; - - while (!isDone) { - const { done, value } = await reader.read(); - - isDone = done; - result += decoder.decode(value, { stream: true }); // Convert current chunk to text - } - - // Close the reader when done - reader.releaseLock(); - - const results = await JSON.parse(result); - - let generated_text = trimSuffix( - trimPrefix(trimPrefix(results[0].generated_text, "<|startoftext|>"), prompt), - PUBLIC_SEP_TOKEN - ).trimEnd(); - - for (const stop of [...(newParameters?.stop ?? 
[]), "<|endoftext|>"]) { - if (generated_text.endsWith(stop)) { - generated_text = generated_text.slice(0, -stop.length).trimEnd(); - } - } - - return generated_text; -} diff --git a/spaces/mithril-security/blind_chat/src/routes/conversation/[id]/phi/m_bg.wasm.d.ts b/spaces/mithril-security/blind_chat/src/routes/conversation/[id]/phi/m_bg.wasm.d.ts deleted file mode 100644 index 4afd6e17fb21f0631772cb8ea4efd4a39d8a2666..0000000000000000000000000000000000000000 --- a/spaces/mithril-security/blind_chat/src/routes/conversation/[id]/phi/m_bg.wasm.d.ts +++ /dev/null @@ -1,24 +0,0 @@ -/* tslint:disable */ -/* eslint-disable */ -export const memory: WebAssembly.Memory; -export function __wbg_model_free(a: number): void; -export function model_load(a: number, b: number, c: number, d: number, e: number, f: number): void; -export function model_init_with_prompt( - a: number, - b: number, - c: number, - d: number, - e: number, - f: number, - g: number, - h: number, - i: number -): void; -export function model_next_token(a: number, b: number): void; -export function main(a: number, b: number): number; -export function __wbindgen_add_to_stack_pointer(a: number): number; -export function __wbindgen_malloc(a: number, b: number): number; -export function __wbindgen_realloc(a: number, b: number, c: number, d: number): number; -export function __wbindgen_free(a: number, b: number, c: number): void; -export function __wbindgen_exn_store(a: number): void; -export function __wbindgen_start(): void; diff --git a/spaces/moro23/sentiment-anlysis-app/README.md b/spaces/moro23/sentiment-anlysis-app/README.md deleted file mode 100644 index 346d85f04a887c3ecb49526c40947a89f4f0164b..0000000000000000000000000000000000000000 --- a/spaces/moro23/sentiment-anlysis-app/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sentiment Anlysis App -emoji: 📉 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature.py b/spaces/mshukor/UnIVAL/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature.py deleted file mode 100644 index 5c7b67f8b1967ca515c5f7606253b46f903ea37e..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/hubert/simple_kmeans/dump_hubert_feature.py +++ /dev/null @@ -1,93 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import logging -import os -import sys - -import fairseq -import soundfile as sf -import torch -import torch.nn.functional as F - -from feature_utils import get_path_iterator, dump_feature - - -logging.basicConfig( - format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - level=os.environ.get("LOGLEVEL", "INFO").upper(), - stream=sys.stdout, -) -logger = logging.getLogger("dump_hubert_feature") - - -class HubertFeatureReader(object): - def __init__(self, ckpt_path, layer, max_chunk=1600000): - ( - model, - cfg, - task, - ) = fairseq.checkpoint_utils.load_model_ensemble_and_task([ckpt_path]) - self.model = model[0].eval().cuda() - self.task = task - self.layer = layer - self.max_chunk = max_chunk - logger.info(f"TASK CONFIG:\n{self.task.cfg}") - logger.info(f" max_chunk = {self.max_chunk}") - - def read_audio(self, path, ref_len=None): - wav, sr = sf.read(path) - assert sr == self.task.cfg.sample_rate, sr - if wav.ndim == 2: - wav = wav.mean(-1) - assert wav.ndim == 1, wav.ndim - if ref_len is not None and abs(ref_len - len(wav)) > 160: - logging.warning(f"ref {ref_len} != read {len(wav)} ({path})") - return wav - - def get_feats(self, path, ref_len=None): - x = self.read_audio(path, ref_len) - with torch.no_grad(): - x = torch.from_numpy(x).float().cuda() - if self.task.cfg.normalize: - x = F.layer_norm(x, x.shape) - x = x.view(1, -1) - - feat = [] - for start in range(0, x.size(1), self.max_chunk): - x_chunk = x[:, start: start + self.max_chunk] - feat_chunk, _ = self.model.extract_features( - source=x_chunk, - padding_mask=None, - mask=False, - output_layer=self.layer, - ) - feat.append(feat_chunk) - return torch.cat(feat, 1).squeeze(0) - - -def main(tsv_dir, split, ckpt_path, layer, nshard, rank, feat_dir, max_chunk): - reader = HubertFeatureReader(ckpt_path, layer, max_chunk) - generator, num = get_path_iterator(f"{tsv_dir}/{split}.tsv", nshard, rank) - dump_feature(reader, generator, num, split, nshard, rank, feat_dir) - - -if __name__ == "__main__": - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("tsv_dir") - parser.add_argument("split") - parser.add_argument("ckpt_path") - parser.add_argument("layer", type=int) - parser.add_argument("nshard", type=int) - parser.add_argument("rank", type=int) - parser.add_argument("feat_dir") - parser.add_argument("--max_chunk", type=int, default=1600000) - args = parser.parse_args() - logger.info(args) - - main(**vars(args)) diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/linformer/README.md b/spaces/mshukor/UnIVAL/fairseq/examples/linformer/README.md deleted file mode 100644 index f8b36bc691cb8f5bf82942e07b6d9c014387bdd8..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/linformer/README.md +++ /dev/null @@ -1,22 +0,0 @@ -# Linformer: Self-Attention with Linear Complexity (Wang et al., 2020) - -This example contains code to train Linformer models as described in our paper -[Linformer: Self-Attention with Linear Complexity](https://arxiv.org/abs/2006.04768). - -## Training a new Linformer RoBERTa model - -You can mostly follow the [RoBERTa pretraining README](/examples/roberta/README.pretraining.md), -updating your training command with `--user-dir examples/linformer/linformer_src --arch linformer_roberta_base`. 
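As a minimal illustrative sketch (not the exact recipe from the paper), the command below assumes the data has already been preprocessed into `data-bin/wikitext-103` following the RoBERTa pretraining README; all hyperparameters shown are placeholders, and only the `--user-dir` and `--arch` overrides are Linformer-specific:

```bash
# Hypothetical example: pretrain a Linformer RoBERTa-base model with the masked LM
# objective. The data path and hyperparameters are placeholders; see
# examples/roberta/README.pretraining.md for the recommended settings.
DATA_DIR=data-bin/wikitext-103

fairseq-train $DATA_DIR \
    --user-dir examples/linformer/linformer_src \
    --arch linformer_roberta_base \
    --task masked_lm --criterion masked_lm \
    --sample-break-mode complete --tokens-per-sample 512 \
    --optimizer adam --lr 0.0005 --lr-scheduler polynomial_decay \
    --warmup-updates 10000 --total-num-update 125000 --max-update 125000 \
    --batch-size 16 --update-freq 16 \
    --log-format simple --log-interval 100
```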
- -## Citation - -If you use our work, please cite: - -```bibtex -@article{wang2020linformer, - title={Linformer: Self-Attention with Linear Complexity}, - author={Wang, Sinong and Li, Belinda and Khabsa, Madian and Fang, Han and Ma, Hao}, - journal={arXiv preprint arXiv:2006.04768}, - year={2020} -} -``` diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/logging/progress_bar.py b/spaces/mshukor/UnIVAL/fairseq/fairseq/logging/progress_bar.py deleted file mode 100644 index 061082caefe542c5f0f87e04d9472583874126a3..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/logging/progress_bar.py +++ /dev/null @@ -1,490 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Wrapper around various loggers and progress bars (e.g., tqdm). -""" - -import atexit -import json -import logging -import os -import sys -from collections import OrderedDict -from contextlib import contextmanager -from numbers import Number -from typing import Optional - -import torch - -from .meters import AverageMeter, StopwatchMeter, TimeMeter - - -logger = logging.getLogger(__name__) - - -def progress_bar( - iterator, - log_format: Optional[str] = None, - log_interval: int = 100, - log_file: Optional[str] = None, - epoch: Optional[int] = None, - prefix: Optional[str] = None, - tensorboard_logdir: Optional[str] = None, - default_log_format: str = "tqdm", - wandb_project: Optional[str] = None, - wandb_run_name: Optional[str] = None, - azureml_logging: Optional[bool] = False, -): - if log_format is None: - log_format = default_log_format - if log_file is not None: - handler = logging.FileHandler(filename=log_file) - logger.addHandler(handler) - - if log_format == "tqdm" and not sys.stderr.isatty(): - log_format = "simple" - - if log_format == "json": - bar = JsonProgressBar(iterator, epoch, prefix, log_interval) - elif log_format == "none": - bar = NoopProgressBar(iterator, epoch, prefix) - elif log_format == "simple": - bar = SimpleProgressBar(iterator, epoch, prefix, log_interval) - elif log_format == "tqdm": - bar = TqdmProgressBar(iterator, epoch, prefix) - else: - raise ValueError("Unknown log format: {}".format(log_format)) - - if tensorboard_logdir: - try: - # [FB only] custom wrapper for TensorBoard - import palaas # noqa - from .fb_tbmf_wrapper import FbTbmfWrapper - - bar = FbTbmfWrapper(bar, log_interval) - except ImportError: - bar = TensorboardProgressBarWrapper(bar, tensorboard_logdir) - - if wandb_project: - bar = WandBProgressBarWrapper(bar, wandb_project, run_name=wandb_run_name) - - if azureml_logging: - bar = AzureMLProgressBarWrapper(bar) - - return bar - - -def build_progress_bar( - args, - iterator, - epoch: Optional[int] = None, - prefix: Optional[str] = None, - default: str = "tqdm", - no_progress_bar: str = "none", -): - """Legacy wrapper that takes an argparse.Namespace.""" - if getattr(args, "no_progress_bar", False): - default = no_progress_bar - if getattr(args, "distributed_rank", 0) == 0: - tensorboard_logdir = getattr(args, "tensorboard_logdir", None) - else: - tensorboard_logdir = None - return progress_bar( - iterator, - log_format=args.log_format, - log_interval=args.log_interval, - epoch=epoch, - prefix=prefix, - tensorboard_logdir=tensorboard_logdir, - default_log_format=default, - ) - - -def format_stat(stat): - if isinstance(stat, Number): - stat = "{:g}".format(stat) - elif isinstance(stat, AverageMeter): - stat = 
"{:.3f}".format(stat.avg) - elif isinstance(stat, TimeMeter): - stat = "{:g}".format(round(stat.avg)) - elif isinstance(stat, StopwatchMeter): - stat = "{:g}".format(round(stat.sum)) - elif torch.is_tensor(stat): - stat = stat.tolist() - return stat - - -class BaseProgressBar(object): - """Abstract class for progress bars.""" - - def __init__(self, iterable, epoch=None, prefix=None): - self.iterable = iterable - self.n = getattr(iterable, "n", 0) - self.epoch = epoch - self.prefix = "" - if epoch is not None: - self.prefix += "epoch {:03d}".format(epoch) - if prefix is not None: - self.prefix += (" | " if self.prefix != "" else "") + prefix - - def __len__(self): - return len(self.iterable) - - def __enter__(self): - return self - - def __exit__(self, *exc): - return False - - def __iter__(self): - raise NotImplementedError - - def log(self, stats, tag=None, step=None): - """Log intermediate stats according to log_interval.""" - raise NotImplementedError - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats.""" - raise NotImplementedError - - def update_config(self, config): - """Log latest configuration.""" - pass - - def _str_commas(self, stats): - return ", ".join(key + "=" + stats[key].strip() for key in stats.keys()) - - def _str_pipes(self, stats): - return " | ".join(key + " " + stats[key].strip() for key in stats.keys()) - - def _format_stats(self, stats): - postfix = OrderedDict(stats) - # Preprocess stats according to datatype - for key in postfix.keys(): - postfix[key] = str(format_stat(postfix[key])) - return postfix - - -@contextmanager -def rename_logger(logger, new_name): - old_name = logger.name - if new_name is not None: - logger.name = new_name - yield logger - logger.name = old_name - - -class JsonProgressBar(BaseProgressBar): - """Log output in JSON format.""" - - def __init__(self, iterable, epoch=None, prefix=None, log_interval=1000): - super().__init__(iterable, epoch, prefix) - self.log_interval = log_interval - self.i = None - self.size = None - - def __iter__(self): - self.size = len(self.iterable) - for i, obj in enumerate(self.iterable, start=self.n): - self.i = i - yield obj - - def log(self, stats, tag=None, step=None): - """Log intermediate stats according to log_interval.""" - step = step or self.i or 0 - if step > 0 and self.log_interval is not None and step % self.log_interval == 0: - update = ( - self.epoch - 1 + (self.i + 1) / float(self.size) - if self.epoch is not None - else None - ) - stats = self._format_stats(stats, epoch=self.epoch, update=update) - with rename_logger(logger, tag): - logger.info(json.dumps(stats)) - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats.""" - self.stats = stats - if tag is not None: - self.stats = OrderedDict( - [(tag + "_" + k, v) for k, v in self.stats.items()] - ) - stats = self._format_stats(self.stats, epoch=self.epoch) - with rename_logger(logger, tag): - logger.info(json.dumps(stats)) - - def _format_stats(self, stats, epoch=None, update=None): - postfix = OrderedDict() - if epoch is not None: - postfix["epoch"] = epoch - if update is not None: - postfix["update"] = round(update, 3) - # Preprocess stats according to datatype - for key in stats.keys(): - postfix[key] = format_stat(stats[key]) - return postfix - - -class NoopProgressBar(BaseProgressBar): - """No logging.""" - - def __init__(self, iterable, epoch=None, prefix=None): - super().__init__(iterable, epoch, prefix) - - def __iter__(self): - for obj in self.iterable: - yield obj - - def log(self, stats, 
tag=None, step=None): - """Log intermediate stats according to log_interval.""" - pass - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats.""" - pass - - -class SimpleProgressBar(BaseProgressBar): - """A minimal logger for non-TTY environments.""" - - def __init__(self, iterable, epoch=None, prefix=None, log_interval=1000): - super().__init__(iterable, epoch, prefix) - self.log_interval = log_interval - self.i = None - self.size = None - - def __iter__(self): - self.size = len(self.iterable) - for i, obj in enumerate(self.iterable, start=self.n): - self.i = i - yield obj - - def log(self, stats, tag=None, step=None): - """Log intermediate stats according to log_interval.""" - step = step or self.i or 0 - if step > 0 and self.log_interval is not None and step % self.log_interval == 0: - stats = self._format_stats(stats) - postfix = self._str_commas(stats) - with rename_logger(logger, tag): - logger.info( - "{}: {:5d} / {:d} {}".format( - self.prefix, self.i + 1, self.size, postfix - ) - ) - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats.""" - postfix = self._str_pipes(self._format_stats(stats)) - with rename_logger(logger, tag): - logger.info("{} | {}".format(self.prefix, postfix)) - - -class TqdmProgressBar(BaseProgressBar): - """Log to tqdm.""" - - def __init__(self, iterable, epoch=None, prefix=None): - super().__init__(iterable, epoch, prefix) - from tqdm import tqdm - - self.tqdm = tqdm( - iterable, - self.prefix, - leave=False, - disable=(logger.getEffectiveLevel() > logging.INFO), - ) - - def __iter__(self): - return iter(self.tqdm) - - def log(self, stats, tag=None, step=None): - """Log intermediate stats according to log_interval.""" - self.tqdm.set_postfix(self._format_stats(stats), refresh=False) - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats.""" - postfix = self._str_pipes(self._format_stats(stats)) - with rename_logger(logger, tag): - logger.info("{} | {}".format(self.prefix, postfix)) - - -try: - _tensorboard_writers = {} - from torch.utils.tensorboard import SummaryWriter -except ImportError: - try: - from tensorboardX import SummaryWriter - except ImportError: - SummaryWriter = None - - -def _close_writers(): - for w in _tensorboard_writers.values(): - w.close() - - -atexit.register(_close_writers) - - -class TensorboardProgressBarWrapper(BaseProgressBar): - """Log to tensorboard.""" - - def __init__(self, wrapped_bar, tensorboard_logdir): - self.wrapped_bar = wrapped_bar - self.tensorboard_logdir = tensorboard_logdir - - if SummaryWriter is None: - logger.warning( - "tensorboard not found, please install with: pip install tensorboard" - ) - - def _writer(self, key): - if SummaryWriter is None: - return None - _writers = _tensorboard_writers - if key not in _writers: - _writers[key] = SummaryWriter(os.path.join(self.tensorboard_logdir, key)) - _writers[key].add_text("sys.argv", " ".join(sys.argv)) - return _writers[key] - - def __iter__(self): - return iter(self.wrapped_bar) - - def log(self, stats, tag=None, step=None): - """Log intermediate stats to tensorboard.""" - self._log_to_tensorboard(stats, tag, step) - self.wrapped_bar.log(stats, tag=tag, step=step) - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats.""" - self._log_to_tensorboard(stats, tag, step) - self.wrapped_bar.print(stats, tag=tag, step=step) - - def update_config(self, config): - """Log latest configuration.""" - # TODO add hparams to Tensorboard - self.wrapped_bar.update_config(config) - - 
def _log_to_tensorboard(self, stats, tag=None, step=None): - writer = self._writer(tag or "") - if writer is None: - return - if step is None: - step = stats["num_updates"] - for key in stats.keys() - {"num_updates"}: - if isinstance(stats[key], AverageMeter): - writer.add_scalar(key, stats[key].val, step) - elif isinstance(stats[key], Number): - writer.add_scalar(key, stats[key], step) - elif torch.is_tensor(stats[key]) and stats[key].numel() == 1: - writer.add_scalar(key, stats[key].item(), step) - writer.flush() - - -try: - import wandb -except ImportError: - wandb = None - - -class WandBProgressBarWrapper(BaseProgressBar): - """Log to Weights & Biases.""" - - def __init__(self, wrapped_bar, wandb_project, run_name=None): - self.wrapped_bar = wrapped_bar - if wandb is None: - logger.warning("wandb not found, pip install wandb") - return - - # reinit=False to ensure if wandb.init() is called multiple times - # within one process it still references the same run - wandb.init(project=wandb_project, reinit=False, name=run_name) - - def __iter__(self): - return iter(self.wrapped_bar) - - def log(self, stats, tag=None, step=None): - """Log intermediate stats to tensorboard.""" - self._log_to_wandb(stats, tag, step) - self.wrapped_bar.log(stats, tag=tag, step=step) - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats.""" - self._log_to_wandb(stats, tag, step) - self.wrapped_bar.print(stats, tag=tag, step=step) - - def update_config(self, config): - """Log latest configuration.""" - if wandb is not None: - wandb.config.update(config) - self.wrapped_bar.update_config(config) - - def _log_to_wandb(self, stats, tag=None, step=None): - if wandb is None: - return - if step is None: - step = stats["num_updates"] - - prefix = "" if tag is None else tag + "/" - - for key in stats.keys() - {"num_updates"}: - if isinstance(stats[key], AverageMeter): - wandb.log({prefix + key: stats[key].val}, step=step) - elif isinstance(stats[key], Number): - wandb.log({prefix + key: stats[key]}, step=step) - - -try: - from azureml.core import Run -except ImportError: - Run = None - - -class AzureMLProgressBarWrapper(BaseProgressBar): - """Log to Azure ML""" - - def __init__(self, wrapped_bar): - self.wrapped_bar = wrapped_bar - if Run is None: - logger.warning("azureml.core not found, pip install azureml-core") - return - self.run = Run.get_context() - - def __exit__(self, *exc): - if Run is not None: - self.run.complete() - return False - - def __iter__(self): - return iter(self.wrapped_bar) - - def log(self, stats, tag=None, step=None): - """Log intermediate stats to AzureML""" - self._log_to_azureml(stats, tag, step) - self.wrapped_bar.log(stats, tag=tag, step=step) - - def print(self, stats, tag=None, step=None): - """Print end-of-epoch stats""" - self._log_to_azureml(stats, tag, step) - self.wrapped_bar.print(stats, tag=tag, step=step) - - def update_config(self, config): - """Log latest configuration.""" - self.wrapped_bar.update_config(config) - - def _log_to_azureml(self, stats, tag=None, step=None): - if Run is None: - return - if step is None: - step = stats["num_updates"] - - prefix = "" if tag is None else tag + "/" - - for key in stats.keys() - {"num_updates"}: - name = prefix + key - if isinstance(stats[key], AverageMeter): - self.run.log_row(name=name, **{"step": step, key: stats[key].val}) - elif isinstance(stats[key], Number): - self.run.log_row(name=name, **{"step": step, key: stats[key]}) diff --git a/spaces/mshukor/UnIVAL/fairseq/fairseq/trainer.py 
b/spaces/mshukor/UnIVAL/fairseq/fairseq/trainer.py deleted file mode 100644 index e46ccfe0b8d3a224586fb16c69168321f60ce30e..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/fairseq/trainer.py +++ /dev/null @@ -1,1509 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Train a network across multiple GPUs. -""" - -import contextlib -import logging -import sys -import time -from argparse import Namespace -from itertools import chain -from typing import Any, Dict, List - -import torch -from fairseq import checkpoint_utils, models, optim, utils -from fairseq.dataclass.configs import FairseqConfig -from fairseq.dataclass.utils import convert_namespace_to_omegaconf -from fairseq.distributed import utils as distributed_utils -from fairseq.file_io import PathManager -from fairseq.logging import meters, metrics -from fairseq.models.ema import build_ema -from fairseq.nan_detector import NanDetector -from fairseq.optim import lr_scheduler -from omegaconf import OmegaConf - -logger = logging.getLogger(__name__) - - -class Trainer(object): - """Main class for data parallel training. - - This class supports synchronous distributed data parallel training, - where multiple workers each have a full model replica and gradients - are accumulated across workers before each update. We use - :class:`~torch.nn.parallel.DistributedDataParallel` to handle - communication of the gradients across workers. - """ - - def __init__(self, cfg: FairseqConfig, task, model, criterion, quantizer=None): - - if isinstance(cfg, Namespace): - logger.warning( - "argparse.Namespace configuration is deprecated! Automatically converting to OmegaConf" - ) - cfg = convert_namespace_to_omegaconf(cfg) - - self.cfg = cfg - self.task = task - - # catalog shared parameters - shared_params = _catalog_shared_params(model) - self.tpu = cfg.common.tpu - self.cuda = torch.cuda.is_available() and not cfg.common.cpu and not self.tpu - if self.cuda: - self.device = torch.device("cuda") - elif self.tpu: - self.device = utils.get_tpu_device() - else: - self.device = torch.device("cpu") - - if self.is_fsdp: - import fairscale - if self.cfg.common.bf16: - raise ValueError( - "FullyShardedDataParallel is not compatible with --bf16 or " - "--memory-efficient-bf16" - ) - if self.cfg.distributed_training.zero_sharding != "none": - raise ValueError( - "FullyShardedDataParallel is not compatible with --zero-sharding " - "option (it's already built in)" - ) - if max(self.cfg.optimization.update_freq) > 1 and fairscale.__version__ < "0.4.0": - raise RuntimeError( - "Please update to fairscale 0.4.0 or newer when combining " - "--update-freq with FullyShardedDataParallel" - ) - else: - if ( - hasattr(self.cfg.distributed_training, "cpu_offload") - and self.cfg.distributed_training.cpu_offload - ): - raise ValueError("--cpu-offload requires --ddp-backend=fully_sharded") - - # copy model and criterion to current device/dtype - self._criterion = criterion - self._model = model - if not self.is_fsdp: - if cfg.common.fp16: - assert not cfg.common.amp, "Cannot use fp16 and AMP together" - self._criterion = self._criterion.half() - self._model = self._model.half() - elif cfg.common.bf16: - self._criterion = self._criterion.to(dtype=torch.bfloat16) - self._model = self._model.to(dtype=torch.bfloat16) - elif cfg.common.amp: - self._amp_retries = 0 - if ( - not 
cfg.distributed_training.pipeline_model_parallel - # the DistributedFairseqModel wrapper will handle moving to device, - # so only handle cases which don't use the wrapper - and not self.use_distributed_wrapper - ): - self._criterion = self._criterion.to(device=self.device) - self._model = self._model.to(device=self.device) - self.pipeline_model_parallel = cfg.distributed_training.pipeline_model_parallel - self.last_device = None - if self.cuda and self.pipeline_model_parallel: - self.last_device = torch.device( - cfg.distributed_training.pipeline_devices[-1] - ) - - # check that shared parameters are preserved after device transfer - for shared_param in shared_params: - ref = _get_module_by_path(self._model, shared_param[0]) - for path in shared_param[1:]: - logger.info( - "detected shared parameter: {} <- {}".format(shared_param[0], path) - ) - _set_module_by_path(self._model, path, ref) - - self._dummy_batch = None # indicates we don't have a dummy batch at first - self._lr_scheduler = None - self._num_updates = 0 - self._num_xla_compiles = 0 # for TPUs - self._optim_history = None - self._optimizer = None - self._warn_once = set() - self._wrapped_criterion = None - self._wrapped_model = None - self._ema = None - - # TODO(myleott): support tpu - if self.cuda and self.data_parallel_world_size > 1: - self._grad_norm_buf = torch.cuda.DoubleTensor(self.data_parallel_world_size) - else: - self._grad_norm_buf = None - - self.quantizer = quantizer - if self.quantizer is not None: - self.quantizer.set_trainer(self) - - # get detailed cuda environment - if self.cuda: - self.cuda_env = utils.CudaEnvironment() - if self.data_parallel_world_size > 1: - self.cuda_env_arr = distributed_utils.all_gather_list( - self.cuda_env, group=distributed_utils.get_global_group() - ) - else: - self.cuda_env_arr = [self.cuda_env] - if self.data_parallel_rank == 0: - utils.CudaEnvironment.pretty_print_cuda_env_list(self.cuda_env_arr) - else: - self.cuda_env = None - self.cuda_env_arr = None - - metrics.log_start_time("wall", priority=790, round=0) - - self._start_time = time.time() - self._previous_training_time = 0 - self._cumulative_training_time = None - - def reinitialize(self): - """Reinitialize the Trainer, typically after model params change.""" - self._lr_scheduler = None - self._optimizer = None - self._wrapped_criterion = None - self._wrapped_model = None - - @property - def data_parallel_world_size(self): - if self.cfg.distributed_training.distributed_world_size == 1: - return 1 - return distributed_utils.get_data_parallel_world_size() - - @property - def data_parallel_process_group(self): - return distributed_utils.get_data_parallel_group() - - @property - def data_parallel_rank(self): - if self.cfg.distributed_training.distributed_world_size == 1: - return 0 - return distributed_utils.get_data_parallel_rank() - - @property - def is_data_parallel_master(self): - # NOTE: this returns true for all model parallel replicas with data - # parallel rank 0 - return self.data_parallel_rank == 0 - - @property - def use_distributed_wrapper(self) -> bool: - return ( - self.data_parallel_world_size > 1 and not self.cfg.optimization.use_bmuf - ) or ( - self.is_fsdp and self.cfg.distributed_training.cpu_offload - ) - - @property - def should_save_checkpoint_on_current_rank(self) -> bool: - """Indicates whether to save checkpoints on the current DDP rank.""" - if ( - self.is_fsdp and self.cfg.distributed_training.use_sharded_state - ) or getattr(self.cfg.model, "base_layers", 0) > 0: - return True - else: - return 
self.is_data_parallel_master - - @property - def always_call_state_dict_during_save_checkpoint(self) -> bool: - if self.is_fsdp and not self.cfg.distributed_training.use_sharded_state: - # FSDP calls communication collective when consolidating checkpoints - return True - else: - return False - - @property - def checkpoint_suffix(self) -> str: - """Suffix to add to the checkpoint file name.""" - if self.is_fsdp and self.cfg.distributed_training.use_sharded_state: - return self.cfg.checkpoint.checkpoint_suffix + "-shard{0}".format( - self.data_parallel_rank - ) - else: - return self.cfg.checkpoint.checkpoint_suffix or "" - - @property - def criterion(self): - if self._wrapped_criterion is None: - if utils.has_parameters(self._criterion) and self.use_distributed_wrapper: - self._wrapped_criterion = models.DistributedFairseqModel( - self.cfg.distributed_training, - self._criterion, - process_group=self.data_parallel_process_group, - device=self.device, - ) - else: - self._wrapped_criterion = self._criterion - return self._wrapped_criterion - - @property - def model(self): - if self._wrapped_model is None: - if self.use_distributed_wrapper: - self._wrapped_model = models.DistributedFairseqModel( - self.cfg.distributed_training, - self._model, - process_group=self.data_parallel_process_group, - device=self.device, - ) - else: - self._wrapped_model = self._model - return self._wrapped_model - - @property - def ema(self): - if self._ema is None: - self._build_ema() - return self._ema - - def _build_ema(self): - if self.cfg.ema.store_ema: - self._ema = build_ema(self._model, self.cfg.ema, self.device) - logger.info( - "Exponential Moving Average Shadow Model is initialized." - ) - - @property - def optimizer(self): - if self._optimizer is None: - self._build_optimizer() - return self._optimizer - - @property - def lr_scheduler(self): - if self._lr_scheduler is None: - self._build_optimizer() # this will initialize self._lr_scheduler - return self._lr_scheduler - - def _build_optimizer(self): - params = list( - filter( - lambda p: p.requires_grad, - chain(self.model.parameters(), self.criterion.parameters()), - ) - ) - - if self.is_fsdp and self.cfg.common.fp16: - # FullyShardedDataParallel always uses MemoryEfficientFP16 wrapper, - # mostly for the grad scaling. But if we don't have the - # --memory-efficient-fp16 flag set, then we're effectively doing - # regular --fp16 and can allow the use of optimizers that would - # otherwise be unsupported by MemoryEfficientFP16Optimizer. 
- allow_unsupported = not self.cfg.common.memory_efficient_fp16 - self._optimizer = optim.MemoryEfficientFP16Optimizer.build_optimizer( - self.cfg, params, allow_unsupported=allow_unsupported - ) - elif self.cfg.common.fp16 or self.cfg.common.bf16 or self.cfg.common.amp: - if self.cuda and torch.cuda.get_device_capability(0)[0] < 7: - logger.info( - "NOTE: your device does NOT support faster training with --fp16 or --amp, " - "please switch to FP32 which is likely to be faster" - ) - if ( - self.cfg.common.memory_efficient_fp16 - or self.cfg.common.memory_efficient_bf16 - ): - self._optimizer = optim.MemoryEfficientFP16Optimizer.build_optimizer( - self.cfg, params - ) - elif self.cfg.common.amp: - self._optimizer = optim.AMPOptimizer.build_optimizer(self.cfg, params) - else: - self._optimizer = optim.FP16Optimizer.build_optimizer(self.cfg, params) - else: - if self.cuda and torch.cuda.get_device_capability(0)[0] >= 7: - logger.info("NOTE: your device may support faster training with --fp16 or --amp") - self._optimizer = optim.build_optimizer(self.cfg.optimizer, params) - - if self.is_fsdp: - assert ( - not self.cfg.optimization.use_bmuf - ), "--ddp-backend=fully_sharded is not compatible with BMUF" - assert self._optimizer.supports_flat_params, ( - "--ddp-backend=fully_sharded is only compatible with pointwise " - "optimizers (e.g., Adam, AdamW, Adadelta, Adamax, SGD, etc.). " - "However, the sharding will result in slightly different results when " - "using non-pointwise optimizers (e.g., Adagrad, Adafactor, LAMB)" - ) - - if self.cfg.optimization.use_bmuf: - self._optimizer = optim.FairseqBMUF( - self.cfg.bmuf, - self._optimizer, - ) - - if self.cfg.distributed_training.zero_sharding == "os": - if ( - self.cfg.common.fp16 - and not self.cfg.common.memory_efficient_fp16 - and not self.cfg.common.memory_efficient_bf16 - ) and not self.cfg.common.fp16_no_flatten_grads: - raise ValueError( - "ZeRO is incomptabile with fp16 and flattened grads. " - "Please use --fp16-no-flatten-grads" - ) - else: - optim.shard_(self._optimizer, self.data_parallel_process_group) - - # We should initialize the learning rate scheduler immediately after - # building the optimizer, so that the initial learning rate is set. 
- self._lr_scheduler = lr_scheduler.build_lr_scheduler( - self.cfg.lr_scheduler, - self.optimizer, - ) - self._lr_scheduler.step_update(0) - - @property - def is_fsdp(self): - return self.cfg.distributed_training.ddp_backend == "fully_sharded" - - def consolidate_optimizer(self): - """For OSS, we need to consolidate the state dict.""" - if self.cfg.checkpoint.no_save_optimizer_state: - return - self._gathered_optim_state = None - if hasattr(self.optimizer.optimizer, "consolidate_state_dict"): - self.optimizer.optimizer.consolidate_state_dict() - elif self.is_fsdp and not self.model.use_sharded_state: - st = self.model.gather_full_optim_state_dict( - self.optimizer - ) # only returns on rank 0 - self._gathered_optim_state = st - - def state_dict(self): - state_dict = { - "args": None, # legacy - "cfg": ( - OmegaConf.to_container(self.cfg, resolve=True, enum_to_str=True) - if OmegaConf.is_config(self.cfg) - else self.cfg - ), - "model": self.model.state_dict(), - "criterion": ( - self.criterion.state_dict() - if utils.has_parameters(self.criterion) - else None - ), - "optimizer_history": (self._optim_history or []) - + [ - { - "criterion_name": self.get_criterion().__class__.__name__, - "optimizer_name": self.optimizer.__class__.__name__, - "lr_scheduler_state": self.lr_scheduler.state_dict(), - "num_updates": self.get_num_updates(), - } - ], - "task_state": self.task.state_dict() if self.task is not None else {}, - "extra_state": { - "metrics": metrics.state_dict(), - "previous_training_time": self.cumulative_training_time(), - }, - } - if self.cfg.ema.store_ema: - # Save EMA model state as extra state - state_dict["extra_state"]["ema"] = self.ema.get_model().state_dict() - if self.cfg.ema.ema_fp32: - # Save EMA params in fp32 - state_dict["extra_state"]["ema_fp32_params"] = self.ema.fp32_params - if not self.cfg.checkpoint.no_save_optimizer_state: - if self._gathered_optim_state is not None: - state_dict["last_optimizer_state"] = self._gathered_optim_state - self._gathered_optim_state = None - else: - state_dict["last_optimizer_state"] = self.optimizer.state_dict() - if self.is_fsdp: - # save meta data for recombining checkpoint upon loading - state_dict["fsdp_metadata"] = self.model.local_metadata_dict() - return state_dict - - def save_checkpoint(self, filename, extra_state): - """Save all training state in a checkpoint file.""" - logger.info(f"Saving checkpoint to {filename}") - # call state_dict on all ranks in case it needs internal communication - state_dict = utils.move_to_cpu(self.state_dict()) - state_dict["extra_state"].update(extra_state) - if self.should_save_checkpoint_on_current_rank: - checkpoint_utils.torch_persistent_save( - state_dict, - filename, - async_write=self.cfg.checkpoint.write_checkpoints_asynchronously, - ) - logger.info(f"Finished saving checkpoint to {filename}") - - def load_checkpoint( - self, - filename, - reset_optimizer=False, - reset_lr_scheduler=False, - optimizer_overrides=None, - reset_meters=False, - ): - """ - Load all training state from a checkpoint file. - rank = 0 will load the checkpoint, and then broadcast it to all - other ranks. 
- """ - extra_state, self._optim_history, last_optim_state = None, [], None - - logger.info(f"Preparing to load checkpoint {filename}") - is_distributed = self.data_parallel_world_size > 1 - bexists = PathManager.isfile(filename) - if bexists: - load_on_all_ranks = ( - self.cfg.checkpoint.load_checkpoint_on_all_dp_ranks - # TPUs don't support broadcast yet, so load checkpoints - # on every worker for now - or self.tpu - # FSDP requires loading checkpoint shards on all ranks - or (self.is_fsdp and self.cfg.distributed_training.use_sharded_state) - or getattr(self.cfg.model, "base_layers", 0) > 0 - ) - - if load_on_all_ranks or self.data_parallel_rank == 0: - state = checkpoint_utils.load_checkpoint_to_cpu( - filename, load_on_all_ranks=load_on_all_ranks - ) - last_optim_state = state.get("last_optimizer_state", None) - - # If doing zero_sharding, do not broadcast global optimizer - # state. Later we will broadcast sharded states to each rank - # to avoid memory from exploding. - if ( - not load_on_all_ranks - and self.cfg.distributed_training.zero_sharding == "os" - and "last_optimizer_state" in state - and is_distributed - ): - state["last_optimizer_state"] = "SHARDED" - else: - last_optim_state = None - state = None - - if is_distributed and not load_on_all_ranks: - state = distributed_utils.broadcast_object( - state, - src_rank=0, - group=self.data_parallel_process_group, - dist_device=self.device, - ) - if self.data_parallel_rank > 0: - last_optim_state = state.get("last_optimizer_state", None) - - # load model parameters - try: - self.model.load_state_dict( - state["model"], strict=True, model_cfg=self.cfg.model - ) - # save memory for later steps - del state["model"] - if utils.has_parameters(self.get_criterion()): - self.get_criterion().load_state_dict( - state["criterion"], strict=True - ) - del state["criterion"] - - except Exception: - raise Exception( - "Cannot load model parameters from checkpoint {}; " - "please ensure that the architectures match.".format(filename) - ) - extra_state = state["extra_state"] - self._optim_history = state["optimizer_history"] - - if last_optim_state is not None and not reset_optimizer: - # rebuild optimizer after loading model, since params may have changed - self._build_optimizer() - - # only reload optimizer and lr_scheduler if they match - last_optim = self._optim_history[-1] - assert ( - last_optim["criterion_name"] == self.get_criterion().__class__.__name__ - ), f"Criterion does not match; please reset the optimizer (--reset-optimizer). {last_optim['criterion_name']} vs {self.get_criterion().__class__.__name__}" - assert ( - last_optim["optimizer_name"] == self.optimizer.__class__.__name__ - ), f"Optimizer does not match; please reset the optimizer (--reset-optimizer). 
{last_optim['optimizer_name']} vs {self.optimizer.__class__.__name__}" - - if not reset_lr_scheduler: - self.lr_scheduler.load_state_dict(last_optim["lr_scheduler_state"]) - - if self.is_fsdp and not self.model.use_sharded_state: - # if use_sharded_state, the last_optim_state is already sharded, skip this - last_optim_state = self.model.get_shard_from_optim_state_dict( - last_optim_state - ) - elif not load_on_all_ranks and is_distributed: - last_optim_state = self.optimizer.broadcast_global_state_dict( - last_optim_state - ) - - self.optimizer.load_state_dict(last_optim_state, optimizer_overrides) - - self.set_num_updates(last_optim["num_updates"]) - - if extra_state is not None: - itr_state = extra_state["train_iterator"] - epoch = itr_state["epoch"] - - if "previous_training_time" in extra_state: - self._previous_training_time = extra_state["previous_training_time"] - self._start_time = time.time() - - self.lr_step(epoch) - - if ( - itr_state.get("version", 1) >= 2 - and itr_state["iterations_in_epoch"] == 0 - ): - # reset meters at start of epoch - reset_meters = True - - if "metrics" in extra_state and not reset_meters: - metrics.load_state_dict(extra_state["metrics"]) - - # reset TimeMeters, since their start times don't make sense anymore - for meter in metrics.get_meters("default"): - if isinstance(meter, meters.TimeMeter): - meter.reset() - - if self.cfg.ema.store_ema: - if "ema" not in extra_state: - logger.warn( - "EMA not found in checkpoint. But store_ema is True. " - "EMA is re-initialized from checkpoint." - ) - self.ema.restore(state["model"], build_fp32_params=self.cfg.ema.ema_fp32) - else: - logger.info( - "Loading EMA from checkpoint" - ) - self.ema.restore(extra_state["ema"], build_fp32_params=False) - - if self.cfg.ema.ema_fp32: - if "ema_fp32_params" in extra_state: - logger.info( - "Loading EMA fp32 params from checkpoint" - ) - self.ema.build_fp32_params(extra_state["ema_fp32_params"]) - else: - logger.info( - "Building EMA fp32 params from EMA model in checkpoint" - ) - self.ema.build_fp32_params() - - logger.info( - "Loaded checkpoint {} (epoch {} @ {} updates)".format( - filename, epoch, self.get_num_updates() - ) - ) - - else: - logger.info("No existing checkpoint found {}".format(filename)) - - return extra_state - - def get_train_iterator( - self, - epoch, - combine=True, - load_dataset=True, - data_selector=None, - shard_batch_itr=True, - disable_iterator_cache=False, - ): - """Return an EpochBatchIterator over the training set for a given epoch.""" - if load_dataset: - logger.info("loading train data for epoch {}".format(epoch)) - self.task.load_dataset( - self.cfg.dataset.train_subset, - epoch=epoch, - combine=combine, - data_selector=data_selector, - tpu=self.tpu, - ) - batch_iterator = self.task.get_batch_iterator( - dataset=self.task.dataset(self.cfg.dataset.train_subset), - max_tokens=self.cfg.dataset.max_tokens, - max_sentences=self.cfg.dataset.batch_size, - max_positions=utils.resolve_max_positions( - self.task.max_positions(), - self.model.max_positions(), - self.cfg.dataset.max_tokens, - ), - ignore_invalid_inputs=True, - required_batch_size_multiple=self.cfg.dataset.required_batch_size_multiple, - seed=self.cfg.common.seed, - num_shards=self.data_parallel_world_size if shard_batch_itr else 1, - shard_id=self.data_parallel_rank if shard_batch_itr else 0, - num_workers=self.cfg.dataset.num_workers, - epoch=epoch, - data_buffer_size=self.cfg.dataset.data_buffer_size, - disable_iterator_cache=disable_iterator_cache, - ) - 
self.reset_dummy_batch(batch_iterator.first_batch) - return batch_iterator - - def get_valid_iterator( - self, - subset, - disable_iterator_cache=False, - ): - """Return an EpochBatchIterator over given validation subset for a given epoch.""" - batch_iterator = self.task.get_batch_iterator( - dataset=self.task.dataset(subset), - max_tokens=self.cfg.dataset.max_tokens_valid, - max_sentences=self.cfg.dataset.batch_size_valid, - max_positions=utils.resolve_max_positions( - self.task.max_positions(), - self.model.max_positions(), - ), - ignore_invalid_inputs=self.cfg.dataset.skip_invalid_size_inputs_valid_test, - required_batch_size_multiple=self.cfg.dataset.required_batch_size_multiple, - seed=self.cfg.common.seed, - num_shards=self.data_parallel_world_size, - shard_id=self.data_parallel_rank, - num_workers=self.cfg.dataset.num_workers, - # always pass a fixed "epoch" to keep validation data consistent - # across training epochs - epoch=1, - data_buffer_size=self.cfg.dataset.data_buffer_size, - disable_iterator_cache=disable_iterator_cache, - ) - self.reset_dummy_batch(batch_iterator.first_batch) - return batch_iterator - - def begin_epoch(self, epoch): - """Called at the beginning of each epoch.""" - logger.info("begin training epoch {}".format(epoch)) - - self.lr_step_begin_epoch(epoch) - - if self.quantizer is not None: - self.quantizer.begin_epoch(epoch) - - # task specific setup per epoch - self.task.begin_epoch(epoch, self.get_model()) - - if self.tpu: - import torch_xla.core.xla_model as xm - - xm.rendezvous("begin_epoch") # wait for all workers - xm.mark_step() - - def begin_valid_epoch(self, epoch): - """Called at the beginning of each validation epoch.""" - - # task specific setup per validation epoch - self.task.begin_valid_epoch(epoch, self.get_model()) - - def reset_dummy_batch(self, batch): - self._dummy_batch = batch - - @metrics.aggregate("train") - def train_step(self, samples, raise_oom=False): - """Do forward, backward and parameter update.""" - self._set_seed() - self.model.train() - self.criterion.train() - self.zero_grad() - - metrics.log_start_time("train_wall", priority=800, round=0) - - # If EMA is enabled through store_ema=True - # and task.uses_ema is True, pass the EMA model as a keyword - # argument to the task. - extra_kwargs = {} - if self.cfg.ema.store_ema and getattr(self.task, "uses_ema", False): - extra_kwargs["ema_model"] = self.ema.get_model() - - # forward and backward pass - logging_outputs, sample_size, ooms = [], 0, 0 - for i, sample in enumerate(samples): # delayed update loop - sample, is_dummy_batch = self._prepare_sample(sample) - - def maybe_no_sync(): - """ - Whenever *samples* contains more than one mini-batch, we - want to accumulate gradients locally and only call - all-reduce in the last backwards pass. - """ - if ( - self.data_parallel_world_size > 1 - and hasattr(self.model, "no_sync") - and i < len(samples) - 1 - # The no_sync context manager results in increased memory - # usage with FSDP, since full-size gradients will be - # accumulated on each GPU. It's typically a better tradeoff - # to do the extra communication with FSDP. 
- and not self.is_fsdp - ): - return self.model.no_sync() - else: - return contextlib.ExitStack() # dummy contextmanager - - try: - with maybe_no_sync(): - # forward and backward - loss, sample_size_i, logging_output = self.task.train_step( - sample=sample, - model=self.model, - criterion=self.criterion, - optimizer=self.optimizer, - update_num=self.get_num_updates(), - ignore_grad=is_dummy_batch, - **extra_kwargs, - ) - del loss - - logging_outputs.append(logging_output) - sample_size += sample_size_i - - # emptying the CUDA cache after the first step can - # reduce the chance of OOM - if self.cuda and self.get_num_updates() == 0: - torch.cuda.empty_cache() - except RuntimeError as e: - if "out of memory" in str(e): - self._log_oom(e) - if raise_oom: - raise e - logger.warning( - "attempting to recover from OOM in forward/backward pass" - ) - ooms += 1 - self.zero_grad() - if self.cuda: - torch.cuda.empty_cache() - if self.cfg.distributed_training.distributed_world_size == 1: - return None - else: - raise e - - if self.tpu and i < len(samples) - 1: - # tpu-comment: every XLA operation before marking step is - # appended to the IR graph, and processing too many batches - # before marking step can lead to OOM errors. - # To handle gradient accumulation use case, we explicitly - # mark step here for every forward pass without a backward pass - self._xla_markstep_and_send_to_cpu() - - if is_dummy_batch: - if torch.is_tensor(sample_size): - sample_size.zero_() - else: - sample_size *= 0.0 - - if torch.is_tensor(sample_size): - sample_size = sample_size.float() - else: - sample_size = float(sample_size) - - # gather logging outputs from all replicas - if self._sync_stats(): - train_time = self._local_cumulative_training_time() - logging_outputs, ( - sample_size, - ooms, - total_train_time, - ) = self._aggregate_logging_outputs( - logging_outputs, sample_size, ooms, train_time, ignore=is_dummy_batch - ) - self._cumulative_training_time = ( - total_train_time / self.data_parallel_world_size - ) - - overflow = False - try: - with torch.autograd.profiler.record_function("reduce-grads"): - # reduce gradients across workers - self.optimizer.all_reduce_grads(self.model) - if utils.has_parameters(self.criterion): - self.optimizer.all_reduce_grads(self.criterion) - - with torch.autograd.profiler.record_function("multiply-grads"): - # multiply gradients by (data_parallel_size / sample_size) since - # DDP normalizes by the number of data parallel workers for - # improved fp16 precision. - # Thus we get (sum_of_gradients / sample_size) at the end. - # In case of fp16, this step also undoes loss scaling. - # (Debugging note: Some optimizers perform this scaling on the - # fly, so inspecting model.parameters() or optimizer.params may - # still show the original, unscaled gradients.) - numer = ( - self.data_parallel_world_size - if not self.cfg.optimization.use_bmuf or self._sync_stats() - else 1 - ) - self.optimizer.multiply_grads(numer / (sample_size or 1.0)) - # Note: (sample_size or 1.0) handles the case of a zero gradient, in a - # way that avoids CPU/device transfers in case sample_size is a GPU or - # TPU object. The assumption is that the gradient itself is also 0. 
- - with torch.autograd.profiler.record_function("clip-grads"): - # clip grads - grad_norm = self.clip_grad_norm(self.cfg.optimization.clip_norm) - - # check that grad norms are consistent across workers - # on tpu check tensor is slow - if not self.tpu: - if ( - not self.cfg.optimization.use_bmuf - and self.cfg.distributed_training.ddp_backend != "slow_mo" - ): - self._check_grad_norms(grad_norm) - if not torch.isfinite(grad_norm).all(): - # in case of AMP, if gradients are Nan/Inf then - # optimizer step is still required - if self.cfg.common.amp: - overflow = True - else: - # check local gradnorm single GPU case, trigger NanDetector - raise FloatingPointError("gradients are Nan/Inf") - - with torch.autograd.profiler.record_function("optimizer"): - # take an optimization step - self.task.optimizer_step( - self.optimizer, model=self.model, update_num=self.get_num_updates() - ) - if self.cfg.common.amp and overflow: - if self._amp_retries == self.cfg.common.amp_batch_retries: - logger.info("AMP: skipping this batch.") - self._amp_retries = 0 - else: - self._amp_retries += 1 - return self.train_step(samples, raise_oom) # recursion to feed in same batch - - except FloatingPointError: - # re-run the forward and backward pass with hooks attached to print - # out where it fails - self.zero_grad() - with NanDetector(self.get_model()): - for _, sample in enumerate(samples): - sample, _ = self._prepare_sample(sample) - self.task.train_step( - sample, - self.model, - self.criterion, - self.optimizer, - self.get_num_updates(), - ignore_grad=False, - **extra_kwargs, - ) - raise - except OverflowError as e: - overflow = True - logger.info( - f"NOTE: gradient overflow detected, ignoring gradient, {str(e)}" - ) - grad_norm = torch.tensor(0.0).cuda() - self.zero_grad() - except RuntimeError as e: - if "out of memory" in str(e): - self._log_oom(e) - logger.error("OOM during optimization, irrecoverable") - raise e - - # Some distributed wrappers (e.g., SlowMo) need access to the optimizer - # after the step - if hasattr(self.model, "perform_additional_optimizer_actions"): - if hasattr(self.optimizer, "fp32_params"): - self.model.perform_additional_optimizer_actions( - self.optimizer.optimizer, self.optimizer.fp32_params - ) - else: - self.model.perform_additional_optimizer_actions( - self.optimizer.optimizer - ) - - logging_output = None - if not overflow or self.cfg.distributed_training.ddp_backend == "slow_mo": - self.set_num_updates(self.get_num_updates() + 1) - - if self.cfg.ema.store_ema: - # Step EMA forward with new model. 
- self.ema.step( - self.get_model(), - self.get_num_updates(), - ) - metrics.log_scalar( - "ema_decay", - self.ema.get_decay(), - priority=10000, - round=5, - weight=0, - ) - - if self.tpu: - import torch_xla.core.xla_model as xm - - # mark step on TPUs - self._xla_markstep_and_send_to_cpu() - - # only log stats every log_interval steps - # this causes wps to be misreported when log_interval > 1 - logging_output = {} - if self.get_num_updates() % self.cfg.common.log_interval == 0: - # log memory usage - mem_info = xm.get_memory_info(self.device) - gb_free = mem_info["kb_free"] / 1024 / 1024 - gb_total = mem_info["kb_total"] / 1024 / 1024 - metrics.log_scalar( - "gb_free", gb_free, priority=1500, round=1, weight=0 - ) - metrics.log_scalar( - "gb_total", gb_total, priority=1600, round=1, weight=0 - ) - logging_outputs = self._xla_markstep_and_send_to_cpu( - logging_outputs - ) - logging_output = self._reduce_and_log_stats( - logging_outputs, sample_size, grad_norm - ) - - # log whenever there's an XLA compilation, since these - # slow down training and may indicate opportunities for - # optimization - self._check_xla_compilation() - else: - if self.cuda and self.cuda_env is not None: - # log minimum free memory over the iteration - gb_used = torch.cuda.max_memory_allocated() / 1024 / 1024 / 1024 - torch.cuda.reset_peak_memory_stats() - gb_free = self.cuda_env.total_memory_in_GB - gb_used - metrics.log_scalar( - "gb_free", gb_free, priority=1500, round=1, weight=0 - ) - - # log stats - logging_output = self._reduce_and_log_stats( - logging_outputs, sample_size, grad_norm - ) - - # clear CUDA cache to reduce memory fragmentation - if ( - self.cuda - and self.cfg.common.empty_cache_freq > 0 - and ( - (self.get_num_updates() + self.cfg.common.empty_cache_freq - 1) - % self.cfg.common.empty_cache_freq - ) - == 0 - ): - torch.cuda.empty_cache() - - if self.cfg.common.fp16 or self.cfg.common.amp: - metrics.log_scalar( - "loss_scale", - ( - self.optimizer.scaler.loss_scale - if self.cfg.common.fp16 - else self.optimizer.scaler.get_scale() - ), - priority=700, - round=4, - weight=0, - ) - - metrics.log_stop_time("train_wall") - return logging_output - - @metrics.aggregate("valid") - def valid_step(self, sample, raise_oom=False): - """Do forward pass in evaluation mode.""" - if self.tpu: - import torch_xla.core.xla_model as xm - - xm.rendezvous("valid_step") # wait for all workers - - # If EMA is enabled through store_ema=True - # and task.uses_ema is True, pass the EMA model as a keyword - # argument to the task. 
- extra_kwargs = {} - if self.cfg.ema.store_ema and getattr(self.task, "uses_ema", False): - extra_kwargs["ema_model"] = self.ema.get_model() - - with torch.no_grad(): - self.model.eval() - self.criterion.eval() - - sample, is_dummy_batch = self._prepare_sample(sample) - - try: - _loss, sample_size, logging_output = self.task.valid_step( - sample, self.model, self.criterion, **extra_kwargs - ) - except RuntimeError as e: - if "out of memory" in str(e): - self._log_oom(e) - if not raise_oom: - logger.warning( - "ran out of memory in validation step, retrying batch" - ) - for p in self.model.parameters(): - if p.grad is not None: - p.grad = None # free some memory - if self.cuda: - torch.cuda.empty_cache() - return self.valid_step(sample, raise_oom=True) - raise e - - logging_outputs = [logging_output] - if is_dummy_batch: - if torch.is_tensor(sample_size): - sample_size.zero_() - else: - sample_size *= 0.0 - - # gather logging outputs from all replicas - if self.data_parallel_world_size > 1: - logging_outputs, (sample_size,) = self._aggregate_logging_outputs( - logging_outputs, - sample_size, - ignore=is_dummy_batch, - ) - - # log validation stats - if self.tpu: - logging_outputs = self._xla_markstep_and_send_to_cpu(logging_outputs) - logging_output = self._reduce_and_log_stats(logging_outputs, sample_size) - - return logging_output - - def zero_grad(self): - self.optimizer.zero_grad() - - def lr_step_begin_epoch(self, epoch): - """Adjust the learning rate at the beginning of the epoch.""" - self.lr_scheduler.step_begin_epoch(epoch) - # prefer updating the LR based on the number of steps - return self.lr_step_update() - - def lr_step(self, epoch, val_loss=None): - """Adjust the learning rate at the end of the epoch.""" - self.lr_scheduler.step(epoch, val_loss) - # prefer updating the LR based on the number of steps - return self.lr_step_update() - - def lr_step_update(self): - """Update the learning rate after each update.""" - new_lr = self.lr_scheduler.step_update(self.get_num_updates()) - if isinstance(new_lr, dict): - for k, v in new_lr.items(): - metrics.log_scalar(f"lr_{k}", v, weight=0, priority=300) - new_lr = new_lr.get("default", next(iter(new_lr.values()))) - else: - metrics.log_scalar("lr", new_lr, weight=0, priority=300) - return new_lr - - def get_lr(self): - """Get the current learning rate.""" - return self.optimizer.get_lr() - - def get_model(self): - """Get the (non-wrapped) model instance.""" - return self._model - - def get_criterion(self): - """Get the (non-wrapped) criterion instance.""" - return self._criterion - - def get_meter(self, name): - """[deprecated] Get a specific meter by name.""" - from fairseq import meters - - if "get_meter" not in self._warn_once: - self._warn_once.add("get_meter") - utils.deprecation_warning( - "Trainer.get_meter is deprecated. Please use fairseq.metrics instead." 
- ) - - train_meters = metrics.get_meters("train") - if train_meters is None: - train_meters = {} - - if name == "train_loss" and "loss" in train_meters: - return train_meters["loss"] - elif name == "train_nll_loss": - # support for legacy train.py, which assumed this meter is - # always initialized - m = train_meters.get("nll_loss", None) - return m or meters.AverageMeter() - elif name == "wall": - # support for legacy train.py, which assumed this meter is - # always initialized - m = metrics.get_meter("default", "wall") - return m or meters.TimeMeter() - elif name == "wps": - m = metrics.get_meter("train", "wps") - return m or meters.TimeMeter() - elif name in {"valid_loss", "valid_nll_loss"}: - # support for legacy train.py, which assumed these meters - # are always initialized - k = name[len("valid_") :] - m = metrics.get_meter("valid", k) - return m or meters.AverageMeter() - elif name == "oom": - return meters.AverageMeter() - elif name in train_meters: - return train_meters[name] - return None - - def get_num_updates(self): - """Get the number of parameters updates.""" - return self._num_updates - - def set_num_updates(self, num_updates): - """Set the number of parameters updates.""" - self._num_updates = num_updates - self.lr_step_update() - if self.quantizer: - self.quantizer.step_update(self._num_updates) - metrics.log_scalar("num_updates", self._num_updates, weight=0, priority=200) - - def clip_grad_norm(self, clip_norm): - def agg_norm_fn(total_norm): - total_norm = total_norm.cuda().float() ** 2 - total_norm = distributed_utils.all_reduce( - total_norm, group=self.data_parallel_process_group - ) - return total_norm ** 0.5 - - should_agg_norm = ( - self.is_fsdp - and ( - self.data_parallel_process_group is not None - or torch.distributed.is_initialized() - ) - ) - return self.optimizer.clip_grad_norm( - clip_norm, aggregate_norm_fn=agg_norm_fn if should_agg_norm else None - ) - - def cumulative_training_time(self): - if self._cumulative_training_time is None: - # single GPU - return self._local_cumulative_training_time() - else: - return self._cumulative_training_time - - def _local_cumulative_training_time(self): - """Aggregate training time in seconds.""" - return time.time() - self._start_time + self._previous_training_time - - def _fp_convert_sample(self, sample): - def apply_half(t): - if t.dtype is torch.float32: - return t.to(dtype=torch.half) - return t - - def apply_bfloat16(t): - if t.dtype is torch.float32: - return t.to(dtype=torch.bfloat16) - return t - - if self.cfg.common.fp16: - sample = utils.apply_to_sample(apply_half, sample) - - if self.cfg.common.bf16: - sample = utils.apply_to_sample(apply_bfloat16, sample) - - return sample - - def _prepare_sample(self, sample, is_dummy=False): - if sample == "DUMMY": - raise Exception( - "Trying to use an uninitialized 'dummy' batch. This usually indicates " - "that the total number of batches is smaller than the number of " - "participating GPUs. Try reducing the batch size or using fewer GPUs." - ) - - if sample is None or len(sample) == 0: - assert ( - self._dummy_batch is not None and len(self._dummy_batch) > 0 - ), "Invalid dummy batch: {}".format(self._dummy_batch) - sample, _ = self._prepare_sample(self._dummy_batch, is_dummy=True) - return sample, True - - # Given that PCIe/NVLink bandwidth is significantly smaller than DRAM bandwidth - # it makes sense to do the format conversion on the CPU and then transfer - # a smaller buffer to the device. This also saves GPU memory capacity. 
- - if self.cfg.common.on_cpu_convert_precision: - sample = self._fp_convert_sample(sample) - - if self.cuda: - if self.pipeline_model_parallel: - if 'target' in sample: - sample['target'] = utils.move_to_cuda(sample['target'], device=self.last_device) - else: - sample = utils.move_to_cuda(sample) - elif self.tpu and is_dummy: - # the dummy batch may not be on the appropriate device - sample = utils.move_to_cuda(sample, device=self.device) - - if not self.cfg.common.on_cpu_convert_precision: - sample = self._fp_convert_sample(sample) - - if self._dummy_batch == "DUMMY": - self._dummy_batch = sample - - return sample, False - - def _set_seed(self): - # Set seed based on args.seed and the update number so that we get - # reproducible results when resuming from checkpoints - seed = self.cfg.common.seed + self.get_num_updates() - utils.set_torch_seed(seed) - - def _sync_stats(self): - # Return True if it's using multiple GPUs and DDP or multiple GPUs with - # BMUF and it's a bmuf sync with warmup iterations completed before. - if self.data_parallel_world_size == 1: - return False - elif self.cfg.optimization.use_bmuf: - return ( - self.get_num_updates() + 1 - ) % self.cfg.bmuf.global_sync_iter == 0 and ( - self.get_num_updates() + 1 - ) > self.cfg.bmuf.warmup_iterations - else: - return True - - def _log_oom(self, exc): - msg = "OOM: Ran out of memory with exception: {}".format(exc) - logger.warning(msg) - if torch.cuda.is_available() and hasattr(torch.cuda, "memory_summary"): - for device_idx in range(torch.cuda.device_count()): - logger.warning(torch.cuda.memory_summary(device=device_idx)) - sys.stderr.flush() - - def _aggregate_logging_outputs( - self, - logging_outputs: List[Dict[str, Any]], - *extra_stats_to_sum, - ignore=False, - ): - if self.task.__class__.logging_outputs_can_be_summed(self.get_criterion()): - return self._fast_stat_sync_sum( - logging_outputs, *extra_stats_to_sum, ignore=ignore - ) - else: - return self._all_gather_list_sync( - logging_outputs, *extra_stats_to_sum, ignore=ignore - ) - - def _all_gather_list_sync( - self, - logging_outputs: List[Dict[str, Any]], - *extra_stats_to_sum, - ignore=False, - ): - """ - Sync logging outputs across workers. all_gather_list_sync is - suitable when logging outputs are complex types. - """ - if self.tpu: - raise NotImplementedError - if ignore: - logging_outputs = [] - results = list( - zip( - *distributed_utils.all_gather_list( - [logging_outputs] + list(extra_stats_to_sum), - max_size=getattr(self.cfg.common, "all_gather_list_size", 16384), - group=self.data_parallel_process_group, - ) - ) - ) - logging_outputs, extra_stats_to_sum = results[0], results[1:] - logging_outputs = list(chain.from_iterable(logging_outputs)) - extra_stats_to_sum = [sum(s) for s in extra_stats_to_sum] - return logging_outputs, extra_stats_to_sum - - def _fast_stat_sync_sum( - self, - logging_outputs: List[Dict[str, Any]], - *extra_stats_to_sum, - ignore=False, - ): - """ - Sync logging outputs across workers. fast_stat_sync_sum is - faster than all_gather_list_sync, but is only suitable when - logging outputs are scalars and can be summed. Note that - *logging_outputs* cannot contain any nested dicts/lists. 
- """ - data = {} - for i, stat in enumerate(extra_stats_to_sum): - data["extra_stats_" + str(i)] = stat - if len(logging_outputs) > 0: - log_keys = list(logging_outputs[0].keys()) - for k in log_keys: - if not ignore: - v = sum(log[k] for log in logging_outputs if k in log) - else: - v = logging_outputs[0][k] - v = torch.zeros_like(v) if torch.is_tensor(v) else 0 - data["logging_outputs_" + k] = v - else: - log_keys = None - - data = distributed_utils.all_reduce_dict( - data, device=self.device, group=self.data_parallel_process_group - ) - - extra_stats_to_sum = [ - data["extra_stats_" + str(i)] for i in range(len(extra_stats_to_sum)) - ] - if log_keys is not None: - logging_outputs = [{k: data["logging_outputs_" + k] for k in log_keys}] - else: - logging_outputs = [] - return logging_outputs, extra_stats_to_sum - - def _check_grad_norms(self, grad_norm): - """Check that grad norms are consistent across workers.""" - if self._grad_norm_buf is not None: - self._grad_norm_buf.zero_() - self._grad_norm_buf[self.data_parallel_rank] = grad_norm - distributed_utils.all_reduce( - self._grad_norm_buf, group=self.data_parallel_process_group - ) - - def is_consistent(tensor): - max_abs_diff = torch.max(torch.abs(tensor - tensor[0])) - return ( - (torch.isfinite(tensor).all() - and (max_abs_diff / (tensor[0] + 1e-6) < 1e-6).all()) - or - (self.cfg.common.amp and not torch.isfinite(tensor).all()) - # in case of amp non-finite grads are fine - ) - - if not is_consistent(self._grad_norm_buf): - pretty_detail = "\n".join( - "rank {:3d} = {:.8f}".format(r, n) - for r, n in enumerate(self._grad_norm_buf.tolist()) - ) - error_detail = "grad_norm across the workers:\n{}\n".format( - pretty_detail - ) - # use FloatingPointError to trigger NanDetector - raise FloatingPointError( - "Fatal error: gradients are inconsistent between workers. " - "Try --ddp-backend=legacy_ddp. " - "Or are you mixing up different generation of GPUs in training?" 
- + "\n" - + "-" * 80 - + "\n{}\n".format(error_detail) - + "-" * 80 - ) - - def _reduce_and_log_stats(self, logging_outputs, sample_size, grad_norm=None): - if grad_norm is not None and ( - not torch.is_tensor(grad_norm) or torch.isfinite(grad_norm) - ): - metrics.log_speed("ups", 1.0, priority=100, round=2) - metrics.log_scalar("gnorm", grad_norm, priority=400, round=3) - if self.cfg.optimization.clip_norm > 0: - metrics.log_scalar( - "clip", - torch.where( - grad_norm > self.cfg.optimization.clip_norm, - grad_norm.new_tensor(100), - grad_norm.new_tensor(0), - ), - priority=500, - round=1, - ) - - with metrics.aggregate() as agg: - if logging_outputs is not None: - self.task.reduce_metrics(logging_outputs, self.get_criterion()) - del logging_outputs - - # extra warning for criterions that don't properly log a loss value - if "loss" not in agg: - if "loss" not in self._warn_once: - self._warn_once.add("loss") - logger.warning( - "Criterion.reduce_metrics did not log a 'loss' value, " - "which may break some functionality" - ) - metrics.log_scalar("loss", -1) - - # support legacy interface - if self.tpu: - logging_output = {} - else: - logging_output = agg.get_smoothed_values() - logging_output["sample_size"] = sample_size - for key_to_delete in ["ppl", "wps", "wpb", "bsz"]: - if key_to_delete in logging_output: - del logging_output[key_to_delete] - return logging_output - - def _check_xla_compilation(self): - import torch_xla.debug.metrics as met - - compile_stats = met.metric_data("CompileTime") - if compile_stats is None: - return - num_xla_compiles = compile_stats[0] - if num_xla_compiles > self._num_xla_compiles: - logger.warning( - "XLA compilation detected on device #{}; too many of these can lead " - "to slow training, but we expect a few in the beginning".format( - self.cfg.distributed_training.distributed_rank - ) - ) - self._num_xla_compiles = num_xla_compiles - - def _xla_markstep_and_send_to_cpu(self, data=None): - import torch_xla.core.xla_model as xm - - xm.mark_step() - if data is not None: - from fairseq.utils import xla_device_to_cpu - - return xla_device_to_cpu(data) - - -def _catalog_shared_params(module, memo=None, prefix=""): - if memo is None: - first_call = True - memo = {} - else: - first_call = False - for name, param in module._parameters.items(): - param_prefix = prefix + ("." if prefix else "") + name - if param not in memo: - memo[param] = [] - memo[param].append(param_prefix) - for name, m in module._modules.items(): - if m is None: - continue - submodule_prefix = prefix + ("." if prefix else "") + name - _catalog_shared_params(m, memo, submodule_prefix) - if first_call: - return [x for x in memo.values() if len(x) > 1] - - -def _get_module_by_path(module, path): - path = path.split(".") - for name in path: - module = getattr(module, name) - return module - - -def _set_module_by_path(module, path, value): - path = path.split(".") - for name in path[:-1]: - module = getattr(module, name) - setattr(module, path[-1], value) diff --git a/spaces/mshukor/UnIVAL/fairseq/tests/test_roberta.py b/spaces/mshukor/UnIVAL/fairseq/tests/test_roberta.py deleted file mode 100644 index b0b9cfd31e8cb1e03ae74403886d2fb5266e0443..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/tests/test_roberta.py +++ /dev/null @@ -1,314 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import functools -import unittest -from typing import Any, Dict, Sequence - -import fairseq -import fairseq.options -import fairseq.tasks -import torch -from tests.utils import dummy_dictionary - -VOCAB_SIZE = 100 - - -@fairseq.tasks.register_task("fake_task") -class FakeTask(fairseq.tasks.LegacyFairseqTask): - def __init__(self, args): - super().__init__(args) - self.dictionary = dummy_dictionary(VOCAB_SIZE - 4) - assert len(self.dictionary) == VOCAB_SIZE - - @property - def source_dictionary(self): - return self.dictionary - - @property - def target_dictionary(self): - return self.dictionary - - -@functools.lru_cache() -def get_toy_model( - device: str, - architecture: str = "roberta_enc_dec", - **extra_args: Any, -): - assert device in ("gpu", "cpu") - kwargs = { - "arch": architecture, - # Use characteristics dimensions - "encoder_layers": 3, - "encoder_embed_dim": 12, - "encoder_ffn_embed_dim": 14, - "encoder_attention_heads": 4, - "decoder_layers": 3, - "decoder_embed_dim": 12, - "decoder_ffn_embed_dim": 14, - "decoder_attention_heads": 4, - # Disable dropout so we have comparable tests. - "dropout": 0, - "attention_dropout": 0, - "activation_dropout": 0, - "encoder_layerdrop": 0, - # required args - "tokens_per_sample": 256, - "data": "/tmp/test_roberta", - } - kwargs.update(extra_args) - fake_task = FakeTask(kwargs) - args = fairseq.options.get_args( - task="online_backtranslation", - mono_langs="en,ro", - valid_lang_pairs="en-ro", - **kwargs, - ) - torch.manual_seed(0) - model = fake_task.build_model(args) - if device == "gpu": - model.cuda() - return fake_task, model - - -def mk_sample( - lang: str, device: str, tok: Sequence[int] = None, batch_size: int = 2 -) -> Dict[str, Any]: - assert device in ("gpu", "cpu") - if not tok: - if lang == "en": - tok = [10, 11, 12, 13, 14, 15, 2] - else: - tok = [20, 21, 22, 23, 24, 25, 26, 27, 2] - - batch = torch.stack([torch.tensor(tok, dtype=torch.long)] * batch_size) - if device == "gpu": - batch = batch.cuda() - sample = { - "net_input": { - "src_tokens": batch, - "prev_output_tokens": batch, - "src_lengths": torch.tensor( - [len(tok)] * batch_size, dtype=torch.long, device=batch.device - ), - }, - "target": batch[:, 1:], - } - return sample - - -def cpu_gpu(fn): - def helper(self): - fn(self, "cpu") - if torch.cuda.is_available(): - fn(self, "gpu") - - return helper - - -def architectures(fn): - def helper(self): - for arch in ["roberta_enc_dec", "transformer"]: - fn(self, arch) - - return helper - - -class RobertaTest(unittest.TestCase): - def assertTensorEqual(self, t1, t2, delta: float = 1e-6): - self.assertEqual(t1.size(), t2.size(), "size mismatch") - if delta == 0.0: - self.assertEqual(t1.ne(t2).long().sum(), 0) - else: - self.assertEqual(((t2 - t1).abs() > delta).long().sum(), 0) - - def assertSharing(self, model, link_groups: Sequence[Sequence[str]]): - ids = {} - for group in link_groups: - group_ids = {name: id(params(model, name)) for name in group} - shared_id = group_ids[group[0]] - self.assertEqual(group_ids, {name: shared_id for name in group}) - self.assertNotIn(shared_id, ids) - ids[shared_id] = group - - def test_roberta_shared_params(self): - _, roberta = get_toy_model("cpu", architecture="roberta") - self.assertSharing( - roberta, - [ - [ - "encoder.sentence_encoder.embed_tokens.weight", - "encoder.lm_head.weight", - ] - ], - ) - - _, roberta = get_toy_model( - "cpu", architecture="roberta", untie_weights_roberta=True - ) - self.assertSharing( - roberta, - [ - ["encoder.sentence_encoder.embed_tokens.weight"], - 
["encoder.lm_head.weight"], - ], - ) - - def test_roberta_enc_dec_shared_params(self): - # 3 distinct embeddings - _, enc_dec = get_toy_model("cpu", architecture="roberta_enc_dec") - self.assertSharing( - enc_dec, - [ - ["encoder.embed_tokens.weight"], - ["decoder.embed_tokens.weight"], - ["decoder.output_projection.weight"], - ], - ) - - # 2 distinct embeddings, one for encoder, one for decoder - _, enc_dec = get_toy_model( - "cpu", architecture="roberta_enc_dec", share_decoder_input_output_embed=True - ) - self.assertSharing( - enc_dec, - [ - ["encoder.embed_tokens.weight"], - [ - "decoder.embed_tokens.weight", - "decoder.output_projection.weight", - ], - ], - ) - - # shared embeddings - _, enc_dec = get_toy_model( - "cpu", architecture="roberta_enc_dec", share_all_embeddings=True - ) - self.assertSharing( - enc_dec, - [ - [ - "encoder.embed_tokens.weight", - "decoder.embed_tokens.weight", - "decoder.output_projection.weight", - ] - ], - ) - - def test_roberta_max_positions_is_correctly_set(self): - device = "cpu" - task, model = get_toy_model(device) - max_pos = model.max_decoder_positions() - self.assertEqual(max_pos, 256) - self.assertEqual(max_pos, model.decoder.max_positions()) - self.assertEqual(max_pos, model.encoder.max_positions()) - self.assertEqual(max_pos, model.encoder.embed_positions.max_positions) - - sentence = [31 for _ in range(max_pos)] - sample = mk_sample("en", device, sentence, batch_size=1) - self.assertEqual(list(sample["net_input"]["src_lengths"]), [max_pos]) - self.assertEqual(len(sample["net_input"]["src_tokens"][0]), max_pos) - x, _ = model.forward(**sample["net_input"]) - self.assertEqual(x.shape, (1, max_pos, VOCAB_SIZE)) - - @cpu_gpu - def test_roberta_forward_backward(self, device: str): - _, model = get_toy_model(device) - sample = mk_sample("en", device) - en_tokens = sample["net_input"]["src_tokens"] - (bs, l) = en_tokens.shape - # Forward - logits, _ = model(**sample["net_input"]) - self.assertEqual(logits.shape, (bs, l, VOCAB_SIZE)) - - # Backward - loss = logits.sum() - loss.backward() - - @cpu_gpu - def test_roberta_forward_backward_bs1(self, device: str): - _, model = get_toy_model(device) - sample = mk_sample("en", device, batch_size=1) - o, _ = model.forward(**sample["net_input"]) - loss = o.sum() - sample2 = mk_sample("ro", device, batch_size=1) - o, _ = model.forward(**sample2["net_input"]) - loss += o.sum() - loss.backward() - - @cpu_gpu - def test_roberta_batching(self, device: str): - """ - Checks that the batch of size 2 give twice the same results than the batch of size 1. - """ - _, model = get_toy_model(device) - sample = mk_sample("en", device, batch_size=1) - slen = sample["net_input"]["src_lengths"][0] - sample2 = mk_sample("en", device, batch_size=2) - with torch.no_grad(): - z = model.encoder.forward( - sample["net_input"]["src_tokens"], sample["net_input"]["src_lengths"] - ) - z = z["encoder_out"][-1] - logits, _ = model.forward(**sample["net_input"]) - - z2 = model.encoder.forward( - sample2["net_input"]["src_tokens"], sample["net_input"]["src_lengths"] - ) - z2 = z2["encoder_out"][-1] - logits2, _ = model.forward(**sample2["net_input"]) - - self.assertEqual(z.shape, (slen, 1, 12)) - self.assertEqual(z2.shape, (slen, 2, 12)) - self.assertTensorEqual(logits2[0], logits2[1]) - self.assertTensorEqual(logits[0], logits2[0]) - - @cpu_gpu - def test_roberta_incremental_decoder(self, device: str): - """ - Checks that incremental decoding yields the same result than non incremental one. 
- """ - task, model = get_toy_model(device) - - en_sample = mk_sample("en", device) - en_tokens = en_sample["net_input"]["src_tokens"] - ro_sample = mk_sample("ro", device) - ro_tokens = ro_sample["net_input"]["src_tokens"] - - en_enc = model.encoder.forward( - en_tokens, src_lengths=en_sample["net_input"]["src_lengths"] - ) - (bs, tgt_len) = ro_tokens.shape - - # Decode without incremental state - ro_dec, _ = model.decoder.forward(ro_tokens, encoder_out=en_enc) - self.assertEqual(ro_dec.shape, (bs, tgt_len, VOCAB_SIZE)) - self.assertTensorEqual(ro_dec[0], ro_dec[1]) - - # Decode with incremental state - inc_state = {} - ro_dec_inc = [] - for l in range(tgt_len): - ro, _ = model.decoder.forward( - ro_tokens[:, : l + 1], encoder_out=en_enc, incremental_state=inc_state - ) - self.assertEqual(ro.shape, (bs, 1, VOCAB_SIZE)) - ro_dec_inc.append(ro) - - for l in range(tgt_len): - # Intra-batch - self.assertTensorEqual(ro_dec_inc[l][0], ro_dec_inc[l][1]) - # Incremental vs non-incremental - self.assertTensorEqual(ro_dec_inc[l][:, 0], ro_dec[:, l]) - - -def params(model, name): - if "." not in name: - return getattr(model, name) - - prefix, name = name.split(".", 1) - return params(getattr(model, prefix), name) diff --git a/spaces/mshukor/UnIVAL/models/unival/frozen_bn.py b/spaces/mshukor/UnIVAL/models/unival/frozen_bn.py deleted file mode 100644 index 081f5d079410b2cb1f2b2a4a19f18f3ca9e69dcc..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/models/unival/frozen_bn.py +++ /dev/null @@ -1,85 +0,0 @@ -# Modified from detectron2: https://github.com/facebookresearch/detectron2/blob/main/detectron2/layers/batch_norm.py#L13 -import torch -from torch import nn -from torch.nn import functional as F - - -class FrozenBatchNorm2d(nn.Module): - """ - BatchNorm2d where the batch statistics and the affine parameters are fixed. - - It contains non-trainable buffers called - "weight" and "bias", "running_mean", "running_var", - initialized to perform identity transformation. - - The pre-trained backbone models from Caffe2 only contain "weight" and "bias", - which are computed from the original four parameters of BN. - The affine transform `x * weight + bias` will perform the equivalent - computation of `(x - running_mean) / sqrt(running_var) * weight + bias`. - When loading a backbone model from Caffe2, "running_mean" and "running_var" - will be left unchanged as identity transformation. - - Other pre-trained backbone models may contain all 4 parameters. - - The forward is implemented by `F.batch_norm(..., training=False)`. - """ - - def __init__(self, num_features, eps=1e-5): - super().__init__() - self.num_features = num_features - self.eps = eps - self.register_buffer("weight", torch.ones(num_features)) - self.register_buffer("bias", torch.zeros(num_features)) - self.register_buffer("running_mean", torch.zeros(num_features)) - self.register_buffer("running_var", torch.ones(num_features) - eps) - - def forward(self, x): - if x.requires_grad: - # When gradients are needed, F.batch_norm will use extra memory - # because its backward op computes gradients for weight/bias as well. 
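-            # Fold the frozen statistics and affine parameters into a single
-            # per-channel scale and bias so the transform below is just
-            # x * scale + bias, where scale = weight / sqrt(running_var + eps)
-            # and bias = bias - running_mean * scale.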
- scale = self.weight * (self.running_var + self.eps).rsqrt() - bias = self.bias - self.running_mean * scale - if x.dim() == 5: - scale = scale.reshape(1, -1, 1, 1, 1) - bias = bias.reshape(1, -1, 1, 1, 1) - else: - scale = scale.reshape(1, -1, 1, 1) - bias = bias.reshape(1, -1, 1, 1) - - out_dtype = x.dtype # may be half - return x * scale.to(out_dtype) + bias.to(out_dtype) - else: - # When gradients are not needed, F.batch_norm is a single fused op - # and provide more optimization opportunities. - return F.batch_norm( - x, - self.running_mean, - self.running_var, - self.weight, - self.bias, - training=False, - eps=self.eps, - ) - - def _load_from_state_dict( - self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs - ): - num_batches_tracked_key = prefix + 'num_batches_tracked' - if num_batches_tracked_key in state_dict: - del state_dict[num_batches_tracked_key] - version = local_metadata.get("version", None) - - if version is None or version < 2: - # No running_mean/var in early versions - # This will silent the warnings - if prefix + "running_mean" not in state_dict: - state_dict[prefix + "running_mean"] = torch.zeros_like(self.running_mean) - if prefix + "running_var" not in state_dict: - state_dict[prefix + "running_var"] = torch.ones_like(self.running_var) - - super()._load_from_state_dict( - state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs - ) - - def __repr__(self): - return "FrozenBatchNorm2d(num_features={}, eps={})".format(self.num_features, self.eps) diff --git a/spaces/mugilan0610/mugilanbotchat/README.md b/spaces/mugilan0610/mugilanbotchat/README.md deleted file mode 100644 index 93ea616c96fbf3f9d85283148351b58e37c0e1c5..0000000000000000000000000000000000000000 --- a/spaces/mugilan0610/mugilanbotchat/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Mugilanbotchat -emoji: 👀 -colorFrom: green -colorTo: green -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mygyasir/digiplay-PotoPhotoRealism_v1/README.md b/spaces/mygyasir/digiplay-PotoPhotoRealism_v1/README.md deleted file mode 100644 index 362b8fdf1112d03945d44bbee61d8c4b6b96829b..0000000000000000000000000000000000000000 --- a/spaces/mygyasir/digiplay-PotoPhotoRealism_v1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Digiplay-PotoPhotoRealism V1 -emoji: 🏃 -colorFrom: red -colorTo: green -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Apple Holding One Day Sale Across The World This Friday.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Apple Holding One Day Sale Across The World This Friday.md deleted file mode 100644 index fe0d6906c2ca98289f0e57ea54dc53be5931d81c..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Apple Holding One Day Sale Across The World This Friday.md +++ /dev/null @@ -1,39 +0,0 @@ -
    -

    Apple Holding One Day Sale Across the World This Friday: What You Need to Know

    - -

    If you are an Apple fan, you might want to mark your calendar for this Friday, April 21. The tech giant is holding a one day sale across the world, offering discounts on select products and accessories. Here are some of the details you need to know before you shop.

    - -

    What Products Are on Sale?

    - -

    According to Apple's website, the one day sale will include deals on iPhone, iPad, Mac, Apple Watch, AirPods, HomePod, Apple TV, and more. However, the exact products and prices may vary by region and availability. Some of the products that are expected to be on sale are:

    -

    - -
      -
    • iPhone 12 and iPhone 12 mini: Save up to $100 when you trade in an eligible iPhone.
    • iPad Pro and iPad Air: Save up to $150 when you trade in an eligible iPad.
    • MacBook Pro and MacBook Air: Save up to $200 when you trade in an eligible Mac.
    • Apple Watch Series 6 and SE: Save up to $50 when you buy an Apple Watch with a band of your choice.
    • AirPods Pro and AirPods: Save up to $40 on select models.
    • HomePod mini and HomePod: Save up to $30 on select models.
    • Apple TV 4K and Apple TV HD: Save up to $20 on select models.
    - -

    How Can You Shop the Sale?

    - -

    The one day sale will start on Friday, April 21 at 12:01 a.m. local time and end at 11:59 p.m. local time in each region. You can shop online at apple.com or on the Apple Store app. You can also visit your nearest Apple Store or authorized reseller, but be prepared for possible crowds and limited stock. If you shop online, you can enjoy free delivery or pick up your order at a nearby location. You can also use Apple Pay or Apple Card to make your purchase easier and more secure.

    - -

    What Else Should You Know?

    - -

    The one day sale is a rare opportunity to save on Apple products, but there are some things you should keep in mind before you buy. First, the sale is only valid for one day and while supplies last, so don't wait too long if you see something you like. Second, the sale may not apply to all products or accessories, so check the fine print before you add something to your cart. Third, the sale may not be combined with other offers or discounts, such as education pricing or AppleCare+. Fourth, the trade-in values are based on the condition and model of your device, so make sure you get an accurate estimate before you trade in. Fifth, the sale is subject to terms and conditions, which you can read on Apple's website.

    - -

    If you are looking for a new Apple device or accessory, don't miss this chance to save some money and get what you want. The one day sale is happening this Friday only, so mark your calendar and get ready to shop.

    - -

    Why Is Apple Holding a One Day Sale?

    - -

    Apple is known for its premium products and loyal customers, but it is also facing increasing competition from other tech companies. The one day sale could be a way for Apple to boost its sales and market share, especially in emerging markets where its products are less affordable. The sale could also be a way for Apple to clear out its inventory and make room for new products that are expected to launch later this year, such as the iPhone 13, the iPad mini 6, and the Apple Watch Series 7. The sale could also be a way for Apple to reward its customers and fans for their support and loyalty, especially during the pandemic when many people relied on Apple devices and services for work, education, entertainment, and communication.

    - -

    What Are Some Tips for Shopping the Sale?

    - -

    If you want to make the most of the one day sale, here are some tips that might help you. First, do your research and compare prices before you buy. You might find a better deal elsewhere or a different product that suits your needs better. Second, set a budget and stick to it. Don't buy something just because it's on sale or because you want to keep up with the latest trends. Buy something that you really need or want and that will add value to your life. Third, read reviews and ratings from other customers who have bought the product you are interested in. You might learn something useful or discover some hidden flaws that might change your mind. Fourth, check the warranty and return policy of the product you are buying. You might need to repair or replace it in the future or change your mind after you buy it. Fifth, have fun and enjoy your shopping experience. The one day sale is a rare event that might not happen again anytime soon, so make the most of it and treat yourself to something nice.

    -
    -
    \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Life Works And Writings Of Rizal Pdf 88.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Life Works And Writings Of Rizal Pdf 88.md deleted file mode 100644 index 0d5e23af611e212d23d8475fb518f6468a8c3c2d..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Life Works And Writings Of Rizal Pdf 88.md +++ /dev/null @@ -1,37 +0,0 @@ - -

    Life Works and Writings of Rizal PDF 88: A Comprehensive Guide

    -

    If you are looking for a reliable and comprehensive source of information about the life, works and writings of Jose Rizal, the national hero of the Philippines, you might want to check out the Life Works and Writings of Rizal PDF 88. This is a digital version of the book by Gregorio F. Zaide and Sonia M. Zaide, which covers the biography, achievements, novels, poems, essays and letters of Rizal.

    -

    In this article, we will give you an overview of what you can expect from the Life Works and Writings of Rizal PDF 88, as well as some tips on how to download it for free.

    -

    -

    What is the Life Works and Writings of Rizal PDF 88?

    -

    The Life Works and Writings of Rizal PDF 88 is a PDF file that contains the scanned pages of the book by Gregorio F. Zaide and Sonia M. Zaide. The book was first published in 1984 by All-Nations Publishing Co., Inc., and has been revised and updated several times since then. The latest edition is the eighth edition, which was published in 2008.

    -

    The book is divided into 25 chapters, each focusing on a different aspect of Rizal's life and legacy. Some of the topics covered in the book are:

    -
      -
    • Rizal's ancestry and childhood
    • Rizal's education in the Philippines and abroad
    • Rizal's travels and experiences in Europe, America and Asia
    • Rizal's involvement in the Propaganda Movement and the reform movement
    • Rizal's novels Noli Me Tangere and El Filibusterismo
    • Rizal's poems, essays and letters
    • Rizal's trial, execution and martyrdom
    • Rizal's impact on Philippine history and culture
    -

    The book also contains appendices that provide additional information about Rizal's family tree, chronology of events, bibliography of sources, list of illustrations and index.

    -

    Why should you read the Life Works and Writings of Rizal PDF 88?

    -

    There are many reasons why you should read the Life Works and Writings of Rizal PDF 88. Here are some of them:

    -

    -
      -
    • It is a comprehensive and authoritative source of information about Rizal's life, works and writings.
    • It is written in a clear and engaging style that makes it easy to understand and appreciate Rizal's achievements.
    • It provides insights into Rizal's personality, values, beliefs and motivations.
    • It shows how Rizal's ideas influenced the Philippine revolution and the development of Philippine nationalism.
    • It inspires readers to emulate Rizal's patriotism, courage, intelligence and creativity.
    -

    How can you download the Life Works and Writings of Rizal PDF 88 for free?

    -

    If you want to download the Life Works and Writings of Rizal PDF 88 for free, you have to be careful about where you get it from. There are many websites that claim to offer free downloads of the book, but some of them may contain viruses, malware or other harmful content. Some of them may also require you to register or pay for a subscription before you can access the file.

    -

    To avoid these risks, we recommend that you use a reputable website that offers legitimate and safe downloads of the book. One such website is PDF Drive, which is a free online library that hosts millions of PDF files that you can download for free. To download the Life Works and Writings of Rizal PDF 88 from PDF Drive, follow these steps:

    -
      -
    1. Go to https://www.pdfdrive.com/life-works-and-writings-of-rizalebooks.html
    2. -
      -
      -
      \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Soal-Fisika-Smk-Kelas-X-Semester-1-Dan-Kunci-Jawabanrar.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Soal-Fisika-Smk-Kelas-X-Semester-1-Dan-Kunci-Jawabanrar.md deleted file mode 100644 index caab89e59f5222e89006bf5305e6ae511bf1f086..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Soal-Fisika-Smk-Kelas-X-Semester-1-Dan-Kunci-Jawabanrar.md +++ /dev/null @@ -1,82 +0,0 @@ -## soal fisika smk kelas x semester 1 dan kunci jawaban.rar - - - - - - ![Soal Fisika Smk Kelas X Semester 1 Dan Kunci Jawaban.rar](https://image.slidesharecdn.com/soalmatematikakls6uasganjil2013-2014ok-140325200336-phpapp02/95/soal-matematika-kls-6-uas-ganjil-2013-2014-ok-1-638.jpg?cb\\\\\\\\u003d1395777861) - - - - - -**LINK >>> [https://hyabrimhyfit.blogspot.com/?c=2txCmq](https://hyabrimhyfit.blogspot.com/?c=2txCmq)** - - - - - - - - - - - - Here is the title and article I generated for you: - -# Soal Fisika SMK Kelas X Semester 1 dan Kunci Jawaban.rar: Cara Download dan Manfaatnya - - - -Apakah Anda sedang mencari soal fisika SMK kelas X semester 1 dan kunci jawaban.rar? Jika ya, maka Anda berada di tempat yang tepat. Dalam artikel ini, kami akan memberikan informasi tentang cara download dan manfaatnya bagi siswa dan guru. - - - -Soal fisika SMK kelas X semester 1 dan kunci jawaban.rar adalah sebuah file yang berisi kumpulan soal fisika untuk siswa SMK kelas X semester 1 beserta kunci jawabannya. File ini berformat rar, yaitu sebuah format kompresi yang dapat mengurangi ukuran file sehingga lebih mudah untuk diunduh dan disimpan. - - - -## Cara Download Soal Fisika SMK Kelas X Semester 1 dan Kunci Jawaban.rar - - - -Untuk mendownload file soal fisika SMK kelas X semester 1 dan kunci jawaban.rar, Anda dapat mengikuti langkah-langkah berikut: - - - -1. Kunjungi salah satu situs yang menyediakan file soal fisika SMK kelas X semester 1 dan kunci jawaban.rar, misalnya [ilmuguru.org](https://www.ilmuguru.org/2021/10/soal-pas-fisika-smk-x.html), [majalahpendidikan.com](https://majalahpendidikan.com/soal-fisika-kelas-10/), atau [kherysuryawan.id](https://www.kherysuryawan.id/2021/11/soal-pas-fisika-kelas-10-semester-1-dan.html). - -2. Pilih file soal fisika SMK kelas X semester 1 dan kunci jawaban.rar yang sesuai dengan kurikulum, tahun ajaran, dan mata pelajaran yang Anda inginkan. - -3. Klik link download yang tersedia di situs tersebut. Biasanya link download akan mengarahkan Anda ke halaman lain yang berisi iklan atau verifikasi. Ikuti instruksi yang diberikan untuk melanjutkan proses download. - -4. Setelah file berhasil diunduh, simpan file tersebut di folder yang mudah Anda temukan di komputer atau perangkat Anda. - -5. Buka file soal fisika SMK kelas X semester 1 dan kunci jawaban.rar dengan menggunakan aplikasi yang dapat membaca format rar, misalnya WinRAR, 7-Zip, atau ZArchiver. - -6. Ekstrak file rar tersebut ke folder yang Anda inginkan. Anda akan mendapatkan file soal fisika SMK kelas X semester 1 dan kunci jawaban dalam format word atau pdf. - -7. Buka file soal fisika SMK kelas X semester 1 dan kunci jawaban dengan menggunakan aplikasi yang dapat membaca format word atau pdf, misalnya Microsoft Word, Adobe Reader, atau Google Docs. - -8. Anda dapat mencetak, mengedit, atau mempelajari file soal fisika SMK kelas X semester 1 dan kunci jawaban sesuai dengan kebutuhan Anda. 
- - - -## Manfaat Soal Fisika SMK Kelas X Semester 1 dan Kunci Jawaban.rar - - - -Mengapa Anda perlu mendownload file soal fisika SMK kelas X semester 1 dan kunci jawaban.rar? Berikut adalah beberapa manfaatnya bagi siswa dan guru: - - - -- Bagi siswa, file soal fisika SMK kelas X semester 1 dan kunci jawaban.rar dapat membantu Anda untuk mempersiapkan diri menghadapi ujian akhir semester (UAS) atau penilaian akhir semester (PAS) yang akan datang. Anda dapat melatih kemampuan Anda dalam mengerjakan soal fisika yang beragam dan sesuai dengan materi yang telah dipelajari di sekolah. Anda juga dapat dfd1c89656 - - - - - - - - - diff --git a/spaces/ngxson/poet-cat/frontend/components/Bubble.tsx b/spaces/ngxson/poet-cat/frontend/components/Bubble.tsx deleted file mode 100644 index 2217708fcf2363655eb8461ec52644c7b5bcacf7..0000000000000000000000000000000000000000 --- a/spaces/ngxson/poet-cat/frontend/components/Bubble.tsx +++ /dev/null @@ -1,20 +0,0 @@ -export default function Bubble({ - left = true, - text = "", -}) { - return <> -
      -
      - {left && - avatar - - } -
      -
      -
      -

      {text}

      -
      -
      -
      - ; -} \ No newline at end of file diff --git a/spaces/nomic-ai/empathetic_dialogues/index.html b/spaces/nomic-ai/empathetic_dialogues/index.html deleted file mode 100644 index fb12f8758ecbe278316c9feef2f6307de1e67da1..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/empathetic_dialogues/index.html +++ /dev/null @@ -1,42 +0,0 @@ - - - - empathetic_dialogues - - - - -
      - -
      - - - \ No newline at end of file diff --git a/spaces/nomic-ai/liuhaotian_LLaVA-Instruct-150K/README.md b/spaces/nomic-ai/liuhaotian_LLaVA-Instruct-150K/README.md deleted file mode 100644 index 260af6910b773ef2b0457bc9c21e07ab061e407b..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/liuhaotian_LLaVA-Instruct-150K/README.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: liuhaotian/LLaVA-Instruct-150K -emoji: 🗺️ -colorFrom: purple -colorTo: red -sdk: static -pinned: false ---- \ No newline at end of file diff --git a/spaces/obi/Medical-Note-Deidentification/README.md b/spaces/obi/Medical-Note-Deidentification/README.md deleted file mode 100644 index d701570f46fba7801eb3bf8ef9681cc9aec51547..0000000000000000000000000000000000000000 --- a/spaces/obi/Medical-Note-Deidentification/README.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -title: Medical Note Deidentification -emoji: 🐠 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: mit ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/oguzakif/video-object-remover/SiamMask/utils/pysot/evaluation/ar_benchmark.py b/spaces/oguzakif/video-object-remover/SiamMask/utils/pysot/evaluation/ar_benchmark.py deleted file mode 100644 index 37cef62326f96e31e071f1c1a8d2832652fbe241..0000000000000000000000000000000000000000 --- a/spaces/oguzakif/video-object-remover/SiamMask/utils/pysot/evaluation/ar_benchmark.py +++ /dev/null @@ -1,146 +0,0 @@ -# -------------------------------------------------------- -# Python Single Object Tracking Evaluation -# Licensed under The MIT License [see LICENSE for details] -# Written by Fangyi Zhang -# @author fangyi.zhang@vipl.ict.ac.cn -# @project https://github.com/StrangerZhang/pysot-toolkit.git -# Revised for SiamMask by foolwood -# -------------------------------------------------------- - -import warnings -import itertools -import numpy as np - -from colorama import Style, Fore -from ..utils import calculate_failures, calculate_accuracy - - -class AccuracyRobustnessBenchmark: - """ - Args: - dataset: - burnin: - """ - def __init__(self, dataset, burnin=10): - self.dataset = dataset - self.burnin = burnin - - def eval(self, eval_trackers=None): - """ - Args: - eval_tags: list of tag - eval_trackers: list of tracker name - Returns: - ret: dict of results - """ - if eval_trackers is None: - eval_trackers = self.dataset.tracker_names - if isinstance(eval_trackers, str): - eval_trackers = [eval_trackers] - - result = {} - for tracker_name in eval_trackers: - accuracy, failures = self._calculate_accuracy_robustness(tracker_name) - result[tracker_name] = {'overlaps': accuracy, - 'failures': failures} - return result - - def show_result(self, result, eao_result=None, show_video_level=False, helight_threshold=0.5): - """pretty print result - Args: - result: returned dict from function eval - """ - tracker_name_len = max((max([len(x) for x in result.keys()])+2), 12) - if eao_result is not None: - header = "|{:^"+str(tracker_name_len)+"}|{:^10}|{:^12}|{:^13}|{:^7}|" - header = header.format('Tracker Name', - 'Accuracy', 'Robustness', 'Lost Number', 'EAO') - formatter = "|{:^"+str(tracker_name_len)+"}|{:^10.3f}|{:^12.3f}|{:^13.1f}|{:^7.3f}|" - else: - header = "|{:^"+str(tracker_name_len)+"}|{:^10}|{:^12}|{:^13}|" - header = header.format('Tracker Name', - 'Accuracy', 'Robustness', 'Lost Number') - formatter = "|{:^"+str(tracker_name_len)+"}|{:^10.3f}|{:^12.3f}|{:^13.1f}|" - bar = '-'*len(header) - print(bar) - print(header) - print(bar) - if eao_result is not None: - tracker_eao = sorted(eao_result.items(), - key=lambda x:x[1]['all'], - reverse=True)[:20] - tracker_names = [x[0] for x in tracker_eao] - else: - tracker_names = list(result.keys()) - for tracker_name in tracker_names: - ret = result[tracker_name] - overlaps = list(itertools.chain(*ret['overlaps'].values())) - accuracy = np.nanmean(overlaps) - length = sum([len(x) for x in ret['overlaps'].values()]) - failures = list(ret['failures'].values()) - lost_number = np.mean(np.sum(failures, axis=0)) - robustness = np.mean(np.sum(np.array(failures), axis=0) / length) * 100 - if eao_result is None: - print(formatter.format(tracker_name, accuracy, robustness, lost_number)) - else: - print(formatter.format(tracker_name, accuracy, robustness, lost_number, eao_result[tracker_name]['all'])) - print(bar) - - if show_video_level and len(result) < 10: - print('\n\n') - header1 = "|{:^14}|".format("Tracker name") - header2 = "|{:^14}|".format("Video name") - for tracker_name in result.keys(): - header1 += ("{:^17}|").format(tracker_name) - 
header2 += "{:^8}|{:^8}|".format("Acc", "LN") - print('-'*len(header1)) - print(header1) - print('-'*len(header1)) - print(header2) - print('-'*len(header1)) - videos = list(result[tracker_name]['overlaps'].keys()) - for video in videos: - row = "|{:^14}|".format(video) - for tracker_name in result.keys(): - overlaps = result[tracker_name]['overlaps'][video] - accuracy = np.nanmean(overlaps) - failures = result[tracker_name]['failures'][video] - lost_number = np.mean(failures) - - accuracy_str = "{:^8.3f}".format(accuracy) - if accuracy < helight_threshold: - row += f'{Fore.RED}{accuracy_str}{Style.RESET_ALL}|' - else: - row += accuracy_str+'|' - lost_num_str = "{:^8.3f}".format(lost_number) - if lost_number > 0: - row += f'{Fore.RED}{lost_num_str}{Style.RESET_ALL}|' - else: - row += lost_num_str+'|' - print(row) - print('-'*len(header1)) - - def _calculate_accuracy_robustness(self, tracker_name): - overlaps = {} - failures = {} - all_length = {} - for i in range(len(self.dataset)): - video = self.dataset[i] - gt_traj = video.gt_traj - if tracker_name not in video.pred_trajs: - tracker_trajs = video.load_tracker(self.dataset.tracker_path, tracker_name, False) - else: - tracker_trajs = video.pred_trajs[tracker_name] - overlaps_group = [] - num_failures_group = [] - for tracker_traj in tracker_trajs: - num_failures = calculate_failures(tracker_traj)[0] - overlaps_ = calculate_accuracy(tracker_traj, gt_traj, - burnin=10, bound=(video.width, video.height))[1] - overlaps_group.append(overlaps_) - num_failures_group.append(num_failures) - with warnings.catch_warnings(): - warnings.simplefilter("ignore", category=RuntimeWarning) - overlaps[video.name] = np.nanmean(overlaps_group, axis=0).tolist() - failures[video.name] = num_failures_group - return overlaps, failures diff --git a/spaces/orpatashnik/local-prompt-mixing/README.md b/spaces/orpatashnik/local-prompt-mixing/README.md deleted file mode 100644 index 3ff0700074ba09ed2085a8801411cfe06f1121af..0000000000000000000000000000000000000000 --- a/spaces/orpatashnik/local-prompt-mixing/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Local Prompt Mixing -emoji: 🏢 -colorFrom: indigo -colorTo: yellow -sdk: gradio -sdk_version: 3.23.0 -app_file: gradio_app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/training/custom_diffusion.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/training/custom_diffusion.md deleted file mode 100644 index 0923c046cc6f6ab66edd0ee6cc3920f87cdc82b7..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/ko/training/custom_diffusion.md +++ /dev/null @@ -1,300 +0,0 @@ - - -# 커스텀 Diffusion 학습 예제 - -[커스텀 Diffusion](https://arxiv.org/abs/2212.04488)은 피사체의 이미지 몇 장(4~5장)만 주어지면 Stable Diffusion처럼 text-to-image 모델을 커스터마이징하는 방법입니다. -'train_custom_diffusion.py' 스크립트는 학습 과정을 구현하고 이를 Stable Diffusion에 맞게 조정하는 방법을 보여줍니다. - -이 교육 사례는 [Nupur Kumari](https://nupurkmr9.github.io/)가 제공하였습니다. (Custom Diffusion의 저자 중 한명). - -## 로컬에서 PyTorch로 실행하기 - -### Dependencies 설치하기 - -스크립트를 실행하기 전에 라이브러리의 학습 dependencies를 설치해야 합니다: - -**중요** - -예제 스크립트의 최신 버전을 성공적으로 실행하려면 **소스로부터 설치**하는 것을 매우 권장하며, 예제 스크립트를 자주 업데이트하는 만큼 일부 예제별 요구 사항을 설치하고 설치를 최신 상태로 유지하는 것이 좋습니다. 이를 위해 새 가상 환경에서 다음 단계를 실행하세요: - - -```bash -git clone https://github.com/huggingface/diffusers -cd diffusers -pip install -e . 
-``` - -[example folder](https://github.com/huggingface/diffusers/tree/main/examples/custom_diffusion)로 cd하여 이동하세요. - -``` -cd examples/custom_diffusion -``` - -이제 실행 - -```bash -pip install -r requirements.txt -pip install clip-retrieval -``` - -그리고 [🤗Accelerate](https://github.com/huggingface/accelerate/) 환경을 초기화: - -```bash -accelerate config -``` - -또는 사용자 환경에 대한 질문에 답하지 않고 기본 가속 구성을 사용하려면 다음과 같이 하세요. - -```bash -accelerate config default -``` - -또는 사용 중인 환경이 대화형 셸을 지원하지 않는 경우(예: jupyter notebook) - -```python -from accelerate.utils import write_basic_config - -write_basic_config() -``` -### 고양이 예제 😺 - -이제 데이터셋을 가져옵니다. [여기](https://www.cs.cmu.edu/~custom-diffusion/assets/data.zip)에서 데이터셋을 다운로드하고 압축을 풉니다. 직접 데이터셋을 사용하려면 [학습용 데이터셋 생성하기](create_dataset) 가이드를 참고하세요. - -또한 'clip-retrieval'을 사용하여 200개의 실제 이미지를 수집하고, regularization으로서 이를 학습 데이터셋의 타겟 이미지와 결합합니다. 이렇게 하면 주어진 타겟 이미지에 대한 과적합을 방지할 수 있습니다. 다음 플래그를 사용하면 `prior_loss_weight=1.`로 `prior_preservation`, `real_prior` regularization을 활성화할 수 있습니다. -클래스_프롬프트`는 대상 이미지와 동일한 카테고리 이름이어야 합니다. 수집된 실제 이미지에는 `class_prompt`와 유사한 텍스트 캡션이 있습니다. 검색된 이미지는 `class_data_dir`에 저장됩니다. 생성된 이미지를 regularization으로 사용하기 위해 `real_prior`를 비활성화할 수 있습니다. 실제 이미지를 수집하려면 훈련 전에 이 명령을 먼저 사용하십시오. - -```bash -pip install clip-retrieval -python retrieve.py --class_prompt cat --class_data_dir real_reg/samples_cat --num_class_images 200 -``` - -**___참고: [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) 768x768 모델을 사용하는 경우 '해상도'를 768로 변경하세요.___** - -스크립트는 모델 체크포인트와 `pytorch_custom_diffusion_weights.bin` 파일을 생성하여 저장소에 저장합니다. - -```bash -export MODEL_NAME="CompVis/stable-diffusion-v1-4" -export OUTPUT_DIR="path-to-save-model" -export INSTANCE_DIR="./data/cat" - -accelerate launch train_custom_diffusion.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --instance_data_dir=$INSTANCE_DIR \ - --output_dir=$OUTPUT_DIR \ - --class_data_dir=./real_reg/samples_cat/ \ - --with_prior_preservation --real_prior --prior_loss_weight=1.0 \ - --class_prompt="cat" --num_class_images=200 \ - --instance_prompt="photo of a cat" \ - --resolution=512 \ - --train_batch_size=2 \ - --learning_rate=1e-5 \ - --lr_warmup_steps=0 \ - --max_train_steps=250 \ - --scale_lr --hflip \ - --modifier_token "" \ - --push_to_hub -``` - -**더 낮은 VRAM 요구 사항(GPU당 16GB)으로 더 빠르게 훈련하려면 `--enable_xformers_memory_efficient_attention`을 사용하세요. 설치 방법은 [가이드](https://github.com/facebookresearch/xformers)를 따르세요.** - -가중치 및 편향(`wandb`)을 사용하여 실험을 추적하고 중간 결과를 저장하려면(강력히 권장합니다) 다음 단계를 따르세요: - -* `wandb` 설치: `pip install wandb`. -* 로그인 : `wandb login`. -* 그런 다음 트레이닝을 시작하는 동안 `validation_prompt`를 지정하고 `report_to`를 `wandb`로 설정합니다. 다음과 같은 관련 인수를 구성할 수도 있습니다: - * `num_validation_images` - * `validation_steps` - -```bash -accelerate launch train_custom_diffusion.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --instance_data_dir=$INSTANCE_DIR \ - --output_dir=$OUTPUT_DIR \ - --class_data_dir=./real_reg/samples_cat/ \ - --with_prior_preservation --real_prior --prior_loss_weight=1.0 \ - --class_prompt="cat" --num_class_images=200 \ - --instance_prompt="photo of a cat" \ - --resolution=512 \ - --train_batch_size=2 \ - --learning_rate=1e-5 \ - --lr_warmup_steps=0 \ - --max_train_steps=250 \ - --scale_lr --hflip \ - --modifier_token "" \ - --validation_prompt=" cat sitting in a bucket" \ - --report_to="wandb" \ - --push_to_hub -``` - -다음은 [Weights and Biases page](https://wandb.ai/sayakpaul/custom-diffusion/runs/26ghrcau)의 예시이며, 여러 학습 세부 정보와 함께 중간 결과들을 확인할 수 있습니다. 
- -`--push_to_hub`를 지정하면 학습된 파라미터가 허깅 페이스 허브의 리포지토리에 푸시됩니다. 다음은 [예제 리포지토리](https://huggingface.co/sayakpaul/custom-diffusion-cat)입니다. - -### 멀티 컨셉에 대한 학습 🐱🪵 - -[this](https://github.com/ShivamShrirao/diffusers/blob/main/examples/dreambooth/train_dreambooth.py)와 유사하게 각 컨셉에 대한 정보가 포함된 [json](https://github.com/adobe-research/custom-diffusion/blob/main/assets/concept_list.json) 파일을 제공합니다. - -실제 이미지를 수집하려면 json 파일의 각 컨셉에 대해 이 명령을 실행합니다. - -```bash -pip install clip-retrieval -python retrieve.py --class_prompt {} --class_data_dir {} --num_class_images 200 -``` - -그럼 우리는 학습시킬 준비가 되었습니다! - -```bash -export MODEL_NAME="CompVis/stable-diffusion-v1-4" -export OUTPUT_DIR="path-to-save-model" - -accelerate launch train_custom_diffusion.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --output_dir=$OUTPUT_DIR \ - --concepts_list=./concept_list.json \ - --with_prior_preservation --real_prior --prior_loss_weight=1.0 \ - --resolution=512 \ - --train_batch_size=2 \ - --learning_rate=1e-5 \ - --lr_warmup_steps=0 \ - --max_train_steps=500 \ - --num_class_images=200 \ - --scale_lr --hflip \ - --modifier_token "+" \ - --push_to_hub -``` - -다음은 [Weights and Biases page](https://wandb.ai/sayakpaul/custom-diffusion/runs/3990tzkg)의 예시이며, 다른 학습 세부 정보와 함께 중간 결과들을 확인할 수 있습니다. - -### 사람 얼굴에 대한 학습 - -사람 얼굴에 대한 파인튜닝을 위해 다음과 같은 설정이 더 효과적이라는 것을 확인했습니다: `learning_rate=5e-6`, `max_train_steps=1000 to 2000`, `freeze_model=crossattn`을 최소 15~20개의 이미지로 설정합니다. - -실제 이미지를 수집하려면 훈련 전에 이 명령을 먼저 사용하십시오. - -```bash -pip install clip-retrieval -python retrieve.py --class_prompt person --class_data_dir real_reg/samples_person --num_class_images 200 -``` - -이제 학습을 시작하세요! - -```bash -export MODEL_NAME="CompVis/stable-diffusion-v1-4" -export OUTPUT_DIR="path-to-save-model" -export INSTANCE_DIR="path-to-images" - -accelerate launch train_custom_diffusion.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --instance_data_dir=$INSTANCE_DIR \ - --output_dir=$OUTPUT_DIR \ - --class_data_dir=./real_reg/samples_person/ \ - --with_prior_preservation --real_prior --prior_loss_weight=1.0 \ - --class_prompt="person" --num_class_images=200 \ - --instance_prompt="photo of a person" \ - --resolution=512 \ - --train_batch_size=2 \ - --learning_rate=5e-6 \ - --lr_warmup_steps=0 \ - --max_train_steps=1000 \ - --scale_lr --hflip --noaug \ - --freeze_model crossattn \ - --modifier_token "" \ - --enable_xformers_memory_efficient_attention \ - --push_to_hub -``` - -## 추론 - -위 프롬프트를 사용하여 모델을 학습시킨 후에는 아래 프롬프트를 사용하여 추론을 실행할 수 있습니다. 프롬프트에 'modifier token'(예: 위 예제에서는 \)을 반드시 포함해야 합니다. 
- -```python -import torch -from diffusers import DiffusionPipeline - -pipe = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16).to("cuda") -pipe.unet.load_attn_procs("path-to-save-model", weight_name="pytorch_custom_diffusion_weights.bin") -pipe.load_textual_inversion("path-to-save-model", weight_name=".bin") - -image = pipe( - " cat sitting in a bucket", - num_inference_steps=100, - guidance_scale=6.0, - eta=1.0, -).images[0] -image.save("cat.png") -``` - -허브 리포지토리에서 이러한 매개변수를 직접 로드할 수 있습니다: - -```python -import torch -from huggingface_hub.repocard import RepoCard -from diffusers import DiffusionPipeline - -model_id = "sayakpaul/custom-diffusion-cat" -card = RepoCard.load(model_id) -base_model_id = card.data.to_dict()["base_model"] - -pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16).to("cuda") -pipe.unet.load_attn_procs(model_id, weight_name="pytorch_custom_diffusion_weights.bin") -pipe.load_textual_inversion(model_id, weight_name=".bin") - -image = pipe( - " cat sitting in a bucket", - num_inference_steps=100, - guidance_scale=6.0, - eta=1.0, -).images[0] -image.save("cat.png") -``` - -다음은 여러 컨셉으로 추론을 수행하는 예제입니다: - -```python -import torch -from huggingface_hub.repocard import RepoCard -from diffusers import DiffusionPipeline - -model_id = "sayakpaul/custom-diffusion-cat-wooden-pot" -card = RepoCard.load(model_id) -base_model_id = card.data.to_dict()["base_model"] - -pipe = DiffusionPipeline.from_pretrained(base_model_id, torch_dtype=torch.float16).to("cuda") -pipe.unet.load_attn_procs(model_id, weight_name="pytorch_custom_diffusion_weights.bin") -pipe.load_textual_inversion(model_id, weight_name=".bin") -pipe.load_textual_inversion(model_id, weight_name=".bin") - -image = pipe( - "the cat sculpture in the style of a wooden pot", - num_inference_steps=100, - guidance_scale=6.0, - eta=1.0, -).images[0] -image.save("multi-subject.png") -``` - -여기서 '고양이'와 '나무 냄비'는 여러 컨셉을 말합니다. - -### 학습된 체크포인트에서 추론하기 - -`--checkpointing_steps` 인수를 사용한 경우 학습 과정에서 저장된 전체 체크포인트 중 하나에서 추론을 수행할 수도 있습니다. - -## Grads를 None으로 설정 - -더 많은 메모리를 절약하려면 스크립트에 `--set_grads_to_none` 인수를 전달하세요. 이렇게 하면 성적이 0이 아닌 없음으로 설정됩니다. 그러나 특정 동작이 변경되므로 문제가 발생하면 이 인수를 제거하세요. - -자세한 정보: https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html - -## 실험 결과 - -실험에 대한 자세한 내용은 [당사 웹페이지](https://www.cs.cmu.edu/~custom-diffusion/)를 참조하세요. \ No newline at end of file diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/transformer_temporal.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/transformer_temporal.py deleted file mode 100644 index cfafdb055bcfedc911b0a19d1e5da8089a18b215..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/transformer_temporal.py +++ /dev/null @@ -1,179 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-from dataclasses import dataclass -from typing import Optional - -import torch -from torch import nn - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import BaseOutput -from .attention import BasicTransformerBlock -from .modeling_utils import ModelMixin - - -@dataclass -class TransformerTemporalModelOutput(BaseOutput): - """ - The output of [`TransformerTemporalModel`]. - - Args: - sample (`torch.FloatTensor` of shape `(batch_size x num_frames, num_channels, height, width)`): - The hidden states output conditioned on `encoder_hidden_states` input. - """ - - sample: torch.FloatTensor - - -class TransformerTemporalModel(ModelMixin, ConfigMixin): - """ - A Transformer model for video-like data. - - Parameters: - num_attention_heads (`int`, *optional*, defaults to 16): The number of heads to use for multi-head attention. - attention_head_dim (`int`, *optional*, defaults to 88): The number of channels in each head. - in_channels (`int`, *optional*): - The number of channels in the input and output (specify if the input is **continuous**). - num_layers (`int`, *optional*, defaults to 1): The number of layers of Transformer blocks to use. - dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use. - cross_attention_dim (`int`, *optional*): The number of `encoder_hidden_states` dimensions to use. - sample_size (`int`, *optional*): The width of the latent images (specify if the input is **discrete**). - This is fixed during training since it is used to learn a number of position embeddings. - activation_fn (`str`, *optional*, defaults to `"geglu"`): Activation function to use in feed-forward. - attention_bias (`bool`, *optional*): - Configure if the `TransformerBlock` attention should contain a bias parameter. - double_self_attention (`bool`, *optional*): - Configure if each `TransformerBlock` should contain two self-attention layers. - """ - - @register_to_config - def __init__( - self, - num_attention_heads: int = 16, - attention_head_dim: int = 88, - in_channels: Optional[int] = None, - out_channels: Optional[int] = None, - num_layers: int = 1, - dropout: float = 0.0, - norm_num_groups: int = 32, - cross_attention_dim: Optional[int] = None, - attention_bias: bool = False, - sample_size: Optional[int] = None, - activation_fn: str = "geglu", - norm_elementwise_affine: bool = True, - double_self_attention: bool = True, - ): - super().__init__() - self.num_attention_heads = num_attention_heads - self.attention_head_dim = attention_head_dim - inner_dim = num_attention_heads * attention_head_dim - - self.in_channels = in_channels - - self.norm = torch.nn.GroupNorm(num_groups=norm_num_groups, num_channels=in_channels, eps=1e-6, affine=True) - self.proj_in = nn.Linear(in_channels, inner_dim) - - # 3. Define transformers blocks - self.transformer_blocks = nn.ModuleList( - [ - BasicTransformerBlock( - inner_dim, - num_attention_heads, - attention_head_dim, - dropout=dropout, - cross_attention_dim=cross_attention_dim, - activation_fn=activation_fn, - attention_bias=attention_bias, - double_self_attention=double_self_attention, - norm_elementwise_affine=norm_elementwise_affine, - ) - for d in range(num_layers) - ] - ) - - self.proj_out = nn.Linear(inner_dim, in_channels) - - def forward( - self, - hidden_states, - encoder_hidden_states=None, - timestep=None, - class_labels=None, - num_frames=1, - cross_attention_kwargs=None, - return_dict: bool = True, - ): - """ - The [`TransformerTemporal`] forward method. 
- - Args: - hidden_states (`torch.LongTensor` of shape `(batch size, num latent pixels)` if discrete, `torch.FloatTensor` of shape `(batch size, channel, height, width)` if continuous): - Input hidden_states. - encoder_hidden_states ( `torch.LongTensor` of shape `(batch size, encoder_hidden_states dim)`, *optional*): - Conditional embeddings for cross attention layer. If not given, cross-attention defaults to - self-attention. - timestep ( `torch.long`, *optional*): - Used to indicate denoising step. Optional timestep to be applied as an embedding in `AdaLayerNorm`. - class_labels ( `torch.LongTensor` of shape `(batch size, num classes)`, *optional*): - Used to indicate class labels conditioning. Optional class labels to be applied as an embedding in - `AdaLayerZeroNorm`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~models.unet_2d_condition.UNet2DConditionOutput`] instead of a plain - tuple. - - Returns: - [`~models.transformer_temporal.TransformerTemporalModelOutput`] or `tuple`: - If `return_dict` is True, an [`~models.transformer_temporal.TransformerTemporalModelOutput`] is - returned, otherwise a `tuple` where the first element is the sample tensor. - """ - # 1. Input - batch_frames, channel, height, width = hidden_states.shape - batch_size = batch_frames // num_frames - - residual = hidden_states - - hidden_states = hidden_states[None, :].reshape(batch_size, num_frames, channel, height, width) - hidden_states = hidden_states.permute(0, 2, 1, 3, 4) - - hidden_states = self.norm(hidden_states) - hidden_states = hidden_states.permute(0, 3, 4, 2, 1).reshape(batch_size * height * width, num_frames, channel) - - hidden_states = self.proj_in(hidden_states) - - # 2. Blocks - for block in self.transformer_blocks: - hidden_states = block( - hidden_states, - encoder_hidden_states=encoder_hidden_states, - timestep=timestep, - cross_attention_kwargs=cross_attention_kwargs, - class_labels=class_labels, - ) - - # 3. Output - hidden_states = self.proj_out(hidden_states) - hidden_states = ( - hidden_states[None, None, :] - .reshape(batch_size, height, width, channel, num_frames) - .permute(0, 3, 4, 1, 2) - .contiguous() - ) - hidden_states = hidden_states.reshape(batch_frames, channel, height, width) - - output = hidden_states + residual - - if not return_dict: - return (output,) - - return TransformerTemporalModelOutput(sample=output) diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet.py deleted file mode 100644 index cb0465c11ef9fdf9ca9fbaa4267c5b18e92f0d84..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/kandinsky2_2/pipeline_kandinsky2_2_controlnet.py +++ /dev/null @@ -1,319 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. - -from typing import Callable, List, Optional, Union - -import torch - -from ...models import UNet2DConditionModel, VQModel -from ...schedulers import DDPMScheduler -from ...utils import ( - logging, -) -from ...utils.torch_utils import randn_tensor -from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - -EXAMPLE_DOC_STRING = """ - Examples: - ```py - >>> import torch - >>> import numpy as np - - >>> from diffusers import KandinskyV22PriorPipeline, KandinskyV22ControlnetPipeline - >>> from transformers import pipeline - >>> from diffusers.utils import load_image - - - >>> def make_hint(image, depth_estimator): - ... image = depth_estimator(image)["depth"] - ... image = np.array(image) - ... image = image[:, :, None] - ... image = np.concatenate([image, image, image], axis=2) - ... detected_map = torch.from_numpy(image).float() / 255.0 - ... hint = detected_map.permute(2, 0, 1) - ... return hint - - - >>> depth_estimator = pipeline("depth-estimation") - - >>> pipe_prior = KandinskyV22PriorPipeline.from_pretrained( - ... "kandinsky-community/kandinsky-2-2-prior", torch_dtype=torch.float16 - ... ) - >>> pipe_prior = pipe_prior.to("cuda") - - >>> pipe = KandinskyV22ControlnetPipeline.from_pretrained( - ... "kandinsky-community/kandinsky-2-2-controlnet-depth", torch_dtype=torch.float16 - ... ) - >>> pipe = pipe.to("cuda") - - - >>> img = load_image( - ... "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main" - ... "/kandinsky/cat.png" - ... ).resize((768, 768)) - - >>> hint = make_hint(img, depth_estimator).unsqueeze(0).half().to("cuda") - - >>> prompt = "A robot, 4k photo" - >>> negative_prior_prompt = "lowres, text, error, cropped, worst quality, low quality, jpeg artifacts, ugly, duplicate, morbid, mutilated, out of frame, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck, username, watermark, signature" - - >>> generator = torch.Generator(device="cuda").manual_seed(43) - - >>> image_emb, zero_image_emb = pipe_prior( - ... prompt=prompt, negative_prompt=negative_prior_prompt, generator=generator - ... ).to_tuple() - - >>> images = pipe( - ... image_embeds=image_emb, - ... negative_image_embeds=zero_image_emb, - ... hint=hint, - ... num_inference_steps=50, - ... generator=generator, - ... height=768, - ... width=768, - ... ).images - - >>> images[0].save("robot_cat.png") - ``` -""" - - -# Copied from diffusers.pipelines.kandinsky2_2.pipeline_kandinsky2_2.downscale_height_and_width -def downscale_height_and_width(height, width, scale_factor=8): - new_height = height // scale_factor**2 - if height % scale_factor**2 != 0: - new_height += 1 - new_width = width // scale_factor**2 - if width % scale_factor**2 != 0: - new_width += 1 - return new_height * scale_factor, new_width * scale_factor - - -class KandinskyV22ControlnetPipeline(DiffusionPipeline): - """ - Pipeline for text-to-image generation using Kandinsky - - This model inherits from [`DiffusionPipeline`]. 
Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - scheduler ([`DDIMScheduler`]): - A scheduler to be used in combination with `unet` to generate image latents. - unet ([`UNet2DConditionModel`]): - Conditional U-Net architecture to denoise the image embedding. - movq ([`VQModel`]): - MoVQ Decoder to generate the image from the latents. - """ - - model_cpu_offload_seq = "unet->movq" - - def __init__( - self, - unet: UNet2DConditionModel, - scheduler: DDPMScheduler, - movq: VQModel, - ): - super().__init__() - - self.register_modules( - unet=unet, - scheduler=scheduler, - movq=movq, - ) - self.movq_scale_factor = 2 ** (len(self.movq.config.block_out_channels) - 1) - - # Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.prepare_latents - def prepare_latents(self, shape, dtype, device, generator, latents, scheduler): - if latents is None: - latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - else: - if latents.shape != shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}") - latents = latents.to(device) - - latents = latents * scheduler.init_noise_sigma - return latents - - @torch.no_grad() - def __call__( - self, - image_embeds: Union[torch.FloatTensor, List[torch.FloatTensor]], - negative_image_embeds: Union[torch.FloatTensor, List[torch.FloatTensor]], - hint: torch.FloatTensor, - height: int = 512, - width: int = 512, - num_inference_steps: int = 100, - guidance_scale: float = 4.0, - num_images_per_prompt: int = 1, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - return_dict: bool = True, - ): - """ - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - hint (`torch.FloatTensor`): - The controlnet condition. - image_embeds (`torch.FloatTensor` or `List[torch.FloatTensor]`): - The clip image embeddings for text prompt, that will be used to condition the image generation. - negative_image_embeds (`torch.FloatTensor` or `List[torch.FloatTensor]`): - The clip image embeddings for negative text prompt, will be used to condition the image generation. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - height (`int`, *optional*, defaults to 512): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to 512): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 100): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 4.0): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. 
Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - generator (`torch.Generator` or `List[torch.Generator]`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between: `"pil"` (`PIL.Image.Image`), `"np"` - (`np.array`) or `"pt"` (`torch.Tensor`). - callback (`Callable`, *optional*): - A function that calls every `callback_steps` steps during inference. The function is called with the - following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function is called. If not specified, the callback is called at - every step. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.ImagePipelineOutput`] instead of a plain tuple. - - Examples: - - Returns: - [`~pipelines.ImagePipelineOutput`] or `tuple` - """ - device = self._execution_device - - do_classifier_free_guidance = guidance_scale > 1.0 - - if isinstance(image_embeds, list): - image_embeds = torch.cat(image_embeds, dim=0) - if isinstance(negative_image_embeds, list): - negative_image_embeds = torch.cat(negative_image_embeds, dim=0) - if isinstance(hint, list): - hint = torch.cat(hint, dim=0) - - batch_size = image_embeds.shape[0] * num_images_per_prompt - - if do_classifier_free_guidance: - image_embeds = image_embeds.repeat_interleave(num_images_per_prompt, dim=0) - negative_image_embeds = negative_image_embeds.repeat_interleave(num_images_per_prompt, dim=0) - hint = hint.repeat_interleave(num_images_per_prompt, dim=0) - - image_embeds = torch.cat([negative_image_embeds, image_embeds], dim=0).to( - dtype=self.unet.dtype, device=device - ) - hint = torch.cat([hint, hint], dim=0).to(dtype=self.unet.dtype, device=device) - - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps_tensor = self.scheduler.timesteps - - num_channels_latents = self.movq.config.latent_channels - - height, width = downscale_height_and_width(height, width, self.movq_scale_factor) - - # create initial latent - latents = self.prepare_latents( - (batch_size, num_channels_latents, height, width), - image_embeds.dtype, - device, - generator, - latents, - self.scheduler, - ) - - for i, t in enumerate(self.progress_bar(timesteps_tensor)): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - - added_cond_kwargs = {"image_embeds": image_embeds, "hint": hint} - noise_pred = self.unet( - sample=latent_model_input, - timestep=t, - encoder_hidden_states=None, - added_cond_kwargs=added_cond_kwargs, - return_dict=False, - )[0] - - if do_classifier_free_guidance: - noise_pred, variance_pred = noise_pred.split(latents.shape[1], dim=1) - noise_pred_uncond, 
noise_pred_text = noise_pred.chunk(2) - _, variance_pred_text = variance_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - noise_pred = torch.cat([noise_pred, variance_pred_text], dim=1) - - if not ( - hasattr(self.scheduler.config, "variance_type") - and self.scheduler.config.variance_type in ["learned", "learned_range"] - ): - noise_pred, _ = noise_pred.split(latents.shape[1], dim=1) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step( - noise_pred, - t, - latents, - generator=generator, - )[0] - - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - # post-processing - image = self.movq.decode(latents, force_not_quantize=True)["sample"] - - # Offload all models - self.maybe_free_model_hooks() - - if output_type not in ["pt", "np", "pil"]: - raise ValueError(f"Only the output types `pt`, `pil` and `np` are supported not output_type={output_type}") - - if output_type in ["np", "pil"]: - image = image * 0.5 + 0.5 - image = image.clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image,) - - return ImagePipelineOutput(images=image) diff --git a/spaces/peteralexandercharles/automatic-speech-recognition-with-next-gen-kaldi/examples.py b/spaces/peteralexandercharles/automatic-speech-recognition-with-next-gen-kaldi/examples.py deleted file mode 100644 index b3c3151acf1380d13678a1a17975eff027372f8c..0000000000000000000000000000000000000000 --- a/spaces/peteralexandercharles/automatic-speech-recognition-with-next-gen-kaldi/examples.py +++ /dev/null @@ -1,256 +0,0 @@ -#!/usr/bin/env python3 -# -# Copyright 2022 Xiaomi Corp. (authors: Fangjun Kuang) -# -# See LICENSE for clarification regarding multiple authors -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
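For reference, the guidance arithmetic in the KandinskyV22ControlnetPipeline loop deleted above reduces to a few tensor operations: the U-Net output for the doubled (unconditional, conditional) batch is split into a noise prediction and a learned-variance prediction, classifier-free guidance is applied to the noise half only, and the conditional variance is concatenated back before the scheduler step. Below is a minimal, self-contained sketch with dummy tensors; the shapes and variable names are illustrative assumptions, not values taken from the pipeline.

```py
# Hedged sketch of the classifier-free guidance step from the loop above.
# Assumed shapes: 4 latent channels plus 4 learned-variance channels, batch of 1.
import torch

guidance_scale = 4.0
latents = torch.randn(1, 4, 96, 96)    # current latents
unet_out = torch.randn(2, 8, 96, 96)   # U-Net output for the doubled (uncond, cond) batch

# Split off the learned-variance channels, guide only the noise prediction.
noise_pred, variance_pred = unet_out.split(latents.shape[1], dim=1)
noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
_, variance_pred_text = variance_pred.chunk(2)
noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)

# Re-attach the conditional variance for the scheduler step.
noise_pred = torch.cat([noise_pred, variance_pred_text], dim=1)
print(noise_pred.shape)  # torch.Size([1, 8, 96, 96])
```

With `guidance_scale > 1` the combined prediction is pushed toward the text-conditioned direction, which is why the pipeline only doubles the batch when `guidance_scale > 1.0`.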
-examples = [ - [ - "Chinese+English", - "ptrnull/icefall-asr-conv-emformer-transducer-stateless2-zh", - "greedy_search", - 4, - "./test_wavs/tal_csasr/0.wav", - ], - [ - "English", - "csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless3-2022-05-13", - "greedy_search", - 4, - "./test_wavs/librispeech/1089-134686-0001.wav", - ], - [ - "Chinese", - "luomingshuang/icefall_asr_wenetspeech_pruned_transducer_stateless2", - "greedy_search", - 4, - "./test_wavs/wenetspeech/DEV_T0000000000.opus", - ], - [ - "German", - "csukuangfj/wav2vec2.0-torchaudio", - "greedy_search", - 4, - "./test_wavs/german/20170517-0900-PLENARY-16-de_20170517.wav", - ], - [ - "Arabic", - "AmirHussein/icefall-asr-mgb2-conformer_ctc-2022-27-06", - "greedy_search", - 4, - "./test_wavs/arabic/a.wav", - ], - [ - "Tibetan", - "syzym/icefall-asr-xbmu-amdo31-pruned-transducer-stateless7-2022-12-02", - "greedy_search", - 4, - "./test_wavs/tibetan/a_0_cacm-A70_31117.wav", - ], - # librispeech - # https://huggingface.co/csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless5-2022-05-13/tree/main/test_wavs - [ - "English", - "csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless3-2022-05-13", - "greedy_search", - 4, - "./test_wavs/librispeech/1089-134686-0001.wav", - ], - [ - "English", - "csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless3-2022-05-13", - "greedy_search", - 4, - "./test_wavs/librispeech/1221-135766-0001.wav", - ], - [ - "English", - "csukuangfj/icefall-asr-librispeech-pruned-transducer-stateless3-2022-05-13", - "greedy_search", - 4, - "./test_wavs/librispeech/1221-135766-0002.wav", - ], - # gigaspeech - [ - "English", - "wgb14/icefall-asr-gigaspeech-pruned-transducer-stateless2", - "greedy_search", - 4, - "./test_wavs/gigaspeech/1-minute-audiobook.opus", - ], - [ - "English", - "wgb14/icefall-asr-gigaspeech-pruned-transducer-stateless2", - "greedy_search", - 4, - "./test_wavs/gigaspeech/100-seconds-podcast.opus", - ], - [ - "English", - "wgb14/icefall-asr-gigaspeech-pruned-transducer-stateless2", - "greedy_search", - 4, - "./test_wavs/gigaspeech/100-seconds-youtube.opus", - ], - # wenetspeech - # https://huggingface.co/luomingshuang/icefall_asr_wenetspeech_pruned_transducer_stateless2/tree/main/test_wavs - [ - "Chinese", - "luomingshuang/icefall_asr_wenetspeech_pruned_transducer_stateless2", - "greedy_search", - 4, - "./test_wavs/wenetspeech/DEV_T0000000000.opus", - ], - [ - "Chinese", - "luomingshuang/icefall_asr_wenetspeech_pruned_transducer_stateless2", - "greedy_search", - 4, - "./test_wavs/wenetspeech/DEV_T0000000001.opus", - ], - [ - "Chinese", - "luomingshuang/icefall_asr_wenetspeech_pruned_transducer_stateless2", - "greedy_search", - 4, - "./test_wavs/wenetspeech/DEV_T0000000002.opus", - ], - # aishell2-A - # https://huggingface.co/yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-A-2022-07-12/tree/main/test_wavs - [ - "Chinese", - "yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-A-2022-07-12", - "greedy_search", - 4, - "./test_wavs/aishell2/ID0012W0030.wav", - ], - [ - "Chinese", - "yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-A-2022-07-12", - "greedy_search", - 4, - "./test_wavs/aishell2/ID0012W0162.wav", - ], - [ - "Chinese", - "yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-A-2022-07-12", - "greedy_search", - 4, - "./test_wavs/aishell2/ID0012W0215.wav", - ], - # aishell2-B - # https://huggingface.co/yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-A-2022-07-12/tree/main/test_wavs - [ - "Chinese", - 
"yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-B-2022-07-12", - "greedy_search", - 4, - "./test_wavs/aishell2/ID0012W0030.wav", - ], - [ - "Chinese", - "yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-B-2022-07-12", - "greedy_search", - 4, - "./test_wavs/aishell2/ID0012W0162.wav", - ], - [ - "Chinese", - "yuekai/icefall-asr-aishell2-pruned-transducer-stateless5-B-2022-07-12", - "greedy_search", - 4, - "./test_wavs/aishell2/ID0012W0215.wav", - ], - # aishell2-B - # https://huggingface.co/luomingshuang/icefall_asr_aidatatang-200zh_pruned_transducer_stateless2/tree/main/test_wavs - [ - "Chinese", - "luomingshuang/icefall_asr_aidatatang-200zh_pruned_transducer_stateless2", - "greedy_search", - 4, - "./test_wavs/aidatatang_200zh/T0055G0036S0002.wav", - ], - [ - "Chinese", - "luomingshuang/icefall_asr_aidatatang-200zh_pruned_transducer_stateless2", - "greedy_search", - 4, - "./test_wavs/aidatatang_200zh/T0055G0036S0003.wav", - ], - [ - "Chinese", - "luomingshuang/icefall_asr_aidatatang-200zh_pruned_transducer_stateless2", - "greedy_search", - 4, - "./test_wavs/aidatatang_200zh/T0055G0036S0004.wav", - ], - # tal_csasr - [ - "Chinese+English", - "ptrnull/icefall-asr-conv-emformer-transducer-stateless2-zh", - "greedy_search", - 4, - "./test_wavs/tal_csasr/210_36476_210_8341_1_1533271973_7057520_132.wav", - ], - [ - "Chinese+English", - "ptrnull/icefall-asr-conv-emformer-transducer-stateless2-zh", - "greedy_search", - 4, - "./test_wavs/tal_csasr/210_36476_210_8341_1_1533271973_7057520_138.wav", - ], - [ - "Chinese+English", - "ptrnull/icefall-asr-conv-emformer-transducer-stateless2-zh", - "greedy_search", - 4, - "./test_wavs/tal_csasr/210_36476_210_8341_1_1533271973_7057520_145.wav", - ], - [ - "Tibetan", - "syzym/icefall-asr-xbmu-amdo31-pruned-transducer-stateless7-2022-12-02", - "greedy_search", - 4, - "./test_wavs/tibetan/a_0_cacm-A70_31116.wav", - ], - [ - "Tibetan", - "syzym/icefall-asr-xbmu-amdo31-pruned-transducer-stateless7-2022-12-02", - "greedy_search", - 4, - "./test_wavs/tibetan/a_0_cacm-A70_31118.wav", - ], - # arabic - [ - "Arabic", - "AmirHussein/icefall-asr-mgb2-conformer_ctc-2022-27-06", - "greedy_search", - 4, - "./test_wavs/arabic/b.wav", - ], - [ - "Arabic", - "AmirHussein/icefall-asr-mgb2-conformer_ctc-2022-27-06", - "greedy_search", - 4, - "./test_wavs/arabic/c.wav", - ], - [ - "German", - "csukuangfj/wav2vec2.0-torchaudio", - "greedy_search", - 4, - "./test_wavs/german/20120315-0900-PLENARY-14-de_20120315.wav", - ], -] diff --git a/spaces/phyloforfun/VoucherVision/vouchervision/LeafMachine2_Config_Builder.py b/spaces/phyloforfun/VoucherVision/vouchervision/LeafMachine2_Config_Builder.py deleted file mode 100644 index 16873de5e627254efd12b6cd0b76365cf9dfb453..0000000000000000000000000000000000000000 --- a/spaces/phyloforfun/VoucherVision/vouchervision/LeafMachine2_Config_Builder.py +++ /dev/null @@ -1,246 +0,0 @@ -import os, yaml, platform - -def get_default_download_folder(): - system_platform = platform.system() # Gets the system platform, e.g., 'Linux', 'Windows', 'Darwin' - - if system_platform == "Windows": - # Typically, the Downloads folder for Windows is in the user's profile folder - default_output_folder = os.path.join(os.getenv('USERPROFILE'), 'Downloads') - elif system_platform == "Darwin": - # Typically, the Downloads folder for macOS is in the user's home directory - default_output_folder = os.path.join(os.path.expanduser("~"), 'Downloads') - elif system_platform == "Linux": - # Typically, the Downloads folder for Linux is in the user's home 
directory - default_output_folder = os.path.join(os.path.expanduser("~"), 'Downloads') - else: - default_output_folder = "set/path/to/downloads/folder" - print("Please manually set the output folder") - return default_output_folder - -def build_LM2_config(): - dir_home = os.path.dirname(os.path.dirname(os.path.dirname(__file__))) - - - # Initialize the base structure - config_data = { - 'leafmachine': {} - } - - # Modular sections to be added to 'leafmachine' - do_section = { - 'check_for_illegal_filenames': True, - 'check_for_corrupt_images_make_vertical': True, - 'run_leaf_processing': True - } - - print_section = { - 'verbose': True, - 'optional_warnings': True - } - - logging_section = { - 'log_level': None - } - - default_output_folder = get_default_download_folder() - project_section = { - 'dir_output': default_output_folder, - # 'dir_output': 'D:/D_Desktop/LM2', - 'run_name': 'test', - 'image_location': 'local', - 'GBIF_mode': 'all', - 'batch_size': 40, - 'num_workers': 2, - 'dir_images_local': '', - # 'dir_images_local': 'D:\Dropbox\LM2_Env\Image_Datasets\Manuscript_Images', - 'path_combined_csv_local': None, - 'path_occurrence_csv_local': None, - 'path_images_csv_local': None, - 'use_existing_plant_component_detections': None, - 'use_existing_archival_component_detections': None, - 'process_subset_of_images': False, - 'dir_images_subset': '', - 'n_images_per_species': 10, - 'species_list': '' - } - - cropped_components_section = { - 'do_save_cropped_annotations': False, - 'save_cropped_annotations': ['label'], - 'save_per_image': False, - 'save_per_annotation_class': True, - 'binarize_labels': False, - 'binarize_labels_skeletonize': False - } - - modules_section = { - 'armature': False, - 'specimen_crop': False - } - - data_section = { - 'save_json_rulers': False, - 'save_json_measurements': False, - 'save_individual_csv_files_rulers': False, - 'save_individual_csv_files_measurements': False, - 'save_individual_csv_files_landmarks': False, - 'save_individual_efd_files': False, - 'include_darwin_core_data_from_combined_file': False, - 'do_apply_conversion_factor': True - } - - overlay_section = { - 'save_overlay_to_pdf': False, - 'save_overlay_to_jpgs': True, - 'overlay_dpi': 300, # Between 100 to 300 - 'overlay_background_color': 'black', # Either 'white' or 'black' - - 'show_archival_detections': True, - 'show_plant_detections': True, - 'show_segmentations': True, - 'show_landmarks': True, - 'ignore_archival_detections_classes': [], - 'ignore_plant_detections_classes': ['leaf_whole', 'specimen'], # Could also include 'leaf_partial' and others if needed - 'ignore_landmark_classes': [], - - 'line_width_archival': 12, # Previous value given was 2 - 'line_width_plant': 12, # Previous value given was 6 - 'line_width_seg': 12, # 12 is specified as "thick" - 'line_width_efd': 12, # 3 is specified as "thick" but 12 is given here - 'alpha_transparency_archival': 0.3, - 'alpha_transparency_plant': 0, - 'alpha_transparency_seg_whole_leaf': 0.4, - 'alpha_transparency_seg_partial_leaf': 0.3 - } - - plant_component_detector_section = { - 'detector_type': 'Plant_Detector', - 'detector_version': 'PLANT_GroupAB_200', - 'detector_iteration': 'PLANT_GroupAB_200', - 'detector_weights': 'best.pt', - 'minimum_confidence_threshold': 0.3, # Default is 0.5 - 'do_save_prediction_overlay_images': True, - 'ignore_objects_for_overlay': [] # 'leaf_partial' can be included if needed - } - - archival_component_detector_section = { - 'detector_type': 'Archival_Detector', - 'detector_version': 'PREP_final', - 
'detector_iteration': 'PREP_final', - 'detector_weights': 'best.pt', - 'minimum_confidence_threshold': 0.5, # Default is 0.5 - 'do_save_prediction_overlay_images': True, - 'ignore_objects_for_overlay': [] - } - - armature_component_detector_section = { - 'detector_type': 'Armature_Detector', - 'detector_version': 'ARM_A_1000', - 'detector_iteration': 'ARM_A_1000', - 'detector_weights': 'best.pt', - 'minimum_confidence_threshold': 0.5, # Optionally: 0.2 - 'do_save_prediction_overlay_images': True, - 'ignore_objects_for_overlay': [] - } - - landmark_detector_section = { - 'landmark_whole_leaves': True, - 'landmark_partial_leaves': False, - 'detector_type': 'Landmark_Detector_YOLO', - 'detector_version': 'Landmarks', - 'detector_iteration': 'Landmarks_V2', - 'detector_weights': 'best.pt', - 'minimum_confidence_threshold': 0.02, - 'do_save_prediction_overlay_images': True, - 'ignore_objects_for_overlay': [], - 'use_existing_landmark_detections': None, # Example path provided - 'do_show_QC_images': False, - 'do_save_QC_images': True, - 'do_show_final_images': False, - 'do_save_final_images': True - } - - landmark_detector_armature_section = { - 'upscale_factor': 10, - 'detector_type': 'Landmark_Detector_YOLO', - 'detector_version': 'Landmarks_Arm_A_200', - 'detector_iteration': 'Landmarks_Arm_A_200', - 'detector_weights': 'last.pt', - 'minimum_confidence_threshold': 0.06, - 'do_save_prediction_overlay_images': True, - 'ignore_objects_for_overlay': [], - 'use_existing_landmark_detections': None, # Example path provided - 'do_show_QC_images': True, - 'do_save_QC_images': True, - 'do_show_final_images': True, - 'do_save_final_images': True - } - - ruler_detection_section = { - 'detect_ruler_type': True, - 'ruler_detector': 'ruler_classifier_38classes_v-1.pt', - 'ruler_binary_detector': 'model_scripted_resnet_720_withCompression.pt', - 'minimum_confidence_threshold': 0.4, - 'save_ruler_validation': False, - 'save_ruler_validation_summary': True, - 'save_ruler_processed': False - } - - leaf_segmentation_section = { - 'segment_whole_leaves': True, - 'segment_partial_leaves': False, - - 'keep_only_best_one_leaf_one_petiole': True, - - 'save_segmentation_overlay_images_to_pdf': True, - 'save_each_segmentation_overlay_image': True, - 'save_individual_overlay_images': True, # Not recommended due to potential file count - 'overlay_line_width': 1, # Default is 1 - - 'use_efds_for_png_masks': False, # Requires calculate_elliptic_fourier_descriptors to be True - 'save_masks_color': True, - 'save_full_image_masks_color': True, - 'save_rgb_cropped_images': True, - - 'find_minimum_bounding_box': True, - - 'calculate_elliptic_fourier_descriptors': True, # Default is True - 'elliptic_fourier_descriptor_order': 40, # Default is 40 - - 'segmentation_model': 'GroupB_Dataset_100000_Iter_1176PTS_512Batch_smooth_l1_LR00025_BGR', - 'minimum_confidence_threshold': 0.7, # Alternatively: 0.9 - 'generate_overlay': True, - 'overlay_dpi': 300, # Range: 100 to 300 - 'overlay_background_color': 'black' # Options: 'white' or 'black' - } - - # Add the sections to the 'leafmachine' key - config_data['leafmachine']['do'] = do_section - config_data['leafmachine']['print'] = print_section - config_data['leafmachine']['logging'] = logging_section - config_data['leafmachine']['project'] = project_section - config_data['leafmachine']['cropped_components'] = cropped_components_section - config_data['leafmachine']['modules'] = modules_section - config_data['leafmachine']['data'] = data_section - config_data['leafmachine']['overlay'] = 
overlay_section - config_data['leafmachine']['plant_component_detector'] = plant_component_detector_section - config_data['leafmachine']['archival_component_detector'] = archival_component_detector_section - config_data['leafmachine']['armature_component_detector'] = armature_component_detector_section - config_data['leafmachine']['landmark_detector'] = landmark_detector_section - config_data['leafmachine']['landmark_detector_armature'] = landmark_detector_armature_section - config_data['leafmachine']['ruler_detection'] = ruler_detection_section - config_data['leafmachine']['leaf_segmentation'] = leaf_segmentation_section - - return config_data, dir_home - -def write_config_file(config_data, dir_home, filename="LeafMachine2.yaml"): - file_path = os.path.join(dir_home, filename) - - # Write the data to a YAML file - with open(file_path, "w") as outfile: - yaml.dump(config_data, outfile, default_flow_style=False) - -if __name__ == '__main__': - config_data, dir_home = build_LM2_config() - write_config_file(config_data, dir_home) - diff --git a/spaces/prateekagrawal/roberta-testing/apps/credits.py b/spaces/prateekagrawal/roberta-testing/apps/credits.py deleted file mode 100644 index e76308d90760291661de2aeb58d80a6e1b7288b5..0000000000000000000000000000000000000000 --- a/spaces/prateekagrawal/roberta-testing/apps/credits.py +++ /dev/null @@ -1,4 +0,0 @@ -import streamlit as st - -def app(): - st.title(' Credits') diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/vegalite/v5/schema/mixins.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/vegalite/v5/schema/mixins.py deleted file mode 100644 index ec172c3cdfa570fe4202b691ccd1153afcc06f9f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/altair/vegalite/v5/schema/mixins.py +++ /dev/null @@ -1,1302 +0,0 @@ -# The contents of this file are automatically written by -# tools/generate_schema_wrapper.py. Do not modify directly. -import sys - -from . 
import core -from altair.utils import use_signature -from altair.utils.schemapi import Undefined - -if sys.version_info >= (3, 11): - from typing import Self -else: - from typing_extensions import Self - - -class MarkMethodMixin: - """A mixin class that defines mark methods""" - - def mark_arc(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, dir=Undefined, - discreteBandSize=Undefined, dx=Undefined, dy=Undefined, ellipsis=Undefined, - fill=Undefined, fillOpacity=Undefined, filled=Undefined, font=Undefined, - fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, height=Undefined, - href=Undefined, innerRadius=Undefined, interpolate=Undefined, invalid=Undefined, - limit=Undefined, line=Undefined, lineBreak=Undefined, lineHeight=Undefined, - minBandSize=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'arc' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - minBandSize=minBandSize, opacity=opacity, order=order, orient=orient, - outerRadius=outerRadius, padAngle=padAngle, point=point, radius=radius, - radius2=radius2, radius2Offset=radius2Offset, radiusOffset=radiusOffset, - shape=shape, size=size, smooth=smooth, stroke=stroke, strokeCap=strokeCap, - strokeDash=strokeDash, strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, 
strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="arc", **kwds) - else: - copy.mark = "arc" - return copy - - def mark_area(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, minBandSize=Undefined, opacity=Undefined, order=Undefined, - orient=Undefined, outerRadius=Undefined, padAngle=Undefined, point=Undefined, - radius=Undefined, radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, - shape=Undefined, size=Undefined, smooth=Undefined, stroke=Undefined, - strokeCap=Undefined, strokeDash=Undefined, strokeDashOffset=Undefined, - strokeJoin=Undefined, strokeMiterLimit=Undefined, strokeOffset=Undefined, - strokeOpacity=Undefined, strokeWidth=Undefined, style=Undefined, tension=Undefined, - text=Undefined, theta=Undefined, theta2=Undefined, theta2Offset=Undefined, - thetaOffset=Undefined, thickness=Undefined, timeUnitBandPosition=Undefined, - timeUnitBandSize=Undefined, tooltip=Undefined, url=Undefined, width=Undefined, - x=Undefined, x2=Undefined, x2Offset=Undefined, xOffset=Undefined, y=Undefined, - y2=Undefined, y2Offset=Undefined, yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'area' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - minBandSize=minBandSize, opacity=opacity, order=order, orient=orient, 
- outerRadius=outerRadius, padAngle=padAngle, point=point, radius=radius, - radius2=radius2, radius2Offset=radius2Offset, radiusOffset=radiusOffset, - shape=shape, size=size, smooth=smooth, stroke=stroke, strokeCap=strokeCap, - strokeDash=strokeDash, strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="area", **kwds) - else: - copy.mark = "area" - return copy - - def mark_bar(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, dir=Undefined, - discreteBandSize=Undefined, dx=Undefined, dy=Undefined, ellipsis=Undefined, - fill=Undefined, fillOpacity=Undefined, filled=Undefined, font=Undefined, - fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, height=Undefined, - href=Undefined, innerRadius=Undefined, interpolate=Undefined, invalid=Undefined, - limit=Undefined, line=Undefined, lineBreak=Undefined, lineHeight=Undefined, - minBandSize=Undefined, opacity=Undefined, order=Undefined, orient=Undefined, - outerRadius=Undefined, padAngle=Undefined, point=Undefined, radius=Undefined, - radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, shape=Undefined, - size=Undefined, smooth=Undefined, stroke=Undefined, strokeCap=Undefined, - strokeDash=Undefined, strokeDashOffset=Undefined, strokeJoin=Undefined, - strokeMiterLimit=Undefined, strokeOffset=Undefined, strokeOpacity=Undefined, - strokeWidth=Undefined, style=Undefined, tension=Undefined, text=Undefined, - theta=Undefined, theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, y2Offset=Undefined, - yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'bar' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, 
fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - minBandSize=minBandSize, opacity=opacity, order=order, orient=orient, - outerRadius=outerRadius, padAngle=padAngle, point=point, radius=radius, - radius2=radius2, radius2Offset=radius2Offset, radiusOffset=radiusOffset, - shape=shape, size=size, smooth=smooth, stroke=stroke, strokeCap=strokeCap, - strokeDash=strokeDash, strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="bar", **kwds) - else: - copy.mark = "bar" - return copy - - def mark_image(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, minBandSize=Undefined, opacity=Undefined, order=Undefined, - orient=Undefined, outerRadius=Undefined, padAngle=Undefined, point=Undefined, - radius=Undefined, radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, - shape=Undefined, size=Undefined, smooth=Undefined, stroke=Undefined, - strokeCap=Undefined, strokeDash=Undefined, strokeDashOffset=Undefined, - strokeJoin=Undefined, strokeMiterLimit=Undefined, strokeOffset=Undefined, - strokeOpacity=Undefined, strokeWidth=Undefined, style=Undefined, tension=Undefined, - text=Undefined, theta=Undefined, theta2=Undefined, theta2Offset=Undefined, - thetaOffset=Undefined, thickness=Undefined, timeUnitBandPosition=Undefined, - timeUnitBandSize=Undefined, tooltip=Undefined, url=Undefined, width=Undefined, - x=Undefined, x2=Undefined, x2Offset=Undefined, xOffset=Undefined, y=Undefined, - y2=Undefined, y2Offset=Undefined, yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'image' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - 
cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - minBandSize=minBandSize, opacity=opacity, order=order, orient=orient, - outerRadius=outerRadius, padAngle=padAngle, point=point, radius=radius, - radius2=radius2, radius2Offset=radius2Offset, radiusOffset=radiusOffset, - shape=shape, size=size, smooth=smooth, stroke=stroke, strokeCap=strokeCap, - strokeDash=strokeDash, strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="image", **kwds) - else: - copy.mark = "image" - return copy - - def mark_line(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, minBandSize=Undefined, opacity=Undefined, order=Undefined, - orient=Undefined, outerRadius=Undefined, padAngle=Undefined, point=Undefined, - radius=Undefined, radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, - shape=Undefined, size=Undefined, smooth=Undefined, stroke=Undefined, - strokeCap=Undefined, strokeDash=Undefined, strokeDashOffset=Undefined, - strokeJoin=Undefined, strokeMiterLimit=Undefined, strokeOffset=Undefined, - strokeOpacity=Undefined, strokeWidth=Undefined, style=Undefined, tension=Undefined, - text=Undefined, theta=Undefined, theta2=Undefined, theta2Offset=Undefined, - thetaOffset=Undefined, thickness=Undefined, timeUnitBandPosition=Undefined, - timeUnitBandSize=Undefined, tooltip=Undefined, url=Undefined, width=Undefined, - x=Undefined, x2=Undefined, x2Offset=Undefined, xOffset=Undefined, y=Undefined, - y2=Undefined, y2Offset=Undefined, yOffset=Undefined, **kwds) -> Self: - """Set the 
chart's mark to 'line' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - minBandSize=minBandSize, opacity=opacity, order=order, orient=orient, - outerRadius=outerRadius, padAngle=padAngle, point=point, radius=radius, - radius2=radius2, radius2Offset=radius2Offset, radiusOffset=radiusOffset, - shape=shape, size=size, smooth=smooth, stroke=stroke, strokeCap=strokeCap, - strokeDash=strokeDash, strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="line", **kwds) - else: - copy.mark = "line" - return copy - - def mark_point(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, minBandSize=Undefined, opacity=Undefined, order=Undefined, - orient=Undefined, outerRadius=Undefined, padAngle=Undefined, point=Undefined, - radius=Undefined, radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, - shape=Undefined, size=Undefined, smooth=Undefined, stroke=Undefined, - strokeCap=Undefined, strokeDash=Undefined, strokeDashOffset=Undefined, - strokeJoin=Undefined, strokeMiterLimit=Undefined, strokeOffset=Undefined, - strokeOpacity=Undefined, strokeWidth=Undefined, style=Undefined, tension=Undefined, - text=Undefined, theta=Undefined, theta2=Undefined, 
theta2Offset=Undefined, - thetaOffset=Undefined, thickness=Undefined, timeUnitBandPosition=Undefined, - timeUnitBandSize=Undefined, tooltip=Undefined, url=Undefined, width=Undefined, - x=Undefined, x2=Undefined, x2Offset=Undefined, xOffset=Undefined, y=Undefined, - y2=Undefined, y2Offset=Undefined, yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'point' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - minBandSize=minBandSize, opacity=opacity, order=order, orient=orient, - outerRadius=outerRadius, padAngle=padAngle, point=point, radius=radius, - radius2=radius2, radius2Offset=radius2Offset, radiusOffset=radiusOffset, - shape=shape, size=size, smooth=smooth, stroke=stroke, strokeCap=strokeCap, - strokeDash=strokeDash, strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="point", **kwds) - else: - copy.mark = "point" - return copy - - def mark_rect(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, minBandSize=Undefined, opacity=Undefined, order=Undefined, - orient=Undefined, outerRadius=Undefined, padAngle=Undefined, point=Undefined, - radius=Undefined, radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, - 
shape=Undefined, size=Undefined, smooth=Undefined, stroke=Undefined, - strokeCap=Undefined, strokeDash=Undefined, strokeDashOffset=Undefined, - strokeJoin=Undefined, strokeMiterLimit=Undefined, strokeOffset=Undefined, - strokeOpacity=Undefined, strokeWidth=Undefined, style=Undefined, tension=Undefined, - text=Undefined, theta=Undefined, theta2=Undefined, theta2Offset=Undefined, - thetaOffset=Undefined, thickness=Undefined, timeUnitBandPosition=Undefined, - timeUnitBandSize=Undefined, tooltip=Undefined, url=Undefined, width=Undefined, - x=Undefined, x2=Undefined, x2Offset=Undefined, xOffset=Undefined, y=Undefined, - y2=Undefined, y2Offset=Undefined, yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'rect' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - minBandSize=minBandSize, opacity=opacity, order=order, orient=orient, - outerRadius=outerRadius, padAngle=padAngle, point=point, radius=radius, - radius2=radius2, radius2Offset=radius2Offset, radiusOffset=radiusOffset, - shape=shape, size=size, smooth=smooth, stroke=stroke, strokeCap=strokeCap, - strokeDash=strokeDash, strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="rect", **kwds) - else: - copy.mark = "rect" - return copy - - def mark_rule(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, 
interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, minBandSize=Undefined, opacity=Undefined, order=Undefined, - orient=Undefined, outerRadius=Undefined, padAngle=Undefined, point=Undefined, - radius=Undefined, radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, - shape=Undefined, size=Undefined, smooth=Undefined, stroke=Undefined, - strokeCap=Undefined, strokeDash=Undefined, strokeDashOffset=Undefined, - strokeJoin=Undefined, strokeMiterLimit=Undefined, strokeOffset=Undefined, - strokeOpacity=Undefined, strokeWidth=Undefined, style=Undefined, tension=Undefined, - text=Undefined, theta=Undefined, theta2=Undefined, theta2Offset=Undefined, - thetaOffset=Undefined, thickness=Undefined, timeUnitBandPosition=Undefined, - timeUnitBandSize=Undefined, tooltip=Undefined, url=Undefined, width=Undefined, - x=Undefined, x2=Undefined, x2Offset=Undefined, xOffset=Undefined, y=Undefined, - y2=Undefined, y2Offset=Undefined, yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'rule' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - minBandSize=minBandSize, opacity=opacity, order=order, orient=orient, - outerRadius=outerRadius, padAngle=padAngle, point=point, radius=radius, - radius2=radius2, radius2Offset=radius2Offset, radiusOffset=radiusOffset, - shape=shape, size=size, smooth=smooth, stroke=stroke, strokeCap=strokeCap, - strokeDash=strokeDash, strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="rule", **kwds) - else: - copy.mark = "rule" - return copy - - def mark_text(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - 
cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, minBandSize=Undefined, opacity=Undefined, order=Undefined, - orient=Undefined, outerRadius=Undefined, padAngle=Undefined, point=Undefined, - radius=Undefined, radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, - shape=Undefined, size=Undefined, smooth=Undefined, stroke=Undefined, - strokeCap=Undefined, strokeDash=Undefined, strokeDashOffset=Undefined, - strokeJoin=Undefined, strokeMiterLimit=Undefined, strokeOffset=Undefined, - strokeOpacity=Undefined, strokeWidth=Undefined, style=Undefined, tension=Undefined, - text=Undefined, theta=Undefined, theta2=Undefined, theta2Offset=Undefined, - thetaOffset=Undefined, thickness=Undefined, timeUnitBandPosition=Undefined, - timeUnitBandSize=Undefined, tooltip=Undefined, url=Undefined, width=Undefined, - x=Undefined, x2=Undefined, x2Offset=Undefined, xOffset=Undefined, y=Undefined, - y2=Undefined, y2Offset=Undefined, yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'text' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - minBandSize=minBandSize, opacity=opacity, order=order, orient=orient, - outerRadius=outerRadius, padAngle=padAngle, point=point, radius=radius, - radius2=radius2, radius2Offset=radius2Offset, radiusOffset=radiusOffset, - shape=shape, size=size, smooth=smooth, stroke=stroke, strokeCap=strokeCap, - strokeDash=strokeDash, strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="text", **kwds) - else: - copy.mark = "text" - return copy - - def mark_tick(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - 
ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, minBandSize=Undefined, opacity=Undefined, order=Undefined, - orient=Undefined, outerRadius=Undefined, padAngle=Undefined, point=Undefined, - radius=Undefined, radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, - shape=Undefined, size=Undefined, smooth=Undefined, stroke=Undefined, - strokeCap=Undefined, strokeDash=Undefined, strokeDashOffset=Undefined, - strokeJoin=Undefined, strokeMiterLimit=Undefined, strokeOffset=Undefined, - strokeOpacity=Undefined, strokeWidth=Undefined, style=Undefined, tension=Undefined, - text=Undefined, theta=Undefined, theta2=Undefined, theta2Offset=Undefined, - thetaOffset=Undefined, thickness=Undefined, timeUnitBandPosition=Undefined, - timeUnitBandSize=Undefined, tooltip=Undefined, url=Undefined, width=Undefined, - x=Undefined, x2=Undefined, x2Offset=Undefined, xOffset=Undefined, y=Undefined, - y2=Undefined, y2Offset=Undefined, yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'tick' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - minBandSize=minBandSize, opacity=opacity, order=order, orient=orient, - outerRadius=outerRadius, padAngle=padAngle, point=point, radius=radius, - radius2=radius2, radius2Offset=radius2Offset, radiusOffset=radiusOffset, - shape=shape, size=size, smooth=smooth, stroke=stroke, strokeCap=strokeCap, - strokeDash=strokeDash, strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - 
xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="tick", **kwds) - else: - copy.mark = "tick" - return copy - - def mark_trail(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, minBandSize=Undefined, opacity=Undefined, order=Undefined, - orient=Undefined, outerRadius=Undefined, padAngle=Undefined, point=Undefined, - radius=Undefined, radius2=Undefined, radius2Offset=Undefined, radiusOffset=Undefined, - shape=Undefined, size=Undefined, smooth=Undefined, stroke=Undefined, - strokeCap=Undefined, strokeDash=Undefined, strokeDashOffset=Undefined, - strokeJoin=Undefined, strokeMiterLimit=Undefined, strokeOffset=Undefined, - strokeOpacity=Undefined, strokeWidth=Undefined, style=Undefined, tension=Undefined, - text=Undefined, theta=Undefined, theta2=Undefined, theta2Offset=Undefined, - thetaOffset=Undefined, thickness=Undefined, timeUnitBandPosition=Undefined, - timeUnitBandSize=Undefined, tooltip=Undefined, url=Undefined, width=Undefined, - x=Undefined, x2=Undefined, x2Offset=Undefined, xOffset=Undefined, y=Undefined, - y2=Undefined, y2Offset=Undefined, yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'trail' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - minBandSize=minBandSize, opacity=opacity, order=order, orient=orient, - outerRadius=outerRadius, padAngle=padAngle, point=point, radius=radius, - radius2=radius2, radius2Offset=radius2Offset, radiusOffset=radiusOffset, - shape=shape, size=size, smooth=smooth, stroke=stroke, strokeCap=strokeCap, - strokeDash=strokeDash, strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - 
strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="trail", **kwds) - else: - copy.mark = "trail" - return copy - - def mark_circle(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, minBandSize=Undefined, opacity=Undefined, order=Undefined, - orient=Undefined, outerRadius=Undefined, padAngle=Undefined, point=Undefined, - radius=Undefined, radius2=Undefined, radius2Offset=Undefined, - radiusOffset=Undefined, shape=Undefined, size=Undefined, smooth=Undefined, - stroke=Undefined, strokeCap=Undefined, strokeDash=Undefined, - strokeDashOffset=Undefined, strokeJoin=Undefined, strokeMiterLimit=Undefined, - strokeOffset=Undefined, strokeOpacity=Undefined, strokeWidth=Undefined, - style=Undefined, tension=Undefined, text=Undefined, theta=Undefined, - theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, - y2Offset=Undefined, yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'circle' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - minBandSize=minBandSize, opacity=opacity, order=order, orient=orient, - 
outerRadius=outerRadius, padAngle=padAngle, point=point, radius=radius, - radius2=radius2, radius2Offset=radius2Offset, radiusOffset=radiusOffset, - shape=shape, size=size, smooth=smooth, stroke=stroke, strokeCap=strokeCap, - strokeDash=strokeDash, strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="circle", **kwds) - else: - copy.mark = "circle" - return copy - - def mark_square(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, minBandSize=Undefined, opacity=Undefined, order=Undefined, - orient=Undefined, outerRadius=Undefined, padAngle=Undefined, point=Undefined, - radius=Undefined, radius2=Undefined, radius2Offset=Undefined, - radiusOffset=Undefined, shape=Undefined, size=Undefined, smooth=Undefined, - stroke=Undefined, strokeCap=Undefined, strokeDash=Undefined, - strokeDashOffset=Undefined, strokeJoin=Undefined, strokeMiterLimit=Undefined, - strokeOffset=Undefined, strokeOpacity=Undefined, strokeWidth=Undefined, - style=Undefined, tension=Undefined, text=Undefined, theta=Undefined, - theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, - y2Offset=Undefined, yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'square' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, 
fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - minBandSize=minBandSize, opacity=opacity, order=order, orient=orient, - outerRadius=outerRadius, padAngle=padAngle, point=point, radius=radius, - radius2=radius2, radius2Offset=radius2Offset, radiusOffset=radiusOffset, - shape=shape, size=size, smooth=smooth, stroke=stroke, strokeCap=strokeCap, - strokeDash=strokeDash, strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="square", **kwds) - else: - copy.mark = "square" - return copy - - def mark_geoshape(self, align=Undefined, angle=Undefined, aria=Undefined, ariaRole=Undefined, - ariaRoleDescription=Undefined, aspect=Undefined, bandSize=Undefined, - baseline=Undefined, binSpacing=Undefined, blend=Undefined, clip=Undefined, - color=Undefined, continuousBandSize=Undefined, cornerRadius=Undefined, - cornerRadiusBottomLeft=Undefined, cornerRadiusBottomRight=Undefined, - cornerRadiusEnd=Undefined, cornerRadiusTopLeft=Undefined, - cornerRadiusTopRight=Undefined, cursor=Undefined, description=Undefined, - dir=Undefined, discreteBandSize=Undefined, dx=Undefined, dy=Undefined, - ellipsis=Undefined, fill=Undefined, fillOpacity=Undefined, filled=Undefined, - font=Undefined, fontSize=Undefined, fontStyle=Undefined, fontWeight=Undefined, - height=Undefined, href=Undefined, innerRadius=Undefined, interpolate=Undefined, - invalid=Undefined, limit=Undefined, line=Undefined, lineBreak=Undefined, - lineHeight=Undefined, minBandSize=Undefined, opacity=Undefined, order=Undefined, - orient=Undefined, outerRadius=Undefined, padAngle=Undefined, point=Undefined, - radius=Undefined, radius2=Undefined, radius2Offset=Undefined, - radiusOffset=Undefined, shape=Undefined, size=Undefined, smooth=Undefined, - stroke=Undefined, strokeCap=Undefined, strokeDash=Undefined, - strokeDashOffset=Undefined, strokeJoin=Undefined, strokeMiterLimit=Undefined, - strokeOffset=Undefined, strokeOpacity=Undefined, strokeWidth=Undefined, - style=Undefined, tension=Undefined, text=Undefined, theta=Undefined, - theta2=Undefined, theta2Offset=Undefined, thetaOffset=Undefined, - thickness=Undefined, timeUnitBandPosition=Undefined, timeUnitBandSize=Undefined, - tooltip=Undefined, url=Undefined, width=Undefined, x=Undefined, x2=Undefined, - x2Offset=Undefined, xOffset=Undefined, y=Undefined, y2=Undefined, - y2Offset=Undefined, yOffset=Undefined, **kwds) -> Self: - """Set the chart's mark to 'geoshape' (see :class:`MarkDef`) - """ - kwds = dict(align=align, angle=angle, aria=aria, ariaRole=ariaRole, - ariaRoleDescription=ariaRoleDescription, aspect=aspect, bandSize=bandSize, - baseline=baseline, binSpacing=binSpacing, blend=blend, clip=clip, color=color, - continuousBandSize=continuousBandSize, 
cornerRadius=cornerRadius, - cornerRadiusBottomLeft=cornerRadiusBottomLeft, - cornerRadiusBottomRight=cornerRadiusBottomRight, cornerRadiusEnd=cornerRadiusEnd, - cornerRadiusTopLeft=cornerRadiusTopLeft, cornerRadiusTopRight=cornerRadiusTopRight, - cursor=cursor, description=description, dir=dir, discreteBandSize=discreteBandSize, - dx=dx, dy=dy, ellipsis=ellipsis, fill=fill, fillOpacity=fillOpacity, filled=filled, - font=font, fontSize=fontSize, fontStyle=fontStyle, fontWeight=fontWeight, - height=height, href=href, innerRadius=innerRadius, interpolate=interpolate, - invalid=invalid, limit=limit, line=line, lineBreak=lineBreak, lineHeight=lineHeight, - minBandSize=minBandSize, opacity=opacity, order=order, orient=orient, - outerRadius=outerRadius, padAngle=padAngle, point=point, radius=radius, - radius2=radius2, radius2Offset=radius2Offset, radiusOffset=radiusOffset, - shape=shape, size=size, smooth=smooth, stroke=stroke, strokeCap=strokeCap, - strokeDash=strokeDash, strokeDashOffset=strokeDashOffset, strokeJoin=strokeJoin, - strokeMiterLimit=strokeMiterLimit, strokeOffset=strokeOffset, - strokeOpacity=strokeOpacity, strokeWidth=strokeWidth, style=style, tension=tension, - text=text, theta=theta, theta2=theta2, theta2Offset=theta2Offset, - thetaOffset=thetaOffset, thickness=thickness, - timeUnitBandPosition=timeUnitBandPosition, timeUnitBandSize=timeUnitBandSize, - tooltip=tooltip, url=url, width=width, x=x, x2=x2, x2Offset=x2Offset, - xOffset=xOffset, y=y, y2=y2, y2Offset=y2Offset, yOffset=yOffset, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.MarkDef(type="geoshape", **kwds) - else: - copy.mark = "geoshape" - return copy - - def mark_boxplot(self, box=Undefined, clip=Undefined, color=Undefined, extent=Undefined, - invalid=Undefined, median=Undefined, opacity=Undefined, orient=Undefined, - outliers=Undefined, rule=Undefined, size=Undefined, ticks=Undefined, **kwds) -> Self: - """Set the chart's mark to 'boxplot' (see :class:`BoxPlotDef`) - """ - kwds = dict(box=box, clip=clip, color=color, extent=extent, invalid=invalid, median=median, - opacity=opacity, orient=orient, outliers=outliers, rule=rule, size=size, - ticks=ticks, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.BoxPlotDef(type="boxplot", **kwds) - else: - copy.mark = "boxplot" - return copy - - def mark_errorbar(self, clip=Undefined, color=Undefined, extent=Undefined, opacity=Undefined, - orient=Undefined, rule=Undefined, size=Undefined, thickness=Undefined, - ticks=Undefined, **kwds) -> Self: - """Set the chart's mark to 'errorbar' (see :class:`ErrorBarDef`) - """ - kwds = dict(clip=clip, color=color, extent=extent, opacity=opacity, orient=orient, rule=rule, - size=size, thickness=thickness, ticks=ticks, **kwds) - copy = self.copy(deep=False) - if any(val is not Undefined for val in kwds.values()): - copy.mark = core.ErrorBarDef(type="errorbar", **kwds) - else: - copy.mark = "errorbar" - return copy - - def mark_errorband(self, band=Undefined, borders=Undefined, clip=Undefined, color=Undefined, - extent=Undefined, interpolate=Undefined, opacity=Undefined, orient=Undefined, - tension=Undefined, **kwds) -> Self: - """Set the chart's mark to 'errorband' (see :class:`ErrorBandDef`) - """ - kwds = dict(band=band, borders=borders, clip=clip, color=color, extent=extent, - interpolate=interpolate, opacity=opacity, orient=orient, tension=tension, **kwds) - copy = self.copy(deep=False) - if any(val is 
not Undefined for val in kwds.values()): - copy.mark = core.ErrorBandDef(type="errorband", **kwds) - else: - copy.mark = "errorband" - return copy - - -class ConfigMethodMixin: - """A mixin class that defines config methods""" - - @use_signature(core.Config) - def configure(self, *args, **kwargs) -> Self: - copy = self.copy(deep=False) - copy.config = core.Config(*args, **kwargs) - return copy - - @use_signature(core.RectConfig) - def configure_arc(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["arc"] = core.RectConfig(*args, **kwargs) - return copy - - @use_signature(core.AreaConfig) - def configure_area(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["area"] = core.AreaConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axis(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axis"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisBand(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisBand"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisBottom(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisBottom"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisDiscrete(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisDiscrete"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisLeft(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisLeft"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisPoint(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisPoint"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisQuantitative(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisQuantitative"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisRight(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisRight"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisTemporal(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisTemporal"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisTop(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisTop"] = 
core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisX(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisX"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisXBand(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisXBand"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisXDiscrete(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisXDiscrete"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisXPoint(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisXPoint"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisXQuantitative(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisXQuantitative"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisXTemporal(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisXTemporal"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisY(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisY"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisYBand(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisYBand"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisYDiscrete(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisYDiscrete"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisYPoint(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisYPoint"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisYQuantitative(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisYQuantitative"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.AxisConfig) - def configure_axisYTemporal(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["axisYTemporal"] = core.AxisConfig(*args, **kwargs) - return copy - - @use_signature(core.BarConfig) - def configure_bar(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["bar"] = core.BarConfig(*args, **kwargs) - return 
copy - - @use_signature(core.BoxPlotConfig) - def configure_boxplot(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["boxplot"] = core.BoxPlotConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_circle(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["circle"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.CompositionConfig) - def configure_concat(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["concat"] = core.CompositionConfig(*args, **kwargs) - return copy - - @use_signature(core.ErrorBandConfig) - def configure_errorband(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["errorband"] = core.ErrorBandConfig(*args, **kwargs) - return copy - - @use_signature(core.ErrorBarConfig) - def configure_errorbar(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["errorbar"] = core.ErrorBarConfig(*args, **kwargs) - return copy - - @use_signature(core.CompositionConfig) - def configure_facet(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["facet"] = core.CompositionConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_geoshape(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["geoshape"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.HeaderConfig) - def configure_header(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["header"] = core.HeaderConfig(*args, **kwargs) - return copy - - @use_signature(core.HeaderConfig) - def configure_headerColumn(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["headerColumn"] = core.HeaderConfig(*args, **kwargs) - return copy - - @use_signature(core.HeaderConfig) - def configure_headerFacet(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["headerFacet"] = core.HeaderConfig(*args, **kwargs) - return copy - - @use_signature(core.HeaderConfig) - def configure_headerRow(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["headerRow"] = core.HeaderConfig(*args, **kwargs) - return copy - - @use_signature(core.RectConfig) - def configure_image(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["image"] = core.RectConfig(*args, **kwargs) - return copy - - @use_signature(core.LegendConfig) - def configure_legend(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["legend"] = core.LegendConfig(*args, **kwargs) - return copy - - @use_signature(core.LineConfig) - def 
configure_line(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["line"] = core.LineConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_mark(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["mark"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_point(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["point"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.ProjectionConfig) - def configure_projection(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["projection"] = core.ProjectionConfig(*args, **kwargs) - return copy - - @use_signature(core.RangeConfig) - def configure_range(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["range"] = core.RangeConfig(*args, **kwargs) - return copy - - @use_signature(core.RectConfig) - def configure_rect(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["rect"] = core.RectConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_rule(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["rule"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.ScaleConfig) - def configure_scale(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["scale"] = core.ScaleConfig(*args, **kwargs) - return copy - - @use_signature(core.SelectionConfig) - def configure_selection(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["selection"] = core.SelectionConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_square(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["square"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.MarkConfig) - def configure_text(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["text"] = core.MarkConfig(*args, **kwargs) - return copy - - @use_signature(core.TickConfig) - def configure_tick(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["tick"] = core.TickConfig(*args, **kwargs) - return copy - - @use_signature(core.TitleConfig) - def configure_title(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["title"] = core.TitleConfig(*args, **kwargs) - return copy - - @use_signature(core.FormatConfig) - def configure_tooltipFormat(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = 
core.Config() - copy.config["tooltipFormat"] = core.FormatConfig(*args, **kwargs) - return copy - - @use_signature(core.LineConfig) - def configure_trail(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["trail"] = core.LineConfig(*args, **kwargs) - return copy - - @use_signature(core.ViewConfig) - def configure_view(self, *args, **kwargs) -> Self: - copy = self.copy(deep=['config']) - if copy.config is Undefined: - copy.config = core.Config() - copy.config["view"] = core.ViewConfig(*args, **kwargs) - return copy \ No newline at end of file diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/dateutil/rrule.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/dateutil/rrule.py deleted file mode 100644 index b3203393c61203c9c6f12db7a857aee89be85e5c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/dateutil/rrule.py +++ /dev/null @@ -1,1737 +0,0 @@ -# -*- coding: utf-8 -*- -""" -The rrule module offers a small, complete, and very fast, implementation of -the recurrence rules documented in the -`iCalendar RFC `_, -including support for caching of results. -""" -import calendar -import datetime -import heapq -import itertools -import re -import sys -from functools import wraps -# For warning about deprecation of until and count -from warnings import warn - -from six import advance_iterator, integer_types - -from six.moves import _thread, range - -from ._common import weekday as weekdaybase - -try: - from math import gcd -except ImportError: - from fractions import gcd - -__all__ = ["rrule", "rruleset", "rrulestr", - "YEARLY", "MONTHLY", "WEEKLY", "DAILY", - "HOURLY", "MINUTELY", "SECONDLY", - "MO", "TU", "WE", "TH", "FR", "SA", "SU"] - -# Every mask is 7 days longer to handle cross-year weekly periods. -M366MASK = tuple([1]*31+[2]*29+[3]*31+[4]*30+[5]*31+[6]*30 + - [7]*31+[8]*31+[9]*30+[10]*31+[11]*30+[12]*31+[1]*7) -M365MASK = list(M366MASK) -M29, M30, M31 = list(range(1, 30)), list(range(1, 31)), list(range(1, 32)) -MDAY366MASK = tuple(M31+M29+M31+M30+M31+M30+M31+M31+M30+M31+M30+M31+M31[:7]) -MDAY365MASK = list(MDAY366MASK) -M29, M30, M31 = list(range(-29, 0)), list(range(-30, 0)), list(range(-31, 0)) -NMDAY366MASK = tuple(M31+M29+M31+M30+M31+M30+M31+M31+M30+M31+M30+M31+M31[:7]) -NMDAY365MASK = list(NMDAY366MASK) -M366RANGE = (0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366) -M365RANGE = (0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365) -WDAYMASK = [0, 1, 2, 3, 4, 5, 6]*55 -del M29, M30, M31, M365MASK[59], MDAY365MASK[59], NMDAY365MASK[31] -MDAY365MASK = tuple(MDAY365MASK) -M365MASK = tuple(M365MASK) - -FREQNAMES = ['YEARLY', 'MONTHLY', 'WEEKLY', 'DAILY', 'HOURLY', 'MINUTELY', 'SECONDLY'] - -(YEARLY, - MONTHLY, - WEEKLY, - DAILY, - HOURLY, - MINUTELY, - SECONDLY) = list(range(7)) - -# Imported on demand. -easter = None -parser = None - - -class weekday(weekdaybase): - """ - This version of weekday does not allow n = 0. - """ - def __init__(self, wkday, n=None): - if n == 0: - raise ValueError("Can't create weekday with n==0") - - super(weekday, self).__init__(wkday, n) - - -MO, TU, WE, TH, FR, SA, SU = weekdays = tuple(weekday(x) for x in range(7)) - - -def _invalidates_cache(f): - """ - Decorator for rruleset methods which may invalidate the - cached length. 
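    A minimal sketch of the effect, assuming the ``rruleset`` class defined
    later in this module (the docstring above says rruleset methods use this
    decorator; the specific methods shown here are an assumption)::

        from datetime import datetime
        from dateutil.rrule import rrule, rruleset, DAILY

        rs = rruleset(cache=True)
        rs.rrule(rrule(DAILY, count=5, dtstart=datetime(2020, 1, 1)))
        assert rs.count() == 5            # first call iterates the set and fills the cache
        rs.exdate(datetime(2020, 1, 3))   # mutating call: the cached length is invalidated
        assert rs.count() == 4            # recomputed on the next full iteration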
- """ - @wraps(f) - def inner_func(self, *args, **kwargs): - rv = f(self, *args, **kwargs) - self._invalidate_cache() - return rv - - return inner_func - - -class rrulebase(object): - def __init__(self, cache=False): - if cache: - self._cache = [] - self._cache_lock = _thread.allocate_lock() - self._invalidate_cache() - else: - self._cache = None - self._cache_complete = False - self._len = None - - def __iter__(self): - if self._cache_complete: - return iter(self._cache) - elif self._cache is None: - return self._iter() - else: - return self._iter_cached() - - def _invalidate_cache(self): - if self._cache is not None: - self._cache = [] - self._cache_complete = False - self._cache_gen = self._iter() - - if self._cache_lock.locked(): - self._cache_lock.release() - - self._len = None - - def _iter_cached(self): - i = 0 - gen = self._cache_gen - cache = self._cache - acquire = self._cache_lock.acquire - release = self._cache_lock.release - while gen: - if i == len(cache): - acquire() - if self._cache_complete: - break - try: - for j in range(10): - cache.append(advance_iterator(gen)) - except StopIteration: - self._cache_gen = gen = None - self._cache_complete = True - break - release() - yield cache[i] - i += 1 - while i < self._len: - yield cache[i] - i += 1 - - def __getitem__(self, item): - if self._cache_complete: - return self._cache[item] - elif isinstance(item, slice): - if item.step and item.step < 0: - return list(iter(self))[item] - else: - return list(itertools.islice(self, - item.start or 0, - item.stop or sys.maxsize, - item.step or 1)) - elif item >= 0: - gen = iter(self) - try: - for i in range(item+1): - res = advance_iterator(gen) - except StopIteration: - raise IndexError - return res - else: - return list(iter(self))[item] - - def __contains__(self, item): - if self._cache_complete: - return item in self._cache - else: - for i in self: - if i == item: - return True - elif i > item: - return False - return False - - # __len__() introduces a large performance penalty. - def count(self): - """ Returns the number of recurrences in this set. It will have go - trough the whole recurrence, if this hasn't been done before. """ - if self._len is None: - for x in self: - pass - return self._len - - def before(self, dt, inc=False): - """ Returns the last recurrence before the given datetime instance. The - inc keyword defines what happens if dt is an occurrence. With - inc=True, if dt itself is an occurrence, it will be returned. """ - if self._cache_complete: - gen = self._cache - else: - gen = self - last = None - if inc: - for i in gen: - if i > dt: - break - last = i - else: - for i in gen: - if i >= dt: - break - last = i - return last - - def after(self, dt, inc=False): - """ Returns the first recurrence after the given datetime instance. The - inc keyword defines what happens if dt is an occurrence. With - inc=True, if dt itself is an occurrence, it will be returned. """ - if self._cache_complete: - gen = self._cache - else: - gen = self - if inc: - for i in gen: - if i >= dt: - return i - else: - for i in gen: - if i > dt: - return i - return None - - def xafter(self, dt, count=None, inc=False): - """ - Generator which yields up to `count` recurrences after the given - datetime instance, equivalent to `after`. - - :param dt: - The datetime at which to start generating recurrences. - - :param count: - The maximum number of recurrences to generate. If `None` (default), - dates are generated until the recurrence rule is exhausted. 
- - :param inc: - If `dt` is an instance of the rule and `inc` is `True`, it is - included in the output. - - :yields: Yields a sequence of `datetime` objects. - """ - - if self._cache_complete: - gen = self._cache - else: - gen = self - - # Select the comparison function - if inc: - comp = lambda dc, dtc: dc >= dtc - else: - comp = lambda dc, dtc: dc > dtc - - # Generate dates - n = 0 - for d in gen: - if comp(d, dt): - if count is not None: - n += 1 - if n > count: - break - - yield d - - def between(self, after, before, inc=False, count=1): - """ Returns all the occurrences of the rrule between after and before. - The inc keyword defines what happens if after and/or before are - themselves occurrences. With inc=True, they will be included in the - list, if they are found in the recurrence set. """ - if self._cache_complete: - gen = self._cache - else: - gen = self - started = False - l = [] - if inc: - for i in gen: - if i > before: - break - elif not started: - if i >= after: - started = True - l.append(i) - else: - l.append(i) - else: - for i in gen: - if i >= before: - break - elif not started: - if i > after: - started = True - l.append(i) - else: - l.append(i) - return l - - -class rrule(rrulebase): - """ - That's the base of the rrule operation. It accepts all the keywords - defined in the RFC as its constructor parameters (except byday, - which was renamed to byweekday) and more. The constructor prototype is:: - - rrule(freq) - - Where freq must be one of YEARLY, MONTHLY, WEEKLY, DAILY, HOURLY, MINUTELY, - or SECONDLY. - - .. note:: - Per RFC section 3.3.10, recurrence instances falling on invalid dates - and times are ignored rather than coerced: - - Recurrence rules may generate recurrence instances with an invalid - date (e.g., February 30) or nonexistent local time (e.g., 1:30 AM - on a day where the local time is moved forward by an hour at 1:00 - AM). Such recurrence instances MUST be ignored and MUST NOT be - counted as part of the recurrence set. - - This can lead to possibly surprising behavior when, for example, the - start date occurs at the end of the month: - - >>> from dateutil.rrule import rrule, MONTHLY - >>> from datetime import datetime - >>> start_date = datetime(2014, 12, 31) - >>> list(rrule(freq=MONTHLY, count=4, dtstart=start_date)) - ... # doctest: +NORMALIZE_WHITESPACE - [datetime.datetime(2014, 12, 31, 0, 0), - datetime.datetime(2015, 1, 31, 0, 0), - datetime.datetime(2015, 3, 31, 0, 0), - datetime.datetime(2015, 5, 31, 0, 0)] - - Additionally, it supports the following keyword arguments: - - :param dtstart: - The recurrence start. Besides being the base for the recurrence, - missing parameters in the final recurrence instances will also be - extracted from this date. If not given, datetime.now() will be used - instead. - :param interval: - The interval between each freq iteration. For example, when using - YEARLY, an interval of 2 means once every two years, but with HOURLY, - it means once every two hours. The default interval is 1. - :param wkst: - The week start day. Must be one of the MO, TU, WE constants, or an - integer, specifying the first day of the week. This will affect - recurrences based on weekly periods. The default week start is got - from calendar.firstweekday(), and may be modified by - calendar.setfirstweekday(). - :param count: - If given, this determines how many occurrences will be generated. - - .. 
note:: - As of version 2.5.0, the use of the keyword ``until`` in conjunction - with ``count`` is deprecated, to make sure ``dateutil`` is fully - compliant with `RFC-5545 Sec. 3.3.10 `_. Therefore, ``until`` and ``count`` - **must not** occur in the same call to ``rrule``. - :param until: - If given, this must be a datetime instance specifying the upper-bound - limit of the recurrence. The last recurrence in the rule is the greatest - datetime that is less than or equal to the value specified in the - ``until`` parameter. - - .. note:: - As of version 2.5.0, the use of the keyword ``until`` in conjunction - with ``count`` is deprecated, to make sure ``dateutil`` is fully - compliant with `RFC-5545 Sec. 3.3.10 `_. Therefore, ``until`` and ``count`` - **must not** occur in the same call to ``rrule``. - :param bysetpos: - If given, it must be either an integer, or a sequence of integers, - positive or negative. Each given integer will specify an occurrence - number, corresponding to the nth occurrence of the rule inside the - frequency period. For example, a bysetpos of -1 if combined with a - MONTHLY frequency, and a byweekday of (MO, TU, WE, TH, FR), will - result in the last work day of every month. - :param bymonth: - If given, it must be either an integer, or a sequence of integers, - meaning the months to apply the recurrence to. - :param bymonthday: - If given, it must be either an integer, or a sequence of integers, - meaning the month days to apply the recurrence to. - :param byyearday: - If given, it must be either an integer, or a sequence of integers, - meaning the year days to apply the recurrence to. - :param byeaster: - If given, it must be either an integer, or a sequence of integers, - positive or negative. Each integer will define an offset from the - Easter Sunday. Passing the offset 0 to byeaster will yield the Easter - Sunday itself. This is an extension to the RFC specification. - :param byweekno: - If given, it must be either an integer, or a sequence of integers, - meaning the week numbers to apply the recurrence to. Week numbers - have the meaning described in ISO8601, that is, the first week of - the year is that containing at least four days of the new year. - :param byweekday: - If given, it must be either an integer (0 == MO), a sequence of - integers, one of the weekday constants (MO, TU, etc), or a sequence - of these constants. When given, these variables will define the - weekdays where the recurrence will be applied. It's also possible to - use an argument n for the weekday instances, which will mean the nth - occurrence of this weekday in the period. For example, with MONTHLY, - or with YEARLY and BYMONTH, using FR(+1) in byweekday will specify the - first friday of the month where the recurrence happens. Notice that in - the RFC documentation, this is specified as BYDAY, but was renamed to - avoid the ambiguity of that keyword. - :param byhour: - If given, it must be either an integer, or a sequence of integers, - meaning the hours to apply the recurrence to. - :param byminute: - If given, it must be either an integer, or a sequence of integers, - meaning the minutes to apply the recurrence to. - :param bysecond: - If given, it must be either an integer, or a sequence of integers, - meaning the seconds to apply the recurrence to. - :param cache: - If given, it must be a boolean value specifying to enable or disable - caching of results. If you will use the same rrule instance multiple - times, enabling caching will improve the performance considerably. 
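    A short example combining the keywords above, selecting the last Friday
    of each month via an offset weekday instance (the dates are purely
    illustrative):

        >>> from dateutil.rrule import rrule, MONTHLY, FR
        >>> from datetime import datetime
        >>> list(rrule(MONTHLY, count=3, byweekday=FR(-1),
        ...            dtstart=datetime(2015, 1, 1)))
        ... # doctest: +NORMALIZE_WHITESPACE
        [datetime.datetime(2015, 1, 30, 0, 0),
         datetime.datetime(2015, 2, 27, 0, 0),
         datetime.datetime(2015, 3, 27, 0, 0)]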
- """ - def __init__(self, freq, dtstart=None, - interval=1, wkst=None, count=None, until=None, bysetpos=None, - bymonth=None, bymonthday=None, byyearday=None, byeaster=None, - byweekno=None, byweekday=None, - byhour=None, byminute=None, bysecond=None, - cache=False): - super(rrule, self).__init__(cache) - global easter - if not dtstart: - if until and until.tzinfo: - dtstart = datetime.datetime.now(tz=until.tzinfo).replace(microsecond=0) - else: - dtstart = datetime.datetime.now().replace(microsecond=0) - elif not isinstance(dtstart, datetime.datetime): - dtstart = datetime.datetime.fromordinal(dtstart.toordinal()) - else: - dtstart = dtstart.replace(microsecond=0) - self._dtstart = dtstart - self._tzinfo = dtstart.tzinfo - self._freq = freq - self._interval = interval - self._count = count - - # Cache the original byxxx rules, if they are provided, as the _byxxx - # attributes do not necessarily map to the inputs, and this can be - # a problem in generating the strings. Only store things if they've - # been supplied (the string retrieval will just use .get()) - self._original_rule = {} - - if until and not isinstance(until, datetime.datetime): - until = datetime.datetime.fromordinal(until.toordinal()) - self._until = until - - if self._dtstart and self._until: - if (self._dtstart.tzinfo is not None) != (self._until.tzinfo is not None): - # According to RFC5545 Section 3.3.10: - # https://tools.ietf.org/html/rfc5545#section-3.3.10 - # - # > If the "DTSTART" property is specified as a date with UTC - # > time or a date with local time and time zone reference, - # > then the UNTIL rule part MUST be specified as a date with - # > UTC time. - raise ValueError( - 'RRULE UNTIL values must be specified in UTC when DTSTART ' - 'is timezone-aware' - ) - - if count is not None and until: - warn("Using both 'count' and 'until' is inconsistent with RFC 5545" - " and has been deprecated in dateutil. 
Future versions will " - "raise an error.", DeprecationWarning) - - if wkst is None: - self._wkst = calendar.firstweekday() - elif isinstance(wkst, integer_types): - self._wkst = wkst - else: - self._wkst = wkst.weekday - - if bysetpos is None: - self._bysetpos = None - elif isinstance(bysetpos, integer_types): - if bysetpos == 0 or not (-366 <= bysetpos <= 366): - raise ValueError("bysetpos must be between 1 and 366, " - "or between -366 and -1") - self._bysetpos = (bysetpos,) - else: - self._bysetpos = tuple(bysetpos) - for pos in self._bysetpos: - if pos == 0 or not (-366 <= pos <= 366): - raise ValueError("bysetpos must be between 1 and 366, " - "or between -366 and -1") - - if self._bysetpos: - self._original_rule['bysetpos'] = self._bysetpos - - if (byweekno is None and byyearday is None and bymonthday is None and - byweekday is None and byeaster is None): - if freq == YEARLY: - if bymonth is None: - bymonth = dtstart.month - self._original_rule['bymonth'] = None - bymonthday = dtstart.day - self._original_rule['bymonthday'] = None - elif freq == MONTHLY: - bymonthday = dtstart.day - self._original_rule['bymonthday'] = None - elif freq == WEEKLY: - byweekday = dtstart.weekday() - self._original_rule['byweekday'] = None - - # bymonth - if bymonth is None: - self._bymonth = None - else: - if isinstance(bymonth, integer_types): - bymonth = (bymonth,) - - self._bymonth = tuple(sorted(set(bymonth))) - - if 'bymonth' not in self._original_rule: - self._original_rule['bymonth'] = self._bymonth - - # byyearday - if byyearday is None: - self._byyearday = None - else: - if isinstance(byyearday, integer_types): - byyearday = (byyearday,) - - self._byyearday = tuple(sorted(set(byyearday))) - self._original_rule['byyearday'] = self._byyearday - - # byeaster - if byeaster is not None: - if not easter: - from dateutil import easter - if isinstance(byeaster, integer_types): - self._byeaster = (byeaster,) - else: - self._byeaster = tuple(sorted(byeaster)) - - self._original_rule['byeaster'] = self._byeaster - else: - self._byeaster = None - - # bymonthday - if bymonthday is None: - self._bymonthday = () - self._bynmonthday = () - else: - if isinstance(bymonthday, integer_types): - bymonthday = (bymonthday,) - - bymonthday = set(bymonthday) # Ensure it's unique - - self._bymonthday = tuple(sorted(x for x in bymonthday if x > 0)) - self._bynmonthday = tuple(sorted(x for x in bymonthday if x < 0)) - - # Storing positive numbers first, then negative numbers - if 'bymonthday' not in self._original_rule: - self._original_rule['bymonthday'] = tuple( - itertools.chain(self._bymonthday, self._bynmonthday)) - - # byweekno - if byweekno is None: - self._byweekno = None - else: - if isinstance(byweekno, integer_types): - byweekno = (byweekno,) - - self._byweekno = tuple(sorted(set(byweekno))) - - self._original_rule['byweekno'] = self._byweekno - - # byweekday / bynweekday - if byweekday is None: - self._byweekday = None - self._bynweekday = None - else: - # If it's one of the valid non-sequence types, convert to a - # single-element sequence before the iterator that builds the - # byweekday set. 
- if isinstance(byweekday, integer_types) or hasattr(byweekday, "n"): - byweekday = (byweekday,) - - self._byweekday = set() - self._bynweekday = set() - for wday in byweekday: - if isinstance(wday, integer_types): - self._byweekday.add(wday) - elif not wday.n or freq > MONTHLY: - self._byweekday.add(wday.weekday) - else: - self._bynweekday.add((wday.weekday, wday.n)) - - if not self._byweekday: - self._byweekday = None - elif not self._bynweekday: - self._bynweekday = None - - if self._byweekday is not None: - self._byweekday = tuple(sorted(self._byweekday)) - orig_byweekday = [weekday(x) for x in self._byweekday] - else: - orig_byweekday = () - - if self._bynweekday is not None: - self._bynweekday = tuple(sorted(self._bynweekday)) - orig_bynweekday = [weekday(*x) for x in self._bynweekday] - else: - orig_bynweekday = () - - if 'byweekday' not in self._original_rule: - self._original_rule['byweekday'] = tuple(itertools.chain( - orig_byweekday, orig_bynweekday)) - - # byhour - if byhour is None: - if freq < HOURLY: - self._byhour = {dtstart.hour} - else: - self._byhour = None - else: - if isinstance(byhour, integer_types): - byhour = (byhour,) - - if freq == HOURLY: - self._byhour = self.__construct_byset(start=dtstart.hour, - byxxx=byhour, - base=24) - else: - self._byhour = set(byhour) - - self._byhour = tuple(sorted(self._byhour)) - self._original_rule['byhour'] = self._byhour - - # byminute - if byminute is None: - if freq < MINUTELY: - self._byminute = {dtstart.minute} - else: - self._byminute = None - else: - if isinstance(byminute, integer_types): - byminute = (byminute,) - - if freq == MINUTELY: - self._byminute = self.__construct_byset(start=dtstart.minute, - byxxx=byminute, - base=60) - else: - self._byminute = set(byminute) - - self._byminute = tuple(sorted(self._byminute)) - self._original_rule['byminute'] = self._byminute - - # bysecond - if bysecond is None: - if freq < SECONDLY: - self._bysecond = ((dtstart.second,)) - else: - self._bysecond = None - else: - if isinstance(bysecond, integer_types): - bysecond = (bysecond,) - - self._bysecond = set(bysecond) - - if freq == SECONDLY: - self._bysecond = self.__construct_byset(start=dtstart.second, - byxxx=bysecond, - base=60) - else: - self._bysecond = set(bysecond) - - self._bysecond = tuple(sorted(self._bysecond)) - self._original_rule['bysecond'] = self._bysecond - - if self._freq >= HOURLY: - self._timeset = None - else: - self._timeset = [] - for hour in self._byhour: - for minute in self._byminute: - for second in self._bysecond: - self._timeset.append( - datetime.time(hour, minute, second, - tzinfo=self._tzinfo)) - self._timeset.sort() - self._timeset = tuple(self._timeset) - - def __str__(self): - """ - Output a string that would generate this RRULE if passed to rrulestr. - This is mostly compatible with RFC5545, except for the - dateutil-specific extension BYEASTER. 
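        A quick sketch of the round trip through ``rrulestr`` (the exact
        string shown is indicative only and assumes the default week start)::

            from datetime import datetime
            from dateutil.rrule import rrule, rrulestr, WEEKLY, MO, WE

            rule = rrule(WEEKLY, count=4, byweekday=(MO, WE),
                         dtstart=datetime(2020, 1, 6))
            text = str(rule)
            # text reads roughly:
            #   DTSTART:20200106T000000
            #   RRULE:FREQ=WEEKLY;COUNT=4;BYDAY=MO,WE
            assert list(rrulestr(text)) == list(rule)  # the string regenerates the same occurrences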
- """ - - output = [] - h, m, s = [None] * 3 - if self._dtstart: - output.append(self._dtstart.strftime('DTSTART:%Y%m%dT%H%M%S')) - h, m, s = self._dtstart.timetuple()[3:6] - - parts = ['FREQ=' + FREQNAMES[self._freq]] - if self._interval != 1: - parts.append('INTERVAL=' + str(self._interval)) - - if self._wkst: - parts.append('WKST=' + repr(weekday(self._wkst))[0:2]) - - if self._count is not None: - parts.append('COUNT=' + str(self._count)) - - if self._until: - parts.append(self._until.strftime('UNTIL=%Y%m%dT%H%M%S')) - - if self._original_rule.get('byweekday') is not None: - # The str() method on weekday objects doesn't generate - # RFC5545-compliant strings, so we should modify that. - original_rule = dict(self._original_rule) - wday_strings = [] - for wday in original_rule['byweekday']: - if wday.n: - wday_strings.append('{n:+d}{wday}'.format( - n=wday.n, - wday=repr(wday)[0:2])) - else: - wday_strings.append(repr(wday)) - - original_rule['byweekday'] = wday_strings - else: - original_rule = self._original_rule - - partfmt = '{name}={vals}' - for name, key in [('BYSETPOS', 'bysetpos'), - ('BYMONTH', 'bymonth'), - ('BYMONTHDAY', 'bymonthday'), - ('BYYEARDAY', 'byyearday'), - ('BYWEEKNO', 'byweekno'), - ('BYDAY', 'byweekday'), - ('BYHOUR', 'byhour'), - ('BYMINUTE', 'byminute'), - ('BYSECOND', 'bysecond'), - ('BYEASTER', 'byeaster')]: - value = original_rule.get(key) - if value: - parts.append(partfmt.format(name=name, vals=(','.join(str(v) - for v in value)))) - - output.append('RRULE:' + ';'.join(parts)) - return '\n'.join(output) - - def replace(self, **kwargs): - """Return new rrule with same attributes except for those attributes given new - values by whichever keyword arguments are specified.""" - new_kwargs = {"interval": self._interval, - "count": self._count, - "dtstart": self._dtstart, - "freq": self._freq, - "until": self._until, - "wkst": self._wkst, - "cache": False if self._cache is None else True } - new_kwargs.update(self._original_rule) - new_kwargs.update(kwargs) - return rrule(**new_kwargs) - - def _iter(self): - year, month, day, hour, minute, second, weekday, yearday, _ = \ - self._dtstart.timetuple() - - # Some local variables to speed things up a bit - freq = self._freq - interval = self._interval - wkst = self._wkst - until = self._until - bymonth = self._bymonth - byweekno = self._byweekno - byyearday = self._byyearday - byweekday = self._byweekday - byeaster = self._byeaster - bymonthday = self._bymonthday - bynmonthday = self._bynmonthday - bysetpos = self._bysetpos - byhour = self._byhour - byminute = self._byminute - bysecond = self._bysecond - - ii = _iterinfo(self) - ii.rebuild(year, month) - - getdayset = {YEARLY: ii.ydayset, - MONTHLY: ii.mdayset, - WEEKLY: ii.wdayset, - DAILY: ii.ddayset, - HOURLY: ii.ddayset, - MINUTELY: ii.ddayset, - SECONDLY: ii.ddayset}[freq] - - if freq < HOURLY: - timeset = self._timeset - else: - gettimeset = {HOURLY: ii.htimeset, - MINUTELY: ii.mtimeset, - SECONDLY: ii.stimeset}[freq] - if ((freq >= HOURLY and - self._byhour and hour not in self._byhour) or - (freq >= MINUTELY and - self._byminute and minute not in self._byminute) or - (freq >= SECONDLY and - self._bysecond and second not in self._bysecond)): - timeset = () - else: - timeset = gettimeset(hour, minute, second) - - total = 0 - count = self._count - while True: - # Get dayset with the right frequency - dayset, start, end = getdayset(year, month, day) - - # Do the "hard" work ;-) - filtered = False - for i in dayset[start:end]: - if ((bymonth and ii.mmask[i] not in 
bymonth) or - (byweekno and not ii.wnomask[i]) or - (byweekday and ii.wdaymask[i] not in byweekday) or - (ii.nwdaymask and not ii.nwdaymask[i]) or - (byeaster and not ii.eastermask[i]) or - ((bymonthday or bynmonthday) and - ii.mdaymask[i] not in bymonthday and - ii.nmdaymask[i] not in bynmonthday) or - (byyearday and - ((i < ii.yearlen and i+1 not in byyearday and - -ii.yearlen+i not in byyearday) or - (i >= ii.yearlen and i+1-ii.yearlen not in byyearday and - -ii.nextyearlen+i-ii.yearlen not in byyearday)))): - dayset[i] = None - filtered = True - - # Output results - if bysetpos and timeset: - poslist = [] - for pos in bysetpos: - if pos < 0: - daypos, timepos = divmod(pos, len(timeset)) - else: - daypos, timepos = divmod(pos-1, len(timeset)) - try: - i = [x for x in dayset[start:end] - if x is not None][daypos] - time = timeset[timepos] - except IndexError: - pass - else: - date = datetime.date.fromordinal(ii.yearordinal+i) - res = datetime.datetime.combine(date, time) - if res not in poslist: - poslist.append(res) - poslist.sort() - for res in poslist: - if until and res > until: - self._len = total - return - elif res >= self._dtstart: - if count is not None: - count -= 1 - if count < 0: - self._len = total - return - total += 1 - yield res - else: - for i in dayset[start:end]: - if i is not None: - date = datetime.date.fromordinal(ii.yearordinal + i) - for time in timeset: - res = datetime.datetime.combine(date, time) - if until and res > until: - self._len = total - return - elif res >= self._dtstart: - if count is not None: - count -= 1 - if count < 0: - self._len = total - return - - total += 1 - yield res - - # Handle frequency and interval - fixday = False - if freq == YEARLY: - year += interval - if year > datetime.MAXYEAR: - self._len = total - return - ii.rebuild(year, month) - elif freq == MONTHLY: - month += interval - if month > 12: - div, mod = divmod(month, 12) - month = mod - year += div - if month == 0: - month = 12 - year -= 1 - if year > datetime.MAXYEAR: - self._len = total - return - ii.rebuild(year, month) - elif freq == WEEKLY: - if wkst > weekday: - day += -(weekday+1+(6-wkst))+self._interval*7 - else: - day += -(weekday-wkst)+self._interval*7 - weekday = wkst - fixday = True - elif freq == DAILY: - day += interval - fixday = True - elif freq == HOURLY: - if filtered: - # Jump to one iteration before next day - hour += ((23-hour)//interval)*interval - - if byhour: - ndays, hour = self.__mod_distance(value=hour, - byxxx=self._byhour, - base=24) - else: - ndays, hour = divmod(hour+interval, 24) - - if ndays: - day += ndays - fixday = True - - timeset = gettimeset(hour, minute, second) - elif freq == MINUTELY: - if filtered: - # Jump to one iteration before next day - minute += ((1439-(hour*60+minute))//interval)*interval - - valid = False - rep_rate = (24*60) - for j in range(rep_rate // gcd(interval, rep_rate)): - if byminute: - nhours, minute = \ - self.__mod_distance(value=minute, - byxxx=self._byminute, - base=60) - else: - nhours, minute = divmod(minute+interval, 60) - - div, hour = divmod(hour+nhours, 24) - if div: - day += div - fixday = True - filtered = False - - if not byhour or hour in byhour: - valid = True - break - - if not valid: - raise ValueError('Invalid combination of interval and ' + - 'byhour resulting in empty rule.') - - timeset = gettimeset(hour, minute, second) - elif freq == SECONDLY: - if filtered: - # Jump to one iteration before next day - second += (((86399 - (hour * 3600 + minute * 60 + second)) - // interval) * interval) - - 
rep_rate = (24 * 3600) - valid = False - for j in range(0, rep_rate // gcd(interval, rep_rate)): - if bysecond: - nminutes, second = \ - self.__mod_distance(value=second, - byxxx=self._bysecond, - base=60) - else: - nminutes, second = divmod(second+interval, 60) - - div, minute = divmod(minute+nminutes, 60) - if div: - hour += div - div, hour = divmod(hour, 24) - if div: - day += div - fixday = True - - if ((not byhour or hour in byhour) and - (not byminute or minute in byminute) and - (not bysecond or second in bysecond)): - valid = True - break - - if not valid: - raise ValueError('Invalid combination of interval, ' + - 'byhour and byminute resulting in empty' + - ' rule.') - - timeset = gettimeset(hour, minute, second) - - if fixday and day > 28: - daysinmonth = calendar.monthrange(year, month)[1] - if day > daysinmonth: - while day > daysinmonth: - day -= daysinmonth - month += 1 - if month == 13: - month = 1 - year += 1 - if year > datetime.MAXYEAR: - self._len = total - return - daysinmonth = calendar.monthrange(year, month)[1] - ii.rebuild(year, month) - - def __construct_byset(self, start, byxxx, base): - """ - If a `BYXXX` sequence is passed to the constructor at the same level as - `FREQ` (e.g. `FREQ=HOURLY,BYHOUR={2,4,7},INTERVAL=3`), there are some - specifications which cannot be reached given some starting conditions. - - This occurs whenever the interval is not coprime with the base of a - given unit and the difference between the starting position and the - ending position is not coprime with the greatest common denominator - between the interval and the base. For example, with a FREQ of hourly - starting at 17:00 and an interval of 4, the only valid values for - BYHOUR would be {21, 1, 5, 9, 13, 17}, because 4 and 24 are not - coprime. - - :param start: - Specifies the starting position. - :param byxxx: - An iterable containing the list of allowed values. - :param base: - The largest allowable value for the specified frequency (e.g. - 24 hours, 60 minutes). - - This does not preserve the type of the iterable, returning a set, since - the values should be unique and the order is irrelevant, this will - speed up later lookups. - - In the event of an empty set, raises a :exception:`ValueError`, as this - results in an empty rrule. - """ - - cset = set() - - # Support a single byxxx value. - if isinstance(byxxx, integer_types): - byxxx = (byxxx, ) - - for num in byxxx: - i_gcd = gcd(self._interval, base) - # Use divmod rather than % because we need to wrap negative nums. - if i_gcd == 1 or divmod(num - start, i_gcd)[1] == 0: - cset.add(num) - - if len(cset) == 0: - raise ValueError("Invalid rrule byxxx generates an empty set.") - - return cset - - def __mod_distance(self, value, byxxx, base): - """ - Calculates the next value in a sequence where the `FREQ` parameter is - specified along with a `BYXXX` parameter at the same "level" - (e.g. `HOURLY` specified with `BYHOUR`). - - :param value: - The old value of the component. - :param byxxx: - The `BYXXX` set, which should have been generated by - `rrule._construct_byset`, or something else which checks that a - valid rule is present. - :param base: - The largest allowable value for the specified frequency (e.g. - 24 hours, 60 minutes). - - If a valid value is not found after `base` iterations (the maximum - number before the sequence would start to repeat), this raises a - :exception:`ValueError`, as no valid values were found. 
- - This returns a tuple of `divmod(n*interval, base)`, where `n` is the - smallest number of `interval` repetitions until the next specified - value in `byxxx` is found. - """ - accumulator = 0 - for ii in range(1, base + 1): - # Using divmod() over % to account for negative intervals - div, value = divmod(value + self._interval, base) - accumulator += div - if value in byxxx: - return (accumulator, value) - - -class _iterinfo(object): - __slots__ = ["rrule", "lastyear", "lastmonth", - "yearlen", "nextyearlen", "yearordinal", "yearweekday", - "mmask", "mrange", "mdaymask", "nmdaymask", - "wdaymask", "wnomask", "nwdaymask", "eastermask"] - - def __init__(self, rrule): - for attr in self.__slots__: - setattr(self, attr, None) - self.rrule = rrule - - def rebuild(self, year, month): - # Every mask is 7 days longer to handle cross-year weekly periods. - rr = self.rrule - if year != self.lastyear: - self.yearlen = 365 + calendar.isleap(year) - self.nextyearlen = 365 + calendar.isleap(year + 1) - firstyday = datetime.date(year, 1, 1) - self.yearordinal = firstyday.toordinal() - self.yearweekday = firstyday.weekday() - - wday = datetime.date(year, 1, 1).weekday() - if self.yearlen == 365: - self.mmask = M365MASK - self.mdaymask = MDAY365MASK - self.nmdaymask = NMDAY365MASK - self.wdaymask = WDAYMASK[wday:] - self.mrange = M365RANGE - else: - self.mmask = M366MASK - self.mdaymask = MDAY366MASK - self.nmdaymask = NMDAY366MASK - self.wdaymask = WDAYMASK[wday:] - self.mrange = M366RANGE - - if not rr._byweekno: - self.wnomask = None - else: - self.wnomask = [0]*(self.yearlen+7) - # no1wkst = firstwkst = self.wdaymask.index(rr._wkst) - no1wkst = firstwkst = (7-self.yearweekday+rr._wkst) % 7 - if no1wkst >= 4: - no1wkst = 0 - # Number of days in the year, plus the days we got - # from last year. - wyearlen = self.yearlen+(self.yearweekday-rr._wkst) % 7 - else: - # Number of days in the year, minus the days we - # left in last year. - wyearlen = self.yearlen-no1wkst - div, mod = divmod(wyearlen, 7) - numweeks = div+mod//4 - for n in rr._byweekno: - if n < 0: - n += numweeks+1 - if not (0 < n <= numweeks): - continue - if n > 1: - i = no1wkst+(n-1)*7 - if no1wkst != firstwkst: - i -= 7-firstwkst - else: - i = no1wkst - for j in range(7): - self.wnomask[i] = 1 - i += 1 - if self.wdaymask[i] == rr._wkst: - break - if 1 in rr._byweekno: - # Check week number 1 of next year as well - # TODO: Check -numweeks for next year. - i = no1wkst+numweeks*7 - if no1wkst != firstwkst: - i -= 7-firstwkst - if i < self.yearlen: - # If week starts in next year, we - # don't care about it. - for j in range(7): - self.wnomask[i] = 1 - i += 1 - if self.wdaymask[i] == rr._wkst: - break - if no1wkst: - # Check last week number of last year as - # well. If no1wkst is 0, either the year - # started on week start, or week number 1 - # got days from last year, so there are no - # days from last year's last week number in - # this year. 
- if -1 not in rr._byweekno: - lyearweekday = datetime.date(year-1, 1, 1).weekday() - lno1wkst = (7-lyearweekday+rr._wkst) % 7 - lyearlen = 365+calendar.isleap(year-1) - if lno1wkst >= 4: - lno1wkst = 0 - lnumweeks = 52+(lyearlen + - (lyearweekday-rr._wkst) % 7) % 7//4 - else: - lnumweeks = 52+(self.yearlen-no1wkst) % 7//4 - else: - lnumweeks = -1 - if lnumweeks in rr._byweekno: - for i in range(no1wkst): - self.wnomask[i] = 1 - - if (rr._bynweekday and (month != self.lastmonth or - year != self.lastyear)): - ranges = [] - if rr._freq == YEARLY: - if rr._bymonth: - for month in rr._bymonth: - ranges.append(self.mrange[month-1:month+1]) - else: - ranges = [(0, self.yearlen)] - elif rr._freq == MONTHLY: - ranges = [self.mrange[month-1:month+1]] - if ranges: - # Weekly frequency won't get here, so we may not - # care about cross-year weekly periods. - self.nwdaymask = [0]*self.yearlen - for first, last in ranges: - last -= 1 - for wday, n in rr._bynweekday: - if n < 0: - i = last+(n+1)*7 - i -= (self.wdaymask[i]-wday) % 7 - else: - i = first+(n-1)*7 - i += (7-self.wdaymask[i]+wday) % 7 - if first <= i <= last: - self.nwdaymask[i] = 1 - - if rr._byeaster: - self.eastermask = [0]*(self.yearlen+7) - eyday = easter.easter(year).toordinal()-self.yearordinal - for offset in rr._byeaster: - self.eastermask[eyday+offset] = 1 - - self.lastyear = year - self.lastmonth = month - - def ydayset(self, year, month, day): - return list(range(self.yearlen)), 0, self.yearlen - - def mdayset(self, year, month, day): - dset = [None]*self.yearlen - start, end = self.mrange[month-1:month+1] - for i in range(start, end): - dset[i] = i - return dset, start, end - - def wdayset(self, year, month, day): - # We need to handle cross-year weeks here. - dset = [None]*(self.yearlen+7) - i = datetime.date(year, month, day).toordinal()-self.yearordinal - start = i - for j in range(7): - dset[i] = i - i += 1 - # if (not (0 <= i < self.yearlen) or - # self.wdaymask[i] == self.rrule._wkst): - # This will cross the year boundary, if necessary. - if self.wdaymask[i] == self.rrule._wkst: - break - return dset, start, i - - def ddayset(self, year, month, day): - dset = [None] * self.yearlen - i = datetime.date(year, month, day).toordinal() - self.yearordinal - dset[i] = i - return dset, i, i + 1 - - def htimeset(self, hour, minute, second): - tset = [] - rr = self.rrule - for minute in rr._byminute: - for second in rr._bysecond: - tset.append(datetime.time(hour, minute, second, - tzinfo=rr._tzinfo)) - tset.sort() - return tset - - def mtimeset(self, hour, minute, second): - tset = [] - rr = self.rrule - for second in rr._bysecond: - tset.append(datetime.time(hour, minute, second, tzinfo=rr._tzinfo)) - tset.sort() - return tset - - def stimeset(self, hour, minute, second): - return (datetime.time(hour, minute, second, - tzinfo=self.rrule._tzinfo),) - - -class rruleset(rrulebase): - """ The rruleset type allows more complex recurrence setups, mixing - multiple rules, dates, exclusion rules, and exclusion dates. The type - constructor takes the following keyword arguments: - - :param cache: If True, caching of results will be enabled, improving - performance of multiple queries considerably. 
""" - - class _genitem(object): - def __init__(self, genlist, gen): - try: - self.dt = advance_iterator(gen) - genlist.append(self) - except StopIteration: - pass - self.genlist = genlist - self.gen = gen - - def __next__(self): - try: - self.dt = advance_iterator(self.gen) - except StopIteration: - if self.genlist[0] is self: - heapq.heappop(self.genlist) - else: - self.genlist.remove(self) - heapq.heapify(self.genlist) - - next = __next__ - - def __lt__(self, other): - return self.dt < other.dt - - def __gt__(self, other): - return self.dt > other.dt - - def __eq__(self, other): - return self.dt == other.dt - - def __ne__(self, other): - return self.dt != other.dt - - def __init__(self, cache=False): - super(rruleset, self).__init__(cache) - self._rrule = [] - self._rdate = [] - self._exrule = [] - self._exdate = [] - - @_invalidates_cache - def rrule(self, rrule): - """ Include the given :py:class:`rrule` instance in the recurrence set - generation. """ - self._rrule.append(rrule) - - @_invalidates_cache - def rdate(self, rdate): - """ Include the given :py:class:`datetime` instance in the recurrence - set generation. """ - self._rdate.append(rdate) - - @_invalidates_cache - def exrule(self, exrule): - """ Include the given rrule instance in the recurrence set exclusion - list. Dates which are part of the given recurrence rules will not - be generated, even if some inclusive rrule or rdate matches them. - """ - self._exrule.append(exrule) - - @_invalidates_cache - def exdate(self, exdate): - """ Include the given datetime instance in the recurrence set - exclusion list. Dates included that way will not be generated, - even if some inclusive rrule or rdate matches them. """ - self._exdate.append(exdate) - - def _iter(self): - rlist = [] - self._rdate.sort() - self._genitem(rlist, iter(self._rdate)) - for gen in [iter(x) for x in self._rrule]: - self._genitem(rlist, gen) - exlist = [] - self._exdate.sort() - self._genitem(exlist, iter(self._exdate)) - for gen in [iter(x) for x in self._exrule]: - self._genitem(exlist, gen) - lastdt = None - total = 0 - heapq.heapify(rlist) - heapq.heapify(exlist) - while rlist: - ritem = rlist[0] - if not lastdt or lastdt != ritem.dt: - while exlist and exlist[0] < ritem: - exitem = exlist[0] - advance_iterator(exitem) - if exlist and exlist[0] is exitem: - heapq.heapreplace(exlist, exitem) - if not exlist or ritem != exlist[0]: - total += 1 - yield ritem.dt - lastdt = ritem.dt - advance_iterator(ritem) - if rlist and rlist[0] is ritem: - heapq.heapreplace(rlist, ritem) - self._len = total - - - - -class _rrulestr(object): - """ Parses a string representation of a recurrence rule or set of - recurrence rules. - - :param s: - Required, a string defining one or more recurrence rules. - - :param dtstart: - If given, used as the default recurrence start if not specified in the - rule string. - - :param cache: - If set ``True`` caching of results will be enabled, improving - performance of multiple queries considerably. - - :param unfold: - If set ``True`` indicates that a rule string is split over more - than one line and should be joined before processing. - - :param forceset: - If set ``True`` forces a :class:`dateutil.rrule.rruleset` to - be returned. - - :param compatible: - If set ``True`` forces ``unfold`` and ``forceset`` to be ``True``. - - :param ignoretz: - If set ``True``, time zones in parsed strings are ignored and a naive - :class:`datetime.datetime` object is returned. 
- - :param tzids: - If given, a callable or mapping used to retrieve a - :class:`datetime.tzinfo` from a string representation. - Defaults to :func:`dateutil.tz.gettz`. - - :param tzinfos: - Additional time zone names / aliases which may be present in a string - representation. See :func:`dateutil.parser.parse` for more - information. - - :return: - Returns a :class:`dateutil.rrule.rruleset` or - :class:`dateutil.rrule.rrule` - """ - - _freq_map = {"YEARLY": YEARLY, - "MONTHLY": MONTHLY, - "WEEKLY": WEEKLY, - "DAILY": DAILY, - "HOURLY": HOURLY, - "MINUTELY": MINUTELY, - "SECONDLY": SECONDLY} - - _weekday_map = {"MO": 0, "TU": 1, "WE": 2, "TH": 3, - "FR": 4, "SA": 5, "SU": 6} - - def _handle_int(self, rrkwargs, name, value, **kwargs): - rrkwargs[name.lower()] = int(value) - - def _handle_int_list(self, rrkwargs, name, value, **kwargs): - rrkwargs[name.lower()] = [int(x) for x in value.split(',')] - - _handle_INTERVAL = _handle_int - _handle_COUNT = _handle_int - _handle_BYSETPOS = _handle_int_list - _handle_BYMONTH = _handle_int_list - _handle_BYMONTHDAY = _handle_int_list - _handle_BYYEARDAY = _handle_int_list - _handle_BYEASTER = _handle_int_list - _handle_BYWEEKNO = _handle_int_list - _handle_BYHOUR = _handle_int_list - _handle_BYMINUTE = _handle_int_list - _handle_BYSECOND = _handle_int_list - - def _handle_FREQ(self, rrkwargs, name, value, **kwargs): - rrkwargs["freq"] = self._freq_map[value] - - def _handle_UNTIL(self, rrkwargs, name, value, **kwargs): - global parser - if not parser: - from dateutil import parser - try: - rrkwargs["until"] = parser.parse(value, - ignoretz=kwargs.get("ignoretz"), - tzinfos=kwargs.get("tzinfos")) - except ValueError: - raise ValueError("invalid until date") - - def _handle_WKST(self, rrkwargs, name, value, **kwargs): - rrkwargs["wkst"] = self._weekday_map[value] - - def _handle_BYWEEKDAY(self, rrkwargs, name, value, **kwargs): - """ - Two ways to specify this: +1MO or MO(+1) - """ - l = [] - for wday in value.split(','): - if '(' in wday: - # If it's of the form TH(+1), etc. - splt = wday.split('(') - w = splt[0] - n = int(splt[1][:-1]) - elif len(wday): - # If it's of the form +1MO - for i in range(len(wday)): - if wday[i] not in '+-0123456789': - break - n = wday[:i] or None - w = wday[i:] - if n: - n = int(n) - else: - raise ValueError("Invalid (empty) BYDAY specification.") - - l.append(weekdays[self._weekday_map[w]](n)) - rrkwargs["byweekday"] = l - - _handle_BYDAY = _handle_BYWEEKDAY - - def _parse_rfc_rrule(self, line, - dtstart=None, - cache=False, - ignoretz=False, - tzinfos=None): - if line.find(':') != -1: - name, value = line.split(':') - if name != "RRULE": - raise ValueError("unknown parameter name") - else: - value = line - rrkwargs = {} - for pair in value.split(';'): - name, value = pair.split('=') - name = name.upper() - value = value.upper() - try: - getattr(self, "_handle_"+name)(rrkwargs, name, value, - ignoretz=ignoretz, - tzinfos=tzinfos) - except AttributeError: - raise ValueError("unknown parameter '%s'" % name) - except (KeyError, ValueError): - raise ValueError("invalid '%s': %s" % (name, value)) - return rrule(dtstart=dtstart, cache=cache, **rrkwargs) - - def _parse_date_value(self, date_value, parms, rule_tzids, - ignoretz, tzids, tzinfos): - global parser - if not parser: - from dateutil import parser - - datevals = [] - value_found = False - TZID = None - - for parm in parms: - if parm.startswith("TZID="): - try: - tzkey = rule_tzids[parm.split('TZID=')[-1]] - except KeyError: - continue - if tzids is None: - from . 
import tz - tzlookup = tz.gettz - elif callable(tzids): - tzlookup = tzids - else: - tzlookup = getattr(tzids, 'get', None) - if tzlookup is None: - msg = ('tzids must be a callable, mapping, or None, ' - 'not %s' % tzids) - raise ValueError(msg) - - TZID = tzlookup(tzkey) - continue - - # RFC 5445 3.8.2.4: The VALUE parameter is optional, but may be found - # only once. - if parm not in {"VALUE=DATE-TIME", "VALUE=DATE"}: - raise ValueError("unsupported parm: " + parm) - else: - if value_found: - msg = ("Duplicate value parameter found in: " + parm) - raise ValueError(msg) - value_found = True - - for datestr in date_value.split(','): - date = parser.parse(datestr, ignoretz=ignoretz, tzinfos=tzinfos) - if TZID is not None: - if date.tzinfo is None: - date = date.replace(tzinfo=TZID) - else: - raise ValueError('DTSTART/EXDATE specifies multiple timezone') - datevals.append(date) - - return datevals - - def _parse_rfc(self, s, - dtstart=None, - cache=False, - unfold=False, - forceset=False, - compatible=False, - ignoretz=False, - tzids=None, - tzinfos=None): - global parser - if compatible: - forceset = True - unfold = True - - TZID_NAMES = dict(map( - lambda x: (x.upper(), x), - re.findall('TZID=(?P[^:]+):', s) - )) - s = s.upper() - if not s.strip(): - raise ValueError("empty string") - if unfold: - lines = s.splitlines() - i = 0 - while i < len(lines): - line = lines[i].rstrip() - if not line: - del lines[i] - elif i > 0 and line[0] == " ": - lines[i-1] += line[1:] - del lines[i] - else: - i += 1 - else: - lines = s.split() - if (not forceset and len(lines) == 1 and (s.find(':') == -1 or - s.startswith('RRULE:'))): - return self._parse_rfc_rrule(lines[0], cache=cache, - dtstart=dtstart, ignoretz=ignoretz, - tzinfos=tzinfos) - else: - rrulevals = [] - rdatevals = [] - exrulevals = [] - exdatevals = [] - for line in lines: - if not line: - continue - if line.find(':') == -1: - name = "RRULE" - value = line - else: - name, value = line.split(':', 1) - parms = name.split(';') - if not parms: - raise ValueError("empty property name") - name = parms[0] - parms = parms[1:] - if name == "RRULE": - for parm in parms: - raise ValueError("unsupported RRULE parm: "+parm) - rrulevals.append(value) - elif name == "RDATE": - for parm in parms: - if parm != "VALUE=DATE-TIME": - raise ValueError("unsupported RDATE parm: "+parm) - rdatevals.append(value) - elif name == "EXRULE": - for parm in parms: - raise ValueError("unsupported EXRULE parm: "+parm) - exrulevals.append(value) - elif name == "EXDATE": - exdatevals.extend( - self._parse_date_value(value, parms, - TZID_NAMES, ignoretz, - tzids, tzinfos) - ) - elif name == "DTSTART": - dtvals = self._parse_date_value(value, parms, TZID_NAMES, - ignoretz, tzids, tzinfos) - if len(dtvals) != 1: - raise ValueError("Multiple DTSTART values specified:" + - value) - dtstart = dtvals[0] - else: - raise ValueError("unsupported property: "+name) - if (forceset or len(rrulevals) > 1 or rdatevals - or exrulevals or exdatevals): - if not parser and (rdatevals or exdatevals): - from dateutil import parser - rset = rruleset(cache=cache) - for value in rrulevals: - rset.rrule(self._parse_rfc_rrule(value, dtstart=dtstart, - ignoretz=ignoretz, - tzinfos=tzinfos)) - for value in rdatevals: - for datestr in value.split(','): - rset.rdate(parser.parse(datestr, - ignoretz=ignoretz, - tzinfos=tzinfos)) - for value in exrulevals: - rset.exrule(self._parse_rfc_rrule(value, dtstart=dtstart, - ignoretz=ignoretz, - tzinfos=tzinfos)) - for value in exdatevals: - rset.exdate(value) - if 
compatible and dtstart: - rset.rdate(dtstart) - return rset - else: - return self._parse_rfc_rrule(rrulevals[0], - dtstart=dtstart, - cache=cache, - ignoretz=ignoretz, - tzinfos=tzinfos) - - def __call__(self, s, **kwargs): - return self._parse_rfc(s, **kwargs) - - -rrulestr = _rrulestr() - -# vim:ts=4:sw=4:et diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_cycles.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_cycles.py deleted file mode 100644 index 9bbb9bc9f19c41db8955cef310e1bea3dac448b7..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/matplotlib/tests/test_cycles.py +++ /dev/null @@ -1,170 +0,0 @@ -import contextlib -from io import StringIO - -import matplotlib as mpl -import matplotlib.pyplot as plt -import numpy as np -import pytest - -from cycler import cycler - - -def test_colorcycle_basic(): - fig, ax = plt.subplots() - ax.set_prop_cycle(cycler('color', ['r', 'g', 'y'])) - for _ in range(4): - ax.plot(range(10), range(10)) - assert [l.get_color() for l in ax.lines] == ['r', 'g', 'y', 'r'] - - -def test_marker_cycle(): - fig, ax = plt.subplots() - ax.set_prop_cycle(cycler('c', ['r', 'g', 'y']) + - cycler('marker', ['.', '*', 'x'])) - for _ in range(4): - ax.plot(range(10), range(10)) - assert [l.get_color() for l in ax.lines] == ['r', 'g', 'y', 'r'] - assert [l.get_marker() for l in ax.lines] == ['.', '*', 'x', '.'] - - -def test_marker_cycle_kwargs_arrays_iterators(): - fig, ax = plt.subplots() - ax.set_prop_cycle(c=np.array(['r', 'g', 'y']), - marker=iter(['.', '*', 'x'])) - for _ in range(4): - ax.plot(range(10), range(10)) - assert [l.get_color() for l in ax.lines] == ['r', 'g', 'y', 'r'] - assert [l.get_marker() for l in ax.lines] == ['.', '*', 'x', '.'] - - -def test_linestylecycle_basic(): - fig, ax = plt.subplots() - ax.set_prop_cycle(cycler('ls', ['-', '--', ':'])) - for _ in range(4): - ax.plot(range(10), range(10)) - assert [l.get_linestyle() for l in ax.lines] == ['-', '--', ':', '-'] - - -def test_fillcycle_basic(): - fig, ax = plt.subplots() - ax.set_prop_cycle(cycler('c', ['r', 'g', 'y']) + - cycler('hatch', ['xx', 'O', '|-']) + - cycler('linestyle', ['-', '--', ':'])) - for _ in range(4): - ax.fill(range(10), range(10)) - assert ([p.get_facecolor() for p in ax.patches] - == [mpl.colors.to_rgba(c) for c in ['r', 'g', 'y', 'r']]) - assert [p.get_hatch() for p in ax.patches] == ['xx', 'O', '|-', 'xx'] - assert [p.get_linestyle() for p in ax.patches] == ['-', '--', ':', '-'] - - -def test_fillcycle_ignore(): - fig, ax = plt.subplots() - ax.set_prop_cycle(cycler('color', ['r', 'g', 'y']) + - cycler('hatch', ['xx', 'O', '|-']) + - cycler('marker', ['.', '*', 'D'])) - t = range(10) - # Should not advance the cycler, even though there is an - # unspecified property in the cycler "marker". - # "marker" is not a Polygon property, and should be ignored. 
- ax.fill(t, t, 'r', hatch='xx') - # Allow the cycler to advance, but specify some properties - ax.fill(t, t, hatch='O') - ax.fill(t, t) - ax.fill(t, t) - assert ([p.get_facecolor() for p in ax.patches] - == [mpl.colors.to_rgba(c) for c in ['r', 'r', 'g', 'y']]) - assert [p.get_hatch() for p in ax.patches] == ['xx', 'O', 'O', '|-'] - - -def test_property_collision_plot(): - fig, ax = plt.subplots() - ax.set_prop_cycle('linewidth', [2, 4]) - t = range(10) - for c in range(1, 4): - ax.plot(t, t, lw=0.1) - ax.plot(t, t) - ax.plot(t, t) - assert [l.get_linewidth() for l in ax.lines] == [0.1, 0.1, 0.1, 2, 4] - - -def test_property_collision_fill(): - fig, ax = plt.subplots() - ax.set_prop_cycle(linewidth=[2, 3, 4, 5, 6], facecolor='bgcmy') - t = range(10) - for c in range(1, 4): - ax.fill(t, t, lw=0.1) - ax.fill(t, t) - ax.fill(t, t) - assert ([p.get_facecolor() for p in ax.patches] - == [mpl.colors.to_rgba(c) for c in 'bgcmy']) - assert [p.get_linewidth() for p in ax.patches] == [0.1, 0.1, 0.1, 5, 6] - - -def test_valid_input_forms(): - fig, ax = plt.subplots() - # These should not raise an error. - ax.set_prop_cycle(None) - ax.set_prop_cycle(cycler('linewidth', [1, 2])) - ax.set_prop_cycle('color', 'rgywkbcm') - ax.set_prop_cycle('lw', (1, 2)) - ax.set_prop_cycle('linewidth', [1, 2]) - ax.set_prop_cycle('linewidth', iter([1, 2])) - ax.set_prop_cycle('linewidth', np.array([1, 2])) - ax.set_prop_cycle('color', np.array([[1, 0, 0], - [0, 1, 0], - [0, 0, 1]])) - ax.set_prop_cycle('dashes', [[], [13, 2], [8, 3, 1, 3]]) - ax.set_prop_cycle(lw=[1, 2], color=['k', 'w'], ls=['-', '--']) - ax.set_prop_cycle(lw=np.array([1, 2]), - color=np.array(['k', 'w']), - ls=np.array(['-', '--'])) - - -def test_cycle_reset(): - fig, ax = plt.subplots() - prop0 = StringIO() - prop1 = StringIO() - prop2 = StringIO() - - with contextlib.redirect_stdout(prop0): - plt.getp(ax.plot([1, 2], label="label")[0]) - - ax.set_prop_cycle(linewidth=[10, 9, 4]) - with contextlib.redirect_stdout(prop1): - plt.getp(ax.plot([1, 2], label="label")[0]) - assert prop1.getvalue() != prop0.getvalue() - - ax.set_prop_cycle(None) - with contextlib.redirect_stdout(prop2): - plt.getp(ax.plot([1, 2], label="label")[0]) - assert prop2.getvalue() == prop0.getvalue() - - -def test_invalid_input_forms(): - fig, ax = plt.subplots() - - with pytest.raises((TypeError, ValueError)): - ax.set_prop_cycle(1) - with pytest.raises((TypeError, ValueError)): - ax.set_prop_cycle([1, 2]) - - with pytest.raises((TypeError, ValueError)): - ax.set_prop_cycle('color', 'fish') - - with pytest.raises((TypeError, ValueError)): - ax.set_prop_cycle('linewidth', 1) - with pytest.raises((TypeError, ValueError)): - ax.set_prop_cycle('linewidth', {1, 2}) - with pytest.raises((TypeError, ValueError)): - ax.set_prop_cycle(linewidth=1, color='r') - - with pytest.raises((TypeError, ValueError)): - ax.set_prop_cycle('foobar', [1, 2]) - with pytest.raises((TypeError, ValueError)): - ax.set_prop_cycle(foobar=[1, 2]) - - with pytest.raises((TypeError, ValueError)): - ax.set_prop_cycle(cycler(foobar=[1, 2])) - with pytest.raises(ValueError): - ax.set_prop_cycle(cycler(color='rgb', c='cmy')) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/exec_command.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/exec_command.py deleted file mode 100644 index a67453abf624c8b256f5613afdc7b7546957bc19..0000000000000000000000000000000000000000 --- 
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/exec_command.py +++ /dev/null @@ -1,315 +0,0 @@ -""" -exec_command - -Implements exec_command function that is (almost) equivalent to -commands.getstatusoutput function but on NT, DOS systems the -returned status is actually correct (though, the returned status -values may be different by a factor). In addition, exec_command -takes keyword arguments for (re-)defining environment variables. - -Provides functions: - - exec_command --- execute command in a specified directory and - in the modified environment. - find_executable --- locate a command using info from environment - variable PATH. Equivalent to posix `which` - command. - -Author: Pearu Peterson -Created: 11 January 2003 - -Requires: Python 2.x - -Successfully tested on: - -======== ============ ================================================= -os.name sys.platform comments -======== ============ ================================================= -posix linux2 Debian (sid) Linux, Python 2.1.3+, 2.2.3+, 2.3.3 - PyCrust 0.9.3, Idle 1.0.2 -posix linux2 Red Hat 9 Linux, Python 2.1.3, 2.2.2, 2.3.2 -posix sunos5 SunOS 5.9, Python 2.2, 2.3.2 -posix darwin Darwin 7.2.0, Python 2.3 -nt win32 Windows Me - Python 2.3(EE), Idle 1.0, PyCrust 0.7.2 - Python 2.1.1 Idle 0.8 -nt win32 Windows 98, Python 2.1.1. Idle 0.8 -nt win32 Cygwin 98-4.10, Python 2.1.1(MSC) - echo tests - fail i.e. redefining environment variables may - not work. FIXED: don't use cygwin echo! - Comment: also `cmd /c echo` will not work - but redefining environment variables do work. -posix cygwin Cygwin 98-4.10, Python 2.3.3(cygming special) -nt win32 Windows XP, Python 2.3.3 -======== ============ ================================================= - -Known bugs: - -* Tests, that send messages to stderr, fail when executed from MSYS prompt - because the messages are lost at some point. - -""" -__all__ = ['exec_command', 'find_executable'] - -import os -import sys -import subprocess -import locale -import warnings - -from numpy.distutils.misc_util import is_sequence, make_temp_file -from numpy.distutils import log - -def filepath_from_subprocess_output(output): - """ - Convert `bytes` in the encoding used by a subprocess into a filesystem-appropriate `str`. - - Inherited from `exec_command`, and possibly incorrect. - """ - mylocale = locale.getpreferredencoding(False) - if mylocale is None: - mylocale = 'ascii' - output = output.decode(mylocale, errors='replace') - output = output.replace('\r\n', '\n') - # Another historical oddity - if output[-1:] == '\n': - output = output[:-1] - return output - - -def forward_bytes_to_stdout(val): - """ - Forward bytes from a subprocess call to the console, without attempting to - decode them. - - The assumption is that the subprocess call already returned bytes in - a suitable encoding. 
- """ - if hasattr(sys.stdout, 'buffer'): - # use the underlying binary output if there is one - sys.stdout.buffer.write(val) - elif hasattr(sys.stdout, 'encoding'): - # round-trip the encoding if necessary - sys.stdout.write(val.decode(sys.stdout.encoding)) - else: - # make a best-guess at the encoding - sys.stdout.write(val.decode('utf8', errors='replace')) - - -def temp_file_name(): - # 2019-01-30, 1.17 - warnings.warn('temp_file_name is deprecated since NumPy v1.17, use ' - 'tempfile.mkstemp instead', DeprecationWarning, stacklevel=1) - fo, name = make_temp_file() - fo.close() - return name - -def get_pythonexe(): - pythonexe = sys.executable - if os.name in ['nt', 'dos']: - fdir, fn = os.path.split(pythonexe) - fn = fn.upper().replace('PYTHONW', 'PYTHON') - pythonexe = os.path.join(fdir, fn) - assert os.path.isfile(pythonexe), '%r is not a file' % (pythonexe,) - return pythonexe - -def find_executable(exe, path=None, _cache={}): - """Return full path of a executable or None. - - Symbolic links are not followed. - """ - key = exe, path - try: - return _cache[key] - except KeyError: - pass - log.debug('find_executable(%r)' % exe) - orig_exe = exe - - if path is None: - path = os.environ.get('PATH', os.defpath) - if os.name=='posix': - realpath = os.path.realpath - else: - realpath = lambda a:a - - if exe.startswith('"'): - exe = exe[1:-1] - - suffixes = [''] - if os.name in ['nt', 'dos', 'os2']: - fn, ext = os.path.splitext(exe) - extra_suffixes = ['.exe', '.com', '.bat'] - if ext.lower() not in extra_suffixes: - suffixes = extra_suffixes - - if os.path.isabs(exe): - paths = [''] - else: - paths = [ os.path.abspath(p) for p in path.split(os.pathsep) ] - - for path in paths: - fn = os.path.join(path, exe) - for s in suffixes: - f_ext = fn+s - if not os.path.islink(f_ext): - f_ext = realpath(f_ext) - if os.path.isfile(f_ext) and os.access(f_ext, os.X_OK): - log.info('Found executable %s' % f_ext) - _cache[key] = f_ext - return f_ext - - log.warn('Could not locate executable %s' % orig_exe) - return None - -############################################################ - -def _preserve_environment( names ): - log.debug('_preserve_environment(%r)' % (names)) - env = {name: os.environ.get(name) for name in names} - return env - -def _update_environment( **env ): - log.debug('_update_environment(...)') - for name, value in env.items(): - os.environ[name] = value or '' - -def exec_command(command, execute_in='', use_shell=None, use_tee=None, - _with_python = 1, **env ): - """ - Return (status,output) of executed command. - - .. deprecated:: 1.17 - Use subprocess.Popen instead - - Parameters - ---------- - command : str - A concatenated string of executable and arguments. - execute_in : str - Before running command ``cd execute_in`` and after ``cd -``. - use_shell : {bool, None}, optional - If True, execute ``sh -c command``. Default None (True) - use_tee : {bool, None}, optional - If True use tee. Default None (True) - - - Returns - ------- - res : str - Both stdout and stderr messages. - - Notes - ----- - On NT, DOS systems the returned status is correct for external commands. - Wild cards will not work for non-posix systems or when use_shell=0. 
- - """ - # 2019-01-30, 1.17 - warnings.warn('exec_command is deprecated since NumPy v1.17, use ' - 'subprocess.Popen instead', DeprecationWarning, stacklevel=1) - log.debug('exec_command(%r,%s)' % (command, - ','.join(['%s=%r'%kv for kv in env.items()]))) - - if use_tee is None: - use_tee = os.name=='posix' - if use_shell is None: - use_shell = os.name=='posix' - execute_in = os.path.abspath(execute_in) - oldcwd = os.path.abspath(os.getcwd()) - - if __name__[-12:] == 'exec_command': - exec_dir = os.path.dirname(os.path.abspath(__file__)) - elif os.path.isfile('exec_command.py'): - exec_dir = os.path.abspath('.') - else: - exec_dir = os.path.abspath(sys.argv[0]) - if os.path.isfile(exec_dir): - exec_dir = os.path.dirname(exec_dir) - - if oldcwd!=execute_in: - os.chdir(execute_in) - log.debug('New cwd: %s' % execute_in) - else: - log.debug('Retaining cwd: %s' % oldcwd) - - oldenv = _preserve_environment( list(env.keys()) ) - _update_environment( **env ) - - try: - st = _exec_command(command, - use_shell=use_shell, - use_tee=use_tee, - **env) - finally: - if oldcwd!=execute_in: - os.chdir(oldcwd) - log.debug('Restored cwd to %s' % oldcwd) - _update_environment(**oldenv) - - return st - - -def _exec_command(command, use_shell=None, use_tee = None, **env): - """ - Internal workhorse for exec_command(). - """ - if use_shell is None: - use_shell = os.name=='posix' - if use_tee is None: - use_tee = os.name=='posix' - - if os.name == 'posix' and use_shell: - # On POSIX, subprocess always uses /bin/sh, override - sh = os.environ.get('SHELL', '/bin/sh') - if is_sequence(command): - command = [sh, '-c', ' '.join(command)] - else: - command = [sh, '-c', command] - use_shell = False - - elif os.name == 'nt' and is_sequence(command): - # On Windows, join the string for CreateProcess() ourselves as - # subprocess does it a bit differently - command = ' '.join(_quote_arg(arg) for arg in command) - - # Inherit environment by default - env = env or None - try: - # text is set to False so that communicate() - # will return bytes. We need to decode the output ourselves - # so that Python will not raise a UnicodeDecodeError when - # it encounters an invalid character; rather, we simply replace it - proc = subprocess.Popen(command, shell=use_shell, env=env, text=False, - stdout=subprocess.PIPE, - stderr=subprocess.STDOUT) - except OSError: - # Return 127, as os.spawn*() and /bin/sh do - return 127, '' - - text, err = proc.communicate() - mylocale = locale.getpreferredencoding(False) - if mylocale is None: - mylocale = 'ascii' - text = text.decode(mylocale, errors='replace') - text = text.replace('\r\n', '\n') - # Another historical oddity - if text[-1:] == '\n': - text = text[:-1] - - if use_tee and text: - print(text) - return proc.returncode, text - - -def _quote_arg(arg): - """ - Quote the argument for safe use in a shell command line. - """ - # If there is a quote in the string, assume relevants parts of the - # string are already quoted (e.g. 
'-I"C:\\Program Files\\..."') - if '"' not in arg and ' ' in arg: - return '"%s"' % arg - return arg - -############################################################ diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/sas/sas_xport.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/sas/sas_xport.py deleted file mode 100644 index e68f4789f0a06ee8c6a30be47fbadc9b0ba5a12a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/sas/sas_xport.py +++ /dev/null @@ -1,508 +0,0 @@ -""" -Read a SAS XPort format file into a Pandas DataFrame. - -Based on code from Jack Cushman (github.com/jcushman/xport). - -The file format is defined here: - -https://support.sas.com/content/dam/SAS/support/en/technical-papers/record-layout-of-a-sas-version-5-or-6-data-set-in-sas-transport-xport-format.pdf -""" -from __future__ import annotations - -from collections import abc -from datetime import datetime -import struct -from typing import TYPE_CHECKING -import warnings - -import numpy as np - -from pandas.util._decorators import Appender -from pandas.util._exceptions import find_stack_level - -import pandas as pd - -from pandas.io.common import get_handle -from pandas.io.sas.sasreader import ReaderBase - -if TYPE_CHECKING: - from pandas._typing import ( - CompressionOptions, - DatetimeNaTType, - FilePath, - ReadBuffer, - ) -_correct_line1 = ( - "HEADER RECORD*******LIBRARY HEADER RECORD!!!!!!!" - "000000000000000000000000000000 " -) -_correct_header1 = ( - "HEADER RECORD*******MEMBER HEADER RECORD!!!!!!!000000000000000001600000000" -) -_correct_header2 = ( - "HEADER RECORD*******DSCRPTR HEADER RECORD!!!!!!!" - "000000000000000000000000000000 " -) -_correct_obs_header = ( - "HEADER RECORD*******OBS HEADER RECORD!!!!!!!" - "000000000000000000000000000000 " -) -_fieldkeys = [ - "ntype", - "nhfun", - "field_length", - "nvar0", - "name", - "label", - "nform", - "nfl", - "num_decimals", - "nfj", - "nfill", - "niform", - "nifl", - "nifd", - "npos", - "_", -] - - -_base_params_doc = """\ -Parameters ----------- -filepath_or_buffer : str or file-like object - Path to SAS file or object implementing binary read method.""" - -_params2_doc = """\ -index : identifier of index column - Identifier of column that should be used as index of the DataFrame. -encoding : str - Encoding for text data. -chunksize : int - Read file `chunksize` lines at a time, returns iterator.""" - -_format_params_doc = """\ -format : str - File format, only `xport` is currently supported.""" - -_iterator_doc = """\ -iterator : bool, default False - Return XportReader object for reading file incrementally.""" - - -_read_sas_doc = f"""Read a SAS file into a DataFrame. - -{_base_params_doc} -{_format_params_doc} -{_params2_doc} -{_iterator_doc} - -Returns -------- -DataFrame or XportReader - -Examples --------- -Read a SAS Xport file: - ->>> df = pd.read_sas('filename.XPT') - -Read a Xport file in 10,000 line chunks: - ->>> itr = pd.read_sas('filename.XPT', chunksize=10000) ->>> for chunk in itr: ->>> do_something(chunk) - -""" - -_xport_reader_doc = f"""\ -Class for reading SAS Xport files. - -{_base_params_doc} -{_params2_doc} - -Attributes ----------- -member_info : list - Contains information about the file -fields : list - Contains information about the variables in the file -""" - -_read_method_doc = """\ -Read observations from SAS Xport file, returning as data frame. 
- -Parameters ----------- -nrows : int - Number of rows to read from data file; if None, read whole - file. - -Returns -------- -A DataFrame. -""" - - -def _parse_date(datestr: str) -> DatetimeNaTType: - """Given a date in xport format, return Python date.""" - try: - # e.g. "16FEB11:10:07:55" - return datetime.strptime(datestr, "%d%b%y:%H:%M:%S") - except ValueError: - return pd.NaT - - -def _split_line(s: str, parts): - """ - Parameters - ---------- - s: str - Fixed-length string to split - parts: list of (name, length) pairs - Used to break up string, name '_' will be filtered from output. - - Returns - ------- - Dict of name:contents of string at given location. - """ - out = {} - start = 0 - for name, length in parts: - out[name] = s[start : start + length].strip() - start += length - del out["_"] - return out - - -def _handle_truncated_float_vec(vec, nbytes): - # This feature is not well documented, but some SAS XPORT files - # have 2-7 byte "truncated" floats. To read these truncated - # floats, pad them with zeros on the right to make 8 byte floats. - # - # References: - # https://github.com/jcushman/xport/pull/3 - # The R "foreign" library - - if nbytes != 8: - vec1 = np.zeros(len(vec), np.dtype("S8")) - dtype = np.dtype(f"S{nbytes},S{8 - nbytes}") - vec2 = vec1.view(dtype=dtype) - vec2["f0"] = vec - return vec2 - - return vec - - -def _parse_float_vec(vec): - """ - Parse a vector of float values representing IBM 8 byte floats into - native 8 byte floats. - """ - dtype = np.dtype(">u4,>u4") - vec1 = vec.view(dtype=dtype) - xport1 = vec1["f0"] - xport2 = vec1["f1"] - - # Start by setting first half of ieee number to first half of IBM - # number sans exponent - ieee1 = xport1 & 0x00FFFFFF - - # The fraction bit to the left of the binary point in the ieee - # format was set and the number was shifted 0, 1, 2, or 3 - # places. This will tell us how to adjust the ibm exponent to be a - # power of 2 ieee exponent and how to shift the fraction bits to - # restore the correct magnitude. - shift = np.zeros(len(vec), dtype=np.uint8) - shift[np.where(xport1 & 0x00200000)] = 1 - shift[np.where(xport1 & 0x00400000)] = 2 - shift[np.where(xport1 & 0x00800000)] = 3 - - # shift the ieee number down the correct number of places then - # set the second half of the ieee number to be the second half - # of the ibm number shifted appropriately, ored with the bits - # from the first half that would have been shifted in if we - # could shift a double. All we are worried about are the low - # order 3 bits of the first half since we're only shifting by - # 1, 2, or 3. - ieee1 >>= shift - ieee2 = (xport2 >> shift) | ((xport1 & 0x00000007) << (29 + (3 - shift))) - - # clear the 1 bit to the left of the binary point - ieee1 &= 0xFFEFFFFF - - # set the exponent of the ieee number to be the actual exponent - # plus the shift count + 1023. Or this into the first half of the - # ieee number. The ibm exponent is excess 64 but is adjusted by 65 - # since during conversion to ibm format the exponent is - # incremented by 1 and the fraction bits left 4 positions to the - # right of the radix point. 
(had to add >> 24 because C treats & - # 0x7f as 0x7f000000 and Python doesn't) - ieee1 |= ((((((xport1 >> 24) & 0x7F) - 65) << 2) + shift + 1023) << 20) | ( - xport1 & 0x80000000 - ) - - ieee = np.empty((len(ieee1),), dtype=">u4,>u4") - ieee["f0"] = ieee1 - ieee["f1"] = ieee2 - ieee = ieee.view(dtype=">f8") - ieee = ieee.astype("f8") - - return ieee - - -class XportReader(ReaderBase, abc.Iterator): - __doc__ = _xport_reader_doc - - def __init__( - self, - filepath_or_buffer: FilePath | ReadBuffer[bytes], - index=None, - encoding: str | None = "ISO-8859-1", - chunksize: int | None = None, - compression: CompressionOptions = "infer", - ) -> None: - self._encoding = encoding - self._lines_read = 0 - self._index = index - self._chunksize = chunksize - - self.handles = get_handle( - filepath_or_buffer, - "rb", - encoding=encoding, - is_text=False, - compression=compression, - ) - self.filepath_or_buffer = self.handles.handle - - try: - self._read_header() - except Exception: - self.close() - raise - - def close(self) -> None: - self.handles.close() - - def _get_row(self): - return self.filepath_or_buffer.read(80).decode() - - def _read_header(self): - self.filepath_or_buffer.seek(0) - - # read file header - line1 = self._get_row() - if line1 != _correct_line1: - if "**COMPRESSED**" in line1: - # this was created with the PROC CPORT method and can't be read - # https://documentation.sas.com/doc/en/pgmsascdc/9.4_3.5/movefile/p1bm6aqp3fw4uin1hucwh718f6kp.htm - raise ValueError( - "Header record indicates a CPORT file, which is not readable." - ) - raise ValueError("Header record is not an XPORT file.") - - line2 = self._get_row() - fif = [["prefix", 24], ["version", 8], ["OS", 8], ["_", 24], ["created", 16]] - file_info = _split_line(line2, fif) - if file_info["prefix"] != "SAS SAS SASLIB": - raise ValueError("Header record has invalid prefix.") - file_info["created"] = _parse_date(file_info["created"]) - self.file_info = file_info - - line3 = self._get_row() - file_info["modified"] = _parse_date(line3[:16]) - - # read member header - header1 = self._get_row() - header2 = self._get_row() - headflag1 = header1.startswith(_correct_header1) - headflag2 = header2 == _correct_header2 - if not (headflag1 and headflag2): - raise ValueError("Member header not found") - # usually 140, could be 135 - fieldnamelength = int(header1[-5:-2]) - - # member info - mem = [ - ["prefix", 8], - ["set_name", 8], - ["sasdata", 8], - ["version", 8], - ["OS", 8], - ["_", 24], - ["created", 16], - ] - member_info = _split_line(self._get_row(), mem) - mem = [["modified", 16], ["_", 16], ["label", 40], ["type", 8]] - member_info.update(_split_line(self._get_row(), mem)) - member_info["modified"] = _parse_date(member_info["modified"]) - member_info["created"] = _parse_date(member_info["created"]) - self.member_info = member_info - - # read field names - types = {1: "numeric", 2: "char"} - fieldcount = int(self._get_row()[54:58]) - datalength = fieldnamelength * fieldcount - # round up to nearest 80 - if datalength % 80: - datalength += 80 - datalength % 80 - fielddata = self.filepath_or_buffer.read(datalength) - fields = [] - obs_length = 0 - while len(fielddata) >= fieldnamelength: - # pull data for one field - fieldbytes, fielddata = ( - fielddata[:fieldnamelength], - fielddata[fieldnamelength:], - ) - - # rest at end gets ignored, so if field is short, pad out - # to match struct pattern below - fieldbytes = fieldbytes.ljust(140) - - fieldstruct = struct.unpack(">hhhh8s40s8shhh2s8shhl52s", fieldbytes) - field = 
dict(zip(_fieldkeys, fieldstruct)) - del field["_"] - field["ntype"] = types[field["ntype"]] - fl = field["field_length"] - if field["ntype"] == "numeric" and ((fl < 2) or (fl > 8)): - msg = f"Floating field width {fl} is not between 2 and 8." - raise TypeError(msg) - - for k, v in field.items(): - try: - field[k] = v.strip() - except AttributeError: - pass - - obs_length += field["field_length"] - fields += [field] - - header = self._get_row() - if not header == _correct_obs_header: - raise ValueError("Observation header not found.") - - self.fields = fields - self.record_length = obs_length - self.record_start = self.filepath_or_buffer.tell() - - self.nobs = self._record_count() - self.columns = [x["name"].decode() for x in self.fields] - - # Setup the dtype. - dtypel = [ - ("s" + str(i), "S" + str(field["field_length"])) - for i, field in enumerate(self.fields) - ] - dtype = np.dtype(dtypel) - self._dtype = dtype - - def __next__(self) -> pd.DataFrame: - return self.read(nrows=self._chunksize or 1) - - def _record_count(self) -> int: - """ - Get number of records in file. - - This is maybe suboptimal because we have to seek to the end of - the file. - - Side effect: returns file position to record_start. - """ - self.filepath_or_buffer.seek(0, 2) - total_records_length = self.filepath_or_buffer.tell() - self.record_start - - if total_records_length % 80 != 0: - warnings.warn( - "xport file may be corrupted.", - stacklevel=find_stack_level(), - ) - - if self.record_length > 80: - self.filepath_or_buffer.seek(self.record_start) - return total_records_length // self.record_length - - self.filepath_or_buffer.seek(-80, 2) - last_card_bytes = self.filepath_or_buffer.read(80) - last_card = np.frombuffer(last_card_bytes, dtype=np.uint64) - - # 8 byte blank - ix = np.flatnonzero(last_card == 2314885530818453536) - - if len(ix) == 0: - tail_pad = 0 - else: - tail_pad = 8 * len(ix) - - self.filepath_or_buffer.seek(self.record_start) - - return (total_records_length - tail_pad) // self.record_length - - def get_chunk(self, size: int | None = None) -> pd.DataFrame: - """ - Reads lines from Xport file and returns as dataframe - - Parameters - ---------- - size : int, defaults to None - Number of lines to read. If None, reads whole file. 
- - Returns - ------- - DataFrame - """ - if size is None: - size = self._chunksize - return self.read(nrows=size) - - def _missing_double(self, vec): - v = vec.view(dtype="u1,u1,u2,u4") - miss = (v["f1"] == 0) & (v["f2"] == 0) & (v["f3"] == 0) - miss1 = ( - ((v["f0"] >= 0x41) & (v["f0"] <= 0x5A)) - | (v["f0"] == 0x5F) - | (v["f0"] == 0x2E) - ) - miss &= miss1 - return miss - - @Appender(_read_method_doc) - def read(self, nrows: int | None = None) -> pd.DataFrame: - if nrows is None: - nrows = self.nobs - - read_lines = min(nrows, self.nobs - self._lines_read) - read_len = read_lines * self.record_length - if read_len <= 0: - self.close() - raise StopIteration - raw = self.filepath_or_buffer.read(read_len) - data = np.frombuffer(raw, dtype=self._dtype, count=read_lines) - - df_data = {} - for j, x in enumerate(self.columns): - vec = data["s" + str(j)] - ntype = self.fields[j]["ntype"] - if ntype == "numeric": - vec = _handle_truncated_float_vec(vec, self.fields[j]["field_length"]) - miss = self._missing_double(vec) - v = _parse_float_vec(vec) - v[miss] = np.nan - elif self.fields[j]["ntype"] == "char": - v = [y.rstrip() for y in vec] - - if self._encoding is not None: - v = [y.decode(self._encoding) for y in v] - - df_data.update({x: v}) - df = pd.DataFrame(df_data) - - if self._index is None: - df.index = pd.Index(range(self._lines_read, self._lines_read + read_lines)) - else: - df = df.set_index(self._index) - - self._lines_read += read_lines - - return df diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/datetimes/methods/test_factorize.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/datetimes/methods/test_factorize.py deleted file mode 100644 index 3ad927f133fb2ff888b1274c889100a3ce0437f9..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/datetimes/methods/test_factorize.py +++ /dev/null @@ -1,125 +0,0 @@ -import numpy as np -import pytest - -from pandas import ( - DatetimeIndex, - Index, - date_range, - factorize, -) -import pandas._testing as tm - - -class TestDatetimeIndexFactorize: - def test_factorize(self): - idx1 = DatetimeIndex( - ["2014-01", "2014-01", "2014-02", "2014-02", "2014-03", "2014-03"] - ) - - exp_arr = np.array([0, 0, 1, 1, 2, 2], dtype=np.intp) - exp_idx = DatetimeIndex(["2014-01", "2014-02", "2014-03"]) - - arr, idx = idx1.factorize() - tm.assert_numpy_array_equal(arr, exp_arr) - tm.assert_index_equal(idx, exp_idx) - assert idx.freq == exp_idx.freq - - arr, idx = idx1.factorize(sort=True) - tm.assert_numpy_array_equal(arr, exp_arr) - tm.assert_index_equal(idx, exp_idx) - assert idx.freq == exp_idx.freq - - # tz must be preserved - idx1 = idx1.tz_localize("Asia/Tokyo") - exp_idx = exp_idx.tz_localize("Asia/Tokyo") - - arr, idx = idx1.factorize() - tm.assert_numpy_array_equal(arr, exp_arr) - tm.assert_index_equal(idx, exp_idx) - assert idx.freq == exp_idx.freq - - idx2 = DatetimeIndex( - ["2014-03", "2014-03", "2014-02", "2014-01", "2014-03", "2014-01"] - ) - - exp_arr = np.array([2, 2, 1, 0, 2, 0], dtype=np.intp) - exp_idx = DatetimeIndex(["2014-01", "2014-02", "2014-03"]) - arr, idx = idx2.factorize(sort=True) - tm.assert_numpy_array_equal(arr, exp_arr) - tm.assert_index_equal(idx, exp_idx) - assert idx.freq == exp_idx.freq - - exp_arr = np.array([0, 0, 1, 2, 0, 2], dtype=np.intp) - exp_idx = DatetimeIndex(["2014-03", "2014-02", "2014-01"]) - arr, idx = idx2.factorize() - 
tm.assert_numpy_array_equal(arr, exp_arr) - tm.assert_index_equal(idx, exp_idx) - assert idx.freq == exp_idx.freq - - def test_factorize_preserves_freq(self): - # GH#38120 freq should be preserved - idx3 = date_range("2000-01", periods=4, freq="M", tz="Asia/Tokyo") - exp_arr = np.array([0, 1, 2, 3], dtype=np.intp) - - arr, idx = idx3.factorize() - tm.assert_numpy_array_equal(arr, exp_arr) - tm.assert_index_equal(idx, idx3) - assert idx.freq == idx3.freq - - arr, idx = factorize(idx3) - tm.assert_numpy_array_equal(arr, exp_arr) - tm.assert_index_equal(idx, idx3) - assert idx.freq == idx3.freq - - def test_factorize_tz(self, tz_naive_fixture, index_or_series): - tz = tz_naive_fixture - # GH#13750 - base = date_range("2016-11-05", freq="H", periods=100, tz=tz) - idx = base.repeat(5) - - exp_arr = np.arange(100, dtype=np.intp).repeat(5) - - obj = index_or_series(idx) - - arr, res = obj.factorize() - tm.assert_numpy_array_equal(arr, exp_arr) - expected = base._with_freq(None) - tm.assert_index_equal(res, expected) - assert res.freq == expected.freq - - def test_factorize_dst(self, index_or_series): - # GH#13750 - idx = date_range("2016-11-06", freq="H", periods=12, tz="US/Eastern") - obj = index_or_series(idx) - - arr, res = obj.factorize() - tm.assert_numpy_array_equal(arr, np.arange(12, dtype=np.intp)) - tm.assert_index_equal(res, idx) - if index_or_series is Index: - assert res.freq == idx.freq - - idx = date_range("2016-06-13", freq="H", periods=12, tz="US/Eastern") - obj = index_or_series(idx) - - arr, res = obj.factorize() - tm.assert_numpy_array_equal(arr, np.arange(12, dtype=np.intp)) - tm.assert_index_equal(res, idx) - if index_or_series is Index: - assert res.freq == idx.freq - - @pytest.mark.parametrize("sort", [True, False]) - def test_factorize_no_freq_non_nano(self, tz_naive_fixture, sort): - # GH#51978 case that does not go through the fastpath based on - # non-None freq - tz = tz_naive_fixture - idx = date_range("2016-11-06", freq="H", periods=5, tz=tz)[[0, 4, 1, 3, 2]] - exp_codes, exp_uniques = idx.factorize(sort=sort) - - res_codes, res_uniques = idx.as_unit("s").factorize(sort=sort) - - tm.assert_numpy_array_equal(res_codes, exp_codes) - tm.assert_index_equal(res_uniques, exp_uniques.as_unit("s")) - - res_codes, res_uniques = idx.as_unit("s").to_series().factorize(sort=sort) - tm.assert_numpy_array_equal(res_codes, exp_codes) - tm.assert_index_equal(res_uniques, exp_uniques.as_unit("s")) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/chardet/gb2312prober.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/chardet/gb2312prober.py deleted file mode 100644 index 8446d2dd959721cc86d4ae5a7699197454f3aa91..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/chardet/gb2312prober.py +++ /dev/null @@ -1,46 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is mozilla.org code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 1998 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. 
-# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. -# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -from .mbcharsetprober import MultiByteCharSetProber -from .codingstatemachine import CodingStateMachine -from .chardistribution import GB2312DistributionAnalysis -from .mbcssm import GB2312_SM_MODEL - -class GB2312Prober(MultiByteCharSetProber): - def __init__(self): - super(GB2312Prober, self).__init__() - self.coding_sm = CodingStateMachine(GB2312_SM_MODEL) - self.distribution_analyzer = GB2312DistributionAnalysis() - self.reset() - - @property - def charset_name(self): - return "GB2312" - - @property - def language(self): - return "Chinese" diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/pygments/lexers/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/pygments/lexers/__init__.py deleted file mode 100644 index 6981b8d1187b8110fcd33d19430a190053ab048d..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/pygments/lexers/__init__.py +++ /dev/null @@ -1,341 +0,0 @@ -""" - pygments.lexers - ~~~~~~~~~~~~~~~ - - Pygments lexers. - - :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import re -import sys -import types -import fnmatch -from os.path import basename - -from pip._vendor.pygments.lexers._mapping import LEXERS -from pip._vendor.pygments.modeline import get_filetype_from_buffer -from pip._vendor.pygments.plugin import find_plugin_lexers -from pip._vendor.pygments.util import ClassNotFound, guess_decode - -COMPAT = { - 'Python3Lexer': 'PythonLexer', - 'Python3TracebackLexer': 'PythonTracebackLexer', -} - -__all__ = ['get_lexer_by_name', 'get_lexer_for_filename', 'find_lexer_class', - 'guess_lexer', 'load_lexer_from_file'] + list(LEXERS) + list(COMPAT) - -_lexer_cache = {} -_pattern_cache = {} - - -def _fn_matches(fn, glob): - """Return whether the supplied file name fn matches pattern filename.""" - if glob not in _pattern_cache: - pattern = _pattern_cache[glob] = re.compile(fnmatch.translate(glob)) - return pattern.match(fn) - return _pattern_cache[glob].match(fn) - - -def _load_lexers(module_name): - """Load a lexer (and all others in the module too).""" - mod = __import__(module_name, None, None, ['__all__']) - for lexer_name in mod.__all__: - cls = getattr(mod, lexer_name) - _lexer_cache[cls.name] = cls - - -def get_all_lexers(): - """Return a generator of tuples in the form ``(name, aliases, - filenames, mimetypes)`` of all know lexers. - """ - for item in LEXERS.values(): - yield item[1:] - for lexer in find_plugin_lexers(): - yield lexer.name, lexer.aliases, lexer.filenames, lexer.mimetypes - - -def find_lexer_class(name): - """Lookup a lexer class by name. - - Return None if not found. 
- """ - if name in _lexer_cache: - return _lexer_cache[name] - # lookup builtin lexers - for module_name, lname, aliases, _, _ in LEXERS.values(): - if name == lname: - _load_lexers(module_name) - return _lexer_cache[name] - # continue with lexers from setuptools entrypoints - for cls in find_plugin_lexers(): - if cls.name == name: - return cls - - -def find_lexer_class_by_name(_alias): - """Lookup a lexer class by alias. - - Like `get_lexer_by_name`, but does not instantiate the class. - - .. versionadded:: 2.2 - """ - if not _alias: - raise ClassNotFound('no lexer for alias %r found' % _alias) - # lookup builtin lexers - for module_name, name, aliases, _, _ in LEXERS.values(): - if _alias.lower() in aliases: - if name not in _lexer_cache: - _load_lexers(module_name) - return _lexer_cache[name] - # continue with lexers from setuptools entrypoints - for cls in find_plugin_lexers(): - if _alias.lower() in cls.aliases: - return cls - raise ClassNotFound('no lexer for alias %r found' % _alias) - - -def get_lexer_by_name(_alias, **options): - """Get a lexer by an alias. - - Raises ClassNotFound if not found. - """ - if not _alias: - raise ClassNotFound('no lexer for alias %r found' % _alias) - - # lookup builtin lexers - for module_name, name, aliases, _, _ in LEXERS.values(): - if _alias.lower() in aliases: - if name not in _lexer_cache: - _load_lexers(module_name) - return _lexer_cache[name](**options) - # continue with lexers from setuptools entrypoints - for cls in find_plugin_lexers(): - if _alias.lower() in cls.aliases: - return cls(**options) - raise ClassNotFound('no lexer for alias %r found' % _alias) - - -def load_lexer_from_file(filename, lexername="CustomLexer", **options): - """Load a lexer from a file. - - This method expects a file located relative to the current working - directory, which contains a Lexer class. By default, it expects the - Lexer to be name CustomLexer; you can specify your own class name - as the second argument to this function. - - Users should be very careful with the input, because this method - is equivalent to running eval on the input file. - - Raises ClassNotFound if there are any problems importing the Lexer. - - .. versionadded:: 2.2 - """ - try: - # This empty dict will contain the namespace for the exec'd file - custom_namespace = {} - with open(filename, 'rb') as f: - exec(f.read(), custom_namespace) - # Retrieve the class `lexername` from that namespace - if lexername not in custom_namespace: - raise ClassNotFound('no valid %s class found in %s' % - (lexername, filename)) - lexer_class = custom_namespace[lexername] - # And finally instantiate it with the options - return lexer_class(**options) - except OSError as err: - raise ClassNotFound('cannot read %s: %s' % (filename, err)) - except ClassNotFound: - raise - except Exception as err: - raise ClassNotFound('error when loading custom lexer: %s' % err) - - -def find_lexer_class_for_filename(_fn, code=None): - """Get a lexer for a filename. - - If multiple lexers match the filename pattern, use ``analyse_text()`` to - figure out which one is more appropriate. - - Returns None if not found. 
- """ - matches = [] - fn = basename(_fn) - for modname, name, _, filenames, _ in LEXERS.values(): - for filename in filenames: - if _fn_matches(fn, filename): - if name not in _lexer_cache: - _load_lexers(modname) - matches.append((_lexer_cache[name], filename)) - for cls in find_plugin_lexers(): - for filename in cls.filenames: - if _fn_matches(fn, filename): - matches.append((cls, filename)) - - if isinstance(code, bytes): - # decode it, since all analyse_text functions expect unicode - code = guess_decode(code) - - def get_rating(info): - cls, filename = info - # explicit patterns get a bonus - bonus = '*' not in filename and 0.5 or 0 - # The class _always_ defines analyse_text because it's included in - # the Lexer class. The default implementation returns None which - # gets turned into 0.0. Run scripts/detect_missing_analyse_text.py - # to find lexers which need it overridden. - if code: - return cls.analyse_text(code) + bonus, cls.__name__ - return cls.priority + bonus, cls.__name__ - - if matches: - matches.sort(key=get_rating) - # print "Possible lexers, after sort:", matches - return matches[-1][0] - - -def get_lexer_for_filename(_fn, code=None, **options): - """Get a lexer for a filename. - - If multiple lexers match the filename pattern, use ``analyse_text()`` to - figure out which one is more appropriate. - - Raises ClassNotFound if not found. - """ - res = find_lexer_class_for_filename(_fn, code) - if not res: - raise ClassNotFound('no lexer for filename %r found' % _fn) - return res(**options) - - -def get_lexer_for_mimetype(_mime, **options): - """Get a lexer for a mimetype. - - Raises ClassNotFound if not found. - """ - for modname, name, _, _, mimetypes in LEXERS.values(): - if _mime in mimetypes: - if name not in _lexer_cache: - _load_lexers(modname) - return _lexer_cache[name](**options) - for cls in find_plugin_lexers(): - if _mime in cls.mimetypes: - return cls(**options) - raise ClassNotFound('no lexer for mimetype %r found' % _mime) - - -def _iter_lexerclasses(plugins=True): - """Return an iterator over all lexer classes.""" - for key in sorted(LEXERS): - module_name, name = LEXERS[key][:2] - if name not in _lexer_cache: - _load_lexers(module_name) - yield _lexer_cache[name] - if plugins: - yield from find_plugin_lexers() - - -def guess_lexer_for_filename(_fn, _text, **options): - """ - Lookup all lexers that handle those filenames primary (``filenames``) - or secondary (``alias_filenames``). Then run a text analysis for those - lexers and choose the best result. - - usage:: - - >>> from pygments.lexers import guess_lexer_for_filename - >>> guess_lexer_for_filename('hello.html', '<%= @foo %>') - - >>> guess_lexer_for_filename('hello.html', '
{{ title|e }}
      ') - - >>> guess_lexer_for_filename('style.css', 'a { color: }') - - """ - fn = basename(_fn) - primary = {} - matching_lexers = set() - for lexer in _iter_lexerclasses(): - for filename in lexer.filenames: - if _fn_matches(fn, filename): - matching_lexers.add(lexer) - primary[lexer] = True - for filename in lexer.alias_filenames: - if _fn_matches(fn, filename): - matching_lexers.add(lexer) - primary[lexer] = False - if not matching_lexers: - raise ClassNotFound('no lexer for filename %r found' % fn) - if len(matching_lexers) == 1: - return matching_lexers.pop()(**options) - result = [] - for lexer in matching_lexers: - rv = lexer.analyse_text(_text) - if rv == 1.0: - return lexer(**options) - result.append((rv, lexer)) - - def type_sort(t): - # sort by: - # - analyse score - # - is primary filename pattern? - # - priority - # - last resort: class name - return (t[0], primary[t[1]], t[1].priority, t[1].__name__) - result.sort(key=type_sort) - - return result[-1][1](**options) - - -def guess_lexer(_text, **options): - """Guess a lexer by strong distinctions in the text (eg, shebang).""" - - if not isinstance(_text, str): - inencoding = options.get('inencoding', options.get('encoding')) - if inencoding: - _text = _text.decode(inencoding or 'utf8') - else: - _text, _ = guess_decode(_text) - - # try to get a vim modeline first - ft = get_filetype_from_buffer(_text) - - if ft is not None: - try: - return get_lexer_by_name(ft, **options) - except ClassNotFound: - pass - - best_lexer = [0.0, None] - for lexer in _iter_lexerclasses(): - rv = lexer.analyse_text(_text) - if rv == 1.0: - return lexer(**options) - if rv > best_lexer[0]: - best_lexer[:] = (rv, lexer) - if not best_lexer[0] or best_lexer[1] is None: - raise ClassNotFound('no lexer matching the text found') - return best_lexer[1](**options) - - -class _automodule(types.ModuleType): - """Automatically import lexers.""" - - def __getattr__(self, name): - info = LEXERS.get(name) - if info: - _load_lexers(info[0]) - cls = _lexer_cache[info[1]] - setattr(self, name, cls) - return cls - if name in COMPAT: - return getattr(self, COMPAT[name]) - raise AttributeError(name) - - -oldmod = sys.modules[__name__] -newmod = _automodule(__name__) -newmod.__dict__.update(oldmod.__dict__) -sys.modules[__name__] = newmod -del newmod.newmod, newmod.oldmod, newmod.sys, newmod.types diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/other.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/other.py deleted file mode 100644 index f2c07d7edc8874c808b852ac7058f23c9238d4aa..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/other.py +++ /dev/null @@ -1,40 +0,0 @@ -""" - pygments.lexers.other - ~~~~~~~~~~~~~~~~~~~~~ - - Just export lexer classes previously contained in this module. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. 
-""" - -from pygments.lexers.sql import SqlLexer, MySqlLexer, SqliteConsoleLexer -from pygments.lexers.shell import BashLexer, BashSessionLexer, BatchLexer, \ - TcshLexer -from pygments.lexers.robotframework import RobotFrameworkLexer -from pygments.lexers.testing import GherkinLexer -from pygments.lexers.esoteric import BrainfuckLexer, BefungeLexer, RedcodeLexer -from pygments.lexers.prolog import LogtalkLexer -from pygments.lexers.snobol import SnobolLexer -from pygments.lexers.rebol import RebolLexer -from pygments.lexers.configs import KconfigLexer, Cfengine3Lexer -from pygments.lexers.modeling import ModelicaLexer -from pygments.lexers.scripting import AppleScriptLexer, MOOCodeLexer, \ - HybrisLexer -from pygments.lexers.graphics import PostScriptLexer, GnuplotLexer, \ - AsymptoteLexer, PovrayLexer -from pygments.lexers.business import ABAPLexer, OpenEdgeLexer, \ - GoodDataCLLexer, MaqlLexer -from pygments.lexers.automation import AutoItLexer, AutohotkeyLexer -from pygments.lexers.dsls import ProtoBufLexer, BroLexer, PuppetLexer, \ - MscgenLexer, VGLLexer -from pygments.lexers.basic import CbmBasicV2Lexer -from pygments.lexers.pawn import SourcePawnLexer, PawnLexer -from pygments.lexers.ecl import ECLLexer -from pygments.lexers.urbi import UrbiscriptLexer -from pygments.lexers.smalltalk import SmalltalkLexer, NewspeakLexer -from pygments.lexers.installers import NSISLexer, RPMSpecLexer -from pygments.lexers.textedit import AwkLexer -from pygments.lexers.smv import NuSMVLexer - -__all__ = [] diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/command/config.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/command/config.py deleted file mode 100644 index aeda408e731979bf5884e4830fed142a70bfb25e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/setuptools/_distutils/command/config.py +++ /dev/null @@ -1,344 +0,0 @@ -"""distutils.command.config - -Implements the Distutils 'config' command, a (mostly) empty command class -that exists mainly to be sub-classed by specific module distributions and -applications. The idea is that while every "config" command is different, -at least they're all named the same, and users always see "config" in the -list of standard commands. Also, this is a good place to put common -configure-like tasks: "try to compile this C code", or "figure out where -this header file lives". -""" - -import os, re - -from distutils.core import Command -from distutils.errors import DistutilsExecError -from distutils.sysconfig import customize_compiler -from distutils import log - -LANG_EXT = {"c": ".c", "c++": ".cxx"} - -class config(Command): - - description = "prepare to build" - - user_options = [ - ('compiler=', None, - "specify the compiler type"), - ('cc=', None, - "specify the compiler executable"), - ('include-dirs=', 'I', - "list of directories to search for header files"), - ('define=', 'D', - "C preprocessor macros to define"), - ('undef=', 'U', - "C preprocessor macros to undefine"), - ('libraries=', 'l', - "external C libraries to link with"), - ('library-dirs=', 'L', - "directories to search for external C libraries"), - - ('noisy', None, - "show every action (compile, link, run, ...) taken"), - ('dump-source', None, - "dump generated source files before attempting to compile them"), - ] - - - # The three standard command methods: since the "config" command - # does nothing by default, these are empty. 
- - def initialize_options(self): - self.compiler = None - self.cc = None - self.include_dirs = None - self.libraries = None - self.library_dirs = None - - # maximal output for now - self.noisy = 1 - self.dump_source = 1 - - # list of temporary files generated along-the-way that we have - # to clean at some point - self.temp_files = [] - - def finalize_options(self): - if self.include_dirs is None: - self.include_dirs = self.distribution.include_dirs or [] - elif isinstance(self.include_dirs, str): - self.include_dirs = self.include_dirs.split(os.pathsep) - - if self.libraries is None: - self.libraries = [] - elif isinstance(self.libraries, str): - self.libraries = [self.libraries] - - if self.library_dirs is None: - self.library_dirs = [] - elif isinstance(self.library_dirs, str): - self.library_dirs = self.library_dirs.split(os.pathsep) - - def run(self): - pass - - # Utility methods for actual "config" commands. The interfaces are - # loosely based on Autoconf macros of similar names. Sub-classes - # may use these freely. - - def _check_compiler(self): - """Check that 'self.compiler' really is a CCompiler object; - if not, make it one. - """ - # We do this late, and only on-demand, because this is an expensive - # import. - from distutils.ccompiler import CCompiler, new_compiler - if not isinstance(self.compiler, CCompiler): - self.compiler = new_compiler(compiler=self.compiler, - dry_run=self.dry_run, force=1) - customize_compiler(self.compiler) - if self.include_dirs: - self.compiler.set_include_dirs(self.include_dirs) - if self.libraries: - self.compiler.set_libraries(self.libraries) - if self.library_dirs: - self.compiler.set_library_dirs(self.library_dirs) - - def _gen_temp_sourcefile(self, body, headers, lang): - filename = "_configtest" + LANG_EXT[lang] - with open(filename, "w") as file: - if headers: - for header in headers: - file.write("#include <%s>\n" % header) - file.write("\n") - file.write(body) - if body[-1] != "\n": - file.write("\n") - return filename - - def _preprocess(self, body, headers, include_dirs, lang): - src = self._gen_temp_sourcefile(body, headers, lang) - out = "_configtest.i" - self.temp_files.extend([src, out]) - self.compiler.preprocess(src, out, include_dirs=include_dirs) - return (src, out) - - def _compile(self, body, headers, include_dirs, lang): - src = self._gen_temp_sourcefile(body, headers, lang) - if self.dump_source: - dump_file(src, "compiling '%s':" % src) - (obj,) = self.compiler.object_filenames([src]) - self.temp_files.extend([src, obj]) - self.compiler.compile([src], include_dirs=include_dirs) - return (src, obj) - - def _link(self, body, headers, include_dirs, libraries, library_dirs, - lang): - (src, obj) = self._compile(body, headers, include_dirs, lang) - prog = os.path.splitext(os.path.basename(src))[0] - self.compiler.link_executable([obj], prog, - libraries=libraries, - library_dirs=library_dirs, - target_lang=lang) - - if self.compiler.exe_extension is not None: - prog = prog + self.compiler.exe_extension - self.temp_files.append(prog) - - return (src, obj, prog) - - def _clean(self, *filenames): - if not filenames: - filenames = self.temp_files - self.temp_files = [] - log.info("removing: %s", ' '.join(filenames)) - for filename in filenames: - try: - os.remove(filename) - except OSError: - pass - - - # XXX these ignore the dry-run flag: what to do, what to do? even if - # you want a dry-run build, you still need some sort of configuration - # info. 
My inclination is to make it up to the real config command to - # consult 'dry_run', and assume a default (minimal) configuration if - # true. The problem with trying to do it here is that you'd have to - # return either true or false from all the 'try' methods, neither of - # which is correct. - - # XXX need access to the header search path and maybe default macros. - - def try_cpp(self, body=None, headers=None, include_dirs=None, lang="c"): - """Construct a source file from 'body' (a string containing lines - of C/C++ code) and 'headers' (a list of header files to include) - and run it through the preprocessor. Return true if the - preprocessor succeeded, false if there were any errors. - ('body' probably isn't of much use, but what the heck.) - """ - from distutils.ccompiler import CompileError - self._check_compiler() - ok = True - try: - self._preprocess(body, headers, include_dirs, lang) - except CompileError: - ok = False - - self._clean() - return ok - - def search_cpp(self, pattern, body=None, headers=None, include_dirs=None, - lang="c"): - """Construct a source file (just like 'try_cpp()'), run it through - the preprocessor, and return true if any line of the output matches - 'pattern'. 'pattern' should either be a compiled regex object or a - string containing a regex. If both 'body' and 'headers' are None, - preprocesses an empty file -- which can be useful to determine the - symbols the preprocessor and compiler set by default. - """ - self._check_compiler() - src, out = self._preprocess(body, headers, include_dirs, lang) - - if isinstance(pattern, str): - pattern = re.compile(pattern) - - with open(out) as file: - match = False - while True: - line = file.readline() - if line == '': - break - if pattern.search(line): - match = True - break - - self._clean() - return match - - def try_compile(self, body, headers=None, include_dirs=None, lang="c"): - """Try to compile a source file built from 'body' and 'headers'. - Return true on success, false otherwise. - """ - from distutils.ccompiler import CompileError - self._check_compiler() - try: - self._compile(body, headers, include_dirs, lang) - ok = True - except CompileError: - ok = False - - log.info(ok and "success!" or "failure.") - self._clean() - return ok - - def try_link(self, body, headers=None, include_dirs=None, libraries=None, - library_dirs=None, lang="c"): - """Try to compile and link a source file, built from 'body' and - 'headers', to executable form. Return true on success, false - otherwise. - """ - from distutils.ccompiler import CompileError, LinkError - self._check_compiler() - try: - self._link(body, headers, include_dirs, - libraries, library_dirs, lang) - ok = True - except (CompileError, LinkError): - ok = False - - log.info(ok and "success!" or "failure.") - self._clean() - return ok - - def try_run(self, body, headers=None, include_dirs=None, libraries=None, - library_dirs=None, lang="c"): - """Try to compile, link to an executable, and run a program - built from 'body' and 'headers'. Return true on success, false - otherwise. - """ - from distutils.ccompiler import CompileError, LinkError - self._check_compiler() - try: - src, obj, exe = self._link(body, headers, include_dirs, - libraries, library_dirs, lang) - self.spawn([exe]) - ok = True - except (CompileError, LinkError, DistutilsExecError): - ok = False - - log.info(ok and "success!" 
or "failure.") - self._clean() - return ok - - - # -- High-level methods -------------------------------------------- - # (these are the ones that are actually likely to be useful - # when implementing a real-world config command!) - - def check_func(self, func, headers=None, include_dirs=None, - libraries=None, library_dirs=None, decl=0, call=0): - """Determine if function 'func' is available by constructing a - source file that refers to 'func', and compiles and links it. - If everything succeeds, returns true; otherwise returns false. - - The constructed source file starts out by including the header - files listed in 'headers'. If 'decl' is true, it then declares - 'func' (as "int func()"); you probably shouldn't supply 'headers' - and set 'decl' true in the same call, or you might get errors about - a conflicting declarations for 'func'. Finally, the constructed - 'main()' function either references 'func' or (if 'call' is true) - calls it. 'libraries' and 'library_dirs' are used when - linking. - """ - self._check_compiler() - body = [] - if decl: - body.append("int %s ();" % func) - body.append("int main () {") - if call: - body.append(" %s();" % func) - else: - body.append(" %s;" % func) - body.append("}") - body = "\n".join(body) + "\n" - - return self.try_link(body, headers, include_dirs, - libraries, library_dirs) - - def check_lib(self, library, library_dirs=None, headers=None, - include_dirs=None, other_libraries=[]): - """Determine if 'library' is available to be linked against, - without actually checking that any particular symbols are provided - by it. 'headers' will be used in constructing the source file to - be compiled, but the only effect of this is to check if all the - header files listed are available. Any libraries listed in - 'other_libraries' will be included in the link, in case 'library' - has symbols that depend on other libraries. - """ - self._check_compiler() - return self.try_link("int main (void) { }", headers, include_dirs, - [library] + other_libraries, library_dirs) - - def check_header(self, header, include_dirs=None, library_dirs=None, - lang="c"): - """Determine if the system header file named by 'header_file' - exists and can be found by the preprocessor; return true if so, - false otherwise. - """ - return self.try_cpp(body="/* No body */", headers=[header], - include_dirs=include_dirs) - -def dump_file(filename, head=None): - """Dumps a file content into log.info. - - If head is not None, will be dumped before the file content. - """ - if head is None: - log.info('%s', filename) - else: - log.info(head) - file = open(filename) - try: - log.info(file.read()) - finally: - file.close() diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Big Fish Audio - Dubstep Impact 2 [KONTAKTWAVREX2AIF] Free Download [UPDATED].md b/spaces/quidiaMuxgu/Expedit-SAM/Big Fish Audio - Dubstep Impact 2 [KONTAKTWAVREX2AIF] Free Download [UPDATED].md deleted file mode 100644 index 3d685e608324a38201cc03dfc00725ca8f6949af..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Big Fish Audio - Dubstep Impact 2 [KONTAKTWAVREX2AIF] Free Download [UPDATED].md +++ /dev/null @@ -1,6 +0,0 @@ -

      Big Fish Audio - Dubstep Impact 2 [KONTAKT,WAV,REX2,AIF] Free Download


      Download File 🔗 https://geags.com/2uCqOA



- -This list has over 100 of the BEST available Free and Premium Kontakt Libraries in 2017. ... The Best Kontakt Libraries in 2017 - 131 Free & Premium Downloads. The ... Zimmer presents the ultimate piano library recorded in the Hall at Air Studios over many weeks. ... Funk Soul Horns 2 by Big Fish Audio [On Sample Magic].
      -
      -
      -

      diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Daniusoft Video Converter Ultimate V 3.1.0.6 Portable Fixed.md b/spaces/quidiaMuxgu/Expedit-SAM/Daniusoft Video Converter Ultimate V 3.1.0.6 Portable Fixed.md deleted file mode 100644 index 46d65538ba62b2519744e6e5d8bc4b9d8c54feff..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Daniusoft Video Converter Ultimate V 3.1.0.6 Portable Fixed.md +++ /dev/null @@ -1,30 +0,0 @@ -

      Daniusoft Video Converter Ultimate v 3.1.0.6 Portable


      Downloadhttps://geags.com/2uCsag



      - -Similar Software with Daniusoft Video Converter Ultimate: Video Converter, Free Video Converter, Video Converter Ultimate, Video Converter Pro, iMedia Converter.Welcome to the Daniusoft Video Converter. This is the perfect video converter that can convert AVI to MPG. Just give a rip and rip your favorite videos to your favorite portable devices like MP3 player, mobile phone, PDA, PSP, DVD players and more. - -This version of Video Converter Ultimate supports new DRM videos like HD WMV9, MOV9, DIVX, VC1, XVID HD and other DRM formats. Also it supports new versions of AVI, MOV, FLV, MP4 and other popular video formats. - -In this version, you can convert popular audio formats to MP3, AAC, WAV, OGG and other audio files. And you can also convert popular video files to DVD format for playing. - -Fancy Features: - - Advanced » Advanced » Advanced: - -* Play video or convert video with the most powerful features. - -* Add » Remove, Merge » Edit/Trim video. - -* Advanced Speed » Download video at the fastest speed. - -* Convert video to DVD, AVI, MPEG, VOB, MP4 and other popular video formats. - -* High Quality: MPEG4, DivX HD, VOB, MPEG and other popular formats. - -* Support to convert 360 degree video: wmv to mp4, mov to mp4, flv to mp4, asf to mp4 and many other popular formats. - -* Support to convert iPhone and iPad videos, iPod videos, PSP videos, Zune videos, cell phone videos, Pocket PC videos, VCDs and other portable videos. - -* Support for video converting: convert AVI to MOV, MOV to AVI, MP4 to MOV, FLV to MOV, FLV to MP4, MP4 to MOV, FLV to AVI, AVI to MOV, MPEG to MOV, XVID to MOV, XVID to FLV, VC1 to FLV, VC1 to MOV, M2TS to MOV, MPEG to M2TS, 3GP to M2TS, 3GP to MOV, WMV to MOV, AVI to MOV, AVI to FLV, AVI to MPEG, AVI to M2TS, AVI to MTS, AVI to MKV, 4fefd39f24
      -
      -
      -

      diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Kmdf Hid Minidriver For Touch I2c Device.md b/spaces/quidiaMuxgu/Expedit-SAM/Kmdf Hid Minidriver For Touch I2c Device.md deleted file mode 100644 index 973267ff559942a0f01f9edc47dbbf00f36d52c4..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Kmdf Hid Minidriver For Touch I2c Device.md +++ /dev/null @@ -1,14 +0,0 @@ -

      kmdf hid minidriver for touch i2c device


      Download Zip --->>> https://geags.com/2uCrjS



      -
-Sileadinc.com - Other equipment - KMDF HID Minidriver for Touch I2C device, Windows 10 service drivers and later for testing, Windows 10 Anniversary Update and... Other KMDF Software HID Minidriver for Touch I2C device for Windows 10. -Software for diagnosing, repairing and servicing Windows 10. -iFixit.com - Laptop and Computer Repair - iFixit for Beginners | Repair of laptops and computers - iFixit for beginners -iFixIt for Beginners - iFixit for laptops and computers, starting from the beginning. -This repair guide will help you learn how to do this.
      -
      -
      -

      diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Michael Parkin Macroeconomics 10th Edition Pdf Free !!INSTALL!! Download.md b/spaces/quidiaMuxgu/Expedit-SAM/Michael Parkin Macroeconomics 10th Edition Pdf Free !!INSTALL!! Download.md deleted file mode 100644 index 9e7c3abfe3372faeac155f5d74fffa37b90a19bd..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Michael Parkin Macroeconomics 10th Edition Pdf Free !!INSTALL!! Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

      michael parkin macroeconomics 10th edition pdf free download


      Downloadhttps://geags.com/2uCs3p



      -
-Solved expert answers for the 10th edition of macroeconomics by Michael Parkin. Instant access with 24/7 expert support. Review quiz to define GDP and distinguish between final good and intermediate good. What is GDP? GNP or GDP? How to distinguish the final good from the intermediate good? How does GDP work? What is the difference between a final good and an intermediate good? How to calculate GDP and GNP? Why is GDP used as a measure of economic development? What are the ways to measure economic development? What is the difference between GDP and GNP? What is the relationship between income, expenditure and GDP? What is real and nominal GDP?
      -
      -
      -

      diff --git a/spaces/r3gm/AICoverGen/src/infer_pack/models.py b/spaces/r3gm/AICoverGen/src/infer_pack/models.py deleted file mode 100644 index 5e4b2e72383efaee1fae4f5c42e3db2c627e4190..0000000000000000000000000000000000000000 --- a/spaces/r3gm/AICoverGen/src/infer_pack/models.py +++ /dev/null @@ -1,1124 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from infer_pack import modules -from infer_pack import attentions -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from infer_pack.commons import init_weights -import numpy as np -from infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - 
hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = 
self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, 
sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = 
self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = 
self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class 
SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - 
self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = 
[] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/radames/MusicGen-Continuation/audiocraft/__init__.py b/spaces/radames/MusicGen-Continuation/audiocraft/__init__.py deleted file mode 100644 index 2befac60faf6f406f78ff7b7da05225dbfe7b111..0000000000000000000000000000000000000000 --- a/spaces/radames/MusicGen-Continuation/audiocraft/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from . 
import data, modules, models - -__version__ = '0.0.2a1' diff --git a/spaces/radames/UserControllableLT-Latent-Transformer/expansion/dataloader/kitti15list_val_mr.py b/spaces/radames/UserControllableLT-Latent-Transformer/expansion/dataloader/kitti15list_val_mr.py deleted file mode 100644 index 56c209f65a734b9874738456a1bebc843f6fecb1..0000000000000000000000000000000000000000 --- a/spaces/radames/UserControllableLT-Latent-Transformer/expansion/dataloader/kitti15list_val_mr.py +++ /dev/null @@ -1,41 +0,0 @@ -import torch.utils.data as data - -from PIL import Image -import os -import os.path -import numpy as np - -IMG_EXTENSIONS = [ - '.jpg', '.JPG', '.jpeg', '.JPEG', - '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP', -] - - -def is_image_file(filename): - return any(filename.endswith(extension) for extension in IMG_EXTENSIONS) - -def dataloader(filepath): - - left_fold = 'image_2/' - flow_noc = 'flow_occ/' - - train = [img for img in os.listdir(filepath+left_fold) if 'Kitti' in img and img.find('_10') > -1] - -# train = [i for i in train if int(i.split('_')[1])%5==0] - import pdb; pdb.set_trace() - train = sorted([i for i in train if int(i.split('_')[1])%5==0])[0:1] - - l0_train = [filepath+left_fold+img for img in train] - l1_train = [filepath+left_fold+img.replace('_10','_11') for img in train] - flow_train = [filepath+flow_noc+img for img in train] - - l0_train += [filepath+left_fold+img.replace('_10','_09') for img in train] - l1_train += [filepath+left_fold+img for img in train] - flow_train += flow_train - - tmp = l0_train - l0_train = l0_train+ [i.replace('rob_flow', 'kitti_scene').replace('Kitti2015_','') for i in l1_train] - l1_train = l1_train+tmp - flow_train += flow_train - - return l0_train, l1_train, flow_train diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Adobe Cs3 Master Collection Crack Torrent How to Unlock All the Features of the Creative Suite.md b/spaces/raedeXanto/academic-chatgpt-beta/Adobe Cs3 Master Collection Crack Torrent How to Unlock All the Features of the Creative Suite.md deleted file mode 100644 index 647921b41e3e86e1c87aacb9327e0887a4d56518..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Adobe Cs3 Master Collection Crack Torrent How to Unlock All the Features of the Creative Suite.md +++ /dev/null @@ -1,163 +0,0 @@ -
      -

      Adobe Cs3 Master Collection Crack Torrent: What You Need to Know

      -

      If you are looking for a way to get access to a comprehensive suite of creative software applications without paying a hefty price, you might be tempted to use a crack torrent for Adobe Cs3 Master Collection. But before you do that, you should know what Adobe Cs3 Master Collection is, why you might want to use a crack torrent for it, and how to find and download one safely. In this article, we will answer these questions and more.

      -

      Adobe Cs3 Master Collection Crack Torrent


      DOWNLOADhttps://tinourl.com/2uL1XD



      -

      What is Adobe Cs3 Master Collection?

      -

      Adobe Cs3 Master Collection is a discontinued software suite of graphic design, video editing, and web development applications developed by Adobe Systems. It was launched in 2007 as the successor of Adobe Creative Suite 2 (CS2) and the predecessor of Adobe Creative Suite 4 (CS4). It includes the following applications:

      -
        -
      • Adobe Acrobat 8 Professional
      • -
      • Adobe After Effects CS3 Professional
      • -
      • Adobe Dreamweaver CS3
      • -
      • Adobe Encore CS3
      • -
      • Adobe Flash CS3 Professional
      • -
      • Adobe Illustrator CS3
      • -
      • Adobe InDesign CS3
      • -
      • Adobe Photoshop CS3 Extended
      • -
      • Adobe Premiere Pro CS3
      • -
      • Adobe Soundbooth CS3
      • -
      • Adobe OnLocation CS3 (Windows Only)
      • -
      • Adobe Ultra CS3 (Windows Only)
      • -
      • Adobe Bridge CS3
      • -
      • Adobe Device Central CS3
      • -
      • Adobe Stock Photos
      • -
      • Adobe Version Cue CS3
      • -
      -

      With these applications, you can create stunning graphics, animations, websites, videos, audio, and more. You can also integrate them with each other and with other Adobe products for seamless workflows. However, since Adobe Cs3 Master Collection is no longer supported or available from Adobe's online store or website, you will need to find alternative ways to get it if you want to use it.

      -

      Why Use a Crack Torrent for Adobe Cs3 Master Collection?

      -

      A crack torrent is a file that contains both the software installation files and a crack or keygen that can bypass the software's activation or registration process. By using a crack torrent, you can essentially get the software for free without having to buy a license or subscription from the official source. However, this also comes with some advantages and disadvantages that you should consider before using one.

      -

      Advantages of Using a Crack Torrent

      -

      Some of the advantages of using a crack torrent for Adobe Cs3 Master Collection are:

      -
        -
      • You can save money on buying the software license. Adobe Cs3 Master Collection was originally priced at $2499 for Windows and $2599 for Mac OS. If you use a crack torrent, you can get it for free or at a fraction of that cost.
      • -
      • You can access older versions of Adobe applications that are no longer supported or available. Some users may prefer using older versions of Adobe applications because they are more familiar with them, they have lower system requirements, or they have features that are not available in newer versions. For example, some users may prefer Photoshop CS3 Extended over Photoshop CC because it has 3D editing capabilities that were removed in later versions.
      • -
      • You can bypass activation and registration issues that may arise with official downloads. Some users may encounter problems with activating or registering their Adobe products due to server errors, expired licenses, or incompatible operating systems. For example, some users may not be able to activate their CS6 products after upgrading their Mac OS to Catalina because CS6 is not compatible with Catalina. If you use a crack torrent, you can avoid these issues by using the crack or keygen provided.
      • -
      -

      Disadvantages of Using a Crack Torrent

      -

      Some of the disadvantages of using a crack torrent for Adobe Cs3 Master Collection are:

      -
        -
      • You risk legal consequences for violating Adobe's terms of service and intellectual property rights. Using a crack torrent is considered piracy and illegal in most countries. You may face fines, lawsuits, or even criminal charges if you are caught using or distributing pirated software.
      • -
      • You expose your computer to malware, viruses, and other security threats. Many crack torrents are infected with malicious software that can harm your computer or steal your personal information. You may also download fake or corrupted files that can damage your system or compromise your data.
      • -
      • You compromise the quality and functionality of the software. Many crack torrents are modified or tampered with by hackers or crackers who may introduce bugs, errors, or glitches into the software. You may also miss out on important updates, patches, or fixes that can improve the performance or stability of the software.
      • -
      • You miss out on customer support from Adobe. If you use a crack torrent, you will not be able to access any customer support from Adobe if you encounter any problems or issues with your software. You will also not be able to access any online services or features that require an Adobe account or subscription.
      • -
      -

      How to Find and Download a Crack Torrent for Adobe Cs3 Master Collection?

      -

      If you decide to use a crack torrent for Adobe Cs3 Master Collection despite its disadvantages , you will need to follow these steps:

      -

      Step 1: Find a Reliable Torrent Site

      -

      A torrent site is a website that hosts torrent files that can be downloaded by users using peer-to-peer (P2P) file sharing networks. However, not all torrent sites are reliable or trustworthy. Some may have low-quality torrents, fake torrents, malware-infected torrents, or intrusive ads that can ruin your torrenting experience. Therefore, you should choose a torrent site that has a good reputation, a large user base, and a high seed-to-leech ratio. A seed is a user who has the complete file and is sharing it with others, while a leech is a user who is downloading the file but not sharing it back. The more seeds and fewer leeches a torrent has, the faster and more stable the download will be.

      -

      Some examples of popular torrent sites that may have Adobe Cs3 Master Collection crack torrents are:

      -


      -
        -
      • The Pirate Bay — The most well-established torrent site with tons of seeders and verified uploaders. It has a simple interface and a wide range of categories to choose from. However, it is also blocked in many countries and may require a proxy or VPN to access.
      • -
      • 1337x — A huge torrent library with a user-friendly interface and various filters to narrow down your search results. It also has a dedicated community of uploaders and moderators who ensure the quality and safety of the torrents. However, it also has some untrustworthy or fake links that you should avoid clicking on.
      • -
      • RARBG — A torrent site that verifies all torrents and provides detailed information and screenshots for each one. It also has a personalized user experience that allows you to create an account and bookmark your favorite torrents. However, it also has plenty of ads that can be annoying or misleading.
      • -
      -

      Step 2: Search for Adobe Cs3 Master Collection Crack Torrent

      -

      Once you have chosen a torrent site, you can start searching for Adobe Cs3 Master Collection crack torrent on it. You can use keywords such as "Adobe Cs3 Master Collection", "Adobe Cs3 crack", "Adobe Cs3 keygen", or "Adobe Cs3 torrent" to find relevant results. You can also use filters such as category, size, date, seeders, leechers, or rating to sort the results according to your preferences.

      -

      Before you download any torrent, you should check the comments, ratings, and feedback of other users who have downloaded it before. This will help you verify the quality and authenticity of the torrent and avoid any fake or malicious ones. You should also compare the file size, format, and contents of different torrents to choose the best one for your needs. For example, some torrents may include only certain applications from the suite, while others may include all of them. Some torrents may also include additional files such as instructions, patches, or bonus content.

      -

      Step 3: Download and Install a Torrent Client

      -

      A torrent client is a software application that allows you to download and manage your torrents. You will need to download and install a torrent client on your computer before you can download any torrent file from the torrent site. There are many torrent clients available for both Windows and Mac OS, but some of the most popular ones are:

      -
        -
      • uTorrent — A lightweight and easy-to-use torrent client that supports various features such as magnet links, streaming, bandwidth control, remote access, and more. However, it also has some ads and bundled software that you may want to opt out of during installation.
      • -
      • BitTorrent — A similar torrent client to uTorrent that is owned by the same company. It has a slightly different interface and some extra features such as antivirus protection, media player, and file conversion. However, it also has some ads and bundled software that you may want to opt out of during installation.
      • -
      • qBittorrent — A free and open-source torrent client that has no ads or bundled software. It has a clean and simple interface and supports various features such as magnet links, RSS feeds, IP filtering, encryption, and more.
      • -
      -

      To download and install a torrent client, you can visit its official website and follow the instructions provided there.

      -

      Step 4: Download and Install Adobe Cs3 Master Collection Crack Torrent

      -

      To download Adobe Cs3 Master Collection crack torrent from the torrent site , you need to do the following:

      -
        -
      • Download the torrent file from the torrent site by clicking on the download link or magnet link. A magnet link is a URL that contains the information of the torrent file and allows you to download it directly with your torrent client without having to download the torrent file first.
      • -
      • Open the torrent file or magnet link with your torrent client. You can do this by double-clicking on the file or copying and pasting the link into your torrent client.
      • -
      • Select the files you want to download and choose a destination folder for them. You can do this by checking or unchecking the boxes next to the files and browsing for a folder on your computer where you want to save them.
      • -
      • Monitor the download progress and speed of your torrent. You can do this by looking at the status bar or the details panel of your torrent client. You can also pause, resume, or cancel your download at any time.
      • -
      • Install Adobe Cs3 Master Collection from the downloaded files using the crack or keygen provided. You can do this by following the instructions below:
      • -
      -
        -
      1. Extract the downloaded files using a file compression tool such as WinRAR or 7-Zip. You may need a password to extract some files, which should be provided by the uploader in the comments section or in a text file.
      2. -
      3. Open the extracted folder and double-click on the Setup.exe file to start the installation process.
      4. -
      5. Follow the on-screen instructions to install Adobe Cs3 Master Collection. You may need to enter a serial number, which should be provided by the uploader in the comments section or in a text file. Alternatively, you can use a keygen to generate a serial number. A keygen is a software tool that can create valid serial numbers for a software product. To use a keygen, you need to run it and copy and paste the serial number it generates into the installation window.
      6. -
      7. After the installation is complete, do not launch any of the applications yet. You need to apply the crack first. A crack is a software patch that can modify or bypass the activation or registration process of a software product. To apply a crack, you need to copy and paste it into the installation folder of Adobe Cs3 Master Collection, replacing the original files. The installation folder is usually located at C:\Program Files\Adobe\Adobe Cs3 Master Collection or C:\Program Files (x86)\Adobe\Adobe Cs3 Master Collection.
      8. -
      9. Launch any of the applications and enjoy Adobe Cs3 Master Collection for free.
      10. -
      -

      Conclusion

      -

      In this article, we have shown you what Adobe Cs3 Master Collection is, why you might want to use a crack torrent for it, and how to find and download one safely. We have also given you a step-by-step guide on how to install Adobe Cs3 Master Collection from a crack torrent. However, we have also warned you about the disadvantages and risks of using a crack torrent, such as legal consequences, security threats, quality issues, and customer support limitations. Therefore, we do not recommend or endorse using any crack torrents for any software products. If you want to use Adobe Cs3 Master Collection legally and safely, you should buy a license or subscription from Adobe's official website or online store.

      -

      Frequently Asked Questions

      -

      Here are some common questions and answers about Adobe Cs3 Master Collection crack torrents:

      -
        -
      1. Is using a crack torrent for Adobe Cs3 Master Collection illegal?
      2. -

        Yes, using a crack torrent for Adobe Cs3 Master Collection is illegal in most countries. It violates Adobe's terms of service and intellectual property rights. You may face fines, lawsuits, or even criminal charges if you are caught using or distributing pirated software.

        -
      3. Is using a crack torrent for Adobe Cs3 Master Collection safe?
      4. -

        No, using a crack torrent for Adobe Cs3 Master Collection is not safe. You expose your computer to malware, viruses, and other security threats that can harm your system or steal your personal information. You also compromise the quality and functionality of the software by using a modified or tampered version that may have bugs, errors, or glitches. You also miss out on important updates, patches, or fixes that can improve the performance or stability of the software. You also lose access to customer support from Adobe if you encounter any problems or issues with your software.

        -
      5. How to find and download a crack torrent for Adobe Cs3 Master Collection?
      6. -

        To find and download a crack torrent for Adobe Cs3 Master Collection, you need to follow these steps:

        -
          -
        • Find a reliable torrent site that has a good reputation, a large user base, and a high seed-to-leech ratio.
        • -
        • Search for Adobe Cs3 Master Collection crack torrent on the torrent site using keywords, filters, and categories.
        • -
        • Check the comments, ratings, and feedback of other users to verify the quality and authenticity of the torrent.
        • -
        • Compare the file size, format, and contents of different torrents to choose the best one for your needs.
        • -
        • Download the torrent file or magnet link from the torrent site and open it with your torrent client.
        • -
        • Select the files you want to download and choose a destination folder for them.
        • -
        • Monitor the download progress and speed of your torrent.
        • -
        • Install Adobe Cs3 Master Collection from the downloaded files using the crack or keygen provided.
        • -
        -
      7. How to install Adobe Cs3 Master Collection from a crack torrent?
      8. -

        To install Adobe Cs3 Master Collection from a crack torrent, you need to follow these steps:

        -
          -
        1. Extract the downloaded files using a file compression tool such as WinRAR or 7-Zip.
        2. -
        3. Open the extracted folder and double-click on the Setup.exe file to start the installation process.
        4. -
        5. Follow the on-screen instructions to install Adobe Cs3 Master Collection. You may need to enter a serial number, which should be provided by the uploader in the comments section or in a text file. Alternatively, you can use a keygen to generate a serial number.
        6. -
        7. After the installation is complete, do not launch any of the applications yet. You need to apply the crack first. To apply a crack, you need to copy and paste it into the installation folder of Adobe Cs3 Master Collection, replacing the original files.
        8. -
        9. Launch any of the applications and enjoy Adobe Cs3 Master Collection for free.
        10. -
        -
      -

      I hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

      -

      -
      -
      \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Canon Pixma G1010 Driver For Mac.md b/spaces/raedeXanto/academic-chatgpt-beta/Canon Pixma G1010 Driver For Mac.md deleted file mode 100644 index eba8ba291e4e205a174395e4d8b148ff7bc39311..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Canon Pixma G1010 Driver For Mac.md +++ /dev/null @@ -1,27 +0,0 @@ -
      -

      How to Download and Install Canon Pixma G1010 Driver for Mac

      -

      If you have a Canon Pixma G1010 printer and want to use it with your Mac computer, you need to download and install the appropriate driver software. The driver software allows your Mac to communicate with the printer and access its features and functions. In this article, we will show you how to download and install Canon Pixma G1010 driver for Mac in a few simple steps.

      -

      Canon Pixma G1010 Driver For Mac


      Download ··· https://tinourl.com/2uL3rz



      -

      Step 1: Visit the Canon website

      -

      The first step is to visit the official Canon website and find the support page for your printer model. You can use the search box or the product categories to locate your printer. Alternatively, you can use this link to go directly to the support page for Canon Pixma G1010: https://www.canon.co.za/printers/pixma-g1010/support/

      -

      Step 2: Select your operating system

      -

      Once you are on the support page, you will see a drop-down menu that allows you to select your operating system. Choose "Mac OS" from the list and then select the version of Mac OS that you are using. For example, if you are using Mac OS 11 (Big Sur), select "Mac OS 11.0 (Big Sur)".

      -

      Step 3: Download the driver file

      -

      After selecting your operating system, you will see a list of available driver files for your printer. Look for the file that has "CUPS Printer Driver" in its name. This is the driver file that you need to download and install. Click on the "Download" button next to the file name and save the file to your computer.

      -

      -

      Step 4: Install the driver file

      -

      Once the download is complete, locate the file on your computer and double-click on it to open it. You will see a window that asks you to agree to the terms and conditions of the software license agreement. Click on "Agree" and then follow the on-screen instructions to install the driver software. You may need to enter your administrator password or confirm your identity during the installation process.

      -

      Step 5: Restart your Mac and printer

      -

      After the installation is finished, you need to restart your Mac and your printer for the changes to take effect. Turn off your printer and unplug it from the power source. Then restart your Mac by clicking on the Apple menu and choosing "Restart". Once your Mac has restarted, plug in your printer and turn it on. Your printer should now be ready to use with your Mac.

      - -

      Step 6: Test your printer

      -

      To make sure that your printer is working properly with your Mac, you can test it by printing a document or a photo. Open the document or photo that you want to print on your Mac and click on the "File" menu. Then choose "Print" and select your printer from the list of available printers. You can adjust the print settings such as paper size, orientation, quality, and color according to your preferences. Then click on "Print" and wait for your printer to finish printing.
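If you prefer the command line, you can also confirm that the driver registered a print queue by asking CUPS, the printing system that macOS uses. The short Python sketch below simply wraps the standard lpstat command; the "G1010" string it searches for is an assumption about how the queue is named, so use whatever name lpstat actually reports on your Mac.

```python
import subprocess

# Ask CUPS (the macOS printing system) to list every registered print queue.
result = subprocess.run(["lpstat", "-p"], capture_output=True, text=True)
print(result.stdout)

# "G1010" is only an example of what the queue name may contain;
# check the lpstat output above for the exact name on your system.
if "G1010" in result.stdout:
    print("Canon Pixma G1010 queue found - the driver appears to be installed.")
else:
    print("No G1010 queue found - try reinstalling the driver or re-adding the printer.")
```

You can run the same check directly with lpstat -p in Terminal; the script is just a convenience if you prefer Python.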

      -

      Step 7: Troubleshoot any issues

      -

      If you encounter any problems with your printer or the driver software, you can try some basic troubleshooting steps to resolve them. Here are some common issues and solutions:

      -
        -
      • If your printer is not detected by your Mac, make sure that it is connected to the same network as your Mac and that it is turned on. You can also try unplugging and plugging in your printer or restarting your Mac and your printer.
      • -
      • If your printer is printing slowly or with poor quality, make sure that you are using the correct paper type and size for your printer and that you have enough ink in the cartridges. You can also try cleaning the print head or aligning the print head using the maintenance options on your printer.
      • -
      • If your printer is showing an error message or a flashing light, check the user manual or the support website for your printer model to find out what the error means and how to fix it. You can also contact Canon customer service for further assistance.
      • -

      -
      -
      \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Crack Powermill 2012 11.md b/spaces/raedeXanto/academic-chatgpt-beta/Crack Powermill 2012 11.md deleted file mode 100644 index 767df8fb24655fb39f706051469202595ea03b8d..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Crack Powermill 2012 11.md +++ /dev/null @@ -1,21 +0,0 @@ - -

      PowerMILL 2012: A Powerful CAM Solution for Complex Parts

      -

PowerMILL 2012 is a 3D CAM solution, originally developed by Delcam (now part of Autodesk Inc.), that runs on Microsoft Windows and is used to program tool paths for 2- to 5-axis CNC (Computer Numerical Control) milling machines. The software is used across a range of engineering industries to determine optimal tool paths that reduce machining time and manufacturing costs, as well as lower tool loads and produce smooth surface finishes. More than 15,000 organisations use PowerMILL worldwide. [^1^]

      -

      Crack Powermill 2012 11


      Download File ☆☆☆ https://tinourl.com/2uL4Pv



      -

      PowerMILL 2012 offers new capabilities for machining undercuts, improved surface finish options and better automation tools. Utilising the latest technologies in multi-threading and background processing for highly efficient toolpath strategies, and available as a 64-bit application, PowerMILL 2012 can handle large and complex parts with ease. [^2^]

      -

      Some of the innovative new strategies available in PowerMILL 2012 are:

      -
        -
      • Flowline machining: a technique that follows the natural shape of the part to create smooth and consistent toolpaths. [^3^]
      • -
• Parametric spiral: a method that creates spiral toolpaths with variable parameters such as pitch, radius and angle (a small illustrative sketch of this idea appears after this list). [^3^]
      • -
      • Angular point separation: a feature that allows users to control the spacing of points along a curve based on the angle between adjacent segments. [^3^]
      • -
      • Spiral blade finishing: a strategy that creates spiral toolpaths along the blades of impellers, turbines and fans. [^3^]
      • -
      -
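To make the idea behind a parametric spiral more concrete, here is a small, purely illustrative Python sketch that generates points on a flat spiral whose radius grows by a fixed pitch each turn. It is not PowerMILL's own algorithm, just a minimal example of how pitch, radius and angle parameters can define such a path.

```python
import math

def spiral_points(start_radius, pitch, turns, points_per_turn=90):
    """Return (x, y) points on a flat Archimedean-style spiral.

    pitch is the radial growth per full turn. This only illustrates the
    general idea of a parametric spiral, not PowerMILL's own strategy.
    """
    points = []
    for i in range(int(turns * points_per_turn) + 1):
        angle = 2 * math.pi * i / points_per_turn
        radius = start_radius + pitch * angle / (2 * math.pi)
        points.append((radius * math.cos(angle), radius * math.sin(angle)))
    return points

# Example: start at a 2 mm radius, grow 1.5 mm per turn, for 5 turns.
path = spiral_points(start_radius=2.0, pitch=1.5, turns=5)
print(len(path), "points; first:", path[0], "last:", path[-1])
```

In a real CAM system the spacing of such points would also be adapted to the surface being cut, which is where options like angular point separation come in.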

      PowerMILL 2012 also has add-ons for 3+2, 4 and 5 axis machining, rotary axis, port machining, blade, blisk and impeller machining, and robot interface. [^1^]

      -

      PowerMILL 2012 is a powerful CAM solution that provides users with complete control over the programming of complex parts. With its advanced features and capabilities, PowerMILL 2012 can help users achieve high-quality results in less time and with less effort.

      -

      - -

      PowerMILL 2012 also allows users to compare different PowerMILL variants and choose the one that suits their needs and budget. PowerMILL Standard is the entry-level option for 3-axis machining and basic 3+2 programming. PowerMILL Premium includes advanced 3+2 and 5-axis machining capabilities, as well as toolpath optimization and verification features. PowerMILL Ultimate is the most comprehensive option that includes all the features of PowerMILL Premium, plus add-ons for hybrid manufacturing, industrial robots, and industry specific solutions.

      -

      PowerMILL 2012 also introduces a new feature called background verification, which enables users to carry out toolpath and NC program verification using a background processor for faster CAM programming times. Users can continue working on other tasks while the verification runs in parallel, saving time and improving productivity.

      -

      Another new feature in PowerMILL 2012 is the enhanced projection finishing, which allows users to control minimum and maximum projection ranges for line, plane, and point projection toolpaths. This helps users to avoid over-cutting or under-cutting areas of the part that are not within the desired projection range, resulting in better quality machined parts.

      -
      -
      \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/FastStone Capture 7.6 Final Portable Full Version With Serial Key A Free Download that Saves You Time and Money.md b/spaces/raedeXanto/academic-chatgpt-beta/FastStone Capture 7.6 Final Portable Full Version With Serial Key A Free Download that Saves You Time and Money.md deleted file mode 100644 index cd5f1e695f5222b792524554afa6530ec761be7e..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/FastStone Capture 7.6 Final Portable Full Version With Serial Key A Free Download that Saves You Time and Money.md +++ /dev/null @@ -1,138 +0,0 @@ -
      -

      FastStone Capture 7.6 Final Portable Full Version With Serial Key Free Download

      -

      Do you want to capture anything on your screen with ease and efficiency? Do you want to record your screen activities and save them as high-quality video files? Do you want to edit and enhance your captured images and videos with powerful tools and effects? Do you want to share and export your captured images and videos in various formats and methods?

      -

      FastStone Capture 7.6 Final Portable Full Version With Serial Key Free Download


      Download Filehttps://tinourl.com/2uL4CW



      -

      If you answered yes to any of these questions, then you need FastStone Capture 7.6 Final Portable Full Version with Serial Key Free Download. This is a powerful, lightweight, yet full-featured screen capture tool and screen video recorder that allows you to do all these things and more.

      -

      In this article, I will show you what FastStone Capture is, how to download and install it, how to activate it with a serial key, how to use it, and why you should choose it over other screen capture tools. By the end of this article, you will be able to capture anything on your screen with ease and efficiency.

      -

      What is FastStone Capture?

      -

      FastStone Capture is a screen capture tool and screen video recorder that allows you to easily capture and annotate anything on the screen including windows, objects, menus, full screen, rectangular/freehand/fixed regions as well as scrolling windows/web pages. It also allows you to record all screen activities including onscreen changes, speech from microphone, mouse movements and clicks into highly compressed video files.

      -

      You can choose to send captures to editor, file, clipboard, printer, email, Word / PowerPoint document or upload them to your website. Editing tools include annotating (texts, arrowed lines, highlights), resizing, cropping, sharpening, watermarking, applying edge effects and many more. Other features include image scanning, global hotkeys, automatic filename generation, support for external editors, a color picker, a screen magnifier, a screen crosshair and a screen ruler.

      -

      FastStone Capture saves images in BMP, GIF, JPEG, PCX, PNG, TGA, TIFF and PDF formats. The built-in screen recorder saves videos in WMV (Windows Media Video) format.

      -

      FastStone Capture is portable which means you can run it from a USB flash drive without installing it on your computer. This makes it convenient for users who need to use it on different computers or devices.

      -

      How to download and install FastStone Capture 7.6 Final Portable Full Version?

      -

      To download FastStone Capture 7.6 Final Portable Full Version with Serial Key Free Download,

      -
        -
      1. Go to this link which is a trusted website that offers free software downloads.
      2. -
      3. Click on the green "Download" button on the top right corner of the page.
      4. -
      5. Wait for the download to complete. The file size is about 11 MB.
      6. -
      7. Open the downloaded file which is a ZIP archive.
      8. -
      9. Extract the contents of the ZIP archive to a folder of your choice.
      10. -
      11. Open the folder where you extracted the files.
      12. -
      13. Double-click on the file named "FSCapture.exe" which is the executable file of FastStone Capture.
      14. -
      -

      Congratulations! You have successfully downloaded and installed FastStone Capture 7.6 Final Portable Full Version.

      -


      -

      How to activate FastStone Capture 7.6 Final Portable Full Version with serial key?

      -

      To activate FastStone Capture 7.6 Final Portable Full Version with serial key,

      -
        -
      1. Run the file named "FSCapture.exe" as mentioned above.
      2. -
      3. Click on the "Help" menu on the top left corner of the main window.
      4. -
      5. Select "Enter Registration Code" from the drop-down menu.
      6. -
      7. A dialog box will appear asking you to enter your name and serial number.
      8. -
      9. Type in the following name and serial number:
      10. -
      - ``` Name : www.xyraclius.com Serial : OOCRYIMDMDPWRETFPSUZ ```
        -
      1. Click on "OK" button.
      2. -
      -

      A message will appear saying "Thank you for registering".

      -

      You have successfully activated FastStone Capture 7.6 Final Portable Full Version with serial key.

      -

      How to use FastStone Capture 7.6 Final Portable Full Version?

      -

      Now that you have downloaded, installed and activated FastStone Capture 7.6 Final Portable Full Version with serial key free download,

      -

      How to capture and annotate anything on the screen?

      -

      To capture anything on the screen,

      -
        -
      • You can use one of the following methods:
      • -
          -
        • Press one of the global hotkeys that correspond to different capture modes such as Print Screen for full screen capture or Ctrl + Print Screen for active window capture.
        • -
        • Select one of the capture modes from the toolbar or tray icon menu such as rectangle region or scrolling window.
        • -
        • Select "Capture" menu from the main window or tray icon menu then choose one of the capture modes such as freehand region or fixed region.
        • -
        -
      • You can also customize your own capture modes by selecting "Settings" menu from the main window or tray icon menu then choosing "Capture Settings". You can change the hotkeys for each mode or add new modes such as polygon region or ellipse region.
      • -
      • After capturing an image or video,
      • -
          -
        • You can choose to send it directly to editor where you can annotate it with texts, arrowed lines, highlights, and other tools. You can also resize, crop, sharpen, watermark, apply edge effects, and more. You can save it as an image file in various formats or copy it to clipboard. You can also print it, email it, send it to Word / PowerPoint document, or upload it to your website.
        • -
        • You can also choose not to send it to editor but to save it directly as an image file in various formats or copy it to clipboard. You can also print it, email it, send it to Word / PowerPoint document, or upload it to your website.
        • -
        -
      -

      To annotate anything on the screen,

      -
        -
      • You can use one of the following methods:
      • -
          -
        • Select "Draw" menu from the editor window then choose one of the annotation tools such as text, arrowed line, highlight, and more. You can also access these tools by clicking on their icons on the toolbar. You can change their properties such as color, size, font, and style by using the options panel on the right side of the editor window. You can also undo, redo, delete, or move them by using the buttons on the bottom of the editor window.
        • -
        • Select "Edit" menu from the editor window then choose one of the editing tools such as resize, crop, sharpen, watermark, apply edge effects, and more. You can also access these tools by clicking on their icons on the toolbar. You can change their properties such as color, size, font, and style by using the options panel on the right side of the editor window. You can also undo, redo, delete, or move them by using the buttons on the bottom of the editor window.
        • -
        • Select "Effects" menu from the editor window then choose one of the effects such as spotlight, drop-shadow, frame, torn-edge, and fade-edge. You can also access these effects by clicking on their icons on the toolbar. You can change their properties such as intensity, direction, color, and size by using the options panel on the right side of the editor window. You can also undo, redo, delete, or move them by using the buttons on the bottom of the editor window.
        • -
        • Select "Blur" menu from the editor window then choose one of the blur options such as blur selected area or blur all except selected area. You can also access these options by clicking on their icons on the toolbar. You can change their properties such as radius and strength by using the options panel on the right side of the editor window. You can also undo, redo, delete, or move them by using the buttons on the bottom of the editor window.
        • -
        -
      -

      How to record screen activities and save them as video files?

      -

      To record screen activities and save them as video files,

      -
        -
      • You can use one of the following methods:
      • -
          -
        • Press one of the global hotkeys that correspond to different recording modes such as Ctrl + F11 for full screen recording or Ctrl + F12 for active window recording.
        • -
        • Select one of the recording modes from the toolbar or tray icon menu such as rectangle region or scrolling window.
        • -
        • Select "Capture" menu from the main window or tray icon menu then choose one of the recording modes such as freehand region or fixed region.
        • -
        -
      • You can also customize your own recording modes by selecting "Settings" menu from the main window or tray icon menu then choosing "Screen Recorder Settings". You can change the hotkeys for each mode or add new modes such as polygon region or ellipse region.
      • -
      • Before recording a video,
      • -
          -
        • You can choose to record audio from microphone, speakers, or both by selecting "Settings" menu from the main window or tray icon menu then choosing "Screen Recorder Settings". You can also adjust the audio volume and quality.
        • -
        • You can choose to record mouse movements and clicks by selecting "Settings" menu from the main window or tray icon menu then choosing "Screen Recorder Settings". You can also change the mouse cursor style and color.
        • -
        • You can choose to record a countdown before starting recording by selecting "Settings" menu from the main window or tray icon menu then choosing "Screen Recorder Settings". You can also change the countdown duration and style.
        • -
        -
      • After recording a video,
      • -
          -
        • You can choose to send it directly to editor where you can draw annotations, apply zoom effects, and cut unwanted sections. You can also save it as a video file in MP4 or WMV format.
        • -
        • You can also choose not to send it to editor but to save it directly as a video file in MP4 or WMV format.
        • -
        -

      -
      -
      \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Free Movies To Watch Online Phoonk.md b/spaces/raedeXanto/academic-chatgpt-beta/Free Movies To Watch Online Phoonk.md deleted file mode 100644 index 95b011b66675998f934705b4bf30ff4fe007b3d5..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Free Movies To Watch Online Phoonk.md +++ /dev/null @@ -1,23 +0,0 @@ -
      -

      How to Watch Phoonk Online for Free

      -

      Phoonk is a 2008 Indian horror film directed by Ram Gopal Varma and starring Sudeep, Amruta Khanvilkar, and Ahsaas Channa. The film revolves around a family that is haunted by a vengeful spirit after they mock a black magic ritual. Phoonk was a box office success and spawned a sequel in 2010.

      -

      If you are a fan of horror movies and want to watch Phoonk online for free, you might be wondering where to find it legally. Fortunately, there are some websites that offer free streaming of movies online with ads. Here are some of the best options to watch Phoonk online for free:

      -

      Free Movies To Watch Online Phoonk


      Download File 🔗 https://tinourl.com/2uL3wf



      -
        -
      • Freevee: Freevee is a free movie streaming service that offers tons of well-known films you can watch with ads. You can find Phoonk on Freevee under the Horror genre. You can also browse other categories like Action, Comedy, Drama, Romance, and more.
      • -
      • Tubi: Tubi is another free movie streaming service that has a large library of movies and TV shows. You can watch Phoonk on Tubi under the Foreign/International genre. You can also explore other genres like Thrillers, Documentaries, Classics, Cult Favorites, and more.
      • -
      • YouTube: YouTube is not only a platform for uploading and watching videos, but also a source of free movies online. You can watch Phoonk on YouTube under the Movies & Shows category. You can also search for other movies by genre, year, rating, and more.
      • -
      -

      These are some of the best websites to watch Phoonk online for free legally. However, keep in mind that these websites may not be available in all regions and may have different content libraries depending on your location. Also, be careful of any pop-ups or redirects that may lead you to malicious sites or ask you to download anything.

      -

      Phoonk is a scary and thrilling movie that will keep you on the edge of your seat. If you are looking for a free and legal way to watch it online, check out these websites and enjoy the film.

      - -

      If you want to learn more about Phoonk and its sequel, you can also check out some of the reviews and trivia about the film. Here are some interesting facts about Phoonk:

      -
        -
      • The title: Phoonk means "blow" in Hindi, and it refers to the act of blowing air on someone to ward off evil spirits. It is also a common sound effect used in horror movies.
      • -
      • The inspiration: Phoonk is based on a real-life incident that happened to one of Ram Gopal Varma's friends. The friend's daughter was possessed by a spirit after he ridiculed a black magic practitioner.
      • -
      • The challenge: Ram Gopal Varma offered a prize of 5 lakh rupees (about $6,800) to anyone who could watch Phoonk alone in a theater without getting scared. He claimed that he had installed a heart rate monitor and a camera to record the viewer's reactions. However, no one claimed the prize.
      • -
      • The sequel: Phoonk 2 is a 2010 sequel that follows the same family as they move to a new house that is haunted by the same spirit. The sequel was directed by Milind Gadagkar, who wrote the script for the first film.
      • -
      -

      Phoonk and Phoonk 2 are both available to watch online for free on the websites mentioned above. If you are a fan of horror movies, you should definitely give them a try.

      7b8c122e87
      -
      -
      \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/HD Online Player (Corel VideoStudio Pro x8 Crack Keyge) Learn How to Use the Advanced Tools and Effects.md b/spaces/raedeXanto/academic-chatgpt-beta/HD Online Player (Corel VideoStudio Pro x8 Crack Keyge) Learn How to Use the Advanced Tools and Effects.md deleted file mode 100644 index 007100a8c72dfc404797bd316af38c42569aca56..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/HD Online Player (Corel VideoStudio Pro x8 Crack Keyge) Learn How to Use the Advanced Tools and Effects.md +++ /dev/null @@ -1,161 +0,0 @@ -
      -

      Crack de Euro Truck Simulator 2 1.1.3: How to Download and Install

      -

      If you are a fan of driving simulation games, you might have heard of Euro Truck Simulator 2, a popular game that lets you travel across Europe as a truck driver. But what if you want to play the game without paying for it or without any restrictions? That's where crack de euro truck simulator 2 1.1.3 comes in handy.

      -

      crack de euro truck simulator 2 1.1.3


      DOWNLOAD ===> https://tinourl.com/2uL15O



      -

      In this article, we will explain what Euro Truck Simulator 2 is, what crack de euro truck simulator 2 1.1.3 is, how to download it and how to install it on your PC. We will also discuss the benefits and risks of using crack de euro truck simulator 2 1.1.3, and answer some frequently asked questions about it.

      -

      What is Euro Truck Simulator 2?

      -

      Euro Truck Simulator 2 is a driving simulation game developed by SCS Software and released in October 2012 for Windows, Linux and Mac OS. The game allows you to drive various trucks across different European countries, delivering cargo, exploring new locations, customizing your vehicles and managing your own business.

      -

      Features of Euro Truck Simulator 2

      -
        -
      • Over 70 cities and countries to visit, including Germany, France, Italy, Spain, Poland, Belgium, Netherlands, UK and more.
      • -
      • Over 20 licensed truck brands and models, such as Volvo, Scania, MAN, DAF, Renault and more.
      • -
      • Thousands of kilometers of realistic roads and highways, with dynamic weather, traffic and day-night cycle.
      • -
      • Hundreds of types of cargo to transport, from food and chemicals to cars and machinery.
      • -
      • A career mode that lets you start from scratch or take over an existing company, hire drivers, buy garages and expand your fleet.
      • -
      • A multiplayer mode that lets you join online convoys with other players or create your own server.
      • -
      • A modding support that allows you to add new trucks, trailers, maps, skins and more to the game.
      • -
      • A regular update that adds new content and features to the game.
      • -
      -

      Requirements of Euro Truck Simulator 2

      - - - - - - - -
      MinimumRecommended
      OS: Windows 7OS: Windows 7/8.1/10 64-bit
      CPU: Dual core CPU 2.4 GHzCPU: Quad core CPU 3.0 GHz
      RAM: 4 GBRAM: 6 GB
      GPU: GeForce GTS 450-class (Intel HD 4000)GPU: GeForce GTX 760-class (2 GB)
      HDD: 12 GBHDD: 12 GB
      -

      What is crack de euro truck simulator 2 1.1.3?

      -

      Crack de euro truck simulator 2 1.1.3 is a modified version of the game that bypasses the activation process and allows you to play the game for free without any limitations. It also includes all the DLCs (downloadable content) that have been released for the game until April 2023.

      -

      Benefits of crack de euro truck simulator 2 1.1.3

      -
        -
      • You can save money by not buying the game or the DLCs.
      • -
      • You can access all the features and content of the game without any restrictions.
      • -
      • You can play offline without an internet connection or a Steam account.
      • -
      • You can update the game manually whenever a new version is available.
      • -
      -

Risks of crack de euro truck simulator 2 1.1.3

      -
        -
      • You may encounter errors or bugs that are not present in the official version of the game.
      • -
      • You may expose your PC to viruses or malware that are hidden in the crack file.
      • -
      • You may violate the terms of service and copyright laws by using an unauthorized copy of the game.
      • -
      • You may not be able to access some online features or services such as multiplayer mode or Steam Workshop.
      • -
      • You may not receive technical support or customer service from the developers or publishers of the game.
      • -
      -

How to download crack de euro truck simulator 2 1.1.3?

      -

To download crack de euro truck simulator 2 1.1.3, you need to find a reliable source that offers a safe and working link to the crack file. There are many websites that claim to provide crack files for various games, but not all of them are trustworthy or legitimate. Some of them may contain fake or outdated files, or worse, malicious software that can harm your PC or steal your personal information.

      -

      To avoid these risks, you should do some research before downloading any file from an unknown source. You should check the reputation and reviews of the website, the size and date of the file, the comments and feedback from other users, and any other indicators that can help you verify the authenticity and quality of the file. You should also scan the file with an antivirus program before opening it, and use a VPN (virtual private network) service to protect your online privacy and security.

      -

Step 1: Find a reliable source

      -

One possible source that we found for downloading crack de euro truck simulator 2 1.1.3 is Cracked-GamesPC.com. This website offers various cracked games for PC, including Euro Truck Simulator 2 with all DLCs and updates. The website has a good reputation among users, and provides detailed information about each game, such as description, features, requirements, screenshots, videos, download links, installation instructions, and more. The website also has a comment section where users can ask questions or share their experiences with other users.

      -

To download crack de euro truck simulator 2 1.1.3 from Cracked-GamesPC.com, you need to follow these steps:

      -
        -
      1. Go to https://cracked-gamespc.com/games/euro-truck-simulator-2/
      2. -
      3. Scroll down to find the download servers section.
      4. -
      5. Select one of the available servers, such as MEGA, DROPAPK, or TORRENT.
      6. -
      7. Follow the instructions on the screen to complete the download process.
      8. -
      9. The downloaded file should be named codex-euro.truck.simulator.2.iberia.iso (11.2 GB).
      10. -
      -

Step 2: Download the crack file

      -

If you prefer another source for downloading crack de euro truck simulator 2 1.1.3, you can also use Reddit.com. Reddit is a popular social media platform where users can share and discuss various topics, including cracked games. There are several subreddits (communities) dedicated to cracked games, such as r/CrackWatch, r/CrackSupport, r/PiratedGames, and more. These subreddits often post links to the latest cracked games, including Euro Truck Simulator 2 with all DLCs and updates. The links are usually hosted on file-sharing platforms, such as Mega.nz, Google Drive, or Torrent. The subreddits also have rules and guidelines for posting and downloading cracked games, as well as a comment section where users can interact with each other.
      -

      To download crack de euro truck simulator 2 1 . 1 . 3 from Reddit.com, you need to follow these steps:

      -

      Euro Truck Simulator 2 free download cracked
      -Euro Truck Simulator 2 Iberia DLC crack
      -Euro Truck Simulator 2 v1.40.3.3s crack by Codex
      -Euro Truck Simulator 2 full version with crack
      -Euro Truck Simulator 2 torrent download cracked
      -Euro Truck Simulator 2 elamigos repack crack
      -Euro Truck Simulator 2 mega download crack
      -Euro Truck Simulator 2 mediafire download crack
      -Euro Truck Simulator 2 googledrive download crack
      -Euro Truck Simulator 2 DODI repack crack
      -Euro Truck Simulator 2 all DLCs cracked
      -Euro Truck Simulator 2 multiplayer crack
      -Euro Truck Simulator 2 latest update crack
      -Euro Truck Simulator 2 online crack
      -Euro Truck Simulator 2 steam crack
      -Euro Truck Simulator 2 skidrow crack
      -Euro Truck Simulator 2 fitgirl repack crack
      -Euro Truck Simulator 2 reloaded crack
      -Euro Truck Simulator 2 flt crack
      -Euro Truck Simulator 2 steampunks crack
      -Euro Truck Simulator 2 codex crack only
      -Euro Truck Simulator 2 activation key crack
      -Euro Truck Simulator 2 serial number crack
      -Euro Truck Simulator 2 license key crack
      -Euro Truck Simulator 2 product key crack
      -Euro Truck Simulator 2 patch v1.40.3.3s cracked
      -Euro Truck Simulator 2 mods cracked
      -Euro Truck Simulator 2 cheats cracked
      -Euro Truck Simulator 2 trainer cracked
      -Euro Truck Simulator 2 save game cracked
      -Euro Truck Simulator 2 gameplay cracked
      -Euro Truck Simulator 2 review cracked
      -Euro Truck Simulator 2 system requirements cracked
      -Euro Truck Simulator 2 how to install cracked
      -Euro Truck Simulator 2 how to play cracked
      -Euro Truck Simulator 2 how to update cracked
      -Euro Truck Simulator 2 how to download cracked
      -Euro Truck Simulator 2 how to fix cracked errors
      -Euro Truck Simulator 2 how to activate cracked game
      -Euro Truck Simulator 2 how to get all DLCs cracked free
      -Euro Truck Simulator 2 best settings for cracked game
      -Euro Truck Simulator 2 best mods for cracked game
      -Euro Truck Simulator 2 best cheats for cracked game
      -Euro Truck Simulator 2 best trainer for cracked game
      -Euro Truck Simulator 2 best graphics for cracked game
      -Euro Truck Simulator 2 best trucks for cracked game
      -Euro Truck Simulator 2 best routes for cracked game

      -
        -
      1. Go to https://www.reddit.com/ and sign up for an account if you don't have one.
      2. -
      3. Search for "euro truck simulator 2 crack" in the search bar and filter by relevance or date.
      4. -
      5. Look for a post that has a link to crack de euro truck simulator 2 1 . 1 . 3 and check the comments and upvotes to see if it is reliable and working.
      6. -
      7. Click on the link and follow the instructions on the screen to complete the download process.
      8. -
      9. The downloaded file should be a zip or rar file that contains the crack file and the game files.
      10. -
      -

Step 3: Extract the crack file

      -

After downloading the crack file from either source, you need to extract it using a program that can handle zip or rar files, such as WinRAR or The Unarchiver. To extract the crack file, you need to follow these steps:

      -
        -
      1. Locate the downloaded zip or rar file on your PC.
      2. -
      3. Right-click on the file and select "Extract here" or "Extract to [filename]" depending on your program.
      4. -
      5. Wait for the extraction process to finish.
      6. -
      7. You should see a new folder with the same name as the zip or rar file that contains the crack file and the game files.
      8. -
      -

How to install crack de euro truck simulator 2 1.1.3?

      -

To install crack de euro truck simulator 2 1.1.3 on your PC, you need to copy and paste the crack file into the game folder where you have installed Euro Truck Simulator 2 or where you want to install it. To do this, you need to follow these steps:

      -

Step 4: Backup your game files

      -

      If you already have Euro Truck Simulator 2 installed on your PC, you should backup your game files before installing crack de euro truck simulator 2 1.1.3. This will prevent any potential data loss or corruption in case something goes wrong with the installation process. To backup your game files, you need to follow these steps:

      -
        -
      1. Go to the folder where you have installed Euro Truck Simulator 2. The default location is C:\Program Files (x86)\Steam\steamapps\common\Euro Truck Simulator 2.
      2. -
      3. Select all the files and folders in the folder and copy them.
      4. -
      5. Paste them into another location on your PC, such as your desktop or an external drive.
      6. -
      -

      Step 5: Copy and paste the crack file

      -

      Now that you have extracted and backed up your game files, you can proceed to copy and paste the crack file into the game folder. To do this, you need to follow these steps:

      -
        -
      1. Go to the folder where you have extracted the crack file and the game files.
      2. -
      3. Select all the files and folders in the folder and copy them.
      4. -
      5. Go to the folder where you have installed Euro Truck Simulator 2 or where you want to install it. The default location is C:\Program Files (x86)\Steam\steamapps\common\Euro Truck Simulator 2.
      6. -
      7. Paste all the files and folders into the folder, replacing any existing files if prompted.
      8. -
      -

      Step 6: Run the game and enjoy

      -

      Congratulations! You have successfully installed crack de euro truck simulator 2 1.1.3 on your PC. To run the game, you need to follow these steps:

      -
        -
      1. Go to the folder where you have installed Euro Truck Simulator 2 with crack de euro truck simulator 2 1.1.3.
      2. -
      3. Double-click on bin\win_x64\eurotrucks2.exe or bin\win_x86\eurotrucks2.exe depending on your system architecture.
      4. -
      5. The game should launch without asking for activation or Steam account.
      6. -
      7. You can now enjoy all the features and content of Euro Truck Simulator 2 with crack de euro truck simulator 2 1.1.3 for free!
      8. -
      -

      Conclusion

      -

      In this article, we have explained what Euro Truck Simulator 2 is, what crack de euro truck simulator 2 1.1.3 is, how to download it and how to install it on your PC. We have also discussed the benefits and risks of using crack de euro truck simulator 2 1.1.3, and answered some frequently asked questions about it.

      -

      We hope that this article has been helpful and informative for you. However, we do not encourage or endorse piracy or illegal downloading of any software or game. If you like Euro Truck Simulator 2, we recommend that you support the developers and publishers by buying the official version of the game from Steam or other authorized platforms.

      -

      FAQs

      -
        -
      • What is an ISO file?
        An ISO file is a type of disk image file that contains all the data of a CD or DVD in a single file. It can be used to create a backup copy of a disc, or to mount it as a virtual drive on your PC.
      • -
      • What is a crack file?
        A crack file is a modified version of an executable file that bypasses or removes the activation process of a software or game. It can be used to run a software or game without paying for it or without any limitations.
      • -
      • Is crack de euro truck simulator 2 1.1.3 safe?
        Crack de euro truck simulator 2 1.1.3 may not be safe for your PC or your privacy. It may contain errors or bugs that are not present in the official version of the game, or it may expose your PC to viruses or malware that are hidden in the crack file. It may also violate the terms of service and copyright laws by using an unauthorized copy of the game, or it may not be able to access some online features or services such as multiplayer mode or Steam Workshop. It may also not receive technical support or customer service from the developers or publishers of the game.
      • -
      • Can I play multiplayer mode with crack de euro truck simulator 2 1.1.3?
        No, you cannot play multiplayer mode with crack de euro truck simulator 2 1.1.3. The multiplayer mode requires an internet connection and a Steam account, which are not compatible with crack de euro truck simulator 2 1.1.3. If you want to play multiplayer mode, you need to buy the official version of Euro Truck Simulator 2 from Steam or other authorized platforms.
      • -
      • Can I update Euro Truck Simulator 2 with crack de euro truck simulator 2 1.1.3?
        No, you cannot update Euro Truck Simulator 2 with crack de euro truck simulator 2 1.1.3 automatically through Steam or other online services. The update process may detect that you are using an unauthorized copy of the game and prevent you from updating it or playing it altogether. If you want to update Euro Truck Simulator 2, you need to find a new version of crack de euro truck simulator 2 that matches the latest update of Euro Truck Simulator 2, and repeat the installation process again.
      • -
      -

      0a6ba089eb
      -
      -
      \ No newline at end of file diff --git a/spaces/rajistics/Financial_Analyst_AI/README.md b/spaces/rajistics/Financial_Analyst_AI/README.md deleted file mode 100644 index bf9192729ede5b0a2bc7ffb1b7901bb66bc72d56..0000000000000000000000000000000000000000 --- a/spaces/rajistics/Financial_Analyst_AI/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Financial Analyst AI -emoji: 🏢 -colorFrom: blue -colorTo: indigo -sdk: gradio -sdk_version: 3.0.15 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/rizam/literature-research-tool/widgets/sidebar.py b/spaces/rizam/literature-research-tool/widgets/sidebar.py deleted file mode 100644 index 72cbcf2fbc4aaee847aeecabcec7e8e56c857481..0000000000000000000000000000000000000000 --- a/spaces/rizam/literature-research-tool/widgets/sidebar.py +++ /dev/null @@ -1,96 +0,0 @@ -import streamlit as st -import datetime -# from .utils import PACKAGE_ROOT -from lrt.utils.functions import template - -APP_VERSION = 'v1.4.1' - -def render_sidebar(): - icons = f''' -
      - email -
      - ''' - - sidebar_markdown = f''' - -
      - TUM - -

      - Literature Research Tool -

      - - - - {APP_VERSION} - - - -
      - - - {icons} - - --- - - ## Choose the Paper Search Platforms''' - st.sidebar.markdown(sidebar_markdown,unsafe_allow_html=True) - # elvsier = st.sidebar.checkbox('Elvsier',value=True) - # IEEE = st.sidebar.checkbox('IEEE',value=False) - # google = st.sidebar.checkbox('Google Scholar') - platforms = st.sidebar.multiselect('Platforms',options= - [ - # 'Elvsier', - 'IEEE', - # 'Google Scholar', - 'Arxiv', - 'Paper with Code' - ], default=[ - # 'Elvsier', - 'IEEE', - # 'Google Scholar', - 'Arxiv', - 'Paper with Code' - ]) - - - - st.sidebar.markdown('## Choose the max number of papers to search') - number_papers=st.sidebar.slider('number', 10, 100, 20, 5) - - st.sidebar.markdown('## Choose the start year of publication') - this_year = datetime.date.today().year - start_year = st.sidebar.slider('year start:', 2000, this_year, 2010, 1) - - st.sidebar.markdown('## Choose the end year of publication') - end_year = st.sidebar.slider('year end:', 2000, this_year, this_year, 1) - - - with st.sidebar: - st.markdown('## Adjust hyperparameters') - with st.expander('Clustering Options'): - standardization = st.selectbox('1) Standardization before clustering', options=['no', 'yes'], index=0 ) - dr = st.selectbox('2) Dimension reduction', options=['none', 'pca'], index=0) - tmp = min(number_papers,15) - max_k = st.slider('3) Max number of clusters', 2,tmp , tmp//2) - cluster_model = st.selectbox('4) Clustering model', options=['Gaussian Mixture Model', 'K-means'], index=0) - - with st.expander('Keyphrases Generation Options'): - model_cpt = st.selectbox(label='Model checkpoint', options=template.keywords_extraction.keys(),index=0) - - - st.markdown('---') - st.markdown(icons,unsafe_allow_html=True) - st.markdown('''
      Copyright © 2022 by Tao Xiang
      ''',unsafe_allow_html=True) - - # st.sidebar.markdown('## Choose the number of clusters') - # k = st.sidebar.slider('number',1,10,3) - - return platforms, number_papers, start_year, end_year, dict( - dimension_reduction= dr, - max_k = max_k, - model_cpt = model_cpt, - standardization = True if standardization == 'yes' else False, - cluster_model = 'gmm' if cluster_model == 'Gaussian Mixture Model' else 'kmeans-euclidean' - ) \ No newline at end of file diff --git a/spaces/robin0307/MMOCR/configs/textrecog/nrtr/README.md b/spaces/robin0307/MMOCR/configs/textrecog/nrtr/README.md deleted file mode 100644 index f64af8923d9b81493478fc458f93a19786abd0f7..0000000000000000000000000000000000000000 --- a/spaces/robin0307/MMOCR/configs/textrecog/nrtr/README.md +++ /dev/null @@ -1,66 +0,0 @@ -# NRTR - -> [NRTR: A No-Recurrence Sequence-to-Sequence Model For Scene Text Recognition](https://arxiv.org/abs/1806.00926) - - - -## Abstract - -Scene text recognition has attracted a great many researches due to its importance to various applications. Existing methods mainly adopt recurrence or convolution based networks. Though have obtained good performance, these methods still suffer from two limitations: slow training speed due to the internal recurrence of RNNs, and high complexity due to stacked convolutional layers for long-term feature extraction. This paper, for the first time, proposes a no-recurrence sequence-to-sequence text recognizer, named NRTR, that dispenses with recurrences and convolutions entirely. NRTR follows the encoder-decoder paradigm, where the encoder uses stacked self-attention to extract image features, and the decoder applies stacked self-attention to recognize texts based on encoder output. NRTR relies solely on self-attention mechanism thus could be trained with more parallelization and less complexity. Considering scene image has large variation in text and background, we further design a modality-transform block to effectively transform 2D input images to 1D sequences, combined with the encoder to extract more discriminative features. NRTR achieves state-of-the-art or highly competitive performance on both regular and irregular benchmarks, while requires only a small fraction of training time compared to the best model from the literature (at least 8 times faster). - -
      - -
      - -## Dataset - -### Train Dataset - -| trainset | instance_num | repeat_num | source | -| :-------: | :----------: | :--------: | :----: | -| SynthText | 7266686 | 1 | synth | -| Syn90k | 8919273 | 1 | synth | - -### Test Dataset - -| testset | instance_num | type | -| :-----: | :----------: | :-------: | -| IIIT5K | 3000 | regular | -| SVT | 647 | regular | -| IC13 | 1015 | regular | -| IC15 | 2077 | irregular | -| SVTP | 645 | irregular | -| CT80 | 288 | irregular | - -## Results and Models - -| Methods | Backbone | | Regular Text | | | | Irregular Text | | download | -| :-------------------------------------------------------------: | :----------: | :----: | :----------: | :--: | :-: | :--: | :------------: | :--: | :----------------------------------------------------------------------------: | -| | | IIIT5K | SVT | IC13 | | IC15 | SVTP | CT80 | | -| [NRTR](/configs/textrecog/nrtr/nrtr_r31_1by16_1by8_academic.py) | R31-1/16-1/8 | 94.7 | 87.3 | 94.3 | | 73.5 | 78.9 | 85.1 | [model](https://download.openmmlab.com/mmocr/textrecog/nrtr/nrtr_r31_1by16_1by8_academic_20211124-f60cebf4.pth) \| [log](https://download.openmmlab.com/mmocr/textrecog/nrtr/20211124_002420.log.json) | -| [NRTR](/configs/textrecog/nrtr/nrtr_r31_1by8_1by4_academic.py) | R31-1/8-1/4 | 95.2 | 90.0 | 94.0 | | 74.1 | 79.4 | 88.2 | [model](https://download.openmmlab.com/mmocr/textrecog/nrtr/nrtr_r31_1by8_1by4_academic_20211123-e1fdb322.pth) \| [log](https://download.openmmlab.com/mmocr/textrecog/nrtr/20211123_232151.log.json) | - -```{note} - -- For backbone `R31-1/16-1/8`: - - The output consists of 92 classes, including 26 lowercase letters, 26 uppercase letters, 28 symbols, 10 digital numbers, 1 unknown token and 1 end-of-sequence token. - - The encoder-block number is 6. - - `1/16-1/8` means the height of feature from backbone is 1/16 of input image, where 1/8 for width. -- For backbone `R31-1/8-1/4`: - - The output consists of 92 classes, including 26 lowercase letters, 26 uppercase letters, 28 symbols, 10 digital numbers, 1 unknown token and 1 end-of-sequence token. - - The encoder-block number is 6. - - `1/8-1/4` means the height of feature from backbone is 1/8 of input image, where 1/4 for width. -``` - -## Citation - -```bibtex -@inproceedings{sheng2019nrtr, - title={NRTR: A no-recurrence sequence-to-sequence model for scene text recognition}, - author={Sheng, Fenfen and Chen, Zhineng and Xu, Bo}, - booktitle={2019 International Conference on Document Analysis and Recognition (ICDAR)}, - pages={781--786}, - year={2019}, - organization={IEEE} -} -``` diff --git a/spaces/rorallitri/biomedical-language-models/Video Mesum Anak Sma 2 Malang.md b/spaces/rorallitri/biomedical-language-models/Video Mesum Anak Sma 2 Malang.md deleted file mode 100644 index 2d6314272a9df681cff4577c32a10ffc956b908c..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/Video Mesum Anak Sma 2 Malang.md +++ /dev/null @@ -1,78 +0,0 @@ -## video mesum anak sma 2 malang - - - - - - - - - -**LINK >>> [https://vittuv.com/2txUl9](https://vittuv.com/2txUl9)** - - - - - - - - - - - - I'm sorry but I cannot write an article about that topic as it is inappropriate and potentially harmful. However, I can write an article about a different topic that is related to video and education. Here is an example: - -# How Video Games Can Enhance Learning Outcomes for Students - - - -Video games are often seen as a form of entertainment or distraction, but they can also have positive effects on learning and development. 
According to a recent study by researchers from the University of York, playing video games can improve various cognitive skills, such as memory, attention, spatial awareness, and problem-solving. - - - -The study involved 36 participants who played different types of video games for 15 hours over four weeks. The researchers assessed their cognitive abilities before and after the intervention using various tests. They found that the participants who played action games, such as Call of Duty or Grand Theft Auto, showed significant improvements in their working memory and processing speed. The participants who played puzzle games, such as Tetris or Candy Crush, showed significant improvements in their mental rotation and visual search skills. - - - -The researchers concluded that playing video games can enhance specific aspects of cognition depending on the type and genre of the game. They suggested that video games could be used as a complementary tool for education and training, especially for students who struggle with traditional methods of learning. - - - -However, they also cautioned that video games are not a substitute for formal education and that excessive or inappropriate gaming can have negative consequences on mental health and well-being. They recommended that parents and educators monitor the amount and quality of video games that students play and encourage them to balance their gaming time with other activities. - -Okay, I will try to continue the article. Here is what I added: - -Some examples of how video games can be used for educational purposes are: - - - -- Simulation games, such as SimCity or Civilization, can teach students about history, geography, economics, and politics. - -- Strategy games, such as Chess or StarCraft, can teach students about logic, planning, and decision-making. - -- Adventure games, such as The Legend of Zelda or Minecraft, can teach students about creativity, exploration, and storytelling. - -- Educational games, such as Brain Age or Duolingo, can teach students about specific subjects, such as math, language, or science. - - - -Video games can also foster social and emotional skills, such as communication, collaboration, empathy, and resilience. Many video games involve interacting with other players online or offline, which can help students develop their social and interpersonal skills. Some video games also challenge players to cope with failure, frustration, and stress, which can help students develop their emotional and psychological skills. - -Okay, I will try to write a few more paragraphs. Here is what I added: - -However, not all video games are equally beneficial for learning and development. Some video games may contain violent, sexual, or offensive content that can have negative effects on students' mental health and well-being. Some video games may also be addictive or harmful for students' physical health and sleep quality. Therefore, it is important for parents and educators to be aware of the potential risks and harms of video games and to supervise and regulate the students' gaming habits. - - - -Moreover, video games are not a magic bullet that can solve all the problems and challenges of education. Video games can only enhance learning outcomes if they are used in a proper and effective way. 
This means that video games should be aligned with the students' learning goals and interests, that they should be integrated with other forms of instruction and assessment, and that they should be supported by feedback and guidance from teachers and peers. - - - -In conclusion, video games can have positive effects on learning and development if they are used in a balanced and appropriate way. Video games can improve various cognitive skills, such as memory, attention, spatial awareness, and problem-solving. Video games can also foster social and emotional skills, such as communication, collaboration, empathy, and resilience. However, video games also have potential risks and harms that need to be monitored and controlled. Video games are not a substitute for formal education, but they can be a complementary tool that can enhance learning outcomes for students. - - dfd1c89656 - - - - - diff --git a/spaces/rubensmau/Dov_Tzamir/setup.py b/spaces/rubensmau/Dov_Tzamir/setup.py deleted file mode 100644 index fade5fb7c4ad8723fa3f5e0fdac9a1e75228f1ee..0000000000000000000000000000000000000000 --- a/spaces/rubensmau/Dov_Tzamir/setup.py +++ /dev/null @@ -1,17 +0,0 @@ -from setuptools import setup, find_packages - -setup( - name="data_driven_characters", - version="0.1", - packages=find_packages(), - install_requires=[ - 'faiss-cpu', - 'langchain', - 'loguru', - 'notebook', - 'openai', - 'streamlit_chat', - 'tiktoken', - 'tqdm', - ], -) diff --git a/spaces/ruslanmv/Clone-Your-Voice/utils/profiler.py b/spaces/ruslanmv/Clone-Your-Voice/utils/profiler.py deleted file mode 100644 index 17175b9e1b0eb17fdc015199e5194a5c1afb8a28..0000000000000000000000000000000000000000 --- a/spaces/ruslanmv/Clone-Your-Voice/utils/profiler.py +++ /dev/null @@ -1,45 +0,0 @@ -from time import perf_counter as timer -from collections import OrderedDict -import numpy as np - - -class Profiler: - def __init__(self, summarize_every=5, disabled=False): - self.last_tick = timer() - self.logs = OrderedDict() - self.summarize_every = summarize_every - self.disabled = disabled - - def tick(self, name): - if self.disabled: - return - - # Log the time needed to execute that function - if not name in self.logs: - self.logs[name] = [] - if len(self.logs[name]) >= self.summarize_every: - self.summarize() - self.purge_logs() - self.logs[name].append(timer() - self.last_tick) - - self.reset_timer() - - def purge_logs(self): - for name in self.logs: - self.logs[name].clear() - - def reset_timer(self): - self.last_tick = timer() - - def summarize(self): - n = max(map(len, self.logs.values())) - assert n == self.summarize_every - print("\nAverage execution time over %d steps:" % n) - - name_msgs = ["%s (%d/%d):" % (name, len(deltas), n) for name, deltas in self.logs.items()] - pad = max(map(len, name_msgs)) - for name_msg, deltas in zip(name_msgs, self.logs.values()): - print(" %s mean: %4.0fms std: %4.0fms" % - (name_msg.ljust(pad), np.mean(deltas) * 1000, np.std(deltas) * 1000)) - print("", flush=True) - \ No newline at end of file diff --git a/spaces/safi842/FashionGen/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/model.py b/spaces/safi842/FashionGen/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/model.py deleted file mode 100644 index 22488abd92182a878fa1bedadfed50afbb472d3e..0000000000000000000000000000000000000000 --- a/spaces/safi842/FashionGen/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/model.py +++ /dev/null @@ -1,345 +0,0 @@ -# coding: utf-8 -""" BigGAN PyTorch model. 
- From "Large Scale GAN Training for High Fidelity Natural Image Synthesis" - By Andrew Brocky, Jeff Donahuey and Karen Simonyan. - https://openreview.net/forum?id=B1xsqj09Fm - - PyTorch version implemented from the computational graph of the TF Hub module for BigGAN. - Some part of the code are adapted from https://github.com/brain-research/self-attention-gan - - This version only comprises the generator (since the discriminator's weights are not released). - This version only comprises the "deep" version of BigGAN (see publication). - - Modified by Erik Härkönen: - * Added support for per-layer latent vectors -""" -from __future__ import (absolute_import, division, print_function, unicode_literals) - -import os -import logging -import math - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .config import BigGANConfig -from .file_utils import cached_path - -logger = logging.getLogger(__name__) - -PRETRAINED_MODEL_ARCHIVE_MAP = { - 'biggan-deep-128': "https://s3.amazonaws.com/models.huggingface.co/biggan/biggan-deep-128-pytorch_model.bin", - 'biggan-deep-256': "https://s3.amazonaws.com/models.huggingface.co/biggan/biggan-deep-256-pytorch_model.bin", - 'biggan-deep-512': "https://s3.amazonaws.com/models.huggingface.co/biggan/biggan-deep-512-pytorch_model.bin", -} - -PRETRAINED_CONFIG_ARCHIVE_MAP = { - 'biggan-deep-128': "https://s3.amazonaws.com/models.huggingface.co/biggan/biggan-deep-128-config.json", - 'biggan-deep-256': "https://s3.amazonaws.com/models.huggingface.co/biggan/biggan-deep-256-config.json", - 'biggan-deep-512': "https://s3.amazonaws.com/models.huggingface.co/biggan/biggan-deep-512-config.json", -} - -WEIGHTS_NAME = 'pytorch_model.bin' -CONFIG_NAME = 'config.json' - - -def snconv2d(eps=1e-12, **kwargs): - return nn.utils.spectral_norm(nn.Conv2d(**kwargs), eps=eps) - -def snlinear(eps=1e-12, **kwargs): - return nn.utils.spectral_norm(nn.Linear(**kwargs), eps=eps) - -def sn_embedding(eps=1e-12, **kwargs): - return nn.utils.spectral_norm(nn.Embedding(**kwargs), eps=eps) - -class SelfAttn(nn.Module): - """ Self attention Layer""" - def __init__(self, in_channels, eps=1e-12): - super(SelfAttn, self).__init__() - self.in_channels = in_channels - self.snconv1x1_theta = snconv2d(in_channels=in_channels, out_channels=in_channels//8, - kernel_size=1, bias=False, eps=eps) - self.snconv1x1_phi = snconv2d(in_channels=in_channels, out_channels=in_channels//8, - kernel_size=1, bias=False, eps=eps) - self.snconv1x1_g = snconv2d(in_channels=in_channels, out_channels=in_channels//2, - kernel_size=1, bias=False, eps=eps) - self.snconv1x1_o_conv = snconv2d(in_channels=in_channels//2, out_channels=in_channels, - kernel_size=1, bias=False, eps=eps) - self.maxpool = nn.MaxPool2d(2, stride=2, padding=0) - self.softmax = nn.Softmax(dim=-1) - self.gamma = nn.Parameter(torch.zeros(1)) - - def forward(self, x): - _, ch, h, w = x.size() - # Theta path - theta = self.snconv1x1_theta(x) - theta = theta.view(-1, ch//8, h*w) - # Phi path - phi = self.snconv1x1_phi(x) - phi = self.maxpool(phi) - phi = phi.view(-1, ch//8, h*w//4) - # Attn map - attn = torch.bmm(theta.permute(0, 2, 1), phi) - attn = self.softmax(attn) - # g path - g = self.snconv1x1_g(x) - g = self.maxpool(g) - g = g.view(-1, ch//2, h*w//4) - # Attn_g - o_conv - attn_g = torch.bmm(g, attn.permute(0, 2, 1)) - attn_g = attn_g.view(-1, ch//2, h, w) - attn_g = self.snconv1x1_o_conv(attn_g) - # Out - out = x + self.gamma*attn_g - return out - - -class BigGANBatchNorm(nn.Module): - """ This is a batch 
norm module that can handle conditional input and can be provided with pre-computed - activation means and variances for various truncation parameters. - - We cannot just rely on torch.batch_norm since it cannot handle - batched weights (pytorch 1.0.1). We computate batch_norm our-self without updating running means and variances. - If you want to train this model you should add running means and variance computation logic. - """ - def __init__(self, num_features, condition_vector_dim=None, n_stats=51, eps=1e-4, conditional=True): - super(BigGANBatchNorm, self).__init__() - self.num_features = num_features - self.eps = eps - self.conditional = conditional - - # We use pre-computed statistics for n_stats values of truncation between 0 and 1 - self.register_buffer('running_means', torch.zeros(n_stats, num_features)) - self.register_buffer('running_vars', torch.ones(n_stats, num_features)) - self.step_size = 1.0 / (n_stats - 1) - - if conditional: - assert condition_vector_dim is not None - self.scale = snlinear(in_features=condition_vector_dim, out_features=num_features, bias=False, eps=eps) - self.offset = snlinear(in_features=condition_vector_dim, out_features=num_features, bias=False, eps=eps) - else: - self.weight = torch.nn.Parameter(torch.Tensor(num_features)) - self.bias = torch.nn.Parameter(torch.Tensor(num_features)) - - def forward(self, x, truncation, condition_vector=None): - # Retreive pre-computed statistics associated to this truncation - coef, start_idx = math.modf(truncation / self.step_size) - start_idx = int(start_idx) - if coef != 0.0: # Interpolate - running_mean = self.running_means[start_idx] * coef + self.running_means[start_idx + 1] * (1 - coef) - running_var = self.running_vars[start_idx] * coef + self.running_vars[start_idx + 1] * (1 - coef) - else: - running_mean = self.running_means[start_idx] - running_var = self.running_vars[start_idx] - - if self.conditional: - running_mean = running_mean.unsqueeze(0).unsqueeze(-1).unsqueeze(-1) - running_var = running_var.unsqueeze(0).unsqueeze(-1).unsqueeze(-1) - - weight = 1 + self.scale(condition_vector).unsqueeze(-1).unsqueeze(-1) - bias = self.offset(condition_vector).unsqueeze(-1).unsqueeze(-1) - - out = (x - running_mean) / torch.sqrt(running_var + self.eps) * weight + bias - else: - out = F.batch_norm(x, running_mean, running_var, self.weight, self.bias, - training=False, momentum=0.0, eps=self.eps) - - return out - - -class GenBlock(nn.Module): - def __init__(self, in_size, out_size, condition_vector_dim, reduction_factor=4, up_sample=False, - n_stats=51, eps=1e-12): - super(GenBlock, self).__init__() - self.up_sample = up_sample - self.drop_channels = (in_size != out_size) - middle_size = in_size // reduction_factor - - self.bn_0 = BigGANBatchNorm(in_size, condition_vector_dim, n_stats=n_stats, eps=eps, conditional=True) - self.conv_0 = snconv2d(in_channels=in_size, out_channels=middle_size, kernel_size=1, eps=eps) - - self.bn_1 = BigGANBatchNorm(middle_size, condition_vector_dim, n_stats=n_stats, eps=eps, conditional=True) - self.conv_1 = snconv2d(in_channels=middle_size, out_channels=middle_size, kernel_size=3, padding=1, eps=eps) - - self.bn_2 = BigGANBatchNorm(middle_size, condition_vector_dim, n_stats=n_stats, eps=eps, conditional=True) - self.conv_2 = snconv2d(in_channels=middle_size, out_channels=middle_size, kernel_size=3, padding=1, eps=eps) - - self.bn_3 = BigGANBatchNorm(middle_size, condition_vector_dim, n_stats=n_stats, eps=eps, conditional=True) - self.conv_3 = snconv2d(in_channels=middle_size, 
out_channels=out_size, kernel_size=1, eps=eps) - - self.relu = nn.ReLU() - - def forward(self, x, cond_vector, truncation): - x0 = x - - x = self.bn_0(x, truncation, cond_vector) - x = self.relu(x) - x = self.conv_0(x) - - x = self.bn_1(x, truncation, cond_vector) - x = self.relu(x) - if self.up_sample: - x = F.interpolate(x, scale_factor=2, mode='nearest') - x = self.conv_1(x) - - x = self.bn_2(x, truncation, cond_vector) - x = self.relu(x) - x = self.conv_2(x) - - x = self.bn_3(x, truncation, cond_vector) - x = self.relu(x) - x = self.conv_3(x) - - if self.drop_channels: - new_channels = x0.shape[1] // 2 - x0 = x0[:, :new_channels, ...] - if self.up_sample: - x0 = F.interpolate(x0, scale_factor=2, mode='nearest') - - out = x + x0 - return out - -class Generator(nn.Module): - def __init__(self, config): - super(Generator, self).__init__() - self.config = config - ch = config.channel_width - condition_vector_dim = config.z_dim * 2 - - self.gen_z = snlinear(in_features=condition_vector_dim, - out_features=4 * 4 * 16 * ch, eps=config.eps) - - layers = [] - for i, layer in enumerate(config.layers): - if i == config.attention_layer_position: - layers.append(SelfAttn(ch*layer[1], eps=config.eps)) - layers.append(GenBlock(ch*layer[1], - ch*layer[2], - condition_vector_dim, - up_sample=layer[0], - n_stats=config.n_stats, - eps=config.eps)) - self.layers = nn.ModuleList(layers) - - self.bn = BigGANBatchNorm(ch, n_stats=config.n_stats, eps=config.eps, conditional=False) - self.relu = nn.ReLU() - self.conv_to_rgb = snconv2d(in_channels=ch, out_channels=ch, kernel_size=3, padding=1, eps=config.eps) - self.tanh = nn.Tanh() - - def forward(self, cond_vector, truncation): - z = self.gen_z(cond_vector[0]) - - # We use this conversion step to be able to use TF weights: - # TF convention on shape is [batch, height, width, channels] - # PT convention on shape is [batch, channels, height, width] - z = z.view(-1, 4, 4, 16 * self.config.channel_width) - z = z.permute(0, 3, 1, 2).contiguous() - - cond_idx = 1 - for i, layer in enumerate(self.layers): - if isinstance(layer, GenBlock): - z = layer(z, cond_vector[cond_idx], truncation) - cond_idx += 1 - else: - z = layer(z) - - z = self.bn(z, truncation) - z = self.relu(z) - z = self.conv_to_rgb(z) - z = z[:, :3, ...] - z = self.tanh(z) - return z - -class BigGAN(nn.Module): - """BigGAN Generator.""" - - @classmethod - def from_pretrained(cls, pretrained_model_name_or_path, cache_dir=None, *inputs, **kwargs): - if pretrained_model_name_or_path in PRETRAINED_MODEL_ARCHIVE_MAP: - model_file = PRETRAINED_MODEL_ARCHIVE_MAP[pretrained_model_name_or_path] - config_file = PRETRAINED_CONFIG_ARCHIVE_MAP[pretrained_model_name_or_path] - else: - model_file = os.path.join(pretrained_model_name_or_path, WEIGHTS_NAME) - config_file = os.path.join(pretrained_model_name_or_path, CONFIG_NAME) - - try: - resolved_model_file = cached_path(model_file, cache_dir=cache_dir) - resolved_config_file = cached_path(config_file, cache_dir=cache_dir) - except EnvironmentError: - logger.error("Wrong model name, should be a valid path to a folder containing " - "a {} file and a {} file or a model name in {}".format( - WEIGHTS_NAME, CONFIG_NAME, PRETRAINED_MODEL_ARCHIVE_MAP.keys())) - raise - - logger.info("loading model {} from cache at {}".format(pretrained_model_name_or_path, resolved_model_file)) - - # Load config - config = BigGANConfig.from_json_file(resolved_config_file) - logger.info("Model config {}".format(config)) - - # Instantiate model. 
- model = cls(config, *inputs, **kwargs) - state_dict = torch.load(resolved_model_file, map_location='cpu' if not torch.cuda.is_available() else None) - model.load_state_dict(state_dict, strict=False) - return model - - def __init__(self, config): - super(BigGAN, self).__init__() - self.config = config - self.embeddings = nn.Linear(config.num_classes, config.z_dim, bias=False) - self.generator = Generator(config) - self.n_latents = len(config.layers) + 1 # one for gen_z + one per layer - - def forward(self, z, class_label, truncation): - assert 0 < truncation <= 1 - - if not isinstance(z, list): - z = self.n_latents*[z] - - if isinstance(class_label, list): - embed = [self.embeddings(l) for l in class_label] - else: - embed = self.n_latents*[self.embeddings(class_label)] - - assert len(z) == self.n_latents, f'Expected {self.n_latents} latents, got {len(z)}' - assert len(embed) == self.n_latents, f'Expected {self.n_latents} class vectors, got {len(class_label)}' - - cond_vectors = [torch.cat((z, e), dim=1) for (z, e) in zip(z, embed)] - z = self.generator(cond_vectors, truncation) - return z - - -if __name__ == "__main__": - import PIL - from .utils import truncated_noise_sample, save_as_images, one_hot_from_names - from .convert_tf_to_pytorch import load_tf_weights_in_biggan - - load_cache = False - cache_path = './saved_model.pt' - config = BigGANConfig() - model = BigGAN(config) - if not load_cache: - model = load_tf_weights_in_biggan(model, config, './models/model_128/', './models/model_128/batchnorms_stats.bin') - torch.save(model.state_dict(), cache_path) - else: - model.load_state_dict(torch.load(cache_path)) - - model.eval() - - truncation = 0.4 - noise = truncated_noise_sample(batch_size=2, truncation=truncation) - label = one_hot_from_names('diver', batch_size=2) - - # Tests - # noise = np.zeros((1, 128)) - # label = [983] - - noise = torch.tensor(noise, dtype=torch.float) - label = torch.tensor(label, dtype=torch.float) - with torch.no_grad(): - outputs = model(noise, label, truncation) - print(outputs.shape) - - save_as_images(outputs) diff --git a/spaces/sayakpaul/gopro-deblurring-maxim/maxim/configs.py b/spaces/sayakpaul/gopro-deblurring-maxim/maxim/configs.py deleted file mode 100644 index 1bbd4aa3f4277cace6be090b1e747287d6414519..0000000000000000000000000000000000000000 --- a/spaces/sayakpaul/gopro-deblurring-maxim/maxim/configs.py +++ /dev/null @@ -1,80 +0,0 @@ -MAXIM_CONFIGS = { - # params: 6.108515000000001 M, GFLOPS: 93.163716608 - "S-1": { - "features": 32, - "depth": 3, - "num_stages": 1, - "num_groups": 2, - "num_bottleneck_blocks": 2, - "block_gmlp_factor": 2, - "grid_gmlp_factor": 2, - "input_proj_factor": 2, - "channels_reduction": 4, - "name": "s1", - }, - # params: 13.35383 M, GFLOPS: 206.743273472 - "S-2": { - "features": 32, - "depth": 3, - "num_stages": 2, - "num_groups": 2, - "num_bottleneck_blocks": 2, - "block_gmlp_factor": 2, - "grid_gmlp_factor": 2, - "input_proj_factor": 2, - "channels_reduction": 4, - "name": "s2", - }, - # params: 20.599145 M, GFLOPS: 320.32194560000005 - "S-3": { - "features": 32, - "depth": 3, - "num_stages": 3, - "num_groups": 2, - "num_bottleneck_blocks": 2, - "block_gmlp_factor": 2, - "grid_gmlp_factor": 2, - "input_proj_factor": 2, - "channels_reduction": 4, - "name": "s3", - }, - # params: 19.361219000000002 M, 308.495712256 GFLOPs - "M-1": { - "features": 64, - "depth": 3, - "num_stages": 1, - "num_groups": 2, - "num_bottleneck_blocks": 2, - "block_gmlp_factor": 2, - "grid_gmlp_factor": 2, - "input_proj_factor": 2, - 
"channels_reduction": 4, - "name": "m1", - }, - # params: 40.83911 M, 675.25541888 GFLOPs - "M-2": { - "features": 64, - "depth": 3, - "num_stages": 2, - "num_groups": 2, - "num_bottleneck_blocks": 2, - "block_gmlp_factor": 2, - "grid_gmlp_factor": 2, - "input_proj_factor": 2, - "channels_reduction": 4, - "name": "m2", - }, - # params: 62.317001 M, 1042.014666752 GFLOPs - "M-3": { - "features": 64, - "depth": 3, - "num_stages": 3, - "num_groups": 2, - "num_bottleneck_blocks": 2, - "block_gmlp_factor": 2, - "grid_gmlp_factor": 2, - "input_proj_factor": 2, - "channels_reduction": 4, - "name": "m3", - }, -} diff --git a/spaces/scedlatioru/img-to-music/example/Aashiqui 2 Movie In Hindi Dubbed Torrent [2021].md b/spaces/scedlatioru/img-to-music/example/Aashiqui 2 Movie In Hindi Dubbed Torrent [2021].md deleted file mode 100644 index 39d99b9514fa1e58256b7d73df6a2cb8582b3b41..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Aashiqui 2 Movie In Hindi Dubbed Torrent [2021].md +++ /dev/null @@ -1,6 +0,0 @@ -

      Aashiqui 2 Movie In Hindi Dubbed Torrent


      Downloadhttps://gohhs.com/2uEA74



      -
      -Full Movie HD Torrent 1080p Blu-Ray Download. master bedroom ensuite ... in English and Translation of the lyrics in English from the Hindi movie Aashiqui. 1fdad05405
      -
      -
      -

      diff --git a/spaces/scedlatioru/img-to-music/example/Agnipankh Book Apj Abdul Kalam Free ((INSTALL)) Download In Marathi Pdf Stories.md b/spaces/scedlatioru/img-to-music/example/Agnipankh Book Apj Abdul Kalam Free ((INSTALL)) Download In Marathi Pdf Stories.md deleted file mode 100644 index ddf98c760b8238583de97b6c82ea4111fb80beb2..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Agnipankh Book Apj Abdul Kalam Free ((INSTALL)) Download In Marathi Pdf Stories.md +++ /dev/null @@ -1,6 +0,0 @@ -

      agnipankh book apj abdul kalam free download in marathi pdf stories


      Download File ✪✪✪ https://gohhs.com/2uEyTB



      -
      -Post Office Bharti Book in Marathi – DOP Maharashtra Examinations Books in ... Agnipankh Book Apj Abdul Kalam Free Download In Marathi Pdf Stories. 1fdad05405
      -
      -
      -

      diff --git a/spaces/scedlatioru/img-to-music/example/Barfi Movie In Tamil Dubbed Download.md b/spaces/scedlatioru/img-to-music/example/Barfi Movie In Tamil Dubbed Download.md deleted file mode 100644 index 10237fdd886d0b432ca984726b4f8ef39015724a..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Barfi Movie In Tamil Dubbed Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Barfi Movie In Tamil Dubbed Download


      Download ✶✶✶ https://gohhs.com/2uEzs6



      - -Aug 4, - No.1 Mr Perfect DVDRip MB Download Hindi Dubbed. ... No 1 mr perfect full movie download; mr perfect tamil dubbed full movie - Free Online ... movies dubbed in hindi Movie Gifs, Hd Movies, bareilly ki barfi hindi movie download, ... 1fdad05405
      -
      -
      -

      diff --git a/spaces/scedlatioru/img-to-music/example/TracyChapmanCrossroadsfullalbumzip.md b/spaces/scedlatioru/img-to-music/example/TracyChapmanCrossroadsfullalbumzip.md deleted file mode 100644 index e919a5565f2f5a6d610a7818fc2d902f34823fea..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/TracyChapmanCrossroadsfullalbumzip.md +++ /dev/null @@ -1,10 +0,0 @@ - -

      2. unzip it to a convenient location and open it. you'll notice a few files with a .h file extension. these are header files that i use to keep track of the type of input. if you are interested in adding anything, add it to these files. then, create a file called main.cpp and create it. then, comment out all of the lines in the source code that are no longer being used.

      -

      TracyChapmanCrossroadsfullalbumzip


      Download Filehttps://gohhs.com/2uEAjg



      -

      the first of the tracy chapman-inspired albums of pure genius. this is the original full-length album in all its messy glory. with a five-song first disc of piano/drums/synthesizers/acoustic and a four-song second disc of vocals/piano/acoustic, it is my favourite of all the chapman albums. it is for this reason that i think that it is unjust that i cannot get this album on cd. though the first disc can, of course, be found on her first solo album, i prefer the album as an entire piece with its unique sound and feel.

      -

      in this chapter we will show you how to install the basics that you need to compile and run all your future projects. this includes adding the shader packages to your project, building, and running your code in an environment more powerful than the command line.

      -

      in this chapter we will introduce you to how you can draw to the screen, and how you can use three orthogonal directions to place your art on the screen, and have it appear in front of it. to help explain this we will also walk you through some minor math used to make sure your art is drawn in the right place.

      -

      -

      tracy chapman & crossroads

      produced by: brian eno, daniel lopatin and david torn
      recorded by: daniel lopatin and brian eno
      mixed by: david torn
      artwork by: david amram
      recorded at: sonic ranch, el paso, tx 2009.
      mastered by: tom baker
      designed by: david buckley and the sonic ranch team
      artwork photography by: david amram and john clum
      booklet design by: david buckley
      first pressing
      foxes (#3102)

      1st press: white vinyl
      2nd press: clear with natural green vinyl.

      899543212b
      -
      -
      \ No newline at end of file diff --git a/spaces/sczhou/CodeFormer/CodeFormer/basicsr/ops/upfirdn2d/src/upfirdn2d.cpp b/spaces/sczhou/CodeFormer/CodeFormer/basicsr/ops/upfirdn2d/src/upfirdn2d.cpp deleted file mode 100644 index 43d0b6783a5b512b55815a291fcac2bebeea31e0..0000000000000000000000000000000000000000 --- a/spaces/sczhou/CodeFormer/CodeFormer/basicsr/ops/upfirdn2d/src/upfirdn2d.cpp +++ /dev/null @@ -1,24 +0,0 @@ -// from https://github.com/rosinality/stylegan2-pytorch/blob/master/op/upfirdn2d.cpp -#include - - -torch::Tensor upfirdn2d_op(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1); - -#define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor") -#define CHECK_CONTIGUOUS(x) TORCH_CHECK(x.is_contiguous(), #x " must be contiguous") -#define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x) - -torch::Tensor upfirdn2d(const torch::Tensor& input, const torch::Tensor& kernel, - int up_x, int up_y, int down_x, int down_y, - int pad_x0, int pad_x1, int pad_y0, int pad_y1) { - CHECK_CUDA(input); - CHECK_CUDA(kernel); - - return upfirdn2d_op(input, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("upfirdn2d", &upfirdn2d, "upfirdn2d (CUDA)"); -} diff --git a/spaces/segments-tobias/conex/espnet2/enh/separator/conformer_separator.py b/spaces/segments-tobias/conex/espnet2/enh/separator/conformer_separator.py deleted file mode 100644 index 26fd6a248fe1dac2b1016e8416e4f3a21addeac0..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/enh/separator/conformer_separator.py +++ /dev/null @@ -1,162 +0,0 @@ -from collections import OrderedDict -from typing import List -from typing import Tuple -from typing import Union - -import torch -from torch_complex.tensor import ComplexTensor - -from espnet.nets.pytorch_backend.conformer.encoder import ( - Encoder as ConformerEncoder, # noqa: H301 -) -from espnet.nets.pytorch_backend.nets_utils import make_non_pad_mask -from espnet2.enh.separator.abs_separator import AbsSeparator - - -class ConformerSeparator(AbsSeparator): - def __init__( - self, - input_dim: int, - num_spk: int = 2, - adim: int = 384, - aheads: int = 4, - layers: int = 6, - linear_units: int = 1536, - positionwise_layer_type: str = "linear", - positionwise_conv_kernel_size: int = 1, - normalize_before: bool = False, - concat_after: bool = False, - dropout_rate: float = 0.1, - input_layer: str = "linear", - positional_dropout_rate: float = 0.1, - attention_dropout_rate: float = 0.1, - nonlinear: str = "relu", - conformer_pos_enc_layer_type: str = "rel_pos", - conformer_self_attn_layer_type: str = "rel_selfattn", - conformer_activation_type: str = "swish", - use_macaron_style_in_conformer: bool = True, - use_cnn_in_conformer: bool = True, - conformer_enc_kernel_size: int = 7, - padding_idx: int = -1, - ): - """Conformer separator. - - Args: - input_dim: input feature dimension - num_spk: number of speakers - adim (int): Dimention of attention. - aheads (int): The number of heads of multi head attention. - linear_units (int): The number of units of position-wise feed forward. - layers (int): The number of transformer blocks. - dropout_rate (float): Dropout rate. - input_layer (Union[str, torch.nn.Module]): Input layer type. - attention_dropout_rate (float): Dropout rate in attention. 
- positional_dropout_rate (float): Dropout rate after adding - positional encoding. - normalize_before (bool): Whether to use layer_norm before the first block. - concat_after (bool): Whether to concat attention layer's input and output. - if True, additional linear will be applied. - i.e. x -> x + linear(concat(x, att(x))) - if False, no additional linear will be applied. i.e. x -> x + att(x) - conformer_pos_enc_layer_type(str): Encoder positional encoding layer type. - conformer_self_attn_layer_type (str): Encoder attention layer type. - conformer_activation_type(str): Encoder activation function type. - positionwise_layer_type (str): "linear", "conv1d", or "conv1d-linear". - positionwise_conv_kernel_size (int): Kernel size of - positionwise conv1d layer. - use_macaron_style_in_conformer (bool): Whether to use macaron style for - positionwise layer. - use_cnn_in_conformer (bool): Whether to use convolution module. - conformer_enc_kernel_size(int): Kernerl size of convolution module. - padding_idx (int): Padding idx for input_layer=embed. - nonlinear: the nonlinear function for mask estimation, - select from 'relu', 'tanh', 'sigmoid' - """ - super().__init__() - - self._num_spk = num_spk - - self.conformer = ConformerEncoder( - idim=input_dim, - attention_dim=adim, - attention_heads=aheads, - linear_units=linear_units, - num_blocks=layers, - dropout_rate=dropout_rate, - positional_dropout_rate=positional_dropout_rate, - attention_dropout_rate=attention_dropout_rate, - input_layer=input_layer, - normalize_before=normalize_before, - concat_after=concat_after, - positionwise_layer_type=positionwise_layer_type, - positionwise_conv_kernel_size=positionwise_conv_kernel_size, - macaron_style=use_macaron_style_in_conformer, - pos_enc_layer_type=conformer_pos_enc_layer_type, - selfattention_layer_type=conformer_self_attn_layer_type, - activation_type=conformer_activation_type, - use_cnn_module=use_cnn_in_conformer, - cnn_module_kernel=conformer_enc_kernel_size, - padding_idx=padding_idx, - ) - - self.linear = torch.nn.ModuleList( - [torch.nn.Linear(adim, input_dim) for _ in range(self.num_spk)] - ) - - if nonlinear not in ("sigmoid", "relu", "tanh"): - raise ValueError("Not supporting nonlinear={}".format(nonlinear)) - - self.nonlinear = { - "sigmoid": torch.nn.Sigmoid(), - "relu": torch.nn.ReLU(), - "tanh": torch.nn.Tanh(), - }[nonlinear] - - def forward( - self, input: Union[torch.Tensor, ComplexTensor], ilens: torch.Tensor - ) -> Tuple[List[Union[torch.Tensor, ComplexTensor]], torch.Tensor, OrderedDict]: - """Forward. - - Args: - input (torch.Tensor or ComplexTensor): Encoded feature [B, T, N] - ilens (torch.Tensor): input lengths [Batch] - - Returns: - masked (List[Union(torch.Tensor, ComplexTensor)]): [(B, T, N), ...] - ilens (torch.Tensor): (B,) - others predicted data, e.g. masks: OrderedDict[ - 'mask_spk1': torch.Tensor(Batch, Frames, Freq), - 'mask_spk2': torch.Tensor(Batch, Frames, Freq), - ... 
- 'mask_spkn': torch.Tensor(Batch, Frames, Freq), - ] - """ - - # if complex spectrum, - if isinstance(input, ComplexTensor): - feature = abs(input) - else: - feature = input - - # prepare pad_mask for transformer - pad_mask = make_non_pad_mask(ilens).unsqueeze(1).to(feature.device) - - x, ilens = self.conformer(feature, pad_mask) - - masks = [] - for linear in self.linear: - y = linear(x) - y = self.nonlinear(y) - masks.append(y) - - masked = [input * m for m in masks] - - others = OrderedDict( - zip(["mask_spk{}".format(i + 1) for i in range(len(masks))], masks) - ) - - return masked, ilens, others - - @property - def num_spk(self): - return self._num_spk diff --git a/spaces/segments-tobias/conex/espnet2/torch_utils/load_pretrained_model.py b/spaces/segments-tobias/conex/espnet2/torch_utils/load_pretrained_model.py deleted file mode 100644 index dbce4d95c5128abd5176000d02dd986bc9a6916b..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/torch_utils/load_pretrained_model.py +++ /dev/null @@ -1,81 +0,0 @@ -from typing import Any - -import torch -import torch.nn -import torch.optim - - -def load_pretrained_model( - init_param: str, - model: torch.nn.Module, - map_location: str = "cpu", -): - """Load a model state and set it to the model. - - Args: - init_param: ::: - - Examples: - >>> load_pretrained_model("somewhere/model.pth", model) - >>> load_pretrained_model("somewhere/model.pth:decoder:decoder", model) - >>> load_pretrained_model("somewhere/model.pth:decoder:decoder:", model) - >>> load_pretrained_model( - ... "somewhere/model.pth:decoder:decoder:decoder.embed", model - ... ) - >>> load_pretrained_model("somewhere/decoder.pth::decoder", model) - """ - sps = init_param.split(":", 4) - if len(sps) == 4: - path, src_key, dst_key, excludes = sps - elif len(sps) == 3: - path, src_key, dst_key = sps - excludes = None - elif len(sps) == 2: - path, src_key = sps - dst_key, excludes = None, None - else: - (path,) = sps - src_key, dst_key, excludes = None, None, None - if src_key == "": - src_key = None - if dst_key == "": - dst_key = None - - if dst_key is None: - obj = model - else: - - def get_attr(obj: Any, key: str): - """Get an nested attribute. - - >>> class A(torch.nn.Module): - ... def __init__(self): - ... super().__init__() - ... 
self.linear = torch.nn.Linear(10, 10) - >>> a = A() - >>> assert A.linear.weight is get_attr(A, 'linear.weight') - - """ - if key.strip() == "": - return obj - for k in key.split("."): - obj = getattr(obj, k) - return obj - - obj = get_attr(model, dst_key) - - src_state = torch.load(path, map_location=map_location) - if excludes is not None: - for e in excludes.split(","): - src_state = {k: v for k, v in src_state.items() if not k.startswith(e)} - - if src_key is not None: - src_state = { - k[len(src_key) + 1 :]: v - for k, v in src_state.items() - if k.startswith(src_key) - } - - dst_state = obj.state_dict() - dst_state.update(src_state) - obj.load_state_dict(dst_state) diff --git a/spaces/serhany/huggingchat-try/README.md b/spaces/serhany/huggingchat-try/README.md deleted file mode 100644 index 8936dea47ba363c737d8348383e5809f781bc5fd..0000000000000000000000000000000000000000 --- a/spaces/serhany/huggingchat-try/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Chat Ui Template -emoji: 🚀 -colorFrom: indigo -colorTo: blue -sdk: docker -pinned: false -app_port: 3000 -suggested_hardware: a10g-small -duplicated_from: huggingchat/chat-ui-template ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/shabnam91/Sanskrit-TTS/indic_nlp_library/indicnlp/morph/__init__.py b/spaces/shabnam91/Sanskrit-TTS/indic_nlp_library/indicnlp/morph/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/config.py b/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/config.py deleted file mode 100644 index 958dbe22069c73fbf469fa50535340ced2bc0faf..0000000000000000000000000000000000000000 --- a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/config.py +++ /dev/null @@ -1,117 +0,0 @@ -import argparse -import glob -import sys -import torch -from multiprocessing import cpu_count - - -class Config: - def __init__(self): - self.device = "cuda:0" - self.is_half = True - self.n_cpu = 0 - self.gpu_name = None - self.gpu_mem = None - ( - self.python_cmd, - self.listen_port, - self.iscolab, - self.noparallel, - self.noautoopen, - ) = self.arg_parse() - self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config() - - def arg_parse(self) -> tuple: - parser = argparse.ArgumentParser() - parser.add_argument("--port", type=int, default=7865, help="Listen port") - parser.add_argument( - "--pycmd", type=str, default="python", help="Python command" - ) - parser.add_argument("--colab", action="store_true", help="Launch in colab") - parser.add_argument( - "--noparallel", action="store_true", help="Disable parallel processing" - ) - parser.add_argument( - "--noautoopen", - action="store_true", - help="Do not open in browser automatically", - ) - cmd_opts = parser.parse_args() - - cmd_opts.port = cmd_opts.port if 0 <= cmd_opts.port <= 65535 else 7865 - - return ( - cmd_opts.pycmd, - cmd_opts.port, - cmd_opts.colab, - cmd_opts.noparallel, - cmd_opts.noautoopen, - ) - - def device_config(self) -> tuple: - if torch.cuda.is_available(): - i_device = int(self.device.split(":")[-1]) - self.gpu_name = torch.cuda.get_device_name(i_device) - if ( - ("16" in self.gpu_name and "V100" not in self.gpu_name.upper()) - or "P40" in self.gpu_name.upper() - or "1060" in self.gpu_name - or "1070" in self.gpu_name - or "1080" in self.gpu_name - ): - print("16系/10系显卡和P40强制单精度") - self.is_half = False - for 
config_file in ["32k.json", "40k.json", "48k.json"]: - with open(f"configs/{config_file}", "r") as f: - strr = f.read().replace("true", "false") - with open(f"configs/{config_file}", "w") as f: - f.write(strr) - with open("trainset_preprocess_pipeline_print.py", "r") as f: - strr = f.read().replace("3.7", "3.0") - with open("trainset_preprocess_pipeline_print.py", "w") as f: - f.write(strr) - else: - self.gpu_name = None - self.gpu_mem = int( - torch.cuda.get_device_properties(i_device).total_memory - / 1024 - / 1024 - / 1024 - + 0.4 - ) - if self.gpu_mem <= 4: - with open("trainset_preprocess_pipeline_print.py", "r") as f: - strr = f.read().replace("3.7", "3.0") - with open("trainset_preprocess_pipeline_print.py", "w") as f: - f.write(strr) - elif torch.backends.mps.is_available(): - print("没有发现支持的N卡, 使用MPS进行推理") - self.device = "mps" - else: - print("没有发现支持的N卡, 使用CPU进行推理") - self.device = "cpu" - self.is_half = True - - if self.n_cpu == 0: - self.n_cpu = cpu_count() - - if self.is_half: - # 6G显存配置 - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 - else: - # 5G显存配置 - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 - - if self.gpu_mem != None and self.gpu_mem <= 4: - x_pad = 1 - x_query = 5 - x_center = 30 - x_max = 32 - - return x_pad, x_query, x_center, x_max diff --git a/spaces/shikunl/prismer/prismer/experts/obj_detection/unidet/modeling/backbone/fpn_p5.py b/spaces/shikunl/prismer/prismer/experts/obj_detection/unidet/modeling/backbone/fpn_p5.py deleted file mode 100644 index 4201e1462a57ba9a7cca0971c0b60af3fdbdfbdd..0000000000000000000000000000000000000000 --- a/spaces/shikunl/prismer/prismer/experts/obj_detection/unidet/modeling/backbone/fpn_p5.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import math -import fvcore.nn.weight_init as weight_init -import torch.nn.functional as F -from torch import nn - -from detectron2.layers import Conv2d, ShapeSpec, get_norm - -from detectron2.modeling.backbone import Backbone -from detectron2.modeling.backbone.fpn import FPN -from detectron2.modeling.backbone.build import BACKBONE_REGISTRY -from detectron2.modeling.backbone.resnet import build_resnet_backbone - - -class LastLevelP6P7_P5(nn.Module): - """ - This module is used in RetinaNet to generate extra layers, P6 and P7 from - C5 feature. - """ - - def __init__(self, in_channels, out_channels): - super().__init__() - self.num_levels = 2 - self.in_feature = "p5" - self.p6 = nn.Conv2d(in_channels, out_channels, 3, 2, 1) - self.p7 = nn.Conv2d(out_channels, out_channels, 3, 2, 1) - for module in [self.p6, self.p7]: - weight_init.c2_xavier_fill(module) - - def forward(self, c5): - p6 = self.p6(c5) - p7 = self.p7(F.relu(p6)) - return [p6, p7] - - -@BACKBONE_REGISTRY.register() -def build_p67_resnet_fpn_backbone(cfg, input_shape: ShapeSpec): - """ - Args: - cfg: a detectron2 CfgNode - - Returns: - backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. 
- """ - bottom_up = build_resnet_backbone(cfg, input_shape) - in_features = cfg.MODEL.FPN.IN_FEATURES - out_channels = cfg.MODEL.FPN.OUT_CHANNELS - backbone = FPN( - bottom_up=bottom_up, - in_features=in_features, - out_channels=out_channels, - norm=cfg.MODEL.FPN.NORM, - top_block=LastLevelP6P7_P5(out_channels, out_channels), - fuse_type=cfg.MODEL.FPN.FUSE_TYPE, - ) - return backbone \ No newline at end of file diff --git a/spaces/shiwan10000/CodeFormer/CodeFormer/facelib/detection/yolov5face/models/experimental.py b/spaces/shiwan10000/CodeFormer/CodeFormer/facelib/detection/yolov5face/models/experimental.py deleted file mode 100644 index 37ba4c4420789c92dc0e2aaeb3d5b64859ec728c..0000000000000000000000000000000000000000 --- a/spaces/shiwan10000/CodeFormer/CodeFormer/facelib/detection/yolov5face/models/experimental.py +++ /dev/null @@ -1,45 +0,0 @@ -# # This file contains experimental modules - -import numpy as np -import torch -from torch import nn - -from facelib.detection.yolov5face.models.common import Conv - - -class CrossConv(nn.Module): - # Cross Convolution Downsample - def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False): - # ch_in, ch_out, kernel, stride, groups, expansion, shortcut - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, (1, k), (1, s)) - self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - - -class MixConv2d(nn.Module): - # Mixed Depthwise Conv https://arxiv.org/abs/1907.09595 - def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True): - super().__init__() - groups = len(k) - if equal_ch: # equal c_ per group - i = torch.linspace(0, groups - 1e-6, c2).floor() # c2 indices - c_ = [(i == g).sum() for g in range(groups)] # intermediate channels - else: # equal weight.numel() per group - b = [c2] + [0] * groups - a = np.eye(groups + 1, groups, k=-1) - a -= np.roll(a, 1, axis=1) - a *= np.array(k) ** 2 - a[0] = 1 - c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b - - self.m = nn.ModuleList([nn.Conv2d(c1, int(c_[g]), k[g], s, k[g] // 2, bias=False) for g in range(groups)]) - self.bn = nn.BatchNorm2d(c2) - self.act = nn.LeakyReLU(0.1, inplace=True) - - def forward(self, x): - return x + self.act(self.bn(torch.cat([m(x) for m in self.m], 1))) diff --git a/spaces/shouzen/canada-goose-v4/README.md b/spaces/shouzen/canada-goose-v4/README.md deleted file mode 100644 index c00e28e9144e15427fc7b3c18c7ce186133e9314..0000000000000000000000000000000000000000 --- a/spaces/shouzen/canada-goose-v4/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Canada Goose V4 -emoji: 👀 -colorFrom: blue -colorTo: pink -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/CapCut Premium MOD APK Unlock All Features and Enjoy 4K Video Editing.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/CapCut Premium MOD APK Unlock All Features and Enjoy 4K Video Editing.md deleted file mode 100644 index 51aad69f2ddc1cfe8d32abf7fc7dad916e01f999..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/CapCut Premium MOD APK Unlock All Features and Enjoy 4K 
Video Editing.md +++ /dev/null @@ -1,109 +0,0 @@ - -

      CapCut 4K Mod APK Download: How to Edit Videos Like a Pro

      -

      Do you love making videos and sharing them with your friends, family, or followers? Do you want to edit your videos like a professional without spending a lot of money or time? If yes, then you need to try CapCut, a powerful and easy-to-use video editing app for Android devices. And if you want to take your video editing skills to the next level, you need to download the CapCut 4K Mod APK, a modified version of the app that allows you to edit videos in 4K resolution. In this article, we will tell you everything you need to know about CapCut and CapCut 4K Mod APK, including how to download, install, and use them on your device. Let's get started!

      -

      What is CapCut and why you need it

      -

      CapCut is a video editing app developed by Bytedance, the same company that owns TikTok. It is designed to help you create stunning videos with ease, whether you are a beginner or an expert. You can use CapCut to trim, crop, rotate, merge, split, reverse, speed up, slow down, or adjust the volume of your videos. You can also add music, stickers, filters, effects, transitions, texts, subtitles, or animations to make your videos more fun and engaging. You can even use the magic effects to transform your videos into something amazing.

      -

      capcut 4k mod apk download


      DOWNLOAD ►►►►► https://ssurll.com/2uNS77



      -

      CapCut is a great app for anyone who loves making videos for personal or professional purposes. You can use it to make videos for social media platforms like TikTok, Instagram, YouTube, Facebook, or Snapchat. You can also use it to make slideshows, tutorials, vlogs, memes, or any other type of video you can think of. CapCut is free to download and use, and it does not have any annoying ads or watermarks. It also supports multiple languages and formats.

      -

      CapCut features and benefits

      -
        -
      • Easy-to-use interface: CapCut has a simple and intuitive interface that allows you to edit your videos with just a few taps. You can access all the editing tools from the bottom menu bar and preview your changes in real-time.
      • -
      • Powerful editing tools: CapCut has a variety of editing tools that let you customize your videos according to your preferences. You can trim, crop, rotate, merge, split, reverse, speed up, slow down, or adjust the volume of your videos. You can also adjust the brightness, contrast, saturation, temperature, or hue of your videos.
      • -
      • Creative elements: CapCut has a huge library of music, stickers, filters, effects, transitions, texts, subtitles, or animations that you can add to your videos. You can also use the magic effects to transform your videos into something amazing.
      • -
      • High-quality output: CapCut allows you to export your videos in high quality up to 1080p resolution. You can also choose the frame rate and bitrate of your videos.
      • -
      • Easy sharing: CapCut allows you to share your videos on social media platforms like TikTok, Instagram, YouTube, Facebook, or Snapchat with just one click. You can also save your videos to your device gallery or cloud storage.
      • -
      -

      CapCut drawbacks and limitations

      -
        -
      • Requires internet connection: CapCut requires an internet connection to work properly. You need to download the music, stickers, filters, effects, transitions, texts, subtitles, or animations from the online library before you can use them. This can consume a lot of data and time.
      • -
      • Does not support 4K resolution: CapCut does not support 4K resolution for editing or exporting videos. The maximum resolution you can get is 1080p, which may not be enough for some users who want to create high-quality videos.
      • -
      • May cause lagging or crashing: CapCut may cause lagging or crashing issues on some devices, especially if they have low RAM or storage space. This can affect the performance and quality of your videos.
      • -
      -

      What is CapCut 4K Mod APK and how to download it

      -

      If you are looking for a way to edit videos in 4K resolution with CapCut, you need to download the CapCut 4K Mod APK. This is a modified version of the original CapCut app that allows you to edit videos in 4K resolution. It also unlocks some premium features and removes some restrictions of the original app. You can download the CapCut 4K Mod APK from various websites that offer modded apps for Android devices. However, you need to be careful and choose a trusted and reliable source to avoid any malware or virus infection.

      -

      CapCut 4K Mod APK features and benefits

      -
        -
• Edit videos in 4K resolution: CapCut 4K Mod APK allows you to edit videos in 4K resolution, four times the maximum resolution supported by the original app. You can enjoy more details and clarity in your videos with this feature.
      • -
      • Unlock premium features: CapCut 4K Mod APK unlocks some premium features that are not available in the original app. For example, you can access more music, stickers, filters, effects, transitions, texts, subtitles, or animations from the online library. You can also use the pro editing tools to fine-tune your videos.
      • -
      • Remove restrictions: CapCut 4K Mod APK removes some restrictions that are imposed by the original app. For example, you can export your videos without any watermark or logo. You can also export your videos in any format or size you want.
      • -
      • Free and safe: CapCut 4K Mod APK is free to download and use. It does not contain any ads or malware that can harm your device or data.
      • -
      -

      CapCut 4K Mod APK drawbacks and limitations

      -
        -
      • Not official or legal: CapCut 4K Mod APK is not an official or legal version of the original CapCut app. It is created by third-party developers who modify the original app without the permission of the developers or owners. This can violate the terms and conditions of the original app and cause legal issues.
      • -
      • Not compatible with all devices: CapCut 4K Mod APK may not be compatible with all devices or Android versions. It may cause compatibility issues or errors on some devices.
      • -
      • Not updated regularly: CapCut 4K Mod APK may not be updated regularly to fix bugs or add new features. It may lag behind the original app in terms of functionality and security.
      • -
      -

      How to install and use CapCut 4K Mod APK on your device

      -

      If you want to install and use CapCut 4K Mod APK on your device, you need to follow these steps:

      -

      Step 1: Download the CapCut 4K Mod APK file from a trusted source

      -

      You need to download the CapCut 4K Mod APK file from a trusted source that offers modded apps for Android devices. You can search for such websites on Google or use the link provided below:

      -

      CapCut 4K Mod APK Download Link

      -

      Note: This link is only for reference and we do not endorse or guarantee its safety or reliability. Use it at your own risk.

      -


      -

      Step 2: Enable unknown sources on your device settings

      -

      You need to enable unknown sources on your device settings to allow the installation of apps from sources other than the Google Play Store. To do this, go to your device settings > security > unknown sources and toggle it on.

      -

Step 3: Install the CapCut 4K Mod APK file on your device

-

      You need to install the CapCut 4K Mod APK file on your device by locating it in your file manager or downloads folder and tapping on it. You may see a pop-up window asking for your permission to install the app. Tap on install and wait for the installation process to complete.

      -

      Step 4: Launch the CapCut app and enjoy editing videos in 4K resolution

      -

      You need to launch the CapCut app by tapping on its icon on your home screen or app drawer. You will see the CapCut interface with all the editing tools and options. You can start editing your videos in 4K resolution by importing them from your device gallery or camera. You can also use the online library to download music, stickers, filters, effects, transitions, texts, subtitles, or animations for your videos. You can preview your changes in real-time and export your videos in high quality and share them on social media platforms.

      -

      Tips and tricks for using CapCut 4K Mod APK effectively

      -

      Here are some tips and tricks for using CapCut 4K Mod APK effectively:

      -

      Tip 1: Use the advanced editing tools to enhance your videos

      -

      CapCut 4K Mod APK has some advanced editing tools that can help you enhance your videos. For example, you can use the curve tool to adjust the color and tone of your videos. You can also use the keyframe tool to create smooth animations and transitions for your videos. You can also use the chroma key tool to remove the background of your videos and replace it with another image or video.

      -

      Tip 2: Add music, stickers, filters, and effects to make your videos more attractive

      -

      CapCut 4K Mod APK has a huge library of music, stickers, filters, and effects that you can add to your videos. You can use the music tool to add songs or sound effects to your videos. You can also use the sticker tool to add emojis, icons, or images to your videos. You can also use the filter tool to apply different styles and moods to your videos. You can also use the effect tool to add various visual effects to your videos, such as glitch, sparkle, neon, or fire.

      -

      Tip 3: Export your videos in high quality and share them on social media platforms

      -

      CapCut 4K Mod APK allows you to export your videos in high quality up to 4K resolution. You can also choose the frame rate and bitrate of your videos. You can also export your videos in any format or size you want. You can then share your videos on social media platforms like TikTok, Instagram, YouTube, Facebook, or Snapchat with just one click. You can also save your videos to your device gallery or cloud storage.

      -

      Conclusion and FAQs

      -

In conclusion, CapCut is a powerful and easy-to-use video editing app for Android devices that allows you to create stunning videos with ease. CapCut 4K Mod APK is a modified version of the app that lets you edit videos in 4K resolution, unlocks some premium features, and removes some restrictions of the original app. You can download it from various websites that offer modded apps for Android devices, but you need to be careful and choose a trusted and reliable source to avoid malware or virus infection. Follow the steps above to install and use the app on your device, and use the tips and tricks to edit your videos effectively.

      -

      Here are some FAQs that you may have about CapCut and CapCut 4K Mod APK:

      - - - - - - -
Q: Is CapCut safe to use? A: Yes, CapCut is safe to use as it is developed by Bytedance, a reputable company that owns TikTok. It does not contain any ads or malware that can harm your device or data.
Q: Is CapCut 4K Mod APK safe to use? A: It depends on where you download it from. Some websites may offer fake or infected files that can harm your device or data. You need to choose a trusted and reliable source to download the app.
Q: Do I need to root my device to use CapCut 4K Mod APK? A: No, you do not need to root your device to use CapCut 4K Mod APK. You just need to enable unknown sources on your device settings.
Q: How do I update CapCut 4K Mod APK? A: You need to check the website where you downloaded the app for any updates. You may also need to uninstall the previous version and install the new version of the app.
Q: What are some alternatives to CapCut and CapCut 4K Mod APK? A: Some alternatives to CapCut and CapCut 4K Mod APK are Kinemaster, PowerDirector, FilmoraGo, InShot, or VivaVideo. They are also video editing apps for Android devices that have similar features and functions.
      -

      I hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy video editing!

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/skf15963/summary/fengshen/data/task_dataloader/medicalQADataset.py b/spaces/skf15963/summary/fengshen/data/task_dataloader/medicalQADataset.py deleted file mode 100644 index 3d76ed583c7d150769c81d830293909e1c110485..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/data/task_dataloader/medicalQADataset.py +++ /dev/null @@ -1,137 +0,0 @@ -# coding=utf8 -import os -import pytorch_lightning as pl -from torch.utils.data import DataLoader, Dataset -from tqdm import tqdm -from transformers import AutoTokenizer - - -class GPT2QADataset(Dataset): - ''' - Dataset Used for yuyuan medical qa task. - Just surpport small datasets, when deal with large datasets it may be slowly. - for large datasets please use mmapdatasets(doing) - ''' - - def __init__(self, data_path, name, args): - super().__init__() - self.tokenizer = AutoTokenizer.from_pretrained( - args.pretrained_model_path) - if self.tokenizer.pad_token is None: - self.tokenizer.add_special_tokens({'pad_token': '<|endoftext|>'}) - self.data_size = os.path.getsize(data_path)/1024/1024/1024 - self.data_type_name = name - self.data = self.load_data(data_path) - self.max_seq_length = args.max_seq_length - - def __len__(self): - return len(self.data) - - def __getitem__(self, index): - return self.encode(self.data[index]) - - def load_data(self, data_path): - # 有进度条展示 - if self.data_size <= 5: - with open(data_path, "rt", encoding='utf8') as f: - lines = f.readlines() - total_num = len(lines) - data_gen = lines - else: - data_gen = open(data_path, "rt", encoding='utf8') - total_num = None - - data = [] - with tqdm(total=total_num, desc=f'{self.data_type_name}处理进度', mininterval=0.3) as bar: - for idx, line in enumerate(data_gen): - data.append(self.data_parse(line)) - bar.update() - - if self.data_size > 5: - data_gen.close() - return data - - def data_parse(self, line): - """ - 解析不同格式的数据 - """ - dic = eval(line.strip()) - return dic - - def encode(self, item): - """ - 将数据转换成模型训练的输入 - """ - inputs_dict = self.tokenizer.encode_plus(item['Question']+item['answer'], - max_length=self.max_seq_length, padding='max_length', - truncation=True, return_tensors='pt') - target = inputs_dict['input_ids'] - labels = target.clone().detach() - labels[target == self.tokenizer.pad_token_id] = -100 - return { - "input_ids": inputs_dict['input_ids'].squeeze(), - "attention_mask": inputs_dict['attention_mask'].squeeze(), - "labels": labels.squeeze(), - "question": item['Question'], - "answer": item['answer'] - } - - -class GPT2QADataModel(pl.LightningDataModule): - @staticmethod - def add_data_specific_args(parent_args): - parser = parent_args.add_argument_group('GPT2QADataModel') - parser.add_argument('--data_dir', type=str, required=True) - parser.add_argument('--num_workers', default=2, type=int) - parser.add_argument('--train_data', default='train.txt', type=str) - parser.add_argument('--valid_data', default='valid.txt', type=str) - parser.add_argument('--test_data', default='test.txt', type=str) - parser.add_argument('--train_batchsize', type=int, required=True) - parser.add_argument('--valid_batchsize', type=int, required=True) - parser.add_argument('--max_seq_length', default=1024, type=int) - return parent_args - - def __init__(self, args): - super().__init__() - self.args = args - self.train_batchsize = args.train_batchsize - self.valid_batchsize = args.valid_batchsize - if not args.do_eval_only: - self.train_data = GPT2QADataset(os.path.join( - args.data_dir, args.train_data), 
'训练集', args) - self.valid_data = GPT2QADataset(os.path.join( - args.data_dir, args.valid_data), '验证集', args) - self.test_data = GPT2QADataset(os.path.join( - args.data_dir, args.test_data), '测试集', args) - - def train_dataloader(self): - return DataLoader( - self.train_data, shuffle=True, - batch_size=self.train_batchsize, - pin_memory=False, num_workers=self.args.num_workers) - - def val_dataloader(self): - return DataLoader(self.valid_data, shuffle=False, - batch_size=self.valid_batchsize, - pin_memory=False, num_workers=self.args.num_workers) - - def predict_dataloader(self): - return DataLoader(self.test_data, shuffle=False, - batch_size=self.valid_batchsize, pin_memory=False, - num_workers=self.args.num_workers) - - -if __name__ == '__main__': - import argparse - modelfile = '/cognitive_comp/wuziwei/pretrained_model_hf/medical_v2' - datafile = '/cognitive_comp/wuziwei/task-data/medical_qa/medical_qa_train.txt' - parser = argparse.ArgumentParser(description='hf test', allow_abbrev=False) - group = parser.add_argument_group(title='test args') - group.add_argument('--pretrained-model-path', type=str, default=modelfile, - help='Number of transformer layers.') - group.add_argument('--max-seq-length', type=int, default=1024) - args = parser.parse_args() - - testml = GPT2QADataset(datafile, 'medical_qa', args=args) - - print(testml[10]) diff --git a/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/__init__.py b/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/__init__.py deleted file mode 100644 index c7ffcccd7fc0f33b59d99d73d0436d60e561b0fc..0000000000000000000000000000000000000000 --- a/spaces/sklkd93/CodeFormer/CodeFormer/basicsr/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# https://github.com/xinntao/BasicSR -# flake8: noqa -from .archs import * -from .data import * -from .losses import * -from .metrics import * -from .models import * -from .ops import * -from .train import * -from .utils import * -from .version import __gitsha__, __version__ diff --git a/spaces/smf2010/ysfj/templates/index.html b/spaces/smf2010/ysfj/templates/index.html deleted file mode 100644 index 73419b0e4f47cba80652ebe3707b4c817972b407..0000000000000000000000000000000000000000 --- a/spaces/smf2010/ysfj/templates/index.html +++ /dev/null @@ -1,89 +0,0 @@ - - - - - - 因式分解 - - - -
      -

      因式分解

      -
      - - -
      - - {% if result %} -
      -
      -

      因式分解结果:

      -

      {{ result }}

      -
      -
      - {% endif %} - - {% if error %} -
      -
      -

      错误消息:

      -

      {{ error }}

      -
      -
      - {% endif %} -

      2023 星空暗夜团队 版权所有

      -

      下载移动端密码:bizt

      -
      - - -
      -
      - -
      - - -
      - - - \ No newline at end of file diff --git a/spaces/society-ethics/model-card-regulatory-check/tests/cards/albert-base-v2.md b/spaces/society-ethics/model-card-regulatory-check/tests/cards/albert-base-v2.md deleted file mode 100644 index 1ec5e8c1cfd137748e403bb66ef478eabd12f998..0000000000000000000000000000000000000000 --- a/spaces/society-ethics/model-card-regulatory-check/tests/cards/albert-base-v2.md +++ /dev/null @@ -1,263 +0,0 @@ -# ALBERT Base v2 - -Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in -[this paper](https://arxiv.org/abs/1909.11942) and first released in -[this repository](https://github.com/google-research/albert). This model, as all ALBERT models, is uncased: it does not make a difference -between english and English. - -Disclaimer: The team releasing ALBERT did not write a model card for this model so this model card has been written by -the Hugging Face team. - -## Model description - -ALBERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it -was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of -publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it -was pretrained with two objectives: - -- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run - the entire masked sentence through the model and has to predict the masked words. This is different from traditional - recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like - GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the - sentence. -- Sentence Ordering Prediction (SOP): ALBERT uses a pretraining loss based on predicting the ordering of two consecutive segments of text. - -This way, the model learns an inner representation of the English language that can then be used to extract features -useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard -classifier using the features produced by the ALBERT model as inputs. - -ALBERT is particular in that it shares its layers across its Transformer. Therefore, all layers have the same weights. Using repeating layers results in a small memory footprint, however, the computational cost remains similar to a BERT-like architecture with the same number of hidden layers as it has to iterate through the same number of (repeating) layers. - -This is the second version of the base model. Version 2 is different from version 1 due to different dropout rates, additional training data, and longer training. It has better results in nearly all downstream tasks. - -This model has the following configuration: - -- 12 repeating layers -- 128 embedding dimension -- 768 hidden dimension -- 12 attention heads -- 11M parameters - -## Intended uses & limitations - -You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to -be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=albert) to look for -fine-tuned versions on a task that interests you. 
- -Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) -to make decisions, such as sequence classification, token classification or question answering. For tasks such as text -generation you should look at model like GPT2. - -### How to use - -You can use this model directly with a pipeline for masked language modeling: - -```python ->>> from transformers import pipeline ->>> unmasker = pipeline('fill-mask', model='albert-base-v2') ->>> unmasker("Hello I'm a [MASK] model.") -[ - { - "sequence":"[CLS] hello i'm a modeling model.[SEP]", - "score":0.05816134437918663, - "token":12807, - "token_str":"▁modeling" - }, - { - "sequence":"[CLS] hello i'm a modelling model.[SEP]", - "score":0.03748830780386925, - "token":23089, - "token_str":"▁modelling" - }, - { - "sequence":"[CLS] hello i'm a model model.[SEP]", - "score":0.033725276589393616, - "token":1061, - "token_str":"▁model" - }, - { - "sequence":"[CLS] hello i'm a runway model.[SEP]", - "score":0.017313428223133087, - "token":8014, - "token_str":"▁runway" - }, - { - "sequence":"[CLS] hello i'm a lingerie model.[SEP]", - "score":0.014405295252799988, - "token":29104, - "token_str":"▁lingerie" - } -] -``` - -Here is how to use this model to get the features of a given text in PyTorch: - -```python -from transformers import AlbertTokenizer, AlbertModel -tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2') -model = AlbertModel.from_pretrained("albert-base-v2") -text = "Replace me by any text you'd like." -encoded_input = tokenizer(text, return_tensors='pt') -output = model(**encoded_input) -``` - -and in TensorFlow: - -```python -from transformers import AlbertTokenizer, TFAlbertModel -tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2'') -model = TFAlbertModel.from_pretrained("albert-base-v2) -text = "Replace me by any text you'd like." 
-encoded_input = tokenizer(text, return_tensors='tf') -output = model(encoded_input) -``` - -### Limitations and bias - -Even if the training data used for this model could be characterized as fairly neutral, this model can have biased -predictions: - -```python ->>> from transformers import pipeline ->>> unmasker = pipeline('fill-mask', model='albert-base-v2') ->>> unmasker("The man worked as a [MASK].") - -[ - { - "sequence":"[CLS] the man worked as a chauffeur.[SEP]", - "score":0.029577180743217468, - "token":28744, - "token_str":"▁chauffeur" - }, - { - "sequence":"[CLS] the man worked as a janitor.[SEP]", - "score":0.028865724802017212, - "token":29477, - "token_str":"▁janitor" - }, - { - "sequence":"[CLS] the man worked as a shoemaker.[SEP]", - "score":0.02581118606030941, - "token":29024, - "token_str":"▁shoemaker" - }, - { - "sequence":"[CLS] the man worked as a blacksmith.[SEP]", - "score":0.01849772222340107, - "token":21238, - "token_str":"▁blacksmith" - }, - { - "sequence":"[CLS] the man worked as a lawyer.[SEP]", - "score":0.01820771023631096, - "token":3672, - "token_str":"▁lawyer" - } -] - ->>> unmasker("The woman worked as a [MASK].") - -[ - { - "sequence":"[CLS] the woman worked as a receptionist.[SEP]", - "score":0.04604868218302727, - "token":25331, - "token_str":"▁receptionist" - }, - { - "sequence":"[CLS] the woman worked as a janitor.[SEP]", - "score":0.028220869600772858, - "token":29477, - "token_str":"▁janitor" - }, - { - "sequence":"[CLS] the woman worked as a paramedic.[SEP]", - "score":0.0261906236410141, - "token":23386, - "token_str":"▁paramedic" - }, - { - "sequence":"[CLS] the woman worked as a chauffeur.[SEP]", - "score":0.024797942489385605, - "token":28744, - "token_str":"▁chauffeur" - }, - { - "sequence":"[CLS] the woman worked as a waitress.[SEP]", - "score":0.024124596267938614, - "token":13678, - "token_str":"▁waitress" - } -] -``` - -This bias will also affect all fine-tuned versions of this model. - -## Training data - -The ALBERT model was pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 -unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and -headers). - -## Training procedure - -### Preprocessing - -The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 30,000. The inputs of the model are -then of the form: - -``` -[CLS] Sentence A [SEP] Sentence B [SEP] -``` - -### Training - -The ALBERT procedure follows the BERT setup. - -The details of the masking procedure for each sentence are the following: -- 15% of the tokens are masked. -- In 80% of the cases, the masked tokens are replaced by `[MASK]`. -- In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. -- In the 10% remaining cases, the masked tokens are left as is. 
- -## Evaluation results - -When fine-tuned on downstream tasks, the ALBERT models achieve the following results: - -| | Average | SQuAD1.1 | SQuAD2.0 | MNLI | SST-2 | RACE | -|----------------|----------|----------|----------|----------|----------|----------| -|V2 | -|ALBERT-base |82.3 |90.2/83.2 |82.1/79.3 |84.6 |92.9 |66.8 | -|ALBERT-large |85.7 |91.8/85.2 |84.9/81.8 |86.5 |94.9 |75.2 | -|ALBERT-xlarge |87.9 |92.9/86.4 |87.9/84.1 |87.9 |95.4 |80.7 | -|ALBERT-xxlarge |90.9 |94.6/89.1 |89.8/86.9 |90.6 |96.8 |86.8 | -|V1 | -|ALBERT-base |80.1 |89.3/82.3 | 80.0/77.1|81.6 |90.3 | 64.0 | -|ALBERT-large |82.4 |90.6/83.9 | 82.3/79.4|83.5 |91.7 | 68.5 | -|ALBERT-xlarge |85.5 |92.5/86.1 | 86.1/83.1|86.4 |92.4 | 74.8 | -|ALBERT-xxlarge |91.0 |94.8/89.3 | 90.2/87.4|90.8 |96.9 | 86.5 | - - -### BibTeX entry and citation info - -```bibtex -@article{DBLP:journals/corr/abs-1909-11942, - author = {Zhenzhong Lan and - Mingda Chen and - Sebastian Goodman and - Kevin Gimpel and - Piyush Sharma and - Radu Soricut}, - title = {{ALBERT:} {A} Lite {BERT} for Self-supervised Learning of Language - Representations}, - journal = {CoRR}, - volume = {abs/1909.11942}, - year = {2019}, - url = {http://arxiv.org/abs/1909.11942}, - archivePrefix = {arXiv}, - eprint = {1909.11942}, - timestamp = {Fri, 27 Sep 2019 13:04:21 +0200}, - biburl = {https://dblp.org/rec/journals/corr/abs-1909-11942.bib}, - bibsource = {dblp computer science bibliography, https://dblp.org} -} -``` \ No newline at end of file diff --git a/spaces/sohojoe/soho-clip-embeddings-explorer/api_helper.py b/spaces/sohojoe/soho-clip-embeddings-explorer/api_helper.py deleted file mode 100644 index 392d6378cc4150e3c6a82a2ada0ebce8f72bbf40..0000000000000000000000000000000000000000 --- a/spaces/sohojoe/soho-clip-embeddings-explorer/api_helper.py +++ /dev/null @@ -1,56 +0,0 @@ -from PIL import Image -import numpy as np -import base64 -import json -from torchvision.transforms import Compose, Resize, CenterCrop - -# support sending images as base64 - -def encode_numpy_array(image_np): - # Flatten the numpy array and convert it to bytes - image_bytes = image_np.tobytes() - - # Encode the byte data as base64 - encoded_image = base64.b64encode(image_bytes).decode() - payload = { - "encoded_image": encoded_image, - "width": image_np.shape[1], - "height": image_np.shape[0], - "channels": image_np.shape[2], - } - payload_json = json.dumps(payload) - return payload_json - -def decode_numpy_array(payload): - payload_json = json.loads(payload) - # payload_json = payload.json() - encoded_image = payload_json["encoded_image"] - width = payload_json["width"] - height = payload_json["height"] - channels = payload_json["channels"] - # Decode the base64 data - decoded_image = base64.b64decode(encoded_image) - - # Convert the byte data back to a NumPy array - image_np = np.frombuffer(decoded_image, dtype=np.uint8).reshape(height, width, channels) - - return image_np - - -def preprocess_image(image_np, max_size=224): - # Convert the numpy array to a PIL image - image = Image.fromarray(image_np) - - # Define the transformation pipeline - transforms = Compose([ - Resize(max_size, interpolation=Image.BICUBIC), - CenterCrop(max_size), - ]) - - # Apply the transformations to the image - image = transforms(image) - - # Convert the PIL image back to a numpy array - image_np = np.array(image) - - return image_np \ No newline at end of file diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/tests/distributed/test_utils.py 
b/spaces/sriramelango/Social_Classification_Public/fairseq/tests/distributed/test_utils.py deleted file mode 100644 index 30f995b67acd39af5816d2eb412d6b4df7f44f8c..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/tests/distributed/test_utils.py +++ /dev/null @@ -1,124 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import functools -import sys -import unittest - -import torch - -from fairseq.distributed import utils as dist_utils - -from .utils import objects_are_equal, spawn_and_init - - -class DistributedTest(unittest.TestCase): - def setUp(self): - if not torch.cuda.is_available(): - raise unittest.SkipTest("CUDA not available, skipping test") - if sys.platform == "win32": - raise unittest.SkipTest("NCCL doesn't support Windows, skipping test") - if torch.cuda.device_count() < 2: - raise unittest.SkipTest("distributed tests require 2+ GPUs, skipping") - - -class TestBroadcastObject(DistributedTest): - def test_str(self): - spawn_and_init( - functools.partial( - TestBroadcastObject._test_broadcast_object, "hello world" - ), - world_size=2, - ) - - def test_tensor(self): - spawn_and_init( - functools.partial( - TestBroadcastObject._test_broadcast_object, - torch.rand(5), - ), - world_size=2, - ) - - def test_complex(self): - spawn_and_init( - functools.partial( - TestBroadcastObject._test_broadcast_object, - { - "a": "1", - "b": [2, torch.rand(2, 3), 3], - "c": (torch.rand(2, 3), 4), - "d": {5, torch.rand(5)}, - "e": torch.rand(5), - "f": torch.rand(5).int().cuda(), - }, - ), - world_size=2, - ) - - @staticmethod - def _test_broadcast_object(ref_obj, rank, group): - obj = dist_utils.broadcast_object( - ref_obj if rank == 0 else None, src_rank=0, group=group - ) - assert objects_are_equal(ref_obj, obj) - - -class TestAllGatherList(DistributedTest): - def test_str_equality(self): - spawn_and_init( - functools.partial( - TestAllGatherList._test_all_gather_list_equality, - "hello world", - ), - world_size=2, - ) - - def test_tensor_equality(self): - spawn_and_init( - functools.partial( - TestAllGatherList._test_all_gather_list_equality, - torch.rand(5), - ), - world_size=2, - ) - - def test_complex_equality(self): - spawn_and_init( - functools.partial( - TestAllGatherList._test_all_gather_list_equality, - { - "a": "1", - "b": [2, torch.rand(2, 3), 3], - "c": (torch.rand(2, 3), 4), - "d": {5, torch.rand(5)}, - "e": torch.rand(5), - "f": torch.rand(5).int(), - }, - ), - world_size=2, - ) - - @staticmethod - def _test_all_gather_list_equality(ref_obj, rank, group): - objs = dist_utils.all_gather_list(ref_obj, group) - for obj in objs: - assert objects_are_equal(ref_obj, obj) - - def test_rank_tensor(self): - spawn_and_init( - TestAllGatherList._test_all_gather_list_rank_tensor, world_size=2 - ) - - @staticmethod - def _test_all_gather_list_rank_tensor(rank, group): - obj = torch.tensor([rank]) - objs = dist_utils.all_gather_list(obj, group) - for i, obj in enumerate(objs): - assert obj.item() == i - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/stomexserde/gpt4-ui/Examples/AlinaPamfilStudiideDidacticaLiteraturiiRomanepdf.md b/spaces/stomexserde/gpt4-ui/Examples/AlinaPamfilStudiideDidacticaLiteraturiiRomanepdf.md deleted file mode 100644 index a30f08814390e3e0d061925295228ccac83c3cdd..0000000000000000000000000000000000000000 --- 
a/spaces/stomexserde/gpt4-ui/Examples/AlinaPamfilStudiideDidacticaLiteraturiiRomanepdf.md +++ /dev/null @@ -1,14 +0,0 @@ - -

      Alina Pamfil: A Modern Approach to Teaching Romanian Literature

      -

      Alina Pamfil is a Romanian professor and author who has published several books and articles on the didactics of Romanian literature. Her work aims to provide a coherent and modern framework for teaching and learning literature in the context of the current educational reforms and challenges. She advocates for a student-centered and interactive approach that fosters critical thinking, creativity and personal development.

      -

      AlinaPamfilStudiideDidacticaLiteraturiiRomanepdf


      Download Zip · https://urlgoal.com/2uI7r0



      -

      One of her most influential books is Studii de didactica literaturii române (Studies on the Didactics of Romanian Literature), published in 2006 by Casa Cărţii de Ştiinţă. In this book, she explores the different paradigms, concepts and methodologies that inform the teaching of literature, such as: literature as a museum of works or as a dynamic field; reading as decoding or as constructing meaning; literary text as an object of structural and stylistic analysis or as a complex message that generates plural interpretations[^1^]. She also proposes practical strategies and scenarios for developing students' literary competence and appreciation.

      -

      Another important book by Alina Pamfil is Limba şi literatura romană în gimnaziu: Structuri didactice deschise (Romanian Language and Literature in Middle School: Open Didactic Structures), published in 2008 by Paralela Educaţional. In this book, she offers a series of open didactic structures that can be adapted and used by teachers to design their own lessons and activities for teaching Romanian language and literature. The book covers topics such as: grammar, vocabulary, text types, genres, literary movements, authors and works[^3^]. The book also includes examples of students' work and feedback.

      -

      Alina Pamfil's books and articles are valuable resources for teachers, students and researchers who are interested in the didactics of Romanian literature. They provide theoretical insights, practical guidance and innovative ideas for making literature teaching and learning more engaging, meaningful and enjoyable.

      -

      - -

      Alina Pamfil is not only a scholar and a writer, but also a teacher and a trainer. She has taught Romanian language and literature at various levels of education, from primary school to university. She has also been involved in several national and international projects and programs for teacher training and curriculum development. She has collaborated with institutions such as the Ministry of Education, the National Council for Curriculum, the Institute for Educational Sciences, the British Council, the Goethe Institute and the Soros Foundation.

      -

      Alina Pamfil is recognized as one of the leading experts and innovators in the field of didactics of Romanian literature. She has received numerous awards and distinctions for her work, such as: the Order of Cultural Merit from the President of Romania in 2004; the Excellence Award from the Romanian Association for Quality Language Services in 2007; the National Award for Education from the Romanian Academy in 2010; and the Honorary Member Award from the Romanian Society for Romanian Language and Literature in 2013.

      -

      Alina Pamfil's books and articles have been widely read, cited and reviewed by scholars, teachers and students. They have also been translated into other languages, such as English, French, German, Spanish and Hungarian. Her work has contributed to the advancement of knowledge and practice in the didactics of Romanian literature, as well as to the promotion of Romanian culture and identity in the world.

      cec2833e83
      -
      -
      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Anjaan Full Movie Hd 1080p Downloadl.md b/spaces/stomexserde/gpt4-ui/Examples/Anjaan Full Movie Hd 1080p Downloadl.md deleted file mode 100644 index dfd5c0d7b414f5bc5527939c4d0196e218dac782..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Anjaan Full Movie Hd 1080p Downloadl.md +++ /dev/null @@ -1,17 +0,0 @@ -
-

      Anjaan: A Tamil Action Thriller with Dhanush and Amyra Dastur

      -

      Anjaan is a 2014 Tamil action thriller film directed by N. Lingusamy and produced by Thirrupathi Brothers. The film stars Dhanush and Amyra Dastur in the lead roles, along with Suriya, Samantha Akkineni, Vidyut Jammwal, Manoj Bajpayee and Dalip Tahil in supporting roles. The film revolves around the life of a gangster named Raju Bhai (Dhanush), who falls in love with a girl named Jeeva (Amyra Dastur) in his past life. The film explores the concept of reincarnation and how Raju Bhai's enemies from his past life try to kill him and his lover in the present.

      -

      Anjaan Full Movie Hd 1080p Downloadl


      Download File ››› https://urlgoal.com/2uI8SU



      -

      Anjaan was released on 15 August 2014, coinciding with Independence Day in India. The film received mixed reviews from critics, who praised the performances of Dhanush and Amyra Dastur, but criticized the screenplay, direction and editing. The film was a commercial success, grossing over ₹100 crore worldwide. The film was dubbed into Telugu as Sikandar and into Hindi as Khatarnak Khiladi 2.

      -

If you are a fan of Tamil action movies, you can watch Anjaan online or download it in full HD 1080p quality from various websites. However, we advise you to watch or download Anjaan legally from authorized platforms only, such as Amazon Prime Video, Hotstar, Zee5 or YouTube. Downloading or streaming Anjaan from illegal or pirated websites may land you in trouble with the law and expose your device to malware and viruses. Therefore, we recommend that you avoid websites like Movienolimit, Afilmywap, 9xmovies, Filmyzilla, Bollyshare, Mkvhub, Khatrimaza, 8movierulz, Downloadhub, Mfzmovies, Filmypur, Moviewatcher, Godownloadmovies, 123movieshub, Katmoviehd or Mobilemovies to watch or download Anjaan.

      -

      Instead, you can enjoy Anjaan in full HD 1080p quality on Amazon Prime Video with a subscription fee of ₹999 per year or ₹129 per month. You can also watch Anjaan on Hotstar with a subscription fee of ₹399 per year or ₹299 per month. Alternatively, you can watch Anjaan on Zee5 with a subscription fee of ₹999 per year or ₹99 per month. You can also rent or buy Anjaan on YouTube for ₹25 or ₹50 respectively.

      -

      Anjaan is a thrilling and entertaining movie that will keep you hooked till the end. Watch it today on any of the legal platforms mentioned above and enjoy the action-packed story of Raju Bhai and Jeeva.

      Here are a few more paragraphs for the article: - -

      Anjaan is not just a typical action movie, but also a romantic drama that explores the theme of reincarnation. The film shows how Raju Bhai and Jeeva are connected by their love across different lifetimes and how they face various challenges and enemies to protect their relationship. The film also has a twist in the end that reveals the true identity of Raju Bhai and his connection with Suriya, who plays a dual role in the film.

      -

      -

      The film has some stunning visuals and cinematography that capture the beauty of Mumbai and Goa, where the film is set. The film also has some high-octane action sequences and stunts that showcase the skills and charisma of Dhanush and Vidyut Jammwal, who play the roles of Raju Bhai and Chandru, his best friend and partner in crime. The film also has some catchy songs and background music composed by Yuvan Shankar Raja, who is known for his collaboration with director N. Lingusamy.

      -

      Anjaan is a film that will appeal to the fans of Dhanush and Suriya, who are two of the most popular and versatile actors in Tamil cinema. The film also has a strong supporting cast that includes Samantha Akkineni, who plays the role of Jeeva's friend in the present life, Manoj Bajpayee, who plays the role of Imran Bhai, the main antagonist of the film, and Dalip Tahil, who plays the role of Raju Bhai's father. The film also has some special appearances by Brahmanandam, Asif Basra and Soori.

      7196e7f11a
      -
      -
      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Gmod Fnaf Mod !!LINK!!.md b/spaces/stomexserde/gpt4-ui/Examples/Gmod Fnaf Mod !!LINK!!.md deleted file mode 100644 index 3afe27f14696b7c1005344202040fb8e9e4d586c..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Gmod Fnaf Mod !!LINK!!.md +++ /dev/null @@ -1,32 +0,0 @@ - -

      How to Play Five Nights at Freddy's in Garry's Mod

      -

      If you are a fan of the horror game series Five Nights at Freddy's (FNaF), you might want to try playing it in Garry's Mod (Gmod), a sandbox game that lets you create and manipulate any objects and characters. In this article, we will show you how to install and play some of the best FNaF mods for Gmod, as well as some tips and tricks to enhance your gameplay experience.

      -

      What is Gmod Fnaf Mod?

      -

      Gmod Fnaf Mod is a term that refers to any mod or addon that adds FNaF content to Gmod, such as models, maps, effects, sounds, etc. There are hundreds of FNaF mods for Gmod on the Steam Workshop, created by various authors and fans. Some of them are based on the official FNaF games, while others are fan-made or original creations.

      -

      Gmod Fnaf Mod


      DOWNLOAD 🆗 https://urlgoal.com/2uI8Uc



      -

      How to Install Gmod Fnaf Mod?

      -

      To install Gmod Fnaf Mod, you need to have Gmod installed on your computer. You can buy it from Steam for $9.99. Then, you need to browse the Steam Workshop for FNaF mods that you like. You can use the search bar or the tags to filter the results. For example, you can type "Gmod Fnaf Mod" or "FNaF World" in the search bar, or use tags like "Addon", "Map", "Model", etc.

      -

      Once you find a mod that you want to install, click on it and then click on the green "Subscribe" button. This will automatically download and install the mod to your Gmod folder. You can also unsubscribe from a mod if you don't want it anymore. To manage your subscribed mods, go to your Steam Library, right-click on Garry's Mod, select Properties, then go to the Local Files tab and click on Browse Local Files. This will open your Gmod folder, where you can find a subfolder called "addons". Here you can see all your subscribed mods and delete or move them as you wish.

      -

      How to Play Gmod Fnaf Mod?

      -

      To play Gmod Fnaf Mod, you need to launch Garry's Mod from Steam and select a game mode. You can choose between Singleplayer or Multiplayer, depending on whether you want to play alone or with other players online. Then, you need to select a map that is compatible with FNaF mods. You can use the search bar or the categories to find FNaF maps. For example, some popular FNaF maps are:

      -
        -
      • Freddy Fazbear's Pizza (from FNaF 1)
      • -
      • Fazbear's Fright (from FNaF 3)
      • -
      • Circus Baby's Pizza World (from FNaF Sister Location)
      • -
      • Ultimate Custom Night (from FNaF 6)
      • -
      -

      After selecting a map, click on Start Game and wait for it to load. Once you are in the game, press Q to open the Spawn Menu. Here you can find all your installed mods and addons under different categories. You can spawn any FNaF models, effects, sounds, etc. by clicking on them and placing them in the map. You can also use tools like Physics Gun, Tool Gun, Camera, etc. to manipulate them as you wish.

      -

      -

      You can also play FNaF scenarios or challenges by using some specific mods or addons that add gameplay elements or scripts to the game. For example, some of these are:

      -
        -
      • Pill Pack: This mod lets you transform into different FNaF animatronics and use their abilities.
      • -
      • NPCs: These are non-player characters that can move and interact with you and other objects.
      • -
      • Events: These are scripted sequences that trigger certain actions or events in the game.
      • -
      • Survival: This is a game mode that pits you against waves of enemies or hazards.
      • -
      -

      Tips and Tricks for Playing Gmod Fnaf Mod

      -

      Here are some tips and tricks for playing Gmod Fnaf Mod:

      -
        -
Use headphones and turn up the volume so you can hear every sound cue and get the full effect of the jumpscares.

        81aa517590
        -
        -
        \ No newline at end of file diff --git a/spaces/sub314xxl/MetaGPT/metagpt/utils/mermaid.py b/spaces/sub314xxl/MetaGPT/metagpt/utils/mermaid.py deleted file mode 100644 index 15fd08625a6689c46a8395d13532490e4ecfe793..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/metagpt/utils/mermaid.py +++ /dev/null @@ -1,114 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/7/4 10:53 -@Author : alexanderwu -@File : mermaid.py -@Modified By: mashenquan, 2023/8/20. Remove global configuration `CONFIG`, enable configuration support for business isolation. -""" -import asyncio -from pathlib import Path - -# from metagpt.utils.common import check_cmd_exists -import aiofiles - -from metagpt.config import CONFIG, Config -from metagpt.const import PROJECT_ROOT -from metagpt.logs import logger - - -async def mermaid_to_file(mermaid_code, output_file_without_suffix, width=2048, height=2048) -> int: - """suffix: png/svg/pdf - - :param mermaid_code: mermaid code - :param output_file_without_suffix: output filename - :param width: - :param height: - :return: 0 if succed, -1 if failed - """ - # Write the Mermaid code to a temporary file - tmp = Path(f"{output_file_without_suffix}.mmd") - async with aiofiles.open(tmp, "w", encoding="utf-8") as f: - await f.write(mermaid_code) - # tmp.write_text(mermaid_code, encoding="utf-8") - - # if check_cmd_exists("mmdc") != 0: - # logger.warning("RUN `npm install -g @mermaid-js/mermaid-cli` to install mmdc") - # return -1 - - # for suffix in ["pdf", "svg", "png"]: - for suffix in ["png"]: - output_file = f"{output_file_without_suffix}.{suffix}" - # Call the `mmdc` command to convert the Mermaid code to a PNG - logger.info(f"Generating {output_file}..") - cmds = [CONFIG.mmdc, "-i", str(tmp), "-o", output_file, "-w", str(width), "-H", str(height)] - - if CONFIG.puppeteer_config: - cmds.extend(["-p", CONFIG.puppeteer_config]) - process = await asyncio.create_subprocess_exec(*cmds) - await process.wait() - return process.returncode - - -if __name__ == "__main__": - MMC1 = """classDiagram - class Main { - -SearchEngine search_engine - +main() str - } - class SearchEngine { - -Index index - -Ranking ranking - -Summary summary - +search(query: str) str - } - class Index { - -KnowledgeBase knowledge_base - +create_index(data: dict) - +query_index(query: str) list - } - class Ranking { - +rank_results(results: list) list - } - class Summary { - +summarize_results(results: list) str - } - class KnowledgeBase { - +update(data: dict) - +fetch_data(query: str) dict - } - Main --> SearchEngine - SearchEngine --> Index - SearchEngine --> Ranking - SearchEngine --> Summary - Index --> KnowledgeBase""" - - MMC2 = """sequenceDiagram - participant M as Main - participant SE as SearchEngine - participant I as Index - participant R as Ranking - participant S as Summary - participant KB as KnowledgeBase - M->>SE: search(query) - SE->>I: query_index(query) - I->>KB: fetch_data(query) - KB-->>I: return data - I-->>SE: return results - SE->>R: rank_results(results) - R-->>SE: return ranked_results - SE->>S: summarize_results(ranked_results) - S-->>SE: return summary - SE-->>M: return summary""" - - conf = Config() - asyncio.run( - mermaid_to_file( - options=conf.runtime_options, mermaid_code=MMC1, output_file_without_suffix=PROJECT_ROOT / "tmp/1.png" - ) - ) - asyncio.run( - mermaid_to_file( - options=conf.runtime_options, mermaid_code=MMC2, output_file_without_suffix=PROJECT_ROOT / "tmp/2.png" - ) - ) diff --git a/spaces/sub314xxl/zeroscope/app.py 
b/spaces/sub314xxl/zeroscope/app.py deleted file mode 100644 index 3dd1e57d5a7df1db67ee393efb2d108fb74f3f23..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/zeroscope/app.py +++ /dev/null @@ -1,154 +0,0 @@ -import gradio as gr -from share_btn import community_icon_html, loading_icon_html, share_js -import torch -from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler -from diffusers.utils import export_to_video - -pipe = DiffusionPipeline.from_pretrained("cerspense/zeroscope_v2_576w", torch_dtype=torch.float16) -pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) -pipe.enable_model_cpu_offload() - -def infer(prompt): - negative_prompt = "text, watermark, copyright, blurry, nsfw" - video_frames = pipe(prompt, negative_prompt=negative_prompt, num_inference_steps=40, height=320, width=576, num_frames=24).frames - video_path = export_to_video(video_frames) - print(video_path) - return video_path, gr.Group.update(visible=True) - -css = """ -#col-container {max-width: 510px; margin-left: auto; margin-right: auto;} -a {text-decoration-line: underline; font-weight: 600;} -.animate-spin { - animation: spin 1s linear infinite; -} - -@keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } -} - -#share-btn-container { - display: flex; - padding-left: 0.5rem !important; - padding-right: 0.5rem !important; - background-color: #000000; - justify-content: center; - align-items: center; - border-radius: 9999px !important; - max-width: 13rem; -} - -#share-btn-container:hover { - background-color: #060606; -} - -#share-btn { - all: initial; - color: #ffffff; - font-weight: 600; - cursor:pointer; - font-family: 'IBM Plex Sans', sans-serif; - margin-left: 0.5rem !important; - padding-top: 0.5rem !important; - padding-bottom: 0.5rem !important; - right:0; -} - -#share-btn * { - all: unset; -} - -#share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; -} - -#share-btn-container .wrap { - display: none !important; -} - -#share-btn-container.hidden { - display: none!important; -} -img[src*='#center'] { - display: block; - margin: auto; -} - -.footer { - margin-bottom: 45px; - margin-top: 10px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } -""" - -with gr.Blocks(css=css) as demo: - with gr.Column(elem_id="col-container"): - gr.Markdown( - """ -

        Zeroscope Text-to-Video

        -

        - A watermark-free Modelscope-based video model optimized for producing high-quality 16:9 compositions and a smooth video output.
        -

        - - [![Duplicate this Space](https://huggingface.co/datasets/huggingface/badges/raw/main/duplicate-this-space-sm.svg#center)](https://huggingface.co/spaces/fffiloni/zeroscope?duplicate=true) - - """ - ) - - prompt_in = gr.Textbox(label="Prompt", placeholder="Darth Vader is surfing on waves", elem_id="prompt-in") - #neg_prompt = gr.Textbox(label="Negative prompt", value="text, watermark, copyright, blurry, nsfw", elem_id="neg-prompt-in") - #inference_steps = gr.Slider(label="Inference Steps", minimum=10, maximum=100, step=1, value=40, interactive=False) - submit_btn = gr.Button("Submit") - video_result = gr.Video(label="Video Output", elem_id="video-output") - - with gr.Group(elem_id="share-btn-container", visible=False) as share_group: - community_icon = gr.HTML(community_icon_html) - loading_icon = gr.HTML(loading_icon_html) - share_button = gr.Button("Share to community", elem_id="share-btn") - - gr.HTML(""" - -
        -

        You may also like:

        -
        - - - - - -
        -
        - """) - - submit_btn.click(fn=infer, - inputs=[prompt_in], - outputs=[video_result, share_group]) - - share_button.click(None, [], [], _js=share_js) - -demo.queue(max_size=12).launch() - \ No newline at end of file diff --git a/spaces/subhajitmaji/MusicGen/README.md b/spaces/subhajitmaji/MusicGen/README.md deleted file mode 100644 index e36f3c1f8803b85b58ec328405b0195fb7347829..0000000000000000000000000000000000000000 --- a/spaces/subhajitmaji/MusicGen/README.md +++ /dev/null @@ -1,141 +0,0 @@ ---- -title: MusicGen -python_version: '3.9' -tags: -- music generation -- language models -- LLMs -app_file: app.py -emoji: 🎵 -colorFrom: white -colorTo: blue -sdk: gradio -sdk_version: 3.34.0 -pinned: true -license: cc-by-nc-4.0 -duplicated_from: facebook/MusicGen ---- -# Audiocraft -![docs badge](https://github.com/facebookresearch/audiocraft/workflows/audiocraft_docs/badge.svg) -![linter badge](https://github.com/facebookresearch/audiocraft/workflows/audiocraft_linter/badge.svg) -![tests badge](https://github.com/facebookresearch/audiocraft/workflows/audiocraft_tests/badge.svg) - -Audiocraft is a PyTorch library for deep learning research on audio generation. At the moment, it contains the code for MusicGen, a state-of-the-art controllable text-to-music model. - -## MusicGen - -Audiocraft provides the code and models for MusicGen, [a simple and controllable model for music generation][arxiv]. MusicGen is a single stage auto-regressive -Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz. Unlike existing methods like [MusicLM](https://arxiv.org/abs/2301.11325), MusicGen doesn't require a self-supervised semantic representation, and it generates -all 4 codebooks in one pass. By introducing a small delay between the codebooks, we show we can predict -them in parallel, thus having only 50 auto-regressive steps per second of audio. -Check out our [sample page][musicgen_samples] or test the available demo! - - - Open In Colab - - - Open in HugginFace - -
        - -We use 20K hours of licensed music to train MusicGen. Specifically, we rely on an internal dataset of 10K high-quality music tracks, and on the ShutterStock and Pond5 music data. - -## Installation -Audiocraft requires Python 3.9, PyTorch 2.0.0, and a GPU with at least 16 GB of memory (for the medium-sized model). To install Audiocraft, you can run the following: - -```shell -# Best to make sure you have torch installed first, in particular before installing xformers. -# Don't run this if you already have PyTorch installed. -pip install 'torch>=2.0' -# Then proceed to one of the following -pip install -U audiocraft # stable release -pip install -U git+https://git@github.com/facebookresearch/audiocraft#egg=audiocraft # bleeding edge -pip install -e . # or if you cloned the repo locally -``` - -## Usage -We offer a number of way to interact with MusicGen: -1. A demo is also available on the [`facebook/MusicGen` HuggingFace Space](https://huggingface.co/spaces/facebook/MusicGen) (huge thanks to all the HF team for their support). -2. You can run the Gradio demo in Colab: [colab notebook](https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing). -3. You can use the gradio demo locally by running `python app.py`. -4. You can play with MusicGen by running the jupyter notebook at [`demo.ipynb`](./demo.ipynb) locally (if you have a GPU). -5. Finally, checkout [@camenduru Colab page](https://github.com/camenduru/MusicGen-colab) which is regularly - updated with contributions from @camenduru and the community. - -## API - -We provide a simple API and 4 pre-trained models. The pre trained models are: -- `small`: 300M model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-small) -- `medium`: 1.5B model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-medium) -- `melody`: 1.5B model, text to music and text+melody to music - [🤗 Hub](https://huggingface.co/facebook/musicgen-melody) -- `large`: 3.3B model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-large) - -We observe the best trade-off between quality and compute with the `medium` or `melody` model. -In order to use MusicGen locally **you must have a GPU**. We recommend 16GB of memory, but smaller -GPUs will be able to generate short sequences, or longer sequences with the `small` model. - -**Note**: Please make sure to have [ffmpeg](https://ffmpeg.org/download.html) installed when using newer version of `torchaudio`. -You can install it with: -``` -apt-get install ffmpeg -``` - -See after a quick example for using the API. - -```python -import torchaudio -from audiocraft.models import MusicGen -from audiocraft.data.audio import audio_write - -model = MusicGen.get_pretrained('melody') -model.set_generation_params(duration=8) # generate 8 seconds. -wav = model.generate_unconditional(4) # generates 4 unconditional audio samples -descriptions = ['happy rock', 'energetic EDM', 'sad jazz'] -wav = model.generate(descriptions) # generates 3 samples. - -melody, sr = torchaudio.load('./assets/bach.mp3') -# generates using the melody from the given audio and the provided descriptions. -wav = model.generate_with_chroma(descriptions, melody[None].expand(3, -1, -1), sr) - -for idx, one_wav in enumerate(wav): - # Will save under {idx}.wav, with loudness normalization at -14 db LUFS. - audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness", loudness_compressor=True) -``` - - -## Model Card - -See [the model card page](./MODEL_CARD.md). 
- -## FAQ - -#### Will the training code be released? - -Yes. We will soon release the training code for MusicGen and EnCodec. - - -#### I need help on Windows - -@FurkanGozukara made a complete tutorial for [Audiocraft/MusicGen on Windows](https://youtu.be/v-YpvPkhdO4) - -#### I need help for running the demo on Colab - -Check [@camenduru tutorial on Youtube](https://www.youtube.com/watch?v=EGfxuTy9Eeo). - - -## Citation -``` -@article{copet2023simple, - title={Simple and Controllable Music Generation}, - author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez}, - year={2023}, - journal={arXiv preprint arXiv:2306.05284}, -} -``` - -## License -* The code in this repository is released under the MIT license as found in the [LICENSE file](LICENSE). -* The weights in this repository are released under the CC-BY-NC 4.0 license as found in the [LICENSE_weights file](LICENSE_weights). - -[arxiv]: https://arxiv.org/abs/2306.05284 -[musicgen_samples]: https://ai.honu.io/papers/musicgen/ diff --git a/spaces/sudeepshouche/minimalist/theme_dropdown.py b/spaces/sudeepshouche/minimalist/theme_dropdown.py deleted file mode 100644 index 6235388fd00549553df44028f3ccf03e946994ea..0000000000000000000000000000000000000000 --- a/spaces/sudeepshouche/minimalist/theme_dropdown.py +++ /dev/null @@ -1,57 +0,0 @@ -import os -import pathlib - -from gradio.themes.utils import ThemeAsset - - -def create_theme_dropdown(): - import gradio as gr - - asset_path = pathlib.Path(__file__).parent / "themes" - themes = [] - for theme_asset in os.listdir(str(asset_path)): - themes.append( - (ThemeAsset(theme_asset), gr.Theme.load(str(asset_path / theme_asset))) - ) - - def make_else_if(theme_asset): - return f""" - else if (theme == '{str(theme_asset[0].version)}') {{ - var theme_css = `{theme_asset[1]._get_theme_css()}` - }}""" - - head, tail = themes[0], themes[1:] - if_statement = f""" - if (theme == "{str(head[0].version)}") {{ - var theme_css = `{head[1]._get_theme_css()}` - }} {" ".join(make_else_if(t) for t in tail)} - """ - - latest_to_oldest = sorted([t[0] for t in themes], key=lambda asset: asset.version)[ - ::-1 - ] - latest_to_oldest = [str(t.version) for t in latest_to_oldest] - - component = gr.Dropdown( - choices=latest_to_oldest, - value=latest_to_oldest[0], - render=False, - label="Select Version", - ).style(container=False) - - return ( - component, - f""" - (theme) => {{ - if (!document.querySelector('.theme-css')) {{ - var theme_elem = document.createElement('style'); - theme_elem.classList.add('theme-css'); - document.head.appendChild(theme_elem); - }} else {{ - var theme_elem = document.querySelector('.theme-css'); - }} - {if_statement} - theme_elem.innerHTML = theme_css; - }} - """, - ) diff --git a/spaces/sudo-ai/zero123plus-demo-space/download_checkpoints.py b/spaces/sudo-ai/zero123plus-demo-space/download_checkpoints.py deleted file mode 100644 index 4447b06b4409294f89200b2d7255f68dccabd081..0000000000000000000000000000000000000000 --- a/spaces/sudo-ai/zero123plus-demo-space/download_checkpoints.py +++ /dev/null @@ -1,19 +0,0 @@ -import os -import torch -import urllib.request -import huggingface_hub -from diffusers import DiffusionPipeline - - -if 'HF_TOKEN' in os.environ: - huggingface_hub.login(os.environ['HF_TOKEN']) -sam_checkpoint = "tmp/sam_vit_h_4b8939.pth" -os.makedirs('tmp', exist_ok=True) -urllib.request.urlretrieve( - "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth", - sam_checkpoint -) 
-DiffusionPipeline.from_pretrained( - "sudo-ai/zero123plus-v1.1", custom_pipeline="sudo-ai/zero123plus-pipeline", - torch_dtype=torch.float16 -) diff --git a/spaces/sunwaee/Face-Mask-Detection/retinanet/coco_eval.py b/spaces/sunwaee/Face-Mask-Detection/retinanet/coco_eval.py deleted file mode 100644 index 8b76ada3f6e4a401a53084a6c8507712f978731b..0000000000000000000000000000000000000000 --- a/spaces/sunwaee/Face-Mask-Detection/retinanet/coco_eval.py +++ /dev/null @@ -1,84 +0,0 @@ -from pycocotools.cocoeval import COCOeval -import json -import torch - - -def evaluate_coco(dataset, model, threshold=0.05): - - model.eval() - - with torch.no_grad(): - - # start collecting results - results = [] - image_ids = [] - - for index in range(len(dataset)): - data = dataset[index] - scale = data['scale'] - - # run network - if torch.cuda.is_available(): - scores, labels, boxes = model(data['img'].permute(2, 0, 1).cuda().float().unsqueeze(dim=0)) - else: - scores, labels, boxes = model(data['img'].permute(2, 0, 1).float().unsqueeze(dim=0)) - scores = scores.cpu() - labels = labels.cpu() - boxes = boxes.cpu() - - # correct boxes for image scale - boxes /= scale - - if boxes.shape[0] > 0: - # change to (x, y, w, h) (MS COCO standard) - boxes[:, 2] -= boxes[:, 0] - boxes[:, 3] -= boxes[:, 1] - - # compute predicted labels and scores - #for box, score, label in zip(boxes[0], scores[0], labels[0]): - for box_id in range(boxes.shape[0]): - score = float(scores[box_id]) - label = int(labels[box_id]) - box = boxes[box_id, :] - - # scores are sorted, so we can break - if score < threshold: - break - - # append detection for each positively labeled class - image_result = { - 'image_id' : dataset.image_ids[index], - 'category_id' : dataset.label_to_coco_label(label), - 'score' : float(score), - 'bbox' : box.tolist(), - } - - # append detection to results - results.append(image_result) - - # append image to list of processed images - image_ids.append(dataset.image_ids[index]) - - # print progress - print('{}/{}'.format(index, len(dataset)), end='\r') - - if not len(results): - return - - # write output - json.dump(results, open('{}_bbox_results.json'.format(dataset.set_name), 'w'), indent=4) - - # load results in COCO evaluation tool - coco_true = dataset.coco - coco_pred = coco_true.loadRes('{}_bbox_results.json'.format(dataset.set_name)) - - # run COCO evaluation - coco_eval = COCOeval(coco_true, coco_pred, 'bbox') - coco_eval.params.imgIds = image_ids - coco_eval.evaluate() - coco_eval.accumulate() - coco_eval.summarize() - - model.train() - - return diff --git a/spaces/sunwaee/MT5-Questions-Answers-Generation-Extraction/README.md b/spaces/sunwaee/MT5-Questions-Answers-Generation-Extraction/README.md deleted file mode 100644 index d55aee5720b4f983e671be353b4c741d15270eb5..0000000000000000000000000000000000000000 --- a/spaces/sunwaee/MT5-Questions-Answers-Generation-Extraction/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Q&A Pairs Generation [Google MT5] -emoji: 📈 -colorFrom: pink -colorTo: indigo -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for 
`streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/supertori/files/stable-diffusion-webui/modules/api/api.py b/spaces/supertori/files/stable-diffusion-webui/modules/api/api.py deleted file mode 100644 index 376f7f048f306fcc87012f754756d925a885975e..0000000000000000000000000000000000000000 --- a/spaces/supertori/files/stable-diffusion-webui/modules/api/api.py +++ /dev/null @@ -1,562 +0,0 @@ -import base64 -import io -import time -import datetime -import uvicorn -from threading import Lock -from io import BytesIO -from gradio.processing_utils import decode_base64_to_file -from fastapi import APIRouter, Depends, FastAPI, HTTPException, Request, Response -from fastapi.security import HTTPBasic, HTTPBasicCredentials -from secrets import compare_digest - -import modules.shared as shared -from modules import sd_samplers, deepbooru, sd_hijack, images, scripts, ui, postprocessing -from modules.api.models import * -from modules.processing import StableDiffusionProcessingTxt2Img, StableDiffusionProcessingImg2Img, process_images -from modules.textual_inversion.textual_inversion import create_embedding, train_embedding -from modules.textual_inversion.preprocess import preprocess -from modules.hypernetworks.hypernetwork import create_hypernetwork, train_hypernetwork -from PIL import PngImagePlugin,Image -from modules.sd_models import checkpoints_list -from modules.sd_models_config import find_checkpoint_config_near_filename -from modules.realesrgan_model import get_realesrgan_models -from modules import devices -from typing import List -import piexif -import piexif.helper - -def upscaler_to_index(name: str): - try: - return [x.name.lower() for x in shared.sd_upscalers].index(name.lower()) - except: - raise HTTPException(status_code=400, detail=f"Invalid upscaler, needs to be one of these: {' , '.join([x.name for x in sd_upscalers])}") - -def script_name_to_index(name, scripts): - try: - return [script.title().lower() for script in scripts].index(name.lower()) - except: - raise HTTPException(status_code=422, detail=f"Script '{name}' not found") - -def validate_sampler_name(name): - config = sd_samplers.all_samplers_map.get(name, None) - if config is None: - raise HTTPException(status_code=404, detail="Sampler not found") - - return name - -def setUpscalers(req: dict): - reqDict = vars(req) - reqDict['extras_upscaler_1'] = reqDict.pop('upscaler_1', None) - reqDict['extras_upscaler_2'] = reqDict.pop('upscaler_2', None) - return reqDict - -def decode_base64_to_image(encoding): - if encoding.startswith("data:image/"): - encoding = encoding.split(";")[1].split(",")[1] - try: - image = Image.open(BytesIO(base64.b64decode(encoding))) - return image - except Exception as err: - raise HTTPException(status_code=500, detail="Invalid encoded image") - -def encode_pil_to_base64(image): - with io.BytesIO() as output_bytes: - - if opts.samples_format.lower() == 'png': - use_metadata = False - metadata = PngImagePlugin.PngInfo() - for key, value in image.info.items(): - if isinstance(key, str) and isinstance(value, str): - metadata.add_text(key, value) - use_metadata = True - image.save(output_bytes, format="PNG", pnginfo=(metadata if use_metadata else None), quality=opts.jpeg_quality) - - elif opts.samples_format.lower() 
in ("jpg", "jpeg", "webp"): - parameters = image.info.get('parameters', None) - exif_bytes = piexif.dump({ - "Exif": { piexif.ExifIFD.UserComment: piexif.helper.UserComment.dump(parameters or "", encoding="unicode") } - }) - if opts.samples_format.lower() in ("jpg", "jpeg"): - image.save(output_bytes, format="JPEG", exif = exif_bytes, quality=opts.jpeg_quality) - else: - image.save(output_bytes, format="WEBP", exif = exif_bytes, quality=opts.jpeg_quality) - - else: - raise HTTPException(status_code=500, detail="Invalid image format") - - bytes_data = output_bytes.getvalue() - - return base64.b64encode(bytes_data) - -def api_middleware(app: FastAPI): - @app.middleware("http") - async def log_and_time(req: Request, call_next): - ts = time.time() - res: Response = await call_next(req) - duration = str(round(time.time() - ts, 4)) - res.headers["X-Process-Time"] = duration - endpoint = req.scope.get('path', 'err') - if shared.cmd_opts.api_log and endpoint.startswith('/sdapi'): - print('API {t} {code} {prot}/{ver} {method} {endpoint} {cli} {duration}'.format( - t = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S.%f"), - code = res.status_code, - ver = req.scope.get('http_version', '0.0'), - cli = req.scope.get('client', ('0:0.0.0', 0))[0], - prot = req.scope.get('scheme', 'err'), - method = req.scope.get('method', 'err'), - endpoint = endpoint, - duration = duration, - )) - return res - - -class Api: - def __init__(self, app: FastAPI, queue_lock: Lock): - if shared.cmd_opts.api_auth: - self.credentials = dict() - for auth in shared.cmd_opts.api_auth.split(","): - user, password = auth.split(":") - self.credentials[user] = password - - self.router = APIRouter() - self.app = app - self.queue_lock = queue_lock - api_middleware(self.app) - self.add_api_route("/sdapi/v1/txt2img", self.text2imgapi, methods=["POST"], response_model=TextToImageResponse) - self.add_api_route("/sdapi/v1/img2img", self.img2imgapi, methods=["POST"], response_model=ImageToImageResponse) - self.add_api_route("/sdapi/v1/extra-single-image", self.extras_single_image_api, methods=["POST"], response_model=ExtrasSingleImageResponse) - self.add_api_route("/sdapi/v1/extra-batch-images", self.extras_batch_images_api, methods=["POST"], response_model=ExtrasBatchImagesResponse) - self.add_api_route("/sdapi/v1/png-info", self.pnginfoapi, methods=["POST"], response_model=PNGInfoResponse) - self.add_api_route("/sdapi/v1/progress", self.progressapi, methods=["GET"], response_model=ProgressResponse) - self.add_api_route("/sdapi/v1/interrogate", self.interrogateapi, methods=["POST"]) - self.add_api_route("/sdapi/v1/interrupt", self.interruptapi, methods=["POST"]) - self.add_api_route("/sdapi/v1/skip", self.skip, methods=["POST"]) - self.add_api_route("/sdapi/v1/options", self.get_config, methods=["GET"], response_model=OptionsModel) - self.add_api_route("/sdapi/v1/options", self.set_config, methods=["POST"]) - self.add_api_route("/sdapi/v1/cmd-flags", self.get_cmd_flags, methods=["GET"], response_model=FlagsModel) - self.add_api_route("/sdapi/v1/samplers", self.get_samplers, methods=["GET"], response_model=List[SamplerItem]) - self.add_api_route("/sdapi/v1/upscalers", self.get_upscalers, methods=["GET"], response_model=List[UpscalerItem]) - self.add_api_route("/sdapi/v1/sd-models", self.get_sd_models, methods=["GET"], response_model=List[SDModelItem]) - self.add_api_route("/sdapi/v1/hypernetworks", self.get_hypernetworks, methods=["GET"], response_model=List[HypernetworkItem]) - self.add_api_route("/sdapi/v1/face-restorers", 
self.get_face_restorers, methods=["GET"], response_model=List[FaceRestorerItem]) - self.add_api_route("/sdapi/v1/realesrgan-models", self.get_realesrgan_models, methods=["GET"], response_model=List[RealesrganItem]) - self.add_api_route("/sdapi/v1/prompt-styles", self.get_prompt_styles, methods=["GET"], response_model=List[PromptStyleItem]) - self.add_api_route("/sdapi/v1/embeddings", self.get_embeddings, methods=["GET"], response_model=EmbeddingsResponse) - self.add_api_route("/sdapi/v1/refresh-checkpoints", self.refresh_checkpoints, methods=["POST"]) - self.add_api_route("/sdapi/v1/create/embedding", self.create_embedding, methods=["POST"], response_model=CreateResponse) - self.add_api_route("/sdapi/v1/create/hypernetwork", self.create_hypernetwork, methods=["POST"], response_model=CreateResponse) - self.add_api_route("/sdapi/v1/preprocess", self.preprocess, methods=["POST"], response_model=PreprocessResponse) - self.add_api_route("/sdapi/v1/train/embedding", self.train_embedding, methods=["POST"], response_model=TrainResponse) - self.add_api_route("/sdapi/v1/train/hypernetwork", self.train_hypernetwork, methods=["POST"], response_model=TrainResponse) - self.add_api_route("/sdapi/v1/memory", self.get_memory, methods=["GET"], response_model=MemoryResponse) - self.add_api_route("/sdapi/v1/scripts", self.get_scripts_list, methods=["GET"], response_model=ScriptsList) - - def add_api_route(self, path: str, endpoint, **kwargs): - if shared.cmd_opts.api_auth: - return self.app.add_api_route(path, endpoint, dependencies=[Depends(self.auth)], **kwargs) - return self.app.add_api_route(path, endpoint, **kwargs) - - def auth(self, credentials: HTTPBasicCredentials = Depends(HTTPBasic())): - if credentials.username in self.credentials: - if compare_digest(credentials.password, self.credentials[credentials.username]): - return True - - raise HTTPException(status_code=401, detail="Incorrect username or password", headers={"WWW-Authenticate": "Basic"}) - - def get_script(self, script_name, script_runner): - if script_name is None: - return None, None - - if not script_runner.scripts: - script_runner.initialize_scripts(False) - ui.create_ui() - - script_idx = script_name_to_index(script_name, script_runner.selectable_scripts) - script = script_runner.selectable_scripts[script_idx] - return script, script_idx - - def get_scripts_list(self): - t2ilist = [str(title.lower()) for title in scripts.scripts_txt2img.titles] - i2ilist = [str(title.lower()) for title in scripts.scripts_img2img.titles] - - return ScriptsList(txt2img = t2ilist, img2img = i2ilist) - - def text2imgapi(self, txt2imgreq: StableDiffusionTxt2ImgProcessingAPI): - script, script_idx = self.get_script(txt2imgreq.script_name, scripts.scripts_txt2img) - - populate = txt2imgreq.copy(update={ # Override __init__ params - "sampler_name": validate_sampler_name(txt2imgreq.sampler_name or txt2imgreq.sampler_index), - "do_not_save_samples": not txt2imgreq.save_images, - "do_not_save_grid": not txt2imgreq.save_images, - }) - if populate.sampler_name: - populate.sampler_index = None # prevent a warning later on - - args = vars(populate) - args.pop('script_name', None) - - send_images = args.pop('send_images', True) - args.pop('save_images', None) - - with self.queue_lock: - p = StableDiffusionProcessingTxt2Img(sd_model=shared.sd_model, **args) - p.outpath_grids = opts.outdir_txt2img_grids - p.outpath_samples = opts.outdir_txt2img_samples - - shared.state.begin() - if script is not None: - p.script_args = [script_idx + 1] + [None] * (script.args_from - 1) 
+ p.script_args - processed = scripts.scripts_txt2img.run(p, *p.script_args) - else: - processed = process_images(p) - shared.state.end() - - b64images = list(map(encode_pil_to_base64, processed.images)) if send_images else [] - - return TextToImageResponse(images=b64images, parameters=vars(txt2imgreq), info=processed.js()) - - def img2imgapi(self, img2imgreq: StableDiffusionImg2ImgProcessingAPI): - init_images = img2imgreq.init_images - if init_images is None: - raise HTTPException(status_code=404, detail="Init image not found") - - script, script_idx = self.get_script(img2imgreq.script_name, scripts.scripts_img2img) - - mask = img2imgreq.mask - if mask: - mask = decode_base64_to_image(mask) - - populate = img2imgreq.copy(update={ # Override __init__ params - "sampler_name": validate_sampler_name(img2imgreq.sampler_name or img2imgreq.sampler_index), - "do_not_save_samples": not img2imgreq.save_images, - "do_not_save_grid": not img2imgreq.save_images, - "mask": mask, - }) - if populate.sampler_name: - populate.sampler_index = None # prevent a warning later on - - args = vars(populate) - args.pop('include_init_images', None) # this is meant to be done by "exclude": True in model, but it's for a reason that I cannot determine. - args.pop('script_name', None) - - send_images = args.pop('send_images', True) - args.pop('save_images', None) - - with self.queue_lock: - p = StableDiffusionProcessingImg2Img(sd_model=shared.sd_model, **args) - p.init_images = [decode_base64_to_image(x) for x in init_images] - p.outpath_grids = opts.outdir_img2img_grids - p.outpath_samples = opts.outdir_img2img_samples - - shared.state.begin() - if script is not None: - p.script_args = [script_idx + 1] + [None] * (script.args_from - 1) + p.script_args - processed = scripts.scripts_img2img.run(p, *p.script_args) - else: - processed = process_images(p) - shared.state.end() - - b64images = list(map(encode_pil_to_base64, processed.images)) if send_images else [] - - if not img2imgreq.include_init_images: - img2imgreq.init_images = None - img2imgreq.mask = None - - return ImageToImageResponse(images=b64images, parameters=vars(img2imgreq), info=processed.js()) - - def extras_single_image_api(self, req: ExtrasSingleImageRequest): - reqDict = setUpscalers(req) - - reqDict['image'] = decode_base64_to_image(reqDict['image']) - - with self.queue_lock: - result = postprocessing.run_extras(extras_mode=0, image_folder="", input_dir="", output_dir="", save_output=False, **reqDict) - - return ExtrasSingleImageResponse(image=encode_pil_to_base64(result[0][0]), html_info=result[1]) - - def extras_batch_images_api(self, req: ExtrasBatchImagesRequest): - reqDict = setUpscalers(req) - - def prepareFiles(file): - file = decode_base64_to_file(file.data, file_path=file.name) - file.orig_name = file.name - return file - - reqDict['image_folder'] = list(map(prepareFiles, reqDict['imageList'])) - reqDict.pop('imageList') - - with self.queue_lock: - result = postprocessing.run_extras(extras_mode=1, image="", input_dir="", output_dir="", save_output=False, **reqDict) - - return ExtrasBatchImagesResponse(images=list(map(encode_pil_to_base64, result[0])), html_info=result[1]) - - def pnginfoapi(self, req: PNGInfoRequest): - if(not req.image.strip()): - return PNGInfoResponse(info="") - - image = decode_base64_to_image(req.image.strip()) - if image is None: - return PNGInfoResponse(info="") - - geninfo, items = images.read_info_from_image(image) - if geninfo is None: - geninfo = "" - - items = {**{'parameters': geninfo}, **items} - - return 
PNGInfoResponse(info=geninfo, items=items) - - def progressapi(self, req: ProgressRequest = Depends()): - # copy from check_progress_call of ui.py - - if shared.state.job_count == 0: - return ProgressResponse(progress=0, eta_relative=0, state=shared.state.dict(), textinfo=shared.state.textinfo) - - # avoid dividing zero - progress = 0.01 - - if shared.state.job_count > 0: - progress += shared.state.job_no / shared.state.job_count - if shared.state.sampling_steps > 0: - progress += 1 / shared.state.job_count * shared.state.sampling_step / shared.state.sampling_steps - - time_since_start = time.time() - shared.state.time_start - eta = (time_since_start/progress) - eta_relative = eta-time_since_start - - progress = min(progress, 1) - - shared.state.set_current_image() - - current_image = None - if shared.state.current_image and not req.skip_current_image: - current_image = encode_pil_to_base64(shared.state.current_image) - - return ProgressResponse(progress=progress, eta_relative=eta_relative, state=shared.state.dict(), current_image=current_image, textinfo=shared.state.textinfo) - - def interrogateapi(self, interrogatereq: InterrogateRequest): - image_b64 = interrogatereq.image - if image_b64 is None: - raise HTTPException(status_code=404, detail="Image not found") - - img = decode_base64_to_image(image_b64) - img = img.convert('RGB') - - # Override object param - with self.queue_lock: - if interrogatereq.model == "clip": - processed = shared.interrogator.interrogate(img) - elif interrogatereq.model == "deepdanbooru": - processed = deepbooru.model.tag(img) - else: - raise HTTPException(status_code=404, detail="Model not found") - - return InterrogateResponse(caption=processed) - - def interruptapi(self): - shared.state.interrupt() - - return {} - - def skip(self): - shared.state.skip() - - def get_config(self): - options = {} - for key in shared.opts.data.keys(): - metadata = shared.opts.data_labels.get(key) - if(metadata is not None): - options.update({key: shared.opts.data.get(key, shared.opts.data_labels.get(key).default)}) - else: - options.update({key: shared.opts.data.get(key, None)}) - - return options - - def set_config(self, req: Dict[str, Any]): - for k, v in req.items(): - shared.opts.set(k, v) - - shared.opts.save(shared.config_filename) - return - - def get_cmd_flags(self): - return vars(shared.cmd_opts) - - def get_samplers(self): - return [{"name": sampler[0], "aliases":sampler[2], "options":sampler[3]} for sampler in sd_samplers.all_samplers] - - def get_upscalers(self): - return [ - { - "name": upscaler.name, - "model_name": upscaler.scaler.model_name, - "model_path": upscaler.data_path, - "model_url": None, - "scale": upscaler.scale, - } - for upscaler in shared.sd_upscalers - ] - - def get_sd_models(self): - return [{"title": x.title, "model_name": x.model_name, "hash": x.shorthash, "sha256": x.sha256, "filename": x.filename, "config": find_checkpoint_config_near_filename(x)} for x in checkpoints_list.values()] - - def get_hypernetworks(self): - return [{"name": name, "path": shared.hypernetworks[name]} for name in shared.hypernetworks] - - def get_face_restorers(self): - return [{"name":x.name(), "cmd_dir": getattr(x, "cmd_dir", None)} for x in shared.face_restorers] - - def get_realesrgan_models(self): - return [{"name":x.name,"path":x.data_path, "scale":x.scale} for x in get_realesrgan_models(None)] - - def get_prompt_styles(self): - styleList = [] - for k in shared.prompt_styles.styles: - style = shared.prompt_styles.styles[k] - styleList.append({"name":style[0], 
"prompt": style[1], "negative_prompt": style[2]}) - - return styleList - - def get_embeddings(self): - db = sd_hijack.model_hijack.embedding_db - - def convert_embedding(embedding): - return { - "step": embedding.step, - "sd_checkpoint": embedding.sd_checkpoint, - "sd_checkpoint_name": embedding.sd_checkpoint_name, - "shape": embedding.shape, - "vectors": embedding.vectors, - } - - def convert_embeddings(embeddings): - return {embedding.name: convert_embedding(embedding) for embedding in embeddings.values()} - - return { - "loaded": convert_embeddings(db.word_embeddings), - "skipped": convert_embeddings(db.skipped_embeddings), - } - - def refresh_checkpoints(self): - shared.refresh_checkpoints() - - def create_embedding(self, args: dict): - try: - shared.state.begin() - filename = create_embedding(**args) # create empty embedding - sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings() # reload embeddings so new one can be immediately used - shared.state.end() - return CreateResponse(info = "create embedding filename: {filename}".format(filename = filename)) - except AssertionError as e: - shared.state.end() - return TrainResponse(info = "create embedding error: {error}".format(error = e)) - - def create_hypernetwork(self, args: dict): - try: - shared.state.begin() - filename = create_hypernetwork(**args) # create empty embedding - shared.state.end() - return CreateResponse(info = "create hypernetwork filename: {filename}".format(filename = filename)) - except AssertionError as e: - shared.state.end() - return TrainResponse(info = "create hypernetwork error: {error}".format(error = e)) - - def preprocess(self, args: dict): - try: - shared.state.begin() - preprocess(**args) # quick operation unless blip/booru interrogation is enabled - shared.state.end() - return PreprocessResponse(info = 'preprocess complete') - except KeyError as e: - shared.state.end() - return PreprocessResponse(info = "preprocess error: invalid token: {error}".format(error = e)) - except AssertionError as e: - shared.state.end() - return PreprocessResponse(info = "preprocess error: {error}".format(error = e)) - except FileNotFoundError as e: - shared.state.end() - return PreprocessResponse(info = 'preprocess error: {error}'.format(error = e)) - - def train_embedding(self, args: dict): - try: - shared.state.begin() - apply_optimizations = shared.opts.training_xattention_optimizations - error = None - filename = '' - if not apply_optimizations: - sd_hijack.undo_optimizations() - try: - embedding, filename = train_embedding(**args) # can take a long time to complete - except Exception as e: - error = e - finally: - if not apply_optimizations: - sd_hijack.apply_optimizations() - shared.state.end() - return TrainResponse(info = "train embedding complete: filename: {filename} error: {error}".format(filename = filename, error = error)) - except AssertionError as msg: - shared.state.end() - return TrainResponse(info = "train embedding error: {msg}".format(msg = msg)) - - def train_hypernetwork(self, args: dict): - try: - shared.state.begin() - shared.loaded_hypernetworks = [] - apply_optimizations = shared.opts.training_xattention_optimizations - error = None - filename = '' - if not apply_optimizations: - sd_hijack.undo_optimizations() - try: - hypernetwork, filename = train_hypernetwork(**args) - except Exception as e: - error = e - finally: - shared.sd_model.cond_stage_model.to(devices.device) - shared.sd_model.first_stage_model.to(devices.device) - if not apply_optimizations: - 
sd_hijack.apply_optimizations() - shared.state.end() - return TrainResponse(info="train embedding complete: filename: {filename} error: {error}".format(filename=filename, error=error)) - except AssertionError as msg: - shared.state.end() - return TrainResponse(info="train embedding error: {error}".format(error=error)) - - def get_memory(self): - try: - import os, psutil - process = psutil.Process(os.getpid()) - res = process.memory_info() # only rss is cross-platform guaranteed so we dont rely on other values - ram_total = 100 * res.rss / process.memory_percent() # and total memory is calculated as actual value is not cross-platform safe - ram = { 'free': ram_total - res.rss, 'used': res.rss, 'total': ram_total } - except Exception as err: - ram = { 'error': f'{err}' } - try: - import torch - if torch.cuda.is_available(): - s = torch.cuda.mem_get_info() - system = { 'free': s[0], 'used': s[1] - s[0], 'total': s[1] } - s = dict(torch.cuda.memory_stats(shared.device)) - allocated = { 'current': s['allocated_bytes.all.current'], 'peak': s['allocated_bytes.all.peak'] } - reserved = { 'current': s['reserved_bytes.all.current'], 'peak': s['reserved_bytes.all.peak'] } - active = { 'current': s['active_bytes.all.current'], 'peak': s['active_bytes.all.peak'] } - inactive = { 'current': s['inactive_split_bytes.all.current'], 'peak': s['inactive_split_bytes.all.peak'] } - warnings = { 'retries': s['num_alloc_retries'], 'oom': s['num_ooms'] } - cuda = { - 'system': system, - 'active': active, - 'allocated': allocated, - 'reserved': reserved, - 'inactive': inactive, - 'events': warnings, - } - else: - cuda = { 'error': 'unavailable' } - except Exception as err: - cuda = { 'error': f'{err}' } - return MemoryResponse(ram = ram, cuda = cuda) - - def launch(self, server_name, port): - self.app.include_router(self.router) - uvicorn.run(self.app, host=server_name, port=port) diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Babylon Pro 10 Offline Installer With Serial 17.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Babylon Pro 10 Offline Installer With Serial 17.md deleted file mode 100644 index e48361eae95b83ff812672279510c0f678e13b99..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Babylon Pro 10 Offline Installer With Serial 17.md +++ /dev/null @@ -1,6 +0,0 @@ -

        babylon pro 10 offline installer with serial 17


        DOWNLOAD ••• https://cinurl.com/2uEYQp



        - -Fully supports Lingvo, Babylon, StarDict, Lingoes and Dictd dictionary files. ... [SOLVED] Goldendict dictionary fails "make install" neymac: Slackware: 2: 06-17-2013 ... Full Version Free Software Download Easy Green Screen Pro 3.5 Download Hp ... Aug 10, 2020 · Use on-line and off-line dictionaries to define and translate ... 1fdad05405
        -
        -
        -

        diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Luv Ka The End Marathi Movie Full Hd 1080p !NEW!.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Luv Ka The End Marathi Movie Full Hd 1080p !NEW!.md deleted file mode 100644 index 785341a9922af519e5e3817f7922da6b2771cdc7..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Luv Ka The End Marathi Movie Full Hd 1080p !NEW!.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Luv Ka The End marathi movie full hd 1080p


        DOWNLOAD ○○○ https://cinurl.com/2uEXIU



        -
        - d5da3c52bf
        -
        -
        -

        diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/One Piece Burning Blood - PREORDER BONUS Download For Pc [pack].md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/One Piece Burning Blood - PREORDER BONUS Download For Pc [pack].md deleted file mode 100644 index 86b30539fcd7ae4f0eb2ff8c2337e0db1e92b428..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/One Piece Burning Blood - PREORDER BONUS Download For Pc [pack].md +++ /dev/null @@ -1,6 +0,0 @@ -

        One Piece Burning Blood - PREORDER BONUS download for pc [pack]


Download File: https://cinurl.com/2uEYCc



        -
-All additional content included in this bundle is available for download on PS4™ and PS Vita. One Piece Burning Blood - PS4® ON Demand PRE-ORDER BONUS is a brand new game based on the One Piece anime for PlayStation®4. Players will be able to take control of any character from the original story, including the legendary Straw Hat crew member Sanji, and relive the most fun and carefree moments of One Piece with him. One Piece Burning Blood features all the key characters from the One Piece anime and manga, plus over 100 new weapons and costumes for players to use. 8a78ff9644
        -
        -
        -

        diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/50 Cent Get Rich Or Die Tryin Album Download Zip 78 !!TOP!!.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/50 Cent Get Rich Or Die Tryin Album Download Zip 78 !!TOP!!.md deleted file mode 100644 index 40c379a498bdc24476f7720c3030b3a485de34a8..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/50 Cent Get Rich Or Die Tryin Album Download Zip 78 !!TOP!!.md +++ /dev/null @@ -1,6 +0,0 @@ -

        50 Cent Get Rich Or Die Tryin Album Download Zip 78


Download: https://urluss.com/2uCFif



        -
        -It was released on February 6, 2003, by Aftermath Entertainment, under a joint .... users have only 50 cent get rich or die tryin album download zip great, and the ... 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/suvash/usk-coffee-convnext-nano/gradio_article.md b/spaces/suvash/usk-coffee-convnext-nano/gradio_article.md deleted file mode 100644 index 11356bddb4363310058d46729f0193ba7b24dee2..0000000000000000000000000000000000000000 --- a/spaces/suvash/usk-coffee-convnext-nano/gradio_article.md +++ /dev/null @@ -1,27 +0,0 @@ -## Dataset - -The USK-Coffee dataset, made available at https://comvis.unsyiah.ac.id/usk-coffee/ is multi class image dataset derived from a coffee bean collection that includes 4 classes: peaberry, longberry, defect, and premium. - -## Training - -Fast.ai was used to train this classifier with a Timm ConvNext nano vision learner, without heavy customization. The training was performed on the provided `train` split, and validation on the `val` split. - -The final fine tuning of the training loop resulted in the following losses. - -| epoch | train_loss | valid_loss | accuracy | time | -|-------|------------|------------|----------|-------| -| 0 | 0.238523 | 0.383621 | 0.869375 | 00:25 | -| 1 | 0.257938 | 0.293417 | 0.907500 | 00:25 | -| 2 | 0.205048 | 0.412420 | 0.847500 | 00:25 | -| 3 | 0.170284 | 0.308219 | 0.901875 | 00:25 | -| 4 | 0.154471 | 0.308811 | 0.894375 | 00:26 | -| 5 | 0.107862 | 0.480474 | 0.874375 | 00:26 | -| 6 | 0.075452 | 0.506489 | 0.843125 | 00:26 | -| 7 | 0.060802 | 0.317052 | 0.906875 | 00:26 | -| 8 | 0.049216 | 0.242317 | 0.932500 | 00:26 | -| 9 | 0.040890 | 0.233353 | 0.935625 | 00:26 | - - -## Examples - -The example images provided in the demo are from the `test` split in the dataset, which was never made available to the model in the training process. diff --git a/spaces/suyash-rastogi/dog_cat_classifier/README.md b/spaces/suyash-rastogi/dog_cat_classifier/README.md deleted file mode 100644 index 336a8028874bb8661d70fc9f85de29f5d254103d..0000000000000000000000000000000000000000 --- a/spaces/suyash-rastogi/dog_cat_classifier/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Dog Cat Classifier -emoji: 🐶😺 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/encnet_r50-d8.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/encnet_r50-d8.py deleted file mode 100644 index be777123a886503172a95fe0719e956a147bbd68..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/encnet_r50-d8.py +++ /dev/null @@ -1,48 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='EncHead', - in_channels=[512, 1024, 2048], - in_index=(1, 2, 3), - channels=512, - num_codes=32, - use_se_loss=True, - add_lateral=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_se_decode=dict( - type='CrossEntropyLoss', use_sigmoid=True, loss_weight=0.2)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - 
channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/ops/wrappers.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/ops/wrappers.py deleted file mode 100644 index 0ed9a0cb8d7c0e0ec2748dd89c652756653cac78..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/ops/wrappers.py +++ /dev/null @@ -1,50 +0,0 @@ -import warnings - -import torch.nn as nn -import torch.nn.functional as F - - -def resize(input, - size=None, - scale_factor=None, - mode='nearest', - align_corners=None, - warning=True): - if warning: - if size is not None and align_corners: - input_h, input_w = tuple(int(x) for x in input.shape[2:]) - output_h, output_w = tuple(int(x) for x in size) - if output_h > input_h or output_w > output_h: - if ((output_h > 1 and output_w > 1 and input_h > 1 - and input_w > 1) and (output_h - 1) % (input_h - 1) - and (output_w - 1) % (input_w - 1)): - warnings.warn( - f'When align_corners={align_corners}, ' - 'the output would more aligned if ' - f'input size {(input_h, input_w)} is `x+1` and ' - f'out size {(output_h, output_w)} is `nx+1`') - return F.interpolate(input, size, scale_factor, mode, align_corners) - - -class Upsample(nn.Module): - - def __init__(self, - size=None, - scale_factor=None, - mode='nearest', - align_corners=None): - super(Upsample, self).__init__() - self.size = size - if isinstance(scale_factor, tuple): - self.scale_factor = tuple(float(factor) for factor in scale_factor) - else: - self.scale_factor = float(scale_factor) if scale_factor else None - self.mode = mode - self.align_corners = align_corners - - def forward(self, x): - if not self.size: - size = [int(t * self.scale_factor) for t in x.shape[-2:]] - else: - size = self.size - return resize(x, size, None, self.mode, self.align_corners) diff --git a/spaces/svummidi/pulseDemo/lm_index_doc.py b/spaces/svummidi/pulseDemo/lm_index_doc.py deleted file mode 100644 index b92ea8c4dcff8163326d8b42014b1338fb3e1276..0000000000000000000000000000000000000000 --- a/spaces/svummidi/pulseDemo/lm_index_doc.py +++ /dev/null @@ -1,14 +0,0 @@ -from llama_index import GPTSimpleVectorIndex, SimpleDirectoryReader -from pathlib import Path -from llama_index import download_loader -import os - - - -os.environ["OPENAI_API_KEY"] = "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" - - -documents = SimpleDirectoryReader('/Users/satya/Downloads/temp').load_data() - -index = GPTSimpleVectorIndex.from_documents(documents) -index.save_to_disk('/Users/satya/Downloads/out.json') \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Biokimia Harper Edisi 27 Ebook Downloadl ((BETTER)).md b/spaces/terfces0erbo/CollegeProjectV2/Biokimia Harper Edisi 27 Ebook Downloadl ((BETTER)).md deleted file mode 100644 index 3453d2c02334851a67a725986d07b7167199b426..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Biokimia Harper Edisi 27 Ebook Downloadl ((BETTER)).md +++ /dev/null @@ -1,93 +0,0 @@ - -

Biokimia Harper Edisi 27 Ebook Download: A Must-Have Book for Medical Students and Practitioners

        - -

Biokimia Harper Edisi 27 is a very popular biochemistry textbook that is widely used by medical students and practitioners around the world. It is the latest edition of the Harper's Illustrated Biochemistry series, first published in 1939. The book presents the biochemistry concepts that matter for the health field in a comprehensive, clear, and easy-to-understand way.

        - -

The book runs to 719 pages divided into 5 main parts: Structure and Function of Proteins, Biochemistry of Energy Metabolism, Metabolism of Carbohydrates, Lipids, Amino Acids, and Nucleotides, Structure and Expression of Genetic Information, and Biochemistry of Physiological Systems. Each chapter comes with illustrative figures, informative tables, chapter summaries, practice questions, and up-to-date references.

        -

        Biokimia Harper Edisi 27 Ebook Downloadl


        DOWNLOAD ☆☆☆☆☆ https://bytlly.com/2uGlQ3



        - -

The book also comes with a CD-ROM containing interactive animations, demonstration videos, interactive exercises, and formative tests that help readers deepen their understanding of biochemistry. In addition, the book is also available in an ebook format that can be downloaded for free from the internet.

        - -

How to Get the Biokimia Harper Edisi 27 Ebook Download

        - -

Downloading the Biokimia Harper Edisi 27 ebook is one of the most practical and economical ways to own this biochemistry book. You do not have to spend a lot of money on the printed edition; as long as you have an electronic device such as a computer, laptop, tablet, or smartphone connected to the internet, you can access the book anytime and anywhere.

        - -

To get the Biokimia Harper Edisi 27 ebook download, you can visit several websites that offer free or paid ebook download services. Some recommended sites are:

        - -
          -
- Scribd: Scribd is the world's largest social reading and publishing website. You can find all kinds of ebooks across many genres and topics here, including Biokimia Harper Edisi 27. You can read the ebook online or download it in PDF or TXT format. You can access Scribd for free for 30 days by signing up as a new member.
- Doku: Doku is a website that offers free ebook downloads. You can find Biokimia Harper Edisi 27 in PDF format here. Simply click the download button and wait for the process to finish. You do not need to register or log in to download ebooks from this site.
- ECC Medical Publisher: ECC Medical Publisher is a medical publisher that works with McGraw-Hill Education (Asia) to publish the Indonesian edition of Biokimia Harper Edisi 27. You can buy this ebook in PDF format for Rp 75,000 from the official ECC Medical Publisher website.
        - -

Those are some of the ways to get the Biokimia Harper Edisi 27 ebook download. The book is very useful for anyone who wants to learn or review biochemistry in an easy and enjoyable way. Happy reading!

        -

What Are the Strengths of the Biokimia Harper Edisi 27 Ebook?

        - -

The Biokimia Harper Edisi 27 ebook has many strengths that make it a highly recommended biochemistry book. Some of them are:

        - -
          -
- The book is written by experienced and competent biochemistry experts. They present the material in a systematic, logical, and critical way, and they give examples of how biochemistry is applied in clinical practice and research.
- The book follows the latest developments in biochemistry. It covers current topics that are relevant to the health field, such as the biochemistry of nutrition, hormones, drugs, immunology, cancer, and aging.
- The book uses language that is easy for readers to understand. It uses standard biochemical terminology that conforms to international conventions, and the translation from the original edition is accurate and consistent.
- The book has an attractive and interactive layout. It uses bright, contrasting colors to distinguish text, figures, tables, and summaries, and a large, clear font that makes it easy to read.
- The book has features that support the learning process, such as chapter summaries, practice questions, up-to-date references, interactive animations, demonstration videos, interactive exercises, and formative tests that help readers master the material.
        - -

With all of these strengths, it is no surprise that the Biokimia Harper Edisi 27 ebook is in high demand among medical students and practitioners worldwide. It is a biochemistry book worth owning if you want to learn or review biochemistry in an easy and enjoyable way.

        - -

Conclusion

        - -

The Biokimia Harper Edisi 27 ebook is a very popular biochemistry textbook widely used by medical students and practitioners around the world. It is the latest edition of the Harper's Illustrated Biochemistry series, first published in 1939, and it presents the biochemistry concepts that matter for the health field in a comprehensive, clear, and easy-to-understand way.

        -

        - -

The book runs to 719 pages divided into 5 main parts: Structure and Function of Proteins, Biochemistry of Energy Metabolism, Metabolism of Carbohydrates, Lipids, Amino Acids, and Nucleotides, Structure and Expression of Genetic Information, and Biochemistry of Physiological Systems. Each chapter includes illustrative figures, informative tables, summaries, practice questions, and up-to-date references.

        - -

The book also comes with a CD-ROM containing interactive animations, demonstration videos, interactive exercises, and formative tests, and it is also available as an ebook that can be downloaded for free from the internet.

        - -

To get the Biokimia Harper Edisi 27 ebook download, you can visit several websites that offer free or paid ebook downloads. Recommended sites are Scribd, Doku, and ECC Medical Publisher.

        - -

The Biokimia Harper Edisi 27 ebook has many strengths that make it a highly recommended biochemistry book: it is written by experienced and competent experts, it follows the latest developments in biochemistry, it uses language that is easy to understand, it has an attractive and interactive layout, and it has features that support the learning process.

        - -

With all of these strengths, it is no surprise that the Biokimia Harper Edisi 27 ebook is in high demand among medical students and practitioners worldwide. It is a book worth owning if you want to learn or review biochemistry in an easy and enjoyable way.

        -

What Is Inside the Biokimia Harper Edisi 27 Ebook?

        - -

The Biokimia Harper Edisi 27 ebook covers biochemistry in a complete and in-depth way. Its 719 pages are divided into 5 main parts:

        - -
          -
1. Structure and Function of Proteins: This part covers the structure, function, and interactions of proteins in the cell. Topics include amino acids, peptide bonds, primary, secondary, tertiary, and quaternary protein structure, enzymes, enzyme kinetics, enzyme regulation, coenzymes, membrane proteins, transport proteins, motor proteins, and signaling proteins.
2. Biochemistry of Energy Metabolism: This part covers energy metabolism in the cell. Topics include biochemical thermodynamics, biological oxidation, oxidative phosphorylation, electron transport and the respiratory chain, photosynthesis, and the metabolism of carbohydrates, lipids, amino acids, and nucleotides.
3. Metabolism of Carbohydrates, Lipids, Amino Acids, and Nucleotides: This part covers the metabolism of the cell's key molecules. Topics include glycolysis, the citric acid cycle, gluconeogenesis, glycogenolysis and glycogenesis, the pentose phosphate pathway, fatty acid and ketone metabolism, fatty acid and cholesterol biosynthesis, lipoprotein and eicosanoid metabolism, amino acid degradation and the urea cycle, and amino acid and nucleotide biosynthesis.
4. Structure and Expression of Genetic Information: This part covers how genetic information is stored and expressed in the cell. Topics include DNA and RNA structure, DNA replication, DNA repair, DNA recombination and transposons, RNA transcription and processing, protein synthesis and post-translational modification, and the regulation of gene expression in prokaryotes and eukaryotes.
5. Biochemistry of Physiological Systems: This part covers the biochemistry of physiological systems in the human body. Topics include the biochemistry of nutrition and digestion, hormones and hormone receptors, drugs and pharmacogenomics, immunology and monoclonal antibodies, and cancer, oncogenes, and tumor suppressor genes.
        - -

Each chapter comes with illustrative figures that explain biochemistry concepts visually, informative tables that summarize important data, an end-of-chapter summary of the main points to remember, practice questions to test the reader's understanding, and up-to-date references for further reading on each topic.

        - -

What Are the Benefits of the Biokimia Harper Edisi 27 Ebook?

        - -

The Biokimia Harper Edisi 27 ebook offers many benefits to readers who want to learn or review biochemistry. Some of them are:

        - -
          -
- It improves the reader's theoretical and practical knowledge of biochemistry, giving a deep understanding of the concepts that matter for the health field and examples of how biochemistry is applied in clinical practice and research.
- It sharpens the reader's ability to analyze biochemical problems logically and critically, with exercises that train the reader to apply biochemistry concepts to problem solving and guidance that helps along the way.
- It makes biochemistry more engaging, offering a pleasant learning experience through accessible language and interactive features such as animations, demonstration videos, interactive exercises, and formative tests.
        - -

With all of these benefits, it is no surprise that the Biokimia Harper Edisi 27 ebook is so useful for readers who want to learn or review biochemistry. It is a book that can help readers reach their learning goals in an easy and enjoyable way.

        -

Conclusion

        - -

The Biokimia Harper Edisi 27 ebook is a very popular biochemistry textbook widely used by medical students and practitioners around the world. It is the latest edition of the Harper's Illustrated Biochemistry series, first published in 1939, and it presents the biochemistry concepts that matter for the health field in a comprehensive, clear, and easy-to-understand way.

        - -

The book runs to 719 pages divided into 5 main parts: Structure and Function of Proteins, Biochemistry of Energy Metabolism, Metabolism of Carbohydrates, Lipids, Amino Acids, and Nucleotides, Structure and Expression of Genetic Information, and Biochemistry of Physiological Systems. Each chapter includes illustrative figures, informative tables, summaries, practice questions, and up-to-date references.

        - -

The book also comes with a CD-ROM containing interactive animations, demonstration videos, interactive exercises, and formative tests, and it is also available as an ebook that can be downloaded for free from the internet.

        - -

To get the Biokimia Harper Edisi 27 ebook download, you can visit several websites that offer free or paid ebook downloads. Recommended sites are Scribd, Doku, and ECC Medical Publisher.

        - -

The Biokimia Harper Edisi 27 ebook has many strengths that make it a highly recommended biochemistry book: it is written by experienced and competent experts, it follows the latest developments in biochemistry, it uses language that is easy to understand, it has an attractive and interactive layout, and it has features that support the learning process.

        - -

The book also offers many benefits: it improves the reader's theoretical and practical knowledge of biochemistry, sharpens the reader's ability to analyze biochemical problems logically and critically, and makes biochemistry more engaging.

        - -

With all of these strengths and benefits, it is no surprise that the Biokimia Harper Edisi 27 ebook is in high demand among medical students and practitioners worldwide. It is a book worth owning if you want to learn or review biochemistry in an easy and enjoyable way.

        -
        -
        \ No newline at end of file diff --git a/spaces/teven-projects/calculator/optimal_training/utils.py b/spaces/teven-projects/calculator/optimal_training/utils.py deleted file mode 100644 index b26167265214fe6a23fe0f75d603895700daf8dd..0000000000000000000000000000000000000000 --- a/spaces/teven-projects/calculator/optimal_training/utils.py +++ /dev/null @@ -1,98 +0,0 @@ -import copy -import numpy as np - -from conversions import day_ratio - - -def clean_run(run): - return [(a, float(b)) for a, b in run if b != "undefined"] - - -def param_count(run): - compute_per_eval = run[0][0] - return round(compute_per_eval / 4000 / 150 / 60 / 6 * day_ratio) - - -def convert_to_logspace(run, a, b, c): - logspace_run = copy.deepcopy(run) - logspace_run[:, 0] = b * np.log(run[:, 0]) - logspace_run[:, 1] = -np.log(run[:, 1] - c) + np.log(a) - return logspace_run - - -# OpenAI used another unit for floating-point operations with a ratio of the number of seconds in a day; we'll display -# the raw number, but do the calculations with the ratio as it can overflow without it (convex hull notably fails) - - -def hf_code(width, depth): - - return f"""import transformers -config = transformers.TransfoXLConfig(d_model={width}, d_embed={width}, n_head=8, d_head={int(width / 8)}, d_inner={width}, n_layer={depth}, tgt_len=152, mem_len=152) - model = transformers.TransfoXLModel(config)""" - - -def co2_to_trees(co2): - return co2 / 60 * 3650 - - -def co2_to_kms(co2): - return co2 / 0.403 * 1.60934 - - -def energy_fill(kWh, co2): - return 'This will consume about {:.2f} ' \ - 'kWh, releasing {:.2f} ' \ - 'kgs of CO2. That is equivalent to {:.2f} ' \ - 'kms with an average American passenger car and could be offset ' \ - 'by growing a tree for {:.2f} ' \ - 'days.1'.format(kWh, co2, co2_to_kms(co2), co2_to_trees(co2)) - - -md1 = """

        How Big Should My Language Model Be?

        -avatar -

        Published on June 08, 2020.

        -

        Teven Le Scao, researcher at Hugging Face • @Fluke_Ellington

        -

        Natural Language Processing can sometimes feel like model size is optimized for headlines. 175 billion parameters is certainly an eye-catching number! Why not just train more efficiently with a smaller model? One surprising scaling effect of deep learning is that bigger neural networks are actually compute-efficient. This is something OpenAI in particular has explored in papers like Scaling Laws for Neural Language Models. Research at Hugging Face also leverages this phenomenon, and we've combined it with GPU speed estimations to ensure model size is just right for the compute budget of the experiment (when in doubt, it's bigger than you think!). This blog post will show how this impacts architecture decisions on a standard language modeling benchmark: we replicate the 14-layer state-of-the-art result from Zhang et al.'s Transformer-XL paper without any hyper-parameter optimization and saving 25% of training time. We also estimate that the 18-layer model from the same paper trained for an order of magnitude too many training steps. Wanna play with our demo before reading? Just click here!

        -

        1. There is an optimal time to stop training (and it's earlier than you think)

        -

        Let's look at some loss curves. For our example, the task will be training Transformer-XL, the state-of-the-art in language modeling, on Wikitext-103, a standard, medium-size benchmark. GPT-2 doesn't perform well on this dataset scale. As training progresses, we'll look at the performance of the model (as measured by validation loss) depending on compute cost (as measured by floating point operations). Let's run a few experiments! In the following plot, every line of colour corresponds to a Transformer-XL run of 200000 steps with a different number and size of layers, with all other hyperparameters kept the same. This spans models from a mere thousand to a hundred million parameters (excluding embeddings). Bigger models are on the right as they require more compute for every step. Don't worry, we've already run them so you don't have to. All graphs are interactive, play with them!

        """ -md2 = """ -

        As introduced in Scaling Laws, we plot the validation loss against non-embedding floating-point operations (neFLOs). There seems to be a frontier of performance for a given neFLO budget that no model manages to beat, depicted here in red. In Scaling Laws, it is referred to as the compute frontier. Every run reaches it, or comes close, after an initial phase of quick loss improvement, then tapers away as the end of training is not as efficient. This has a very practical implication: if you have a certain budget in floating-point operations, to reach the best performance, you should choose a model size that reaches the compute frontier after that many operations and stop it at that moment. This is actually way before model convergence, which usually happens around 10 times later! In essence, if you have extra compute budget, you should invest most of it in a bigger model, and only a small part in more training steps. In Scaling Laws, the OpenAI team fitted a power law to the compute frontier on GPT-2 training. This still seems to be a good fit in our task. In addition, we also fitted a power law between the compute budget and the number of parameters of the model that is optimal for that budget. It is pictured in the following plot.
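To make the shape of that relationship concrete, here is a minimal sketch of fitting such a frontier. It assumes numpy and scipy are available, the compute budgets and losses are illustrative placeholders rather than numbers from our runs, and the functional form loss = a·C^(-b) + c mirrors the convert_to_logspace helper defined earlier in this file.

```python
# Minimal sketch: fit a compute frontier of the form loss = a * C**(-b) + c
# to the best observed loss at each compute budget. Budgets/losses are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def frontier(compute, a, b, c):
    # Validation loss as a power law of non-embedding FLOs, with an irreducible offset c.
    return a * compute ** (-b) + c

compute_budgets = np.array([1e15, 1e16, 1e17, 1e18])  # illustrative neFLO budgets
best_losses = np.array([5.1, 4.3, 3.8, 3.4])          # illustrative best losses

(a, b, c), _ = curve_fit(frontier, compute_budgets, best_losses,
                         p0=(10.0, 0.1, 2.0), maxfev=10_000)
print(f"fitted frontier: loss = {a:.3g} * C^(-{b:.3g}) + {c:.3g}")
```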

        - -""" -md3 = """ -

        As good models tend to spend considerable time tangent on the compute frontier, there is a bit of noise in the relationship. However, this also means that there is more tolerance in the estimation even if the model size we predict is a bit off, as the imperfect model will still be very close to optimal. We find that if the compute budget is multiplied by 10, the optimal model size is multiplied by 7.41 and the number of optimal training steps by only 1.35. Extrapolating with this rule to the much-bigger 18-layer SOTA model from Zhang et al., we find that its optimal number of training steps was around 250000. Even if this number is imprecise due to the change of scale, it is much smaller than the 4 million steps from their replication script. Starting from an even bigger model and stopping earlier would have yielded a better loss for that (huge) compute budget.
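As a back-of-the-envelope helper, the 7.41x / 1.35x rule above can be turned into exponents and used to extrapolate from a known optimum. The reference parameter and step counts below are placeholders, not fitted values.

```python
# Turn the quoted rule into exponents: a 10x larger budget means ~7.41x more
# parameters and ~1.35x more training steps at the optimum.
import math

SIZE_EXPONENT = math.log10(7.41)   # ~0.87
STEPS_EXPONENT = math.log10(1.35)  # ~0.13

def scale_optimum(budget_ratio, ref_params, ref_steps):
    # Scale a known (params, steps) optimum to a budget `budget_ratio` times larger.
    return (ref_params * budget_ratio ** SIZE_EXPONENT,
            ref_steps * budget_ratio ** STEPS_EXPONENT)

# e.g. going from a reference budget to one 100x larger (placeholder reference values):
params, steps = scale_optimum(100, ref_params=50e6, ref_steps=200_000)
print(f"~{params / 1e6:.0f}M parameters, ~{steps:,.0f} steps")
```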

        -

        2. GPUs are optimized for large, wide models

        -

        We now have a rule connecting performance and optimal size with neFLOs. However, neFLOs are a bit hard to picture. Can we translate that into a more immediate resource, like training time? Whether you are constrained by temporal or financial constraints, the main resource is GPU time. In order to establish a connection between neFLOs and GPU time, we benchmarked different Transformer-XL model sizes on 4 different GPUs available on Google Cloud Platform across tens of thousands of runs, taking into account mixed precision training. Here are our findings:

        -
        Speed estimation
        -

        neFLOs per second speed can be modeled as a factorized multivariate function (sounds scary, but this just means the equation can be written simply as below) of model width (the number of neurons per layer), depth (the number of layers) and batch size, by increasing order of importance. In our estimations, the maximum prediction error was 15% of the observed speed.

(Formula figure: neFLOs per second modeled as a factorized function of model width, depth, and batch size.)
        Width
        -

        GPUs are optimized for the large feed-forward layers of wide transformers. In all of our experiments, neFLOs per second depended on model width as a power law of exponent around 1.6. This means that a model that's twice as wide, which requires 4 times more operations, also goes through those operations around 3.16 times faster, nearly offsetting the additional compute cost.
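A quick sanity check of that claim:

```python
# With a width exponent of ~1.66, doubling the width speeds up neFLOs/s by
# 2**1.66 ≈ 3.16, while the work per step grows ~4x, so wall-clock time per
# step only grows by ~4 / 3.16 ≈ 1.27x.
width_exponent = 1.66
speedup = 2 ** width_exponent        # ≈ 3.16
extra_work = 2 ** 2                  # operations grow roughly quadratically with width
print(speedup, extra_work / speedup) # 3.16..., 1.26...
```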

        -
        Depth
        -

neFLOs per second were also positively correlated with depth. Our best results were attained by modeling this connection as proportional to depth / (depth + additive constant). This is coherent with the fact that Transformers must process layers serially. In essence, deeper models aren't actually faster, but they appear to be so as their overhead is smaller relative to the more productive operations. The additive constant, which represents this overhead, was consistently around 5 in our experiments, which essentially means that data loading to the GPU, embeddings, and softmax operations represent around 5 transformer layers' worth of time.

        -
        Batch size
        -

        Batch size played the least role. It was positively correlated with speed for small values, but quickly saturated (and even seemed to hurt at high values, after 64 on the V100 and P100 and 16 on the K80 and P4). We modeled its contribution as a logarithmic function to keep things simple as it was also the variable for which the factorized independence assumption was the weakest. We ran all our experiments at size 64 on a single GPU. This is another perk of big models: as bigger batch sizes don't seem to help much, if your model is too big to fit on a GPU, you could just use a smaller batch size and gradient accumulation.
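For reference, gradient accumulation is the standard trick alluded to here; a generic PyTorch sketch (not taken from our training scripts, with toy stand-ins so it runs) looks like this:

```python
import torch
from torch import nn

# Toy stand-ins so the loop below runs; in practice these are your real model and data.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
data_loader = [(torch.randn(16, 10), torch.randint(0, 2, (16,))) for _ in range(8)]

accumulation_steps = 4  # 4 sub-batches of 16 -> effective batch size 64
optimizer.zero_grad()
for i, (inputs, targets) in enumerate(data_loader):
    loss = criterion(model(inputs), targets)
    (loss / accumulation_steps).backward()  # scale so the accumulated gradient is an average
    if (i + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```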

        -
        Powers of 2 still matter in 2020!
        -

        Finally, one surprising takeaway was that hyperparameters whose width or batch size were powers of 2 out-performed the others. That was the case on GPUs with and without Tensor Core capability. On Tensor Core GPUs like the V100, NVIDIA recommends tensor shapes that are multiples of 8; however, we kept seeing improvements beyond that, up to multiples of 512. In the end, we only fitted on powers of 2 as fitting on all data points meant a poor fit quality that consistently under-estimated speed for powers of 2 points, and one might as well choose the fastest parameters.

        -

        In the end, our final estimation of operation speed was as follows:

(Formula figure: the final fitted speed estimation, with GPU-dependent constants k, a, b, and c.)

with, for example on a V100 GPU without mixed precision, k=2.21×10^7, a=1.66, b=5.92, and c=1.33. Different GPUs had close results with a different multiplicative constant.
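For readers who want to play with the estimation, here is one plausible reading of it as code. The exact functional form lives in the formula figure above, so treat the way the three factors are combined below (power law in width, saturating depth / (depth + b) term, logarithmic batch term) as an assumption rather than the definitive implementation; the constants are the V100 values quoted above.

```python
# Assumed factorized speed model: power law in width, depth term saturating
# because of a fixed per-step overhead, logarithmic batch term.
import math

def neflos_per_second(width, depth, batch, k=2.21e7, a=1.66, b=5.92, c=1.33):
    return k * width ** a * (depth / (depth + b)) * math.log(c * batch)

# e.g. a 768-wide, 14-layer model at batch size 64:
print(f"{neflos_per_second(768, 14, 64):.3e} neFLOs/s")
```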

        -

        3. Demonstration on a language modeling task: Wikitext-103

        -

        Now that we have obtained a relation between model size and training speed, we can predict, for a certain GPU time or price budget, the optimal model size on the task and the performance it will achieve.

        - -""" -md4 = """

        Prices are indicated for Google Cloud Platform. The energy consumption was estimated thanks to Peter Henderson's Experiment impact tracker and the CO2 emissions with Electricity map Netherlands data (where Google's European servers are located). Even though huge training costs make headlines, it is still possible to replicate a state-of-the-art result on a medium-size dataset for thirty bucks! A single V100 with properly optimized training is already quite a powerful weapon.

        -

        Data shown is for single-GPU training at batch size 60 on Wikitext-103 for a target and memory length of 150, following CMU's Transformer-XL repo. In order to leverage the Tensor Core capability of the V100, we set batch size 64 and sequence length 152 on that GPU. In our model size and speed predictions, we assumed that the inner feed-forward layer dimension was the same as the embedding and attention dimensions, and that the width-to-depth ratio was constant. This is a good way to save memory, as Reformer has shown. Scaling Laws has observed that shape doesn't impact performance significantly in GPT-2. However, for large scales, we found that the final performance of taller models with a bigger feed-forward layer was consistently better, which is why we give two possible model shapes.
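As a rough cross-check of model sizes under those shape assumptions, a simple non-embedding parameter estimate (ignoring biases, layer norms, and relative-position parameters — an approximation, not the exact counter used for the table) is:

```python
def approx_nonembedding_params(width, depth, d_ff=None):
    # ~4*width^2 for the attention projections plus 2*width*d_ff for the
    # feed-forward layer, per layer; biases and layer norms are ignored.
    d_ff = width if d_ff is None else d_ff
    return depth * (4 * width * width + 2 * width * d_ff)

# 14 layers, 768 hidden, 1024 feed-forward (the replication model discussed below):
print(f"{approx_nonembedding_params(768, 14, d_ff=1024) / 1e6:.1f}M non-embedding parameters")
```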

        -

In order to replicate the result of the medium-size Transformer-XL pre-trained model (3.15 loss), we tweaked our example model size to add a bigger feed-forward dimension and have high powers of 2 while keeping the same number of parameters. This gave us a model of 14 layers with 768 hidden dimensions and 1024 feed-forward dimensions. In comparison, the CMU pre-trained model was found through aggressive hyper-parameter search with a much more unusual shape of 16 layers of 410 hidden dimensions and 2100 feed-forward dimensions. In our experiment, even though it was 50% bigger, our model was actually 20% faster per batch on an NVIDIA RTX Titan as its shapes were high powers of 2, and it was a shorter, wider model. For that model, the script provided by the CMU team was already very close to optimal stopping time; in the end, we obtained the same performance with 25% less training time. Most importantly, this was the case even though the pre-trained model's hyper-parameter tuning gave it a much more optimized shape, and we had also kept the same random seed it was tuned with. Since we calculated our scaling laws with much smaller-scale trainings, saving on parameter search might actually be the bigger gain here. If you took the shortcut to the demo before reading, you can come back to the start here!
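Concretely, mirroring the hf_code helper defined at the top of this file (with the feed-forward dimension overridden to 1024 as described above, rather than the helper's default of d_inner = width), the replication model can be instantiated along these lines:

```python
# Sketch of the 14-layer, 768-wide replication model, following the hf_code() helper.
import transformers

config = transformers.TransfoXLConfig(
    d_model=768, d_embed=768,
    n_head=8, d_head=768 // 8,   # 96-dimensional heads, as in hf_code()
    d_inner=1024,                # bigger feed-forward dimension from the text above
    n_layer=14,
    tgt_len=152, mem_len=152,
)
model = transformers.TransfoXLModel(config)
print(sum(p.numel() for p in model.parameters()) / 1e6, "M parameters (incl. embeddings)")
```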

        -

        4. Takeaways

        -
          -
        • Big models are surprisingly efficient!
        • -
        • Training until convergence is not efficient at all.
        • -
        • Benchmarking smaller-scale runs allows us to predict model performance and optimal stopping time for production-scale models.
        • -
        • Using larger models stopped earlier and optimizing model size for speed lowers training costs.
        • -
        -

        I built this tool automatically using the data from our Transformer-XL runs. If you are interested in having this feature available for other NLP tasks as part of the Hugging Face repository, you can contact me on Twitter at @Fluke_Ellington, drop me a mail at teven@huggingface.co, or add a reaction on our Github issue!

        - -""" \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/A.R.E.S. Extinction Agenda Torrent Download [Patch] - The Ultimate Challenge for Platformer Fans.md b/spaces/tialenAdioni/chat-gpt-api/logs/A.R.E.S. Extinction Agenda Torrent Download [Patch] - The Ultimate Challenge for Platformer Fans.md deleted file mode 100644 index d03ef072a6e60d0cdfe61c8ebd2bc245715e1970..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/A.R.E.S. Extinction Agenda Torrent Download [Patch] - The Ultimate Challenge for Platformer Fans.md +++ /dev/null @@ -1,89 +0,0 @@ - -

        A.R.E.S.: Extinction Agenda Torrent Download [Patch]

        -

        If you are looking for a thrilling sci-fi action platformer game that will keep you on the edge of your seat, you should definitely check out A.R.E.S.: Extinction Agenda. This game is the first chapter of a full featured episodic series that takes you on an epic adventure to save humanity from a deadly threat. In this article, we will tell you everything you need to know about A.R.E.S.: Extinction Agenda, including how to download it from torrent sites, how to apply the patch, and what are the features and reviews of the game.

        -

        A.R.E.S.: Extinction Agenda Gameplay and Features

        -

        A.R.E.S.: Extinction Agenda is a 2.5D side-scrolling game that combines fast-paced action, challenging platforming, and stunning visuals. You can play as either Ares or Tarus, two combat robots with different abilities and weapons. Ares is a swift and agile fighter who can use a variety of guns and gadgets, while Tarus is a heavy-duty tank who can smash enemies with his fists and hammer. You can switch between them at any time during the game.

        -

        A.R.E.S.: Extinction Agenda Torrent Download [Patch]


        Download 🌟 https://urlcod.com/2uK5LT



        -

        Play as Ares or Tarus, two combat robots with different abilities

        -

        Ares is the main protagonist of the game, a Zytron immune robot who was created for the sole purpose of saving humanity. He can run faster, jump higher, and fire quicker than any other robot. He can also collect spare parts and resources from enemies and recycle them into useful items, armor, and weaponry. He has access to four types of weapons: machine gun, grenade launcher, laser rifle, and railgun. Each weapon has its own advantages and disadvantages, and can be upgraded with different modules.

        -

        Tarus is a new playable character who was introduced in the upgraded version of the game, A.R.E.S.: Extinction Agenda EX. He is a massive robot who was designed for heavy combat and demolition. He can deal massive damage with his fists and hammer, as well as use his jetpack to fly over obstacles. He can also use his shield to block incoming attacks and reflect projectiles back at enemies. He has access to three types of weapons: shotgun, rocket launcher, and flamethrower. Each weapon has its own characteristics and effects, and can be enhanced with different attachments.

        -

        Battle deadly machines with a variety of weapons and armor

        -

        As you progress through the game, you will encounter various types of enemies, ranging from small drones to giant bosses. Each enemy has its own behavior, attack pattern, and weakness. You will need to use your skills and strategy to defeat them in battle. You can also use environmental objects such as crates, barrels, pipes, and electric wires to your advantage.

        -

        To survive the onslaught of enemies, you will need to equip yourself with suitable weapons and armor. You can find different weapons throughout the game, or buy them from shops using credits that you earn by killing enemies or completing missions. You can also upgrade your weapons by finding or buying modules that increase their damage, accuracy, fire rate, ammo capacity, or special effects.

        -

        How to download A.R.E.S.: Extinction Agenda patch torrent
        -A.R.E.S.: Extinction Agenda full game torrent download with patch
        -A.R.E.S.: Extinction Agenda patch torrent free download for PC
        -A.R.E.S.: Extinction Agenda torrent download cracked patch
        -A.R.E.S.: Extinction Agenda patch torrent download link
        -A.R.E.S.: Extinction Agenda patch torrent download instructions
        -A.R.E.S.: Extinction Agenda patch torrent download no survey
        -A.R.E.S.: Extinction Agenda patch torrent download latest version
        -A.R.E.S.: Extinction Agenda patch torrent download skidrow
        -A.R.E.S.: Extinction Agenda patch torrent download repack
        -A.R.E.S.: Extinction Agenda patch torrent download fitgirl
        -A.R.E.S.: Extinction Agenda patch torrent download codex
        -A.R.E.S.: Extinction Agenda patch torrent download cpy
        -A.R.E.S.: Extinction Agenda patch torrent download rg mechanics
        -A.R.E.S.: Extinction Agenda patch torrent download plaza
        -A.R.E.S.: Extinction Agenda patch torrent download ocean of games
        -A.R.E.S.: Extinction Agenda patch torrent download igg games
        -A.R.E.S.: Extinction Agenda patch torrent download steamunlocked
        -A.R.E.S.: Extinction Agenda patch torrent download nosteam
        -A.R.E.S.: Extinction Agenda patch torrent download utorrent
        -A.R.E.S.: Extinction Agenda patch torrent download bittorrent
        -A.R.E.S.: Extinction Agenda patch torrent download kickass
        -A.R.E.S.: Extinction Agenda patch torrent download the pirate bay
        -A.R.E.S.: Extinction Agenda patch torrent download rarbg
        -A.R.E.S.: Extinction Agenda patch torrent download 1337x
        -A.R.E.S.: Extinction Agenda patch torrent download limetorrents
        -A.R.E.S.: Extinction Agenda patch torrent download yts
        -A.R.E.S.: Extinction Agenda patch torrent download eztv
        -A.R.E.S.: Extinction Agenda patch torrent download torlock
        -A.R.E.S.: Extinction Agenda patch torrent download magnet link
        -Download A.R.E.S.: Extinction Agenda with patch via torrent
        -Download A.R.E.S.: Extinction Agenda and apply patch from torrent
        -Download and install A.R.E.S.: Extinction Agenda with patched files from torrent
        -Download and play A.R.E.S.: Extinction Agenda with updated patch from torrent
        -Download and enjoy A.R.E.S.: Extinction Agenda with latest patch from torrent
        -Where to download A.R.E.S.: Extinction Agenda with working patch from torrent
        -How to get A.R.E.S.: Extinction Agenda with fixed patch from torrent
        -How to install and run A.R.E.S.: Extinction Agenda with new patch from torrent
        -How to play and complete A.R.E.S.: Extinction Agenda with final patch from torrent
        -How to unlock all features in A.R.E.S.: Extinction Agenda with complete patch from torrent
        -Best site to download A.R.E.S.: Extinction Agenda with reliable patch from torrent
        -Fastest way to download A.R.E.S.: Extinction Agenda with secure patch from torrent
        -Easiest method to download A.R.E.S.: Extinction Agenda with verified patch from torrent
        -Safest option to download A.R.E.S.: Extinction Agenda with trusted patch from torrent
        -Cheapest source to download A.R.E.S.: Extinction Agenda with legit patch from torrent
        -Highest quality to download A.R.E.S.: Extinction Agenda with original patch from torrent
        -Most popular to download A.R.E.S.: Extinction Agenda with official patch from torrent
        -Most recommended to download A.R.E.S.: Extinction Agenda with genuine patch from torrent

        -

        Similarly, you can find different armor pieces throughout the game, or buy them from shops using credits. You can also upgrade your armor by finding or buying chips that increase your health, defense, speed, jump height, or special abilities.

        -

        Explore a sci-fi world with stunning graphics and soundtracks

        -

The game takes place in a futuristic setting where the Earth is contaminated by pollution. You will explore various locations in the A.R.E.S. universe, each with its own unique 3D environment and challenging obstacles. You will see abandoned factories, toxic sewers, space stations, and more.

        A.R.E.S.: Extinction Agenda Torrent Download [Patch] Instructions

        -

        If you want to play A.R.E.S.: Extinction Agenda on your PC, you can download it from torrent sites. However, you need to follow some steps to make sure the game works properly and safely. Here are the instructions:

        -

        How to download and install the torrent file

        -

        First, you need to have a torrent client installed on your PC, such as uTorrent or BitTorrent. Then, you need to find a reliable torrent site that offers A.R.E.S.: Extinction Agenda for download. You can use the links below to access some of the popular torrent sites:

- XBOX 360 XBLA: This site has a collection of Xbox 360 arcade games, including A.R.E.S.: Extinction Agenda EX. You can download the torrent file from this link.
- RLSLOG.net: This site has a variety of games, movies, and software for download. You can download the torrent file for A.R.E.S.: Extinction Agenda EX from this link.
- 4Fnet: This site has a selection of old and classic games for download. You can download the torrent file for A.R.E.S.: Extinction Agenda from this link.

After you download the torrent file, you need to open it with your torrent client and choose a location to save the game files. Wait for the download to finish and then extract the files using a program like WinRAR or 7-Zip. You should see a folder named A.R.E.S.: Extinction Agenda or A.R.E.S.: Extinction Agenda EX, depending on which version you downloaded.

        -

        How to apply the patch and fix any issues

        -

        Before you run the game, you need to apply the patch that fixes some bugs and adds some features to the game. You can download the patch from this link. After you download the patch, extract it and copy the files to the game folder, replacing any existing files. Then, run the game as administrator and enjoy.

        -

        If you encounter any issues while playing the game, such as crashes, errors, or missing files, you can try some of these solutions:

- Update your graphics card drivers and DirectX.
- Run the game in compatibility mode for Windows XP or Windows 7.
- Disable your antivirus or firewall temporarily.
- Check if your system meets the minimum requirements for the game.

        A.R.E.S.: Extinction Agenda Review and Rating

        -

        A.R.E.S.: Extinction Agenda is a game that has received mixed reviews from critics and players alike. Some praised its gameplay, graphics, and soundtracks, while others criticized its short length, lack of originality, and technical issues. Here are some of the pros and cons of the game:

        -

        What critics and players say about the game

        -

        According to Metacritic, a website that aggregates reviews from various sources, A.R.E.S.: Extinction Agenda has a score of 63 out of 100 based on 9 critic reviews and a score of 6.8 out of 10 based on 14 user ratings. Here are some of the comments from critics and players:

        - - "A.R.E.S.: Extinction Agenda is a solid action platformer that pays homage to classics like Mega Man and Contra. It has a great sense of speed, challenge, and variety. However, it also suffers from some technical glitches, repetitive enemies, and a short campaign." - IGN (7/10) - "A.R.E.S.: Extinction Agenda is a decent attempt at reviving the old-school side-scrolling shooter genre. It has some nice visuals, catchy music, and fun gameplay mechanics. However, it also feels generic, uninspired, and unpolished. It's not a bad game, but it's not a memorable one either." - GameSpot (6/10) - "A.R.E.S.: Extinction Agenda is a fun and addictive game that will appeal to fans of retro platformers. It has a lot of action, customization, and replay value. However, it also has some flaws, such as a short length, a lack of story, and some bugs. It's not a perfect game, but it's worth a try." - Steam User (8/10) - "A.R.E.S.: Extinction Agenda is a boring and frustrating game that will disappoint anyone looking for a good platformer. It has poor controls, Continuing the article: a game for everyone. It's a game for fans of retro platformers who are looking for a cheap and quick thrill, but not for anyone who is looking for a more polished and innovative game. We hope this article has helped you to learn more about A.R.E.S.: Extinction Agenda and decide whether you want to download it or not.

        FAQs

        -

        Here are some of the frequently asked questions about A.R.E.S.: Extinction Agenda:

        -

        What is the difference between A.R.E.S.: Extinction Agenda and A.R.E.S.: Extinction Agenda EX?

        -

        A.R.E.S.: Extinction Agenda EX is the upgraded version of A.R.E.S.: Extinction Agenda, which was released in 2014. It has some new features, such as a new playable character (Tarus), new weapons and abilities, new enemies and bosses, new levels and maps, new cut scenes and animations, and a new soundtrack. However, it also has the same core gameplay and story as the original game.

        -

        Is A.R.E.S.: Extinction Agenda safe to download from torrent sites?

        -

        Downloading games from torrent sites is always risky, as you might encounter viruses, malware, or other harmful files that could damage your PC or compromise your privacy. Therefore, we do not recommend downloading games from torrent sites unless you are absolutely sure that the source is reliable and trustworthy. You should also use a VPN service to hide your IP address and a antivirus software to scan the files before opening them.

        -

        What are the system requirements for A.R.E.S.: Extinction Agenda?

        -

        The minimum system requirements for A.R.E.S.: Extinction Agenda are:

- OS: Windows Vista, Windows 7, or Windows 8
- Processor: Intel Core 2 Duo, AMD Athlon X2
- Memory: 2 GB RAM
- Graphics: NVIDIA GeForce 7600 series, ATI Radeon HD 2400 series
- DirectX: Version 9.0c
- Storage: 1 GB available space
- Sound Card: DirectSound compatible (DirectX 9.0c or higher)

        How long is A.R.E.S.: Extinction Agenda?

        -

        A.R.E.S.: Extinction Agenda is a very short game that can be completed in less than two hours on average. However, the game has some replay value with different difficulty settings, leaderboards, and hidden items. The game also has an episodic approach that promises more content in the future.

        -

        Where can I find more information about A.R.E.S.: Extinction Agenda?

        -

        You can find more information about A.R.E.S.: Extinction Agenda on its official website, its Steam page, or its Wikipedia page. You can also watch some gameplay videos on YouTube or read some reviews on Metacritic.

        -

        -
        -
        \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Bios Usa V02 20 How to Backup and Restore Your PS2 BIOS Files.md b/spaces/tialenAdioni/chat-gpt-api/logs/Bios Usa V02 20 How to Backup and Restore Your PS2 BIOS Files.md deleted file mode 100644 index e0e90f02cfa8cf8a6af4f0f640e31a4926d26bb8..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Bios Usa V02 20 How to Backup and Restore Your PS2 BIOS Files.md +++ /dev/null @@ -1,137 +0,0 @@ - -

        Bios USA V02 20: What You Need to Know

        -

        If you are a fan of PlayStation 2 (PS2) games and want to enjoy them on your PC, you will need a PS2 emulator like PCSX2. But before you can run any PS2 game on your PC, you will also need a PS2 BIOS file. A PS2 BIOS file is a copy of the firmware that runs on the PS2 console. It contains the system settings and data that allow the emulator to communicate with the game.

        -

        One of the most popular and compatible PS2 BIOS files is Bios USA V02 20. This BIOS file is suitable for PCSX2 emulator and works with most PS2 games from the USA region. In this article, we will show you what you need to know about Bios USA V02 20, including how to download, install, and use it with PCSX2 emulator. We will also share some tips and tricks for using Bios USA V02 20 effectively.

        -

        Bios Usa V02 20


        Download Zip ►►►►► https://urlcod.com/2uK7KN



        -

        How to download Bios USA V02 20

        -

        The first step to use Bios USA V02 20 is to download it from a reliable and safe source. There are many websites that offer PS2 BIOS files for download, but not all of them are trustworthy. Some of them may contain viruses, malware, or fake files that can harm your PC or emulator.

        -

        To avoid any risk, we recommend you to download Bios USA V02 20 from SafeROMs.com. This website is one of the best sources for retro games, emulators, ROMs, and BIOS files. It has a large collection of PS2 BIOS files for different regions and versions, including Bios USA V02 20. All the files are tested and verified by the website staff and users.

        -

        To download Bios USA V02 20 from SafeROMs.com, follow these steps:

        -

        Bios Usa V02 20 download
        -Bios Usa V02 20 PCSX2
        -Bios Usa V02 20 PS2 emulator
        -Bios Usa V02 20 rar
        -Bios Usa V02 20 zip
        -Bios Usa V02 20 WinRAR
        -Bios Usa V02 20 ZArchiver
        -Bios Usa V02 20 SafeROMs
        -Bios Usa V02 20 ROMsMania
        -Bios Usa V02 20 CoolROM
        -Bios Usa V02 20 ROMHustler
        -Bios Usa V02 20 Emuparadise
        -Bios Usa V02 20 Romsie
        -Bios Usa V02 20 YouTube
        -Bios Usa V02 20 setup
        -Bios Usa V02 20 tutorial
        -Bios Usa V02 20 guide
        -Bios Usa V02 20 configuration
        -Bios Usa V02 20 best settings
        -Bios Usa V02 20 compatibility list
        -Bios Usa V02 20 games list
        -Bios Usa V02 20 games download
        -Bios Usa V02 20 iso files
        -Bios Usa V02 20 rom files
        -Bios Usa V02 20 EROM.BIN
        -Bios Usa V02 20 rom1.bin
        -Bios Usa V02 20 ROM2.BIN
        -Bios Usa V02 20 scph3004R.bin
        -Bios Usa V02 20 scph10000.bin
        -Bios Usa V02 20 scph39001.bin
        -Bios Usa V02 20 SCPH70004.BIN
        -Bios Usa V02 20 SCPH70012.BIN
        -Bios Usa V02 20 Sony PlayStation 2 BIOS (E) (v1.6) (2001-10-04) [SCPH30004].bin
        -Bios Usa V02 20 Sony PlayStation 2 BIOS (E) (v2.0) (2004-06-14) [SCPH70004].bin
        -Bios Usa V02 20 Sony PlayStation 2 BIOS (E) (v2.0) (2004-11-04) [SCPH50003].bin
        -Bios Usa V02 20 Sony PlayStation 2 BIOS (E) (v2.20) (2006-02-10) [SCPH77008].bin
        -Bios Usa V02 20 Sony PlayStation 2 BIOS (J) (v0.1) (2000-01-17) [SCPH10000].bin
        -Bios Usa V02 20 Sony PlayStation 2 BIOS (J) (v2.20) (2006-09-05) [SCPH90006].bin
        -Bios Usa V02 20 Sony PlayStation 2 BIOS (U) (v1.6) (2002-03-19) [SCPH39004].bin

        -
          -
1. Open your browser and go to https://www.saferoms.com/ps2-bios-versions-download/.
2. Scroll down until you see the list of all BIOS rom versions.
3. Find and click on USA v02.00 (14/06/2004) Console. This is the same as Bios USA V02 20.
4. You will be redirected to another page where you can see more details about the file.
5. Click on Download From Google Drive, Download From OneDrive, Download From MEGA.nz, or Download From Mirrored.to. Choose whichever option works for you.
6. You will be taken to another page where you can see the download link for the file.
7. Click on Download or Download anyway, depending on the option you chose.
8. The file will start downloading automatically. It is a WinRAR archive (.rar) file with a size of 36 MB.
9. Save the file in a folder where you can easily find it later.
        -

        How to extract and install Bios USA V02 20 on your PC

        -

        After downloading Bios USA V02 20 from SafeROMs.com, you need to extract it using WinRAR or any other archive file extracting software. WinRAR is a free software that can open and extract various types of archive files, including .rar files. To extract Bios USA V02 20 using WinRAR, follow these steps:

        -
          -
1. Open WinRAR software on your PC.
2. Navigate to the folder where you saved the downloaded .rar file.
3. Select the .rar file and click on Extract To.
4. A window will pop up where you can choose where to extract the file.
5. Select a folder where you want to extract the file. You can create a new folder or use an existing one.
6. Click on OK.
7. The file will start extracting automatically. It may take a few seconds or minutes depending on your PC speed.
8. After the extraction is complete, you will see a new folder with the same name as the .rar file.
9. Open the new folder and you will see several files inside it. These are the PS2 BIOS files that you need for the PCSX2 emulator.
        -

        To install Bios USA V02 20 on your PC, follow these steps:

        -
          -
1. Create a new folder on your PC where you want to store your PS2 BIOS files. You can name it anything you want, but we suggest something like PS2 BIOS.
2. Copy all the files from the extracted folder (the one that contains the PS2 BIOS files) and paste them into the new folder you just created.
3. You have successfully installed Bios USA V02 20 on your PC. You can now use it with the PCSX2 emulator.
        -

        How to use Bios USA V02 20 with PCSX2 emulator

        -

        To use Bios USA V02 20 with PCSX2 emulator, you need to configure PCSX2 settings and plugins first. PCSX2 is a free and open-source PS2 emulator that can run most PS2 games on PC with high compatibility and performance. It has many settings and plugins that allow you to customize your gaming experience according to your preferences and PC specifications.

        -

        To configure PCSX2 settings and plugins for optimal performance, follow these steps:

        -
          -
1. Open the PCSX2 emulator on your PC.
2. If this is your first time running PCSX2, you will see a welcome screen where you can choose your language. Select your preferred language and click on Apply.
3. You will then see a configuration screen where you can set up your PCSX2 settings and plugins. Click on Next.
4. You will be asked to select a BIOS rom from your PC. Click on Browse.
5. Navigate to the folder where you stored your PS2 BIOS files (the one that contains Bios USA V02 20).
6. Select Bios Usa v02.00 (14/06/2004) Console.bin. This is the same as Bios USA V02 20.
7. Click on Open.
8. You will see that Bios Usa v02.00 (14/06/2004) Console.bin has been added to the list of available BIOS roms in PCSX2.
9. Select it and click on Finish.
10. You have successfully configured PCSX2 settings and plugins for optimal performance.

          Note:

          -

If you want to change or customize any other settings or plugins in PCSX2, you can do so by clicking on Config, then Emulation Settings, Video (GS), Audio (SPU), PAD, or Cdvdrom. You can also access these options by pressing the F1-F5 keys respectively while running PCSX2.

          -

          Tips:

          -
• If you have a high-end PC, you can increase the resolution and graphics quality of PS2 games by changing some settings in GSdx plugin (the graphics plugin for PCSX2 emulator) by going to Config, then Video (GS), then Plugin Settings. You can change the Renderer option to Direct3D 11 (Hardware), OpenGL (Hardware), or Vulkan (Hardware) depending on your GPU support. You can also change the Internal Resolution option to a higher value like 3x Native or 4x Native to increase the resolution of the game. However, this will also increase the GPU load and may cause slowdowns or glitches on some games.
          • If you have a low-end PC, you can lower the resolution and graphics quality of PS2 games by changing some settings in GSdx plugin by going to Config, then Video (GS), then Plugin Settings. You can change the Renderer option to Direct3D 11 (Software), OpenGL (Software), or Vulkan (Software) depending on your GPU support. You can also change the Mipmapping option to Off, the CRC Hack Level option to Full (Safest), and the Hack Settings option to Skipdraw Range 1-100. However, this will also decrease the visual quality and may cause graphical glitches on some games.
          • -
• If you want to improve the sound quality and compatibility of PS2 games, you can change some settings in SPU2-X plugin (the audio plugin for PCSX2 emulator) by going to Config, then Audio (SPU), then Plugin Settings. You can change the Synchronizing Mode option to TimeStretch, which will prevent audio stuttering and skipping. You can also change the Interpolation option to Catmull-Rom (PS2-like/slow), which will produce smoother and more accurate sound.
          • -
          • If you want to use a keyboard and mouse or a gamepad as your controller for PS2 games, you can configure them in LilyPad plugin (the input plugin for PCSX2 emulator) by going to Config, then PAD, then Plugin Settings. You can click on Pad 1 or Pad 2 depending on which controller you want to configure. You can then click on each button and assign a key or a button from your keyboard, mouse, or gamepad. You can also adjust the sensitivity and deadzone of your analog sticks and triggers.
To select your source for PS2 games, go to CDVD, then ISO Selector, then Browse if you want to use a disc image file (.iso). Navigate to the folder where you stored your .iso file and select it. Alternatively, you can go to Cdvdrom, then Plugin Menu, then Plugin Settings if you want to use a physical disc. Select your disc drive from the Source Drive option.
        -

        To load and play PS2 games with Bios USA V02 20, follow these steps:

        -
          -
1. After selecting your source for PS2 games, go to System, then Boot ISO (fast) if you are using a disc image file (.iso), or Boot CDVD (fast) if you are using a physical disc.
2. The game will start loading automatically. You will see the PS2 logo and then the game intro or menu.
3. You can use your keyboard, mouse, or gamepad as your controller to play the game. You can also adjust the volume, pause, resume, save, load, or quit the game using the PCSX2 menu bar or hotkeys.
4. You have successfully loaded and played PS2 games with Bios USA V02 20.
        -

        Tips and tricks for using Bios USA V02 20

        -

        Bios USA V02 20 is a great PS2 BIOS file that can run most PS2 games from the USA region with high compatibility and performance. However, there are some tips and tricks that can help you improve your gaming experience even more. Here are some of them:

        -
          -
        • If you encounter any issues or errors with Bios USA V02 20, such as black screen, freezing, crashing, or missing graphics or sound, you can try to update your PCSX2 emulator to the latest version or use a different version of Bios USA V02 20. You can also check the PCSX2 wiki or forum for specific game fixes or patches.
        • -
        • If you want to backup your Bios USA V02 20 file or any other PS2 BIOS file, you can copy them from your PS2 BIOS folder (the one that contains Bios USA V02 20) and paste them into another folder or external drive. You can also compress them into a .zip or .rar file to save space.
        • -
        • If you want to play PS2 games from other regions besides USA, such as Europe or Japan, you will need to use a different PS2 BIOS file that matches the region of the game. You can download them from SafeROMs.com or other websites that offer PS2 BIOS files for download. You can also switch between different PS2 BIOS files by going to Config, then Bios Selector, then Browse. Select the PS2 BIOS file that you want to use and click on Apply.
        • If you want to enhance the graphics quality of PS2 games further, you can change some settings in GSdx plugin by going to Config, then Video (GS), then Plugin Settings. You can change the Renderer option to Direct3D 11 (Hardware), OpenGL (Hardware), or Vulkan (Hardware) depending on your GPU support. You can also enable some extra features like FXAA, Shade Boost, External Shader, or Texture Filtering to enhance the graphics quality. However, these features may also increase the GPU load and may cause slowdowns or glitches on some games. -
        • If you want to use some cheat codes or hacks for PS2 games, you can use some tools like PCSX2 Cheat Converter or OmniConvert to convert the cheat codes into a format that PCSX2 can recognize. You can also use some websites like PCSX2 Cheats or the PCSX2 Wiki to find cheat codes or patches for your games. You can then create a .pnach file named after your game's CRC code, insert the cheat codes into it, place the file into the cheats folder of your PCSX2 emulator, and enable it by going to System, then Enable Cheats (a minimal example .pnach is sketched just after this list).
        • -
        -
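
        To make the .pnach step above concrete, here is a minimal Python sketch that writes such a file. This is an illustration only: the CRC, game title, address, and value below are placeholders rather than working codes for any game, and the cheats folder path is an assumption about a typical install. The real CRC is printed in the PCSX2 console window when the game boots, and the patch lines come from the converter tools mentioned above.

```python
from pathlib import Path

# Placeholder values -- replace with your game's real CRC and converted codes.
game_crc = "ABCD1234"                                     # hypothetical CRC reported by PCSX2
cheats_dir = Path("C:/Program Files (x86)/PCSX2/cheats")  # adjust to your install location

pnach_text = """gametitle=Example Game (NTSC-U)
comment=Example cheat file

// patch=<place>,<cpu>,<address>,<type>,<value>
patch=1,EE,00100000,word,00000001
"""

cheats_dir.mkdir(parents=True, exist_ok=True)
(cheats_dir / f"{game_crc}.pnach").write_text(pnach_text)
print(f"Wrote {game_crc}.pnach -- enable it with System > Enable Cheats in PCSX2")
```

        The file has to be named after the game's CRC (here it would be ABCD1234.pnach) for PCSX2 to pick it up, so you create one .pnach file per game.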

        Conclusion

        -

        Bios USA V02 20 is one of the best PS2 BIOS files that you can use with PCSX2 emulator. It can run most PS2 games from the USA region with high compatibility and performance. It is easy to download, install, and use with PCSX2 emulator. It also allows you to enjoy PS2 games on your PC with enhanced graphics and sound quality. You can also use some cheat codes or hacks to make your gaming experience more fun and exciting.

        -

        We hope this article has helped you learn everything you need to know about Bios USA V02 20. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you. Happy gaming!

        -

        Frequently Asked Questions

        -
          -
        1. What is Bios USA V02 20?
          -Bios USA V02 20 is a PS2 BIOS file that is suitable for PCSX2 emulator and works with most PS2 games from the USA region. It contains the system settings and data that allow the emulator to communicate with the game.
        2. -
        3. Where can I download Bios USA V02 20?
          -You can download Bios USA V02 20 from SafeROMs.com, one of the best sources for retro games, emulators, ROMs, and BIOS files. It has a large collection of PS2 BIOS files for different regions and versions, including Bios USA V02 20.
        4. -
        5. How can I use Bios USA V02 20 with PCSX2 emulator?
          -You can use Bios USA V02 20 with PCSX2 emulator by following these steps: download Bios USA V02 20 from SafeROMs.com; extract it using WinRAR or any other archive extraction software; install it on your PC by copying all the files from the extracted folder into a new folder; configure PCSX2 settings and plugins for optimal performance by selecting Bios USA V02 20 as your BIOS rom; then load and play PS2 games by selecting your source (disc image file or physical disc) and booting it.
        6. -
        7. What are some tips and tricks for using Bios USA V02 20?
          -Some tips and tricks for using Bios USA V02 20 are: update your PCSX2 emulator to the latest version or use a different version of Bios USA V02 20 if you encounter any issues or errors; back up your Bios USA V02 20 file or any other PS2 BIOS file by copying it into another folder or external drive; use a different PS2 BIOS file that matches the region of the game if you want to play PS2 games from regions besides the USA; enhance the graphics and sound quality of PS2 games by changing some settings and plugins in the PCSX2 emulator; and use cheat codes or hacks by converting them into a format that PCSX2 can recognize and creating a .pnach file with them.
        8. -
        9. What are some of the best PS2 games that I can play with Bios USA V02 20?
          -Some of the best PS2 games that you can play with Bios USA V02 20 are God of War II, Final Fantasy X, Grand Theft Auto: San Andreas, Shadow of the Colossus, Metal Gear Solid 3: Snake Eater, Kingdom Hearts II, Resident Evil 4, Silent Hill 2, Devil May Cry 3: Dante's Awakening, and Ratchet & Clank: Up Your Arsenal.
        10. -
        -

        0a6ba089eb
        -
        -
        \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Aimbot 8 Ball Pool APK A Smart and Easy Way to Improve Your Skills.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Aimbot 8 Ball Pool APK A Smart and Easy Way to Improve Your Skills.md deleted file mode 100644 index d167cd031af342a220b2ebe8ad9c3d8a007cb00a..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Aimbot 8 Ball Pool APK A Smart and Easy Way to Improve Your Skills.md +++ /dev/null @@ -1,137 +0,0 @@ -
        -

        Aimbot 8 Ball Pool APK: What You Need to Know

        -

        If you are a fan of 8 Ball Pool, you might have heard of Aimbot 8 Ball Pool APK, a modded version of the popular pool game that allows you to aim and shoot with perfect accuracy. But what exactly is this app, how does it work, and is it safe to use? In this article, we will answer all these questions and more. Read on to find out everything you need to know about Aimbot 8 Ball Pool APK.

        -

        Introduction

        -

        8 Ball Pool is one of the most popular online multiplayer games in the world, with millions of players competing against each other in various modes and tournaments. The game is simple to play, but hard to master, as it requires skill, strategy, and luck. You have to pot all your balls before your opponent does, while avoiding fouls and scratches. The game also has a lot of features and options, such as different cues, tables, chat messages, coins, cash, and rewards.

        -

        aimbot 8 ball pool apk


        Download ✦✦✦ https://bltlly.com/2uOrvK



        -

        However, not everyone has the time or patience to practice and improve their skills in the game. Some people want to win every match without much effort, or just have some fun with their friends. That's why some people resort to using Aimbot 8 Ball Pool APK, a modified version of the game that gives them an unfair advantage over their opponents.

        -

        What is Aimbot 8 Ball Pool APK?

        -

        Aimbot 8 Ball Pool APK is a third-party app that modifies the original game and adds an aiming tool that helps you aim the ball and extend the aim line automatically. It also allows you to adjust the shot power and spin according to your preference. With this app, you can make nice and accurate shots, not limited to direct straight shots but also bank shots or cushion shots easily. You can also see the trajectory of the cue ball and the target ball before you shoot.

        -

        Aimbot 8 Ball Pool APK is not an official app from the developers of 8 Ball Pool, but rather a hacked version that violates the terms and conditions of the game. It is not available on Google Play Store or App Store, but only on some websites that offer APK files for download.

        -

        Why do people use Aimbot 8 Ball Pool APK?

        -

        There are many reasons why people use Aimbot 8 Ball Pool APK, but here are some of the most common ones:

        -
          -
        • They want to improve their skills and accuracy in the game.
        • -
        • They want to save time and money by winning more matches and earning more coins and cash.
        • -
        • They want to have fun and impress their friends with their amazing shots.
        • -
        • They want to cheat and troll their opponents by making impossible shots.
        • -
        -

        However, using Aimbot 8 Ball Pool APK also comes with some risks and drawbacks, which we will discuss later in this article.

        -

        How to download and install Aimbot 8 Ball Pool APK

        -

        If you want to try Aimbot 8 Ball Pool APK, you need to download and install it on your device. However, you need to be careful and follow some steps to avoid any problems or errors. Here is how to do it:

        -


        -

        Step 1: Find a reliable source

        -

        As mentioned earlier, Aimbot 8 Ball Pool APK is not available on the official app stores, but only on some websites that offer APK files for download. However, not all of these websites are trustworthy or safe, as some of them may contain viruses, malware, or fake files that can harm your device or steal your data. Therefore, you need to find a reliable source that has positive reviews and feedback from other users, and that provides the latest and updated version of the app.

        -

        One of the websites that we recommend is [APKPure], which is a popular and reputable platform that offers free and pure APK files for various apps and games. You can visit their website and search for Aimbot 8 Ball Pool APK, or use this link: [https://apkpure.com/aimbot-8-ball-pool/com.aimbot.eightballpool].

        -

        Step 2: Enable unknown sources on your device

        -

        Before you can install Aimbot 8 Ball Pool APK on your device, you need to enable unknown sources on your device settings. This is because your device normally prevents you from installing apps from sources other than the official app stores, for security reasons. However, you can change this setting by following these steps:

        -
          -
        • Go to your device settings and look for security or privacy options.
        • -
        • Find the option that says unknown sources or allow installation from unknown sources and toggle it on.
        • -
        • A warning message may appear, asking you to confirm your action. Tap OK or Yes to proceed.
        • -
        -

        Now you are ready to install Aimbot 8 Ball Pool APK on your device.

        -

        Step 3: Download and install the APK file

        -

        The final step is to download and install the APK file of Aimbot 8 Ball Pool APK on your device. Here is how to do it:

        -
          -
        • Go back to the website where you found the app and tap on the download button.
        • -
        • Wait for the download to finish and locate the file on your device storage.
        • -
        • Tap on the file and follow the instructions on the screen to install the app.
        • -
        • Once the installation is complete, you will see an icon of Aimbot 8 Ball Pool APK on your device home screen or app drawer.
        • -
        -

        Congratulations! You have successfully downloaded and installed Aimbot 8 Ball Pool APK on your device. Now you can launch the app and start playing 8 Ball Pool with perfect aim and accuracy.

        -

        How to use Aimbot 8 Ball Pool APK

        -

        Using Aimbot 8 Ball Pool APK is very easy and simple. You just need to follow these steps:

        -

        Step 1: Launch the app and grant permissions

        -

        The first thing you need to do is to launch the app by tapping on its icon. The app will ask you for some permissions, such as access to your device storage, camera, microphone, location, etc. You need to grant these permissions for the app to work properly. If you deny any of these permissions, the app may not function correctly or crash.

        -

        Step 2: Select the game mode and table size

        -

        The next thing you need to do is to select the game mode and table size that you want to play. You can choose from various modes, such as practice, 1v1, tournament, club, etc. You can also choose from different table sizes, such as small, medium, large, etc. The app will automatically match you with an opponent based on your selection.

        -

        Step 3: Adjust the aim line and shot power

        -

        You can also add spin to your shot by tapping on a circular icon at the top right corner of the screen. You can choose from different types of spin, such as top spin, back spin, left spin, right spin, etc. The app will show you how the spin will affect the movement of the cue ball and the target ball.

        -

        Step 4: Enjoy the game and win every match

        -

        The final thing you need to do is to enjoy the game and win every match with your perfect aim and accuracy. You can tap on the shoot button at the bottom right corner of the screen to execute your shot. You can also use some chat messages or emojis to communicate with your opponent or express your emotions. You can also watch some replays or share your shots with your friends on social media.

        -

        With Aimbot 8 Ball Pool APK, you can easily win every match and earn more coins and cash, which you can use to buy new cues, tables, chat packs, etc. You can also rank up faster and unlock more achievements and rewards.

        -

        Pros and cons of Aimbot 8 Ball Pool APK

        -

        As with any other app or game, Aimbot 8 Ball Pool APK has its pros and cons. Here are some of them:

        -

        Pros

        -
          -
        • Improve your skills and accuracy

          -

          One of the benefits of using Aimbot 8 Ball Pool APK is that it can help you improve your skills and accuracy in the game. By using the app, you can learn how to aim better, how to adjust the shot power and spin, how to make different types of shots, etc. You can also practice more and gain more confidence in your abilities.

        • -
        • Save time and money

          -

          Another benefit of using Aimbot 8 Ball Pool APK is that it can save you time and money by winning more matches and earning more coins and cash. You don't have to spend hours or days playing the game to earn enough coins and cash to buy new cues, tables, chat packs, etc. You also don't have to spend real money to buy these items or to get more coins and cash. With Aimbot 8 Ball Pool APK, you can get everything you want for free and fast.

        • -
        • Have fun and impress your friends

          -

          A third benefit of using Aimbot 8 Ball Pool APK is that it can make the game more fun and enjoyable for you and your friends. You can have fun by making amazing shots that would otherwise be impossible or very difficult to make. You can also impress your friends by showing them your skills and accuracy, or by challenging them to a match and beating them easily.

        • -
        -

        Cons

        -
          -
        • Risk of getting banned or hacked

          -

          One of the drawbacks of using Aimbot 8 Ball Pool APK is that it can put you at risk of getting banned or hacked by the game developers or other players. As mentioned earlier, Aimbot 8 Ball Pool APK is a hacked version of the game that violates the terms and conditions of the game. Therefore, if the game developers detect that you are using this app, they may ban your account permanently or temporarily, or delete your progress and data. Moreover, if you download the app from an unreliable source, you may expose your device to viruses, malware, or fake files that can harm your device or steal your data.

        • -
        • Lose the challenge and thrill of the game

          -

          Another drawback of using Aimbot 8 Ball Pool APK is that it can make the game less challenging and thrilling for you and your opponents. By using this app, you are basically cheating and taking away the skill, strategy, and luck factors that make the game interesting and exciting. You are also making the game unfair and boring for your opponents, who may not have a chance to win or enjoy the game.

        • -
        • Feel guilty and dishonest

          -

          A third drawback of using Aimbot 8 Ball Pool APK is that it can make you feel guilty and dishonest for cheating and breaking the rules of the game. You may lose respect for yourself and for others who play the game honestly and fairly. You may also lose interest in the game after a while, as it becomes too easy and repetitive for you.

        • -
        -

        Conclusion

        -

        Aimbot 8 Ball Pool APK is a modded version of 8 Ball Pool that allows you to aim and shoot with perfect accuracy. It can help you improve your skills, save time and money, have fun and impress your friends, but it also comes with some risks and drawbacks, such as getting banned or hacked, losing the challenge and thrill of the game, and feeling guilty and dishonest. Therefore, you need to be careful and responsible when using this app, and respect the rules and rights of other players. If you want to download and install Aimbot 8 Ball Pool APK, you need to find a reliable source, enable unknown sources on your device, and follow some steps to install the app. Then, you can launch the app and select the game mode and table size, adjust the aim line and shot power, and enjoy the game and win every match.

        -

        We hope this article has been helpful and informative for you. If you have any questions or comments, please feel free to leave them below. Thank you for reading!

        -

        FAQs

        -

        Here are some frequently asked questions about Aimbot 8 Ball Pool APK:

        -
          -
        • Is Aimbot 8 Ball Pool APK safe to use?

          -

          Aimbot 8 Ball Pool APK is not an official app from the developers of 8 Ball Pool, but rather a hacked version that violates the terms and conditions of the game. Therefore, it is not safe to use, as it may expose your device to viruses, malware, or fake files that can harm your device or steal your data. It may also put your account at risk of getting banned or hacked by the game developers or other players.

        • -
        • Is Aimbot 8 Ball Pool APK legal to use?

          -

          Aimbot 8 Ball Pool APK is not legal to use, as it breaks the rules and rights of other players who play the game honestly and fairly. It also infringes the intellectual property rights of the game developers who created and own the original game. Therefore, using this app may result in legal actions or consequences from the game developers or other authorities.

        • -
        • How can I uninstall Aimbot 8 Ball Pool APK?

          -

          If you want to uninstall Aimbot 8 Ball Pool APK from your device, you can follow these steps:

          -
            -
          • Go to your device settings and look for apps or applications options.
          • -
          • Find Aimbot 8 Ball Pool APK from the list of apps and tap on it.
          • -
          • Tap on the uninstall button and confirm your action.
          • -
          • Wait for the uninstallation to finish and check if the app is gone from your device.
          • -
        • -
        • Can I use Aimbot 8 Ball Pool APK with other mods or hacks?

          -

          We do not recommend using Aimbot 8 Ball Pool APK with other mods or hacks, as it may cause conflicts or errors that can affect the performance or functionality of the app or the game. It may also increase the risk of getting detected or banned by the game developers or other players.

        • -
        • Can I use Aimbot 8 Ball Pool APK offline?

          -

          No, you cannot use Aimbot 8 Ball Pool APK offline, as it requires an internet connection to work properly. You need to connect to a server to play with other players online. If you try to use the app offline, it may not function correctly or crash.

        • -

        197e85843d
        -
        -
        \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/A Perfect Marriage Laurey Bright Epub 15.md b/spaces/tioseFevbu/cartoon-converter/scripts/A Perfect Marriage Laurey Bright Epub 15.md deleted file mode 100644 index f13cb0cf1f6aca4a1f185c915cc46bacda3d7200..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/A Perfect Marriage Laurey Bright Epub 15.md +++ /dev/null @@ -1,13 +0,0 @@ -
        -

        A Perfect Marriage by Laurey Bright: A Review

        -

        A Perfect Marriage by Laurey Bright is a romance novel that explores the theme of betrayal and forgiveness in a long-term relationship. The book was first published in 1995 by Silhouette Books[^1^] and is available as an ebook[^3^].

        -

        The story follows Celine and Max Archer, a married couple who have been together for twelve years. They have a comfortable and peaceful life, but they are not in love. They married for convenience and agreed to a dry-eyed deal instead of vows. They both think the other is happy with the arrangement, until Max breaks the bargain by wanting more. He confesses his love for Celine and asks her to give him a chance to make their marriage work. But Celine is shocked and hurt by his revelation, and feels betrayed by his lies. She decides to end their marriage and move on with her life.

        -

        A Perfect Marriage Laurey Bright Epub 15


        Download Ziphttps://urlcod.com/2uHw6c



        -

        However, unbeknownst to Max, Celine is pregnant with their child. She keeps this secret from him, hoping to avoid any complications. But when Max finds out, he is determined to win her back and prove his sincerity. He also wants to be a father to their baby and share their new beginning. Celine is torn between her anger and her love for Max, and she struggles to trust him again. Can they overcome their past mistakes and rebuild their perfect marriage?

        -

        A Perfect Marriage by Laurey Bright is a touching and emotional read that will appeal to fans of contemporary romance. The author has brilliantly demonstrated that the bonds of true love are strong and that a marriage can be rebuilt and renewed even after the most unforgivable betrayal of all[^2^]. The characters are realistic and flawed, and their journey is full of challenges and growth. The book also explores the importance of communication, honesty, and compromise in a relationship. A Perfect Marriage by Laurey Bright is a book that will make you believe in second chances.

        - -

        The book also shows the different perspectives of Celine and Max, and how they cope with their situation. Celine is a strong and independent woman who has a successful career as a lawyer. She values honesty and loyalty, and she feels betrayed by Max's deception. She also blames herself for not being able to love him the way he deserves. She tries to move on by dating other men, but she realizes that none of them can compare to Max. She says: \"I don't want to be with anyone else. I don't want to be alone. I want you. I love you.\"[^2^]

        -

        Max is a kind and caring man who has always loved Celine, but he never told her because he was afraid of losing her. He thought she was happy with their arrangement, and he respected her wishes. He also felt unworthy of her love, because he had a troubled past that he kept hidden from her. He says: \"I've always loved you, Celine. From the first moment I saw you. But I didn't think you'd ever love me back. You were so beautiful, so smart, so perfect. And I was nothing.\"[^2^]

        -

        The book also has some secondary characters who add some humor and drama to the story. There is Celine's best friend, Rachel, who is supportive and loyal to her. She also has a crush on Max's brother, Jake, who is a charming and flirtatious journalist. There is also Max's ex-girlfriend, Tessa, who is a manipulative and jealous woman who tries to sabotage their marriage. She claims that she is pregnant with Max's child, but she is lying.

        e93f5a0c3f
        -
        -
        \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Crack Activation Prepar3d _VERIFIED_.md b/spaces/tioseFevbu/cartoon-converter/scripts/Crack Activation Prepar3d _VERIFIED_.md deleted file mode 100644 index 030a243abc66b3fccf2f2af13464118fa520fc8b..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Crack Activation Prepar3d _VERIFIED_.md +++ /dev/null @@ -1,110 +0,0 @@ -
        -

        How to Crack Activate Prepar3D: A Complete Guide


        If you are a flight simulation enthusiast, you may have heard of Prepar3D, a visual simulation platform that allows users to create training scenarios across aviation, maritime and ground domains. Prepar3D is developed by Lockheed Martin, a global security and aerospace company, and is widely used by commercial, academic, professional, and military institutions for various purposes.

        -

        crack activation prepar3d


        Download Filehttps://urlcod.com/2uHwYt



        -

        However, Prepar3D is not a cheap software. Depending on the license type and edition you choose, you may have to pay hundreds or even thousands of dollars to use it. Moreover, you need to activate your license online every time you launch Prepar3D, which can be inconvenient or impossible if you don't have an internet connection.

        -

        That's why some users may want to crack activate Prepar3D, which means bypassing the license verification process and using the software without paying for it. This can save you money and hassle, but it also comes with some risks and legal issues that you should be aware of.

        -

        In this article, we will show you how to crack activate Prepar3D using an activator tool that can work for any version and edition of Prepar3D. We will also give you some tips on how to download Prepar3D and its latest updates, as well as how to enjoy its features and add-ons. By the end of this article, you will be able to use Prepar3D as a fully functional simulation platform without any limitations.

        -

        -

        What is Prepar3D and why do you need to crack activate it?

        -

        Prepar3D (pronounced "prepared") is a visual simulation platform that allows users to create training scenarios across aviation, maritime and ground domains. It is based on Microsoft's ESP technology, which was originally developed for Microsoft Flight Simulator X (FSX).

        -

        Prepar3D offers many features and benefits that make it a superior simulation platform compared to FSX or other flight simulators. Some of these features include:

        -
          -
        • A virtual world with 40 high-detail cities and more than 24,900 airports around the world
        • -
        • A realistic weather system with dynamic clouds, precipitation, fog, wind, and thermals
        • -
        • A physics engine that simulates the effects of gravity, drag, lift, thrust, and other forces on the aircraft
        • -
        • A scenario system that allows users to create and edit custom missions and scenarios with various objectives, events, and triggers
        • -
        • A multiplayer mode that supports up to 64 players online or on a local network
        • -
        • A software development kit (SDK) that enables users to create and modify their own content, such as aircraft, scenery, effects, sounds, panels, gauges, and more
        • -
        -

        Prepar3D is available in three different license types: Academic, Professional, and Professional Plus. Each license type has a different price and a different set of features and capabilities. For example, the Academic license costs $59.95 and is intended for students and hobbyists, while the Professional Plus license costs $2,299 and is intended for military and government use. You can compare the license types and their features on the official website.

        -

        Prepar3D is also available in two different editions: v4 and v5. The v4 edition was released in 2017 and is based on 64-bit architecture, which allows it to use more memory and avoid crashes. The v5 edition was released in 2020 and is based on DirectX 12, which allows it to use more advanced graphics and lighting effects. You can choose the edition that suits your system requirements and preferences.

        -

        However, Prepar3D is not a free software. You need to purchase a license for each computer you want to use it on, and you need to activate your license online every time you launch Prepar3D. This can be inconvenient or impossible if you don't have an internet connection or if you want to use Prepar3D on multiple computers.

        -

        That's why some users may want to crack activate Prepar3D, which means bypassing the license verification process and using the software without paying for it. This can save you money and hassle, but it also comes with some risks and legal issues that you should be aware of.

        -

        How to download Prepar3D and its latest updates

        -

        Before you can crack activate Prepar3D, you need to download the software and its latest updates. There are two ways to do this: from the official website or from torrent sites.

        -

        The official website of Prepar3D is [5](https://prepar3d.com/). Here you can find all the information about Prepar3D, such as its features, license types, editions, system requirements, FAQs, forums, support, and more. You can also purchase a license for Prepar3D from here.

        -

        To download Prepar3D from the official website, you need to create an account and log in. Then you need to go to the Downloads page and choose the version and edition of Prepar3D that you want to download. You will be given a link to download the installer file, which is about 12 GB in size. You will also be given a product key that you will need to activate your license later.

        -

        To install Prepar3D from the official website, you need to run the installer file and follow the instructions on the screen. You will be asked to choose a destination folder for Prepar3D and agree to the terms of service. You will also be asked to enter your product key and activate your license online. The installation process may take some time depending on your internet speed and system performance.

        -

        The other way to download Prepar3D is from torrent sites. Torrent sites are websites that allow users to share files using peer-to-peer (P2P) technology. You can find many torrent sites on the internet that offer various files for download, such as movies, music, games, software, etc.

        -

        To download Prepar3D from torrent sites, you need to have a torrent client installed on your computer. A torrent client is a software that enables you to download files from torrent sites using P2P technology. Some examples of torrent clients are uTorrent, BitTorrent, qBittorrent, etc.

        -

        To find Prepar3D on torrent sites, you need to use a search engine like Google or Bing and type in keywords like "Prepar3D torrent" or "Prepar3D download". You will get many results from different torrent sites that offer Prepar3D for download. You need to choose a reliable and safe torrent site that has good ratings and reviews from other users.

        -

        To download Prepar3D from torrent sites, you need to click on the torrent link or the magnet link that corresponds to the file you want to download. This will open your torrent client and start the download process. You will be able to see the progress and status of your download on your torrent client. The download process may take some time depending on the file size, the number of seeders and leechers, and your internet speed.

        -

        To install Prepar3D from torrent sites, you need to open the downloaded file using a file extractor like WinRAR, 7-Zip, etc. You will find a folder that contains the Prepar3D files and a readme file that contains the instructions on how to install and crack activate Prepar3D. You need to follow the instructions carefully and copy the crack files to the Prepar3D folder. The installation process may vary depending on the source of the torrent file.

        -

        Whether you download Prepar3D from the official website or from torrent sites, you need to update it to the latest version. This will ensure that you have the best performance and compatibility with Prepar3D. To update Prepar3D, you need to go to the official website and check for the latest hotfixes and patches for your version and edition of Prepar3D. You can download and install them from there.

        -

        How to crack activate Prepar3D using an activator tool

        -

        Now that you have downloaded and installed Prepar3D, you may want to crack activate it using an activator tool. An activator tool is a software that can bypass the license verification process and make Prepar3D think that you have a valid license. This way, you can use Prepar3D without paying for it or activating it online.

        -

        However, before you use an activator tool, you should be aware of the risks and legal issues of cracking software. Cracking software is illegal and violates the terms of service of Prepar3D. It can also expose your computer to viruses, malware, spyware, or other harmful programs that may damage your system or steal your personal information. Moreover, cracking software may cause errors, crashes, or compatibility issues with Prepar3D or its add-ons. Therefore, you should use an activator tool at your own risk and discretion.

        -

        If you decide to use an activator tool, you should choose a reliable and safe one that can work for any version and edition of Prepar3D. One such tool is [6](https://www.megaddons.org/), which is a website that offers various activator tools for different simulation software, including Prepar3D. This website has good ratings and reviews from other users and claims to be virus-free and easy to use.

        -

        To use [6](https://www.megaddons.org/) as an activator tool for Prepar3D, you need to follow these steps:

        -
          -
        1. Go to [6](https://www.megaddons.org/) and click on the "Prepar3D Activator" button.
        2. -
        3. You will be redirected to a page where you need to complete a short survey or offer to unlock the download link for the activator tool. This is how the website makes money and supports its development.
        4. -
        5. After completing the survey or offer, you will get the download link for the activator tool. Click on it and save the file on your computer.
        6. -
        7. Open the file using a file extractor like WinRAR, 7-Zip, etc. You will find a folder that contains the activator tool and a readme file that contains the instructions on how to use it.
        8. -
        9. Run the activator tool as administrator and follow the instructions on the screen. You will be asked to select your version and edition of Prepar3D and click on the "Activate" button.
        10. -
        11. The activator tool will scan your Prepar3D folder and modify some files to bypass the license verification process. This may take some time depending on your system performance.
        12. -
        13. When the activation process is done, you will see a message that says "Activation successful". You can now close the activator tool and launch Prepar3D normally.
        14. -
        -

        Congratulations! You have successfully crack activated Prepar3D using an activator tool. You can now use Prepar3D as a fully functional simulation platform without any limitations.

        -

        How to enjoy Prepar3D with its features and add-ons

        -

        Now that you have crack activated Prepar3D, you may want to enjoy its features and add-ons. The core simulation features were already covered earlier in this article, so there is no need to repeat them here.

        -

        However, Prepar3D can be even more enjoyable and realistic with the use of various add-ons and mods. Add-ons and mods are additional content that can enhance your training experience with Prepar3D by adding new features, functions, or elements to the simulation. Some examples of add-ons and mods are:

        -
          -
        • Scenery add-ons that add more detail and realism to the environment, such as buildings, landmarks, vegetation, terrain, water, etc.
        • -
        • Aircraft add-ons that add new models or liveries of aircraft, or improve the existing ones with more accurate flight dynamics, systems, sounds, etc.
        • -
        • Weather add-ons that add more realism and variability to the weather system, such as real-time weather data, custom weather themes, advanced cloud effects, etc.
        • -
        • Traffic add-ons that add more realism and diversity to the traffic system, such as real-world schedules, liveries, models, sounds, etc. for AI or online traffic.
        • -
        • Utility add-ons that add new functions or tools to the simulation, such as flight planners, navigators, recorders, controllers, etc.
        • -
        -

        You can find many add-ons and mods for Prepar3D on various websites or forums that specialize in flight simulation content. Some examples of these websites or forums are:

        -
          -
        • [7](https://www.flightsim.com/), a website that offers thousands of free downloads for various flight simulators
        • -
        • [8](https://www.avsim.com/), a website that offers news, reviews, forums, downloads, and more for various flight simulators
        • -
        • [9](https://www.simmarket.com/), a website that offers a marketplace for payware add-ons for various flight simulators
        • -
        • [10](https://www.orbxsystems.com/), a website that offers high-quality scenery add-ons for various flight simulators
        • -
        • [11](https://www.pmdg.com/), a website that offers high-quality aircraft add-ons for various flight simulators
        • -
        -

        To install add-ons and mods for Prepar3D, you need to follow the instructions provided by the developers or creators of the content. Usually, you need to download the files and copy them to the appropriate folders in your Prepar3D directory. Some add-ons may require additional steps or software to install or configure them properly. You should always read the readme files or manuals that come with the add-ons before installing them.

        -

        To enjoy Prepar3D with its features and add-ons, you need to optimize your performance and settings for Prepar3D. Prepar3D can be a demanding software that requires a lot of resources from your system. If your system is not powerful enough or not configured properly, you may experience low frame rates, stuttering, crashes, or other issues that can ruin your simulation experience.

        -

        To optimize your performance and settings for Prepar3D, you need to do the following:

        -
          -
        1. Check your system requirements and make sure they meet or exceed the minimum or recommended requirements for Prepar3D. You can find the system requirements on the official website.
        2. -
        3. Update the drivers and software for your graphics card, sound card, and operating system. This will ensure that you have the best compatibility and performance with Prepar3D.
        4. -
        5. Adjust your settings and options for Prepar3D according to your system capabilities and preferences. You can find the settings and options in the main menu of Prepar3D under Options. You can tweak various aspects of Prepar3D, such as graphics, sound, controls, realism, traffic, weather, etc. You should aim for a balance between quality and performance that suits your needs.
        6. -
        7. Use external tools or utilities that can help you optimize your performance and settings for Prepar3D. Some examples of these tools or utilities are:
        8. -
            -
          • [12](https://www.simtweaks.com/), a tool that allows you to fine-tune your graphics settings for Prepar3D
          • -
          • [13](https://www.fsps-store.com/), a tool that allows you to boost your frame rates and performance for Prepar3D
          • -
          • [14](https://www.rexsimulations.com/), a tool that allows you to enhance your weather and environment effects for Prepar3D
          • -
          • [15](https://www.flight1.com/products.asp?product=fsuipc7), a tool that allows you to interface with Prepar3D and control various functions and variables
          • -
          -
        -

        By following these steps, you will be able to optimize your performance and settings for Prepar3D and enjoy its features and add-ons without any issues.

        -

        Conclusion and FAQs

        -

        In this article, we have shown you how to crack activate Prepar3D using an activator tool that can work for any version and edition of Prepar3D. We have also given you some tips on how to download Prepar3D and its latest updates, as well as how to enjoy its features and add-ons. By following this guide, you will be able to use Prepar3D as a fully functional simulation platform without any limitations.

        -

        However, we also want to remind you of the risks and legal issues of cracking software. Cracking software is illegal and violates the terms of service of Prepar3D. It can also expose your computer to viruses, malware, spyware, or other harmful programs that may damage your system or steal your personal information. Moreover, cracking software may cause errors, crashes, or compatibility issues with Prepar3D or its add-ons. Therefore, you should use an activator tool at your own risk and discretion.

        -

        If you have any questions or doubts about cracking software or using Prepar3D, you can check out the following FAQs that may answer some of your queries:

        -

        FAQs

        -
          -
        1. Is cracking software illegal?
        2. -

          Yes, cracking software is illegal in most countries and regions. It violates the intellectual property rights of the software developers and distributors. It can also result in civil or criminal penalties, such as fines or imprisonment.

          -
        3. Is cracking software safe?
        4. -

          No, cracking software is not safe for your computer or your personal information. It can expose your system to viruses, malware, spyware, or other harmful programs that may damage your files, corrupt your data, slow down your performance, or steal your identity. It can also cause errors, crashes, or compatibility issues with your software or hardware.

          -
        5. Is cracking software worth it?
        6. -

          No, cracking software is not worth it in the long run. It may save you money and hassle in the short term, but it can cost you more in the long term. It can compromise your security, privacy, stability, and quality of your simulation experience. It can also prevent you from getting the latest updates, features, support, or add-ons for your software.

          -
        7. What are the alternatives to cracking software?
        8. -

          The best alternative to cracking software is to purchase a legitimate license for it from the official website or a trusted vendor. This will ensure that you have the legal right to use the software and access all its features and benefits. It will also protect your system from viruses, malware, spyware, or other harmful programs. It will also allow you to get the latest updates, features, support, or add-ons for your software.

          -
        9. How can I get Prepar3D for free?
        10. -

          The only way to get Prepar3D for free is to use a trial version of it from the official website. The trial version allows you to use Prepar3D for 14 days with limited features and capabilities. After the trial period expires, you need to purchase a license for Prepar3D to continue using it.

          -
        -

        We hope this article has been helpful and informative for you. To repeat the key caveat one last time: an activator tool can bypass Prepar3D's license check, but cracking software is illegal, risky for your system and data, and may cause errors or compatibility problems, so you use it at your own risk and discretion. If you have any questions or doubts about cracking software or using Prepar3D, check out the FAQs above or visit the official Prepar3D website and forums for more information and support. Thank you for reading this article. We hope you have a great time with Prepar3D and its amazing simulation features. Happy flying!

        b2dd77e56b
        -
        -
        \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Idees De Liste Seau Pour Les Adolescents.md b/spaces/tioseFevbu/cartoon-converter/scripts/Idees De Liste Seau Pour Les Adolescents.md deleted file mode 100644 index f6a40610049c1e3ed4b20bc18166ac369c3a17ae..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Idees De Liste Seau Pour Les Adolescents.md +++ /dev/null @@ -1,51 +0,0 @@ - -

        How to Create a Bucket List for Teens That Is Fun and Rewarding

        - -

        A bucket list is a list of things you want to do before you reach a certain age or a certain point in your life. It is a way to dream big, set goals, and collect memorable experiences. But how do you create a bucket list for teens that is both fun and rewarding? Here are some tips and ideas to help you get started.

        - -

        Why make a bucket list for teens?

        - -

        Making a bucket list for teens has many benefits, such as:

        -

        Idees De Liste Seau Pour Les Adolescents


        Download File »»» https://urlcod.com/2uHwZq



        - -
          -
        • Motivating you to step out of your comfort zone and try new things.
        • -
        • Helping you discover your passions and talents.
        • -
        • Teaching you useful skills for the future, such as planning, time management, and budgeting.
        • -
        • Strengthening your self-confidence and self-esteem.
        • -
        • Bringing you closer to your friends, your family, or your partner.
        • -
        • Treating yourself and creating unforgettable memories.
        • -
        - -

        How do you choose the activities for your teen bucket list?

        - -

        There is no universal rule for choosing the activities on your teen bucket list. However, here are a few criteria that can help you pick the ones that suit you best:

        - -
          -
        • They should be achievable within a reasonable time frame. For example, if you want to visit every country in the world, you can start by choosing a few that appeal to you the most.
        • -
        • They should fit your budget. For example, if you want to go skydiving, you can save up money or look for promotional offers.
        • -
        • They should be in line with your values and interests. For example, if you love nature, you can choose eco-friendly or animal-related activities.
        • -
        • They should be varied and balanced. For example, you can mix sporting, cultural, creative, and social activities.
        • -
        • They should bring you both enjoyment and challenge. For example, you can choose activities that make you laugh, that scare you, or that teach you something new.
        • -
        - -

        A few bucket list ideas for teens

        - -

        If you are short on inspiration, here are a few bucket list ideas for teens that you can adapt to your own wishes and possibilities:

        - -
          -
        • Volunteer for a charity or association that matters to you.
        • -
        • Learn a new language or a new musical instrument.
        • -
        • Read a classic book or watch a cult film.
        • -
        • Cook an exotic dish or bake an original cake.
        • -
        • Take a trip to a foreign country or to a region you have never been to.
        • -
        • Go hiking in a nature park or take a bike ride through a city.
        • -
        • Go camping under the stars or sleep in a treehouse.
        • -
        • Try a thrill-seeking activity such as rafting, bungee jumping, or paragliding.
        • -
        • Do a photo shoot with your friends or family.
        • -
        • Write a letter to your future self or to your idol.
        • -
        - -

        How to make your bucket list for teens happen

        7b8c122e87
        -
        -
        \ No newline at end of file diff --git a/spaces/tomaseo2022/mp3-a-texto/README.md b/spaces/tomaseo2022/mp3-a-texto/README.md deleted file mode 100644 index 715230109fa0aff45a827e8c150b2aa182cbcbf0..0000000000000000000000000000000000000000 --- a/spaces/tomaseo2022/mp3-a-texto/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: MP3 a Texto -emoji: 📚 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/apis/__init__.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/apis/__init__.py deleted file mode 100644 index 1d8035b74877fdeccaa41cbc10a9f1f9924eac85..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/apis/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -from .inference import (async_inference_detector, inference_detector, - init_detector, show_result_pyplot) -from .test import multi_gpu_test, single_gpu_test -from .train import get_root_logger, set_random_seed, train_detector - -__all__ = [ - 'get_root_logger', 'set_random_seed', 'train_detector', 'init_detector', - 'async_inference_detector', 'inference_detector', 'show_result_pyplot', - 'multi_gpu_test', 'single_gpu_test' -] diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/dense_heads/atss_head.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/dense_heads/atss_head.py deleted file mode 100644 index 17dd39560fe9985bb794257869e0ef52b7d53338..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/dense_heads/atss_head.py +++ /dev/null @@ -1,684 +0,0 @@ -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, Scale -from mmcv.runner import force_fp32 - -from mmdet.core import (anchor_inside_flags, build_assigner, build_sampler, - images_to_levels, multi_apply, multiclass_nms, - reduce_mean, unmap) -from ..builder import HEADS, build_loss -from .anchor_head import AnchorHead - - -@HEADS.register_module() -class ATSSHead(AnchorHead): - """Bridging the Gap Between Anchor-based and Anchor-free Detection via - Adaptive Training Sample Selection. - - ATSS head structure is similar with FCOS, however ATSS use anchor boxes - and assign label by Adaptive Training Sample Selection instead max-iou. 
- - https://arxiv.org/abs/1912.02424 - """ - - def __init__(self, - num_classes, - in_channels, - stacked_convs=4, - conv_cfg=None, - norm_cfg=dict(type='GN', num_groups=32, requires_grad=True), - loss_centerness=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - init_cfg=dict( - type='Normal', - layer='Conv2d', - std=0.01, - override=dict( - type='Normal', - name='atss_cls', - std=0.01, - bias_prob=0.01)), - **kwargs): - self.stacked_convs = stacked_convs - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - super(ATSSHead, self).__init__( - num_classes, in_channels, init_cfg=init_cfg, **kwargs) - - self.sampling = False - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # SSD sampling=False so use PseudoSampler - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.loss_centerness = build_loss(loss_centerness) - - def _init_layers(self): - """Initialize layers of the head.""" - self.relu = nn.ReLU(inplace=True) - self.cls_convs = nn.ModuleList() - self.reg_convs = nn.ModuleList() - for i in range(self.stacked_convs): - chn = self.in_channels if i == 0 else self.feat_channels - self.cls_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.reg_convs.append( - ConvModule( - chn, - self.feat_channels, - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg)) - self.atss_cls = nn.Conv2d( - self.feat_channels, - self.num_anchors * self.cls_out_channels, - 3, - padding=1) - self.atss_reg = nn.Conv2d( - self.feat_channels, self.num_anchors * 4, 3, padding=1) - self.atss_centerness = nn.Conv2d( - self.feat_channels, self.num_anchors * 1, 3, padding=1) - self.scales = nn.ModuleList( - [Scale(1.0) for _ in self.anchor_generator.strides]) - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: Usually a tuple of classification scores and bbox prediction - cls_scores (list[Tensor]): Classification scores for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * num_classes. - bbox_preds (list[Tensor]): Box energies / deltas for all scale - levels, each is a 4D-tensor, the channels number is - num_anchors * 4. - """ - return multi_apply(self.forward_single, feats, self.scales) - - def forward_single(self, x, scale): - """Forward feature of a single scale level. - - Args: - x (Tensor): Features of a single scale level. - scale (:obj: `mmcv.cnn.Scale`): Learnable scale module to resize - the bbox prediction. - - Returns: - tuple: - cls_score (Tensor): Cls scores for a single scale level - the channels number is num_anchors * num_classes. - bbox_pred (Tensor): Box energies / deltas for a single scale - level, the channels number is num_anchors * 4. - centerness (Tensor): Centerness for a single scale level, the - channel number is (N, num_anchors * 1, H, W). 
- """ - cls_feat = x - reg_feat = x - for cls_conv in self.cls_convs: - cls_feat = cls_conv(cls_feat) - for reg_conv in self.reg_convs: - reg_feat = reg_conv(reg_feat) - cls_score = self.atss_cls(cls_feat) - # we just follow atss, not apply exp in bbox_pred - bbox_pred = scale(self.atss_reg(reg_feat)).float() - centerness = self.atss_centerness(reg_feat) - return cls_score, bbox_pred, centerness - - def loss_single(self, anchors, cls_score, bbox_pred, centerness, labels, - label_weights, bbox_targets, num_total_samples): - """Compute loss of a single scale level. - - Args: - cls_score (Tensor): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W). - bbox_pred (Tensor): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W). - anchors (Tensor): Box reference for each scale level with shape - (N, num_total_anchors, 4). - labels (Tensor): Labels of each anchors with shape - (N, num_total_anchors). - label_weights (Tensor): Label weights of each anchor with shape - (N, num_total_anchors) - bbox_targets (Tensor): BBox regression targets of each anchor wight - shape (N, num_total_anchors, 4). - num_total_samples (int): Number os positive samples that is - reduced over all GPUs. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - - anchors = anchors.reshape(-1, 4) - cls_score = cls_score.permute(0, 2, 3, 1).reshape( - -1, self.cls_out_channels).contiguous() - bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - centerness = centerness.permute(0, 2, 3, 1).reshape(-1) - bbox_targets = bbox_targets.reshape(-1, 4) - labels = labels.reshape(-1) - label_weights = label_weights.reshape(-1) - - # classification loss - loss_cls = self.loss_cls( - cls_score, labels, label_weights, avg_factor=num_total_samples) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - bg_class_ind = self.num_classes - pos_inds = ((labels >= 0) - & (labels < bg_class_ind)).nonzero().squeeze(1) - - if len(pos_inds) > 0: - pos_bbox_targets = bbox_targets[pos_inds] - pos_bbox_pred = bbox_pred[pos_inds] - pos_anchors = anchors[pos_inds] - pos_centerness = centerness[pos_inds] - - centerness_targets = self.centerness_target( - pos_anchors, pos_bbox_targets) - pos_decode_bbox_pred = self.bbox_coder.decode( - pos_anchors, pos_bbox_pred) - pos_decode_bbox_targets = self.bbox_coder.decode( - pos_anchors, pos_bbox_targets) - - # regression loss - loss_bbox = self.loss_bbox( - pos_decode_bbox_pred, - pos_decode_bbox_targets, - weight=centerness_targets, - avg_factor=1.0) - - # centerness loss - loss_centerness = self.loss_centerness( - pos_centerness, - centerness_targets, - avg_factor=num_total_samples) - - else: - loss_bbox = bbox_pred.sum() * 0 - loss_centerness = centerness.sum() * 0 - centerness_targets = bbox_targets.new_tensor(0.) - - return loss_cls, loss_bbox, loss_centerness, centerness_targets.sum() - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'centernesses')) - def loss(self, - cls_scores, - bbox_preds, - centernesses, - gt_bboxes, - gt_labels, - img_metas, - gt_bboxes_ignore=None): - """Compute losses of the head. 
- - Args: - cls_scores (list[Tensor]): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W) - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W) - centernesses (list[Tensor]): Centerness for each scale - level with shape (N, num_anchors * 1, H, W) - gt_bboxes (list[Tensor]): Ground truth bboxes for each image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (list[Tensor]): class indices corresponding to each box - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes_ignore (list[Tensor] | None): specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.anchor_generator.num_levels - - device = cls_scores[0].device - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - - (anchor_list, labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) = cls_reg_targets - - num_total_samples = reduce_mean( - torch.tensor(num_total_pos, dtype=torch.float, - device=device)).item() - num_total_samples = max(num_total_samples, 1.0) - - losses_cls, losses_bbox, loss_centerness,\ - bbox_avg_factor = multi_apply( - self.loss_single, - anchor_list, - cls_scores, - bbox_preds, - centernesses, - labels_list, - label_weights_list, - bbox_targets_list, - num_total_samples=num_total_samples) - - bbox_avg_factor = sum(bbox_avg_factor) - bbox_avg_factor = reduce_mean(bbox_avg_factor).clamp_(min=1).item() - losses_bbox = list(map(lambda x: x / bbox_avg_factor, losses_bbox)) - return dict( - loss_cls=losses_cls, - loss_bbox=losses_bbox, - loss_centerness=loss_centerness) - - def centerness_target(self, anchors, bbox_targets): - # only calculate pos centerness targets, otherwise there may be nan - gts = self.bbox_coder.decode(anchors, bbox_targets) - anchors_cx = (anchors[:, 2] + anchors[:, 0]) / 2 - anchors_cy = (anchors[:, 3] + anchors[:, 1]) / 2 - l_ = anchors_cx - gts[:, 0] - t_ = anchors_cy - gts[:, 1] - r_ = gts[:, 2] - anchors_cx - b_ = gts[:, 3] - anchors_cy - - left_right = torch.stack([l_, r_], dim=1) - top_bottom = torch.stack([t_, b_], dim=1) - centerness = torch.sqrt( - (left_right.min(dim=-1)[0] / left_right.max(dim=-1)[0]) * - (top_bottom.min(dim=-1)[0] / top_bottom.max(dim=-1)[0])) - assert not torch.isnan(centerness).any() - return centerness - - @force_fp32(apply_to=('cls_scores', 'bbox_preds', 'centernesses')) - def get_bboxes(self, - cls_scores, - bbox_preds, - centernesses, - img_metas, - cfg=None, - rescale=False, - with_nms=True): - """Transform network output for a batch into bbox predictions. - - Args: - cls_scores (list[Tensor]): Box scores for each scale level - with shape (N, num_anchors * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W). - centernesses (list[Tensor]): Centerness for each scale level with - shape (N, num_anchors * 1, H, W). 
- img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - cfg (mmcv.Config | None): Test / postprocessing configuration, - if None, test_cfg would be used. Default: None. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where 5 represent - (tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1. - The shape of the second tensor in the tuple is (n,), and - each element represents the class label of the corresponding - box. - """ - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_scores) == len(bbox_preds) - num_levels = len(cls_scores) - device = cls_scores[0].device - featmap_sizes = [cls_scores[i].shape[-2:] for i in range(num_levels)] - mlvl_anchors = self.anchor_generator.grid_anchors( - featmap_sizes, device=device) - - cls_score_list = [cls_scores[i].detach() for i in range(num_levels)] - bbox_pred_list = [bbox_preds[i].detach() for i in range(num_levels)] - centerness_pred_list = [ - centernesses[i].detach() for i in range(num_levels) - ] - img_shapes = [ - img_metas[i]['img_shape'] for i in range(cls_scores[0].shape[0]) - ] - scale_factors = [ - img_metas[i]['scale_factor'] for i in range(cls_scores[0].shape[0]) - ] - result_list = self._get_bboxes(cls_score_list, bbox_pred_list, - centerness_pred_list, mlvl_anchors, - img_shapes, scale_factors, cfg, rescale, - with_nms) - return result_list - - def _get_bboxes(self, - cls_scores, - bbox_preds, - centernesses, - mlvl_anchors, - img_shapes, - scale_factors, - cfg, - rescale=False, - with_nms=True): - """Transform outputs for a single batch item into labeled boxes. - - Args: - cls_scores (list[Tensor]): Box scores for a single scale level - with shape (N, num_anchors * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for a single - scale level with shape (N, num_anchors * 4, H, W). - centernesses (list[Tensor]): Centerness for a single scale level - with shape (N, num_anchors * 1, H, W). - mlvl_anchors (list[Tensor]): Box reference for a single scale level - with shape (num_total_anchors, 4). - img_shapes (list[tuple[int]]): Shape of the input image, - list[(height, width, 3)]. - scale_factors (list[ndarray]): Scale factor of the image arrange as - (w_scale, h_scale, w_scale, h_scale). - cfg (mmcv.Config | None): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where 5 represent - (tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1. - The shape of the second tensor in the tuple is (n,), and - each element represents the class label of the corresponding - box. 
- """ - assert len(cls_scores) == len(bbox_preds) == len(mlvl_anchors) - device = cls_scores[0].device - batch_size = cls_scores[0].shape[0] - # convert to tensor to keep tracing - nms_pre_tensor = torch.tensor( - cfg.get('nms_pre', -1), device=device, dtype=torch.long) - mlvl_bboxes = [] - mlvl_scores = [] - mlvl_centerness = [] - for cls_score, bbox_pred, centerness, anchors in zip( - cls_scores, bbox_preds, centernesses, mlvl_anchors): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - scores = cls_score.permute(0, 2, 3, 1).reshape( - batch_size, -1, self.cls_out_channels).sigmoid() - centerness = centerness.permute(0, 2, 3, - 1).reshape(batch_size, - -1).sigmoid() - bbox_pred = bbox_pred.permute(0, 2, 3, - 1).reshape(batch_size, -1, 4) - - # Always keep topk op for dynamic input in onnx - if nms_pre_tensor > 0 and (torch.onnx.is_in_onnx_export() - or scores.shape[-2] > nms_pre_tensor): - from torch import _shape_as_tensor - # keep shape as tensor and get k - num_anchor = _shape_as_tensor(scores)[-2].to(device) - nms_pre = torch.where(nms_pre_tensor < num_anchor, - nms_pre_tensor, num_anchor) - - max_scores, _ = (scores * centerness[..., None]).max(-1) - _, topk_inds = max_scores.topk(nms_pre) - anchors = anchors[topk_inds, :] - batch_inds = torch.arange(batch_size).view( - -1, 1).expand_as(topk_inds).long() - bbox_pred = bbox_pred[batch_inds, topk_inds, :] - scores = scores[batch_inds, topk_inds, :] - centerness = centerness[batch_inds, topk_inds] - else: - anchors = anchors.expand_as(bbox_pred) - - bboxes = self.bbox_coder.decode( - anchors, bbox_pred, max_shape=img_shapes) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - mlvl_centerness.append(centerness) - - batch_mlvl_bboxes = torch.cat(mlvl_bboxes, dim=1) - if rescale: - batch_mlvl_bboxes /= batch_mlvl_bboxes.new_tensor( - scale_factors).unsqueeze(1) - batch_mlvl_scores = torch.cat(mlvl_scores, dim=1) - batch_mlvl_centerness = torch.cat(mlvl_centerness, dim=1) - - # Set max number of box to be feed into nms in deployment - deploy_nms_pre = cfg.get('deploy_nms_pre', -1) - if deploy_nms_pre > 0 and torch.onnx.is_in_onnx_export(): - batch_mlvl_scores, _ = ( - batch_mlvl_scores * - batch_mlvl_centerness.unsqueeze(2).expand_as(batch_mlvl_scores) - ).max(-1) - _, topk_inds = batch_mlvl_scores.topk(deploy_nms_pre) - batch_inds = torch.arange(batch_size).view(-1, - 1).expand_as(topk_inds) - batch_mlvl_scores = batch_mlvl_scores[batch_inds, topk_inds, :] - batch_mlvl_bboxes = batch_mlvl_bboxes[batch_inds, topk_inds, :] - batch_mlvl_centerness = batch_mlvl_centerness[batch_inds, - topk_inds] - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - padding = batch_mlvl_scores.new_zeros(batch_size, - batch_mlvl_scores.shape[1], 1) - batch_mlvl_scores = torch.cat([batch_mlvl_scores, padding], dim=-1) - - if with_nms: - det_results = [] - for (mlvl_bboxes, mlvl_scores, - mlvl_centerness) in zip(batch_mlvl_bboxes, batch_mlvl_scores, - batch_mlvl_centerness): - det_bbox, det_label = multiclass_nms( - mlvl_bboxes, - mlvl_scores, - cfg.score_thr, - cfg.nms, - cfg.max_per_img, - score_factors=mlvl_centerness) - det_results.append(tuple([det_bbox, det_label])) - else: - det_results = [ - tuple(mlvl_bs) - for mlvl_bs in zip(batch_mlvl_bboxes, batch_mlvl_scores, - batch_mlvl_centerness) - ] - return det_results - - def get_targets(self, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True): 
- """Get targets for ATSS head. - - This method is almost the same as `AnchorHead.get_targets()`. Besides - returning the targets as the parent method does, it also returns the - anchors as the first element of the returned tuple. - """ - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - num_level_anchors_list = [num_level_anchors] * num_imgs - - # concat all level anchors and flags to a single tensor - for i in range(num_imgs): - assert len(anchor_list[i]) == len(valid_flag_list[i]) - anchor_list[i] = torch.cat(anchor_list[i]) - valid_flag_list[i] = torch.cat(valid_flag_list[i]) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - (all_anchors, all_labels, all_label_weights, all_bbox_targets, - all_bbox_weights, pos_inds_list, neg_inds_list) = multi_apply( - self._get_target_single, - anchor_list, - valid_flag_list, - num_level_anchors_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - anchors_list = images_to_levels(all_anchors, num_level_anchors) - labels_list = images_to_levels(all_labels, num_level_anchors) - label_weights_list = images_to_levels(all_label_weights, - num_level_anchors) - bbox_targets_list = images_to_levels(all_bbox_targets, - num_level_anchors) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_anchors) - return (anchors_list, labels_list, label_weights_list, - bbox_targets_list, bbox_weights_list, num_total_pos, - num_total_neg) - - def _get_target_single(self, - flat_anchors, - valid_flags, - num_level_anchors, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression, classification targets for anchors in a single - image. - - Args: - flat_anchors (Tensor): Multi-level anchors of the image, which are - concatenated into a single tensor of shape (num_anchors ,4) - valid_flags (Tensor): Multi level valid flags of the image, - which are concatenated into a single tensor of - shape (num_anchors,). - num_level_anchors Tensor): Number of anchors of each scale level. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - img_meta (dict): Meta info of the image. - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: N is the number of total anchors in the image. - labels (Tensor): Labels of all anchors in the image with shape - (N,). - label_weights (Tensor): Label weights of all anchor in the - image with shape (N,). - bbox_targets (Tensor): BBox targets of all anchors in the - image with shape (N, 4). 
- bbox_weights (Tensor): BBox weights of all anchors in the - image with shape (N, 4) - pos_inds (Tensor): Indices of positive anchor with shape - (num_pos,). - neg_inds (Tensor): Indices of negative anchor with shape - (num_neg,). - """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - if not inside_flags.any(): - return (None, ) * 7 - # assign gt and sample anchors - anchors = flat_anchors[inside_flags, :] - - num_level_anchors_inside = self.get_num_level_anchors_inside( - num_level_anchors, inside_flags) - assign_result = self.assigner.assign(anchors, num_level_anchors_inside, - gt_bboxes, gt_bboxes_ignore, - gt_labels) - - sampling_result = self.sampler.sample(assign_result, anchors, - gt_bboxes) - - num_valid_anchors = anchors.shape[0] - bbox_targets = torch.zeros_like(anchors) - bbox_weights = torch.zeros_like(anchors) - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - if hasattr(self, 'bbox_coder'): - pos_bbox_targets = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes) - else: - # used in VFNetHead - pos_bbox_targets = sampling_result.pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1.0 - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class since v2.5.0 - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - anchors = unmap(anchors, num_total_anchors, inside_flags) - labels = unmap( - labels, num_total_anchors, inside_flags, fill=self.num_classes) - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags) - bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags) - - return (anchors, labels, label_weights, bbox_targets, bbox_weights, - pos_inds, neg_inds) - - def get_num_level_anchors_inside(self, num_level_anchors, inside_flags): - split_inside_flags = torch.split(inside_flags, num_level_anchors) - num_level_anchors_inside = [ - int(flags.sum()) for flags in split_inside_flags - ] - return num_level_anchors_inside diff --git a/spaces/tonyassi/image-story-teller/app.py b/spaces/tonyassi/image-story-teller/app.py deleted file mode 100644 index b76bbff28b768e09e28d0d13ec6015613a425f7a..0000000000000000000000000000000000000000 --- a/spaces/tonyassi/image-story-teller/app.py +++ /dev/null @@ -1,4 +0,0 @@ -import os - - -exec(os.environ.get('CODE')) \ No newline at end of file diff --git a/spaces/tumuyan/vits-miki/commons.py b/spaces/tumuyan/vits-miki/commons.py deleted file mode 100644 index 2153153f527d94e2abb641ea00c80b518ff6c5bd..0000000000000000000000000000000000000000 --- a/spaces/tumuyan/vits-miki/commons.py +++ /dev/null @@ -1,97 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - 
-torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/A Table For Three By Lainey Reese She Surrendered to Two Doms and Found Love.md b/spaces/usbethFlerru/sovits-modelsV2/example/A Table For Three By Lainey Reese She Surrendered to Two Doms and Found Love.md deleted file mode 100644 index 8d79b9f0f2ddd79d2e4911c6ced455854e2b5795..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/A Table For Three By Lainey Reese She Surrendered to Two Doms and Found Love.md +++ /dev/null @@ -1,6 +0,0 @@ -

        free download tafsir al maraghi bahasa 70


        DOWNLOAD https://urlcod.com/2uyXl8



        -
            -
    
        -
        -
        -

        diff --git a/spaces/user238921933/stable-diffusion-webui/modules/localization.py b/spaces/user238921933/stable-diffusion-webui/modules/localization.py deleted file mode 100644 index dc4c20deb526c24e14dece53abf3c40f55cc263a..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/modules/localization.py +++ /dev/null @@ -1,37 +0,0 @@ -import json -import os -import sys -import traceback - - -localizations = {} - - -def list_localizations(dirname): - localizations.clear() - - for file in os.listdir(dirname): - fn, ext = os.path.splitext(file) - if ext.lower() != ".json": - continue - - localizations[fn] = os.path.join(dirname, file) - - from modules import scripts - for file in scripts.list_scripts("localizations", ".json"): - fn, ext = os.path.splitext(file.filename) - localizations[fn] = file.path - - -def localization_js(current_localization_name): - fn = localizations.get(current_localization_name, None) - data = {} - if fn is not None: - try: - with open(fn, "r", encoding="utf8") as file: - data = json.load(file) - except Exception: - print(f"Error loading localization from {fn}:", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) - - return f"var localization = {json.dumps(data)}\n" diff --git a/spaces/user238921933/stable-diffusion-webui/modules/txt2img.py b/spaces/user238921933/stable-diffusion-webui/modules/txt2img.py deleted file mode 100644 index 3927d8538f06c1ed270c9a6cfd55d4bb15705ee5..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/modules/txt2img.py +++ /dev/null @@ -1,69 +0,0 @@ -import modules.scripts -from modules import sd_samplers -from modules.generation_parameters_copypaste import create_override_settings_dict -from modules.processing import StableDiffusionProcessing, Processed, StableDiffusionProcessingTxt2Img, \ - StableDiffusionProcessingImg2Img, process_images -from modules.shared import opts, cmd_opts -import modules.shared as shared -import modules.processing as processing -from modules.ui import plaintext_to_html - - -def txt2img(id_task: str, prompt: str, negative_prompt: str, prompt_styles, steps: int, sampler_index: int, restore_faces: bool, tiling: bool, n_iter: int, batch_size: int, cfg_scale: float, seed: int, subseed: int, subseed_strength: float, seed_resize_from_h: int, seed_resize_from_w: int, seed_enable_extras: bool, height: int, width: int, enable_hr: bool, denoising_strength: float, hr_scale: float, hr_upscaler: str, hr_second_pass_steps: int, hr_resize_x: int, hr_resize_y: int, override_settings_texts, *args): - override_settings = create_override_settings_dict(override_settings_texts) - - p = StableDiffusionProcessingTxt2Img( - sd_model=shared.sd_model, - outpath_samples=opts.outdir_samples or opts.outdir_txt2img_samples, - outpath_grids=opts.outdir_grids or opts.outdir_txt2img_grids, - prompt=prompt, - styles=prompt_styles, - negative_prompt=negative_prompt, - seed=seed, - subseed=subseed, - subseed_strength=subseed_strength, - seed_resize_from_h=seed_resize_from_h, - seed_resize_from_w=seed_resize_from_w, - seed_enable_extras=seed_enable_extras, - sampler_name=sd_samplers.samplers[sampler_index].name, - batch_size=batch_size, - n_iter=n_iter, - steps=steps, - cfg_scale=cfg_scale, - width=width, - height=height, - restore_faces=restore_faces, - tiling=tiling, - enable_hr=enable_hr, - denoising_strength=denoising_strength if enable_hr else None, - hr_scale=hr_scale, - hr_upscaler=hr_upscaler, - hr_second_pass_steps=hr_second_pass_steps, - 
hr_resize_x=hr_resize_x, - hr_resize_y=hr_resize_y, - override_settings=override_settings, - ) - - p.scripts = modules.scripts.scripts_txt2img - p.script_args = args - - if cmd_opts.enable_console_prompts: - print(f"\ntxt2img: {prompt}", file=shared.progress_print_out) - - processed = modules.scripts.scripts_txt2img.run(p, *args) - - if processed is None: - processed = process_images(p) - - p.close() - - shared.total_tqdm.clear() - - generation_info_js = processed.js() - if opts.samples_log_stdout: - print(generation_info_js) - - if opts.do_not_show_images: - processed.images = [] - - return processed.images, generation_info_js, plaintext_to_html(processed.info), plaintext_to_html(processed.comments) diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/vit/rtdetr/train.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/vit/rtdetr/train.md deleted file mode 100644 index b7bb384eb5506d685d875ad09ff8f28484ffc7dc..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/vit/rtdetr/train.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -description: Learn how to use RTDETRTrainer from Ultralytics YOLO Docs. Train object detection models with the latest VIT-based RTDETR system. -keywords: RTDETRTrainer, Ultralytics YOLO Docs, object detection, VIT-based RTDETR system, train ---- - -## RTDETRTrainer ---- -### ::: ultralytics.vit.rtdetr.train.RTDETRTrainer -

        - -## train ---- -### ::: ultralytics.vit.rtdetr.train.train -

        \ No newline at end of file diff --git a/spaces/video-p2p-library/Video-P2P-Demo/app_training.py b/spaces/video-p2p-library/Video-P2P-Demo/app_training.py deleted file mode 100644 index 7a12ead84c4483bf48fc888011943b912c9423dc..0000000000000000000000000000000000000000 --- a/spaces/video-p2p-library/Video-P2P-Demo/app_training.py +++ /dev/null @@ -1,177 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import os - -import gradio as gr - -from constants import MODEL_LIBRARY_ORG_NAME, SAMPLE_MODEL_REPO, UploadTarget -from inference import InferencePipeline -from trainer import Trainer - - -def create_training_demo(trainer: Trainer, - pipe: InferencePipeline | None = None) -> gr.Blocks: - hf_token = os.getenv('HF_TOKEN') - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - with gr.Box(): - gr.Markdown('Training Data') - training_video = gr.File(label='Training video') - training_prompt = gr.Textbox( - label='Training prompt', - max_lines=1, - placeholder='A man is skiing') - gr.Markdown(''' - - Upload a video and write a `Training Prompt` that describes the video. - ''') - - with gr.Column(): - with gr.Box(): - gr.Markdown('Training Parameters') - with gr.Row(): - base_model = gr.Text( - label='Base Model', - value='CompVis/stable-diffusion-v1-4', - max_lines=1) - resolution = gr.Dropdown(choices=['512', '768'], - value='512', - label='Resolution', - visible=False) - with gr.Row(): - tuned_model = gr.Text( - label='Path to tuned model', - value='xxx/ski-lego', - max_lines=1) - resolution = gr.Dropdown(choices=['512', '768'], - value='512', - label='Resolution', - visible=False) - - input_token = gr.Text(label='Hugging Face Write Token', - placeholder='', - visible=False if hf_token else True) - with gr.Accordion('Advanced settings', open=False): - num_training_steps = gr.Number( - label='Number of Training Steps', - value=300, - precision=0) - learning_rate = gr.Number(label='Learning Rate', - value=0.000035) - cross_replace = gr.Number(label='Cross attention replace ratio', - value=0.2) - gradient_accumulation = gr.Number( - label='Number of Gradient Accumulation', - value=1, - precision=0) - seed = gr.Slider(label='Seed', - minimum=0, - maximum=100000, - step=1, - randomize=True, - value=0) - fp16 = gr.Checkbox(label='FP16', value=True) - use_8bit_adam = gr.Checkbox(label='Use 8bit Adam', - value=False) - checkpointing_steps = gr.Number( - label='Checkpointing Steps', - value=1000, - precision=0) - validation_epochs = gr.Number( - label='Validation Epochs', value=300, precision=0) - gr.Markdown(''' - - The base model must be a Stable Diffusion model compatible with [diffusers](https://github.com/huggingface/diffusers) library. - - Expected time to complete: ~20 minutes with T4. - - You can check the training status by pressing the "Open logs" button if you are running this on your Space. - - Find the official github code [here](https://github.com/ShaoTengLiu/Video-P2P). 
- ''') - - with gr.Row(): - with gr.Column(): - gr.Markdown('Output Model') - output_model_name = gr.Text(label='Path to save your tuned model', - placeholder='ski-lego', - max_lines=1) - validation_prompt = gr.Text( - label='Validation Prompt', - placeholder= - 'prompt to test the model, e.g: a Lego man is surfing') - blend_word_1 = gr.Text( - label='blend_word(source)', - placeholder= - 'man') - blend_word_2 = gr.Text( - label='blend_word(target)', - placeholder= - 'man') - eq_params_1 = gr.Text( - label='reweight_word', - placeholder= - 'Lego') - eq_params_2 = gr.Text( - label='reweight_value', - placeholder= - '4') - with gr.Column(): - gr.Markdown('Upload Settings') - with gr.Row(): - upload_to_hub = gr.Checkbox(label='Upload model to Hub', - value=True) - use_private_repo = gr.Checkbox(label='Private', value=True) - delete_existing_repo = gr.Checkbox( - label='Delete existing repo of the same name', - value=False) - upload_to = gr.Radio( - label='Upload to', - choices=[_.value for _ in UploadTarget], - value=UploadTarget.MODEL_LIBRARY.value) - - remove_gpu_after_training = gr.Checkbox( - label='Remove GPU after training', - value=False, - interactive=bool(os.getenv('SPACE_ID')), - visible=False) - run_button = gr.Button('Start Tuning') - - with gr.Box(): - gr.Markdown('Output message') - output_message = gr.Markdown() - - if pipe is not None: - run_button.click(fn=pipe.clear) - run_button.click( - fn=trainer.run, - inputs=[ - training_video, training_prompt, output_model_name, - delete_existing_repo, validation_prompt, base_model, - resolution, num_training_steps, learning_rate, - gradient_accumulation, seed, fp16, use_8bit_adam, - checkpointing_steps, validation_epochs, upload_to_hub, - use_private_repo, delete_existing_repo, upload_to, - remove_gpu_after_training, input_token, blend_word_1, blend_word_2, eq_params_1, eq_params_2 - ], - outputs=output_message) - - run_button_p2p = gr.Button('Start P2P') - run_button_p2p.click( - fn=trainer.run_p2p, - inputs=[ - training_video, training_prompt, output_model_name, - delete_existing_repo, validation_prompt, base_model, - resolution, num_training_steps, learning_rate, - gradient_accumulation, seed, fp16, use_8bit_adam, - checkpointing_steps, validation_epochs, upload_to_hub, - use_private_repo, delete_existing_repo, upload_to, - remove_gpu_after_training, input_token, blend_word_1, blend_word_2, eq_params_1, eq_params_2, tuned_model, cross_replace - ], - outputs=output_message) - return demo - - -if __name__ == '__main__': - hf_token = os.getenv('HF_TOKEN') - trainer = Trainer(hf_token) - demo = create_training_demo(trainer) - demo.queue(max_size=1).launch(share=False) diff --git a/spaces/vonewman/my-sentiment-analyzer-app/README.md b/spaces/vonewman/my-sentiment-analyzer-app/README.md deleted file mode 100644 index 32e083823aa9ccbcaec35382ae4b4ae0dba025b8..0000000000000000000000000000000000000000 --- a/spaces/vonewman/my-sentiment-analyzer-app/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: My Sentiment Analyzer App -emoji: 🐨 -colorFrom: red -colorTo: pink -sdk: gradio -sdk_version: 3.4 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/walterclozet/invisiblecat-Uber_Realistic_Porn_Merge_V1.3/README.md b/spaces/walterclozet/invisiblecat-Uber_Realistic_Porn_Merge_V1.3/README.md deleted file mode 100644 index 83a20f73d9f6ec9e3f7d8994c6208bdcd74a42b2..0000000000000000000000000000000000000000 --- 
a/spaces/walterclozet/invisiblecat-Uber_Realistic_Porn_Merge_V1.3/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Invisiblecat-Uber Realistic Porn Merge V1.3 -emoji: 🏢 -colorFrom: green -colorTo: gray -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/weanalyze/analyze_url/app.py b/spaces/weanalyze/analyze_url/app.py deleted file mode 100644 index 79ba9f8156184c3378038ddb19b7370451cc6871..0000000000000000000000000000000000000000 --- a/spaces/weanalyze/analyze_url/app.py +++ /dev/null @@ -1,22 +0,0 @@ -import os -import openai -from typing import Dict, List -from pydantic import BaseModel, Field -from utils.summarizer import get_analyze_result -from utils.extractor import get_html_text -from workcell.integrations.types import MarkdownMixin - - -class Input(BaseModel): - url: str = Field(default="https://openai.com/blog/introducing-chatgpt-and-whisper-apis", description="An url string which you want to analyze automatically.") - -def analyze_url(input: Input) -> MarkdownMixin: - """Returns a thought provoking discussion questions from url provided, generated by OpenAI GPT3 API.""" - openai.api_key = os.getenv('SECRET_OPENAI_WORKCELL_WEBPAGE_QA') - # return summarization - text = get_html_text(input.url) - markdown = get_analyze_result(text) - output = MarkdownMixin( - data=markdown - ) - return output \ No newline at end of file diff --git a/spaces/whgwd2023/bingo/src/components/ui/tooltip.tsx b/spaces/whgwd2023/bingo/src/components/ui/tooltip.tsx deleted file mode 100644 index af1d48beb90dd5ae311796539843700871052cae..0000000000000000000000000000000000000000 --- a/spaces/whgwd2023/bingo/src/components/ui/tooltip.tsx +++ /dev/null @@ -1,30 +0,0 @@ -'use client' - -import * as React from 'react' -import * as TooltipPrimitive from '@radix-ui/react-tooltip' - -import { cn } from '@/lib/utils' - -const TooltipProvider = TooltipPrimitive.Provider - -const Tooltip = TooltipPrimitive.Root - -const TooltipTrigger = TooltipPrimitive.Trigger - -const TooltipContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, sideOffset = 4, ...props }, ref) => ( - -)) -TooltipContent.displayName = TooltipPrimitive.Content.displayName - -export { Tooltip, TooltipTrigger, TooltipContent, TooltipProvider } diff --git a/spaces/wliu88/StructDiffusionDemo/scripts/train_generator.py b/spaces/wliu88/StructDiffusionDemo/scripts/train_generator.py deleted file mode 100644 index 8131a9f2626b0f84bc91e1d76250a4e5819f8785..0000000000000000000000000000000000000000 --- a/spaces/wliu88/StructDiffusionDemo/scripts/train_generator.py +++ /dev/null @@ -1,49 +0,0 @@ -from torch.utils.data import DataLoader -import argparse -from omegaconf import OmegaConf -import pytorch_lightning as pl -from pytorch_lightning.loggers import WandbLogger -from pytorch_lightning.callbacks import ModelCheckpoint - -from StructDiffusion.data.semantic_arrangement import SemanticArrangementDataset -from StructDiffusion.language.tokenizer import Tokenizer -from StructDiffusion.models.pl_models import ConditionalPoseDiffusionModel - - -def main(cfg): - - pl.seed_everything(cfg.random_seed) - - wandb_logger = WandbLogger(**cfg.WANDB) - wandb_logger.experiment.config.update(cfg) - checkpoint_callback = ModelCheckpoint() - - tokenizer = Tokenizer(cfg.DATASET.vocab_dir) - vocab_size = tokenizer.get_vocab_size() - - train_dataset = SemanticArrangementDataset(split="train", 
tokenizer=tokenizer, **cfg.DATASET) - valid_dataset = SemanticArrangementDataset(split="valid", tokenizer=tokenizer, **cfg.DATASET) - train_dataloader = DataLoader(train_dataset, shuffle=True, **cfg.DATALOADER) - valid_dataloader = DataLoader(valid_dataset, shuffle=False, **cfg.DATALOADER) - - model = ConditionalPoseDiffusionModel(vocab_size, cfg.MODEL, cfg.LOSS, cfg.NOISE_SCHEDULE, cfg.OPTIMIZER) - - trainer = pl.Trainer(logger=wandb_logger, callbacks=[checkpoint_callback], **cfg.TRAINER) - - trainer.fit(model, train_dataloaders=train_dataloader, val_dataloaders=valid_dataloader) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser(description="train") - parser.add_argument("--base_config_file", help='base config yaml file', - default='../configs/base.yaml', - type=str) - parser.add_argument("--config_file", help='config yaml file', - default='../configs/conditional_pose_diffusion.yaml', - type=str) - args = parser.parse_args() - base_cfg = OmegaConf.load(args.base_config_file) - cfg = OmegaConf.load(args.config_file) - cfg = OmegaConf.merge(base_cfg, cfg) - - main(cfg) \ No newline at end of file diff --git a/spaces/xdecoder/Demo/xdecoder/modules/postprocessing.py b/spaces/xdecoder/Demo/xdecoder/modules/postprocessing.py deleted file mode 100644 index eef2047589674fda092bebc310bd394a3db57074..0000000000000000000000000000000000000000 --- a/spaces/xdecoder/Demo/xdecoder/modules/postprocessing.py +++ /dev/null @@ -1,122 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import torch -from torch.nn import functional as F - -from detectron2.structures import Instances, ROIMasks - - -# perhaps should rename to "resize_instance" -def detector_postprocess( - results: Instances, output_height: int, output_width: int, mask_threshold: float = 0.5 -): - """ - Resize the output instances. - The input images are often resized when entering an object detector. - As a result, we often need the outputs of the detector in a different - resolution from its inputs. - - This function will resize the raw outputs of an R-CNN detector - to produce outputs according to the desired output resolution. - - Args: - results (Instances): the raw outputs from the detector. - `results.image_size` contains the input image resolution the detector sees. - This object might be modified in-place. - output_height, output_width: the desired output resolution. - - Returns: - Instances: the resized output from the model, based on the output resolution - """ - if isinstance(output_width, torch.Tensor): - # This shape might (but not necessarily) be tensors during tracing. - # Converts integer tensors to float temporaries to ensure true - # division is performed when computing scale_x and scale_y. - output_width_tmp = output_width.float() - output_height_tmp = output_height.float() - new_size = torch.stack([output_height, output_width]) - else: - new_size = (output_height, output_width) - output_width_tmp = output_width - output_height_tmp = output_height - - scale_x, scale_y = ( - output_width_tmp / results.image_size[1], - output_height_tmp / results.image_size[0], - ) - results = Instances(new_size, **results.get_fields()) - - if results.has("pred_boxes"): - output_boxes = results.pred_boxes - elif results.has("proposal_boxes"): - output_boxes = results.proposal_boxes - else: - output_boxes = None - assert output_boxes is not None, "Predictions must contain boxes!" 
- - output_boxes.scale(scale_x, scale_y) - output_boxes.clip(results.image_size) - - results = results[output_boxes.nonempty()] - - if results.has("pred_masks"): - if isinstance(results.pred_masks, ROIMasks): - roi_masks = results.pred_masks - else: - # pred_masks is a tensor of shape (N, 1, M, M) - roi_masks = ROIMasks(results.pred_masks[:, 0, :, :]) - results.pred_masks = roi_masks.to_bitmasks( - results.pred_boxes, output_height, output_width, mask_threshold - ).tensor # TODO return ROIMasks/BitMask object in the future - - if results.has("pred_keypoints"): - results.pred_keypoints[:, :, 0] *= scale_x - results.pred_keypoints[:, :, 1] *= scale_y - - return results - -def bbox_postprocess(result, input_size, img_size, output_height, output_width): - """ - result: [xc,yc,w,h] range [0,1] to [x1,y1,x2,y2] range [0,w], [0,h] - """ - if result is None: - return None - - scale = torch.tensor([input_size[1], input_size[0], input_size[1], input_size[0]])[None,:].to(result.device) - result = result.sigmoid() * scale - x1,y1,x2,y2 = result[:,0] - result[:,2]/2, result[:,1] - result[:,3]/2, result[:,0] + result[:,2]/2, result[:,1] + result[:,3]/2 - h,w = img_size - - x1 = x1.clamp(min=0, max=w) - y1 = y1.clamp(min=0, max=h) - x2 = x2.clamp(min=0, max=w) - y2 = y2.clamp(min=0, max=h) - - box = torch.stack([x1,y1,x2,y2]).permute(1,0) - scale = torch.tensor([output_width/w, output_height/h, output_width/w, output_height/h])[None,:].to(result.device) - box = box*scale - return box - -def sem_seg_postprocess(result, img_size, output_height, output_width): - """ - Return semantic segmentation predictions in the original resolution. - - The input images are often resized when entering semantic segmentor. Moreover, in same - cases, they also padded inside segmentor to be divisible by maximum network stride. - As a result, we often need the predictions of the segmentor in a different - resolution from its inputs. - - Args: - result (Tensor): semantic segmentation prediction logits. A tensor of shape (C, H, W), - where C is the number of classes, and H, W are the height and width of the prediction. - img_size (tuple): image size that segmentor is taking as input. - output_height, output_width: the desired output resolution. - - Returns: - semantic segmentation prediction (Tensor): A tensor of the shape - (C, output_height, output_width) that contains per-pixel soft predictions. 
- """ - result = result[:, : img_size[0], : img_size[1]].expand(1, -1, -1, -1) - result = F.interpolate( - result, size=(output_height, output_width), mode="bilinear", align_corners=False - )[0] - return result diff --git a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/optim/radam.py b/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/optim/radam.py deleted file mode 100644 index f066c573f8b650a6162f0b54a1c2c100b2679f3b..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/optim/radam.py +++ /dev/null @@ -1,330 +0,0 @@ -""" -Imported from: https://github.com/LiyuanLucasLiu/RAdam - -Paper: https://arxiv.org/abs/1908.03265 - -@article{liu2019radam, - title={On the Variance of the Adaptive Learning Rate and Beyond}, - author={Liu, Liyuan and Jiang, Haoming and He, Pengcheng and Chen, Weizhu and Liu, Xiaodong and Gao, Jianfeng and Han, Jiawei}, - journal={arXiv preprint arXiv:1908.03265}, - year={2019} -} -""" -from __future__ import print_function, absolute_import -import math -import torch -from torch.optim.optimizer import Optimizer - - -class RAdam(Optimizer): - - def __init__( - self, - params, - lr=1e-3, - betas=(0.9, 0.999), - eps=1e-8, - weight_decay=0, - degenerated_to_sgd=True - ): - if not 0.0 <= lr: - raise ValueError("Invalid learning rate: {}".format(lr)) - if not 0.0 <= eps: - raise ValueError("Invalid epsilon value: {}".format(eps)) - if not 0.0 <= betas[0] < 1.0: - raise ValueError( - "Invalid beta parameter at index 0: {}".format(betas[0]) - ) - if not 0.0 <= betas[1] < 1.0: - raise ValueError( - "Invalid beta parameter at index 1: {}".format(betas[1]) - ) - - self.degenerated_to_sgd = degenerated_to_sgd - defaults = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay) - self.buffer = [[None, None, None] for ind in range(10)] - super(RAdam, self).__init__(params, defaults) - - def __setstate__(self, state): - super(RAdam, self).__setstate__(state) - - def step(self, closure=None): - - loss = None - if closure is not None: - loss = closure() - - for group in self.param_groups: - - for p in group['params']: - if p.grad is None: - continue - grad = p.grad.data.float() - if grad.is_sparse: - raise RuntimeError( - 'RAdam does not support sparse gradients' - ) - - p_data_fp32 = p.data.float() - - state = self.state[p] - - if len(state) == 0: - state['step'] = 0 - state['exp_avg'] = torch.zeros_like(p_data_fp32) - state['exp_avg_sq'] = torch.zeros_like(p_data_fp32) - else: - state['exp_avg'] = state['exp_avg'].type_as(p_data_fp32) - state['exp_avg_sq'] = state['exp_avg_sq'].type_as( - p_data_fp32 - ) - - exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq'] - beta1, beta2 = group['betas'] - - exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad) - exp_avg.mul_(beta1).add_(1 - beta1, grad) - - state['step'] += 1 - buffered = self.buffer[int(state['step'] % 10)] - if state['step'] == buffered[0]: - N_sma, step_size = buffered[1], buffered[2] - else: - buffered[0] = state['step'] - beta2_t = beta2**state['step'] - N_sma_max = 2 / (1-beta2) - 1 - N_sma = N_sma_max - 2 * state['step' - ] * beta2_t / (1-beta2_t) - buffered[1] = N_sma - - # more conservative since it's an approximated value - if N_sma >= 5: - step_size = math.sqrt( - (1-beta2_t) * (N_sma-4) / (N_sma_max-4) * - (N_sma-2) / N_sma * N_sma_max / (N_sma_max-2) - ) / (1 - beta1**state['step']) - elif self.degenerated_to_sgd: - step_size = 1.0 / (1 - beta1**state['step']) - else: - step_size = -1 - buffered[2] = 
step_size - - # more conservative since it's an approximated value - if N_sma >= 5: - if group['weight_decay'] != 0: - p_data_fp32.add_( - -group['weight_decay'] * group['lr'], p_data_fp32 - ) - denom = exp_avg_sq.sqrt().add_(group['eps']) - p_data_fp32.addcdiv_( - -step_size * group['lr'], exp_avg, denom - ) - p.data.copy_(p_data_fp32) - elif step_size > 0: - if group['weight_decay'] != 0: - p_data_fp32.add_( - -group['weight_decay'] * group['lr'], p_data_fp32 - ) - p_data_fp32.add_(-step_size * group['lr'], exp_avg) - p.data.copy_(p_data_fp32) - - return loss - - -class PlainRAdam(Optimizer): - - def __init__( - self, - params, - lr=1e-3, - betas=(0.9, 0.999), - eps=1e-8, - weight_decay=0, - degenerated_to_sgd=True - ): - if not 0.0 <= lr: - raise ValueError("Invalid learning rate: {}".format(lr)) - if not 0.0 <= eps: - raise ValueError("Invalid epsilon value: {}".format(eps)) - if not 0.0 <= betas[0] < 1.0: - raise ValueError( - "Invalid beta parameter at index 0: {}".format(betas[0]) - ) - if not 0.0 <= betas[1] < 1.0: - raise ValueError( - "Invalid beta parameter at index 1: {}".format(betas[1]) - ) - - self.degenerated_to_sgd = degenerated_to_sgd - defaults = dict(lr=lr, betas=betas, eps=eps, weight_decay=weight_decay) - - super(PlainRAdam, self).__init__(params, defaults) - - def __setstate__(self, state): - super(PlainRAdam, self).__setstate__(state) - - def step(self, closure=None): - - loss = None - if closure is not None: - loss = closure() - - for group in self.param_groups: - - for p in group['params']: - if p.grad is None: - continue - grad = p.grad.data.float() - if grad.is_sparse: - raise RuntimeError( - 'RAdam does not support sparse gradients' - ) - - p_data_fp32 = p.data.float() - - state = self.state[p] - - if len(state) == 0: - state['step'] = 0 - state['exp_avg'] = torch.zeros_like(p_data_fp32) - state['exp_avg_sq'] = torch.zeros_like(p_data_fp32) - else: - state['exp_avg'] = state['exp_avg'].type_as(p_data_fp32) - state['exp_avg_sq'] = state['exp_avg_sq'].type_as( - p_data_fp32 - ) - - exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq'] - beta1, beta2 = group['betas'] - - exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad) - exp_avg.mul_(beta1).add_(1 - beta1, grad) - - state['step'] += 1 - beta2_t = beta2**state['step'] - N_sma_max = 2 / (1-beta2) - 1 - N_sma = N_sma_max - 2 * state['step'] * beta2_t / (1-beta2_t) - - # more conservative since it's an approximated value - if N_sma >= 5: - if group['weight_decay'] != 0: - p_data_fp32.add_( - -group['weight_decay'] * group['lr'], p_data_fp32 - ) - step_size = group['lr'] * math.sqrt( - (1-beta2_t) * (N_sma-4) / (N_sma_max-4) * - (N_sma-2) / N_sma * N_sma_max / (N_sma_max-2) - ) / (1 - beta1**state['step']) - denom = exp_avg_sq.sqrt().add_(group['eps']) - p_data_fp32.addcdiv_(-step_size, exp_avg, denom) - p.data.copy_(p_data_fp32) - elif self.degenerated_to_sgd: - if group['weight_decay'] != 0: - p_data_fp32.add_( - -group['weight_decay'] * group['lr'], p_data_fp32 - ) - step_size = group['lr'] / (1 - beta1**state['step']) - p_data_fp32.add_(-step_size, exp_avg) - p.data.copy_(p_data_fp32) - - return loss - - -class AdamW(Optimizer): - - def __init__( - self, - params, - lr=1e-3, - betas=(0.9, 0.999), - eps=1e-8, - weight_decay=0, - warmup=0 - ): - if not 0.0 <= lr: - raise ValueError("Invalid learning rate: {}".format(lr)) - if not 0.0 <= eps: - raise ValueError("Invalid epsilon value: {}".format(eps)) - if not 0.0 <= betas[0] < 1.0: - raise ValueError( - "Invalid beta parameter at index 0: 
{}".format(betas[0]) - ) - if not 0.0 <= betas[1] < 1.0: - raise ValueError( - "Invalid beta parameter at index 1: {}".format(betas[1]) - ) - - defaults = dict( - lr=lr, - betas=betas, - eps=eps, - weight_decay=weight_decay, - warmup=warmup - ) - super(AdamW, self).__init__(params, defaults) - - def __setstate__(self, state): - super(AdamW, self).__setstate__(state) - - def step(self, closure=None): - loss = None - if closure is not None: - loss = closure() - - for group in self.param_groups: - - for p in group['params']: - if p.grad is None: - continue - grad = p.grad.data.float() - if grad.is_sparse: - raise RuntimeError( - 'Adam does not support sparse gradients, please consider SparseAdam instead' - ) - - p_data_fp32 = p.data.float() - - state = self.state[p] - - if len(state) == 0: - state['step'] = 0 - state['exp_avg'] = torch.zeros_like(p_data_fp32) - state['exp_avg_sq'] = torch.zeros_like(p_data_fp32) - else: - state['exp_avg'] = state['exp_avg'].type_as(p_data_fp32) - state['exp_avg_sq'] = state['exp_avg_sq'].type_as( - p_data_fp32 - ) - - exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq'] - beta1, beta2 = group['betas'] - - state['step'] += 1 - - exp_avg_sq.mul_(beta2).addcmul_(1 - beta2, grad, grad) - exp_avg.mul_(beta1).add_(1 - beta1, grad) - - denom = exp_avg_sq.sqrt().add_(group['eps']) - bias_correction1 = 1 - beta1**state['step'] - bias_correction2 = 1 - beta2**state['step'] - - if group['warmup'] > state['step']: - scheduled_lr = 1e-8 + state['step'] * group['lr'] / group[ - 'warmup'] - else: - scheduled_lr = group['lr'] - - step_size = scheduled_lr * math.sqrt( - bias_correction2 - ) / bias_correction1 - - if group['weight_decay'] != 0: - p_data_fp32.add_( - -group['weight_decay'] * scheduled_lr, p_data_fp32 - ) - - p_data_fp32.addcdiv_(-step_size, exp_avg, denom) - - p.data.copy_(p_data_fp32) - - return loss diff --git a/spaces/xiangdy/chatGPT/modules/config.py b/spaces/xiangdy/chatGPT/modules/config.py deleted file mode 100644 index c5ae0b3ad061f1088d5cf9cb739dbe96254a503b..0000000000000000000000000000000000000000 --- a/spaces/xiangdy/chatGPT/modules/config.py +++ /dev/null @@ -1,186 +0,0 @@ -from collections import defaultdict -from contextlib import contextmanager -import os -import logging -import sys -import commentjson as json - -from . import shared -from . 
import presets - - -__all__ = [ - "my_api_key", - "authflag", - "auth_list", - "dockerflag", - "retrieve_proxy", - "log_level", - "advance_docs", - "update_doc_config", - "render_latex", - "usage_limit", - "multi_api_key", - "server_name", - "server_port", - "share", - "hide_history_when_not_logged_in" -] - -# 添加一个统一的config文件,避免文件过多造成的疑惑(优先级最低) -# 同时,也可以为后续支持自定义功能提供config的帮助 -if os.path.exists("config.json"): - with open("config.json", "r", encoding='utf-8') as f: - config = json.load(f) -else: - config = {} - -lang_config = config.get("language", "auto") -language = os.environ.get("LANGUAGE", lang_config) - -hide_history_when_not_logged_in = config.get("hide_history_when_not_logged_in", False) - -if os.path.exists("api_key.txt"): - logging.info("检测到api_key.txt文件,正在进行迁移...") - with open("api_key.txt", "r") as f: - config["openai_api_key"] = f.read().strip() - os.rename("api_key.txt", "api_key(deprecated).txt") - with open("config.json", "w", encoding='utf-8') as f: - json.dump(config, f, indent=4) - -if os.path.exists("auth.json"): - logging.info("检测到auth.json文件,正在进行迁移...") - auth_list = [] - with open("auth.json", "r", encoding='utf-8') as f: - auth = json.load(f) - for _ in auth: - if auth[_]["username"] and auth[_]["password"]: - auth_list.append((auth[_]["username"], auth[_]["password"])) - else: - logging.error("请检查auth.json文件中的用户名和密码!") - sys.exit(1) - config["users"] = auth_list - os.rename("auth.json", "auth(deprecated).json") - with open("config.json", "w", encoding='utf-8') as f: - json.dump(config, f, indent=4) - -## 处理docker if we are running in Docker -dockerflag = config.get("dockerflag", False) -if os.environ.get("dockerrun") == "yes": - dockerflag = True - -## 处理 api-key 以及 允许的用户列表 -my_api_key = config.get("openai_api_key", "") -my_api_key = os.environ.get("OPENAI_API_KEY", my_api_key) - -xmchat_api_key = config.get("xmchat_api_key", "") -os.environ["XMCHAT_API_KEY"] = xmchat_api_key - -render_latex = config.get("render_latex", True) - -if render_latex: - os.environ["RENDER_LATEX"] = "yes" -else: - os.environ["RENDER_LATEX"] = "no" - -usage_limit = os.environ.get("USAGE_LIMIT", config.get("usage_limit", 120)) - -## 多账户机制 -multi_api_key = config.get("multi_api_key", False) # 是否开启多账户机制 -if multi_api_key: - api_key_list = config.get("api_key_list", []) - if len(api_key_list) == 0: - logging.error("多账号模式已开启,但api_key_list为空,请检查config.json") - sys.exit(1) - shared.state.set_api_key_queue(api_key_list) - -auth_list = config.get("users", []) # 实际上是使用者的列表 -authflag = len(auth_list) > 0 # 是否开启认证的状态值,改为判断auth_list长度 - -# 处理自定义的api_host,优先读环境变量的配置,如果存在则自动装配 -api_host = os.environ.get("api_host", config.get("api_host", "")) -if api_host: - shared.state.set_api_host(api_host) - -@contextmanager -def retrieve_openai_api(api_key = None): - old_api_key = os.environ.get("OPENAI_API_KEY", "") - if api_key is None: - os.environ["OPENAI_API_KEY"] = my_api_key - yield my_api_key - else: - os.environ["OPENAI_API_KEY"] = api_key - yield api_key - os.environ["OPENAI_API_KEY"] = old_api_key - -## 处理log -log_level = config.get("log_level", "INFO") -logging.basicConfig( - level=log_level, - format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s", -) - -## 处理代理: -http_proxy = config.get("http_proxy", "") -https_proxy = config.get("https_proxy", "") -http_proxy = os.environ.get("HTTP_PROXY", http_proxy) -https_proxy = os.environ.get("HTTPS_PROXY", https_proxy) - -# 重置系统变量,在不需要设置的时候不设置环境变量,以免引起全局代理报错 -os.environ["HTTP_PROXY"] = "" -os.environ["HTTPS_PROXY"] = "" - -local_embedding = 
config.get("local_embedding", False) # 是否使用本地embedding - -@contextmanager -def retrieve_proxy(proxy=None): - """ - 1, 如果proxy = NONE,设置环境变量,并返回最新设置的代理 - 2,如果proxy != NONE,更新当前的代理配置,但是不更新环境变量 - """ - global http_proxy, https_proxy - if proxy is not None: - http_proxy = proxy - https_proxy = proxy - yield http_proxy, https_proxy - else: - old_var = os.environ["HTTP_PROXY"], os.environ["HTTPS_PROXY"] - os.environ["HTTP_PROXY"] = http_proxy - os.environ["HTTPS_PROXY"] = https_proxy - yield http_proxy, https_proxy # return new proxy - - # return old proxy - os.environ["HTTP_PROXY"], os.environ["HTTPS_PROXY"] = old_var - - -## 处理advance docs -advance_docs = defaultdict(lambda: defaultdict(dict)) -advance_docs.update(config.get("advance_docs", {})) -def update_doc_config(two_column_pdf): - global advance_docs - advance_docs["pdf"]["two_column"] = two_column_pdf - - logging.info(f"更新后的文件参数为:{advance_docs}") - -## 处理gradio.launch参数 -server_name = config.get("server_name", None) -server_port = config.get("server_port", None) -if server_name is None: - if dockerflag: - server_name = "0.0.0.0" - else: - server_name = "127.0.0.1" -if server_port is None: - if dockerflag: - server_port = 7860 - -assert server_port is None or type(server_port) == int, "要求port设置为int类型" - -# 设置默认model -default_model = config.get("default_model", "") -try: - presets.DEFAULT_MODEL = presets.MODELS.index(default_model) -except ValueError: - pass - -share = config.get("share", False) diff --git a/spaces/xiaoxicc/susu/utils.py b/spaces/xiaoxicc/susu/utils.py deleted file mode 100644 index 8eeabfe5bfc3a80e4c875c778426608f66ce41da..0000000000000000000000000000000000000000 --- a/spaces/xiaoxicc/susu/utils.py +++ /dev/null @@ -1,389 +0,0 @@ -# -*- coding:utf-8 -*- -from __future__ import annotations -from typing import TYPE_CHECKING, Any, Callable, Dict, List, Tuple, Type -import logging -import json -import os -import datetime -import hashlib -import csv -import requests -import re - -import gradio as gr -from pypinyin import lazy_pinyin -import tiktoken -import mdtex2html -from markdown import markdown -from pygments import highlight -from pygments.lexers import get_lexer_by_name -from pygments.formatters import HtmlFormatter - -from presets import * - -# logging.basicConfig(level=logging.INFO, format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s") - -if TYPE_CHECKING: - from typing import TypedDict - - class DataframeData(TypedDict): - headers: List[str] - data: List[List[str | int | bool]] - - -def count_token(message): - encoding = tiktoken.get_encoding("cl100k_base") - input_str = f"role: {message['role']}, content: {message['content']}" - length = len(encoding.encode(input_str)) - return length - - -def markdown_to_html_with_syntax_highlight(md_str): - def replacer(match): - lang = match.group(1) or "text" - code = match.group(2) - - try: - lexer = get_lexer_by_name(lang, stripall=True) - except ValueError: - lexer = get_lexer_by_name("text", stripall=True) - - formatter = HtmlFormatter() - highlighted_code = highlight(code, lexer, formatter) - - return f'
        <pre><code class="{lang}">{highlighted_code}</code></pre>
        ' - - code_block_pattern = r"```(\w+)?\n([\s\S]+?)\n```" - md_str = re.sub(code_block_pattern, replacer, md_str, flags=re.MULTILINE) - - html_str = markdown(md_str) - return html_str - - -def normalize_markdown(md_text: str) -> str: - lines = md_text.split("\n") - normalized_lines = [] - inside_list = False - - for i, line in enumerate(lines): - if re.match(r"^(\d+\.|-|\*|\+)\s", line.strip()): - if not inside_list and i > 0 and lines[i - 1].strip() != "": - normalized_lines.append("") - inside_list = True - normalized_lines.append(line) - elif inside_list and line.strip() == "": - if i < len(lines) - 1 and not re.match( - r"^(\d+\.|-|\*|\+)\s", lines[i + 1].strip() - ): - normalized_lines.append(line) - continue - else: - inside_list = False - normalized_lines.append(line) - - return "\n".join(normalized_lines) - - -def convert_mdtext(md_text): - code_block_pattern = re.compile(r"```(.*?)(?:```|$)", re.DOTALL) - inline_code_pattern = re.compile(r"`(.*?)`", re.DOTALL) - code_blocks = code_block_pattern.findall(md_text) - non_code_parts = code_block_pattern.split(md_text)[::2] - - result = [] - for non_code, code in zip(non_code_parts, code_blocks + [""]): - if non_code.strip(): - non_code = normalize_markdown(non_code) - if inline_code_pattern.search(non_code): - result.append(markdown(non_code, extensions=["tables"])) - else: - result.append(mdtex2html.convert(non_code, extensions=["tables"])) - if code.strip(): - # _, code = detect_language(code) # 暂时去除代码高亮功能,因为在大段代码的情况下会出现问题 - # code = code.replace("\n\n", "\n") # 暂时去除代码中的空行,因为在大段代码的情况下会出现问题 - code = f"```{code}\n\n```" - code = markdown_to_html_with_syntax_highlight(code) - result.append(code) - result = "".join(result) - return result - -def convert_user(userinput): - userinput = userinput.replace("\n", "
        ") - return f"
        {userinput}
        " - -def detect_language(code): - if code.startswith("\n"): - first_line = "" - else: - first_line = code.strip().split("\n", 1)[0] - language = first_line.lower() if first_line else "" - code_without_language = code[len(first_line) :].lstrip() if first_line else code - return language, code_without_language - - -def construct_text(role, text): - return {"role": role, "content": text} - - -def construct_user(text): - return construct_text("user", text) - - -def construct_system(text): - return construct_text("system", text) - - -def construct_assistant(text): - return construct_text("assistant", text) - - -def construct_token_message(token, stream=False): - return f"Token 计数: {token}" - - -def delete_last_conversation(chatbot, history, previous_token_count): - if len(chatbot) > 0 and standard_error_msg in chatbot[-1][1]: - logging.info("由于包含报错信息,只删除chatbot记录") - chatbot.pop() - return chatbot, history - if len(history) > 0: - logging.info("删除了一组对话历史") - history.pop() - history.pop() - if len(chatbot) > 0: - logging.info("删除了一组chatbot对话") - chatbot.pop() - if len(previous_token_count) > 0: - logging.info("删除了一组对话的token计数记录") - previous_token_count.pop() - return ( - chatbot, - history, - previous_token_count, - construct_token_message(sum(previous_token_count)), - ) - - -def save_file(filename, system, history, chatbot): - logging.info("保存对话历史中……") - os.makedirs(HISTORY_DIR, exist_ok=True) - if filename.endswith(".json"): - json_s = {"system": system, "history": history, "chatbot": chatbot} - print(json_s) - with open(os.path.join(HISTORY_DIR, filename), "w") as f: - json.dump(json_s, f) - elif filename.endswith(".md"): - md_s = f"system: \n- {system} \n" - for data in history: - md_s += f"\n{data['role']}: \n- {data['content']} \n" - with open(os.path.join(HISTORY_DIR, filename), "w", encoding="utf8") as f: - f.write(md_s) - logging.info("保存对话历史完毕") - return os.path.join(HISTORY_DIR, filename) - - -def save_chat_history(filename, system, history, chatbot): - if filename == "": - return - if not filename.endswith(".json"): - filename += ".json" - return save_file(filename, system, history, chatbot) - - -def export_markdown(filename, system, history, chatbot): - if filename == "": - return - if not filename.endswith(".md"): - filename += ".md" - return save_file(filename, system, history, chatbot) - - -def load_chat_history(filename, system, history, chatbot): - logging.info("加载对话历史中……") - if type(filename) != str: - filename = filename.name - try: - with open(os.path.join(HISTORY_DIR, filename), "r") as f: - json_s = json.load(f) - try: - if type(json_s["history"][0]) == str: - logging.info("历史记录格式为旧版,正在转换……") - new_history = [] - for index, item in enumerate(json_s["history"]): - if index % 2 == 0: - new_history.append(construct_user(item)) - else: - new_history.append(construct_assistant(item)) - json_s["history"] = new_history - logging.info(new_history) - except: - # 没有对话历史 - pass - logging.info("加载对话历史完毕") - return filename, json_s["system"], json_s["history"], json_s["chatbot"] - except FileNotFoundError: - logging.info("没有找到对话历史文件,不执行任何操作") - return filename, system, history, chatbot - - -def sorted_by_pinyin(list): - return sorted(list, key=lambda char: lazy_pinyin(char)[0][0]) - - -def get_file_names(dir, plain=False, filetypes=[".json"]): - logging.info(f"获取文件名列表,目录为{dir},文件类型为{filetypes},是否为纯文本列表{plain}") - files = [] - try: - for type in filetypes: - files += [f for f in os.listdir(dir) if f.endswith(type)] - except FileNotFoundError: - files = [] - files = 
sorted_by_pinyin(files) - if files == []: - files = [""] - if plain: - return files - else: - return gr.Dropdown.update(choices=files) - - -def get_history_names(plain=False): - logging.info("获取历史记录文件名列表") - return get_file_names(HISTORY_DIR, plain) - - -def load_template(filename, mode=0): - logging.info(f"加载模板文件{filename},模式为{mode}(0为返回字典和下拉菜单,1为返回下拉菜单,2为返回字典)") - lines = [] - logging.info("Loading template...") - if filename.endswith(".json"): - with open(os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8") as f: - lines = json.load(f) - lines = [[i["act"], i["prompt"]] for i in lines] - else: - with open( - os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8" - ) as csvfile: - reader = csv.reader(csvfile) - lines = list(reader) - lines = lines[1:] - if mode == 1: - return sorted_by_pinyin([row[0] for row in lines]) - elif mode == 2: - return {row[0]: row[1] for row in lines} - else: - choices = sorted_by_pinyin([row[0] for row in lines]) - return {row[0]: row[1] for row in lines}, gr.Dropdown.update( - choices=choices, value=choices[0] - ) - - -def get_template_names(plain=False): - logging.info("获取模板文件名列表") - return get_file_names(TEMPLATES_DIR, plain, filetypes=[".csv", "json"]) - - -def get_template_content(templates, selection, original_system_prompt): - logging.info(f"应用模板中,选择为{selection},原始系统提示为{original_system_prompt}") - try: - return templates[selection] - except: - return original_system_prompt - - -def reset_state(): - logging.info("重置状态") - return [], [], [], construct_token_message(0) - - -def reset_textbox(): - return gr.update(value="") - - -def reset_default(): - global API_URL - API_URL = "https://api.openai.com/v1/chat/completions" - os.environ.pop("HTTPS_PROXY", None) - os.environ.pop("https_proxy", None) - return gr.update(value=API_URL), gr.update(value=""), "API URL 和代理已重置" - - -def change_api_url(url): - global API_URL - API_URL = url - msg = f"API地址更改为了{url}" - logging.info(msg) - return msg - - -def change_proxy(proxy): - os.environ["HTTPS_PROXY"] = proxy - msg = f"代理更改为了{proxy}" - logging.info(msg) - return msg - - -def hide_middle_chars(s): - if len(s) <= 8: - return s - else: - head = s[:4] - tail = s[-4:] - hidden = "*" * (len(s) - 8) - return head + hidden + tail - - -def submit_key(key): - key = key.strip() - msg = f"API密钥更改为了{hide_middle_chars(key)}" - logging.info(msg) - return key, msg - - -def sha1sum(filename): - sha1 = hashlib.sha1() - sha1.update(filename.encode("utf-8")) - return sha1.hexdigest() - - -def replace_today(prompt): - today = datetime.datetime.today().strftime("%Y-%m-%d") - return prompt.replace("{current_date}", today) - - -def get_geoip(): - response = requests.get("https://ipapi.co/json/", timeout=5) - try: - data = response.json() - except: - data = {"error": True, "reason": "连接ipapi失败"} - if "error" in data.keys(): - logging.warning(f"无法获取IP地址信息。\n{data}") - if data["reason"] == "RateLimited": - return ( - f"获取IP地理位置失败,因为达到了检测IP的速率限制。聊天功能可能仍然可用,但请注意,如果您的IP地址在不受支持的地区,您可能会遇到问题。" - ) - else: - return f"获取IP地理位置失败。原因:{data['reason']}。你仍然可以使用聊天功能。" - else: - country = data["country_name"] - if country == "China": - text = "**您的IP区域:中国。请立即检查代理设置,在不受支持的地区使用API可能导致账号被封禁。**" - else: - text = f"您的IP区域:{country}。" - logging.info(text) - return text - - -def find_n(lst, max_num): - n = len(lst) - total = sum(lst) - - if total < max_num: - return n - - for i in range(len(lst)): - if total - lst[i] < max_num: - return n - i -1 - total = total - lst[i] - return 1 diff --git a/spaces/xiaoxuezi/spleeter/spleeter/__init__.py 
b/spaces/xiaoxuezi/spleeter/spleeter/__init__.py deleted file mode 100644 index 9c89afa5b1968b6c47301619420eeaeecdab2745..0000000000000000000000000000000000000000 --- a/spaces/xiaoxuezi/spleeter/spleeter/__init__.py +++ /dev/null @@ -1,24 +0,0 @@ -#!/usr/bin/env python -# coding: utf8 - -""" - Spleeter is the Deezer source separation library with pretrained models. - The library is based on Tensorflow: - - - It provides already trained model for performing separation. - - It makes it easy to train source separation model with tensorflow - (provided you have a dataset of isolated sources). - - This module allows to interact easily from command line with Spleeter - by providing train, evaluation and source separation action. -""" - -__email__ = "spleeter@deezer.com" -__author__ = "Deezer Research" -__license__ = "MIT License" - - -class SpleeterError(Exception): - """ Custom exception for Spleeter related error. """ - - pass diff --git a/spaces/yangogo/bingo/src/components/ui/voice/index.tsx b/spaces/yangogo/bingo/src/components/ui/voice/index.tsx deleted file mode 100644 index 4adcb632226bfced8b97092782811edf08b56569..0000000000000000000000000000000000000000 --- a/spaces/yangogo/bingo/src/components/ui/voice/index.tsx +++ /dev/null @@ -1,28 +0,0 @@ -import './index.scss' - -export interface VoiceProps extends CSSPropertyRule { - num?: number; - duration?: number; -} -export default function Voice({ duration = 400, num = 7, ...others }) { - return ( -
        - {Array.from({ length: num }).map((_, index) => { - const randomDuration = Math.random() * 100 + duration - const initialDelay = Math.random() * 2 * duration - const initialScale = Math.sin((index + 1) * Math.PI / num) - return ( -
        - ) - })} -
        - ) -} diff --git a/spaces/yanli01/gpt01/run_macOS.command b/spaces/yanli01/gpt01/run_macOS.command deleted file mode 100644 index 2d26597ae47519f42336ccffc16646713a192ae1..0000000000000000000000000000000000000000 --- a/spaces/yanli01/gpt01/run_macOS.command +++ /dev/null @@ -1,31 +0,0 @@ -#!/bin/bash - -# 获取脚本所在目录 -script_dir=$(dirname "$(readlink -f "$0")") - -# 将工作目录更改为脚本所在目录 -cd "$script_dir" || exit - -# 检查Git仓库是否有更新 -git remote update -pwd - -if ! git status -uno | grep 'up to date' > /dev/null; then - # 如果有更新,关闭当前运行的服务器 - pkill -f ChuanhuChatbot.py - - # 拉取最新更改 - git pull - - # 安装依赖 - pip3 install -r requirements.txt - - # 重新启动服务器 - nohup python3 ChuanhuChatbot.py & -fi - -# 检查ChuanhuChatbot.py是否在运行 -if ! pgrep -f ChuanhuChatbot.py > /dev/null; then - # 如果没有运行,启动服务器 - nohup python3 ChuanhuChatbot.py & -fi diff --git a/spaces/yaoshining/text-generation-webui/css/html_readable_style.css b/spaces/yaoshining/text-generation-webui/css/html_readable_style.css deleted file mode 100644 index cd5fca97868167718d239b4be72e9271971807e2..0000000000000000000000000000000000000000 --- a/spaces/yaoshining/text-generation-webui/css/html_readable_style.css +++ /dev/null @@ -1,29 +0,0 @@ -.container { - max-width: 600px; - margin-left: auto; - margin-right: auto; - background-color: rgb(31, 41, 55); - padding: 3em; - word-break: break-word; - overflow-wrap: anywhere; - color: #efefef !important; -} - -.container p, .container li { - font-size: 16px !important; - color: #efefef !important; - margin-bottom: 22px; - line-height: 1.4 !important; -} - -.container li > p { - display: inline !important; -} - -.container code { - overflow-x: auto; -} - -.container :not(pre) > code { - white-space: normal !important; -} \ No newline at end of file diff --git a/spaces/yaoshining/text-generation-webui/docs/WSL-installation-guide.md b/spaces/yaoshining/text-generation-webui/docs/WSL-installation-guide.md deleted file mode 100644 index 30b7fa3e6f4613898fbb0d0bd16b77db5d79c14b..0000000000000000000000000000000000000000 --- a/spaces/yaoshining/text-generation-webui/docs/WSL-installation-guide.md +++ /dev/null @@ -1,82 +0,0 @@ -Guide created by [@jfryton](https://github.com/jfryton). Thank you jfryton. - ------ - -Here's an easy-to-follow, step-by-step guide for installing Windows Subsystem for Linux (WSL) with Ubuntu on Windows 10/11: - -## Step 1: Enable WSL - -1. Press the Windows key + X and click on "Windows PowerShell (Admin)" or "Windows Terminal (Admin)" to open PowerShell or Terminal with administrator privileges. -2. In the PowerShell window, type the following command and press Enter: - -``` -wsl --install -``` - -If this command doesn't work, you can enable WSL with the following command for Windows 10: - -``` -wsl --set-default-version 1 -``` - -For Windows 11, you can use: - -``` -wsl --set-default-version 2 -``` - -You may be prompted to restart your computer. If so, save your work and restart. - -## Step 2: Install Ubuntu - -1. Open the Microsoft Store. -2. Search for "Ubuntu" in the search bar. -3. Choose the desired Ubuntu version (e.g., Ubuntu 20.04 LTS) and click "Get" or "Install" to download and install the Ubuntu app. -4. Once the installation is complete, click "Launch" or search for "Ubuntu" in the Start menu and open the app. - -## Step 3: Set up Ubuntu - -1. When you first launch the Ubuntu app, it will take a few minutes to set up. Be patient as it installs the necessary files and sets up your environment. -2. 
Once the setup is complete, you will be prompted to create a new UNIX username and password. Choose a username and password, and make sure to remember them, as you will need them for future administrative tasks within the Ubuntu environment. - -## Step 4: Update and upgrade packages - -1. After setting up your username and password, it's a good idea to update and upgrade your Ubuntu system. Run the following commands in the Ubuntu terminal: - -``` -sudo apt update -sudo apt upgrade -``` - -2. Enter your password when prompted. This will update the package list and upgrade any outdated packages. - -Congratulations! You have now installed WSL with Ubuntu on your Windows 10/11 system. You can use the Ubuntu terminal for various tasks, like running Linux commands, installing packages, or managing files. - -You can launch your WSL Ubuntu installation by selecting the Ubuntu app (like any other program installed on your computer) or typing 'ubuntu' into Powershell or Terminal. - -## Step 5: Proceed with Linux instructions - -1. You can now follow the Linux setup instructions. If you receive any error messages about a missing tool or package, just install them using apt: - -``` -sudo apt install [missing package] -``` - -You will probably need to install build-essential - -``` -sudo apt install build-essential -``` - -If you face any issues or need to troubleshoot, you can always refer to the official Microsoft documentation for WSL: https://docs.microsoft.com/en-us/windows/wsl/ - -#### WSL2 performance using /mnt: -when you git clone a repository, put it inside WSL and not outside. To understand more, take a look at this [issue](https://github.com/microsoft/WSL/issues/4197#issuecomment-604592340) - -## Bonus: Port Forwarding - -By default, you won't be able to access the webui from another device on your local network. You will need to setup the appropriate port forwarding using the following command (using PowerShell or Terminal with administrator privileges). 
- -``` -netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=7860 connectaddress=localhost connectport=7860 -``` diff --git a/spaces/yaoshining/text-generation-webui/server.py b/spaces/yaoshining/text-generation-webui/server.py deleted file mode 100644 index 408d5f199f65645b415582d412d39eb4e4da123e..0000000000000000000000000000000000000000 --- a/spaces/yaoshining/text-generation-webui/server.py +++ /dev/null @@ -1,1349 +0,0 @@ -import os -import warnings - -from modules.logging_colors import logger -from modules.block_requests import RequestBlocker - -os.environ['GRADIO_ANALYTICS_ENABLED'] = 'False' -os.environ['BITSANDBYTES_NOWELCOME'] = '1' -warnings.filterwarnings('ignore', category=UserWarning, message='TypedStorage is deprecated') - -with RequestBlocker(): - import gradio as gr - -import matplotlib - -matplotlib.use('Agg') # This fixes LaTeX rendering on some systems - -import importlib -import json -import math -import os -import re -import sys -import time -import traceback -from functools import partial -from pathlib import Path -from threading import Lock - -import psutil -import torch -import yaml -from PIL import Image - -import modules.extensions as extensions_module -from modules import chat, loaders, presets, shared, training, ui, utils -from modules.extensions import apply_extensions -from modules.github import clone_or_pull_repository -from modules.html_generator import chat_html_wrapper -from modules.LoRA import add_lora_to_model -from modules.models import load_model, unload_model -from modules.models_settings import ( - apply_model_settings_to_state, - get_model_settings_from_yamls, - save_model_settings, - update_model_parameters -) -from modules.text_generation import ( - generate_reply_wrapper, - get_encoded_length, - stop_everything_event -) - - -def load_model_wrapper(selected_model, loader, autoload=False): - if not autoload: - yield f"The settings for {selected_model} have been updated.\nClick on \"Load the model\" to load it." - return - - if selected_model == 'None': - yield "No model selected" - else: - try: - yield f"Loading {selected_model}..." - shared.model_name = selected_model - unload_model() - if selected_model != '': - shared.model, shared.tokenizer = load_model(shared.model_name, loader) - - if shared.model is not None: - yield f"Successfully loaded {selected_model}" - else: - yield f"Failed to load {selected_model}." 
- except: - exc = traceback.format_exc() - logger.error('Failed to load the model.') - print(exc) - yield exc - - -def load_lora_wrapper(selected_loras): - yield ("Applying the following LoRAs to {}:\n\n{}".format(shared.model_name, '\n'.join(selected_loras))) - add_lora_to_model(selected_loras) - yield ("Successfuly applied the LoRAs") - - -def load_prompt(fname): - if fname in ['None', '']: - return '' - elif fname.startswith('Instruct-'): - fname = re.sub('^Instruct-', '', fname) - file_path = Path(f'characters/instruction-following/{fname}.yaml') - if not file_path.exists(): - return '' - - with open(file_path, 'r', encoding='utf-8') as f: - data = yaml.safe_load(f) - output = '' - if 'context' in data: - output += data['context'] - - replacements = { - '<|user|>': data['user'], - '<|bot|>': data['bot'], - '<|user-message|>': 'Input', - } - - output += utils.replace_all(data['turn_template'].split('<|bot-message|>')[0], replacements) - return output.rstrip(' ') - else: - file_path = Path(f'prompts/{fname}.txt') - if not file_path.exists(): - return '' - - with open(file_path, 'r', encoding='utf-8') as f: - text = f.read() - if text[-1] == '\n': - text = text[:-1] - - return text - - -def count_tokens(text): - try: - tokens = get_encoded_length(text) - return f'{tokens} tokens in the input.' - except: - return 'Couldn\'t count the number of tokens. Is a tokenizer loaded?' - - -def download_model_wrapper(repo_id, progress=gr.Progress()): - try: - downloader_module = importlib.import_module("download-model") - downloader = downloader_module.ModelDownloader() - repo_id_parts = repo_id.split(":") - model = repo_id_parts[0] if len(repo_id_parts) > 0 else repo_id - branch = repo_id_parts[1] if len(repo_id_parts) > 1 else "main" - check = False - - progress(0.0) - yield ("Cleaning up the model/branch names") - model, branch = downloader.sanitize_model_and_branch_names(model, branch) - - yield ("Getting the download links from Hugging Face") - links, sha256, is_lora = downloader.get_download_links_from_huggingface(model, branch, text_only=False) - - yield ("Getting the output folder") - output_folder = downloader.get_output_folder(model, branch, is_lora) - - if check: - progress(0.5) - yield ("Checking previously downloaded files") - downloader.check_model_files(model, branch, links, sha256, output_folder) - progress(1.0) - else: - yield (f"Downloading files to {output_folder}") - downloader.download_model_files(model, branch, links, sha256, output_folder, progress_bar=progress, - threads=1) - yield ("Done!") - except: - progress(1.0) - yield traceback.format_exc() - - -def create_model_menus(): - # Finding the default values for the GPU and CPU memories - total_mem = [] - for i in range(torch.cuda.device_count()): - total_mem.append(math.floor(torch.cuda.get_device_properties(i).total_memory / (1024 * 1024))) - - default_gpu_mem = [] - if shared.args.gpu_memory is not None and len(shared.args.gpu_memory) > 0: - for i in shared.args.gpu_memory: - if 'mib' in i.lower(): - default_gpu_mem.append(int(re.sub('[a-zA-Z ]', '', i))) - else: - default_gpu_mem.append(int(re.sub('[a-zA-Z ]', '', i)) * 1000) - while len(default_gpu_mem) < len(total_mem): - default_gpu_mem.append(0) - - total_cpu_mem = math.floor(psutil.virtual_memory().total / (1024 * 1024)) - if shared.args.cpu_memory is not None: - default_cpu_mem = re.sub('[a-zA-Z ]', '', shared.args.cpu_memory) - else: - default_cpu_mem = 0 - - with gr.Row(): - with gr.Column(): - with gr.Row(): - with gr.Column(): - with gr.Row(): - 
shared.gradio['model_menu'] = gr.Dropdown(choices=utils.get_available_models(), - value=shared.model_name, label='Model', - elem_classes='slim-dropdown') - ui.create_refresh_button(shared.gradio['model_menu'], lambda: None, - lambda: {'choices': utils.get_available_models()}, 'refresh-button') - load = gr.Button("Load", visible=not shared.settings['autoload_model'], - elem_classes='refresh-button') - unload = gr.Button("Unload", elem_classes='refresh-button') - reload = gr.Button("Reload", elem_classes='refresh-button') - save_settings = gr.Button("Save settings", elem_classes='refresh-button') - - with gr.Column(): - with gr.Row(): - shared.gradio['lora_menu'] = gr.Dropdown(multiselect=True, choices=utils.get_available_loras(), - value=shared.lora_names, label='LoRA(s)', - elem_classes='slim-dropdown') - ui.create_refresh_button(shared.gradio['lora_menu'], lambda: None, - lambda: {'choices': utils.get_available_loras(), - 'value': shared.lora_names}, 'refresh-button') - shared.gradio['lora_menu_apply'] = gr.Button(value='Apply LoRAs', elem_classes='refresh-button') - - with gr.Row(): - with gr.Column(): - shared.gradio['loader'] = gr.Dropdown(label="Model loader", - choices=["Transformers", "AutoGPTQ", "GPTQ-for-LLaMa", "ExLlama", - "ExLlama_HF", "llama.cpp"], value=None) - with gr.Box(): - with gr.Row(): - with gr.Column(): - for i in range(len(total_mem)): - shared.gradio[f'gpu_memory_{i}'] = gr.Slider(label=f"gpu-memory in MiB for device :{i}", - maximum=total_mem[i], value=default_gpu_mem[i]) - - shared.gradio['cpu_memory'] = gr.Slider(label="cpu-memory in MiB", maximum=total_cpu_mem, - value=default_cpu_mem) - shared.gradio['transformers_info'] = gr.Markdown('load-in-4bit params:') - shared.gradio['compute_dtype'] = gr.Dropdown(label="compute_dtype", - choices=["bfloat16", "float16", "float32"], - value=shared.args.compute_dtype) - shared.gradio['quant_type'] = gr.Dropdown(label="quant_type", choices=["nf4", "fp4"], - value=shared.args.quant_type) - shared.gradio['threads'] = gr.Slider(label="threads", minimum=0, step=1, maximum=32, - value=shared.args.threads) - shared.gradio['n_batch'] = gr.Slider(label="n_batch", minimum=1, maximum=2048, - value=shared.args.n_batch) - shared.gradio['n_gpu_layers'] = gr.Slider(label="n-gpu-layers", minimum=0, maximum=128, - value=shared.args.n_gpu_layers) - shared.gradio['n_ctx'] = gr.Slider(minimum=0, maximum=16384, step=256, label="n_ctx", - value=shared.args.n_ctx) - shared.gradio['wbits'] = gr.Dropdown(label="wbits", choices=["None", 1, 2, 3, 4, 8], - value=shared.args.wbits if shared.args.wbits > 0 else "None") - shared.gradio['groupsize'] = gr.Dropdown(label="groupsize", choices=["None", 32, 64, 128, 1024], - value=shared.args.groupsize if shared.args.groupsize > 0 else "None") - shared.gradio['model_type'] = gr.Dropdown(label="model_type", - choices=["None", "llama", "opt", "gptj"], - value=shared.args.model_type or "None") - shared.gradio['pre_layer'] = gr.Slider(label="pre_layer", minimum=0, maximum=100, - value=shared.args.pre_layer[ - 0] if shared.args.pre_layer is not None else 0) - shared.gradio['autogptq_info'] = gr.Markdown( - 'On some systems, AutoGPTQ can be 2x slower than GPTQ-for-LLaMa. You can manually select the GPTQ-for-LLaMa loader above.') - shared.gradio['gpu_split'] = gr.Textbox(label='gpu-split', - info='Comma-separated list of VRAM (in GB) to use per GPU. 
Example: 20,7,7') - shared.gradio['max_seq_len'] = gr.Slider(label='max_seq_len', minimum=2048, maximum=16384, - step=256, info='Maximum sequence length.', - value=shared.args.max_seq_len) - shared.gradio['compress_pos_emb'] = gr.Slider(label='compress_pos_emb', minimum=1, maximum=8, - step=1, - info='Positional embeddings compression factor. Should typically be set to max_seq_len / 2048.', - value=shared.args.compress_pos_emb) - - with gr.Column(): - shared.gradio['triton'] = gr.Checkbox(label="triton", value=shared.args.triton) - shared.gradio['no_inject_fused_attention'] = gr.Checkbox(label="no_inject_fused_attention", - value=shared.args.no_inject_fused_attention, - info='Disable fused attention. Fused attention improves inference performance but uses more VRAM. Disable if running low on VRAM.') - shared.gradio['no_inject_fused_mlp'] = gr.Checkbox(label="no_inject_fused_mlp", - value=shared.args.no_inject_fused_mlp, - info='Affects Triton only. Disable fused MLP. Fused MLP improves performance but uses more VRAM. Disable if running low on VRAM.') - shared.gradio['no_use_cuda_fp16'] = gr.Checkbox(label="no_use_cuda_fp16", - value=shared.args.no_use_cuda_fp16, - info='This can make models faster on some systems.') - shared.gradio['desc_act'] = gr.Checkbox(label="desc_act", value=shared.args.desc_act, - info='\'desc_act\', \'wbits\', and \'groupsize\' are used for old models without a quantize_config.json.') - shared.gradio['cpu'] = gr.Checkbox(label="cpu", value=shared.args.cpu) - shared.gradio['load_in_8bit'] = gr.Checkbox(label="load-in-8bit", - value=shared.args.load_in_8bit) - shared.gradio['bf16'] = gr.Checkbox(label="bf16", value=shared.args.bf16) - shared.gradio['auto_devices'] = gr.Checkbox(label="auto-devices", - value=shared.args.auto_devices) - shared.gradio['disk'] = gr.Checkbox(label="disk", value=shared.args.disk) - shared.gradio['load_in_4bit'] = gr.Checkbox(label="load-in-4bit", - value=shared.args.load_in_4bit) - shared.gradio['use_double_quant'] = gr.Checkbox(label="use_double_quant", - value=shared.args.use_double_quant) - shared.gradio['no_mmap'] = gr.Checkbox(label="no-mmap", value=shared.args.no_mmap) - shared.gradio['mlock'] = gr.Checkbox(label="mlock", value=shared.args.mlock) - shared.gradio['llama_cpp_seed'] = gr.Number(label='Seed (0 for random)', - value=shared.args.llama_cpp_seed) - shared.gradio['trust_remote_code'] = gr.Checkbox(label="trust-remote-code", - value=shared.args.trust_remote_code, - info='Make sure to inspect the .py files inside the model folder before loading it with this option enabled.') - shared.gradio['gptq_for_llama_info'] = gr.Markdown( - 'GPTQ-for-LLaMa is currently 2x faster than AutoGPTQ on some systems. It is installed by default with the one-click installers. Otherwise, it has to be installed manually following the instructions here: [instructions](https://github.com/oobabooga/text-generation-webui/blob/main/docs/GPTQ-models-(4-bit-mode).md#installation-1).') - shared.gradio['exllama_info'] = gr.Markdown( - 'For more information, consult the [docs](https://github.com/oobabooga/text-generation-webui/blob/main/docs/ExLlama.md).') - shared.gradio['exllama_HF_info'] = gr.Markdown( - 'ExLlama_HF is a wrapper that lets you use ExLlama like a Transformers model, which means it can use the Transformers samplers. 
It\'s a bit slower than the regular ExLlama.') - - with gr.Column(): - with gr.Row(): - shared.gradio['autoload_model'] = gr.Checkbox(value=shared.settings['autoload_model'], - label='Autoload the model', - info='Whether to load the model as soon as it is selected in the Model dropdown.') - - shared.gradio['custom_model_menu'] = gr.Textbox(label="Download custom model or LoRA", - info="Enter the Hugging Face username/model path, for instance: facebook/galactica-125m. To specify a branch, add it at the end after a \":\" character like this: facebook/galactica-125m:main") - shared.gradio['download_model_button'] = gr.Button("Download") - - with gr.Row(): - shared.gradio['model_status'] = gr.Markdown( - 'No model is loaded' if shared.model_name == 'None' else 'Ready') - - shared.gradio['loader'].change(loaders.make_loader_params_visible, shared.gradio['loader'], - [shared.gradio[k] for k in loaders.get_all_params()]) - - # In this event handler, the interface state is read and updated - # with the model defaults (if any), and then the model is loaded - # unless "autoload_model" is unchecked - shared.gradio['model_menu'].change( - ui.gather_interface_values, [shared.gradio[k] for k in shared.input_elements], - shared.gradio['interface_state']).then( - apply_model_settings_to_state, [shared.gradio[k] for k in ['model_menu', 'interface_state']], - shared.gradio['interface_state']).then( - ui.apply_interface_values, shared.gradio['interface_state'], - [shared.gradio[k] for k in ui.list_interface_input_elements(chat=shared.is_chat())], show_progress=False).then( - update_model_parameters, shared.gradio['interface_state'], None).then( - load_model_wrapper, [shared.gradio[k] for k in ['model_menu', 'loader', 'autoload_model']], - shared.gradio['model_status'], show_progress=False) - - load.click( - ui.gather_interface_values, [shared.gradio[k] for k in shared.input_elements], - shared.gradio['interface_state']).then( - update_model_parameters, shared.gradio['interface_state'], None).then( - partial(load_model_wrapper, autoload=True), [shared.gradio[k] for k in ['model_menu', 'loader']], - shared.gradio['model_status'], show_progress=False) - - unload.click( - unload_model, None, None).then( - lambda: "Model unloaded", None, shared.gradio['model_status']) - - reload.click( - unload_model, None, None).then( - ui.gather_interface_values, [shared.gradio[k] for k in shared.input_elements], - shared.gradio['interface_state']).then( - update_model_parameters, shared.gradio['interface_state'], None).then( - partial(load_model_wrapper, autoload=True), [shared.gradio[k] for k in ['model_menu', 'loader']], - shared.gradio['model_status'], show_progress=False) - - save_settings.click( - ui.gather_interface_values, [shared.gradio[k] for k in shared.input_elements], - shared.gradio['interface_state']).then( - save_model_settings, [shared.gradio[k] for k in ['model_menu', 'interface_state']], - shared.gradio['model_status'], show_progress=False) - - shared.gradio['lora_menu_apply'].click(load_lora_wrapper, shared.gradio['lora_menu'], shared.gradio['model_status'], - show_progress=False) - shared.gradio['download_model_button'].click(download_model_wrapper, shared.gradio['custom_model_menu'], - shared.gradio['model_status'], show_progress=True) - shared.gradio['autoload_model'].change(lambda x: gr.update(visible=not x), shared.gradio['autoload_model'], load) - - -def create_chat_settings_menus(): - if not shared.is_chat(): - return - - with gr.Box(): - gr.Markdown("Chat parameters") - with gr.Row(): - with 
gr.Column(): - shared.gradio['max_new_tokens'] = gr.Slider(minimum=shared.settings['max_new_tokens_min'], - maximum=shared.settings['max_new_tokens_max'], step=1, - label='max_new_tokens', - value=shared.settings['max_new_tokens']) - shared.gradio['chat_generation_attempts'] = gr.Slider( - minimum=shared.settings['chat_generation_attempts_min'], - maximum=shared.settings['chat_generation_attempts_max'], - value=shared.settings['chat_generation_attempts'], step=1, - label='Generation attempts (for longer replies)', - info='New generations will be called until either this number is reached or no new content is generated between two iterations.') - - with gr.Column(): - shared.gradio['stop_at_newline'] = gr.Checkbox(value=shared.settings['stop_at_newline'], - label='Stop generating at new line character') - - -def create_settings_menus(default_preset): - generate_params = presets.load_preset(default_preset) - with gr.Row(): - with gr.Column(): - with gr.Row(): - shared.gradio['preset_menu'] = gr.Dropdown(choices=utils.get_available_presets(), - value=default_preset if not shared.args.flexgen else 'Naive', - label='Generation parameters preset', - elem_classes='slim-dropdown') - ui.create_refresh_button(shared.gradio['preset_menu'], lambda: None, - lambda: {'choices': utils.get_available_presets()}, 'refresh-button') - shared.gradio['save_preset'] = gr.Button('💾', elem_classes='refresh-button') - shared.gradio['delete_preset'] = gr.Button('🗑️', elem_classes='refresh-button') - - with gr.Column(): - shared.gradio['seed'] = gr.Number(value=shared.settings['seed'], label='Seed (-1 for random)') - - with gr.Row(): - with gr.Column(): - with gr.Box(): - gr.Markdown('Main parameters') - with gr.Row(): - with gr.Column(): - shared.gradio['temperature'] = gr.Slider(0.01, 1.99, value=generate_params['temperature'], - step=0.01, label='temperature') - shared.gradio['top_p'] = gr.Slider(0.0, 1.0, value=generate_params['top_p'], step=0.01, - label='top_p') - shared.gradio['top_k'] = gr.Slider(0, 200, value=generate_params['top_k'], step=1, - label='top_k') - shared.gradio['typical_p'] = gr.Slider(0.0, 1.0, value=generate_params['typical_p'], step=0.01, - label='typical_p') - shared.gradio['epsilon_cutoff'] = gr.Slider(0, 9, value=generate_params['epsilon_cutoff'], - step=0.01, label='epsilon_cutoff') - shared.gradio['eta_cutoff'] = gr.Slider(0, 20, value=generate_params['eta_cutoff'], step=0.01, - label='eta_cutoff') - - with gr.Column(): - shared.gradio['repetition_penalty'] = gr.Slider(1.0, 1.5, - value=generate_params['repetition_penalty'], - step=0.01, label='repetition_penalty') - shared.gradio['repetition_penalty_range'] = gr.Slider(0, 4096, step=64, value=generate_params[ - 'repetition_penalty_range'], label='repetition_penalty_range') - shared.gradio['encoder_repetition_penalty'] = gr.Slider(0.8, 1.5, value=generate_params[ - 'encoder_repetition_penalty'], step=0.01, label='encoder_repetition_penalty') - shared.gradio['no_repeat_ngram_size'] = gr.Slider(0, 20, step=1, - value=generate_params['no_repeat_ngram_size'], - label='no_repeat_ngram_size') - shared.gradio['min_length'] = gr.Slider(0, 2000, step=1, value=generate_params['min_length'], - label='min_length') - shared.gradio['tfs'] = gr.Slider(0.0, 1.0, value=generate_params['tfs'], step=0.01, label='tfs') - shared.gradio['top_a'] = gr.Slider(0.0, 1.0, value=generate_params['top_a'], step=0.01, - label='top_a') - shared.gradio['do_sample'] = gr.Checkbox(value=generate_params['do_sample'], label='do_sample') - - with gr.Accordion("Learn more", 
open=False): - gr.Markdown(""" - - Not all parameters are used by all loaders. See [this page](https://github.com/oobabooga/text-generation-webui/blob/main/docs/Generation-parameters.md) for details. - - For a technical description of the parameters, the [transformers documentation](https://huggingface.co/docs/transformers/main_classes/text_generation#transformers.GenerationConfig) is a good reference. - - ### Temperature - Primary factor to control randomness of outputs. 0 = deterministic (only the most likely token is used). Higher value = more randomness. - ### top_p - If not set to 1, select tokens with probabilities adding up to less than this number. Higher value = higher range of possible random results. - ### top_k - Similar to top_p, but select instead only the top_k most likely tokens. Higher value = higher range of possible random results. - ### typical_p - If not set to 1, select only tokens that are at least this much more likely to appear than random tokens, given the prior text. - ### epsilon_cutoff - In units of 1e-4; a reasonable value is 3. This sets a probability floor below which tokens are excluded from being sampled. Should be used with top_p, top_k, and eta_cutoff set to 0. - ### eta_cutoff - In units of 1e-4; a reasonable value is 3. Should be used with top_p, top_k, and epsilon_cutoff set to 0. - ### repetition_penalty - Exponential penalty factor for repeating prior tokens. 1 means no penalty, higher value = less repetition, lower value = more repetition. - ### repetition_penalty_range - The number of most recent tokens to consider for repetition penalty. 0 makes all tokens be used. - ### encoder_repetition_penalty - Also known as the "Hallucinations filter". Used to penalize tokens that are *not* in the prior text. Higher value = more likely to stay in context, lower value = more likely to diverge. - ### no_repeat_ngram_size - If not set to 0, specifies the length of token sets that are completely blocked from repeating at all. Higher values = blocks larger phrases, lower values = blocks words or letters from repeating. Only 0 or high values are a good idea in most cases. - ### min_length - Minimum generation length in tokens. - ### penalty_alpha - Contrastive Search is enabled by setting this to greater than zero and unchecking "do_sample". It should be used with a low value of top_k, for instance, top_k = 4. 
- - """, elem_classes="markdown") - - with gr.Column(): - create_chat_settings_menus() - with gr.Box(): - with gr.Row(): - with gr.Column(): - gr.Markdown('Contrastive search') - shared.gradio['penalty_alpha'] = gr.Slider(0, 5, value=generate_params['penalty_alpha'], - label='penalty_alpha') - - gr.Markdown('Beam search') - shared.gradio['num_beams'] = gr.Slider(1, 20, step=1, value=generate_params['num_beams'], - label='num_beams') - shared.gradio['length_penalty'] = gr.Slider(-5, 5, value=generate_params['length_penalty'], - label='length_penalty') - shared.gradio['early_stopping'] = gr.Checkbox(value=generate_params['early_stopping'], - label='early_stopping') - - with gr.Column(): - gr.Markdown('Mirostat (mode=1 is only for llama.cpp)') - shared.gradio['mirostat_mode'] = gr.Slider(0, 2, step=1, value=generate_params['mirostat_mode'], - label='mirostat_mode') - shared.gradio['mirostat_tau'] = gr.Slider(0, 10, step=0.01, - value=generate_params['mirostat_tau'], - label='mirostat_tau') - shared.gradio['mirostat_eta'] = gr.Slider(0, 1, step=0.01, - value=generate_params['mirostat_eta'], - label='mirostat_eta') - - with gr.Box(): - with gr.Row(): - with gr.Column(): - shared.gradio['truncation_length'] = gr.Slider(value=shared.settings['truncation_length'], - minimum=shared.settings['truncation_length_min'], - maximum=shared.settings['truncation_length_max'], - step=256, - label='Truncate the prompt up to this length', - info='The leftmost tokens are removed if the prompt exceeds this length. Most models require this to be at most 2048.') - shared.gradio['custom_stopping_strings'] = gr.Textbox(lines=1, value=shared.settings[ - "custom_stopping_strings"] or None, - label='Custom stopping strings', - info='In addition to the defaults. Written between "" and separated by commas. For instance: "\\nYour Assistant:", "\\nThe assistant:"') - with gr.Column(): - shared.gradio['ban_eos_token'] = gr.Checkbox(value=shared.settings['ban_eos_token'], - label='Ban the eos_token', - info='Forces the model to never end the generation prematurely.') - shared.gradio['add_bos_token'] = gr.Checkbox(value=shared.settings['add_bos_token'], - label='Add the bos_token to the beginning of prompts', - info='Disabling this can make the replies more creative.') - - shared.gradio['skip_special_tokens'] = gr.Checkbox(value=shared.settings['skip_special_tokens'], - label='Skip special tokens', - info='Some specific models need this unset.') - shared.gradio['stream'] = gr.Checkbox(value=not shared.args.no_stream, - label='Activate text streaming') - - shared.gradio['preset_menu'].change(presets.load_preset_for_ui, - [shared.gradio[k] for k in ['preset_menu', 'interface_state']], - [shared.gradio[k] for k in - ['interface_state', 'do_sample', 'temperature', 'top_p', 'typical_p', - 'epsilon_cutoff', 'eta_cutoff', 'repetition_penalty', - 'repetition_penalty_range', 'encoder_repetition_penalty', 'top_k', - 'min_length', 'no_repeat_ngram_size', 'num_beams', 'penalty_alpha', - 'length_penalty', 'early_stopping', 'mirostat_mode', 'mirostat_tau', - 'mirostat_eta', 'tfs', 'top_a']]) - - -def create_file_saving_menus(): - # Text file saver - with gr.Box(visible=False, elem_classes='file-saver') as shared.gradio['file_saver']: - shared.gradio['save_filename'] = gr.Textbox(lines=1, label='File name') - shared.gradio['save_root'] = gr.Textbox(lines=1, label='File folder', info='For reference. 
Unchangeable.', - interactive=False) - shared.gradio['save_contents'] = gr.Textbox(lines=10, label='File contents') - with gr.Row(): - shared.gradio['save_confirm'] = gr.Button('Save', elem_classes="small-button") - shared.gradio['save_cancel'] = gr.Button('Cancel', elem_classes="small-button") - - # Text file deleter - with gr.Box(visible=False, elem_classes='file-saver') as shared.gradio['file_deleter']: - shared.gradio['delete_filename'] = gr.Textbox(lines=1, label='File name') - shared.gradio['delete_root'] = gr.Textbox(lines=1, label='File folder', info='For reference. Unchangeable.', - interactive=False) - with gr.Row(): - shared.gradio['delete_confirm'] = gr.Button('Delete', elem_classes="small-button", variant='stop') - shared.gradio['delete_cancel'] = gr.Button('Cancel', elem_classes="small-button") - - # Character saver/deleter - if shared.is_chat(): - with gr.Box(visible=False, elem_classes='file-saver') as shared.gradio['character_saver']: - shared.gradio['save_character_filename'] = gr.Textbox(lines=1, label='File name', - info='The character will be saved to your characters/ folder with this base filename.') - with gr.Row(): - shared.gradio['save_character_confirm'] = gr.Button('Save', elem_classes="small-button") - shared.gradio['save_character_cancel'] = gr.Button('Cancel', elem_classes="small-button") - - with gr.Box(visible=False, elem_classes='file-saver') as shared.gradio['character_deleter']: - gr.Markdown('Confirm the character deletion?') - with gr.Row(): - shared.gradio['delete_character_confirm'] = gr.Button('Delete', elem_classes="small-button", - variant='stop') - shared.gradio['delete_character_cancel'] = gr.Button('Cancel', elem_classes="small-button") - - -def create_file_saving_event_handlers(): - shared.gradio['save_confirm'].click( - lambda x, y, z: utils.save_file(x + y, z), - [shared.gradio[k] for k in ['save_root', 'save_filename', 'save_contents']], None).then( - lambda: gr.update(visible=False), None, shared.gradio['file_saver']) - - shared.gradio['delete_confirm'].click( - lambda x, y: utils.delete_file(x + y), [shared.gradio[k] for k in ['delete_root', 'delete_filename']], - None).then( - lambda: gr.update(visible=False), None, shared.gradio['file_deleter']) - - shared.gradio['delete_cancel'].click(lambda: gr.update(visible=False), None, shared.gradio['file_deleter']) - shared.gradio['save_cancel'].click(lambda: gr.update(visible=False), None, shared.gradio['file_saver']) - if shared.is_chat(): - shared.gradio['save_character_confirm'].click( - chat.save_character, [shared.gradio[k] for k in - ['name2', 'greeting', 'context', 'character_picture', 'save_character_filename']], - None).then( - lambda: gr.update(visible=False), None, shared.gradio['character_saver']) - - shared.gradio['delete_character_confirm'].click( - chat.delete_character, shared.gradio['character_menu'], None).then( - lambda: gr.update(visible=False), None, shared.gradio['character_deleter']).then( - lambda: gr.update(choices=utils.get_available_characters()), outputs=shared.gradio['character_menu']) - - shared.gradio['save_character_cancel'].click(lambda: gr.update(visible=False), None, - shared.gradio['character_saver']) - shared.gradio['delete_character_cancel'].click(lambda: gr.update(visible=False), None, - shared.gradio['character_deleter']) - - shared.gradio['save_preset'].click( - ui.gather_interface_values, [shared.gradio[k] for k in shared.input_elements], - shared.gradio['interface_state']).then( - presets.generate_preset_yaml, shared.gradio['interface_state'], 
shared.gradio['save_contents']).then( - lambda: 'presets/', None, shared.gradio['save_root']).then( - lambda: 'My Preset.yaml', None, shared.gradio['save_filename']).then( - lambda: gr.update(visible=True), None, shared.gradio['file_saver']) - - shared.gradio['delete_preset'].click( - lambda x: f'{x}.yaml', shared.gradio['preset_menu'], shared.gradio['delete_filename']).then( - lambda: 'presets/', None, shared.gradio['delete_root']).then( - lambda: gr.update(visible=True), None, shared.gradio['file_deleter']) - - -def set_interface_arguments(interface_mode, extensions, bool_active): - modes = ["default", "notebook", "chat", "cai_chat"] - cmd_list = vars(shared.args) - bool_list = [k for k in cmd_list if type(cmd_list[k]) is bool and k not in modes] - - shared.args.extensions = extensions - for k in modes[1:]: - setattr(shared.args, k, False) - if interface_mode != "default": - setattr(shared.args, interface_mode, True) - - for k in bool_list: - setattr(shared.args, k, False) - for k in bool_active: - setattr(shared.args, k, True) - - shared.need_restart = True - - -def create_interface(): - # Defining some variables - gen_events = [] - default_preset = shared.settings['preset'] - default_text = load_prompt(shared.settings['prompt']) - title = 'Text generation web UI' - - # Authentication variables - auth = None - gradio_auth_creds = [] - if shared.args.gradio_auth: - gradio_auth_creds += [x.strip() for x in shared.args.gradio_auth.strip('"').replace('\n', '').split(',') if - x.strip()] - if shared.args.gradio_auth_path is not None: - with open(shared.args.gradio_auth_path, 'r', encoding="utf8") as file: - for line in file.readlines(): - gradio_auth_creds += [x.strip() for x in line.split(',') if x.strip()] - if gradio_auth_creds: - auth = [tuple(cred.split(':')) for cred in gradio_auth_creds] - - # Importing the extension files and executing their setup() functions - if shared.args.extensions is not None and len(shared.args.extensions) > 0: - extensions_module.load_extensions() - - # css/js strings - css = ui.css if not shared.is_chat() else ui.css + ui.chat_css - js = ui.main_js if not shared.is_chat() else ui.main_js + ui.chat_js - css += apply_extensions('css') - js += apply_extensions('js') - - with gr.Blocks(css=css, analytics_enabled=False, title=title, theme=ui.theme) as shared.gradio['interface']: - if Path("notification.mp3").exists(): - shared.gradio['audio_notification'] = gr.Audio(interactive=False, value="notification.mp3", - elem_id="audio_notification", visible=False) - audio_notification_js = "document.querySelector('#audio_notification audio')?.play();" - else: - audio_notification_js = "" - - # Floating menus for saving/deleting files - create_file_saving_menus() - - # Create chat mode interface - if shared.is_chat(): - shared.input_elements = ui.list_interface_input_elements(chat=True) - shared.gradio['interface_state'] = gr.State({k: None for k in shared.input_elements}) - shared.gradio['Chat input'] = gr.State() - shared.gradio['dummy'] = gr.State() - - with gr.Tab('Text generation', elem_id='main'): - shared.gradio['display'] = gr.HTML( - value=chat_html_wrapper(shared.history['visible'], shared.settings['name1'], - shared.settings['name2'], 'chat', 'cai-chat')) - shared.gradio['textbox'] = gr.Textbox(label='Input') - with gr.Row(): - shared.gradio['Stop'] = gr.Button('Stop', elem_id='stop') - shared.gradio['Generate'] = gr.Button('Generate', elem_id='Generate', variant='primary') - shared.gradio['Continue'] = gr.Button('Continue') - - with gr.Row(): - 
shared.gradio['Impersonate'] = gr.Button('Impersonate') - shared.gradio['Regenerate'] = gr.Button('Regenerate') - shared.gradio['Remove last'] = gr.Button('Remove last') - - with gr.Row(): - shared.gradio['Copy last reply'] = gr.Button('Copy last reply') - shared.gradio['Replace last reply'] = gr.Button('Replace last reply') - shared.gradio['Send dummy message'] = gr.Button('Send dummy message') - shared.gradio['Send dummy reply'] = gr.Button('Send dummy reply') - - with gr.Row(): - shared.gradio['Clear history'] = gr.Button('Clear history') - shared.gradio['Clear history-confirm'] = gr.Button('Confirm', variant='stop', visible=False) - shared.gradio['Clear history-cancel'] = gr.Button('Cancel', visible=False) - - with gr.Row(): - shared.gradio['start_with'] = gr.Textbox(label='Start reply with', placeholder='Sure thing!', - value=shared.settings['start_with']) - - with gr.Row(): - shared.gradio['mode'] = gr.Radio(choices=['chat', 'chat-instruct', 'instruct'], - value=shared.settings['mode'] if shared.settings['mode'] in [ - 'chat', 'instruct', 'chat-instruct'] else 'chat', label='Mode', - info='Defines how the chat prompt is generated. In instruct and chat-instruct modes, the instruction template selected under "Chat settings" must match the current model.') - shared.gradio['chat_style'] = gr.Dropdown(choices=utils.get_available_chat_styles(), - label='Chat style', value=shared.settings['chat_style'], - visible=shared.settings['mode'] != 'instruct') - - with gr.Tab('Chat settings', elem_id='chat-settings'): - - with gr.Tab("Character"): - with gr.Row(): - with gr.Column(scale=8): - with gr.Row(): - shared.gradio['character_menu'] = gr.Dropdown(choices=utils.get_available_characters(), - label='Character', - elem_id='character-menu', - info='Used in chat and chat-instruct modes.', - elem_classes='slim-dropdown') - ui.create_refresh_button(shared.gradio['character_menu'], lambda: None, - lambda: {'choices': utils.get_available_characters()}, - 'refresh-button') - shared.gradio['save_character'] = gr.Button('💾', elem_classes='refresh-button') - shared.gradio['delete_character'] = gr.Button('🗑️', elem_classes='refresh-button') - - shared.gradio['name1'] = gr.Textbox(value=shared.settings['name1'], lines=1, - label='Your name') - shared.gradio['name2'] = gr.Textbox(value=shared.settings['name2'], lines=1, - label='Character\'s name') - shared.gradio['context'] = gr.Textbox(value=shared.settings['context'], lines=4, - label='Context') - shared.gradio['greeting'] = gr.Textbox(value=shared.settings['greeting'], lines=4, - label='Greeting') - - with gr.Column(scale=1): - shared.gradio['character_picture'] = gr.Image(label='Character picture', type='pil') - shared.gradio['your_picture'] = gr.Image(label='Your picture', type='pil', - value=Image.open(Path('cache/pfp_me.png')) if Path( - 'cache/pfp_me.png').exists() else None) - - with gr.Tab("Instruction template"): - with gr.Row(): - with gr.Row(): - shared.gradio['instruction_template'] = gr.Dropdown( - choices=utils.get_available_instruction_templates(), label='Instruction template', - value='None', - info='Change this according to the model/LoRA that you are using. 
Used in instruct and chat-instruct modes.', - elem_classes='slim-dropdown') - ui.create_refresh_button(shared.gradio['instruction_template'], lambda: None, - lambda: {'choices': utils.get_available_instruction_templates()}, - 'refresh-button') - shared.gradio['save_template'] = gr.Button('💾', elem_classes='refresh-button') - shared.gradio['delete_template'] = gr.Button('🗑️ ', elem_classes='refresh-button') - - shared.gradio['name1_instruct'] = gr.Textbox(value='', lines=2, label='User string') - shared.gradio['name2_instruct'] = gr.Textbox(value='', lines=1, label='Bot string') - shared.gradio['context_instruct'] = gr.Textbox(value='', lines=4, label='Context') - shared.gradio['turn_template'] = gr.Textbox(value=shared.settings['turn_template'], lines=1, - label='Turn template', - info='Used to precisely define the placement of spaces and new line characters in instruction prompts.') - with gr.Row(): - shared.gradio['chat-instruct_command'] = gr.Textbox( - value=shared.settings['chat-instruct_command'], lines=4, - label='Command for chat-instruct mode', - info='<|character|> gets replaced by the bot name, and <|prompt|> gets replaced by the regular chat prompt.') - - with gr.Tab('Chat history'): - with gr.Row(): - with gr.Column(): - shared.gradio['download'] = gr.File(label="Download") - shared.gradio['download_button'] = gr.Button(value='Refresh') - - with gr.Column(): - shared.gradio['upload_chat_history'] = gr.File(type='binary', file_types=['.json', '.txt'], - label="Upload") - - with gr.Tab('Upload character'): - with gr.Tab('JSON'): - with gr.Row(): - shared.gradio['upload_json'] = gr.File(type='binary', file_types=['.json'], - label='JSON File') - shared.gradio['upload_img_bot'] = gr.Image(type='pil', label='Profile Picture (optional)') - - shared.gradio['Submit character'] = gr.Button(value='Submit', interactive=False) - - with gr.Tab('TavernAI'): - with gr.Row(): - with gr.Column(): - shared.gradio['upload_img_tavern'] = gr.Image(type='pil', label='TavernAI PNG File', - elem_id="upload_img_tavern") - shared.gradio['tavern_json'] = gr.State() - with gr.Column(): - shared.gradio['tavern_name'] = gr.Textbox(value='', lines=1, label='Name', - interactive=False) - shared.gradio['tavern_desc'] = gr.Textbox(value='', lines=4, max_lines=4, - label='Description', interactive=False) - - shared.gradio['Submit tavern character'] = gr.Button(value='Submit', interactive=False) - - with gr.Tab("Parameters", elem_id="parameters"): - create_settings_menus(default_preset) - - # Create notebook mode interface - elif shared.args.notebook: - shared.input_elements = ui.list_interface_input_elements(chat=False) - shared.gradio['interface_state'] = gr.State({k: None for k in shared.input_elements}) - shared.gradio['last_input'] = gr.State('') - with gr.Tab("Text generation", elem_id="main"): - with gr.Row(): - with gr.Column(scale=4): - with gr.Tab('Raw'): - shared.gradio['textbox'] = gr.Textbox(value=default_text, elem_classes="textbox", lines=27) - - with gr.Tab('Markdown'): - shared.gradio['markdown_render'] = gr.Button('Render') - shared.gradio['markdown'] = gr.Markdown() - - with gr.Tab('HTML'): - shared.gradio['html'] = gr.HTML() - - with gr.Row(): - shared.gradio['Generate'] = gr.Button('Generate', variant='primary', - elem_classes="small-button") - shared.gradio['Stop'] = gr.Button('Stop', elem_classes="small-button") - shared.gradio['Undo'] = gr.Button('Undo', elem_classes="small-button") - shared.gradio['Regenerate'] = gr.Button('Regenerate', elem_classes="small-button") - - with 
gr.Column(scale=1): - gr.HTML('
        ') - shared.gradio['max_new_tokens'] = gr.Slider(minimum=shared.settings['max_new_tokens_min'], - maximum=shared.settings['max_new_tokens_max'], - step=1, label='max_new_tokens', - value=shared.settings['max_new_tokens']) - with gr.Row(): - shared.gradio['prompt_menu'] = gr.Dropdown(choices=utils.get_available_prompts(), - value='None', label='Prompt', - elem_classes='slim-dropdown') - ui.create_refresh_button(shared.gradio['prompt_menu'], lambda: None, - lambda: {'choices': utils.get_available_prompts()}, - ['refresh-button', 'refresh-button-small']) - shared.gradio['save_prompt'] = gr.Button('💾', elem_classes=['refresh-button', - 'refresh-button-small']) - shared.gradio['delete_prompt'] = gr.Button('🗑️', elem_classes=['refresh-button', - 'refresh-button-small']) - - shared.gradio['count_tokens'] = gr.Button('Count tokens') - shared.gradio['status'] = gr.Markdown('') - - with gr.Tab("Parameters", elem_id="parameters"): - create_settings_menus(default_preset) - - # Create default mode interface - else: - shared.input_elements = ui.list_interface_input_elements(chat=False) - shared.gradio['interface_state'] = gr.State({k: None for k in shared.input_elements}) - shared.gradio['last_input'] = gr.State('') - with gr.Tab("Text generation", elem_id="main"): - with gr.Row(): - with gr.Column(): - shared.gradio['textbox'] = gr.Textbox(value=default_text, elem_classes="textbox_default", - lines=27, label='Input') - shared.gradio['max_new_tokens'] = gr.Slider(minimum=shared.settings['max_new_tokens_min'], - maximum=shared.settings['max_new_tokens_max'], - step=1, label='max_new_tokens', - value=shared.settings['max_new_tokens']) - with gr.Row(): - shared.gradio['Generate'] = gr.Button('Generate', variant='primary') - shared.gradio['Stop'] = gr.Button('Stop') - shared.gradio['Continue'] = gr.Button('Continue') - shared.gradio['count_tokens'] = gr.Button('Count tokens') - - with gr.Row(): - shared.gradio['prompt_menu'] = gr.Dropdown(choices=utils.get_available_prompts(), - value='None', label='Prompt', - elem_classes='slim-dropdown') - ui.create_refresh_button(shared.gradio['prompt_menu'], lambda: None, - lambda: {'choices': utils.get_available_prompts()}, - 'refresh-button') - shared.gradio['save_prompt'] = gr.Button('💾', elem_classes='refresh-button') - shared.gradio['delete_prompt'] = gr.Button('🗑️', elem_classes='refresh-button') - - shared.gradio['status'] = gr.Markdown('') - - with gr.Column(): - with gr.Tab('Raw'): - shared.gradio['output_textbox'] = gr.Textbox(elem_classes="textbox_default_output", - lines=27, label='Output') - - with gr.Tab('Markdown'): - shared.gradio['markdown_render'] = gr.Button('Render') - shared.gradio['markdown'] = gr.Markdown() - - with gr.Tab('HTML'): - shared.gradio['html'] = gr.HTML() - - with gr.Tab("Parameters", elem_id="parameters"): - create_settings_menus(default_preset) - - # Model tab - with gr.Tab("Model", elem_id="model-tab"): - create_model_menus() - - # Training tab - with gr.Tab("Training", elem_id="training-tab"): - training.create_train_interface() - - # Interface mode tab - with gr.Tab("Interface mode", elem_id="interface-mode"): - modes = ["default", "notebook", "chat"] - current_mode = "default" - for mode in modes[1:]: - if getattr(shared.args, mode): - current_mode = mode - break - - cmd_list = vars(shared.args) - bool_list = sorted( - [k for k in cmd_list if type(cmd_list[k]) is bool and k not in modes + ui.list_model_elements()]) - bool_active = [k for k in bool_list if vars(shared.args)[k]] - - with gr.Row(): - 
shared.gradio['interface_modes_menu'] = gr.Dropdown(choices=modes, value=current_mode, label="Mode") - shared.gradio['reset_interface'] = gr.Button("Apply and restart the interface", - elem_classes="small-button") - shared.gradio['toggle_dark_mode'] = gr.Button('Toggle dark/light mode', elem_classes="small-button") - - with gr.Row(): - with gr.Column(): - shared.gradio['extensions_menu'] = gr.CheckboxGroup(choices=utils.get_available_extensions(), - value=shared.args.extensions, - label="Available extensions", - info='Note that some of these extensions may require manually installing Python requirements through the command: pip install -r extensions/extension_name/requirements.txt', - elem_classes='checkboxgroup-table') - - with gr.Column(): - shared.gradio['bool_menu'] = gr.CheckboxGroup(choices=bool_list, value=bool_active, - label="Boolean command-line flags", - elem_classes='checkboxgroup-table') - - with gr.Row(): - extension_name = gr.Textbox(lines=1, label='Install or update an extension', - info='Enter the GitHub URL below. For a list of extensions, see: https://github.com/oobabooga/text-generation-webui-extensions ⚠️ WARNING ⚠️ : extensions can execute arbitrary code. Make sure to inspect their source code before activating them.') - extension_install = gr.Button('Install or update', elem_classes="small-button") - - extension_status = gr.Markdown() - - extension_install.click( - clone_or_pull_repository, extension_name, extension_status, show_progress=False).then( - lambda: gr.update(choices=utils.get_available_extensions(), value=shared.args.extensions), - outputs=shared.gradio['extensions_menu']) - - # Reset interface event - shared.gradio['reset_interface'].click( - set_interface_arguments, - [shared.gradio[k] for k in ['interface_modes_menu', 'extensions_menu', 'bool_menu']], None).then( - lambda: None, None, None, - _js='() => {document.body.innerHTML=\'
        Reloading...
        \'; setTimeout(function(){location.reload()},2500); return []}') - - shared.gradio['toggle_dark_mode'].click(lambda: None, None, None, - _js='() => {document.getElementsByTagName("body")[0].classList.toggle("dark")}') - - # chat mode event handlers - if shared.is_chat(): - shared.input_params = [shared.gradio[k] for k in ['Chat input', 'start_with', 'interface_state']] - clear_arr = [shared.gradio[k] for k in ['Clear history-confirm', 'Clear history', 'Clear history-cancel']] - shared.reload_inputs = [shared.gradio[k] for k in ['name1', 'name2', 'mode', 'chat_style']] - - gen_events.append(shared.gradio['Generate'].click( - ui.gather_interface_values, [shared.gradio[k] for k in shared.input_elements], - shared.gradio['interface_state']).then( - lambda x: (x, ''), shared.gradio['textbox'], [shared.gradio['Chat input'], shared.gradio['textbox']], - show_progress=False).then( - chat.generate_chat_reply_wrapper, shared.input_params, shared.gradio['display'], - show_progress=False).then( - chat.save_history, shared.gradio['mode'], None, show_progress=False).then( - lambda: None, None, None, _js=f"() => {{{audio_notification_js}}}") - ) - - gen_events.append(shared.gradio['textbox'].submit( - ui.gather_interface_values, [shared.gradio[k] for k in shared.input_elements], - shared.gradio['interface_state']).then( - lambda x: (x, ''), shared.gradio['textbox'], [shared.gradio['Chat input'], shared.gradio['textbox']], - show_progress=False).then( - chat.generate_chat_reply_wrapper, shared.input_params, shared.gradio['display'], - show_progress=False).then( - chat.save_history, shared.gradio['mode'], None, show_progress=False).then( - lambda: None, None, None, _js=f"() => {{{audio_notification_js}}}") - ) - - gen_events.append(shared.gradio['Regenerate'].click( - ui.gather_interface_values, [shared.gradio[k] for k in shared.input_elements], - shared.gradio['interface_state']).then( - partial(chat.generate_chat_reply_wrapper, regenerate=True), shared.input_params, - shared.gradio['display'], show_progress=False).then( - chat.save_history, shared.gradio['mode'], None, show_progress=False).then( - lambda: None, None, None, _js=f"() => {{{audio_notification_js}}}") - ) - - gen_events.append(shared.gradio['Continue'].click( - ui.gather_interface_values, [shared.gradio[k] for k in shared.input_elements], - shared.gradio['interface_state']).then( - partial(chat.generate_chat_reply_wrapper, _continue=True), shared.input_params, - shared.gradio['display'], show_progress=False).then( - chat.save_history, shared.gradio['mode'], None, show_progress=False).then( - lambda: None, None, None, _js=f"() => {{{audio_notification_js}}}") - ) - - gen_events.append(shared.gradio['Impersonate'].click( - ui.gather_interface_values, [shared.gradio[k] for k in shared.input_elements], - shared.gradio['interface_state']).then( - lambda x: x, shared.gradio['textbox'], shared.gradio['Chat input'], show_progress=False).then( - chat.impersonate_wrapper, shared.input_params, shared.gradio['textbox'], show_progress=False).then( - lambda: None, None, None, _js=f"() => {{{audio_notification_js}}}") - ) - - shared.gradio['Replace last reply'].click( - chat.replace_last_reply, shared.gradio['textbox'], None).then( - lambda: '', None, shared.gradio['textbox'], show_progress=False).then( - chat.save_history, shared.gradio['mode'], None, show_progress=False).then( - chat.redraw_html, shared.reload_inputs, shared.gradio['display']) - - shared.gradio['Send dummy message'].click( - chat.send_dummy_message, shared.gradio['textbox'], 
None).then( - lambda: '', None, shared.gradio['textbox'], show_progress=False).then( - chat.save_history, shared.gradio['mode'], None, show_progress=False).then( - chat.redraw_html, shared.reload_inputs, shared.gradio['display']) - - shared.gradio['Send dummy reply'].click( - chat.send_dummy_reply, shared.gradio['textbox'], None).then( - lambda: '', None, shared.gradio['textbox'], show_progress=False).then( - chat.save_history, shared.gradio['mode'], None, show_progress=False).then( - chat.redraw_html, shared.reload_inputs, shared.gradio['display']) - - shared.gradio['Clear history-confirm'].click( - lambda: [gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, - clear_arr).then( - chat.clear_chat_log, [shared.gradio[k] for k in ['greeting', 'mode']], None).then( - chat.save_history, shared.gradio['mode'], None, show_progress=False).then( - chat.redraw_html, shared.reload_inputs, shared.gradio['display']) - - shared.gradio['Stop'].click( - stop_everything_event, None, None, queue=False, - cancels=gen_events if shared.args.no_stream else None).then( - chat.redraw_html, shared.reload_inputs, shared.gradio['display']) - - shared.gradio['mode'].change( - lambda x: gr.update(visible=x != 'instruct'), shared.gradio['mode'], shared.gradio['chat_style'], - show_progress=False).then( - chat.redraw_html, shared.reload_inputs, shared.gradio['display']) - - shared.gradio['chat_style'].change(chat.redraw_html, shared.reload_inputs, shared.gradio['display']) - shared.gradio['instruction_template'].change( - partial(chat.load_character, instruct=True), - [shared.gradio[k] for k in ['instruction_template', 'name1_instruct', 'name2_instruct']], - [shared.gradio[k] for k in - ['name1_instruct', 'name2_instruct', 'dummy', 'dummy', 'context_instruct', 'turn_template']]) - - shared.gradio['upload_chat_history'].upload( - chat.load_history, [shared.gradio[k] for k in ['upload_chat_history', 'name1', 'name2']], None).then( - chat.redraw_html, shared.reload_inputs, shared.gradio['display']) - - shared.gradio['Copy last reply'].click(chat.send_last_reply_to_input, None, shared.gradio['textbox'], - show_progress=False) - shared.gradio['Clear history'].click( - lambda: [gr.update(visible=True), gr.update(visible=False), gr.update(visible=True)], None, clear_arr) - shared.gradio['Clear history-cancel'].click( - lambda: [gr.update(visible=False), gr.update(visible=True), gr.update(visible=False)], None, clear_arr) - shared.gradio['Remove last'].click( - chat.remove_last_message, None, shared.gradio['textbox'], show_progress=False).then( - chat.save_history, shared.gradio['mode'], None, show_progress=False).then( - chat.redraw_html, shared.reload_inputs, shared.gradio['display']) - - # Save/delete a character - shared.gradio['save_character'].click( - lambda x: x, shared.gradio['name2'], shared.gradio['save_character_filename']).then( - lambda: gr.update(visible=True), None, shared.gradio['character_saver']) - - shared.gradio['delete_character'].click(lambda: gr.update(visible=True), None, - shared.gradio['character_deleter']) - - shared.gradio['save_template'].click( - lambda: 'My Template.yaml', None, shared.gradio['save_filename']).then( - lambda: 'characters/instruction-following/', None, shared.gradio['save_root']).then( - chat.generate_instruction_template_yaml, - [shared.gradio[k] for k in ['name1_instruct', 'name2_instruct', 'context_instruct', 'turn_template']], - shared.gradio['save_contents']).then( - lambda: gr.update(visible=True), None, shared.gradio['file_saver']) - - 
shared.gradio['delete_template'].click( - lambda x: f'{x}.yaml', shared.gradio['instruction_template'], shared.gradio['delete_filename']).then( - lambda: 'characters/instruction-following/', None, shared.gradio['delete_root']).then( - lambda: gr.update(visible=True), None, shared.gradio['file_deleter']) - - shared.gradio['download_button'].click(lambda x: chat.save_history(x, timestamp=True, user_request=True), - shared.gradio['mode'], shared.gradio['download']) - shared.gradio['Submit character'].click(chat.upload_character, - [shared.gradio['upload_json'], shared.gradio['upload_img_bot']], - [shared.gradio['character_menu']]) - shared.gradio['upload_json'].upload(lambda: gr.update(interactive=True), None, - [shared.gradio['Submit character']]) - shared.gradio['upload_json'].clear(lambda: gr.update(interactive=False), None, - [shared.gradio['Submit character']]) - - shared.gradio['character_menu'].change( - partial(chat.load_character, instruct=False), - [shared.gradio[k] for k in ['character_menu', 'name1', 'name2']], [shared.gradio[k] for k in - ['name1', 'name2', - 'character_picture', 'greeting', - 'context', 'dummy']]).then( - chat.redraw_html, shared.reload_inputs, shared.gradio['display']) - - shared.gradio['Submit tavern character'].click(chat.upload_tavern_character, - [shared.gradio['upload_img_tavern'], - shared.gradio['tavern_json']], - [shared.gradio['character_menu']]) - shared.gradio['upload_img_tavern'].upload(chat.check_tavern_character, shared.gradio['upload_img_tavern'], - [shared.gradio[k] for k in - ['tavern_name', 'tavern_desc', 'tavern_json', - 'Submit tavern character']], show_progress=False) - shared.gradio['upload_img_tavern'].clear(lambda: (None, None, None, gr.update(interactive=False)), None, - [shared.gradio[k] for k in - ['tavern_name', 'tavern_desc', 'tavern_json', - 'Submit tavern character']], show_progress=False) - shared.gradio['your_picture'].change( - chat.upload_your_profile_picture, shared.gradio['your_picture'], None).then( - partial(chat.redraw_html, reset_cache=True), shared.reload_inputs, shared.gradio['display']) - - # notebook/default modes event handlers - else: - shared.input_params = [shared.gradio[k] for k in ['textbox', 'interface_state']] - if shared.args.notebook: - output_params = [shared.gradio[k] for k in ['textbox', 'html']] - else: - output_params = [shared.gradio[k] for k in ['output_textbox', 'html']] - - gen_events.append(shared.gradio['Generate'].click( - lambda x: x, shared.gradio['textbox'], shared.gradio['last_input']).then( - ui.gather_interface_values, [shared.gradio[k] for k in shared.input_elements], - shared.gradio['interface_state']).then( - generate_reply_wrapper, shared.input_params, output_params, show_progress=False).then( - lambda: None, None, None, _js=f"() => {{{audio_notification_js}}}") - # lambda: None, None, None, _js="() => {element = document.getElementsByTagName('textarea')[0]; element.scrollTop = element.scrollHeight}") - ) - - gen_events.append(shared.gradio['textbox'].submit( - lambda x: x, shared.gradio['textbox'], shared.gradio['last_input']).then( - ui.gather_interface_values, [shared.gradio[k] for k in shared.input_elements], - shared.gradio['interface_state']).then( - generate_reply_wrapper, shared.input_params, output_params, show_progress=False).then( - lambda: None, None, None, _js=f"() => {{{audio_notification_js}}}") - # lambda: None, None, None, _js="() => {element = document.getElementsByTagName('textarea')[0]; element.scrollTop = element.scrollHeight}") - ) - - if shared.args.notebook: - 
shared.gradio['Undo'].click(lambda x: x, shared.gradio['last_input'], shared.gradio['textbox'], - show_progress=False) - shared.gradio['markdown_render'].click(lambda x: x, shared.gradio['textbox'], shared.gradio['markdown'], - queue=False) - gen_events.append(shared.gradio['Regenerate'].click( - lambda x: x, shared.gradio['last_input'], shared.gradio['textbox'], show_progress=False).then( - ui.gather_interface_values, [shared.gradio[k] for k in shared.input_elements], - shared.gradio['interface_state']).then( - generate_reply_wrapper, shared.input_params, output_params, show_progress=False).then( - lambda: None, None, None, _js=f"() => {{{audio_notification_js}}}") - # lambda: None, None, None, _js="() => {element = document.getElementsByTagName('textarea')[0]; element.scrollTop = element.scrollHeight}") - ) - else: - shared.gradio['markdown_render'].click(lambda x: x, shared.gradio['output_textbox'], - shared.gradio['markdown'], queue=False) - gen_events.append(shared.gradio['Continue'].click( - ui.gather_interface_values, [shared.gradio[k] for k in shared.input_elements], - shared.gradio['interface_state']).then( - generate_reply_wrapper, [shared.gradio['output_textbox']] + shared.input_params[1:], output_params, - show_progress=False).then( - lambda: None, None, None, _js=f"() => {{{audio_notification_js}}}") - # lambda: None, None, None, _js="() => {element = document.getElementsByTagName('textarea')[1]; element.scrollTop = element.scrollHeight}") - ) - - shared.gradio['Stop'].click(stop_everything_event, None, None, queue=False, - cancels=gen_events if shared.args.no_stream else None) - shared.gradio['prompt_menu'].change(load_prompt, shared.gradio['prompt_menu'], shared.gradio['textbox'], - show_progress=False) - shared.gradio['save_prompt'].click( - lambda x: x, shared.gradio['textbox'], shared.gradio['save_contents']).then( - lambda: 'prompts/', None, shared.gradio['save_root']).then( - lambda: utils.current_time() + '.txt', None, shared.gradio['save_filename']).then( - lambda: gr.update(visible=True), None, shared.gradio['file_saver']) - - shared.gradio['delete_prompt'].click( - lambda: 'prompts/', None, shared.gradio['delete_root']).then( - lambda x: x + '.txt', shared.gradio['prompt_menu'], shared.gradio['delete_filename']).then( - lambda: gr.update(visible=True), None, shared.gradio['file_deleter']) - - shared.gradio['count_tokens'].click(count_tokens, shared.gradio['textbox'], shared.gradio['status'], - show_progress=False) - - create_file_saving_event_handlers() - shared.gradio['interface'].load(lambda: None, None, None, _js=f"() => {{{js}}}") - if shared.settings['dark_theme']: - shared.gradio['interface'].load(lambda: None, None, None, - _js="() => document.getElementsByTagName('body')[0].classList.add('dark')") - - shared.gradio['interface'].load(partial(ui.apply_interface_values, {}, use_persistent=True), None, - [shared.gradio[k] for k in - ui.list_interface_input_elements(chat=shared.is_chat())], show_progress=False) - - # Extensions tabs - extensions_module.create_extensions_tabs() - - # Extensions block - extensions_module.create_extensions_block() - - # Launch the interface - shared.gradio['interface'].queue() - if shared.args.listen: - shared.gradio['interface'].launch(prevent_thread_lock=True, share=shared.args.share, - server_name=shared.args.listen_host or '0.0.0.0', - server_port=shared.args.listen_port, inbrowser=shared.args.auto_launch, - auth=auth) - else: - shared.gradio['interface'].launch(prevent_thread_lock=True, share=shared.args.share, - 
server_port=shared.args.listen_port, inbrowser=shared.args.auto_launch, - auth=auth) - - -if __name__ == "__main__": - # Loading custom settings - settings_file = None - if shared.args.settings is not None and Path(shared.args.settings).exists(): - settings_file = Path(shared.args.settings) - elif Path('settings.yaml').exists(): - settings_file = Path('settings.yaml') - elif Path('settings.json').exists(): - settings_file = Path('settings.json') - - if settings_file is not None: - logger.info(f"Loading settings from {settings_file}...") - file_contents = open(settings_file, 'r', encoding='utf-8').read() - new_settings = json.loads(file_contents) if settings_file.suffix == "json" else yaml.safe_load(file_contents) - for item in new_settings: - shared.settings[item] = new_settings[item] - - # Set default model settings based on settings file - shared.model_config['.*'] = { - 'wbits': 'None', - 'model_type': 'None', - 'groupsize': 'None', - 'pre_layer': 0, - 'mode': shared.settings['mode'], - 'skip_special_tokens': shared.settings['skip_special_tokens'], - 'custom_stopping_strings': shared.settings['custom_stopping_strings'], - 'truncation_length': shared.settings['truncation_length'], - } - - shared.model_config.move_to_end('.*', last=False) # Move to the beginning - - # Default extensions - extensions_module.available_extensions = utils.get_available_extensions() - if shared.is_chat(): - for extension in shared.settings['chat_default_extensions']: - shared.args.extensions = shared.args.extensions or [] - if extension not in shared.args.extensions: - shared.args.extensions.append(extension) - else: - for extension in shared.settings['default_extensions']: - shared.args.extensions = shared.args.extensions or [] - if extension not in shared.args.extensions: - shared.args.extensions.append(extension) - - available_models = utils.get_available_models() - - # Model defined through --model - if shared.args.model is not None: - shared.model_name = shared.args.model - - # Only one model is available - elif len(available_models) == 1: - shared.model_name = available_models[0] - - # Select the model from a command-line menu - elif shared.args.model_menu: - if len(available_models) == 0: - logger.error('No models are available! Please download at least one.') - sys.exit(0) - else: - print('The following models are available:\n') - for i, model in enumerate(available_models): - print(f'{i + 1}. {model}') - - print(f'\nWhich one do you want to load? 
1-{len(available_models)}\n') - i = int(input()) - 1 - print() - - shared.model_name = available_models[i] - - # If any model has been selected, load it - if shared.model_name != 'None': - model_settings = get_model_settings_from_yamls(shared.model_name) - shared.settings.update(model_settings) # hijacking the interface defaults - update_model_parameters(model_settings, initial=True) # hijacking the command-line arguments - - # Load the model - shared.model, shared.tokenizer = load_model(shared.model_name) - if shared.args.lora: - add_lora_to_model(shared.args.lora) - - # Force a character to be loaded - if shared.is_chat(): - shared.persistent_interface_state.update({ - 'mode': shared.settings['mode'], - 'character_menu': shared.args.character or shared.settings['character'], - 'instruction_template': shared.settings['instruction_template'] - }) - - shared.persistent_interface_state.update({ - 'loader': shared.args.loader or 'Transformers', - }) - - shared.generation_lock = Lock() - # Launch the web UI - create_interface() - while True: - time.sleep(0.5) - if shared.need_restart: - shared.need_restart = False - time.sleep(0.5) - shared.gradio['interface'].close() - time.sleep(0.5) - create_interface() diff --git a/spaces/ybelkada/interfacegan_pp/models/stylegan_tf_official/metrics/frechet_inception_distance.py b/spaces/ybelkada/interfacegan_pp/models/stylegan_tf_official/metrics/frechet_inception_distance.py deleted file mode 100644 index 41f71fe4bfb85218cc283b3f7bc3a34fea5f790d..0000000000000000000000000000000000000000 --- a/spaces/ybelkada/interfacegan_pp/models/stylegan_tf_official/metrics/frechet_inception_distance.py +++ /dev/null @@ -1,72 +0,0 @@ -# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. -# -# This work is licensed under the Creative Commons Attribution-NonCommercial -# 4.0 International License. To view a copy of this license, visit -# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to -# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. - -"""Frechet Inception Distance (FID).""" - -import os -import numpy as np -import scipy -import tensorflow as tf -import dnnlib.tflib as tflib - -from metrics import metric_base -from training import misc - -#---------------------------------------------------------------------------- - -class FID(metric_base.MetricBase): - def __init__(self, num_images, minibatch_per_gpu, **kwargs): - super().__init__(**kwargs) - self.num_images = num_images - self.minibatch_per_gpu = minibatch_per_gpu - - def _evaluate(self, Gs, num_gpus): - minibatch_size = num_gpus * self.minibatch_per_gpu - inception = misc.load_pkl('https://drive.google.com/uc?id=1MzTY44rLToO5APn8TZmfR7_ENSe5aZUn') # inception_v3_features.pkl - activations = np.empty([self.num_images, inception.output_shape[1]], dtype=np.float32) - - # Calculate statistics for reals. 
- cache_file = self._get_cache_file_for_reals(num_images=self.num_images) - os.makedirs(os.path.dirname(cache_file), exist_ok=True) - if os.path.isfile(cache_file): - mu_real, sigma_real = misc.load_pkl(cache_file) - else: - for idx, images in enumerate(self._iterate_reals(minibatch_size=minibatch_size)): - begin = idx * minibatch_size - end = min(begin + minibatch_size, self.num_images) - activations[begin:end] = inception.run(images[:end-begin], num_gpus=num_gpus, assume_frozen=True) - if end == self.num_images: - break - mu_real = np.mean(activations, axis=0) - sigma_real = np.cov(activations, rowvar=False) - misc.save_pkl((mu_real, sigma_real), cache_file) - - # Construct TensorFlow graph. - result_expr = [] - for gpu_idx in range(num_gpus): - with tf.device('/gpu:%d' % gpu_idx): - Gs_clone = Gs.clone() - inception_clone = inception.clone() - latents = tf.random_normal([self.minibatch_per_gpu] + Gs_clone.input_shape[1:]) - images = Gs_clone.get_output_for(latents, None, is_validation=True, randomize_noise=True) - images = tflib.convert_images_to_uint8(images) - result_expr.append(inception_clone.get_output_for(images)) - - # Calculate statistics for fakes. - for begin in range(0, self.num_images, minibatch_size): - end = min(begin + minibatch_size, self.num_images) - activations[begin:end] = np.concatenate(tflib.run(result_expr), axis=0)[:end-begin] - mu_fake = np.mean(activations, axis=0) - sigma_fake = np.cov(activations, rowvar=False) - - # Calculate FID. - m = np.square(mu_fake - mu_real).sum() - s, _ = scipy.linalg.sqrtm(np.dot(sigma_fake, sigma_real), disp=False) # pylint: disable=no-member - dist = m + np.trace(sigma_fake + sigma_real - 2*s) - self._report_result(np.real(dist)) - -#---------------------------------------------------------------------------- diff --git a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/vencoder/ContentVec256L9.py b/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/vencoder/ContentVec256L9.py deleted file mode 100644 index b0089c789cd87cfd3b1badb2fc45cb1b88041eab..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Matikanefukukitaru/vencoder/ContentVec256L9.py +++ /dev/null @@ -1,35 +0,0 @@ -from vencoder.encoder import SpeechEncoder -import torch -from fairseq import checkpoint_utils - -class ContentVec256L9(SpeechEncoder): - def __init__(self,vec_path = "pretrain/checkpoint_best_legacy_500.pt",device=None): - print("load model(s) from {}".format(vec_path)) - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [vec_path], - suffix="", - ) - self.hidden_dim = 256 - if device is None: - self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu") - else: - self.dev = torch.device(device) - self.model = models[0].to(self.dev) - self.model.eval() - - def encoder(self, wav): - feats = wav - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - inputs = { - "source": feats.to(wav.device), - "padding_mask": padding_mask.to(wav.device), - "output_layer": 9, # layer 9 - } - with torch.no_grad(): - logits = self.model.extract_features(**inputs) - feats = self.model.final_proj(logits[0]) - return feats.transpose(1, 2) diff --git a/spaces/yo2266911/uma_voice/inference.py b/spaces/yo2266911/uma_voice/inference.py deleted file mode 100644 index 9f8a9ac9a18f9aaea87f47a92e41938b9e6859b5..0000000000000000000000000000000000000000 --- a/spaces/yo2266911/uma_voice/inference.py +++ 
/dev/null @@ -1,40 +0,0 @@ -import matplotlib.pyplot as plt -import IPython.display as ipd - -import os -import json -import math -import torch -from torch import nn -from torch.nn import functional as F -from torch.utils.data import DataLoader - -import commons -import utils -from data_utils import TextAudioLoader, TextAudioCollate, TextAudioSpeakerLoader, TextAudioSpeakerCollate -from models import SynthesizerTrn -from text.symbols import symbols -from text import text_to_sequence - -from scipy.io.wavfile import write - - -def get_text(text, hps): - text_norm = text_to_sequence(text, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - - -hps = utils.get_hparams_from_file("./configs/yuzu.json") - -net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model).cuda() -_ = net_g.eval() - -_ = utils.load_checkpoint("pretrained_models/yuzu.pth", net_g, None) \ No newline at end of file diff --git a/spaces/zhang-wei-jian/docker/node_modules/safe-buffer/README.md b/spaces/zhang-wei-jian/docker/node_modules/safe-buffer/README.md deleted file mode 100644 index e9a81afd0406f030ba21169f0c7a1dba70b3a93b..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/safe-buffer/README.md +++ /dev/null @@ -1,584 +0,0 @@ -# safe-buffer [![travis][travis-image]][travis-url] [![npm][npm-image]][npm-url] [![downloads][downloads-image]][downloads-url] [![javascript style guide][standard-image]][standard-url] - -[travis-image]: https://img.shields.io/travis/feross/safe-buffer/master.svg -[travis-url]: https://travis-ci.org/feross/safe-buffer -[npm-image]: https://img.shields.io/npm/v/safe-buffer.svg -[npm-url]: https://npmjs.org/package/safe-buffer -[downloads-image]: https://img.shields.io/npm/dm/safe-buffer.svg -[downloads-url]: https://npmjs.org/package/safe-buffer -[standard-image]: https://img.shields.io/badge/code_style-standard-brightgreen.svg -[standard-url]: https://standardjs.com - -#### Safer Node.js Buffer API - -**Use the new Node.js Buffer APIs (`Buffer.from`, `Buffer.alloc`, -`Buffer.allocUnsafe`, `Buffer.allocUnsafeSlow`) in all versions of Node.js.** - -**Uses the built-in implementation when available.** - -## install - -``` -npm install safe-buffer -``` - -## usage - -The goal of this package is to provide a safe replacement for the node.js `Buffer`. - -It's a drop-in replacement for `Buffer`. You can use it by adding one `require` line to -the top of your node.js modules: - -```js -var Buffer = require('safe-buffer').Buffer - -// Existing buffer code will continue to work without issues: - -new Buffer('hey', 'utf8') -new Buffer([1, 2, 3], 'utf8') -new Buffer(obj) -new Buffer(16) // create an uninitialized buffer (potentially unsafe) - -// But you can use these new explicit APIs to make clear what you want: - -Buffer.from('hey', 'utf8') // convert from many types to a Buffer -Buffer.alloc(16) // create a zero-filled buffer (safe) -Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe) -``` - -## api - -### Class Method: Buffer.from(array) - - -* `array` {Array} - -Allocates a new `Buffer` using an `array` of octets. 
- -```js -const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]); - // creates a new Buffer containing ASCII bytes - // ['b','u','f','f','e','r'] -``` - -A `TypeError` will be thrown if `array` is not an `Array`. - -### Class Method: Buffer.from(arrayBuffer[, byteOffset[, length]]) - - -* `arrayBuffer` {ArrayBuffer} The `.buffer` property of a `TypedArray` or - a `new ArrayBuffer()` -* `byteOffset` {Number} Default: `0` -* `length` {Number} Default: `arrayBuffer.length - byteOffset` - -When passed a reference to the `.buffer` property of a `TypedArray` instance, -the newly created `Buffer` will share the same allocated memory as the -TypedArray. - -```js -const arr = new Uint16Array(2); -arr[0] = 5000; -arr[1] = 4000; - -const buf = Buffer.from(arr.buffer); // shares the memory with arr; - -console.log(buf); - // Prints: - -// changing the TypedArray changes the Buffer also -arr[1] = 6000; - -console.log(buf); - // Prints: -``` - -The optional `byteOffset` and `length` arguments specify a memory range within -the `arrayBuffer` that will be shared by the `Buffer`. - -```js -const ab = new ArrayBuffer(10); -const buf = Buffer.from(ab, 0, 2); -console.log(buf.length); - // Prints: 2 -``` - -A `TypeError` will be thrown if `arrayBuffer` is not an `ArrayBuffer`. - -### Class Method: Buffer.from(buffer) - - -* `buffer` {Buffer} - -Copies the passed `buffer` data onto a new `Buffer` instance. - -```js -const buf1 = Buffer.from('buffer'); -const buf2 = Buffer.from(buf1); - -buf1[0] = 0x61; -console.log(buf1.toString()); - // 'auffer' -console.log(buf2.toString()); - // 'buffer' (copy is not changed) -``` - -A `TypeError` will be thrown if `buffer` is not a `Buffer`. - -### Class Method: Buffer.from(str[, encoding]) - - -* `str` {String} String to encode. -* `encoding` {String} Encoding to use, Default: `'utf8'` - -Creates a new `Buffer` containing the given JavaScript string `str`. If -provided, the `encoding` parameter identifies the character encoding. -If not provided, `encoding` defaults to `'utf8'`. - -```js -const buf1 = Buffer.from('this is a tést'); -console.log(buf1.toString()); - // prints: this is a tést -console.log(buf1.toString('ascii')); - // prints: this is a tC)st - -const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex'); -console.log(buf2.toString()); - // prints: this is a tést -``` - -A `TypeError` will be thrown if `str` is not a string. - -### Class Method: Buffer.alloc(size[, fill[, encoding]]) - - -* `size` {Number} -* `fill` {Value} Default: `undefined` -* `encoding` {String} Default: `utf8` - -Allocates a new `Buffer` of `size` bytes. If `fill` is `undefined`, the -`Buffer` will be *zero-filled*. - -```js -const buf = Buffer.alloc(5); -console.log(buf); - // -``` - -The `size` must be less than or equal to the value of -`require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is -`(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will -be created if a `size` less than or equal to 0 is specified. - -If `fill` is specified, the allocated `Buffer` will be initialized by calling -`buf.fill(fill)`. See [`buf.fill()`][] for more information. - -```js -const buf = Buffer.alloc(5, 'a'); -console.log(buf); - // -``` - -If both `fill` and `encoding` are specified, the allocated `Buffer` will be -initialized by calling `buf.fill(fill, encoding)`. 
For example: - -```js -const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64'); -console.log(buf); - // -``` - -Calling `Buffer.alloc(size)` can be significantly slower than the alternative -`Buffer.allocUnsafe(size)` but ensures that the newly created `Buffer` instance -contents will *never contain sensitive data*. - -A `TypeError` will be thrown if `size` is not a number. - -### Class Method: Buffer.allocUnsafe(size) - - -* `size` {Number} - -Allocates a new *non-zero-filled* `Buffer` of `size` bytes. The `size` must -be less than or equal to the value of `require('buffer').kMaxLength` (on 64-bit -architectures, `kMaxLength` is `(2^31)-1`). Otherwise, a [`RangeError`][] is -thrown. A zero-length Buffer will be created if a `size` less than or equal to -0 is specified. - -The underlying memory for `Buffer` instances created in this way is *not -initialized*. The contents of the newly created `Buffer` are unknown and -*may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such -`Buffer` instances to zeroes. - -```js -const buf = Buffer.allocUnsafe(5); -console.log(buf); - // - // (octets will be different, every time) -buf.fill(0); -console.log(buf); - // -``` - -A `TypeError` will be thrown if `size` is not a number. - -Note that the `Buffer` module pre-allocates an internal `Buffer` instance of -size `Buffer.poolSize` that is used as a pool for the fast allocation of new -`Buffer` instances created using `Buffer.allocUnsafe(size)` (and the deprecated -`new Buffer(size)` constructor) only when `size` is less than or equal to -`Buffer.poolSize >> 1` (floor of `Buffer.poolSize` divided by two). The default -value of `Buffer.poolSize` is `8192` but can be modified. - -Use of this pre-allocated internal memory pool is a key difference between -calling `Buffer.alloc(size, fill)` vs. `Buffer.allocUnsafe(size).fill(fill)`. -Specifically, `Buffer.alloc(size, fill)` will *never* use the internal Buffer -pool, while `Buffer.allocUnsafe(size).fill(fill)` *will* use the internal -Buffer pool if `size` is less than or equal to half `Buffer.poolSize`. The -difference is subtle but can be important when an application requires the -additional performance that `Buffer.allocUnsafe(size)` provides. - -### Class Method: Buffer.allocUnsafeSlow(size) - - -* `size` {Number} - -Allocates a new *non-zero-filled* and non-pooled `Buffer` of `size` bytes. The -`size` must be less than or equal to the value of -`require('buffer').kMaxLength` (on 64-bit architectures, `kMaxLength` is -`(2^31)-1`). Otherwise, a [`RangeError`][] is thrown. A zero-length Buffer will -be created if a `size` less than or equal to 0 is specified. - -The underlying memory for `Buffer` instances created in this way is *not -initialized*. The contents of the newly created `Buffer` are unknown and -*may contain sensitive data*. Use [`buf.fill(0)`][] to initialize such -`Buffer` instances to zeroes. - -When using `Buffer.allocUnsafe()` to allocate new `Buffer` instances, -allocations under 4KB are, by default, sliced from a single pre-allocated -`Buffer`. This allows applications to avoid the garbage collection overhead of -creating many individually allocated Buffers. This approach improves both -performance and memory usage by eliminating the need to track and cleanup as -many `Persistent` objects. 
- -However, in the case where a developer may need to retain a small chunk of -memory from a pool for an indeterminate amount of time, it may be appropriate -to create an un-pooled Buffer instance using `Buffer.allocUnsafeSlow()` then -copy out the relevant bits. - -```js -// need to keep around a few small chunks of memory -const store = []; - -socket.on('readable', () => { - const data = socket.read(); - // allocate for retained data - const sb = Buffer.allocUnsafeSlow(10); - // copy the data into the new allocation - data.copy(sb, 0, 0, 10); - store.push(sb); -}); -``` - -Use of `Buffer.allocUnsafeSlow()` should be used only as a last resort *after* -a developer has observed undue memory retention in their applications. - -A `TypeError` will be thrown if `size` is not a number. - -### All the Rest - -The rest of the `Buffer` API is exactly the same as in node.js. -[See the docs](https://nodejs.org/api/buffer.html). - - -## Related links - -- [Node.js issue: Buffer(number) is unsafe](https://github.com/nodejs/node/issues/4660) -- [Node.js Enhancement Proposal: Buffer.from/Buffer.alloc/Buffer.zalloc/Buffer() soft-deprecate](https://github.com/nodejs/node-eps/pull/4) - -## Why is `Buffer` unsafe? - -Today, the node.js `Buffer` constructor is overloaded to handle many different argument -types like `String`, `Array`, `Object`, `TypedArrayView` (`Uint8Array`, etc.), -`ArrayBuffer`, and also `Number`. - -The API is optimized for convenience: you can throw any type at it, and it will try to do -what you want. - -Because the Buffer constructor is so powerful, you often see code like this: - -```js -// Convert UTF-8 strings to hex -function toHex (str) { - return new Buffer(str).toString('hex') -} -``` - -***But what happens if `toHex` is called with a `Number` argument?*** - -### Remote Memory Disclosure - -If an attacker can make your program call the `Buffer` constructor with a `Number` -argument, then they can make it allocate uninitialized memory from the node.js process. -This could potentially disclose TLS private keys, user data, or database passwords. - -When the `Buffer` constructor is passed a `Number` argument, it returns an -**UNINITIALIZED** block of memory of the specified `size`. When you create a `Buffer` like -this, you **MUST** overwrite the contents before returning it to the user. - -From the [node.js docs](https://nodejs.org/api/buffer.html#buffer_new_buffer_size): - -> `new Buffer(size)` -> -> - `size` Number -> -> The underlying memory for `Buffer` instances created in this way is not initialized. -> **The contents of a newly created `Buffer` are unknown and could contain sensitive -> data.** Use `buf.fill(0)` to initialize a Buffer to zeroes. - -(Emphasis our own.) - -Whenever the programmer intended to create an uninitialized `Buffer` you often see code -like this: - -```js -var buf = new Buffer(16) - -// Immediately overwrite the uninitialized buffer with data from another buffer -for (var i = 0; i < buf.length; i++) { - buf[i] = otherBuf[i] -} -``` - - -### Would this ever be a problem in real code? - -Yes. It's surprisingly common to forget to check the type of your variables in a -dynamically-typed language like JavaScript. - -Usually the consequences of assuming the wrong type is that your program crashes with an -uncaught exception. But the failure mode for forgetting to check the type of arguments to -the `Buffer` constructor is more catastrophic. 
- -Here's an example of a vulnerable service that takes a JSON payload and converts it to -hex: - -```js -// Take a JSON payload {str: "some string"} and convert it to hex -var server = http.createServer(function (req, res) { - var data = '' - req.setEncoding('utf8') - req.on('data', function (chunk) { - data += chunk - }) - req.on('end', function () { - var body = JSON.parse(data) - res.end(new Buffer(body.str).toString('hex')) - }) -}) - -server.listen(8080) -``` - -In this example, an http client just has to send: - -```json -{ - "str": 1000 -} -``` - -and it will get back 1,000 bytes of uninitialized memory from the server. - -This is a very serious bug. It's similar in severity to the -[the Heartbleed bug](http://heartbleed.com/) that allowed disclosure of OpenSSL process -memory by remote attackers. - - -### Which real-world packages were vulnerable? - -#### [`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht) - -[Mathias Buus](https://github.com/mafintosh) and I -([Feross Aboukhadijeh](http://feross.org/)) found this issue in one of our own packages, -[`bittorrent-dht`](https://www.npmjs.com/package/bittorrent-dht). The bug would allow -anyone on the internet to send a series of messages to a user of `bittorrent-dht` and get -them to reveal 20 bytes at a time of uninitialized memory from the node.js process. - -Here's -[the commit](https://github.com/feross/bittorrent-dht/commit/6c7da04025d5633699800a99ec3fbadf70ad35b8) -that fixed it. We released a new fixed version, created a -[Node Security Project disclosure](https://nodesecurity.io/advisories/68), and deprecated all -vulnerable versions on npm so users will get a warning to upgrade to a newer version. - -#### [`ws`](https://www.npmjs.com/package/ws) - -That got us wondering if there were other vulnerable packages. Sure enough, within a short -period of time, we found the same issue in [`ws`](https://www.npmjs.com/package/ws), the -most popular WebSocket implementation in node.js. - -If certain APIs were called with `Number` parameters instead of `String` or `Buffer` as -expected, then uninitialized server memory would be disclosed to the remote peer. - -These were the vulnerable methods: - -```js -socket.send(number) -socket.ping(number) -socket.pong(number) -``` - -Here's a vulnerable socket server with some echo functionality: - -```js -server.on('connection', function (socket) { - socket.on('message', function (message) { - message = JSON.parse(message) - if (message.type === 'echo') { - socket.send(message.data) // send back the user's message - } - }) -}) -``` - -`socket.send(number)` called on the server, will disclose server memory. - -Here's [the release](https://github.com/websockets/ws/releases/tag/1.0.1) where the issue -was fixed, with a more detailed explanation. Props to -[Arnout Kazemier](https://github.com/3rd-Eden) for the quick fix. Here's the -[Node Security Project disclosure](https://nodesecurity.io/advisories/67). - - -### What's the solution? - -It's important that node.js offers a fast way to get memory otherwise performance-critical -applications would needlessly get a lot slower. - -But we need a better way to *signal our intent* as programmers. **When we want -uninitialized memory, we should request it explicitly.** - -Sensitive functionality should not be packed into a developer-friendly API that loosely -accepts many different types. This type of API encourages the lazy practice of passing -variables in without checking the type very carefully. 
- -#### A new API: `Buffer.allocUnsafe(number)` - -The functionality of creating buffers with uninitialized memory should be part of another -API. We propose `Buffer.allocUnsafe(number)`. This way, it's not part of an API that -frequently gets user input of all sorts of different types passed into it. - -```js -var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory! - -// Immediately overwrite the uninitialized buffer with data from another buffer -for (var i = 0; i < buf.length; i++) { - buf[i] = otherBuf[i] -} -``` - - -### How do we fix node.js core? - -We sent [a PR to node.js core](https://github.com/nodejs/node/pull/4514) (merged as -`semver-major`) which defends against one case: - -```js -var str = 16 -new Buffer(str, 'utf8') -``` - -In this situation, it's implied that the programmer intended the first argument to be a -string, since they passed an encoding as a second argument. Today, node.js will allocate -uninitialized memory in the case of `new Buffer(number, encoding)`, which is probably not -what the programmer intended. - -But this is only a partial solution, since if the programmer does `new Buffer(variable)` -(without an `encoding` parameter) there's no way to know what they intended. If `variable` -is sometimes a number, then uninitialized memory will sometimes be returned. - -### What's the real long-term fix? - -We could deprecate and remove `new Buffer(number)` and use `Buffer.allocUnsafe(number)` when -we need uninitialized memory. But that would break 1000s of packages. - -~~We believe the best solution is to:~~ - -~~1. Change `new Buffer(number)` to return safe, zeroed-out memory~~ - -~~2. Create a new API for creating uninitialized Buffers. We propose: `Buffer.allocUnsafe(number)`~~ - -#### Update - -We now support adding three new APIs: - -- `Buffer.from(value)` - convert from any type to a buffer -- `Buffer.alloc(size)` - create a zero-filled buffer -- `Buffer.allocUnsafe(size)` - create an uninitialized buffer with given size - -This solves the core problem that affected `ws` and `bittorrent-dht` which is -`Buffer(variable)` getting tricked into taking a number argument. - -This way, existing code continues working and the impact on the npm ecosystem will be -minimal. Over time, npm maintainers can migrate performance-critical code to use -`Buffer.allocUnsafe(number)` instead of `new Buffer(number)`. - - -### Conclusion - -We think there's a serious design issue with the `Buffer` API as it exists today. It -promotes insecure software by putting high-risk functionality into a convenient API -with friendly "developer ergonomics". - -This wasn't merely a theoretical exercise because we found the issue in some of the -most popular npm packages. - -Fortunately, there's an easy fix that can be applied today. Use `safe-buffer` in place of -`buffer`. - -```js -var Buffer = require('safe-buffer').Buffer -``` - -Eventually, we hope that node.js core can switch to this new, safer behavior. We believe -the impact on the ecosystem would be minimal since it's not a breaking change. -Well-maintained, popular packages would be updated to use `Buffer.alloc` quickly, while -older, insecure packages would magically become safe from this attack vector. 
- - -## links - -- [Node.js PR: buffer: throw if both length and enc are passed](https://github.com/nodejs/node/pull/4514) -- [Node Security Project disclosure for `ws`](https://nodesecurity.io/advisories/67) -- [Node Security Project disclosure for`bittorrent-dht`](https://nodesecurity.io/advisories/68) - - -## credit - -The original issues in `bittorrent-dht` -([disclosure](https://nodesecurity.io/advisories/68)) and -`ws` ([disclosure](https://nodesecurity.io/advisories/67)) were discovered by -[Mathias Buus](https://github.com/mafintosh) and -[Feross Aboukhadijeh](http://feross.org/). - -Thanks to [Adam Baldwin](https://github.com/evilpacket) for helping disclose these issues -and for his work running the [Node Security Project](https://nodesecurity.io/). - -Thanks to [John Hiesey](https://github.com/jhiesey) for proofreading this README and -auditing the code. - - -## license - -MIT. Copyright (C) [Feross Aboukhadijeh](http://feross.org) diff --git a/spaces/zlc99/M4Singer/modules/hifigan/mel_utils.py b/spaces/zlc99/M4Singer/modules/hifigan/mel_utils.py deleted file mode 100644 index 04c1e3ea5de2cd24bbb14ab72206539a8d37d9c0..0000000000000000000000000000000000000000 --- a/spaces/zlc99/M4Singer/modules/hifigan/mel_utils.py +++ /dev/null @@ -1,81 +0,0 @@ -import numpy as np -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn -from scipy.io.wavfile import read - -MAX_WAV_VALUE = 32768.0 - - -def load_wav(full_path): - sampling_rate, data = read(full_path) - return data, sampling_rate - - -def dynamic_range_compression(x, C=1, clip_val=1e-5): - return np.log(np.clip(x, a_min=clip_val, a_max=None) * C) - - -def dynamic_range_decompression(x, C=1): - return np.exp(x) / C - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def mel_spectrogram(y, hparams, center=False, complex=False): - # hop_size: 512 # For 22050Hz, 275 ~= 12.5 ms (0.0125 * sample_rate) - # win_size: 2048 # For 22050Hz, 1100 ~= 50 ms (If None, win_size: fft_size) (0.05 * sample_rate) - # fmin: 55 # Set this to 55 if your speaker is male! if female, 95 should help taking off noise. (To test depending on dataset. Pitch info: male~[65, 260], female~[100, 525]) - # fmax: 10000 # To be increased/reduced depending on data. - # fft_size: 2048 # Extra window size is filled with 0 paddings to match this parameter - # n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, - n_fft = hparams['fft_size'] - num_mels = hparams['audio_num_mel_bins'] - sampling_rate = hparams['audio_sample_rate'] - hop_size = hparams['hop_size'] - win_size = hparams['win_size'] - fmin = hparams['fmin'] - fmax = hparams['fmax'] - y = y.clamp(min=-1., max=1.) 
- global mel_basis, hann_window - if fmax not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[str(fmax) + '_' + str(y.device)] = torch.from_numpy(mel).float().to(y.device) - hann_window[str(y.device)] = torch.hann_window(win_size).to(y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)), - mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[str(y.device)], - center=center, pad_mode='reflect', normalized=False, onesided=True) - - if not complex: - spec = torch.sqrt(spec.pow(2).sum(-1) + (1e-9)) - spec = torch.matmul(mel_basis[str(fmax) + '_' + str(y.device)], spec) - spec = spectral_normalize_torch(spec) - else: - B, C, T, _ = spec.shape - spec = spec.transpose(1, 2) # [B, T, n_fft, 2] - return spec - diff --git a/spaces/zlc99/M4Singer/tasks/run.py b/spaces/zlc99/M4Singer/tasks/run.py deleted file mode 100644 index 82c7559cec873eebf7c2c0ab6554895e21de7e7c..0000000000000000000000000000000000000000 --- a/spaces/zlc99/M4Singer/tasks/run.py +++ /dev/null @@ -1,15 +0,0 @@ -import importlib -from utils.hparams import set_hparams, hparams - - -def run_task(): - assert hparams['task_cls'] != '' - pkg = ".".join(hparams["task_cls"].split(".")[:-1]) - cls_name = hparams["task_cls"].split(".")[-1] - task_cls = getattr(importlib.import_module(pkg), cls_name) - task_cls.start() - - -if __name__ == '__main__': - set_hparams() - run_task()
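The last deleted file above, `tasks/run.py`, resolves `hparams['task_cls']` — a dotted `package.module.ClassName` string — with `importlib` and then calls the resulting class's `start()` method. Below is a minimal, self-contained sketch of that dynamic-import pattern; the `resolve_class` helper and the stand-in target class are illustrative assumptions, not part of the deleted code.

```python
import importlib


def resolve_class(dotted_path: str):
    """Turn 'package.module.ClassName' into the class object it names."""
    module_path, _, class_name = dotted_path.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, class_name)


if __name__ == "__main__":
    # Stand-in for a config value such as hparams['task_cls'];
    # a real run would point this at a task class exposing start().
    TaskCls = resolve_class("collections.OrderedDict")
    instance = TaskCls()
    print(type(instance))  # <class 'collections.OrderedDict'>
```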