diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Assassins Creed Iii 103 Skidrow Patch Everything You Need to Know About the Latest Version of the Game.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Assassins Creed Iii 103 Skidrow Patch Everything You Need to Know About the Latest Version of the Game.md deleted file mode 100644 index fe452252bde2f2e7fba390e6c4182f2cb5471534..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Assassins Creed Iii 103 Skidrow Patch Everything You Need to Know About the Latest Version of the Game.md +++ /dev/null @@ -1,136 +0,0 @@ - -

Assassins Creed III 103 Skidrow Patch: Everything You Need to Know

-

If you are a fan of the Assassins Creed series, you might be interested in playing the third installment of the franchise, Assassins Creed III. This game takes you to the American Revolution era, where you can explore the historical events and locations, as well as engage in stealth, combat, and parkour. However, if you want to play this game on your PC, you might encounter some issues and bugs that can ruin your gaming experience. That's why you might want to use the Assassins Creed III 103 Skidrow Patch, which is a crack and update for the game that fixes many problems and adds new features. In this article, we will tell you everything you need to know about this patch, including what it is, how to download and install it, and how to fix common issues and errors with it. Let's get started!

-

Assassins Creed Iii 103 Skidrow Patch


Download ---> https://byltly.com/2uKwc2



-

What is Assassins Creed III?

-

A brief overview of the game's story and gameplay

-

Assassins Creed III is an action-adventure game developed by Ubisoft Montreal and published by Ubisoft in 2012. It is the fifth main game in the Assassins Creed series, and a sequel to Assassins Creed: Revelations. The game follows the story of Desmond Miles, a modern-day assassin who relives the memories of his ancestors through a device called the Animus. In this game, Desmond accesses the memories of Ratonhnhaké:ton, also known as Connor, a half-English, half-Mohawk assassin who fights against the Templars during the American Revolution. The game features an open-world environment that spans various locations in colonial America, such as Boston, New York, the Frontier, and the Caribbean Sea. The game also introduces naval combat, hunting, crafting, and homestead management as new gameplay elements.

-

The main features and improvements of Assassins Creed III

-

Assassins Creed III is considered one of the most ambitious and innovative games in the series, as it offers many new features and improvements over its predecessors. Some of these features are:

- -

What is Skidrow?

-

A brief history and background of Skidrow group

-

Skidrow is a group of hackers and crackers who specialize in cracking and releasing games for PC. They are one of the most popular and notorious groups in the scene, as they have cracked hundreds of games since their inception in 1990. Some of their most famous releases include Grand Theft Auto V, The Witcher 3: Wild Hunt, Far Cry 5, and Red Dead Redemption 2. Skidrow is also known for their rivalry with other groups such as Reloaded, Codex, and CPY.

-

Assassins Creed 3 update 1.03 skidrow crack
-How to install Assassins Creed III skidrow patch 103
-Assassins Creed III version 1.03 skidrow download
-Assassins Creed 3 skidrow patch 103 fix
-Assassins Creed III skidrow update 1.03 error
-Download Assassins Creed 3 patch 1.03 skidrow free
-Assassins Creed III skidrow patch 103 not working
-Assassins Creed 3 update 1.03 skidrow torrent
-Assassins Creed III skidrow patch 103 changelog
-Assassins Creed 3 skidrow patch 103 gameplay
-Assassins Creed III version 1.03 skidrow trainer
-Assassins Creed 3 skidrow patch 103 lag
-Assassins Creed III skidrow update 1.03 features
-Assassins Creed 3 patch 1.03 skidrow size
-Assassins Creed III skidrow patch 103 requirements
-Assassins Creed 3 skidrow patch 103 mods
-Assassins Creed III version 1.03 skidrow cheats
-Assassins Creed 3 update 1.03 skidrow review
-Assassins Creed III skidrow patch 103 steam
-Assassins Creed 3 skidrow patch 103 save game
-Assassins Creed III skidrow update 1.03 release date
-Assassins Creed 3 patch 1.03 skidrow keygen
-Assassins Creed III skidrow patch 103 multiplayer
-Assassins Creed 3 skidrow patch 103 graphics
-Assassins Creed III version 1.03 skidrow system requirements
-Assassins Creed 3 update 1.03 skidrow repack
-Assassins Creed III skidrow patch 103 dlc
-Assassins Creed 3 skidrow patch 103 bugs
-Assassins Creed III skidrow update 1.03 performance
-Assassins Creed 3 patch 1.03 skidrow iso
-Assassins Creed III skidrow patch 103 sound
-Assassins Creed 3 skidrow patch 103 resolution
-Assassins Creed III version 1.03 skidrow comparison
-Assassins Creed 3 update 1.03 skidrow rar password
-Assassins Creed III skidrow patch 103 achievements
-Assassins Creed 3 skidrow patch 103 missions
-Assassins Creed III skidrow update 1.03 unlocker
-Assassins Creed 3 patch 1.03 skidrow direct link
-Assassins Creed III skidrow patch 103 characters
-Assassins Creed 3 skidrow patch 103 settings
-Assassins Creed III version 1.03 skidrow crack only
-Assassins Creed 3 update 1.03 skidrow megaupload
-Assassins Creed III skidrow patch 103 screenshots
-Assassins Creed 3 skidrow patch 103 video
-Assassins Creed III skidrow update 1.03 guide
-Assassins Creed iii patch v1.03 with theta crack download free full pc game torrent cracked by reloaded and blackbox repack working link no survey no password no virus no malware no adfly

-

The benefits and risks of using Skidrow cracks and patches

-

Using Skidrow cracks and patches can have some benefits and risks for PC gamers. Some of the benefits are:

- -

Some of the risks are:

- -

What is Assassins Creed III 103 Skidrow Patch?

-

A detailed description of the patch and its contents

-

Assassins Creed III 103 Skidrow Patch is a crack and update for Assassins Creed III that was released by Skidrow in 2013. It is also known as Assassins Creed III Update v1.03 + Crack Only Proper-Reloaded. This patch fixes many bugs and glitches that were present in the original version of the game, such as:

- -

This patch also adds some new features and improvements to the game, such as:

- -

How to download and install the patch correctly

-

To download and install Assassins Creed III 103 Skidrow Patch correctly, you need to follow these steps:

-
    -
  1. Download Assassins Creed III 103 Skidrow Patch from a reliable source such as Skidrow Reloaded.
  2. -
  3. Extract the files from the downloaded archive using a program such as WinRAR or 7-Zip.
  4. -
  5. Copy all the files from the Crack folder to your Assassins Creed III installation folder (usually C:\Program Files (x86)\Ubisoft\Assassin's Creed III).
  6. -
  7. Run AC3SP.exe as administrator to start playing the game with the patch applied.
  8. -
-

How to fix common issues and errors with Assassins Creed III 103 Skidrow Patch?

-

A list of possible problems and solutions for the patch users

-

If you encounter any issues or errors while using Assassins Creed III 103 Skidrow Patch, you can try some of these possible solutions:

- - - - - - - -
Problem / Solution
The game does not start or crashes at launch:
- Make sure your PC meets the minimum system requirements for Assassins Creed III.
- Make sure you have installed all the necessary drivers for your graphics card.
- Make sure you have disabled any antivirus or firewall programs that might interfere with the game.
- Try to run the game in compatibility mode for Windows 7 or 8.
- Try to update or reinstall DirectX and Microsoft Visual C++ Redistributable.
- Try to delete or rename the file AC3SP.ini in your installation folder.
The game runs slowly or lags during gameplay:
- Make sure your PC meets the recommended system requirements for Assassins Creed III.
- Make sure you have adjusted the graphics settings to suit your PC's capabilities.
- Make sure you have closed any background programs that might consume your CPU or RAM.
- Make sure you have defragmented your hard drive and cleaned your registry.
- Try to lower the resolution or disable some effects such as anti-aliasing, shadows, or reflections.
The game does not save or load properly:
- Make sure you have enough free space on your hard drive.
- Make sure you have not modified or deleted any game files.
- Make sure you have backed up your save files before applying the patch.
- Make sure you have run AC3SP.exe as administrator.
- Try to delete or rename the folder Ubisoft Game Launcher in C:\Program Files (x86)\Ubisoft.
The game does not connect to the internet or multiplayer mode:
- Make sure you have a stable and fast internet connection.
- Make sure you have allowed the game through your firewall or router settings.
- Make sure you have updated your game to the latest version.
- Make sure you have created and logged in to a Ubisoft account.
- Try to use a VPN or proxy service to bypass any regional restrictions.
The game shows an error message such as "AC3SP.exe has stopped working" or "Ubisoft Game Launcher error code 2":
- Make sure you have followed all the steps in the previous solutions.
- Make sure you have downloaded and installed the patch from a trusted source.
- Make sure you have copied all the files from the Crack folder correctly.
- Try to reinstall the game and the patch from scratch.
- Try to contact Skidrow for support and feedback.
-

How to contact Skidrow for support and feedback

-

If none of the solutions above work for you, or if you have any questions, suggestions, or feedback for Skidrow, you can try to contact them through their official website, Skidrow Reloaded. There, you can find more information about their releases, updates, news, and comments. You can also join their community and chat with other users who might have similar issues or interests. However, be aware that Skidrow is not an official source of support for Assassins Creed III, and they might not respond to your messages or requests. Therefore, use their services at your own risk and discretion.

-

Conclusion

-

A summary of the main points and a call to action for the readers

-

Assassins Creed III 103 Skidrow Patch is a crack and update for Assassins Creed III that fixes many bugs and glitches and adds new features and improvements to the game. It is a great way to enjoy one of the best games in the Assassins Creed series without spending any money or facing any restrictions. However, it also comes with some risks and challenges that might affect your PC's security or performance. Therefore, before using this patch, make sure you know what you are doing and follow the instructions carefully. If you encounter any problems or errors with this patch, try some of the solutions we provided above, or contact Skidrow for support and feedback. We hope this article was helpful and informative for you. If you liked it, please share it with your friends and fellow gamers. And if you want to play more games like Assassins Creed III, check out our website for more cracks and patches from Skidrow. Thank you for reading!

-

FAQs

-

Here are some frequently asked questions about Assassins Creed III 103 Skidrow Patch:

-
    -
  1. Q: Do I need to have Assassins Creed III installed before applying this patch?
    A: Yes, you need to have Assassins Creed III installed on your PC before applying this patch. You can download Assassins Creed III from Skidrow Reloaded.
  2. -
  3. Q: Do I need to apply any previous patches before applying this patch?
    A: No, you do not need to apply any previous patches before applying this patch. This patch includes all the previous updates and fixes for Assassins Creed III.
  4. -
  5. Q: Does this patch work with Steam or Uplay versions of Assassins Creed III?
    A: No, this patch only works with Skidrow version of Assassins Creed III. If you have Steam or Uplay versions of Assassins Creed III, you need to uninstall them and install Skidrow version instead.
  6. -
  7. Q: Does this patch include any DLCs (downloadable content) for Assassins Creed III?
    A: Yes, this patch includes one DLC for Assassins Creed III: The Tyranny of King Washington - The Infamy (Part 1). This is a single-player mission that explores an alternate history where George Washington becomes a tyrant. You can access this mission from the main menu of the game.
  8. -
  9. Q: Can I play online or multiplayer mode with this patch?
    A: Yes, you can play online or multiplayer mode with this patch. However, you need to create and log in to a Ubisoft account first. You also need to allow the game through your firewall or router settings. You might also face some lag or connection issues depending on your internet speed and location.
  10. -
-

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cracked Dc Unlocker Unlimited Credits New Versionl The Latest and Most Powerful Version of DC-Unlocker.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cracked Dc Unlocker Unlimited Credits New Versionl The Latest and Most Powerful Version of DC-Unlocker.md deleted file mode 100644 index 0b5d082b9cd52063a9b00cac8a73879fa2d80005..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cracked Dc Unlocker Unlimited Credits New Versionl The Latest and Most Powerful Version of DC-Unlocker.md +++ /dev/null @@ -1,163 +0,0 @@ - -

Cracked DC Unlocker Unlimited Credits New Version

-

Do you want to unlock your modem, router, or phone without paying for credits or waiting for hours? If yes, then you might be interested in Cracked DC Unlocker Unlimited Credits New Version. This is a program that allows you to bypass the limitations of the official DC Unlocker and use it for free, with unlimited credits. But what is DC Unlocker and how does it work? And what are the advantages and disadvantages of using the cracked version? In this article, we will answer these questions and more. We will also show you how to download, install, and use Cracked DC Unlocker Unlimited Credits New Version on your device. So, let's get started!

-

Cracked Dc Unlocker Unlimited Credits New Versionl


DOWNLOADhttps://byltly.com/2uKwc3



-

What is DC Unlocker?

-

DC Unlocker is a tool that helps you unlock various devices such as modems, routers, phones, and dongles from different brands and models. It supports over 6000 devices from Huawei, ZTE, LG, Nokia, Alcatel, Sony, Lenovo, Xiaomi, and more. It can also repair IMEI, firmware, bootloaders, NVM, the security area, etc. It is one of the most popular and reliable unlocking tools on the market.

-

A brief introduction to DC Unlocker and its features

-

DC Unlocker was created in 2004 by a team of professionals who wanted to provide a fast and easy solution for unlocking devices. It works by connecting your device to your PC via USB cable and detecting its information automatically. Then, it generates an unlock code or performs a direct unlock depending on the device model. It also has an online server that updates the software regularly with new models and features.

-

Some of the main features of DC Unlocker are:

-

Cracked Dc Unlocker Software Free Download
-Dc Unlocker Patched with Unlimited Credits 2017
-Dc Unlocker Client Software V1.00.0565 Cracked
-How to use Dc Unlocker Cracked Version
-Dc Unlocker Free Read Bootloader Huawei Phones
-Dc Unlocker Free Unlock Huawei Modems
-Dc Unlocker Free Write Firmware Huawei Modems
-Dc Unlocker Cracked Username and Password
-Dc Unlocker Cracked for Qualcomm and Hisilicon Devices
-Dc Unlocker Cracked Latest Version Download
-Dc Unlocker Cracked No Sign in Support
-Dc Unlocker Cracked No Credits Required
-Dc Unlocker Cracked Tested and Working
-Dc Unlocker Cracked XDA Forums
-Dc Unlocker Cracked News Updates and Guides
-Dc Unlocker Cracked for Different Provider SIM
-Dc Unlocker Cracked RAR File Download
-Dc Unlocker Cracked dccrap.exe Download
-Dc Unlocker Cracked for Huawei Smart Phones
-Dc Unlocker Cracked for Huawei Modems
-How to Install Dc Unlocker Cracked Version
-How to Update Dc Unlocker Cracked Version
-How to Fix Errors in Dc Unlocker Cracked Version
-How to Get Free Credits in Dc Unlocker Cracked Version
-How to Bypass Login in Dc Unlocker Cracked Version
-How to Change Language in Dc Unlocker Cracked Version
-How to Support New Models in Dc Unlocker Cracked Version
-How to Reset Counter in Dc Unlocker Cracked Version
-How to Repair IMEI in Dc Unlocker Cracked Version
-How to Backup and Restore Data in Dc Unlocker Cracked Version
-How to Enable Voice Feature in Dc Unlocker Cracked Version
-How to Flash Custom Firmware in Dc Unlocker Cracked Version
-How to Remove FRP Lock in Dc Unlocker Cracked Version
-How to Root and Unroot Devices in Dc Unlocker Cracked Version
-How to Generate Code from IMEI in Dc Unlocker Cracked Version
-How to Calculate Hash Code in Dc Unlocker Cracked Version
-How to Detect Device Automatically in Dc Unlocker Cracked Version
-How to Select COM Port Manually in Dc Unlocker Cracked Version
-How to Scan for Available Ports in Dc Unlocker Cracked Version
-How to Check Device Information in Dc Unlocker Cracked Version
-How to Check Device Status in Dc Unlocker Cracked Version
-How to Check Device Firmware Version in Dc Unlocker Cracked Version
-How to Check Device Hardware Version in Dc Unlocker Cracked Version
-How to Check Device Security Area Backup in Dc Unlocker Cracked Version
-How to Check Device NV Items Backup in Dc Unlocker Cracked Version
-How to Check Device SIM Lock Status in Dc Unlocker Cracked Version
-How to Check Device Network Lock Status in Dc Unlocker Cracked Version
-How to Check Device Bootloader Lock Status in Dc Unlocker Cracked Version
-How to Check Device Warranty Status in Dc Unlocker Cracked Version

- -

How to use DC Unlocker to unlock modems, routers, and phones

-

To use DC Unlocker to unlock your device, you need to follow these steps:

-
    -
  1. Download and install DC Unlocker on your PC from the official website: https://www.dc-unlocker.com/
  2. -
  3. Buy credits from the website or from a reseller. You need credits to perform unlocking operations with DC Unlocker. The price of credits depends on the device model and the number of credits required. You can check the price list here: https://www.dc-unlocker.com/buy/user_prices
  4. -
  5. Connect your device to your PC via USB cable. Make sure you have installed the drivers for your device on your PC. You can find the drivers here: https://www.dc-unlocker.com/downloads/drivers
  6. -
  7. Run DC Unlocker as administrator and click on the magnifying glass icon to detect your device.
  8. -
  9. Select your device model from the drop-down menu or leave it as auto-detect.
  10. -
  11. Click on the unlocking tab and choose either unlock or read unlock code depending on your device model.
  12. -
  13. Wait for a few seconds or minutes until the process is completed.
  14. -
  15. Disconnect your device from your PC and insert a different SIM card.
  16. -
  17. Enjoy your unlocked device!
  18. -
-

What is Cracked DC Unlocker Unlimited Credits?

-

Cracked DC Unlocker Unlimited Credits is a modified version of DC Unlocker that allows you to use it without paying for credits or registering an account. It also gives you unlimited credits so you can unlock as many devices as you want. It is created by hackers who crack the original software and bypass its security features.

-

The difference between the official and the cracked version of DC Unlocker

-

The main difference between the official and the cracked version of DC Unlocker is that the official version is legal and safe while the cracked version is illegal and risky. The official version is supported by the developers who update it regularly with new models and features. It also has a customer service that can help you with any issues or questions. The cracked version is not supported by anyone and may contain viruses or malware that can harm your PC or device. It also may not work properly or at all with some models or versions.

-

The benefits and risks of using Cracked DC Unlocker Unlimited Credits

-

The benefits of using Cracked DC Unlocker Unlimited Credits are:

- -

The risks of using Cracked DC Unlocker Unlimited Credits are:

- -

Where to download Cracked DC Unlocker Unlimited Credits New Version

-

If you still want to download Cracked DC Unlocker Unlimited Credits New Version despite its risks, you can find it on various websites that offer cracked software. However, we do not recommend or endorse any of these websites as they may contain harmful content or links. Use them at your own risk and discretion. Some examples of these websites are:

- - - - - - - -
Name | URL
GSM X Team | https://gsmxteam.net/dc-unlocker-crack/
GSM Forum | https://forum.gsmhosting.com/vbb/f1000/dc-unlocker-2-client-1-00-1431-crack-2020-a-2848418/
GSM Crack Tools | https://gsmcracktools.com/dc-unlocker-crack/
GSM Box Crack | https://gsmboxcrack.com/dc-unlocker-crack/
GSM Flash Tool | https://gsmflashtool.com/dc-unlocker-crack/
-

How to install and use Cracked DC Unlocker Unlimited Credits New Version

-

The system requirements and compatibility of Cracked DC Unlocker Unlimited Credits New Version

-

To install and use Cracked DC Unlocker Unlimited Credits New Version on your PC, you need to have:

- -

The step-by-step guide to install and use Cracked DC Unlocker Unlimited Credits New Version

-

To install and use Cracked DC Unlocker Unlimited Credits New Version on your PC, you need to follow these steps:

-
    - Download Cracked DC Unlocker Unlimited Credits New Version from one of the websites listed above (or any other source you trust) and extract the zip file to a folder on your PC.
  1. Run the setup.exe file as administrator and follow the instructions to install the software on your PC.
  2. -
  3. After the installation is completed, run the DC Unlocker 2 Client.exe file as administrator from the folder where you installed the software.
  4. -
  5. Connect your device to your PC via USB cable. Make sure you have installed the drivers for your device on your PC. You can find the drivers here: https://www.dc-unlocker.com/downloads/drivers
  6. -
  7. Click on the magnifying glass icon to detect your device. You should see a message saying "Found Applications port COMX" where X is a number.
  8. -
  9. Select your device model from the drop-down menu or leave it as auto-detect.
  10. -
  11. Click on the unlocking tab and choose either unlock or read unlock code depending on your device model.
  12. -
  13. Wait for a few seconds or minutes until the process is completed. You should see a message saying "Unlocking, please wait ..." and then "Unlock done".
  14. -
  15. Disconnect your device from your PC and insert a different SIM card.
  16. -
  17. Enjoy your unlocked device!
  18. -
-

The troubleshooting tips and FAQs for Cracked DC Unlocker Unlimited Credits New Version

-

If you encounter any problems or errors while using Cracked DC Unlocker Unlimited Credits New Version, you can try these tips:

- -

If you have any questions or doubts about Cracked DC Unlocker Unlimited Credits New Version, you can check these FAQs:

-
    -
  1. Q: Is Cracked DC Unlocker Unlimited Credits New Version safe to use?
  2. -
  3. A: No, it is not safe to use. It is illegal and risky. It may contain viruses or malware that can harm your PC or device. It may also damage your device or void your warranty. We do not recommend using it at all.
  4. -
  5. Q: Is Cracked DC Unlocker Unlimited Credits New Version free to use?
  6. -
  7. A: Yes, it is free to use. You do not need to pay for credits or subscriptions to use it. However, you may pay a higher price in terms of security and quality. You may also face legal consequences for using it.
  8. -
  9. Q: Does Cracked DC Unlocker Unlimited Credits New Version work with all devices?
  10. -
  11. A: No, it does not work with all devices. It may not support some models or versions that are supported by the official version. It may also not work properly or at all with some devices. You may end up bricking your device by using it.
  12. -
  13. Q: Can I update Cracked DC Unlocker Unlimited Credits New Version?
  14. -
  15. A: No, you cannot update Cracked DC Unlocker Unlimited Credits New Version. It does not have an online server that updates it regularly with new models and features. It may also stop working if you update it manually or automatically.
  16. -
  17. Q: Can I get support for Cracked DC Unlocker Unlimited Credits New Version?
  18. -
  19. A: No, you cannot get support for Cracked DC Unlocker Unlimited Credits New Version. It does not have a customer service that can help you with any issues or questions. You are on your own if you use it.
  20. -
-

Conclusion

-

In conclusion, Cracked DC Unlocker Unlimited Credits New Version is a tool that allows you to unlock various devices such as modems, routers, and phones without paying for credits or registering an account. It also gives you unlimited credits so you can unlock as many devices as you want. However, it is illegal and risky to use. It may contain viruses or malware that can harm your PC or device. It may also damage your device or void your warranty. It may not work with all devices or versions. It may not be updated or supported by anyone. We do not recommend using it at all. Instead, we suggest using the official version of DC Unlocker, which is legal and safe. It is supported by the developers who update it regularly with new models and features. It also has a customer service team that can help you with any issues or questions. You can buy credits from the website or from a reseller at a reasonable price. You can also enjoy other features such as repairing IMEI, firmware, bootloaders, etc. You can download and install DC Unlocker from the official website: https://www.dc-unlocker.com/

-

We hope this article has been helpful and informative for you. If you have any feedback or suggestions, please let us know in the comments below. Thank you for reading!

0a6ba089eb
-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fisiologiaanimalhill.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fisiologiaanimalhill.md deleted file mode 100644 index e46144136c7ec7eef51394431030d9eb48debbd0..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fisiologiaanimalhill.md +++ /dev/null @@ -1,35 +0,0 @@ -
-

Fisiologia Animal Hill: A Comprehensive Guide to Animal Physiology

-

Fisiologia Animal Hill is a popular textbook that covers the principles and concepts of animal physiology in a clear and engaging way. The book is written by Richard W. Hill, Gordon A. Wyse, and Margaret Anderson, who are experts in the field of comparative physiology. The book is suitable for undergraduate and graduate students who want to learn about the diversity and adaptations of animals in different environments.

-

fisiologiaanimalhill


Downloadhttps://byltly.com/2uKywd



-

In this article, we will provide an overview of the main topics and features of Fisiologia Animal Hill, and explain why it is a valuable resource for anyone interested in animal physiology. We will also share some tips on how to use the book effectively for your studies.

-

What is Fisiologia Animal Hill?

-

Fisiologia Animal Hill is a comprehensive and updated textbook that covers the fundamentals of animal physiology, from molecules to organisms. The book is divided into seven parts, each focusing on a major aspect of animal physiology:

- -

Each part consists of several chapters that provide detailed explanations and examples of the physiological phenomena and principles. The book also includes numerous figures, tables, and diagrams.

What are the features of Fisiologia Animal Hill?

-

Fisiologia Animal Hill is not only a comprehensive textbook, but also a user-friendly and interactive learning tool. The book has several features that enhance its readability and usability, such as:

-

- -

These features make Fisiologia Animal Hill a valuable and effective textbook for learning animal physiology.

81aa517590
-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Boeing 737-300 500 CBT - Lufthansa Full Versionl !!HOT!!.md b/spaces/1gistliPinn/ChatGPT4/Examples/Boeing 737-300 500 CBT - Lufthansa Full Versionl !!HOT!!.md deleted file mode 100644 index 0283d158ad294ed5712e68290a3427554213d3e6..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Boeing 737-300 500 CBT - Lufthansa Full Versionl !!HOT!!.md +++ /dev/null @@ -1,6 +0,0 @@ -

Boeing 737-300 500 CBT - Lufthansa Full Versionl


Download ->>->>->> https://imgfil.com/2uy1WE



- -In a duo situation, you need to think like a complete rhythm section: comping instrument, ... Boeing 737-300 500 CBT - Lufthansa Full Versionl 1fdad05405
-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/2020 Design 12 The Most Trusted Software for Kitchen and Bathroom Designers.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/2020 Design 12 The Most Trusted Software for Kitchen and Bathroom Designers.md deleted file mode 100644 index f06a3798dbc732d37d542bde821d180cf3c7ea9f..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/2020 Design 12 The Most Trusted Software for Kitchen and Bathroom Designers.md +++ /dev/null @@ -1,186 +0,0 @@ - -

How to Download 2020 Design 12: The Best Kitchen and Bathroom Design Software

-

If you are a kitchen and bathroom designer, you know how important it is to have a reliable and powerful software that can help you create stunning designs for your clients. You need a software that can handle complex layouts, realistic renderings, and online catalogs of manufacturer products. You need a software that can make your design process faster, easier, and more enjoyable. You need 2020 Design Live, the latest version of the most popular kitchen and bathroom design software in North America.

-

download 2020 design 12


DOWNLOAD ☆☆☆ https://urlin.us/2uT1QC



-

In this article, we will show you how to download 2020 Design 12, the desktop solution of 2020 Design Live, and how to use it to create amazing designs that will impress your clients. We will also answer some of the most frequently asked questions about this software. Let's get started!

-

What is 2020 Design 12?

-

2020 Design 12 is the desktop solution of 2020 Design Live, the kitchen and bathroom design software that runs on both desktop and cloud platforms. It is designed for professional designers who want to have access to the largest selection of manufacturer catalogs, online configurable cabinets, appliances, and plumbing, advanced lighting wizard, SketchUp integration, and more. It is also equipped with all the tools that will help you create photorealistic renderings, 360° panoramas, and detailed floor plans.

-

Features and benefits of 2020 Design 12

-

Some of the features and benefits of using 2020 Design 12 are:

- -

Requirements and compatibility of 2020 Design 12

-

To use 2020 Design 12, your PC needs to meet the following system requirements:

-

download 2020 design live software
-download 2020 design v12 for kitchen and bathroom
-download 2020 design v12 with new rendering engine
-download 2020 design v12 with shaker cabinet doors
-download 2020 design v12 with sketchup integration
-download 2020 design v12 with advanced lighting wizard
-download 2020 design v12 with cloud configurable catalogs
-download 2020 design v12 with manufacturer products
-download 2020 design v12 with 360 panoramas
-download 2020 design v12 with space planning tools
-download 2020 design v12 with decorative cloud items
-download 2020 design v12 with personalization features
-download 2020 design v12 with support and updates
-download 2020 design v12 with free trial option
-download 2020 design v12 with pricing information
-download 2020 design v12 with testimonials and reviews
-download 2020 design v12 with training resources
-download 2020 design v12 with video tips and tutorials
-download 2020 design v12 with webinar recordings
-download 2020 design v12 with knowledge center access
-download 2020 design v12 with blogs and news updates
-download 2020 design v12 with edition comparison chart
-download 2020 design live foundation edition
-download 2020 design live essentials edition
-download 2020 design live premium edition
-how to download 2020 design v12 on windows pc
-how to download 2020 design v12 on mac os
-how to download 2020 design v12 on multiple devices
-how to download 2020 design v12 offline installer
-how to download 2020 design v12 latest version
-how to install and activate 2020 design v12 software
-how to update and upgrade to 2020 design v12 software
-how to uninstall and remove 2020 design v12 software
-how to troubleshoot and fix issues with 2020 design v12 software
-how to contact customer service for 2020 design v12 software
-how to use and learn from 2020 design v12 software
-how to create and share designs with 2020 design v12 software
-how to import and export files with 2020 design v12 software
-how to customize and optimize settings with 2020 design v12 software
-how to access and manage catalogs with 2020 design v12 software
-benefits and features of downloading 2020 design v12 software
-pros and cons of downloading 2020 design v12 software
-alternatives and competitors of downloading 2020 design v12 software
-discounts and coupons for downloading 2020 design v12 software
-requirements and specifications for downloading 2020 design v12 software

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

How to download and install 2020 Design 12

-

To download and install 2020 Design 12, you need to have a valid license and an active subscription. You also need to have an account on the 2020 website. If you don't have one, you can create one for free. Here are the steps to download and install 2020 Design 12:

-

Steps to download 2020 Design 12

-
    -
  1. Go to the 2020 website and log in with your username and password.
  2. -
  3. Click on the Downloads tab and select 2020 Design Live Desktop Solution (2020 Design 12).
  4. -
  5. Select the language of your choice and click on the Download Now button.
  6. -
  7. A pop-up window will appear asking you to save the file. Choose a location on your computer where you want to save the file and click on the Save File button.
  8. -
  9. The file will start downloading. It may take some time depending on your internet speed. You can check the progress of the download on your browser.
  10. -
  11. Once the download is complete, you will see a message saying that the file is ready to be opened. Click on the Open File button.
  12. -
  13. A security warning may appear asking you if you want to run the file. Click on the Run Anyway button.
  14. -
  15. The 2020 Design 12 installer will launch. Follow the instructions on the screen to complete the installation.
  16. -
  17. You may need to restart your computer after the installation is finished.
  18. -
  19. You can now launch 2020 Design 12 from your desktop or start menu.
Steps to install 2020 Design 12

    -

    To install 2020 Design 12, you need to have a valid license key and an active subscription. You also need to have an internet connection to activate the software. Here are the steps to install 2020 Design 12:

    -
      -
    1. After downloading the file, double-click on it to launch the installer.
    2. -
    3. A welcome screen will appear. Click on the Next button.
    4. -
    5. A license agreement screen will appear. Read the terms and conditions and check the box to accept them. Click on the Next button.
    6. -
    7. A destination folder screen will appear. Choose a location on your computer where you want to install the software. You can use the default location or browse for a different one. Click on the Next button.
    8. -
    9. A start menu folder screen will appear. Choose a name for the folder where you want to create shortcuts for the software. You can use the default name or type a different one. Click on the Next button.
    10. -
    11. A ready to install screen will appear. Review your choices and click on the Install button.
    12. -
    13. The installation will begin. It may take some time depending on your computer speed. You can check the progress of the installation on the screen.
    14. -
    15. Once the installation is complete, you will see a message saying that 2020 Design 12 has been successfully installed. Click on the Finish button.
    16. -
    17. The software will launch automatically. You will see a login screen where you need to enter your username and password that you used to create your account on the 2020 website. Click on the Login button.
    18. -
    19. You will see an activation screen where you need to enter your license key that you received when you purchased the software. Click on the Activate button.
    20. -
    21. You will see a confirmation screen saying that your software has been activated. Click on the OK button.
    22. -
    23. You can now start using 2020 Design 12 to create your kitchen and bathroom designs.
How to use 2020 Design 12

      -

      Now that you have downloaded and installed 2020 Design 12, you are ready to use it to create your kitchen and bathroom designs. 2020 Design 12 is a user-friendly and intuitive software that will guide you through the design process step by step. Here are some tips and tricks for using 2020 Design 12:

      -

      Tips and tricks for using 2020 Design 12

      - -

      Examples of designs created with 2020 Design 12

      -

      To inspire you and show you what you can do with 2020 Design 12, here are some examples of kitchen and bathroom designs created with this software:

      -
Operating System: Windows 10 (64-bit)
Processor: Intel Core i5 or higher
Memory: 8 GB RAM or higher
Hard Disk Space: 10 GB or higher
Graphics Card: NVIDIA GeForce GTX 1050 or higher
Internet Connection: High-speed broadband connection
Screen Resolution: 1920 x 1080 or higher
Mouse: 3-button mouse with scroll wheel
Keyboard: Standard keyboard with numeric keypad
Printer: Color printer (optional)
- - - - - - - - - - -
- Kitchen design with white cabinets and blue backsplash
- Kitchen design with dark wood cabinets and marble countertop
- Kitchen design with gray cabinets and yellow accents
- Bathroom design with white vanity and blue tiles
- Bathroom design with dark wood vanity and stone wall
- Bathroom design with gray vanity and green plants
-

Conclusion

-

In conclusion, 2020 Design 12 is the best kitchen and bathroom design software that you can use to create stunning designs for your clients. It has a new 64-bit architecture, a new EZ Render rendering engine, a Cloud Configurator, a Shaker Cabinet Door option, a Screen Layout feature, an Annotation Tool, a SketchUp Importer, a Cabinet Door Replacement feature, a Catalog Manager, a Pricing Tool, a Manager Starter Edition, a redesigned User Interface, an improved User Experience, and improved User Support. It also gives you access to the largest selection of manufacturer catalogs online.

-

To download 2020 Design 12, you need to have a valid license and an active subscription. You also need to have an account on the 2020 website. You can follow the steps that we have explained in this article to download and install the software. You can also use the tips and tricks that we have shared to use the software effectively. You can also check out the examples of designs that we have shown to inspire you and see what you can do with 2020 Design 12.

-

We hope that this article has helped you learn how to download 2020 Design 12 and how to use it to create amazing kitchen and bathroom designs. If you have any questions or feedback, please feel free to contact us or leave a comment below. We would love to hear from you!

-

FAQs

-

Here are some of the most frequently asked questions about 2020 Design 12:

-
    -
  1. How much does 2020 Design 12 cost?
  2. -

    2020 Design 12 is available as a subscription-based software. The price depends on the type and duration of the subscription that you choose. You can visit the 2020 website to see the different subscription options and prices.

    -
  3. Can I use 2020 Design 12 on a Mac?
  4. -

    2020 Design 12 is compatible with Windows 10 (64-bit) operating system only. If you want to use it on a Mac, you need to install a Windows emulator, such as Parallels Desktop or Boot Camp, on your Mac.

    -
  5. Can I use 2020 Design 12 offline?
  6. -

    2020 Design 12 is a desktop solution that can be used offline. However, you need to have an internet connection to activate the software, download catalogs, use the Cloud Configurator, and access online support.

    -
  7. Can I import and export files from other software into 2020 Design 12?
  8. -

    Yes, you can import and export files from other software into 2020 Design 12. You can import files in DWG, DXF, SKP, JPG, PNG, BMP, TIF, GIF, PDF, and CSV formats. You can export files in DWG, DXF, SKP, JPG, PNG, BMP, TIF, GIF, PDF, CSV, XML, and HTML formats.

    -
  9. Can I share my designs with my clients using 2020 Design 12?
  10. -

    Yes, you can share your designs with your clients using 2020 Design 12. You can send them renderings, panoramas, floor plans, elevations, perspectives, and reports via email or social media. You can also use the 2020 Cloud Viewer, a free online tool that lets you share your designs in an interactive way.

    -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/APKPure 3 The Android App Store that Saves You Time Space and Data.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/APKPure 3 The Android App Store that Saves You Time Space and Data.md deleted file mode 100644 index bb7e0cb5158febe16897e3f6e116ee1db5ce4778..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/APKPure 3 The Android App Store that Saves You Time Space and Data.md +++ /dev/null @@ -1,124 +0,0 @@ - -

APKPure 3: A Comprehensive Guide

-

If you are an Android user, you might be familiar with Google Play Store, the official app store for Android devices. But did you know that there are other app stores that offer different apps and games that you might not find on Google Play? One of them is APKPure, a popular alternative app store that has been around since 2014.

-

APKPure is a website and an app that allows you to download and install Android apps and games from various sources. You can find apps that are not available in your region, apps that are discontinued or removed from Google Play, apps that have faster updates or older versions, and more. You can also discover new and upcoming apps and games, follow your favorite ones, and join a community of Android enthusiasts.

-

apkpure 3


DOWNLOAD - https://urlin.us/2uSYa3



-

However, using APKPure also comes with some risks and challenges. Since APKPure is not an official app store, it does not have the same security and quality standards as Google Play. You might encounter apps that are infected with malware or adware, apps that are illegal or infringe copyrights, apps that are outdated or incompatible with your device, and more. You also need to enable unknown sources on your device settings to install apps from APKPure, which can expose you to potential threats.

-

In this article, we will give you a comprehensive guide on APKPure 3, the latest version of the app store. We will cover its features, benefits, drawbacks, alternatives, and more. We will also provide some tips and recommendations on how to use APKPure safely and effectively.

-

Features of APKPure 3

-

APKPure 3 is the latest version of the app store that was released in September 2020. It has some new features and improvements that make it more user-friendly and convenient. Here are some of the features of APKPure 3:

- -

Benefits of APKPure 3

-

APKPure 3 has many benefits for Android users who want to explore more apps and games beyond Google Play. Here are some of the benefits of using APKPure 3:

-

apkpure 3 download
-apkpure 3 apk
-apkpure 3 app store
-apkpure 3 update
-apkpure 3 install
-apkpure 3 mod apk
-apkpure 3 for pc
-apkpure 3 for android
-apkpure 3 latest version
-apkpure 3 free download
-apkpure 3 pro apk
-apkpure 3 old version
-apkpure 3 online
-apkpure 3 games
-apkpure 3 app download
-apkpure 3 premium apk
-apkpure 3 downloader
-apkpure 3 for ios
-apkpure 3 beta
-apkpure 3 review
-apkpure 3 alternative
-apkpure 3 modded games
-apkpure 3 cracked apps
-apkpure 3 for firestick
-apkpure 3 lite apk
-apkpure 3 region free apk
-apkpure 3 safe
-apkpure 3 pubg mobile
-apkpure 3 fortnite
-apkpure 3 minecraft
-apkpure 3 gta san andreas
-apkpure 3 roblox
-apkpure 3 among us
-apkpure 3 call of duty mobile
-apkpure 3 clash of clans
-apkpure 3 pokemon go
-apkpure 3 brawl stars
-apkpure 3 free fire
-apkpure 3 subway surfers
-apkpure 3 candy crush saga
-apkpure 3 zoom cloud meetings
-apkpure 3 tiktok
-apkpure 3 instagram
-apkpure 3 whatsapp messenger
-apkpure 3 facebook lite
-apkpure 3 youtube vanced
-apkpure 3 netflix mod apk

- -

Drawbacks of APKPure 3

-

APKPure 3 is not without its drawbacks. As an unofficial app store, it has some risks and challenges that you should be aware of before using it. Here are some of the drawbacks of using APKPure 3:

- -

Alternatives to APKPure 3

-

If you are looking for other app stores that offer similar or better features than APKPure 3, you have plenty of options to choose from. Here are some of the best alternatives to APKPure 3:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Name | Description | Pros | Cons
APKMirror: A website that hosts free Android apps and games from various sources.
Pros: Safe downloading; ability to get old versions; no account needed.
Cons: No native Android app; accepts very few new APKs; no auto-update feature.
F-Droid: An app store that only offers free and open source Android apps and games.
Pros: Privacy focused; ad-free; no registration required; crowdsourced; no tracking.
Cons: Limited selection; no modded or patched apps; slow updates.
Aptoide: An app store that allows users to create and manage their own app stores.
Pros: Free; open source version available; large user base; customizable.
Cons: Illegal apps; not all apps are safe; most apps are outdated.
Aurora Store: An app store that allows users to download apps from Google Play anonymously.
Pros: Privacy focused; ad-free; no region locking; no account needed.
Cons: Not all apps are available; some apps might not work properly; no auto-update feature.
-

Conclusion

-

APKPure 3 is a great app store for Android users who want to explore more apps and games beyond Google Play. It offers a lot of features, benefits, and options that can enhance your Android experience. However, it also has some drawbacks and risks that you should be careful of before using it. You should always check the source, signature, and permission of the apps you download from APKPure, and use a reliable antivirus or security app to protect your device. You should also respect the rights and policies of the app developers and publishers, and avoid downloading or using illegal or infringing apps.

-

If you are looking for other app stores that offer similar or better features than APKPure 3, you can try APKMirror, F-Droid, Aptoide, or Aurora Store. They are some of the best alternatives to APKPure 3 that you can find online. You can compare their pros and cons and choose the one that suits your needs and preferences.

-

We hope this article has given you a comprehensive guide on APKPure 3 and its alternatives. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading!

-

FAQs

-

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Best Ways to Download YouTube Videos Reddit Users Recommend.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Best Ways to Download YouTube Videos Reddit Users Recommend.md deleted file mode 100644 index ee2376dc1e1bfe62ec58f96a4a3c28b640399794..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Best Ways to Download YouTube Videos Reddit Users Recommend.md +++ /dev/null @@ -1,183 +0,0 @@ -
-

Best Way to Download YouTube Videos Reddit

-

Do you want to download YouTube videos that are posted on Reddit? Maybe you want to watch them offline, share them with your friends, or edit them for your own purposes. Whatever the reason, downloading YouTube videos from Reddit is not as hard as you might think. In this article, we will show you the best tools to download YouTube videos from Reddit, whether you are using a web browser, a mobile device, or a desktop app.

-

Introduction

-

Why download YouTube videos from Reddit?

-

Reddit is one of the most popular social media platforms in the world, with millions of users sharing and discussing all kinds of topics. One of the most common types of content on Reddit is YouTube videos, which can be found in various subreddits, such as r/videos, r/funny, r/educationalvideos, and many more.

-

best way to download youtube videos reddit


DOWNLOAD ○○○ https://urlin.us/2uSYnv



-

Downloading YouTube videos from Reddit can have many benefits, such as:

- -

What are the best tools to download YouTube videos from Reddit?

-

There are many tools that claim to download YouTube videos from Reddit, but not all of them are reliable, safe, or easy to use. Some of them may not work properly, contain malware, or have annoying pop-ups. To help you avoid these problems, we have selected the best tools to download YouTube videos from Reddit, based on their features, performance, and user reviews. We have divided them into three categories: web-based video downloaders, mobile apps, and desktop apps.

-

Web-based video downloaders

-

RedditSave

-

RedditSave is a free website that lets you download videos from any device. And unlike some downloader sites, it saves videos with the audio included. It works with YouTube and many other video platforms that are posted on Reddit.

-

How to use RedditSave

-
    -
  1. Go to [Reddit] and find the post that contains the YouTube video you want to download.
  2. -
  3. Copy the URL of the post by right-clicking on it and selecting "Copy link address".
  4. -
  5. Go to [RedditSave] and paste the URL in the search box.
  6. -
  7. Click on "Download" and choose the quality and format you want.
  8. -
  9. Click on "Download" again and save the video file on your device.
  10. -
-

Pros and cons of RedditSave

- -

Viddit.red

-

Viddit.red is another free website that allows you to download YouTube videos from Reddit with a simple interface. It also supports other video platforms, such as Vimeo, Dailymotion, Twitch, and more. It downloads videos with sound and offers different quality options.

-

How to use Viddit.red

-
    -
  1. Go to [Reddit] and find the post that contains the YouTube video you want to download.
  2. -
  3. Copy the URL of the post by right-clicking on it and selecting "Copy link address".
  4. -
  5. Go to [Viddit.red] and paste the URL in the search box.
  6. -
  7. Click on "Download" and choose the quality you want.
  8. -
  9. Click on "Download" again and save the video file on your device.
  10. -
-

Pros and cons of Viddit.red

- -

Mobile apps

-

Slide for Reddit

-

If you are using an Android device, you can download YouTube videos from Reddit using Slide for Reddit, a free and open-source app that lets you browse Reddit in a smooth and customizable way. It has a built-in video downloader that works with YouTube and other video platforms. It also has many other features, such as offline mode, night mode, multi-account support, and more.

-

How to use Slide for Reddit

-
    -
  1. Download and install Slide for Reddit from [Google Play Store].
  2. -
  3. Open the app and log in to your Reddit account or browse as a guest.
  4. -
  5. Find the post that contains the YouTube video you want to download.
  6. -
  7. Tap on the three-dot menu icon at the top right corner of the post and select "Download content".
  8. -
  9. Select the quality and format you want and tap on "Download".
  10. -
  11. The video file will be saved in your device's gallery or file manager.
  12. -
-

Pros and cons of Slide for Reddit

- -

SaveVideo bot

-

If you are using an iOS device, you can download YouTube videos from Reddit using SaveVideo bot, a free Telegram bot that lets you download videos from any website. It works with YouTube and other video platforms that are posted on Reddit. It downloads videos with sound and offers different quality options.

-

How to use SaveVideo bot

-
    -
  1. Download and install Telegram from [App Store].
  2. -
  3. Open the app and create an account or log in to your existing account.
  4. -
  5. Go to [Reddit] and find the post that contains the YouTube video you want to download.
  6. -
  7. Copy the URL of the post by tapping on it and selecting "Share" then "Copy".
  8. -
  9. Go to Telegram and search for [@SaveVideoBot] or click on this [link].
  10. -
  11. Paste the URL in the chat box and send it to the bot.
  12. -
  13. The bot will reply with a list of quality options. Tap on the one you want.
  14. -
  15. The bot will send you the video file. Tap on it and select "Save to Camera Roll".
  16. -
-

Pros and cons of SaveVideo bot

- -

Conclusion

-

Summary of the main points

-

In this article, we have shown you the best way to download YouTube videos from Reddit, using different tools for different devices. We have compared the pros and cons of each tool, and explained how to use them step by step. Whether you want to use a web-based video downloader, a mobile app, or a desktop app, you can find the best option for your needs and preferences.

-

Call to action

-

Now that you know how to download YouTube videos from Reddit, why not give it a try and see for yourself how easy and convenient it is? You can enjoy watching your favorite videos offline, share them with your friends, or edit them for your own purposes. Just remember to respect the rights of the original creators and follow the terms of service of each platform. Happy downloading!

-

How to download youtube videos with sound from reddit
-Reddit video downloader online free
-Best software for downloading youtube videos from reddit
-RedditSave: Download reddit videos with audio
-Stacher: A customizable GUI for YT-DLP
-yt-dlp: A command-line tool for downloading youtube videos
-Slide for Reddit: A mobile app that can download reddit videos
-/u/SaveVideo bot: A reddit bot that can download videos from any subreddit
-Download youtube videos in 1080p from reddit
-How to use FFMPEG to merge video and audio from youtube downloads
-Jdownloader2: A desktop app that can download youtube videos
-How to avoid YT throttling when downloading youtube videos
-How to download your own youtube videos from reddit
-YouTube Premium: A paid service that allows offline viewing of youtube videos
-How to insert "pp" after "youtube" to download videos
-How to setup youtube-dl-gui for downloading youtube videos
-How to use the command line to download youtube videos with yt-dlp
-How to download a portion of a youtube video from reddit
-How to automatically rename the output files of youtube downloads
-How to download videos copied to your clipboard with Stacher
-How to use multi-threading to download multiple youtube videos simultaneously
-How to download playlists from youtube using yt-dlp or Stacher
-How to use the Something Not Working tab in Stacher to troubleshoot issues
-How to choose the best video and audio quality for youtube downloads
-How to use the extra options in Stacher for more customization
-How to install yt-dlp and yt-dlg on Windows, Mac, or Linux
-How to use the -x option in yt-dlp to only download audio from youtube videos
-How to use the /r/youtubedl subreddit for more information and support
-How to use the wikiHow guide on The 7 Best Free Tools to Download Reddit Videos with Sound
-How to use the Business Insider guide on 2 Ways to Download Any Reddit Video
-How to use the /r/software subreddit for more recommendations and reviews on youtube video downloaders
-How to use the GitHub repository of yt-dlp for more details and updates on the tool
-How to use the GitHub repository of yt-dlg for more details and updates on the GUI
-How to use the GitHub repository of jely2002/youtube-dl-gui for another GUI option for yt-dlp or youtube-dl
-How to use the GitHub repository of oleksis/youtube-dl-gui for another GUI option for yt-dlp or youtube-dl
-How to use the Stacher subreddit's Wiki for more instructions and tips on using Stacher
-How to use the Slide for Reddit app's settings and features for downloading reddit videos
-How to use the /u/SaveVideo bot's commands and options for downloading reddit videos
-How to use the Jdownloader2 app's settings and features for downloading youtube videos
-How to use the YouTube Premium app's settings and features for offline viewing of youtube videos

-

FAQs

-

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Bitcoin Mining Simulator Idle Clicker Tycoon Mod APK.md b/spaces/1phancelerku/anime-remove-background/Bitcoin Mining Simulator Idle Clicker Tycoon Mod APK.md deleted file mode 100644 index 313ac70070225796e7d1a8a4f7ec350879b598c6..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Bitcoin Mining Simulator Idle Clicker Tycoon Mod APK.md +++ /dev/null @@ -1,162 +0,0 @@ -
-

Bitcoin Mining Idle Tycoon Mod APK: A Fun and Educational Game for Crypto Enthusiasts

-

Are you interested in bitcoin mining and cryptocurrency? Do you want to learn how to mine bitcoins, trade them, and grow your virtual business? If yes, then you might want to check out Bitcoin Mining Idle Tycoon Mod APK, a fun and educational game that simulates the process of bitcoin mining. In this article, we will tell you what this game is, how to play it, what are its benefits and challenges, and how to download and install the mod apk version. Read on to find out more.

-

What is Bitcoin Mining Idle Tycoon Mod APK?

-

A brief introduction to the game and its features

-

Bitcoin Mining Idle Tycoon Mod APK is a modified version of Bitcoin Mining Idle Tycoon, a game developed by Ernest Trosclair. The game is an idle clicker tycoon game that lets you start your own bitcoin mining business, hire workers, upgrade your equipment, trade your currency, and get rich. The game has many features that make it realistic and engaging, such as:

-

bitcoin mining idle tycoon mod apk


Download Zip: https://jinyurl.com/2uNOzK



- -

How to download and install the mod apk version

-

The mod apk version of Bitcoin Mining Idle Tycoon is a modified version that gives you some extra benefits, such as:

- -

To download and install the mod apk version, you need to follow these steps:

-
  1. Go to this link and download the mod apk file.
  2. Enable unknown sources in your device settings.
  3. Locate the downloaded file in your file manager and tap on it.
  4. Follow the installation instructions on the screen.
  5. Launch the game and enjoy.

How to Play Bitcoin Mining Idle Tycoon Mod APK?

-

The basics of bitcoin mining and the game mechanics

-

Bitcoin mining is the process of creating new bitcoins by solving complex mathematical problems that verify transactions on the blockchain network, a globally distributed public ledger consisting of a giant list of timestamped transactions. The network relies on the consensus of the miners to agree on the current state of the ledger and to prevent double-spending, which is when someone tries to spend the same bitcoin twice.
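To make the "complex mathematical problems" concrete, here is a toy proof-of-work loop in Python. It only illustrates the hashing puzzle described above; it is not how the game or the real Bitcoin network implements mining, and the block data and difficulty values are made up.

```python
# Toy proof-of-work: find a nonce whose SHA-256 hash starts with `difficulty` zero hex digits.
import hashlib

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest  # this pair is the "solution" other nodes can verify cheaply
        nonce += 1

nonce, digest = mine("list of timestamped transactions", difficulty=4)
print(f"nonce={nonce}, hash={digest}")
```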

-

The different upgrades, workers, and equipment available in the game

-

In Bitcoin Mining Idle Tycoon Mod APK, you can upgrade your mining business by hiring more workers, buying better equipment, and increasing your hash rate. The hash rate is the measure of how fast your computer can solve the algorithms and earn bitcoins. The higher your hash rate, the more bitcoins you can mine.
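As a rough sketch of why the hash rate matters, expected earnings scale linearly with it when the puzzle difficulty is fixed. The function and every number below are invented for illustration; the game does not publish a formula like this.

```python
# Hypothetical earnings model: coins per hour grow in proportion to hashes per second.
def expected_coins_per_hour(hash_rate_hps: float, hashes_per_coin: float) -> float:
    return hash_rate_hps * 3600 / hashes_per_coin

basic_rig = expected_coins_per_hour(1_000, hashes_per_coin=1e9)      # e.g. a single worker
upgraded_rig = expected_coins_per_hour(10_000, hashes_per_coin=1e9)  # e.g. after a graphics card
print(f"basic: {basic_rig:.4f} coins/h, upgraded: {upgraded_rig:.4f} coins/h")
```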

-

Some of the upgrades, workers, and equipment you can get in the game are:

-

bitcoin mining idle tycoon hack apk
-bitcoin mining idle tycoon unlimited coins apk
-bitcoin mining idle tycoon mod apk download
-bitcoin mining idle tycoon cheat apk
-bitcoin mining idle tycoon latest mod apk
-bitcoin mining idle tycoon mod apk android 1
-bitcoin mining idle tycoon mod apk revdl
-bitcoin mining idle tycoon mod apk 4.27.0
-bitcoin mining idle tycoon mod apk free shopping
-bitcoin mining idle tycoon mod apk happymod
-bitcoin mining idle tycoon mod apk 2023
-bitcoin mining idle tycoon mod apk offline
-bitcoin mining idle tycoon mod apk no ads
-bitcoin mining idle tycoon mod apk unlimited money
-bitcoin mining idle tycoon mod apk rexdl
-bitcoin mining idle tycoon mod apk 4.26.0
-bitcoin mining idle tycoon mod apk 4.25.0
-bitcoin mining idle tycoon mod apk 4.24.0
-bitcoin mining idle tycoon mod apk 4.23.0
-bitcoin mining idle tycoon mod apk 4.22.0
-bitcoin mining idle tycoon pro mod apk
-bitcoin mining idle tycoon premium mod apk
-bitcoin mining idle tycoon vip mod apk
-bitcoin mining idle tycoon mega mod apk
-bitcoin mining idle tycoon super mod apk
-download game bitcoin mining idle tycoon mod apk
-download bitcoin mining idle tycoon hack mod apk
-download bitcoin mining idle tycoon cheat mod apk
-download bitcoin mining idle tycoon unlimited money mod apk
-download bitcoin mining idle tycoon latest version mod apk
-how to install bitcoin mining idle tycoon mod apk
-how to play bitcoin mining idle tycoon mod apk
-how to download bitcoin mining idle tycoon mod apk on pc
-how to update bitcoin mining idle tycoon mod apk
-how to get free coins in bitcoin mining idle tycoon mod apk
-best tips for bitcoin mining idle tycoon mod apk
-best strategy for bitcoin mining idle tycoon mod apk
-best guide for bitcoin mining idle tycoon mod apk
-best cheats for bitcoin mining idle tycoon mod apk
-best hacks for bitcoin mining idle tycoon mod apk

| Upgrade | Description | Cost |
| --- | --- | --- |
| Worker | A person who works on your mining rig and earns bitcoins for you | $100 |
| Graphics Card | A device that enhances your computer's performance and increases your hash rate | $500 |
| Cooling Fan | A device that cools down your computer and prevents overheating and damage | $200 |
| Power Supply | A device that provides electricity to your computer and equipment | $300 |
| ASIC Miner | A specialized device that is designed for bitcoin mining and has a very high hash rate | $10,000 |
| Data Center | A large facility that houses many computers and equipment for bitcoin mining | $100,000 |
| Solar Panel | A device that generates renewable energy from the sun and reduces your electricity cost | $50,000 |
| Quantum Computer | A futuristic device that can solve algorithms in seconds and has an enormous hash rate | $1,000,000 |

The trade market and the strategies to sell or keep mined currency

One of the most important aspects of Bitcoin Mining Idle Tycoon Mod APK is the trade market, where you can sell or keep your mined currency. The trade market shows you the current price of bitcoin in US dollars, as well as the historical price chart. You can choose to sell your bitcoins instantly at the current price, or wait for a better price in the future. However, you also have to consider the risk of price fluctuations and market crashes.
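One simple way to reason about the sell-or-wait decision is a margin rule: sell only when the market price clears your average cost of mining by a target percentage. The helper below is a hypothetical sketch of that reasoning, not a feature of the game, and the prices are made up.

```python
# Hypothetical sell rule: sell when price exceeds average mining cost by a target margin.
def should_sell(current_price: float, average_cost: float, target_margin: float = 0.2) -> bool:
    return current_price >= average_cost * (1 + target_margin)

print(should_sell(current_price=30_000, average_cost=22_000))  # True: about 36% above cost
print(should_sell(current_price=23_000, average_cost=22_000))  # False: only about 5% above cost
```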

Some of the strategies you can use to sell or keep your mined currency are:

What are the Benefits of Bitcoin Mining Idle Tycoon Mod APK?

The educational value of learning about bitcoin mining and cryptocurrency

One of the main benefits of Bitcoin Mining Idle Tycoon Mod APK is that it can teach you about bitcoin mining and cryptocurrency in a fun and interactive way. You can learn about how bitcoin works, how it is created, how it is traded, and how it is secured. You can also learn about the history and evolution of bitcoin, as well as its advantages and disadvantages. By playing this game, you can gain a better understanding of one of the most innovative and influential technologies of our time.

The entertainment value of managing a virtual mining business and getting rich

Another benefit of Bitcoin Mining Idle Tycoon Mod APK is that it can provide you with hours of entertainment and satisfaction. You can enjoy managing your own virtual mining business, hiring workers, buying equipment, upgrading your facilities, and earning bitcoins. You can also compete with other players on the global leaderboard and see how you rank among the best bitcoin miners in the world. You can also have fun with the humorous and witty dialogues, graphics, and sound effects in the game, and feel the thrill of getting rich and achieving your goals.
-

The mod apk features that enhance the gaming experience and remove ads

-

A final benefit of Bitcoin Mining Idle Tycoon Mod APK is that it offers some extra features that enhance the gaming experience and remove ads. The mod apk version gives you unlimited money, which means you can buy any upgrade, worker, or equipment you want without worrying about the cost. You can also modify the advertising gain reward, which means you can get more bitcoins from watching ads. Moreover, you can enjoy the game without any annoying ads that interrupt your gameplay or consume your data.

-

What are the Challenges of Bitcoin Mining Idle Tycoon Mod APK?

-

The increasing difficulty and competition of mining as the game progresses

-

One of the challenges of Bitcoin Mining Idle Tycoon Mod APK is that it becomes more difficult and competitive as the game progresses. The game follows the real-life scenario of bitcoin mining, which means that the algorithms become harder to solve over time, and the reward for each block decreases. This means that you need to invest more money and resources to maintain your hash rate and profits. You also need to compete with other players who are also mining bitcoins and trying to get a share of the limited supply.
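The shrinking reward mirrors real Bitcoin, where the block subsidy started at 50 BTC and halves every 210,000 blocks (roughly every four years). Whether the game follows the same schedule is an assumption; the snippet below simply shows the real-world arithmetic.

```python
# Real-world halving schedule: the block subsidy is cut in half every 210,000 blocks.
def block_reward(halvings: int, initial_reward: float = 50.0) -> float:
    return initial_reward / (2 ** halvings)

for h in range(5):
    print(f"after {h} halvings: {block_reward(h)} BTC per block")
# Prints 50.0, 25.0, 12.5, 6.25, 3.125
```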

-

The risk of cryptojacking and malware from downloading untrusted sources

-

Another challenge of Bitcoin Mining Idle Tycoon Mod APK is that it carries a risk of cryptojacking and malware when downloaded from untrusted sources. Cryptojacking is a malicious practice where hackers use your device's processing power to mine cryptocurrency without your consent or knowledge. Malware is software that can harm your device or data by stealing, deleting, encrypting, or spying on them. These threats can affect your device's performance, battery life, security, and privacy. Therefore, you need to be careful when downloading and installing the mod apk version from unknown sources, and always scan your device for any potential infections.

-

The legal and ethical issues of bitcoin mining and its environmental impact

-

A final challenge of Bitcoin Mining Idle Tycoon Mod APK is that it raises some legal and ethical issues of bitcoin mining and its environmental impact. Bitcoin mining is not regulated or controlled by any central authority, which means that it can be used for illegal or unethical purposes, such as money laundering, tax evasion, terrorism financing, or drug trafficking. Bitcoin mining also consumes a lot of electricity and generates a lot of carbon emissions, which contributes to global warming and climate change. Therefore, you need to be aware of these issues and consider their implications when playing this game.

-

Conclusion

-

Bitcoin Mining Idle Tycoon Mod APK is a fun and educational game that simulates the process of bitcoin mining. You can learn how to mine bitcoins, trade them, and grow your virtual business. You can also enjoy managing your own mining business, hiring workers, buying equipment, upgrading your facilities, and earning bitcoins. You can also benefit from the mod apk features that give you unlimited money, modify advertising gain reward, and remove ads. However, you also need to face some challenges, such as the increasing difficulty and competition of mining, the risk of cryptojacking and malware from downloading untrusted sources, and the legal and ethical issues of bitcoin mining and its environmental impact. If you are interested in bitcoin mining and cryptocurrency, you might want to try this game and see how it works.

-

FAQs

-

Q1: Is Bitcoin Mining Idle Tycoon Mod APK safe to download and play?

-

A1: Bitcoin Mining Idle Tycoon Mod APK is generally safe to download and play if you get it from a trusted source. However, there is always a risk of cryptojacking and malware from downloading untrusted sources. Therefore, you should always scan your device for any potential infections before installing the mod apk version.

-

Q2: How much real money can I earn from playing Bitcoin Mining Idle Tycoon Mod APK?

-

A2: Bitcoin Mining Idle Tycoon Mod APK is a game that simulates bitcoin mining. You cannot earn real money from playing this game. The bitcoins you mine in the game are virtual currency that only exist in the game. However, you can learn about how bitcoin mining works in real life by playing this game.

-

Q3: What are some tips and tricks to succeed in Bitcoin Mining Idle Tycoon Mod APK?

-

A3: Some tips and tricks to succeed in Bitcoin Mining Idle Tycoon Mod APK are:

- -

Q4: What are some alternatives to Bitcoin Mining Idle Tycoon Mod APK?

-

A4: Some alternatives to Bitcoin Mining Idle Tycoon Mod APK are:

- -

Q5: How can I learn more about bitcoin mining and cryptocurrency?

-

A5: Some ways to learn more about bitcoin mining and cryptocurrency are:

-

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Index of Cricket League Mod APK v1.0.5 for Android - Unlimited Coins and Gems.md b/spaces/1phancelerku/anime-remove-background/Download Index of Cricket League Mod APK v1.0.5 for Android - Unlimited Coins and Gems.md deleted file mode 100644 index b1ca0aa3b502c91f0e6d963b6fda110a0be41543..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Index of Cricket League Mod APK v1.0.5 for Android - Unlimited Coins and Gems.md +++ /dev/null @@ -1,124 +0,0 @@ -
-

Index of Cricket League Mod APK: How to Download and Play the Best Cricket Game on Your Android Device

-

Introduction

-

If you are a fan of cricket, you must have heard of Cricket League, one of the most popular and realistic cricket games on the Google Play Store. However, if you want to enjoy the game to the fullest, you might need to spend some real money to unlock premium features, such as unlimited coins, all players unlocked, no ads, and more. That's why many people are looking for the modded version of Cricket League, which gives them access to all these benefits for free.

-

index of cricket league mod apk


Download –––––>>> https://jinyurl.com/2uNTox



-

In this article, we will show you how to download and install Cricket League Mod APK on your Android device, and how to play the game with all the features unlocked. We will also answer some frequently asked questions about the game and the modded file. So, without further ado, let's get started!

-

What is Cricket League Mod APK?

-

Cricket League Mod APK is a modified version of the original Cricket League game, which is developed by Gametion Technologies Pvt Ltd. The modded file has been altered by third-party developers to give users unlimited money, all players unlocked, no ads, and other premium features that are otherwise not available in the official game.

-

With Cricket League Mod APK, you can enjoy playing cricket with your favorite teams and players, without worrying about running out of coins or being interrupted by annoying ads. You can also join different tournaments and leagues, unlock new stadiums and rewards, and experience realistic 3D graphics and sound effects.

-

Why should you download Cricket League Mod APK?

-

There are many reasons why you should download Cricket League Mod APK instead of the original game. Here are some of them:

- -

How to download and install Cricket League Mod APK?

-

Downloading and installing Cricket League Mod APK is very easy and simple. Just follow these steps:

-

Step 1: Find a reliable source for the modded file

-

The first thing you need to do is find a trustworthy website that provides the modded file for Cricket League. You can use Google or any other search engine, and check the reviews and ratings of each website to see if it is safe and secure. Some of the websites that offer Cricket League Mod APK are:

- -

Make sure you download the latest version of the modded file, which is 1.0.9 as of June 2023.

-

Step 2: Enable unknown sources on your device

-

The next thing you need to do is to enable unknown sources on your device. This will allow you to install apps that are not from the Google Play Store. To do this, follow these steps:

-

Cricket League v1.0.5 mod apk download
-How to install Cricket League mod apk on Android
-Cricket League mod apk unlimited money and coins
-Cricket League 3D multiplayer mod apk latest version
-Cricket League mod apk by Miniclip Com
-Cricket League mod apk free download for Android
-Cricket League mod apk offline mode
-Cricket League mod apk hack and cheats
-Cricket League mod apk with all teams unlocked
-Cricket League mod apk no root required
-Cricket League mod apk with real-time commentary
-Cricket League mod apk with realistic graphics and physics
-Cricket League mod apk with custom tournaments and leagues
-Cricket League mod apk with online leaderboards and achievements
-Cricket League mod apk with easy controls and gameplay
-Cricket League mod apk with different game modes and difficulty levels
-Cricket League mod apk with HD quality sound and music
-Cricket League mod apk with daily rewards and challenges
-Cricket League mod apk with in-app purchases and ads removed
-Cricket League mod apk with bug fixes and performance improvements
-Cricket League mod apk for PC and laptop
-Cricket League mod apk for iOS and iPhone
-Cricket League mod apk for Windows 10 and Mac OS
-Cricket League mod apk for Firestick and Smart TV
-Cricket League mod apk for Chromebook and Linux
-Cricket League mod apk reviews and ratings
-Cricket League mod apk tips and tricks
-Cricket League mod apk FAQs and guides
-Cricket League mod apk features and specifications
-Cricket League mod apk comparison and alternatives
-Download link of Cricket League mod apk file
-How to update Cricket League mod apk to the latest version
-How to uninstall Cricket League mod apk from your device
-How to backup and restore your data in Cricket League mod apk
-How to play Cricket League mod apk with friends online
-How to join a clan or create your own in Cricket League mod apk
-How to customize your avatar and team in Cricket League mod apk
-How to earn more money and coins in Cricket League mod apk
-How to unlock new teams and players in Cricket League mod apk
-How to improve your skills and strategy in Cricket League mod apk

-
  1. Go to your device settings and tap on Security or Privacy.
  2. Find the option that says "Unknown sources" or "Install unknown apps" and toggle it on.
  3. A warning message will pop up, telling you that installing apps from unknown sources can harm your device. Tap on OK or Allow to proceed.

You can also enable unknown sources for specific apps, such as your browser or file manager, by tapping on their names and toggling on the option that says allow from this source.

-

Step 3: Download and install the APK file

-

The final step is to download and install the APK file on your device. To do this, follow these steps:

-
  1. Open your browser or file manager and go to the website where you downloaded the modded file.
  2. Tap on the download button or link and wait for the file to be downloaded.
  3. Once the download is complete, tap on the file name or open it with your file manager.
  4. A prompt will appear, asking you if you want to install the app. Tap on Install and wait for the installation to finish.
  5. Once the installation is done, tap on Open or Done to launch the game or exit the installer.

Congratulations! You have successfully downloaded and installed Cricket League Mod APK on your Android device. You can now enjoy playing the game with all the features unlocked.
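For readers comfortable with Android's developer tools, the same APK can also be sideloaded from a computer with adb instead of a file manager. This is a hedged sketch only: the file name is a placeholder, and it assumes USB debugging is enabled on the phone and adb is installed on the computer.

```python
# Sideload an APK over USB by calling adb from Python; `-r` replaces an existing install.
import subprocess

result = subprocess.run(
    ["adb", "install", "-r", "cricket-league-mod.apk"],  # placeholder file name
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```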

-

How to play Cricket League Mod APK?

-

Playing Cricket League Mod APK is very easy and fun. Here are some tips on how to play the game:

-

Choose your team and players

-

The first thing you need to do is to choose your team and players. You can select from different countries, such as India, Australia, England, Pakistan, South Africa, etc. You can also create your own custom team with your favorite players. You can edit their names, skills, appearances, etc.

-

You can also unlock all the players in the game, including legendary cricketers like Sachin Tendulkar, Virat Kohli, MS Dhoni, AB de Villiers, and more. You can also create your own dream team with your favorite players.

-

Play different modes and tournaments

-

The next thing you need to do is to play different modes and tournaments in the game. You can choose from different options, such as Quick Match, World Cup, IPL, PSL, BBL, CPL, and more. You can also play online with your friends or other players from around the world.

-

You can also customize your match settings, such as overs, difficulty level, toss, pitch condition, weather, etc. You can also view your match statistics, such as scorecard, wagon wheel, man of the match, etc.

-

Unlock new stadiums and rewards

-

The last thing you need to do is to unlock new stadiums and rewards as you progress in the game. You can play in different venues like Eden Gardens, Wankhede Stadium, Lord's, MCG, SCG, etc. You can also win trophies, medals, badges, and other prizes.

-

You can also unlock new features and items in the game store using your unlimited coins. You can buy new bats, balls, gloves, helmets, shoes, etc. You can also upgrade your skills and abilities using your coins.

-

Enjoy realistic graphics and sound effects

-

The best thing about Cricket League Mod APK is that it has realistic 3D graphics and sound effects that make you feel like you are playing in a real cricket match. You can also customize your camera angles, graphics settings, sound effects, etc. You can also enjoy the commentary and crowd cheering that add to the excitement of the game.

-

Conclusion

-

Cricket League Mod APK is a great game for cricket lovers who want to enjoy the game with all the features unlocked. You can download and install the modded file on your Android device easily and safely, and play the game with unlimited coins, all players unlocked, no ads, and other premium features. You can also play different modes and tournaments, unlock new stadiums and rewards, and enjoy realistic graphics and sound effects.

-

If you are looking for a fun and realistic cricket game on your Android device, you should definitely try Cricket League Mod APK. It is one of the best cricket games on the Google Play Store, and it will give you hours of entertainment and enjoyment.

-

FAQs

-

Here are some frequently asked questions about Cricket League Mod APK:

-

Q: Is Cricket League Mod APK safe to download and install?

-

A: Yes, Cricket League Mod APK is safe to download and install, as long as you get it from a reliable source. However, you should always be careful when downloading apps from unknown sources, as they may contain viruses or malware that can harm your device. You should also scan the file with an antivirus app before installing it.

-

Q: Do I need to root my device to use Cricket League Mod APK?

-

A: No, you do not need to root your device to use Cricket League Mod APK. The modded file works on both rooted and non-rooted devices. However, some features may require root access, such as changing the IMEI number or spoofing your location.

-

Q: Will I get banned from playing online if I use Cricket League Mod APK?

-

A: No, you will not get banned from playing online if you use Cricket League Mod APK. The modded file has an anti-ban feature that prevents the game server from detecting your modded file. However, you should not abuse the modded features or cheat in online matches, as that may ruin the fun for other players.

-

Q: How can I update Cricket League Mod APK?

-

A: You can update Cricket League Mod APK by downloading the latest version of the modded file from the same website where you got it. You can also check for updates within the game settings. However, you should always back up your game data before updating, as some updates may cause compatibility issues or data loss.

-

Q: How can I uninstall Cricket League Mod APK?

-

A: You can uninstall Cricket League Mod APK by following these steps:

-
  1. Go to your device settings and tap on Apps or Applications.
  2. Find and tap on Cricket League Mod APK.
  3. Tap on Uninstall and confirm your action.
  4. Wait for the app to be uninstalled from your device.

-
-
\ No newline at end of file diff --git a/spaces/4Taps/SadTalker/src/facerender/modules/generator.py b/spaces/4Taps/SadTalker/src/facerender/modules/generator.py deleted file mode 100644 index 1b5f8d26b18a1fa5cb1d8cbe9d1fa2413bf39f01..0000000000000000000000000000000000000000 --- a/spaces/4Taps/SadTalker/src/facerender/modules/generator.py +++ /dev/null @@ -1,251 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F -from src.facerender.modules.util import ResBlock2d, SameBlock2d, UpBlock2d, DownBlock2d, ResBlock3d, SPADEResnetBlock -from src.facerender.modules.dense_motion import DenseMotionNetwork - - -class OcclusionAwareGenerator(nn.Module): - """ - Generator follows NVIDIA architecture. - """ - - def __init__(self, image_channel, feature_channel, num_kp, block_expansion, max_features, num_down_blocks, reshape_channel, reshape_depth, - num_resblocks, estimate_occlusion_map=False, dense_motion_params=None, estimate_jacobian=False): - super(OcclusionAwareGenerator, self).__init__() - - if dense_motion_params is not None: - self.dense_motion_network = DenseMotionNetwork(num_kp=num_kp, feature_channel=feature_channel, - estimate_occlusion_map=estimate_occlusion_map, - **dense_motion_params) - else: - self.dense_motion_network = None - - self.first = SameBlock2d(image_channel, block_expansion, kernel_size=(7, 7), padding=(3, 3)) - - down_blocks = [] - for i in range(num_down_blocks): - in_features = min(max_features, block_expansion * (2 ** i)) - out_features = min(max_features, block_expansion * (2 ** (i + 1))) - down_blocks.append(DownBlock2d(in_features, out_features, kernel_size=(3, 3), padding=(1, 1))) - self.down_blocks = nn.ModuleList(down_blocks) - - self.second = nn.Conv2d(in_channels=out_features, out_channels=max_features, kernel_size=1, stride=1) - - self.reshape_channel = reshape_channel - self.reshape_depth = reshape_depth - - self.resblocks_3d = torch.nn.Sequential() - for i in range(num_resblocks): - self.resblocks_3d.add_module('3dr' + str(i), ResBlock3d(reshape_channel, kernel_size=3, padding=1)) - - out_features = block_expansion * (2 ** (num_down_blocks)) - self.third = SameBlock2d(max_features, out_features, kernel_size=(3, 3), padding=(1, 1), lrelu=True) - self.fourth = nn.Conv2d(in_channels=out_features, out_channels=out_features, kernel_size=1, stride=1) - - self.resblocks_2d = torch.nn.Sequential() - for i in range(num_resblocks): - self.resblocks_2d.add_module('2dr' + str(i), ResBlock2d(out_features, kernel_size=3, padding=1)) - - up_blocks = [] - for i in range(num_down_blocks): - in_features = max(block_expansion, block_expansion * (2 ** (num_down_blocks - i))) - out_features = max(block_expansion, block_expansion * (2 ** (num_down_blocks - i - 1))) - up_blocks.append(UpBlock2d(in_features, out_features, kernel_size=(3, 3), padding=(1, 1))) - self.up_blocks = nn.ModuleList(up_blocks) - - self.final = nn.Conv2d(block_expansion, image_channel, kernel_size=(7, 7), padding=(3, 3)) - self.estimate_occlusion_map = estimate_occlusion_map - self.image_channel = image_channel - - def deform_input(self, inp, deformation): - _, d_old, h_old, w_old, _ = deformation.shape - _, _, d, h, w = inp.shape - if d_old != d or h_old != h or w_old != w: - deformation = deformation.permute(0, 4, 1, 2, 3) - deformation = F.interpolate(deformation, size=(d, h, w), mode='trilinear') - deformation = deformation.permute(0, 2, 3, 4, 1) - return F.grid_sample(inp, deformation) - - def forward(self, source_image, kp_driving, kp_source): - # Encoding (downsampling) part - 
out = self.first(source_image) - for i in range(len(self.down_blocks)): - out = self.down_blocks[i](out) - out = self.second(out) - bs, c, h, w = out.shape - # print(out.shape) - feature_3d = out.view(bs, self.reshape_channel, self.reshape_depth, h ,w) - feature_3d = self.resblocks_3d(feature_3d) - - # Transforming feature representation according to deformation and occlusion - output_dict = {} - if self.dense_motion_network is not None: - dense_motion = self.dense_motion_network(feature=feature_3d, kp_driving=kp_driving, - kp_source=kp_source) - output_dict['mask'] = dense_motion['mask'] - - if 'occlusion_map' in dense_motion: - occlusion_map = dense_motion['occlusion_map'] - output_dict['occlusion_map'] = occlusion_map - else: - occlusion_map = None - deformation = dense_motion['deformation'] - out = self.deform_input(feature_3d, deformation) - - bs, c, d, h, w = out.shape - out = out.view(bs, c*d, h, w) - out = self.third(out) - out = self.fourth(out) - - if occlusion_map is not None: - if out.shape[2] != occlusion_map.shape[2] or out.shape[3] != occlusion_map.shape[3]: - occlusion_map = F.interpolate(occlusion_map, size=out.shape[2:], mode='bilinear') - out = out * occlusion_map - - # output_dict["deformed"] = self.deform_input(source_image, deformation) # 3d deformation cannot deform 2d image - - # Decoding part - out = self.resblocks_2d(out) - for i in range(len(self.up_blocks)): - out = self.up_blocks[i](out) - out = self.final(out) - out = F.sigmoid(out) - - output_dict["prediction"] = out - - return output_dict - - -class SPADEDecoder(nn.Module): - def __init__(self): - super().__init__() - ic = 256 - oc = 64 - norm_G = 'spadespectralinstance' - label_nc = 256 - - self.fc = nn.Conv2d(ic, 2 * ic, 3, padding=1) - self.G_middle_0 = SPADEResnetBlock(2 * ic, 2 * ic, norm_G, label_nc) - self.G_middle_1 = SPADEResnetBlock(2 * ic, 2 * ic, norm_G, label_nc) - self.G_middle_2 = SPADEResnetBlock(2 * ic, 2 * ic, norm_G, label_nc) - self.G_middle_3 = SPADEResnetBlock(2 * ic, 2 * ic, norm_G, label_nc) - self.G_middle_4 = SPADEResnetBlock(2 * ic, 2 * ic, norm_G, label_nc) - self.G_middle_5 = SPADEResnetBlock(2 * ic, 2 * ic, norm_G, label_nc) - self.up_0 = SPADEResnetBlock(2 * ic, ic, norm_G, label_nc) - self.up_1 = SPADEResnetBlock(ic, oc, norm_G, label_nc) - self.conv_img = nn.Conv2d(oc, 3, 3, padding=1) - self.up = nn.Upsample(scale_factor=2) - - def forward(self, feature): - seg = feature - x = self.fc(feature) - x = self.G_middle_0(x, seg) - x = self.G_middle_1(x, seg) - x = self.G_middle_2(x, seg) - x = self.G_middle_3(x, seg) - x = self.G_middle_4(x, seg) - x = self.G_middle_5(x, seg) - x = self.up(x) - x = self.up_0(x, seg) # 256, 128, 128 - x = self.up(x) - x = self.up_1(x, seg) # 64, 256, 256 - - x = self.conv_img(F.leaky_relu(x, 2e-1)) - # x = torch.tanh(x) - x = F.sigmoid(x) - - return x - - -class OcclusionAwareSPADEGenerator(nn.Module): - - def __init__(self, image_channel, feature_channel, num_kp, block_expansion, max_features, num_down_blocks, reshape_channel, reshape_depth, - num_resblocks, estimate_occlusion_map=False, dense_motion_params=None, estimate_jacobian=False): - super(OcclusionAwareSPADEGenerator, self).__init__() - - if dense_motion_params is not None: - self.dense_motion_network = DenseMotionNetwork(num_kp=num_kp, feature_channel=feature_channel, - estimate_occlusion_map=estimate_occlusion_map, - **dense_motion_params) - else: - self.dense_motion_network = None - - self.first = SameBlock2d(image_channel, block_expansion, kernel_size=(3, 3), padding=(1, 1)) - - 
down_blocks = [] - for i in range(num_down_blocks): - in_features = min(max_features, block_expansion * (2 ** i)) - out_features = min(max_features, block_expansion * (2 ** (i + 1))) - down_blocks.append(DownBlock2d(in_features, out_features, kernel_size=(3, 3), padding=(1, 1))) - self.down_blocks = nn.ModuleList(down_blocks) - - self.second = nn.Conv2d(in_channels=out_features, out_channels=max_features, kernel_size=1, stride=1) - - self.reshape_channel = reshape_channel - self.reshape_depth = reshape_depth - - self.resblocks_3d = torch.nn.Sequential() - for i in range(num_resblocks): - self.resblocks_3d.add_module('3dr' + str(i), ResBlock3d(reshape_channel, kernel_size=3, padding=1)) - - out_features = block_expansion * (2 ** (num_down_blocks)) - self.third = SameBlock2d(max_features, out_features, kernel_size=(3, 3), padding=(1, 1), lrelu=True) - self.fourth = nn.Conv2d(in_channels=out_features, out_channels=out_features, kernel_size=1, stride=1) - - self.estimate_occlusion_map = estimate_occlusion_map - self.image_channel = image_channel - - self.decoder = SPADEDecoder() - - def deform_input(self, inp, deformation): - _, d_old, h_old, w_old, _ = deformation.shape - _, _, d, h, w = inp.shape - if d_old != d or h_old != h or w_old != w: - deformation = deformation.permute(0, 4, 1, 2, 3) - deformation = F.interpolate(deformation, size=(d, h, w), mode='trilinear') - deformation = deformation.permute(0, 2, 3, 4, 1) - return F.grid_sample(inp, deformation) - - def forward(self, source_image, kp_driving, kp_source): - # Encoding (downsampling) part - out = self.first(source_image) - for i in range(len(self.down_blocks)): - out = self.down_blocks[i](out) - out = self.second(out) - bs, c, h, w = out.shape - # print(out.shape) - feature_3d = out.view(bs, self.reshape_channel, self.reshape_depth, h ,w) - feature_3d = self.resblocks_3d(feature_3d) - - # Transforming feature representation according to deformation and occlusion - output_dict = {} - if self.dense_motion_network is not None: - dense_motion = self.dense_motion_network(feature=feature_3d, kp_driving=kp_driving, - kp_source=kp_source) - output_dict['mask'] = dense_motion['mask'] - - if 'occlusion_map' in dense_motion: - occlusion_map = dense_motion['occlusion_map'] - output_dict['occlusion_map'] = occlusion_map - else: - occlusion_map = None - deformation = dense_motion['deformation'] - out = self.deform_input(feature_3d, deformation) - - bs, c, d, h, w = out.shape - out = out.view(bs, c*d, h, w) - out = self.third(out) - out = self.fourth(out) - - if occlusion_map is not None: - if out.shape[2] != occlusion_map.shape[2] or out.shape[3] != occlusion_map.shape[3]: - occlusion_map = F.interpolate(occlusion_map, size=out.shape[2:], mode='bilinear') - out = out * occlusion_map - - # Decoding part - out = self.decoder(out) - - output_dict["prediction"] = out - - return output_dict \ No newline at end of file diff --git a/spaces/801artistry/RVC801/demucs/test.py b/spaces/801artistry/RVC801/demucs/test.py deleted file mode 100644 index 4140914ddbff3543b4056ca0cb1b5e887434a40a..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/demucs/test.py +++ /dev/null @@ -1,109 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import gzip -import sys -from concurrent import futures - -import musdb -import museval -import torch as th -import tqdm -from scipy.io import wavfile -from torch import distributed - -from .audio import convert_audio -from .utils import apply_model - - -def evaluate(model, - musdb_path, - eval_folder, - workers=2, - device="cpu", - rank=0, - save=False, - shifts=0, - split=False, - overlap=0.25, - is_wav=False, - world_size=1): - """ - Evaluate model using museval. Run the model - on a single GPU, the bottleneck being the call to museval. - """ - - output_dir = eval_folder / "results" - output_dir.mkdir(exist_ok=True, parents=True) - json_folder = eval_folder / "results/test" - json_folder.mkdir(exist_ok=True, parents=True) - - # we load tracks from the original musdb set - test_set = musdb.DB(musdb_path, subsets=["test"], is_wav=is_wav) - src_rate = 44100 # hardcoded for now... - - for p in model.parameters(): - p.requires_grad = False - p.grad = None - - pendings = [] - with futures.ProcessPoolExecutor(workers or 1) as pool: - for index in tqdm.tqdm(range(rank, len(test_set), world_size), file=sys.stdout): - track = test_set.tracks[index] - - out = json_folder / f"{track.name}.json.gz" - if out.exists(): - continue - - mix = th.from_numpy(track.audio).t().float() - ref = mix.mean(dim=0) # mono mixture - mix = (mix - ref.mean()) / ref.std() - mix = convert_audio(mix, src_rate, model.samplerate, model.audio_channels) - estimates = apply_model(model, mix.to(device), - shifts=shifts, split=split, overlap=overlap) - estimates = estimates * ref.std() + ref.mean() - - estimates = estimates.transpose(1, 2) - references = th.stack( - [th.from_numpy(track.targets[name].audio).t() for name in model.sources]) - references = convert_audio(references, src_rate, - model.samplerate, model.audio_channels) - references = references.transpose(1, 2).numpy() - estimates = estimates.cpu().numpy() - win = int(1. * model.samplerate) - hop = int(1. 
* model.samplerate) - if save: - folder = eval_folder / "wav/test" / track.name - folder.mkdir(exist_ok=True, parents=True) - for name, estimate in zip(model.sources, estimates): - wavfile.write(str(folder / (name + ".wav")), 44100, estimate) - - if workers: - pendings.append((track.name, pool.submit( - museval.evaluate, references, estimates, win=win, hop=hop))) - else: - pendings.append((track.name, museval.evaluate( - references, estimates, win=win, hop=hop))) - del references, mix, estimates, track - - for track_name, pending in tqdm.tqdm(pendings, file=sys.stdout): - if workers: - pending = pending.result() - sdr, isr, sir, sar = pending - track_store = museval.TrackStore(win=44100, hop=44100, track_name=track_name) - for idx, target in enumerate(model.sources): - values = { - "SDR": sdr[idx].tolist(), - "SIR": sir[idx].tolist(), - "ISR": isr[idx].tolist(), - "SAR": sar[idx].tolist() - } - - track_store.add_target(target_name=target, values=values) - json_path = json_folder / f"{track_name}.json.gz" - gzip.open(json_path, "w").write(track_store.json.encode('utf-8')) - if world_size > 1: - distributed.barrier() diff --git a/spaces/AI-Dashboards/HEDIS.Assessment.PHQ9.GADD7.SDoH/style.css b/spaces/AI-Dashboards/HEDIS.Assessment.PHQ9.GADD7.SDoH/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/AI-Dashboards/HEDIS.Assessment.PHQ9.GADD7.SDoH/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/synta.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/synta.py deleted file mode 100644 index 9a9e3ba127549e3b95d989cca5f52caa4766ec94..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/tasks/tts/synta.py +++ /dev/null @@ -1,25 +0,0 @@ -import os -import torch -import torch.nn.functional as F -from torch import nn - -from text_to_speech.modules.tts.syntaspeech.syntaspeech import SyntaSpeech -from tasks.tts.ps_adv import PortaSpeechAdvTask -from text_to_speech.utils.commons.hparams import hparams - - -class SyntaSpeechTask(PortaSpeechAdvTask): - def build_tts_model(self): - ph_dict_size = len(self.token_encoder) - word_dict_size = len(self.word_encoder) - self.model = SyntaSpeech(ph_dict_size, word_dict_size, hparams) - - self.gen_params = [p for p in self.model.parameters() if p.requires_grad] - self.dp_params = [p for k, p in self.model.named_parameters() if (('dur_predictor' in k) and p.requires_grad)] - self.gen_params_except_dp = [p for k, p in self.model.named_parameters() if (('dur_predictor' not in k) and p.requires_grad)] - self.bert_params = [p for k, p in self.model.named_parameters() if (('bert' in k) and p.requires_grad)] - self.gen_params_except_bert_and_dp = [p for k, p in self.model.named_parameters() if ('dur_predictor' not in k) and ('bert' not in k) and p.requires_grad ] - - self.use_bert = True if len(self.bert_params) > 0 else False - - \ No newline at end of file diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/audio/io.py 
b/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/audio/io.py deleted file mode 100644 index 34d5d20ae13e9aa481b1bc85117ad6539af8a624..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/utils/audio/io.py +++ /dev/null @@ -1,22 +0,0 @@ -import subprocess - -import numpy as np -from scipy.io import wavfile - - -def save_wav(wav, path, sr, norm=False): - if norm: - wav = wav / np.abs(wav).max() - wav = wav * 32767 - wavfile.write(path[:-4] + '.wav', sr, wav.astype(np.int16)) - if path[-4:] == '.mp3': - to_mp3(path[:-4]) - - -def to_mp3(out_path): - if out_path[-4:] == '.wav': - out_path = out_path[:-4] - subprocess.check_call( - f'ffmpeg -threads 1 -loglevel error -i "{out_path}.wav" -vn -b:a 192k -y -hide_banner -async 1 "{out_path}.mp3"', - shell=True, stdin=subprocess.PIPE) - subprocess.check_call(f'rm -f "{out_path}.wav"', shell=True) diff --git a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/open_clap/transform.py b/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/open_clap/transform.py deleted file mode 100644 index 7014c926f153a351d2256c869c67c02d57b30913..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio/ldm/modules/encoders/open_clap/transform.py +++ /dev/null @@ -1,30 +0,0 @@ -from torchvision.transforms import Normalize, Compose, RandomResizedCrop, InterpolationMode, ToTensor, Resize, \ - CenterCrop - - -def _convert_to_rgb(image): - return image.convert('RGB') - - -def image_transform( - image_size: int, - is_train: bool, - mean=(0.48145466, 0.4578275, 0.40821073), - std=(0.26862954, 0.26130258, 0.27577711) -): - normalize = Normalize(mean=mean, std=std) - if is_train: - return Compose([ - RandomResizedCrop(image_size, scale=(0.9, 1.0), interpolation=InterpolationMode.BICUBIC), - _convert_to_rgb, - ToTensor(), - normalize, - ]) - else: - return Compose([ - Resize(image_size, interpolation=InterpolationMode.BICUBIC), - CenterCrop(image_size), - _convert_to_rgb, - ToTensor(), - normalize, - ]) diff --git a/spaces/AIWaves/SOP_Generation-single/Component/__init__.py b/spaces/AIWaves/SOP_Generation-single/Component/__init__.py deleted file mode 100644 index 61d0e26fcc092bfe6da96fdb5696586ec7d30045..0000000000000000000000000000000000000000 --- a/spaces/AIWaves/SOP_Generation-single/Component/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .ExtraComponent import * -from .PromptComponent import * -from .ToolComponent import * \ No newline at end of file diff --git a/spaces/AdvertisingAgency/README/README.md b/spaces/AdvertisingAgency/README/README.md deleted file mode 100644 index 1e343c21dff1b5eb778eb7f7749d5f1cdeb8e702..0000000000000000000000000000000000000000 --- a/spaces/AdvertisingAgency/README/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: README -emoji: 🐢 -colorFrom: indigo -colorTo: blue -sdk: static -pinned: false ---- - -Edit this `README.md` markdown file to author your organization card. 
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/buildarcadeobject.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/buildarcadeobject.d.ts deleted file mode 100644 index 321651837ff46055d56ca195495dab1729e2cf06..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/buildarcadeobject.d.ts +++ /dev/null @@ -1,2 +0,0 @@ -import BuildArcadeObject from './utils/arcade/BuildArcadeObject'; -export default BuildArcadeObject; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/clock/Factory.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/clock/Factory.d.ts deleted file mode 100644 index 8748cf08e2b34063fbeb83ae8570a7fcfb8aca18..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/spinner/clock/Factory.d.ts +++ /dev/null @@ -1,6 +0,0 @@ -import Clock from './Clock'; -import Base from '../base/Base'; - -export default function Factory( - config?: Base.IConfig -): Clock; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/lineprogress/LineProgress.d.ts b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/lineprogress/LineProgress.d.ts deleted file mode 100644 index 84c06a0fe439f27612fed70498b99d0d94c284f5..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/lineprogress/LineProgress.d.ts +++ /dev/null @@ -1,2 +0,0 @@ -import LineProgress from '../../../plugins/lineprogress'; -export default LineProgress; \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/.github/ISSUE_TEMPLATE/feature_request.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/.github/ISSUE_TEMPLATE/feature_request.md deleted file mode 100644 index 24405ec4fa1d1ebf802813bc1af3ce2840ef2f9c..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/.github/ISSUE_TEMPLATE/feature_request.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -name: "\U0001F680 Feature request" -about: Suggest an idea for this project -title: '' -labels: '' -assignees: '' - ---- - -**Is your feature request related to a problem? Please describe.** -A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] - -**Describe the solution you'd like** -A clear and concise description of what you want to happen. - -**Describe alternatives you've considered** -A clear and concise description of any alternative solutions or features you've considered. - -**Additional context** -Add any other context or screenshots about the feature request here. 
diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/__init__.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/__init__.py deleted file mode 100644 index f004dd95d97df16167f932587b3ce73b05b04a37..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/dense_heads/__init__.py +++ /dev/null @@ -1,41 +0,0 @@ -from .anchor_free_head import AnchorFreeHead -from .anchor_head import AnchorHead -from .atss_head import ATSSHead -from .cascade_rpn_head import CascadeRPNHead, StageCascadeRPNHead -from .centripetal_head import CentripetalHead -from .corner_head import CornerHead -from .embedding_rpn_head import EmbeddingRPNHead -from .fcos_head import FCOSHead -from .fovea_head import FoveaHead -from .free_anchor_retina_head import FreeAnchorRetinaHead -from .fsaf_head import FSAFHead -from .ga_retina_head import GARetinaHead -from .ga_rpn_head import GARPNHead -from .gfl_head import GFLHead -from .guided_anchor_head import FeatureAdaption, GuidedAnchorHead -from .ld_head import LDHead -from .nasfcos_head import NASFCOSHead -from .paa_head import PAAHead -from .pisa_retinanet_head import PISARetinaHead -from .pisa_ssd_head import PISASSDHead -from .reppoints_head import RepPointsHead -from .retina_head import RetinaHead -from .retina_sepbn_head import RetinaSepBNHead -from .rpn_head import RPNHead -from .sabl_retina_head import SABLRetinaHead -from .ssd_head import SSDHead -from .transformer_head import TransformerHead -from .vfnet_head import VFNetHead -from .yolact_head import YOLACTHead, YOLACTProtonet, YOLACTSegmHead -from .yolo_head import YOLOV3Head - -__all__ = [ - 'AnchorFreeHead', 'AnchorHead', 'GuidedAnchorHead', 'FeatureAdaption', - 'RPNHead', 'GARPNHead', 'RetinaHead', 'RetinaSepBNHead', 'GARetinaHead', - 'SSDHead', 'FCOSHead', 'RepPointsHead', 'FoveaHead', - 'FreeAnchorRetinaHead', 'ATSSHead', 'FSAFHead', 'NASFCOSHead', - 'PISARetinaHead', 'PISASSDHead', 'GFLHead', 'CornerHead', 'YOLACTHead', - 'YOLACTSegmHead', 'YOLACTProtonet', 'YOLOV3Head', 'PAAHead', - 'SABLRetinaHead', 'CentripetalHead', 'VFNetHead', 'TransformerHead', - 'StageCascadeRPNHead', 'CascadeRPNHead', 'EmbeddingRPNHead', 'LDHead' -] diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r101-d8_512x1024_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r101-d8_512x1024_40k_cityscapes.py deleted file mode 100644 index 3bfb9bdb3064275c2ac3bf2a057ef8eb79c308df..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/danet/danet_r101-d8_512x1024_40k_cityscapes.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './danet_r50-d8_512x1024_40k_cityscapes.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/encnet/encnet_r50-d8_512x512_160k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/encnet/encnet_r50-d8_512x512_160k_ade20k.py deleted file mode 100644 index 2a5dc203cc793860aae7743d16c4fb9a564ad1d8..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/encnet/encnet_r50-d8_512x512_160k_ade20k.py +++ /dev/null @@ -1,6 +0,0 @@ -_base_ = [ - '../_base_/models/encnet_r50-d8.py', '../_base_/datasets/ade20k.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_160k.py' -] -model = dict( - decode_head=dict(num_classes=150), auxiliary_head=dict(num_classes=150)) diff --git 
a/spaces/Andyrasika/Andyrasika-dreamshaper-sdxl-1.0/app.py b/spaces/Andyrasika/Andyrasika-dreamshaper-sdxl-1.0/app.py deleted file mode 100644 index ac4e38acae0fe51b582ab9caf1d1bcfe1d03a41a..0000000000000000000000000000000000000000 --- a/spaces/Andyrasika/Andyrasika-dreamshaper-sdxl-1.0/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Andyrasika/dreamshaper-sdxl-1.0").launch() \ No newline at end of file diff --git a/spaces/AnimalEquality/chatbot/_proc/_docs/site_libs/quarto-search/autocomplete.umd.js b/spaces/AnimalEquality/chatbot/_proc/_docs/site_libs/quarto-search/autocomplete.umd.js deleted file mode 100644 index 619c57cc5c8b59d94aa0f39f8f70639f0d9ac691..0000000000000000000000000000000000000000 --- a/spaces/AnimalEquality/chatbot/_proc/_docs/site_libs/quarto-search/autocomplete.umd.js +++ /dev/null @@ -1,3 +0,0 @@ -/*! @algolia/autocomplete-js 1.7.3 | MIT License | © Algolia, Inc. and contributors | https://github.com/algolia/autocomplete */ -!function(e,t){"object"==typeof exports&&"undefined"!=typeof module?t(exports):"function"==typeof define&&define.amd?define(["exports"],t):t((e="undefined"!=typeof globalThis?globalThis:e||self)["@algolia/autocomplete-js"]={})}(this,(function(e){"use strict";function t(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,r)}return n}function n(e){for(var n=1;n=0||(o[n]=e[n]);return o}(e,t);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(e);for(r=0;r=0||Object.prototype.propertyIsEnumerable.call(e,n)&&(o[n]=e[n])}return o}function a(e,t){return function(e){if(Array.isArray(e))return e}(e)||function(e,t){var n=null==e?null:"undefined"!=typeof Symbol&&e[Symbol.iterator]||e["@@iterator"];if(null==n)return;var r,o,i=[],u=!0,a=!1;try{for(n=n.call(e);!(u=(r=n.next()).done)&&(i.push(r.value),!t||i.length!==t);u=!0);}catch(e){a=!0,o=e}finally{try{u||null==n.return||n.return()}finally{if(a)throw o}}return i}(e,t)||l(e,t)||function(){throw new TypeError("Invalid attempt to destructure non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.")}()}function c(e){return function(e){if(Array.isArray(e))return s(e)}(e)||function(e){if("undefined"!=typeof Symbol&&null!=e[Symbol.iterator]||null!=e["@@iterator"])return Array.from(e)}(e)||l(e)||function(){throw new TypeError("Invalid attempt to spread non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.")}()}function l(e,t){if(e){if("string"==typeof e)return s(e,t);var n=Object.prototype.toString.call(e).slice(8,-1);return"Object"===n&&e.constructor&&(n=e.constructor.name),"Map"===n||"Set"===n?Array.from(e):"Arguments"===n||/^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(n)?s(e,t):void 0}}function s(e,t){(null==t||t>e.length)&&(t=e.length);for(var n=0,r=new Array(t);n=n?null===r?null:0:o}function S(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,r)}return n}function I(e,t,n){return t in e?Object.defineProperty(e,t,{value:n,enumerable:!0,configurable:!0,writable:!0}):e[t]=n,e}function E(e,t){var n=[];return Promise.resolve(e(t)).then((function(e){return Promise.all(e.filter((function(e){return Boolean(e)})).map((function(e){if(e.sourceId,n.includes(e.sourceId))throw 
new Error("[Autocomplete] The `sourceId` ".concat(JSON.stringify(e.sourceId)," is not unique."));n.push(e.sourceId);var t=function(e){for(var t=1;te.length)&&(t=e.length);for(var n=0,r=new Array(t);ne.length)&&(t=e.length);for(var n=0,r=new Array(t);n=0||(o[n]=e[n]);return o}(e,t);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(e);for(r=0;r=0||Object.prototype.propertyIsEnumerable.call(e,n)&&(o[n]=e[n])}return o}var ae,ce,le,se=null,pe=(ae=-1,ce=-1,le=void 0,function(e){var t=++ae;return Promise.resolve(e).then((function(e){return le&&t=0||(o[n]=e[n]);return o}(e,t);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(e);for(r=0;r=0||Object.prototype.propertyIsEnumerable.call(e,n)&&(o[n]=e[n])}return o}var ye=["props","refresh","store"],be=["inputElement","formElement","panelElement"],Oe=["inputElement"],_e=["inputElement","maxLength"],Pe=["item","source"];function je(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,r)}return n}function we(e){for(var t=1;t=0||(o[n]=e[n]);return o}(e,t);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(e);for(r=0;r=0||Object.prototype.propertyIsEnumerable.call(e,n)&&(o[n]=e[n])}return o}function Ee(e){var t=e.props,n=e.refresh,r=e.store,o=Ie(e,ye);return{getEnvironmentProps:function(e){var n=e.inputElement,o=e.formElement,i=e.panelElement;function u(e){!r.getState().isOpen&&r.pendingRequests.isEmpty()||e.target===n||!1===[o,i].some((function(t){return n=t,r=e.target,n===r||n.contains(r);var n,r}))&&(r.dispatch("blur",null),t.debug||r.pendingRequests.cancelAll())}return we({onTouchStart:u,onMouseDown:u,onTouchMove:function(e){!1!==r.getState().isOpen&&n===t.environment.document.activeElement&&e.target!==n&&n.blur()}},Ie(e,be))},getRootProps:function(e){return we({role:"combobox","aria-expanded":r.getState().isOpen,"aria-haspopup":"listbox","aria-owns":r.getState().isOpen?"".concat(t.id,"-list"):void 0,"aria-labelledby":"".concat(t.id,"-label")},e)},getFormProps:function(e){return e.inputElement,we({action:"",noValidate:!0,role:"search",onSubmit:function(i){var u;i.preventDefault(),t.onSubmit(we({event:i,refresh:n,state:r.getState()},o)),r.dispatch("submit",null),null===(u=e.inputElement)||void 0===u||u.blur()},onReset:function(i){var u;i.preventDefault(),t.onReset(we({event:i,refresh:n,state:r.getState()},o)),r.dispatch("reset",null),null===(u=e.inputElement)||void 0===u||u.focus()}},Ie(e,Oe))},getLabelProps:function(e){return we({htmlFor:"".concat(t.id,"-input"),id:"".concat(t.id,"-label")},e)},getInputProps:function(e){var i;function u(e){(t.openOnFocus||Boolean(r.getState().query))&&fe(we({event:e,props:t,query:r.getState().completion||r.getState().query,refresh:n,store:r},o)),r.dispatch("focus",null)}var a=e||{};a.inputElement;var c=a.maxLength,l=void 0===c?512:c,s=Ie(a,_e),p=A(r.getState()),f=function(e){return Boolean(e&&e.match(C))}((null===(i=t.environment.navigator)||void 0===i?void 0:i.userAgent)||""),d=null!=p&&p.itemUrl&&!f?"go":"search";return we({"aria-autocomplete":"both","aria-activedescendant":r.getState().isOpen&&null!==r.getState().activeItemId?"".concat(t.id,"-item-").concat(r.getState().activeItemId):void 0,"aria-controls":r.getState().isOpen?"".concat(t.id,"-list"):void 
0,"aria-labelledby":"".concat(t.id,"-label"),value:r.getState().completion||r.getState().query,id:"".concat(t.id,"-input"),autoComplete:"off",autoCorrect:"off",autoCapitalize:"off",enterKeyHint:d,spellCheck:"false",autoFocus:t.autoFocus,placeholder:t.placeholder,maxLength:l,type:"search",onChange:function(e){fe(we({event:e,props:t,query:e.currentTarget.value.slice(0,l),refresh:n,store:r},o))},onKeyDown:function(e){!function(e){var t=e.event,n=e.props,r=e.refresh,o=e.store,i=ge(e,de);if("ArrowUp"===t.key||"ArrowDown"===t.key){var u=function(){var e=n.environment.document.getElementById("".concat(n.id,"-item-").concat(o.getState().activeItemId));e&&(e.scrollIntoViewIfNeeded?e.scrollIntoViewIfNeeded(!1):e.scrollIntoView(!1))},a=function(){var e=A(o.getState());if(null!==o.getState().activeItemId&&e){var n=e.item,u=e.itemInputValue,a=e.itemUrl,c=e.source;c.onActive(ve({event:t,item:n,itemInputValue:u,itemUrl:a,refresh:r,source:c,state:o.getState()},i))}};t.preventDefault(),!1===o.getState().isOpen&&(n.openOnFocus||Boolean(o.getState().query))?fe(ve({event:t,props:n,query:o.getState().query,refresh:r,store:o},i)).then((function(){o.dispatch(t.key,{nextActiveItemId:n.defaultActiveItemId}),a(),setTimeout(u,0)})):(o.dispatch(t.key,{}),a(),u())}else if("Escape"===t.key)t.preventDefault(),o.dispatch(t.key,null),o.pendingRequests.cancelAll();else if("Tab"===t.key)o.dispatch("blur",null),o.pendingRequests.cancelAll();else if("Enter"===t.key){if(null===o.getState().activeItemId||o.getState().collections.every((function(e){return 0===e.items.length})))return void(n.debug||o.pendingRequests.cancelAll());t.preventDefault();var c=A(o.getState()),l=c.item,s=c.itemInputValue,p=c.itemUrl,f=c.source;if(t.metaKey||t.ctrlKey)void 0!==p&&(f.onSelect(ve({event:t,item:l,itemInputValue:s,itemUrl:p,refresh:r,source:f,state:o.getState()},i)),n.navigator.navigateNewTab({itemUrl:p,item:l,state:o.getState()}));else if(t.shiftKey)void 0!==p&&(f.onSelect(ve({event:t,item:l,itemInputValue:s,itemUrl:p,refresh:r,source:f,state:o.getState()},i)),n.navigator.navigateNewWindow({itemUrl:p,item:l,state:o.getState()}));else if(t.altKey);else{if(void 0!==p)return f.onSelect(ve({event:t,item:l,itemInputValue:s,itemUrl:p,refresh:r,source:f,state:o.getState()},i)),void n.navigator.navigate({itemUrl:p,item:l,state:o.getState()});fe(ve({event:t,nextState:{isOpen:!1},props:n,query:s,refresh:r,store:o},i)).then((function(){f.onSelect(ve({event:t,item:l,itemInputValue:s,itemUrl:p,refresh:r,source:f,state:o.getState()},i))}))}}}(we({event:e,props:t,refresh:n,store:r},o))},onFocus:u,onBlur:y,onClick:function(n){e.inputElement!==t.environment.document.activeElement||r.getState().isOpen||u(n)}},s)},getPanelProps:function(e){return we({onMouseDown:function(e){e.preventDefault()},onMouseLeave:function(){r.dispatch("mouseleave",null)}},e)},getListProps:function(e){return we({role:"listbox","aria-labelledby":"".concat(t.id,"-label"),id:"".concat(t.id,"-list")},e)},getItemProps:function(e){var i=e.item,u=e.source,a=Ie(e,Pe);return we({id:"".concat(t.id,"-item-").concat(i.__autocomplete_id),role:"option","aria-selected":r.getState().activeItemId===i.__autocomplete_id,onMouseMove:function(e){if(i.__autocomplete_id!==r.getState().activeItemId){r.dispatch("mousemove",i.__autocomplete_id);var t=A(r.getState());if(null!==r.getState().activeItemId&&t){var 
u=t.item,a=t.itemInputValue,c=t.itemUrl,l=t.source;l.onActive(we({event:e,item:u,itemInputValue:a,itemUrl:c,refresh:n,source:l,state:r.getState()},o))}}},onMouseDown:function(e){e.preventDefault()},onClick:function(e){var a=u.getItemInputValue({item:i,state:r.getState()}),c=u.getItemUrl({item:i,state:r.getState()});(c?Promise.resolve():fe(we({event:e,nextState:{isOpen:!1},props:t,query:a,refresh:n,store:r},o))).then((function(){u.onSelect(we({event:e,item:i,itemInputValue:a,itemUrl:c,refresh:n,source:u,state:r.getState()},o))}))}},a)}}}function Ae(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,r)}return n}function Ce(e){for(var t=1;t0},reshape:function(e){return e.sources}},e),{},{id:null!==(n=e.id)&&void 0!==n?n:v(),plugins:o,initialState:H({activeItemId:null,query:"",completion:null,collections:[],isOpen:!1,status:"idle",context:{}},e.initialState),onStateChange:function(t){var n;null===(n=e.onStateChange)||void 0===n||n.call(e,t),o.forEach((function(e){var n;return null===(n=e.onStateChange)||void 0===n?void 0:n.call(e,t)}))},onSubmit:function(t){var n;null===(n=e.onSubmit)||void 0===n||n.call(e,t),o.forEach((function(e){var n;return null===(n=e.onSubmit)||void 0===n?void 0:n.call(e,t)}))},onReset:function(t){var n;null===(n=e.onReset)||void 0===n||n.call(e,t),o.forEach((function(e){var n;return null===(n=e.onReset)||void 0===n?void 0:n.call(e,t)}))},getSources:function(n){return Promise.all([].concat(F(o.map((function(e){return e.getSources}))),[e.getSources]).filter(Boolean).map((function(e){return E(e,n)}))).then((function(e){return d(e)})).then((function(e){return e.map((function(e){return H(H({},e),{},{onSelect:function(n){e.onSelect(n),t.forEach((function(e){var t;return null===(t=e.onSelect)||void 0===t?void 0:t.call(e,n)}))},onActive:function(n){e.onActive(n),t.forEach((function(e){var t;return null===(t=e.onActive)||void 0===t?void 0:t.call(e,n)}))}})}))}))},navigator:H({navigate:function(e){var t=e.itemUrl;r.location.assign(t)},navigateNewTab:function(e){var t=e.itemUrl,n=r.open(t,"_blank","noopener");null==n||n.focus()},navigateNewWindow:function(e){var t=e.itemUrl;r.open(t,"_blank","noopener")}},e.navigator)})}(e,t),r=R(Te,n,(function(e){var t=e.prevState,r=e.state;n.onStateChange(Be({prevState:t,state:r,refresh:u},o))})),o=function(e){var t=e.store;return{setActiveItemId:function(e){t.dispatch("setActiveItemId",e)},setQuery:function(e){t.dispatch("setQuery",e)},setCollections:function(e){var n=0,r=e.map((function(e){return L(L({},e),{},{items:d(e.items).map((function(e){return L(L({},e),{},{__autocomplete_id:n++})}))})}));t.dispatch("setCollections",r)},setIsOpen:function(e){t.dispatch("setIsOpen",e)},setStatus:function(e){t.dispatch("setStatus",e)},setContext:function(e){t.dispatch("setContext",e)}}}({store:r}),i=Ee(Be({props:n,refresh:u,store:r},o));function u(){return fe(Be({event:new Event("input"),nextState:{isOpen:r.getState().isOpen},props:n,query:r.getState().query,refresh:u,store:r},o))}return n.plugins.forEach((function(e){var n;return null===(n=e.subscribe)||void 0===n?void 0:n.call(e,Be(Be({},o),{},{refresh:u,onSelect:function(e){t.push({onSelect:e})},onActive:function(e){t.push({onActive:e})}}))})),function(e){var t,n,r=e.metadata,o=e.environment;if(null===(t=o.navigator)||void 0===t||null===(n=t.userAgent)||void 0===n?void 0:n.includes("Algolia Crawler")){var 
i=o.document.createElement("meta"),u=o.document.querySelector("head");i.name="algolia:metadata",setTimeout((function(){i.content=JSON.stringify(r),u.appendChild(i)}),0)}}({metadata:ke({plugins:n.plugins,options:e}),environment:n.environment}),Be(Be({refresh:u},i),o)}var Ue=function(e,t,n,r){var o;t[0]=0;for(var i=1;i=5&&((o||!e&&5===r)&&(u.push(r,0,o,n),r=6),e&&(u.push(r,e,0,n),r=6)),o=""},c=0;c"===t?(r=1,o=""):o=t+o[0]:i?t===i?i="":o+=t:'"'===t||"'"===t?i=t:">"===t?(a(),r=1):r&&("="===t?(r=5,n=o,o=""):"/"===t&&(r<5||">"===e[c][l+1])?(a(),3===r&&(u=u[0]),r=u,(u=u[0]).push(2,0,r),r=0):" "===t||"\t"===t||"\n"===t||"\r"===t?(a(),r=2):o+=t),3===r&&"!--"===o&&(r=4,u=u[0])}return a(),u}(e)),t),arguments,[])).length>1?t:t[0]}var We=function(e){var t=e.environment,n=t.document.createElementNS("http://www.w3.org/2000/svg","svg");n.setAttribute("class","aa-ClearIcon"),n.setAttribute("viewBox","0 0 24 24"),n.setAttribute("width","18"),n.setAttribute("height","18"),n.setAttribute("fill","currentColor");var r=t.document.createElementNS("http://www.w3.org/2000/svg","path");return r.setAttribute("d","M5.293 6.707l5.293 5.293-5.293 5.293c-0.391 0.391-0.391 1.024 0 1.414s1.024 0.391 1.414 0l5.293-5.293 5.293 5.293c0.391 0.391 1.024 0.391 1.414 0s0.391-1.024 0-1.414l-5.293-5.293 5.293-5.293c0.391-0.391 0.391-1.024 0-1.414s-1.024-0.391-1.414 0l-5.293 5.293-5.293-5.293c-0.391-0.391-1.024-0.391-1.414 0s-0.391 1.024 0 1.414z"),n.appendChild(r),n};function Qe(e,t){if("string"==typeof t){var n=e.document.querySelector(t);return"The element ".concat(JSON.stringify(t)," is not in the document."),n}return t}function $e(){for(var e=arguments.length,t=new Array(e),n=0;n2&&(u.children=arguments.length>3?lt.call(arguments,2):n),"function"==typeof e&&null!=e.defaultProps)for(i in e.defaultProps)void 0===u[i]&&(u[i]=e.defaultProps[i]);return _t(e,u,r,o,null)}function _t(e,t,n,r,o){var i={type:e,props:t,key:n,ref:r,__k:null,__:null,__b:0,__e:null,__d:void 0,__c:null,__h:null,constructor:void 0,__v:null==o?++pt:o};return null==o&&null!=st.vnode&&st.vnode(i),i}function Pt(e){return e.children}function jt(e,t){this.props=e,this.context=t}function wt(e,t){if(null==t)return e.__?wt(e.__,e.__.__k.indexOf(e)+1):null;for(var n;t0?_t(d.type,d.props,d.key,null,d.__v):d)){if(d.__=n,d.__b=n.__b+1,null===(f=g[s])||f&&d.key==f.key&&d.type===f.type)g[s]=void 0;else for(p=0;p0&&void 0!==arguments[0]?arguments[0]:[];return{get:function(){return e},add:function(t){var n=e[e.length-1];(null==n?void 0:n.isHighlighted)===t.isHighlighted?e[e.length-1]={value:n.value+t.value,isHighlighted:n.isHighlighted}:e.push(t)}}}(n?[{value:n,isHighlighted:!1}]:[]);return t.forEach((function(e){var t=e.split(Ht);r.add({value:t[0],isHighlighted:!0}),""!==t[1]&&r.add({value:t[1],isHighlighted:!1})})),r.get()}function Wt(e){return function(e){if(Array.isArray(e))return Qt(e)}(e)||function(e){if("undefined"!=typeof Symbol&&null!=e[Symbol.iterator]||null!=e["@@iterator"])return Array.from(e)}(e)||function(e,t){if(!e)return;if("string"==typeof e)return Qt(e,t);var n=Object.prototype.toString.call(e).slice(8,-1);"Object"===n&&e.constructor&&(n=e.constructor.name);if("Map"===n||"Set"===n)return Array.from(e);if("Arguments"===n||/^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(n))return Qt(e,t)}(e)||function(){throw new TypeError("Invalid attempt to spread non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.")}()}function Qt(e,t){(null==t||t>e.length)&&(t=e.length);for(var n=0,r=new 
Array(t);n",""":'"',"'":"'"},Gt=new RegExp(/\w/i),Kt=/&(amp|quot|lt|gt|#39);/g,Jt=RegExp(Kt.source);function Yt(e,t){var n,r,o,i=e[t],u=(null===(n=e[t+1])||void 0===n?void 0:n.isHighlighted)||!0,a=(null===(r=e[t-1])||void 0===r?void 0:r.isHighlighted)||!0;return Gt.test((o=i.value)&&Jt.test(o)?o.replace(Kt,(function(e){return zt[e]})):o)||a!==u?i.isHighlighted:a}function Xt(e,t){var n=Object.keys(e);if(Object.getOwnPropertySymbols){var r=Object.getOwnPropertySymbols(e);t&&(r=r.filter((function(t){return Object.getOwnPropertyDescriptor(e,t).enumerable}))),n.push.apply(n,r)}return n}function Zt(e){for(var t=1;te.length)&&(t=e.length);for(var n=0,r=new Array(t);n=0||(o[n]=e[n]);return o}(e,t);if(Object.getOwnPropertySymbols){var i=Object.getOwnPropertySymbols(e);for(r=0;r=0||Object.prototype.propertyIsEnumerable.call(e,n)&&(o[n]=e[n])}return o}function mn(e){return function(e){if(Array.isArray(e))return vn(e)}(e)||function(e){if("undefined"!=typeof Symbol&&null!=e[Symbol.iterator]||null!=e["@@iterator"])return Array.from(e)}(e)||function(e,t){if(!e)return;if("string"==typeof e)return vn(e,t);var n=Object.prototype.toString.call(e).slice(8,-1);"Object"===n&&e.constructor&&(n=e.constructor.name);if("Map"===n||"Set"===n)return Array.from(e);if("Arguments"===n||/^(?:Ui|I)nt(?:8|16|32)(?:Clamped)?Array$/.test(n))return vn(e,t)}(e)||function(){throw new TypeError("Invalid attempt to spread non-iterable instance.\nIn order to be iterable, non-array objects must have a [Symbol.iterator]() method.")}()}function vn(e,t){(null==t||t>e.length)&&(t=e.length);for(var n=0,r=new Array(t);n0;if(!O.value.core.openOnFocus&&!t.query)return n;var r=Boolean(h.current||O.value.renderer.renderNoResults);return!n&&r||n},__autocomplete_metadata:{userAgents:Sn,options:e}}))})),j=p(n({collections:[],completion:null,context:{},isOpen:!1,query:"",activeItemId:null,status:"idle"},O.value.core.initialState)),w={getEnvironmentProps:O.value.renderer.getEnvironmentProps,getFormProps:O.value.renderer.getFormProps,getInputProps:O.value.renderer.getInputProps,getItemProps:O.value.renderer.getItemProps,getLabelProps:O.value.renderer.getLabelProps,getListProps:O.value.renderer.getListProps,getPanelProps:O.value.renderer.getPanelProps,getRootProps:O.value.renderer.getRootProps},S={setActiveItemId:P.value.setActiveItemId,setQuery:P.value.setQuery,setCollections:P.value.setCollections,setIsOpen:P.value.setIsOpen,setStatus:P.value.setStatus,setContext:P.value.setContext,refresh:P.value.refresh},I=d((function(){return Ve.bind(O.value.renderer.renderer.createElement)})),E=d((function(){return ct({autocomplete:P.value,autocompleteScopeApi:S,classNames:O.value.renderer.classNames,environment:O.value.core.environment,isDetached:_.value,placeholder:O.value.core.placeholder,propGetters:w,setIsModalOpen:k,state:j.current,translations:O.value.renderer.translations})}));function A(){tt(E.value.panel,{style:_.value?{}:wn({panelPlacement:O.value.renderer.panelPlacement,container:E.value.root,form:E.value.form,environment:O.value.core.environment})})}function C(e){j.current=e;var t={autocomplete:P.value,autocompleteScopeApi:S,classNames:O.value.renderer.classNames,components:O.value.renderer.components,container:O.value.renderer.container,html:I.value,dom:E.value,panelContainer:_.value?E.value.detachedContainer:O.value.renderer.panelContainer,propGetters:w,state:j.current,renderer:O.value.renderer.renderer},r=!g(e)&&!h.current&&O.value.renderer.renderNoResults||O.value.renderer.render;!function(e){var 
t=e.autocomplete,r=e.autocompleteScopeApi,o=e.dom,i=e.propGetters,u=e.state;nt(o.root,i.getRootProps(n({state:u,props:t.getRootProps({})},r))),nt(o.input,i.getInputProps(n({state:u,props:t.getInputProps({inputElement:o.input}),inputElement:o.input},r))),tt(o.label,{hidden:"stalled"===u.status}),tt(o.loadingIndicator,{hidden:"stalled"!==u.status}),tt(o.clearButton,{hidden:!u.query})}(t),function(e,t){var r=t.autocomplete,o=t.autocompleteScopeApi,u=t.classNames,a=t.html,c=t.dom,l=t.panelContainer,s=t.propGetters,p=t.state,f=t.components,d=t.renderer;if(p.isOpen){l.contains(c.panel)||"loading"===p.status||l.appendChild(c.panel),c.panel.classList.toggle("aa-Panel--stalled","stalled"===p.status);var m=p.collections.filter((function(e){var t=e.source,n=e.items;return t.templates.noResults||n.length>0})).map((function(e,t){var c=e.source,l=e.items;return d.createElement("section",{key:t,className:u.source,"data-autocomplete-source-id":c.sourceId},c.templates.header&&d.createElement("div",{className:u.sourceHeader},c.templates.header({components:f,createElement:d.createElement,Fragment:d.Fragment,items:l,source:c,state:p,html:a})),c.templates.noResults&&0===l.length?d.createElement("div",{className:u.sourceNoResults},c.templates.noResults({components:f,createElement:d.createElement,Fragment:d.Fragment,source:c,state:p,html:a})):d.createElement("ul",i({className:u.list},s.getListProps(n({state:p,props:r.getListProps({})},o))),l.map((function(e){var t=r.getItemProps({item:e,source:c});return d.createElement("li",i({key:t.id,className:u.item},s.getItemProps(n({state:p,props:t},o))),c.templates.item({components:f,createElement:d.createElement,Fragment:d.Fragment,item:e,state:p,html:a}))}))),c.templates.footer&&d.createElement("div",{className:u.sourceFooter},c.templates.footer({components:f,createElement:d.createElement,Fragment:d.Fragment,items:l,source:c,state:p,html:a})))})),v=d.createElement(d.Fragment,null,d.createElement("div",{className:u.panelLayout},m),d.createElement("div",{className:"aa-GradientBottom"})),h=m.reduce((function(e,t){return e[t.props["data-autocomplete-source-id"]]=t,e}),{});e(n(n({children:v,state:p,sections:m,elements:h},d),{},{components:f,html:a},o),c.panel)}else l.contains(c.panel)&&l.removeChild(c.panel)}(r,t)}function D(){var e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{};c();var t=O.value.renderer,n=t.components,r=u(t,In);y.current=Ge(r,O.value.core,{components:Ke(n,(function(e){return!e.value.hasOwnProperty("__autocomplete_componentName")})),initialState:j.current},e),m(),l(),P.value.refresh().then((function(){C(j.current)}))}function k(e){requestAnimationFrame((function(){var t=O.value.core.environment.document.body.contains(E.value.detachedOverlay);e!==t&&(e?(O.value.core.environment.document.body.appendChild(E.value.detachedOverlay),O.value.core.environment.document.body.classList.add("aa-Detached"),E.value.input.focus()):(O.value.core.environment.document.body.removeChild(E.value.detachedOverlay),O.value.core.environment.document.body.classList.remove("aa-Detached"),P.value.setQuery(""),P.value.refresh()))}))}return a((function(){var e=P.value.getEnvironmentProps({formElement:E.value.form,panelElement:E.value.panel,inputElement:E.value.input});return tt(O.value.core.environment,e),function(){tt(O.value.core.environment,Object.keys(e).reduce((function(e,t){return n(n({},e),{},o({},t,void 0))}),{}))}})),a((function(){var 
e=_.value?O.value.core.environment.document.body:O.value.renderer.panelContainer,t=_.value?E.value.detachedOverlay:E.value.panel;return _.value&&j.current.isOpen&&k(!0),C(j.current),function(){e.contains(t)&&e.removeChild(t)}})),a((function(){var e=O.value.renderer.container;return e.appendChild(E.value.root),function(){e.removeChild(E.value.root)}})),a((function(){var e=f((function(e){C(e.state)}),0);return b.current=function(t){var n=t.state,r=t.prevState;(_.value&&r.isOpen!==n.isOpen&&k(n.isOpen),_.value||!n.isOpen||r.isOpen||A(),n.query!==r.query)&&O.value.core.environment.document.querySelectorAll(".aa-Panel--scrollable").forEach((function(e){0!==e.scrollTop&&(e.scrollTop=0)}));e({state:n})},function(){b.current=void 0}})),a((function(){var e=f((function(){var e=_.value;_.value=O.value.core.environment.matchMedia(O.value.renderer.detachedMediaQuery).matches,e!==_.value?D({}):requestAnimationFrame(A)}),20);return O.value.core.environment.addEventListener("resize",e),function(){O.value.core.environment.removeEventListener("resize",e)}})),a((function(){if(!_.value)return function(){};function e(e){E.value.detachedContainer.classList.toggle("aa-DetachedContainer--modal",e)}function t(t){e(t.matches)}var n=O.value.core.environment.matchMedia(getComputedStyle(O.value.core.environment.document.documentElement).getPropertyValue("--aa-detached-modal-media-query"));e(n.matches);var r=Boolean(n.addEventListener);return r?n.addEventListener("change",t):n.addListener(t),function(){r?n.removeEventListener("change",t):n.removeListener(t)}})),a((function(){return requestAnimationFrame(A),function(){}})),n(n({},S),{},{update:D,destroy:function(){c()}})},e.getAlgoliaFacets=function(e){var t=En({transformResponse:function(e){return e.facetHits}}),r=e.queries.map((function(e){return n(n({},e),{},{type:"facet"})}));return t(n(n({},e),{},{queries:r}))},e.getAlgoliaResults=An,Object.defineProperty(e,"__esModule",{value:!0})})); - diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/models.py b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/models.py deleted file mode 100644 index 83e550f8f2b6c858dd76fdde4515e6164c88b912..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/extensions/openai/models.py +++ /dev/null @@ -1,78 +0,0 @@ -from extensions.openai.embeddings import get_embeddings_model_name -from extensions.openai.errors import OpenAIError -from modules import shared -from modules.models import load_model as _load_model -from modules.models import unload_model -from modules.models_settings import get_model_metadata, update_model_parameters -from modules.utils import get_available_models - - -def get_current_model_list() -> list: - return [shared.model_name] # The real chat/completions model, maybe "None" - - -def get_pseudo_model_list() -> list: - return [ # these are expected by so much, so include some here as a dummy - 'gpt-3.5-turbo', - 'text-embedding-ada-002', - ] - - -def load_model(model_name: str) -> dict: - resp = { - "id": model_name, - "object": "engine", - "owner": "self", - "ready": True, - } - if model_name not in get_pseudo_model_list() + [get_embeddings_model_name()] + get_current_model_list(): # Real model only - # No args. Maybe it works anyways! 
- # TODO: hack some heuristics into args for better results - - shared.model_name = model_name - unload_model() - - model_settings = get_model_metadata(shared.model_name) - shared.settings.update({k: v for k, v in model_settings.items() if k in shared.settings}) - update_model_parameters(model_settings, initial=True) - - if shared.settings['mode'] != 'instruct': - shared.settings['instruction_template'] = None - - shared.model, shared.tokenizer = _load_model(shared.model_name) - - if not shared.model: # load failed. - shared.model_name = "None" - raise OpenAIError(f"Model load failed for: {shared.model_name}") - - return resp - - -def list_models(is_legacy: bool = False) -> dict: - # TODO: Lora's? - all_model_list = get_current_model_list() + [get_embeddings_model_name()] + get_pseudo_model_list() + get_available_models() - - models = {} - - if is_legacy: - models = [{"id": id, "object": "engine", "owner": "user", "ready": True} for id in all_model_list] - if not shared.model: - models[0]['ready'] = False - else: - models = [{"id": id, "object": "model", "owned_by": "user", "permission": []} for id in all_model_list] - - resp = { - "object": "list", - "data": models, - } - - return resp - - -def model_info(model_name: str) -> dict: - return { - "id": model_name, - "object": "model", - "owned_by": "user", - "permission": [] - } diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/utils/fuse_conv_bn.py b/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/utils/fuse_conv_bn.py deleted file mode 100644 index cb7076f80bf37f7931185bf0293ffcc1ce19c8ef..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/annotator/uniformer/mmcv/cnn/utils/fuse_conv_bn.py +++ /dev/null @@ -1,59 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - - -def _fuse_conv_bn(conv, bn): - """Fuse conv and bn into one module. - - Args: - conv (nn.Module): Conv to be fused. - bn (nn.Module): BN to be fused. - - Returns: - nn.Module: Fused module. - """ - conv_w = conv.weight - conv_b = conv.bias if conv.bias is not None else torch.zeros_like( - bn.running_mean) - - factor = bn.weight / torch.sqrt(bn.running_var + bn.eps) - conv.weight = nn.Parameter(conv_w * - factor.reshape([conv.out_channels, 1, 1, 1])) - conv.bias = nn.Parameter((conv_b - bn.running_mean) * factor + bn.bias) - return conv - - -def fuse_conv_bn(module): - """Recursively fuse conv and bn in a module. - - During inference, the functionary of batch norm layers is turned off - but only the mean and var alone channels are used, which exposes the - chance to fuse it with the preceding conv layers to save computations and - simplify network structures. - - Args: - module (nn.Module): Module to be fused. - - Returns: - nn.Module: Fused module. - """ - last_conv = None - last_conv_name = None - - for name, child in module.named_children(): - if isinstance(child, - (nn.modules.batchnorm._BatchNorm, nn.SyncBatchNorm)): - if last_conv is None: # only fuse BN that is after Conv - continue - fused_conv = _fuse_conv_bn(last_conv, child) - module._modules[last_conv_name] = fused_conv - # To reduce changes, set BN as Identity instead of deleting it. 
- module._modules[name] = nn.Identity() - last_conv = None - elif isinstance(child, nn.Conv2d): - last_conv = child - last_conv_name = name - else: - fuse_conv_bn(child) - return module diff --git a/spaces/Anustup/NS_AI_LABS/tests/segments_test.py b/spaces/Anustup/NS_AI_LABS/tests/segments_test.py deleted file mode 100644 index b2da7a175ab54b5ac53de78e4ee3865997603aab..0000000000000000000000000000000000000000 --- a/spaces/Anustup/NS_AI_LABS/tests/segments_test.py +++ /dev/null @@ -1,48 +0,0 @@ -import sys -import unittest - -sys.path.append('../NS_AI_LABS') - -from src.segments import merge_timestamps - -class TestSegments(unittest.TestCase): - def __init__(self, *args, **kwargs): - super(TestSegments, self).__init__(*args, **kwargs) - - def test_merge_segments(self): - segments = [ - {'start': 10.0, 'end': 20.0}, - {'start': 22.0, 'end': 27.0}, - {'start': 31.0, 'end': 35.0}, - {'start': 45.0, 'end': 60.0}, - {'start': 61.0, 'end': 65.0}, - {'start': 68.0, 'end': 98.0}, - {'start': 100.0, 'end': 102.0}, - {'start': 110.0, 'end': 112.0} - ] - - result = merge_timestamps(segments, merge_window=5, max_merge_size=30, padding_left=1, padding_right=1) - - self.assertListEqual(result, [ - {'start': 9.0, 'end': 36.0}, - {'start': 44.0, 'end': 66.0}, - {'start': 67.0, 'end': 99.0}, - {'start': 99.0, 'end': 103.0}, - {'start': 109.0, 'end': 113.0} - ]) - - def test_overlap_next(self): - segments = [ - {'start': 5.0, 'end': 39.182}, - {'start': 39.986, 'end': 40.814} - ] - - result = merge_timestamps(segments, merge_window=5, max_merge_size=30, padding_left=1, padding_right=1) - - self.assertListEqual(result, [ - {'start': 4.0, 'end': 39.584}, - {'start': 39.584, 'end': 41.814} - ]) - -if __name__ == '__main__': - unittest.main() \ No newline at end of file diff --git a/spaces/ArtGAN/Diffusion-API/diffusion_webui/utils/__init__.py b/spaces/ArtGAN/Diffusion-API/diffusion_webui/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/virtualenv.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/virtualenv.py deleted file mode 100644 index 882e36f5c1de19a8200000c216cf80119b37c96d..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_internal/utils/virtualenv.py +++ /dev/null @@ -1,104 +0,0 @@ -import logging -import os -import re -import site -import sys -from typing import List, Optional - -logger = logging.getLogger(__name__) -_INCLUDE_SYSTEM_SITE_PACKAGES_REGEX = re.compile( - r"include-system-site-packages\s*=\s*(?Ptrue|false)" -) - - -def _running_under_venv() -> bool: - """Checks if sys.base_prefix and sys.prefix match. - - This handles PEP 405 compliant virtual environments. - """ - return sys.prefix != getattr(sys, "base_prefix", sys.prefix) - - -def _running_under_legacy_virtualenv() -> bool: - """Checks if sys.real_prefix is set. - - This handles virtual environments created with pypa's virtualenv. 
- """ - # pypa/virtualenv case - return hasattr(sys, "real_prefix") - - -def running_under_virtualenv() -> bool: - """True if we're running inside a virtual environment, False otherwise.""" - return _running_under_venv() or _running_under_legacy_virtualenv() - - -def _get_pyvenv_cfg_lines() -> Optional[List[str]]: - """Reads {sys.prefix}/pyvenv.cfg and returns its contents as list of lines - - Returns None, if it could not read/access the file. - """ - pyvenv_cfg_file = os.path.join(sys.prefix, "pyvenv.cfg") - try: - # Although PEP 405 does not specify, the built-in venv module always - # writes with UTF-8. (pypa/pip#8717) - with open(pyvenv_cfg_file, encoding="utf-8") as f: - return f.read().splitlines() # avoids trailing newlines - except OSError: - return None - - -def _no_global_under_venv() -> bool: - """Check `{sys.prefix}/pyvenv.cfg` for system site-packages inclusion - - PEP 405 specifies that when system site-packages are not supposed to be - visible from a virtual environment, `pyvenv.cfg` must contain the following - line: - - include-system-site-packages = false - - Additionally, log a warning if accessing the file fails. - """ - cfg_lines = _get_pyvenv_cfg_lines() - if cfg_lines is None: - # We're not in a "sane" venv, so assume there is no system - # site-packages access (since that's PEP 405's default state). - logger.warning( - "Could not access 'pyvenv.cfg' despite a virtual environment " - "being active. Assuming global site-packages is not accessible " - "in this environment." - ) - return True - - for line in cfg_lines: - match = _INCLUDE_SYSTEM_SITE_PACKAGES_REGEX.match(line) - if match is not None and match.group("value") == "false": - return True - return False - - -def _no_global_under_legacy_virtualenv() -> bool: - """Check if "no-global-site-packages.txt" exists beside site.py - - This mirrors logic in pypa/virtualenv for determining whether system - site-packages are visible in the virtual environment. - """ - site_mod_dir = os.path.dirname(os.path.abspath(site.__file__)) - no_global_site_packages_file = os.path.join( - site_mod_dir, - "no-global-site-packages.txt", - ) - return os.path.exists(no_global_site_packages_file) - - -def virtualenv_no_global() -> bool: - """Returns a boolean, whether running in venv with no system site-packages.""" - # PEP 405 compliance needs to be checked first since virtualenv >=20 would - # return True for both checks, but is only able to use the PEP 405 config. 
- if _running_under_venv(): - return _no_global_under_venv() - - if _running_under_legacy_virtualenv(): - return _no_global_under_legacy_virtualenv() - - return False diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/jaraco/text/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/jaraco/text/__init__.py deleted file mode 100644 index a0306d5ff5cc4a2eb76458c127c462efe59a566d..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_vendor/jaraco/text/__init__.py +++ /dev/null @@ -1,599 +0,0 @@ -import re -import itertools -import textwrap -import functools - -try: - from importlib.resources import files # type: ignore -except ImportError: # pragma: nocover - from setuptools.extern.importlib_resources import files # type: ignore - -from setuptools.extern.jaraco.functools import compose, method_cache -from setuptools.extern.jaraco.context import ExceptionTrap - - -def substitution(old, new): - """ - Return a function that will perform a substitution on a string - """ - return lambda s: s.replace(old, new) - - -def multi_substitution(*substitutions): - """ - Take a sequence of pairs specifying substitutions, and create - a function that performs those substitutions. - - >>> multi_substitution(('foo', 'bar'), ('bar', 'baz'))('foo') - 'baz' - """ - substitutions = itertools.starmap(substitution, substitutions) - # compose function applies last function first, so reverse the - # substitutions to get the expected order. - substitutions = reversed(tuple(substitutions)) - return compose(*substitutions) - - -class FoldedCase(str): - """ - A case insensitive string class; behaves just like str - except compares equal when the only variation is case. - - >>> s = FoldedCase('hello world') - - >>> s == 'Hello World' - True - - >>> 'Hello World' == s - True - - >>> s != 'Hello World' - False - - >>> s.index('O') - 4 - - >>> s.split('O') - ['hell', ' w', 'rld'] - - >>> sorted(map(FoldedCase, ['GAMMA', 'alpha', 'Beta'])) - ['alpha', 'Beta', 'GAMMA'] - - Sequence membership is straightforward. - - >>> "Hello World" in [s] - True - >>> s in ["Hello World"] - True - - You may test for set inclusion, but candidate and elements - must both be folded. - - >>> FoldedCase("Hello World") in {s} - True - >>> s in {FoldedCase("Hello World")} - True - - String inclusion works as long as the FoldedCase object - is on the right. - - >>> "hello" in FoldedCase("Hello World") - True - - But not if the FoldedCase object is on the left: - - >>> FoldedCase('hello') in 'Hello World' - False - - In that case, use ``in_``: - - >>> FoldedCase('hello').in_('Hello World') - True - - >>> FoldedCase('hello') > FoldedCase('Hello') - False - """ - - def __lt__(self, other): - return self.lower() < other.lower() - - def __gt__(self, other): - return self.lower() > other.lower() - - def __eq__(self, other): - return self.lower() == other.lower() - - def __ne__(self, other): - return self.lower() != other.lower() - - def __hash__(self): - return hash(self.lower()) - - def __contains__(self, other): - return super().lower().__contains__(other.lower()) - - def in_(self, other): - "Does self appear in other?" - return self in FoldedCase(other) - - # cache lower since it's likely to be called frequently. 
- @method_cache - def lower(self): - return super().lower() - - def index(self, sub): - return self.lower().index(sub.lower()) - - def split(self, splitter=' ', maxsplit=0): - pattern = re.compile(re.escape(splitter), re.I) - return pattern.split(self, maxsplit) - - -# Python 3.8 compatibility -_unicode_trap = ExceptionTrap(UnicodeDecodeError) - - -@_unicode_trap.passes -def is_decodable(value): - r""" - Return True if the supplied value is decodable (using the default - encoding). - - >>> is_decodable(b'\xff') - False - >>> is_decodable(b'\x32') - True - """ - value.decode() - - -def is_binary(value): - r""" - Return True if the value appears to be binary (that is, it's a byte - string and isn't decodable). - - >>> is_binary(b'\xff') - True - >>> is_binary('\xff') - False - """ - return isinstance(value, bytes) and not is_decodable(value) - - -def trim(s): - r""" - Trim something like a docstring to remove the whitespace that - is common due to indentation and formatting. - - >>> trim("\n\tfoo = bar\n\t\tbar = baz\n") - 'foo = bar\n\tbar = baz' - """ - return textwrap.dedent(s).strip() - - -def wrap(s): - """ - Wrap lines of text, retaining existing newlines as - paragraph markers. - - >>> print(wrap(lorem_ipsum)) - Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do - eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad - minim veniam, quis nostrud exercitation ullamco laboris nisi ut - aliquip ex ea commodo consequat. Duis aute irure dolor in - reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla - pariatur. Excepteur sint occaecat cupidatat non proident, sunt in - culpa qui officia deserunt mollit anim id est laborum. - - Curabitur pretium tincidunt lacus. Nulla gravida orci a odio. Nullam - varius, turpis et commodo pharetra, est eros bibendum elit, nec luctus - magna felis sollicitudin mauris. Integer in mauris eu nibh euismod - gravida. Duis ac tellus et risus vulputate vehicula. Donec lobortis - risus a elit. Etiam tempor. Ut ullamcorper, ligula eu tempor congue, - eros est euismod turpis, id tincidunt sapien risus a quam. Maecenas - fermentum consequat mi. Donec fermentum. Pellentesque malesuada nulla - a mi. Duis sapien sem, aliquet nec, commodo eget, consequat quis, - neque. Aliquam faucibus, elit ut dictum aliquet, felis nisl adipiscing - sapien, sed malesuada diam lacus eget erat. Cras mollis scelerisque - nunc. Nullam arcu. Aliquam consequat. Curabitur augue lorem, dapibus - quis, laoreet et, pretium ac, nisi. Aenean magna nisl, mollis quis, - molestie eu, feugiat in, orci. In hac habitasse platea dictumst. - """ - paragraphs = s.splitlines() - wrapped = ('\n'.join(textwrap.wrap(para)) for para in paragraphs) - return '\n\n'.join(wrapped) - - -def unwrap(s): - r""" - Given a multi-line string, return an unwrapped version. - - >>> wrapped = wrap(lorem_ipsum) - >>> wrapped.count('\n') - 20 - >>> unwrapped = unwrap(wrapped) - >>> unwrapped.count('\n') - 1 - >>> print(unwrapped) - Lorem ipsum dolor sit amet, consectetur adipiscing ... - Curabitur pretium tincidunt lacus. Nulla gravida orci ... 
- - """ - paragraphs = re.split(r'\n\n+', s) - cleaned = (para.replace('\n', ' ') for para in paragraphs) - return '\n'.join(cleaned) - - - - -class Splitter(object): - """object that will split a string with the given arguments for each call - - >>> s = Splitter(',') - >>> s('hello, world, this is your, master calling') - ['hello', ' world', ' this is your', ' master calling'] - """ - - def __init__(self, *args): - self.args = args - - def __call__(self, s): - return s.split(*self.args) - - -def indent(string, prefix=' ' * 4): - """ - >>> indent('foo') - ' foo' - """ - return prefix + string - - -class WordSet(tuple): - """ - Given an identifier, return the words that identifier represents, - whether in camel case, underscore-separated, etc. - - >>> WordSet.parse("camelCase") - ('camel', 'Case') - - >>> WordSet.parse("under_sep") - ('under', 'sep') - - Acronyms should be retained - - >>> WordSet.parse("firstSNL") - ('first', 'SNL') - - >>> WordSet.parse("you_and_I") - ('you', 'and', 'I') - - >>> WordSet.parse("A simple test") - ('A', 'simple', 'test') - - Multiple caps should not interfere with the first cap of another word. - - >>> WordSet.parse("myABCClass") - ('my', 'ABC', 'Class') - - The result is a WordSet, so you can get the form you need. - - >>> WordSet.parse("myABCClass").underscore_separated() - 'my_ABC_Class' - - >>> WordSet.parse('a-command').camel_case() - 'ACommand' - - >>> WordSet.parse('someIdentifier').lowered().space_separated() - 'some identifier' - - Slices of the result should return another WordSet. - - >>> WordSet.parse('taken-out-of-context')[1:].underscore_separated() - 'out_of_context' - - >>> WordSet.from_class_name(WordSet()).lowered().space_separated() - 'word set' - - >>> example = WordSet.parse('figured it out') - >>> example.headless_camel_case() - 'figuredItOut' - >>> example.dash_separated() - 'figured-it-out' - - """ - - _pattern = re.compile('([A-Z]?[a-z]+)|([A-Z]+(?![a-z]))') - - def capitalized(self): - return WordSet(word.capitalize() for word in self) - - def lowered(self): - return WordSet(word.lower() for word in self) - - def camel_case(self): - return ''.join(self.capitalized()) - - def headless_camel_case(self): - words = iter(self) - first = next(words).lower() - new_words = itertools.chain((first,), WordSet(words).camel_case()) - return ''.join(new_words) - - def underscore_separated(self): - return '_'.join(self) - - def dash_separated(self): - return '-'.join(self) - - def space_separated(self): - return ' '.join(self) - - def trim_right(self, item): - """ - Remove the item from the end of the set. - - >>> WordSet.parse('foo bar').trim_right('foo') - ('foo', 'bar') - >>> WordSet.parse('foo bar').trim_right('bar') - ('foo',) - >>> WordSet.parse('').trim_right('bar') - () - """ - return self[:-1] if self and self[-1] == item else self - - def trim_left(self, item): - """ - Remove the item from the beginning of the set. 
- - >>> WordSet.parse('foo bar').trim_left('foo') - ('bar',) - >>> WordSet.parse('foo bar').trim_left('bar') - ('foo', 'bar') - >>> WordSet.parse('').trim_left('bar') - () - """ - return self[1:] if self and self[0] == item else self - - def trim(self, item): - """ - >>> WordSet.parse('foo bar').trim('foo') - ('bar',) - """ - return self.trim_left(item).trim_right(item) - - def __getitem__(self, item): - result = super(WordSet, self).__getitem__(item) - if isinstance(item, slice): - result = WordSet(result) - return result - - @classmethod - def parse(cls, identifier): - matches = cls._pattern.finditer(identifier) - return WordSet(match.group(0) for match in matches) - - @classmethod - def from_class_name(cls, subject): - return cls.parse(subject.__class__.__name__) - - -# for backward compatibility -words = WordSet.parse - - -def simple_html_strip(s): - r""" - Remove HTML from the string `s`. - - >>> str(simple_html_strip('')) - '' - - >>> print(simple_html_strip('A stormy day in paradise')) - A stormy day in paradise - - >>> print(simple_html_strip('Somebody tell the truth.')) - Somebody tell the truth. - - >>> print(simple_html_strip('What about
\nmultiple lines?')) - What about - multiple lines? - """ - html_stripper = re.compile('()|(<[^>]*>)|([^<]+)', re.DOTALL) - texts = (match.group(3) or '' for match in html_stripper.finditer(s)) - return ''.join(texts) - - -class SeparatedValues(str): - """ - A string separated by a separator. Overrides __iter__ for getting - the values. - - >>> list(SeparatedValues('a,b,c')) - ['a', 'b', 'c'] - - Whitespace is stripped and empty values are discarded. - - >>> list(SeparatedValues(' a, b , c, ')) - ['a', 'b', 'c'] - """ - - separator = ',' - - def __iter__(self): - parts = self.split(self.separator) - return filter(None, (part.strip() for part in parts)) - - -class Stripper: - r""" - Given a series of lines, find the common prefix and strip it from them. - - >>> lines = [ - ... 'abcdefg\n', - ... 'abc\n', - ... 'abcde\n', - ... ] - >>> res = Stripper.strip_prefix(lines) - >>> res.prefix - 'abc' - >>> list(res.lines) - ['defg\n', '\n', 'de\n'] - - If no prefix is common, nothing should be stripped. - - >>> lines = [ - ... 'abcd\n', - ... '1234\n', - ... ] - >>> res = Stripper.strip_prefix(lines) - >>> res.prefix = '' - >>> list(res.lines) - ['abcd\n', '1234\n'] - """ - - def __init__(self, prefix, lines): - self.prefix = prefix - self.lines = map(self, lines) - - @classmethod - def strip_prefix(cls, lines): - prefix_lines, lines = itertools.tee(lines) - prefix = functools.reduce(cls.common_prefix, prefix_lines) - return cls(prefix, lines) - - def __call__(self, line): - if not self.prefix: - return line - null, prefix, rest = line.partition(self.prefix) - return rest - - @staticmethod - def common_prefix(s1, s2): - """ - Return the common prefix of two lines. - """ - index = min(len(s1), len(s2)) - while s1[:index] != s2[:index]: - index -= 1 - return s1[:index] - - -def remove_prefix(text, prefix): - """ - Remove the prefix from the text if it exists. - - >>> remove_prefix('underwhelming performance', 'underwhelming ') - 'performance' - - >>> remove_prefix('something special', 'sample') - 'something special' - """ - null, prefix, rest = text.rpartition(prefix) - return rest - - -def remove_suffix(text, suffix): - """ - Remove the suffix from the text if it exists. - - >>> remove_suffix('name.git', '.git') - 'name' - - >>> remove_suffix('something special', 'sample') - 'something special' - """ - rest, suffix, null = text.partition(suffix) - return rest - - -def normalize_newlines(text): - r""" - Replace alternate newlines with the canonical newline. - - >>> normalize_newlines('Lorem Ipsum\u2029') - 'Lorem Ipsum\n' - >>> normalize_newlines('Lorem Ipsum\r\n') - 'Lorem Ipsum\n' - >>> normalize_newlines('Lorem Ipsum\x85') - 'Lorem Ipsum\n' - """ - newlines = ['\r\n', '\r', '\n', '\u0085', '\u2028', '\u2029'] - pattern = '|'.join(newlines) - return re.sub(pattern, '\n', text) - - -def _nonblank(str): - return str and not str.startswith('#') - - -@functools.singledispatch -def yield_lines(iterable): - r""" - Yield valid lines of a string or iterable. - - >>> list(yield_lines('')) - [] - >>> list(yield_lines(['foo', 'bar'])) - ['foo', 'bar'] - >>> list(yield_lines('foo\nbar')) - ['foo', 'bar'] - >>> list(yield_lines('\nfoo\n#bar\nbaz #comment')) - ['foo', 'baz #comment'] - >>> list(yield_lines(['foo\nbar', 'baz', 'bing\n\n\n'])) - ['foo', 'bar', 'baz', 'bing'] - """ - return itertools.chain.from_iterable(map(yield_lines, iterable)) - - -@yield_lines.register(str) -def _(text): - return filter(_nonblank, map(str.strip, text.splitlines())) - - -def drop_comment(line): - """ - Drop comments. 
- - >>> drop_comment('foo # bar') - 'foo' - - A hash without a space may be in a URL. - - >>> drop_comment('http://example.com/foo#bar') - 'http://example.com/foo#bar' - """ - return line.partition(' #')[0] - - -def join_continuation(lines): - r""" - Join lines continued by a trailing backslash. - - >>> list(join_continuation(['foo \\', 'bar', 'baz'])) - ['foobar', 'baz'] - >>> list(join_continuation(['foo \\', 'bar', 'baz'])) - ['foobar', 'baz'] - >>> list(join_continuation(['foo \\', 'bar \\', 'baz'])) - ['foobarbaz'] - - Not sure why, but... - The character preceeding the backslash is also elided. - - >>> list(join_continuation(['goo\\', 'dly'])) - ['godly'] - - A terrible idea, but... - If no line is available to continue, suppress the lines. - - >>> list(join_continuation(['foo', 'bar\\', 'baz\\'])) - ['foo'] - """ - lines = iter(lines) - for item in lines: - while item.endswith('\\'): - try: - item = item[:-2].strip() + next(lines) - except StopIteration: - return - yield item diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/fast_rcnn.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/fast_rcnn.py deleted file mode 100644 index 42eba21092af71bafdc1875fdbe3a5ae1e3fd543..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/roi_heads/fast_rcnn.py +++ /dev/null @@ -1,462 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -from typing import Dict, List, Tuple, Union -import torch -from torch import nn -from torch.nn import functional as F - -from detectron2.config import configurable -from detectron2.layers import ShapeSpec, batched_nms, cat, cross_entropy, nonzero_tuple -from detectron2.modeling.box_regression import Box2BoxTransform, _dense_box_regression_loss -from detectron2.structures import Boxes, Instances -from detectron2.utils.events import get_event_storage - -__all__ = ["fast_rcnn_inference", "FastRCNNOutputLayers"] - - -logger = logging.getLogger(__name__) - -""" -Shape shorthand in this module: - - N: number of images in the minibatch - R: number of ROIs, combined over all images, in the minibatch - Ri: number of ROIs in image i - K: number of foreground classes. E.g.,there are 80 foreground classes in COCO. - -Naming convention: - - deltas: refers to the 4-d (dx, dy, dw, dh) deltas that parameterize the box2box - transform (see :class:`box_regression.Box2BoxTransform`). - - pred_class_logits: predicted class scores in [-inf, +inf]; use - softmax(pred_class_logits) to estimate P(class). - - gt_classes: ground-truth classification labels in [0, K], where [0, K) represent - foreground object classes and K represents the background class. - - pred_proposal_deltas: predicted box2box transform deltas for transforming proposals - to detection box predictions. - - gt_proposal_deltas: ground-truth box2box transform deltas -""" - - -def fast_rcnn_inference( - boxes: List[torch.Tensor], - scores: List[torch.Tensor], - image_shapes: List[Tuple[int, int]], - score_thresh: float, - nms_thresh: float, - topk_per_image: int, -): - """ - Call `fast_rcnn_inference_single_image` for all images. - - Args: - boxes (list[Tensor]): A list of Tensors of predicted class-specific or class-agnostic - boxes for each image. 
Element i has shape (Ri, K * 4) if doing - class-specific regression, or (Ri, 4) if doing class-agnostic - regression, where Ri is the number of predicted objects for image i. - This is compatible with the output of :meth:`FastRCNNOutputLayers.predict_boxes`. - scores (list[Tensor]): A list of Tensors of predicted class scores for each image. - Element i has shape (Ri, K + 1), where Ri is the number of predicted objects - for image i. Compatible with the output of :meth:`FastRCNNOutputLayers.predict_probs`. - image_shapes (list[tuple]): A list of (width, height) tuples for each image in the batch. - score_thresh (float): Only return detections with a confidence score exceeding this - threshold. - nms_thresh (float): The threshold to use for box non-maximum suppression. Value in [0, 1]. - topk_per_image (int): The number of top scoring detections to return. Set < 0 to return - all detections. - - Returns: - instances: (list[Instances]): A list of N instances, one for each image in the batch, - that stores the topk most confidence detections. - kept_indices: (list[Tensor]): A list of 1D tensor of length of N, each element indicates - the corresponding boxes/scores index in [0, Ri) from the input, for image i. - """ - result_per_image = [ - fast_rcnn_inference_single_image( - boxes_per_image, scores_per_image, image_shape, score_thresh, nms_thresh, topk_per_image - ) - for scores_per_image, boxes_per_image, image_shape in zip(scores, boxes, image_shapes) - ] - return [x[0] for x in result_per_image], [x[1] for x in result_per_image] - - -def _log_classification_stats(pred_logits, gt_classes, prefix="fast_rcnn"): - """ - Log the classification metrics to EventStorage. - - Args: - pred_logits: Rx(K+1) logits. The last column is for background class. - gt_classes: R labels - """ - num_instances = gt_classes.numel() - if num_instances == 0: - return - pred_classes = pred_logits.argmax(dim=1) - bg_class_ind = pred_logits.shape[1] - 1 - - fg_inds = (gt_classes >= 0) & (gt_classes < bg_class_ind) - num_fg = fg_inds.nonzero().numel() - fg_gt_classes = gt_classes[fg_inds] - fg_pred_classes = pred_classes[fg_inds] - - num_false_negative = (fg_pred_classes == bg_class_ind).nonzero().numel() - num_accurate = (pred_classes == gt_classes).nonzero().numel() - fg_num_accurate = (fg_pred_classes == fg_gt_classes).nonzero().numel() - - storage = get_event_storage() - storage.put_scalar(f"{prefix}/cls_accuracy", num_accurate / num_instances) - if num_fg > 0: - storage.put_scalar(f"{prefix}/fg_cls_accuracy", fg_num_accurate / num_fg) - storage.put_scalar(f"{prefix}/false_negative", num_false_negative / num_fg) - - -def fast_rcnn_inference_single_image( - boxes, - scores, - image_shape: Tuple[int, int], - score_thresh: float, - nms_thresh: float, - topk_per_image: int, -): - """ - Single-image inference. Return bounding-box detection results by thresholding - on scores and applying non-maximum suppression (NMS). - - Args: - Same as `fast_rcnn_inference`, but with boxes, scores, and image shapes - per image. - - Returns: - Same as `fast_rcnn_inference`, but for only one image. - """ - valid_mask = torch.isfinite(boxes).all(dim=1) & torch.isfinite(scores).all(dim=1) - if not valid_mask.all(): - boxes = boxes[valid_mask] - scores = scores[valid_mask] - - scores = scores[:, :-1] - num_bbox_reg_classes = boxes.shape[1] // 4 - # Convert to Boxes to use the `clip` function ... - boxes = Boxes(boxes.reshape(-1, 4)) - boxes.clip(image_shape) - boxes = boxes.tensor.view(-1, num_bbox_reg_classes, 4) # R x C x 4 - - # 1. 
Filter results based on detection scores. It can make NMS more efficient - # by filtering out low-confidence detections. - filter_mask = scores > score_thresh # R x K - # R' x 2. First column contains indices of the R predictions; - # Second column contains indices of classes. - filter_inds = filter_mask.nonzero() - if num_bbox_reg_classes == 1: - boxes = boxes[filter_inds[:, 0], 0] - else: - boxes = boxes[filter_mask] - scores = scores[filter_mask] - - # 2. Apply NMS for each class independently. - keep = batched_nms(boxes, scores, filter_inds[:, 1], nms_thresh) - if topk_per_image >= 0: - keep = keep[:topk_per_image] - boxes, scores, filter_inds = boxes[keep], scores[keep], filter_inds[keep] - - result = Instances(image_shape) - result.pred_boxes = Boxes(boxes) - result.scores = scores - result.pred_classes = filter_inds[:, 1] - return result, filter_inds[:, 0] - - -class FastRCNNOutputLayers(nn.Module): - """ - Two linear layers for predicting Fast R-CNN outputs: - - 1. proposal-to-detection box regression deltas - 2. classification scores - """ - - @configurable - def __init__( - self, - input_shape: ShapeSpec, - *, - box2box_transform, - num_classes: int, - test_score_thresh: float = 0.0, - test_nms_thresh: float = 0.5, - test_topk_per_image: int = 100, - cls_agnostic_bbox_reg: bool = False, - smooth_l1_beta: float = 0.0, - box_reg_loss_type: str = "smooth_l1", - loss_weight: Union[float, Dict[str, float]] = 1.0, - ): - """ - NOTE: this interface is experimental. - - Args: - input_shape (ShapeSpec): shape of the input feature to this module - box2box_transform (Box2BoxTransform or Box2BoxTransformRotated): - num_classes (int): number of foreground classes - test_score_thresh (float): threshold to filter predictions results. - test_nms_thresh (float): NMS threshold for prediction results. - test_topk_per_image (int): number of top predictions to produce per image. - cls_agnostic_bbox_reg (bool): whether to use class agnostic for bbox regression - smooth_l1_beta (float): transition point from L1 to L2 loss. Only used if - `box_reg_loss_type` is "smooth_l1" - box_reg_loss_type (str): Box regression loss type. One of: "smooth_l1", "giou", - "diou", "ciou" - loss_weight (float|dict): weights to use for losses. Can be single float for weighting - all losses, or a dict of individual weightings. 
Valid dict keys are: - * "loss_cls": applied to classification loss - * "loss_box_reg": applied to box regression loss - """ - super().__init__() - if isinstance(input_shape, int): # some backward compatibility - input_shape = ShapeSpec(channels=input_shape) - self.num_classes = num_classes - input_size = input_shape.channels * (input_shape.width or 1) * (input_shape.height or 1) - # prediction layer for num_classes foreground classes and one background class (hence + 1) - self.cls_score = nn.Linear(input_size, num_classes + 1) - num_bbox_reg_classes = 1 if cls_agnostic_bbox_reg else num_classes - box_dim = len(box2box_transform.weights) - self.bbox_pred = nn.Linear(input_size, num_bbox_reg_classes * box_dim) - - nn.init.normal_(self.cls_score.weight, std=0.01) - nn.init.normal_(self.bbox_pred.weight, std=0.001) - for l in [self.cls_score, self.bbox_pred]: - nn.init.constant_(l.bias, 0) - - self.box2box_transform = box2box_transform - self.smooth_l1_beta = smooth_l1_beta - self.test_score_thresh = test_score_thresh - self.test_nms_thresh = test_nms_thresh - self.test_topk_per_image = test_topk_per_image - self.box_reg_loss_type = box_reg_loss_type - if isinstance(loss_weight, float): - loss_weight = {"loss_cls": loss_weight, "loss_box_reg": loss_weight} - self.loss_weight = loss_weight - - @classmethod - def from_config(cls, cfg, input_shape): - return { - "input_shape": input_shape, - "box2box_transform": Box2BoxTransform(weights=cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS), - # fmt: off - "num_classes" : cfg.MODEL.ROI_HEADS.NUM_CLASSES, - "cls_agnostic_bbox_reg" : cfg.MODEL.ROI_BOX_HEAD.CLS_AGNOSTIC_BBOX_REG, - "smooth_l1_beta" : cfg.MODEL.ROI_BOX_HEAD.SMOOTH_L1_BETA, - "test_score_thresh" : cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST, - "test_nms_thresh" : cfg.MODEL.ROI_HEADS.NMS_THRESH_TEST, - "test_topk_per_image" : cfg.TEST.DETECTIONS_PER_IMAGE, - "box_reg_loss_type" : cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_LOSS_TYPE, - "loss_weight" : {"loss_box_reg": cfg.MODEL.ROI_BOX_HEAD.BBOX_REG_LOSS_WEIGHT}, - # fmt: on - } - - def forward(self, x): - """ - Args: - x: per-region features of shape (N, ...) for N bounding boxes to predict. - - Returns: - (Tensor, Tensor): - First tensor: shape (N,K+1), scores for each of the N box. Each row contains the - scores for K object categories and 1 background class. - - Second tensor: bounding box regression deltas for each box. Shape is shape (N,Kx4), - or (N,4) for class-agnostic regression. - """ - if x.dim() > 2: - x = torch.flatten(x, start_dim=1) - scores = self.cls_score(x) - proposal_deltas = self.bbox_pred(x) - return scores, proposal_deltas - - def losses(self, predictions, proposals): - """ - Args: - predictions: return values of :meth:`forward()`. - proposals (list[Instances]): proposals that match the features that were used - to compute predictions. The fields ``proposal_boxes``, ``gt_boxes``, - ``gt_classes`` are expected. - - Returns: - Dict[str, Tensor]: dict of losses - """ - scores, proposal_deltas = predictions - - # parse classification outputs - gt_classes = ( - cat([p.gt_classes for p in proposals], dim=0) if len(proposals) else torch.empty(0) - ) - _log_classification_stats(scores, gt_classes) - - # parse box regression outputs - if len(proposals): - proposal_boxes = cat([p.proposal_boxes.tensor for p in proposals], dim=0) # Nx4 - assert not proposal_boxes.requires_grad, "Proposals should not require gradients!" - # If "gt_boxes" does not exist, the proposals must be all negative and - # should not be included in regression loss computation. 
- # Here we just use proposal_boxes as an arbitrary placeholder because its - # value won't be used in self.box_reg_loss(). - gt_boxes = cat( - [(p.gt_boxes if p.has("gt_boxes") else p.proposal_boxes).tensor for p in proposals], - dim=0, - ) - else: - proposal_boxes = gt_boxes = torch.empty((0, 4), device=proposal_deltas.device) - - losses = { - "loss_cls": cross_entropy(scores, gt_classes, reduction="mean"), - "loss_box_reg": self.box_reg_loss( - proposal_boxes, gt_boxes, proposal_deltas, gt_classes - ), - } - return {k: v * self.loss_weight.get(k, 1.0) for k, v in losses.items()} - - def box_reg_loss(self, proposal_boxes, gt_boxes, pred_deltas, gt_classes): - """ - Args: - proposal_boxes/gt_boxes are tensors with the same shape (R, 4 or 5). - pred_deltas has shape (R, 4 or 5), or (R, num_classes * (4 or 5)). - gt_classes is a long tensor of shape R, the gt class label of each proposal. - R shall be the number of proposals. - """ - box_dim = proposal_boxes.shape[1] # 4 or 5 - # Regression loss is only computed for foreground proposals (those matched to a GT) - fg_inds = nonzero_tuple((gt_classes >= 0) & (gt_classes < self.num_classes))[0] - if pred_deltas.shape[1] == box_dim: # cls-agnostic regression - fg_pred_deltas = pred_deltas[fg_inds] - else: - fg_pred_deltas = pred_deltas.view(-1, self.num_classes, box_dim)[ - fg_inds, gt_classes[fg_inds] - ] - - loss_box_reg = _dense_box_regression_loss( - [proposal_boxes[fg_inds]], - self.box2box_transform, - [fg_pred_deltas.unsqueeze(0)], - [gt_boxes[fg_inds]], - ..., - self.box_reg_loss_type, - self.smooth_l1_beta, - ) - - # The reg loss is normalized using the total number of regions (R), not the number - # of foreground regions even though the box regression loss is only defined on - # foreground regions. Why? Because doing so gives equal training influence to - # each foreground example. To see how, consider two different minibatches: - # (1) Contains a single foreground region - # (2) Contains 100 foreground regions - # If we normalize by the number of foreground regions, the single example in - # minibatch (1) will be given 100 times as much influence as each foreground - # example in minibatch (2). Normalizing by the total number of regions, R, - # means that the single example in minibatch (1) and each of the 100 examples - # in minibatch (2) are given equal influence. - return loss_box_reg / max(gt_classes.numel(), 1.0) # return 0 if empty - - def inference(self, predictions: Tuple[torch.Tensor, torch.Tensor], proposals: List[Instances]): - """ - Args: - predictions: return values of :meth:`forward()`. - proposals (list[Instances]): proposals that match the features that were - used to compute predictions. The ``proposal_boxes`` field is expected. - - Returns: - list[Instances]: same as `fast_rcnn_inference`. - list[Tensor]: same as `fast_rcnn_inference`. - """ - boxes = self.predict_boxes(predictions, proposals) - scores = self.predict_probs(predictions, proposals) - image_shapes = [x.image_size for x in proposals] - return fast_rcnn_inference( - boxes, - scores, - image_shapes, - self.test_score_thresh, - self.test_nms_thresh, - self.test_topk_per_image, - ) - - def predict_boxes_for_gt_classes(self, predictions, proposals): - """ - Args: - predictions: return values of :meth:`forward()`. - proposals (list[Instances]): proposals that match the features that were used - to compute predictions. The fields ``proposal_boxes``, ``gt_classes`` are expected. 
- - Returns: - list[Tensor]: - A list of Tensors of predicted boxes for GT classes in case of - class-specific box head. Element i of the list has shape (Ri, B), where Ri is - the number of proposals for image i and B is the box dimension (4 or 5) - """ - if not len(proposals): - return [] - scores, proposal_deltas = predictions - proposal_boxes = cat([p.proposal_boxes.tensor for p in proposals], dim=0) - N, B = proposal_boxes.shape - predict_boxes = self.box2box_transform.apply_deltas( - proposal_deltas, proposal_boxes - ) # Nx(KxB) - - K = predict_boxes.shape[1] // B - if K > 1: - gt_classes = torch.cat([p.gt_classes for p in proposals], dim=0) - # Some proposals are ignored or have a background class. Their gt_classes - # cannot be used as index. - gt_classes = gt_classes.clamp_(0, K - 1) - - predict_boxes = predict_boxes.view(N, K, B)[ - torch.arange(N, dtype=torch.long, device=predict_boxes.device), gt_classes - ] - num_prop_per_image = [len(p) for p in proposals] - return predict_boxes.split(num_prop_per_image) - - def predict_boxes( - self, predictions: Tuple[torch.Tensor, torch.Tensor], proposals: List[Instances] - ): - """ - Args: - predictions: return values of :meth:`forward()`. - proposals (list[Instances]): proposals that match the features that were - used to compute predictions. The ``proposal_boxes`` field is expected. - - Returns: - list[Tensor]: - A list of Tensors of predicted class-specific or class-agnostic boxes - for each image. Element i has shape (Ri, K * B) or (Ri, B), where Ri is - the number of proposals for image i and B is the box dimension (4 or 5) - """ - if not len(proposals): - return [] - _, proposal_deltas = predictions - num_prop_per_image = [len(p) for p in proposals] - proposal_boxes = cat([p.proposal_boxes.tensor for p in proposals], dim=0) - predict_boxes = self.box2box_transform.apply_deltas( - proposal_deltas, - proposal_boxes, - ) # Nx(KxB) - return predict_boxes.split(num_prop_per_image) - - def predict_probs( - self, predictions: Tuple[torch.Tensor, torch.Tensor], proposals: List[Instances] - ): - """ - Args: - predictions: return values of :meth:`forward()`. - proposals (list[Instances]): proposals that match the features that were - used to compute predictions. - - Returns: - list[Tensor]: - A list of Tensors of predicted class probabilities for each image. - Element i has shape (Ri, K + 1), where Ri is the number of proposals for image i. - """ - scores, _ = predictions - num_inst_per_image = [len(p) for p in proposals] - probs = F.softmax(scores, dim=-1) - return probs.split(num_inst_per_image, dim=0) diff --git a/spaces/BAAI/vid2vid-zero/vid2vid_zero/p2p/seq_aligner.py b/spaces/BAAI/vid2vid-zero/vid2vid_zero/p2p/seq_aligner.py deleted file mode 100644 index 1b9c72380f32823295554cdd4d35352ee5984022..0000000000000000000000000000000000000000 --- a/spaces/BAAI/vid2vid-zero/vid2vid_zero/p2p/seq_aligner.py +++ /dev/null @@ -1,197 +0,0 @@ -# Copyright 2022 Google LLC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import torch -import numpy as np - - -class ScoreParams: - - def __init__(self, gap, match, mismatch): - self.gap = gap - self.match = match - self.mismatch = mismatch - - def mis_match_char(self, x, y): - if x != y: - return self.mismatch - else: - return self.match - - -def get_matrix(size_x, size_y, gap): - matrix = [] - for i in range(len(size_x) + 1): - sub_matrix = [] - for j in range(len(size_y) + 1): - sub_matrix.append(0) - matrix.append(sub_matrix) - for j in range(1, len(size_y) + 1): - matrix[0][j] = j*gap - for i in range(1, len(size_x) + 1): - matrix[i][0] = i*gap - return matrix - - -def get_matrix(size_x, size_y, gap): - matrix = np.zeros((size_x + 1, size_y + 1), dtype=np.int32) - matrix[0, 1:] = (np.arange(size_y) + 1) * gap - matrix[1:, 0] = (np.arange(size_x) + 1) * gap - return matrix - - -def get_traceback_matrix(size_x, size_y): - matrix = np.zeros((size_x + 1, size_y +1), dtype=np.int32) - matrix[0, 1:] = 1 - matrix[1:, 0] = 2 - matrix[0, 0] = 4 - return matrix - - -def global_align(x, y, score): - matrix = get_matrix(len(x), len(y), score.gap) - trace_back = get_traceback_matrix(len(x), len(y)) - for i in range(1, len(x) + 1): - for j in range(1, len(y) + 1): - left = matrix[i, j - 1] + score.gap - up = matrix[i - 1, j] + score.gap - diag = matrix[i - 1, j - 1] + score.mis_match_char(x[i - 1], y[j - 1]) - matrix[i, j] = max(left, up, diag) - if matrix[i, j] == left: - trace_back[i, j] = 1 - elif matrix[i, j] == up: - trace_back[i, j] = 2 - else: - trace_back[i, j] = 3 - return matrix, trace_back - - -def get_aligned_sequences(x, y, trace_back): - x_seq = [] - y_seq = [] - i = len(x) - j = len(y) - mapper_y_to_x = [] - while i > 0 or j > 0: - if trace_back[i, j] == 3: - x_seq.append(x[i-1]) - y_seq.append(y[j-1]) - i = i-1 - j = j-1 - mapper_y_to_x.append((j, i)) - elif trace_back[i][j] == 1: - x_seq.append('-') - y_seq.append(y[j-1]) - j = j-1 - mapper_y_to_x.append((j, -1)) - elif trace_back[i][j] == 2: - x_seq.append(x[i-1]) - y_seq.append('-') - i = i-1 - elif trace_back[i][j] == 4: - break - mapper_y_to_x.reverse() - return x_seq, y_seq, torch.tensor(mapper_y_to_x, dtype=torch.int64) - - -def get_mapper(x: str, y: str, tokenizer, max_len=77): - x_seq = tokenizer.encode(x) - y_seq = tokenizer.encode(y) - score = ScoreParams(0, 1, -1) - matrix, trace_back = global_align(x_seq, y_seq, score) - mapper_base = get_aligned_sequences(x_seq, y_seq, trace_back)[-1] - alphas = torch.ones(max_len) - alphas[: mapper_base.shape[0]] = mapper_base[:, 1].ne(-1).float() - mapper = torch.zeros(max_len, dtype=torch.int64) - mapper[:mapper_base.shape[0]] = mapper_base[:, 1] - mapper[mapper_base.shape[0]:] = len(y_seq) + torch.arange(max_len - len(y_seq)) - return mapper, alphas - - -def get_refinement_mapper(prompts, tokenizer, max_len=77): - x_seq = prompts[0] - mappers, alphas = [], [] - for i in range(1, len(prompts)): - mapper, alpha = get_mapper(x_seq, prompts[i], tokenizer, max_len) - mappers.append(mapper) - alphas.append(alpha) - return torch.stack(mappers), torch.stack(alphas) - - -def get_word_inds(text: str, word_place: int, tokenizer): - split_text = text.split(" ") - if type(word_place) is str: - word_place = [i for i, word in enumerate(split_text) if word_place == word] - elif type(word_place) is int: - word_place = [word_place] - out = [] - if len(word_place) > 0: - words_encode = [tokenizer.decode([item]).strip("#") for item in tokenizer.encode(text)][1:-1] - cur_len, ptr = 0, 0 - - for i in range(len(words_encode)): - cur_len += len(words_encode[i]) - if ptr in 
word_place: - out.append(i + 1) - if cur_len >= len(split_text[ptr]): - ptr += 1 - cur_len = 0 - return np.array(out) - - -def get_replacement_mapper_(x: str, y: str, tokenizer, max_len=77): - words_x = x.split(' ') - words_y = y.split(' ') - if len(words_x) != len(words_y): - raise ValueError(f"attention replacement edit can only be applied on prompts with the same length" - f" but prompt A has {len(words_x)} words and prompt B has {len(words_y)} words.") - inds_replace = [i for i in range(len(words_y)) if words_y[i] != words_x[i]] - inds_source = [get_word_inds(x, i, tokenizer) for i in inds_replace] - inds_target = [get_word_inds(y, i, tokenizer) for i in inds_replace] - mapper = np.zeros((max_len, max_len)) - i = j = 0 - cur_inds = 0 - while i < max_len and j < max_len: - if cur_inds < len(inds_source) and inds_source[cur_inds][0] == i: - inds_source_, inds_target_ = inds_source[cur_inds], inds_target[cur_inds] - if len(inds_source_) == len(inds_target_): - mapper[inds_source_, inds_target_] = 1 - else: - ratio = 1 / len(inds_target_) - for i_t in inds_target_: - mapper[inds_source_, i_t] = ratio - cur_inds += 1 - i += len(inds_source_) - j += len(inds_target_) - elif cur_inds < len(inds_source): - mapper[i, j] = 1 - i += 1 - j += 1 - else: - mapper[j, j] = 1 - i += 1 - j += 1 - - return torch.from_numpy(mapper).float() - - - -def get_replacement_mapper(prompts, tokenizer, max_len=77): - x_seq = prompts[0] - mappers = [] - for i in range(1, len(prompts)): - mapper = get_replacement_mapper_(x_seq, prompts[i], tokenizer, max_len) - mappers.append(mapper) - return torch.stack(mappers) - diff --git a/spaces/Bala2-03-2003/MygenvioceAI/README.md b/spaces/Bala2-03-2003/MygenvioceAI/README.md deleted file mode 100644 index 535814a0d4937e26309fedfd19da25b8528bdf43..0000000000000000000000000000000000000000 --- a/spaces/Bala2-03-2003/MygenvioceAI/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: MygenvioceAI -emoji: 🏃 -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/BasToTheMax/tensor/app.py b/spaces/BasToTheMax/tensor/app.py deleted file mode 100644 index 3d3727030a2caabb67f3ce980f28a1ab9628fe03..0000000000000000000000000000000000000000 --- a/spaces/BasToTheMax/tensor/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import gradio as gr -from stable_diffusion_tf.stable_diffusion import StableDiffusion -from PIL import Image - -generator = StableDiffusion( - img_height=512, - img_width=512, - jit_compile=True, -) - - -def gen(prompt): - image = generator.generate( - prompt, - num_steps=50, - unconditional_guidance_scale=7.5, - temperature=1, - batch_size=1, - ) - return image[0] - -demo = gr.Interface(fn=gen, inputs="text", outputs=gr.Image(type="pil")) - -demo.launch() - -# prompt = "a photo of an astronaut riding a horse on mars" -# image = pipe(prompt).images[0] -# image.save("astronaut_rides_horse.png") \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Cmo Descargar Mis Monstruos Cantando.md b/spaces/Benson/text-generation/Examples/Cmo Descargar Mis Monstruos Cantando.md deleted file mode 100644 index 021d7c5a584fdc44f10a0f80d2525fecaa58c786..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Cmo Descargar Mis Monstruos Cantando.md +++ /dev/null @@ -1,79 +0,0 @@ -
How to Download My Singing Monsters

Do you like music games? Do you enjoy collecting and breeding cute, funny monsters? Do you want to create your own island full of singing creatures? If you answered yes to any of these questions, then you should definitely download My Singing Monsters, a free-to-play game for Android, iOS, and Steam. In this article, we will tell you everything you need to know about downloading My Singing Monsters on different devices and platforms, as well as some tips and tricks to get the most out of your monster experience.

How to download My Singing Monsters

Download --->>> https://bltlly.com/2v6Jrs

Download from the Google Play Store

If you have an Android phone or tablet, the easiest way to download My Singing Monsters is from the Google Play Store. These are the steps to follow:

1. Open the Play Store app on your device or go to play.google.com in your browser.
2. Search for "My Singing Monsters" or use this direct link.
3. Tap the app's title and check the star rating, the number of downloads, and the reviews to make sure it is trustworthy and safe.
4. Tap "Install" (for free apps) or the app's price (for paid apps) and accept the permissions.
5. Wait for the app to download and install on your device.
6. Open the app and enjoy!
-

Download from the App Store

If you have an iPhone or iPad, you can download My Singing Monsters from the App Store. These are the steps to follow:

1. Open the App Store app on your device or go to apps.apple.com in your browser.
2. Search for "My Singing Monsters" or use this direct link.
3. Tap the app's title and check the star rating, the number of downloads, and the reviews to make sure it is trustworthy and safe.
4. Tap "Get" (for free apps) or the app's price (for paid apps) and enter your Apple ID password or use Touch ID or Face ID.
5. Wait for the app to download and install on your device.
6. Open the app and enjoy!
-

Download from Steam

If you have a PC or Mac, you can download My Singing Monsters from Steam, a popular gaming platform. These are the steps to follow:

1. Download and install Steam on your computer from store.steampowered.com.
2. Create a Steam account or sign in with an existing one.
3. Search for "My Singing Monsters" or use this direct link.
4. Click "Play Game" (for free games) or "Add to Cart" (for paid games) and follow the instructions.
5. Wait for the game to download and install on your computer.
6. Launch Steam and open the game from your library.
7. Enjoy!
-

Download from other sources

Risks and precautions

While downloading apps from the official sources is usually safe and easy, you may want to download My Singing Monsters from other sources for various reasons. For example, you may have an older device that is not compatible with the latest version of the app, or you may want to access features that are not available in your region. However, downloading apps from unknown sources can also pose some risks, such as:

• Malware: some apps may contain malicious software that can damage your device or steal your personal information.
• Viruses: some apps may infect your device with viruses that can corrupt your files or slow down its performance.
• Spyware: some apps may monitor your activity or collect your data without your consent.
• Adware: some apps may display annoying or inappropriate ads on your device.
• Scams: some apps may trick you into paying for something that is not what you expected or is not worth the price.

To avoid these risks, you should always be careful and cautious when downloading apps from other sources. Here are some precautions you can take:

• Download apps only from trusted and verified sites or platforms (see the checksum sketch after this list).
• Scan the app with reliable antivirus or anti-malware software before installing it.
• Read the app's permissions and terms of service carefully, and only accept them if you agree with them.
• Back up your device and data regularly in case something goes wrong.
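The article recommends verifying the source and scanning files before installing them. As an extra illustration (not something the article itself describes), one basic self-check is to compare a downloaded file's SHA-256 digest with a checksum published by the download site. Below is a minimal Python sketch; the file name and expected hash are placeholders, not values from the article.

```python
import hashlib
from pathlib import Path


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with Path(path).open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Placeholder values: substitute the file you downloaded and the checksum
# published by the site you downloaded it from.
apk_path = "my_singing_monsters.apk"
expected = "<sha256 published by the download site>"

actual = sha256_of(apk_path)
print("Checksum OK" if actual == expected else f"Checksum mismatch: {actual}")
```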
-

How to sideload APKs

If you want to download My Singing Monsters from a source other than the Google Play Store, you will have to sideload an APK file. APK stands for Android Package Kit, and it is the file format Android uses to distribute and install apps. Sideloading means installing an app from a source other than the official one. These are the steps to follow to sideload an APK file:

1. Find a reputable site that offers APK files for My Singing Monsters, such as apkpure.com or apkmonk.com.
2. Download the APK file to your device, or transfer it from your computer via a USB cable or Bluetooth (an adb-based alternative is sketched after this list).
3. Enable the option to install apps from unknown sources on your device. You can do this by going to Settings > Security > Unknown Sources and turning it on.
4. Locate the APK file on your device using a file manager app or the Downloads folder.
5. Tap the APK file and follow the instructions to install it.
6. Open the app and enjoy!
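If the APK is already on a computer (step 2 above), one common alternative to copying it over and tapping it on the phone is to install it directly with Android's adb tool. This is not part of the article's own steps; it is a hedged sketch that assumes adb (from the Android platform-tools) is on your PATH, USB debugging is enabled on the device, and the APK file name is a placeholder.

```python
import shutil
import subprocess
import sys

# Placeholder file name: replace with the APK you actually downloaded.
APK_PATH = "my_singing_monsters.apk"


def install_apk(apk_path: str) -> None:
    """Install an APK on a USB-connected Android device through adb."""
    if shutil.which("adb") is None:
        sys.exit("adb not found; install the Android platform-tools first.")
    # -r reinstalls over an existing copy of the app while keeping its data.
    result = subprocess.run(
        ["adb", "install", "-r", apk_path], capture_output=True, text=True
    )
    print(result.stdout or result.stderr)


if __name__ == "__main__":
    install_apk(APK_PATH)
```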
-

Conclusion

Frequently asked questions

Here are some frequently asked questions and answers about downloading My Singing Monsters:

Q: How much space does My Singing Monsters take up on my device?

A: The size of My Singing Monsters varies by device and platform, but it is usually around 100 MB. However, it may require more space as you progress through the game and unlock more content.

Q: Can I play My Singing Monsters offline?

A: No, you need an Internet connection to play My Singing Monsters, since it is an online game that requires constant communication with the servers. You also need an Internet connection to access some features, such as social interactions, cloud storage, and updates.

Q: Can I play My Singing Monsters on multiple devices?

A: Yes, you can play My Singing Monsters on multiple devices with the same account. You just need to link your account to an email address or a Facebook account and then sign in with it on any device. You can also sync your progress between devices using the cloud storage feature.

Q: How do I update My Singing Monsters?

A: If you downloaded My Singing Monsters from the official sources, you will receive a notification when an update is available. You can then update it from the Play Store, the App Store, or Steam, depending on your device and platform. If you downloaded My Singing Monsters from another source, you will have to check the site where you got it and download the latest version of the APK file. You can then install it over the existing app without losing your data.

Q: How can I contact the developers of My Singing Monsters?
\ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/egg_link.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/egg_link.py deleted file mode 100644 index eb57ed1519f82adb79a3d2377e1f286df9d8ef6b..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_internal/utils/egg_link.py +++ /dev/null @@ -1,72 +0,0 @@ -import os -import re -import sys -from typing import List, Optional - -from pip._internal.locations import site_packages, user_site -from pip._internal.utils.virtualenv import ( - running_under_virtualenv, - virtualenv_no_global, -) - -__all__ = [ - "egg_link_path_from_sys_path", - "egg_link_path_from_location", -] - - -def _egg_link_name(raw_name: str) -> str: - """ - Convert a Name metadata value to a .egg-link name, by applying - the same substitution as pkg_resources's safe_name function. - Note: we cannot use canonicalize_name because it has a different logic. - """ - return re.sub("[^A-Za-z0-9.]+", "-", raw_name) + ".egg-link" - - -def egg_link_path_from_sys_path(raw_name: str) -> Optional[str]: - """ - Look for a .egg-link file for project name, by walking sys.path. - """ - egg_link_name = _egg_link_name(raw_name) - for path_item in sys.path: - egg_link = os.path.join(path_item, egg_link_name) - if os.path.isfile(egg_link): - return egg_link - return None - - -def egg_link_path_from_location(raw_name: str) -> Optional[str]: - """ - Return the path for the .egg-link file if it exists, otherwise, None. - - There's 3 scenarios: - 1) not in a virtualenv - try to find in site.USER_SITE, then site_packages - 2) in a no-global virtualenv - try to find in site_packages - 3) in a yes-global virtualenv - try to find in site_packages, then site.USER_SITE - (don't look in global location) - - For #1 and #3, there could be odd cases, where there's an egg-link in 2 - locations. - - This method will just return the first one found. 
- """ - sites: List[str] = [] - if running_under_virtualenv(): - sites.append(site_packages) - if not virtualenv_no_global() and user_site: - sites.append(user_site) - else: - if user_site: - sites.append(user_site) - sites.append(site_packages) - - egg_link_name = _egg_link_name(raw_name) - for site in sites: - egglink = os.path.join(site, egg_link_name) - if os.path.isfile(egglink): - return egglink - return None diff --git a/spaces/CVPR/LIVE/diffvg.cpp b/spaces/CVPR/LIVE/diffvg.cpp deleted file mode 100644 index 7346d24b758b135bdd402fdb67ea412f48419eb3..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/diffvg.cpp +++ /dev/null @@ -1,1792 +0,0 @@ -#include "diffvg.h" -#include "aabb.h" -#include "shape.h" -#include "sample_boundary.h" -#include "atomic.h" -#include "cdf.h" -#include "compute_distance.h" -#include "cuda_utils.h" -#include "edge_query.h" -#include "filter.h" -#include "matrix.h" -#include "parallel.h" -#include "pcg.h" -#include "ptr.h" -#include "scene.h" -#include "vector.h" -#include "winding_number.h" -#include "within_distance.h" -#include -#include -#include -#include -#include - -namespace py = pybind11; - -struct Command { - int shape_group_id; - int shape_id; - int point_id; // Only used by path -}; - -DEVICE -bool is_inside(const SceneData &scene_data, - int shape_group_id, - const Vector2f &pt, - EdgeQuery *edge_query) { - const ShapeGroup &shape_group = scene_data.shape_groups[shape_group_id]; - // pt is in canvas space, transform it to shape's local space - auto local_pt = xform_pt(shape_group.canvas_to_shape, pt); - const auto &bvh_nodes = scene_data.shape_groups_bvh_nodes[shape_group_id]; - const AABB &bbox = bvh_nodes[2 * shape_group.num_shapes - 2].box; - if (!inside(bbox, local_pt)) { - return false; - } - auto winding_number = 0; - // Traverse the shape group BVH - constexpr auto max_bvh_stack_size = 64; - int bvh_stack[max_bvh_stack_size]; - auto stack_size = 0; - bvh_stack[stack_size++] = 2 * shape_group.num_shapes - 2; - while (stack_size > 0) { - const BVHNode &node = bvh_nodes[bvh_stack[--stack_size]]; - if (node.child1 < 0) { - // leaf - auto shape_id = node.child0; - auto w = compute_winding_number( - scene_data.shapes[shape_id], scene_data.path_bvhs[shape_id], local_pt); - winding_number += w; - if (edge_query != nullptr) { - if (edge_query->shape_group_id == shape_group_id && - edge_query->shape_id == shape_id) { - if ((shape_group.use_even_odd_rule && abs(w) % 2 == 1) || - (!shape_group.use_even_odd_rule && w != 0)) { - edge_query->hit = true; - } - } - } - } else { - assert(node.child0 >= 0 && node.child1 >= 0); - const AABB &b0 = bvh_nodes[node.child0].box; - if (inside(b0, local_pt)) { - bvh_stack[stack_size++] = node.child0; - } - const AABB &b1 = bvh_nodes[node.child1].box; - if (inside(b1, local_pt)) { - bvh_stack[stack_size++] = node.child1; - } - assert(stack_size <= max_bvh_stack_size); - } - } - if (shape_group.use_even_odd_rule) { - return abs(winding_number) % 2 == 1; - } else { - return winding_number != 0; - } -} - -DEVICE void accumulate_boundary_gradient(const Shape &shape, - float contrib, - float t, - const Vector2f &normal, - const BoundaryData &boundary_data, - Shape &d_shape, - const Matrix3x3f &shape_to_canvas, - const Vector2f &local_boundary_pt, - Matrix3x3f &d_shape_to_canvas) { - assert(isfinite(contrib)); - assert(isfinite(normal)); - // According to Reynold transport theorem, - // the Jacobian of the boundary integral is dot(velocity, normal), - // where the velocity depends on the variable being 
differentiated with. - if (boundary_data.is_stroke) { - auto has_path_thickness = false; - if (shape.type == ShapeType::Path) { - const Path &path = *(const Path *)shape.ptr; - has_path_thickness = path.thickness != nullptr; - } - // differentiate stroke width: velocity is the same as normal - if (has_path_thickness) { - Path *d_p = (Path*)d_shape.ptr; - auto base_point_id = boundary_data.path.base_point_id; - auto point_id = boundary_data.path.point_id; - auto t = boundary_data.path.t; - const Path &path = *(const Path *)shape.ptr; - if (path.num_control_points[base_point_id] == 0) { - // Straight line - auto i0 = point_id; - auto i1 = (point_id + 1) % path.num_points; - // r = r0 + t * (r1 - r0) - atomic_add(&d_p->thickness[i0], (1 - t) * contrib); - atomic_add(&d_p->thickness[i1], ( t) * contrib); - } else if (path.num_control_points[base_point_id] == 1) { - // Quadratic Bezier curve - auto i0 = point_id; - auto i1 = point_id + 1; - auto i2 = (point_id + 2) % path.num_points; - // r = (1-t)^2r0 + 2(1-t)t r1 + t^2 r2 - atomic_add(&d_p->thickness[i0], square(1 - t) * contrib); - atomic_add(&d_p->thickness[i1], (2*(1-t)*t) * contrib); - atomic_add(&d_p->thickness[i2], (t*t) * contrib); - } else if (path.num_control_points[base_point_id] == 2) { - auto i0 = point_id; - auto i1 = point_id + 1; - auto i2 = point_id + 2; - auto i3 = (point_id + 3) % path.num_points; - // r = (1-t)^3r0 + 3*(1-t)^2tr1 + 3*(1-t)t^2r2 + t^3r3 - atomic_add(&d_p->thickness[i0], cubic(1 - t) * contrib); - atomic_add(&d_p->thickness[i1], 3 * square(1 - t) * t * contrib); - atomic_add(&d_p->thickness[i2], 3 * (1 - t) * t * t * contrib); - atomic_add(&d_p->thickness[i3], t * t * t * contrib); - } else { - assert(false); - } - } else { - atomic_add(&d_shape.stroke_width, contrib); - } - } - switch (shape.type) { - case ShapeType::Circle: { - Circle *d_p = (Circle*)d_shape.ptr; - // velocity for the center is (1, 0) for x and (0, 1) for y - atomic_add(&d_p->center[0], normal * contrib); - // velocity for the radius is the same as the normal - atomic_add(&d_p->radius, contrib); - break; - } case ShapeType::Ellipse: { - Ellipse *d_p = (Ellipse*)d_shape.ptr; - // velocity for the center is (1, 0) for x and (0, 1) for y - atomic_add(&d_p->center[0], normal * contrib); - // velocity for the radius: - // x = center.x + r.x * cos(2pi * t) - // y = center.y + r.y * sin(2pi * t) - // for r.x: (cos(2pi * t), 0) - // for r.y: (0, sin(2pi * t)) - atomic_add(&d_p->radius.x, cos(2 * float(M_PI) * t) * normal.x * contrib); - atomic_add(&d_p->radius.y, sin(2 * float(M_PI) * t) * normal.y * contrib); - break; - } case ShapeType::Path: { - Path *d_p = (Path*)d_shape.ptr; - auto base_point_id = boundary_data.path.base_point_id; - auto point_id = boundary_data.path.point_id; - auto t = boundary_data.path.t; - const Path &path = *(const Path *)shape.ptr; - if (path.num_control_points[base_point_id] == 0) { - // Straight line - auto i0 = point_id; - auto i1 = (point_id + 1) % path.num_points; - // pt = p0 + t * (p1 - p0) - // velocity for p0.x: (1 - t, 0) - // p0.y: ( 0, 1 - t) - // p1.x: ( t, 0) - // p1.y: ( 0, t) - atomic_add(&d_p->points[2 * i0 + 0], (1 - t) * normal.x * contrib); - atomic_add(&d_p->points[2 * i0 + 1], (1 - t) * normal.y * contrib); - atomic_add(&d_p->points[2 * i1 + 0], ( t) * normal.x * contrib); - atomic_add(&d_p->points[2 * i1 + 1], ( t) * normal.y * contrib); - } else if (path.num_control_points[base_point_id] == 1) { - // Quadratic Bezier curve - auto i0 = point_id; - auto i1 = point_id + 1; - auto i2 = (point_id + 2) 
% path.num_points; - // pt = (1-t)^2p0 + 2(1-t)t p1 + t^2 p2 - // velocity for p0.x: ((1-t)^2, 0) - // p0.y: ( 0, (1-t)^2) - // p1.x: (2(1-t)t, 0) - // p1.y: ( 0, 2(1-t)t) - // p1.x: ( t^2, 0) - // p1.y: ( 0, t^2) - atomic_add(&d_p->points[2 * i0 + 0], square(1 - t) * normal.x * contrib); - atomic_add(&d_p->points[2 * i0 + 1], square(1 - t) * normal.y * contrib); - atomic_add(&d_p->points[2 * i1 + 0], (2*(1-t)*t) * normal.x * contrib); - atomic_add(&d_p->points[2 * i1 + 1], (2*(1-t)*t) * normal.y * contrib); - atomic_add(&d_p->points[2 * i2 + 0], (t*t) * normal.x * contrib); - atomic_add(&d_p->points[2 * i2 + 1], (t*t) * normal.y * contrib); - } else if (path.num_control_points[base_point_id] == 2) { - auto i0 = point_id; - auto i1 = point_id + 1; - auto i2 = point_id + 2; - auto i3 = (point_id + 3) % path.num_points; - // pt = (1-t)^3p0 + 3*(1-t)^2tp1 + 3*(1-t)t^2p2 + t^3p3 - // velocity for p0.x: ( (1-t)^3, 0) - // p0.y: ( 0, (1-t)^3) - // p1.x: (3*(1-t)^2t, 0) - // p1.y: ( 0, 3*(1-t)^2t) - // p2.x: (3*(1-t)t^2, 0) - // p2.y: ( 0, 3*(1-t)t^2) - // p2.x: ( t^3, 0) - // p2.y: ( 0, t^3) - atomic_add(&d_p->points[2 * i0 + 0], cubic(1 - t) * normal.x * contrib); - atomic_add(&d_p->points[2 * i0 + 1], cubic(1 - t) * normal.y * contrib); - atomic_add(&d_p->points[2 * i1 + 0], 3 * square(1 - t) * t * normal.x * contrib); - atomic_add(&d_p->points[2 * i1 + 1], 3 * square(1 - t) * t * normal.y * contrib); - atomic_add(&d_p->points[2 * i2 + 0], 3 * (1 - t) * t * t * normal.x * contrib); - atomic_add(&d_p->points[2 * i2 + 1], 3 * (1 - t) * t * t * normal.y * contrib); - atomic_add(&d_p->points[2 * i3 + 0], t * t * t * normal.x * contrib); - atomic_add(&d_p->points[2 * i3 + 1], t * t * t * normal.y * contrib); - } else { - assert(false); - } - break; - } case ShapeType::Rect: { - Rect *d_p = (Rect*)d_shape.ptr; - // The velocity depends on the position of the boundary - if (normal == Vector2f{-1, 0}) { - // left - // velocity for p_min is (1, 0) for x and (0, 0) for y - atomic_add(&d_p->p_min.x, -contrib); - } else if (normal == Vector2f{1, 0}) { - // right - // velocity for p_max is (1, 0) for x and (0, 0) for y - atomic_add(&d_p->p_max.x, contrib); - } else if (normal == Vector2f{0, -1}) { - // top - // velocity for p_min is (0, 0) for x and (0, 1) for y - atomic_add(&d_p->p_min.y, -contrib); - } else if (normal == Vector2f{0, 1}) { - // bottom - // velocity for p_max is (0, 0) for x and (0, 1) for y - atomic_add(&d_p->p_max.y, contrib); - } else { - // incorrect normal assignment? 
- assert(false); - } - break; - } default: { - assert(false); - break; - } - } - // for shape_to_canvas we have the following relationship: - // boundary_pt = xform_pt(shape_to_canvas, local_pt) - // the velocity is the derivative of boundary_pt with respect to shape_to_canvas - // we can use reverse-mode AD to compute the dot product of the velocity and the Jacobian - // by passing the normal in d_xform_pt - auto d_shape_to_canvas_ = Matrix3x3f(); - auto d_local_boundary_pt = Vector2f{0, 0}; - d_xform_pt(shape_to_canvas, - local_boundary_pt, - normal * contrib, - d_shape_to_canvas_, - d_local_boundary_pt); - atomic_add(&d_shape_to_canvas(0, 0), d_shape_to_canvas_); -} - -DEVICE -Vector4f sample_color(const ColorType &color_type, - void *color, - const Vector2f &pt) { - switch (color_type) { - case ColorType::Constant: { - auto c = (const Constant*)color; - assert(isfinite(c->color)); - return c->color; - } case ColorType::LinearGradient: { - auto c = (const LinearGradient*)color; - // Project pt to (c->begin, c->end) - auto beg = c->begin; - auto end = c->end; - auto t = dot(pt - beg, end - beg) / max(dot(end - beg, end - beg), 1e-3f); - // Find the correponding stop: - if (t < c->stop_offsets[0]) { - return Vector4f{c->stop_colors[0], - c->stop_colors[1], - c->stop_colors[2], - c->stop_colors[3]}; - } - for (int i = 0; i < c->num_stops - 1; i++) { - auto offset_curr = c->stop_offsets[i]; - auto offset_next = c->stop_offsets[i + 1]; - assert(offset_next > offset_curr); - if (t >= offset_curr && t < offset_next) { - auto color_curr = Vector4f{ - c->stop_colors[4 * i + 0], - c->stop_colors[4 * i + 1], - c->stop_colors[4 * i + 2], - c->stop_colors[4 * i + 3]}; - auto color_next = Vector4f{ - c->stop_colors[4 * (i + 1) + 0], - c->stop_colors[4 * (i + 1) + 1], - c->stop_colors[4 * (i + 1) + 2], - c->stop_colors[4 * (i + 1) + 3]}; - auto tt = (t - offset_curr) / (offset_next - offset_curr); - assert(isfinite(tt)); - assert(isfinite(color_curr)); - assert(isfinite(color_next)); - return color_curr * (1 - tt) + color_next * tt; - } - } - return Vector4f{c->stop_colors[4 * (c->num_stops - 1) + 0], - c->stop_colors[4 * (c->num_stops - 1) + 1], - c->stop_colors[4 * (c->num_stops - 1) + 2], - c->stop_colors[4 * (c->num_stops - 1) + 3]}; - } case ColorType::RadialGradient: { - auto c = (const RadialGradient*)color; - // Distance from pt to center - auto offset = pt - c->center; - auto normalized_offset = offset / c->radius; - auto t = length(normalized_offset); - // Find the correponding stop: - if (t < c->stop_offsets[0]) { - return Vector4f{c->stop_colors[0], - c->stop_colors[1], - c->stop_colors[2], - c->stop_colors[3]}; - } - for (int i = 0; i < c->num_stops - 1; i++) { - auto offset_curr = c->stop_offsets[i]; - auto offset_next = c->stop_offsets[i + 1]; - assert(offset_next > offset_curr); - if (t >= offset_curr && t < offset_next) { - auto color_curr = Vector4f{ - c->stop_colors[4 * i + 0], - c->stop_colors[4 * i + 1], - c->stop_colors[4 * i + 2], - c->stop_colors[4 * i + 3]}; - auto color_next = Vector4f{ - c->stop_colors[4 * (i + 1) + 0], - c->stop_colors[4 * (i + 1) + 1], - c->stop_colors[4 * (i + 1) + 2], - c->stop_colors[4 * (i + 1) + 3]}; - auto tt = (t - offset_curr) / (offset_next - offset_curr); - assert(isfinite(tt)); - assert(isfinite(color_curr)); - assert(isfinite(color_next)); - return color_curr * (1 - tt) + color_next * tt; - } - } - return Vector4f{c->stop_colors[4 * (c->num_stops - 1) + 0], - c->stop_colors[4 * (c->num_stops - 1) + 1], - c->stop_colors[4 * (c->num_stops - 1) + 
2], - c->stop_colors[4 * (c->num_stops - 1) + 3]}; - } default: { - assert(false); - } - } - return Vector4f{}; -} - -DEVICE -void d_sample_color(const ColorType &color_type, - void *color_ptr, - const Vector2f &pt, - const Vector4f &d_color, - void *d_color_ptr, - float *d_translation) { - switch (color_type) { - case ColorType::Constant: { - auto d_c = (Constant*)d_color_ptr; - atomic_add(&d_c->color[0], d_color); - return; - } case ColorType::LinearGradient: { - auto c = (const LinearGradient*)color_ptr; - auto d_c = (LinearGradient*)d_color_ptr; - // Project pt to (c->begin, c->end) - auto beg = c->begin; - auto end = c->end; - auto t = dot(pt - beg, end - beg) / max(dot(end - beg, end - beg), 1e-3f); - // Find the correponding stop: - if (t < c->stop_offsets[0]) { - atomic_add(&d_c->stop_colors[0], d_color); - return; - } - for (int i = 0; i < c->num_stops - 1; i++) { - auto offset_curr = c->stop_offsets[i]; - auto offset_next = c->stop_offsets[i + 1]; - assert(offset_next > offset_curr); - if (t >= offset_curr && t < offset_next) { - auto color_curr = Vector4f{ - c->stop_colors[4 * i + 0], - c->stop_colors[4 * i + 1], - c->stop_colors[4 * i + 2], - c->stop_colors[4 * i + 3]}; - auto color_next = Vector4f{ - c->stop_colors[4 * (i + 1) + 0], - c->stop_colors[4 * (i + 1) + 1], - c->stop_colors[4 * (i + 1) + 2], - c->stop_colors[4 * (i + 1) + 3]}; - auto tt = (t - offset_curr) / (offset_next - offset_curr); - // return color_curr * (1 - tt) + color_next * tt; - auto d_color_curr = d_color * (1 - tt); - auto d_color_next = d_color * tt; - auto d_tt = sum(d_color * (color_next - color_curr)); - auto d_offset_next = -d_tt * tt / (offset_next - offset_curr); - auto d_offset_curr = d_tt * ((tt - 1.f) / (offset_next - offset_curr)); - auto d_t = d_tt / (offset_next - offset_curr); - assert(isfinite(d_tt)); - atomic_add(&d_c->stop_colors[4 * i], d_color_curr); - atomic_add(&d_c->stop_colors[4 * (i + 1)], d_color_next); - atomic_add(&d_c->stop_offsets[i], d_offset_curr); - atomic_add(&d_c->stop_offsets[i + 1], d_offset_next); - // auto t = dot(pt - beg, end - beg) / max(dot(end - beg, end - beg), 1e-6f); - // l = max(dot(end - beg, end - beg), 1e-3f) - // t = dot(pt - beg, end - beg) / l; - auto l = max(dot(end - beg, end - beg), 1e-3f); - auto d_beg = d_t * (-(pt - beg)-(end - beg)) / l; - auto d_end = d_t * (pt - beg) / l; - auto d_l = -d_t * t / l; - if (dot(end - beg, end - beg) > 1e-3f) { - d_beg += 2 * d_l * (beg - end); - d_end += 2 * d_l * (end - beg); - } - atomic_add(&d_c->begin[0], d_beg); - atomic_add(&d_c->end[0], d_end); - if (d_translation != nullptr) { - atomic_add(d_translation, (d_beg + d_end)); - } - return; - } - } - atomic_add(&d_c->stop_colors[4 * (c->num_stops - 1)], d_color); - return; - } case ColorType::RadialGradient: { - auto c = (const RadialGradient*)color_ptr; - auto d_c = (RadialGradient*)d_color_ptr; - // Distance from pt to center - auto offset = pt - c->center; - auto normalized_offset = offset / c->radius; - auto t = length(normalized_offset); - // Find the correponding stop: - if (t < c->stop_offsets[0]) { - atomic_add(&d_c->stop_colors[0], d_color); - return; - } - for (int i = 0; i < c->num_stops - 1; i++) { - auto offset_curr = c->stop_offsets[i]; - auto offset_next = c->stop_offsets[i + 1]; - assert(offset_next > offset_curr); - if (t >= offset_curr && t < offset_next) { - auto color_curr = Vector4f{ - c->stop_colors[4 * i + 0], - c->stop_colors[4 * i + 1], - c->stop_colors[4 * i + 2], - c->stop_colors[4 * i + 3]}; - auto color_next = Vector4f{ - 
c->stop_colors[4 * (i + 1) + 0], - c->stop_colors[4 * (i + 1) + 1], - c->stop_colors[4 * (i + 1) + 2], - c->stop_colors[4 * (i + 1) + 3]}; - auto tt = (t - offset_curr) / (offset_next - offset_curr); - assert(isfinite(tt)); - // return color_curr * (1 - tt) + color_next * tt; - auto d_color_curr = d_color * (1 - tt); - auto d_color_next = d_color * tt; - auto d_tt = sum(d_color * (color_next - color_curr)); - auto d_offset_next = -d_tt * tt / (offset_next - offset_curr); - auto d_offset_curr = d_tt * ((tt - 1.f) / (offset_next - offset_curr)); - auto d_t = d_tt / (offset_next - offset_curr); - assert(isfinite(d_t)); - atomic_add(&d_c->stop_colors[4 * i], d_color_curr); - atomic_add(&d_c->stop_colors[4 * (i + 1)], d_color_next); - atomic_add(&d_c->stop_offsets[i], d_offset_curr); - atomic_add(&d_c->stop_offsets[i + 1], d_offset_next); - // offset = pt - c->center - // normalized_offset = offset / c->radius - // t = length(normalized_offset) - auto d_normalized_offset = d_length(normalized_offset, d_t); - auto d_offset = d_normalized_offset / c->radius; - auto d_radius = -d_normalized_offset * offset / (c->radius * c->radius); - auto d_center = -d_offset; - atomic_add(&d_c->center[0], d_center); - atomic_add(&d_c->radius[0], d_radius); - if (d_translation != nullptr) { - atomic_add(d_translation, d_center); - } - } - } - atomic_add(&d_c->stop_colors[4 * (c->num_stops - 1)], d_color); - return; - } default: { - assert(false); - } - } -} - -struct Fragment { - Vector3f color; - float alpha; - int group_id; - bool is_stroke; -}; - -struct PrefilterFragment { - Vector3f color; - float alpha; - int group_id; - bool is_stroke; - int shape_id; - float distance; - Vector2f closest_pt; - ClosestPointPathInfo path_info; - bool within_distance; -}; - -DEVICE -Vector4f sample_color(const SceneData &scene, - const Vector4f *background_color, - const Vector2f &screen_pt, - const Vector4f *d_color = nullptr, - EdgeQuery *edge_query = nullptr, - Vector4f *d_background_color = nullptr, - float *d_translation = nullptr) { - if (edge_query != nullptr) { - edge_query->hit = false; - } - - // screen_pt is in screen space ([0, 1), [0, 1)), - // need to transform to canvas space - auto pt = screen_pt; - pt.x *= scene.canvas_width; - pt.y *= scene.canvas_height; - constexpr auto max_hit_shapes = 256; - constexpr auto max_bvh_stack_size = 64; - Fragment fragments[max_hit_shapes]; - int bvh_stack[max_bvh_stack_size]; - auto stack_size = 0; - auto num_fragments = 0; - bvh_stack[stack_size++] = 2 * scene.num_shape_groups - 2; - while (stack_size > 0) { - const BVHNode &node = scene.bvh_nodes[bvh_stack[--stack_size]]; - if (node.child1 < 0) { - // leaf - auto group_id = node.child0; - const ShapeGroup &shape_group = scene.shape_groups[group_id]; - if (shape_group.stroke_color != nullptr) { - if (within_distance(scene, group_id, pt, edge_query)) { - auto color_alpha = sample_color(shape_group.stroke_color_type, - shape_group.stroke_color, - pt); - Fragment f; - f.color = Vector3f{color_alpha[0], color_alpha[1], color_alpha[2]}; - f.alpha = color_alpha[3]; - f.group_id = group_id; - f.is_stroke = true; - assert(num_fragments < max_hit_shapes); - fragments[num_fragments++] = f; - } - } - if (shape_group.fill_color != nullptr) { - if (is_inside(scene, group_id, pt, edge_query)) { - auto color_alpha = sample_color(shape_group.fill_color_type, - shape_group.fill_color, - pt); - Fragment f; - f.color = Vector3f{color_alpha[0], color_alpha[1], color_alpha[2]}; - f.alpha = color_alpha[3]; - f.group_id = group_id; - f.is_stroke = 
false; - assert(num_fragments < max_hit_shapes); - fragments[num_fragments++] = f; - } - } - } else { - assert(node.child0 >= 0 && node.child1 >= 0); - const AABB &b0 = scene.bvh_nodes[node.child0].box; - if (inside(b0, pt, scene.bvh_nodes[node.child0].max_radius)) { - bvh_stack[stack_size++] = node.child0; - } - const AABB &b1 = scene.bvh_nodes[node.child1].box; - if (inside(b1, pt, scene.bvh_nodes[node.child1].max_radius)) { - bvh_stack[stack_size++] = node.child1; - } - assert(stack_size <= max_bvh_stack_size); - } - } - if (num_fragments <= 0) { - if (background_color != nullptr) { - if (d_background_color != nullptr) { - *d_background_color = *d_color; - } - return *background_color; - } - return Vector4f{0, 0, 0, 0}; - } - // Sort the fragments from back to front (i.e. increasing order of group id) - // https://github.com/frigaut/yorick-imutil/blob/master/insort.c#L37 - for (int i = 1; i < num_fragments; i++) { - auto j = i; - auto temp = fragments[j]; - while (j > 0 && fragments[j - 1].group_id > temp.group_id) { - fragments[j] = fragments[j - 1]; - j--; - } - fragments[j] = temp; - } - // Blend the color - Vector3f accum_color[max_hit_shapes]; - float accum_alpha[max_hit_shapes]; - // auto hit_opaque = false; - auto first_alpha = 0.f; - auto first_color = Vector3f{0, 0, 0}; - if (background_color != nullptr) { - first_alpha = background_color->w; - first_color = Vector3f{background_color->x, - background_color->y, - background_color->z}; - } - for (int i = 0; i < num_fragments; i++) { - const Fragment &fragment = fragments[i]; - auto new_color = fragment.color; - auto new_alpha = fragment.alpha; - auto prev_alpha = i > 0 ? accum_alpha[i - 1] : first_alpha; - auto prev_color = i > 0 ? accum_color[i - 1] : first_color; - if (edge_query != nullptr) { - // Do we hit the target shape? - if (new_alpha >= 1.f && edge_query->hit) { - // A fully opaque shape in front of the target occludes it - edge_query->hit = false; - } - if (edge_query->shape_group_id == fragment.group_id) { - edge_query->hit = true; - } - } - // prev_color is alpha premultiplied, don't need to multiply with - // prev_alpha - accum_color[i] = prev_color * (1 - new_alpha) + new_alpha * new_color; - accum_alpha[i] = prev_alpha * (1 - new_alpha) + new_alpha; - } - auto final_color = accum_color[num_fragments - 1]; - auto final_alpha = accum_alpha[num_fragments - 1]; - if (final_alpha > 1e-6f) { - final_color /= final_alpha; - } - assert(isfinite(final_color)); - assert(isfinite(final_alpha)); - if (d_color != nullptr) { - // Backward pass - auto d_final_color = Vector3f{(*d_color)[0], (*d_color)[1], (*d_color)[2]}; - auto d_final_alpha = (*d_color)[3]; - auto d_curr_color = d_final_color; - auto d_curr_alpha = d_final_alpha; - if (final_alpha > 1e-6f) { - // final_color = curr_color / final_alpha - d_curr_color = d_final_color / final_alpha; - d_curr_alpha -= sum(d_final_color * final_color) / final_alpha; - } - assert(isfinite(*d_color)); - assert(isfinite(d_curr_color)); - assert(isfinite(d_curr_alpha)); - for (int i = num_fragments - 1; i >= 0; i--) { - // color[n] = prev_color * (1 - new_alpha) + new_alpha * new_color; - // alpha[n] = prev_alpha * (1 - new_alpha) + new_alpha; - auto prev_alpha = i > 0 ? accum_alpha[i - 1] : first_alpha; - auto prev_color = i > 0 ? 
accum_color[i - 1] : first_color; - auto d_prev_alpha = d_curr_alpha * (1.f - fragments[i].alpha); - auto d_alpha_i = d_curr_alpha * (1.f - prev_alpha); - d_alpha_i += sum(d_curr_color * (fragments[i].color - prev_color)); - auto d_prev_color = d_curr_color * (1 - fragments[i].alpha); - auto d_color_i = d_curr_color * fragments[i].alpha; - auto group_id = fragments[i].group_id; - if (fragments[i].is_stroke) { - d_sample_color(scene.shape_groups[group_id].stroke_color_type, - scene.shape_groups[group_id].stroke_color, - pt, - Vector4f{d_color_i[0], d_color_i[1], d_color_i[2], d_alpha_i}, - scene.d_shape_groups[group_id].stroke_color, - d_translation); - } else { - d_sample_color(scene.shape_groups[group_id].fill_color_type, - scene.shape_groups[group_id].fill_color, - pt, - Vector4f{d_color_i[0], d_color_i[1], d_color_i[2], d_alpha_i}, - scene.d_shape_groups[group_id].fill_color, - d_translation); - } - d_curr_color = d_prev_color; - d_curr_alpha = d_prev_alpha; - } - if (d_background_color != nullptr) { - d_background_color->x += d_curr_color.x; - d_background_color->y += d_curr_color.y; - d_background_color->z += d_curr_color.z; - d_background_color->w += d_curr_alpha; - } - } - return Vector4f{final_color[0], final_color[1], final_color[2], final_alpha}; -} - -DEVICE -float sample_distance(const SceneData &scene, - const Vector2f &screen_pt, - float weight, - const float *d_dist = nullptr, - float *d_translation = nullptr) { - // screen_pt is in screen space ([0, 1), [0, 1)), - // need to transform to canvas space - auto pt = screen_pt; - pt.x *= scene.canvas_width; - pt.y *= scene.canvas_height; - // for each shape - auto min_group_id = -1; - auto min_distance = 0.f; - auto min_shape_id = -1; - auto closest_pt = Vector2f{0, 0}; - auto min_path_info = ClosestPointPathInfo{-1, -1, 0}; - for (int group_id = scene.num_shape_groups - 1; group_id >= 0; group_id--) { - auto s = -1; - auto p = Vector2f{0, 0}; - ClosestPointPathInfo local_path_info; - auto d = infinity(); - if (compute_distance(scene, group_id, pt, infinity(), &s, &p, &local_path_info, &d)) { - if (min_group_id == -1 || d < min_distance) { - min_distance = d; - min_group_id = group_id; - min_shape_id = s; - closest_pt = p; - min_path_info = local_path_info; - } - } - } - if (min_group_id == -1) { - return min_distance; - } - min_distance *= weight; - auto inside = false; - const ShapeGroup &shape_group = scene.shape_groups[min_group_id]; - if (shape_group.fill_color != nullptr) { - inside = is_inside(scene, - min_group_id, - pt, - nullptr); - if (inside) { - min_distance = -min_distance; - } - } - assert((min_group_id >= 0 && min_shape_id >= 0) || scene.num_shape_groups == 0); - if (d_dist != nullptr) { - auto d_abs_dist = inside ? -(*d_dist) : (*d_dist); - const ShapeGroup &shape_group = scene.shape_groups[min_group_id]; - const Shape &shape = scene.shapes[min_shape_id]; - ShapeGroup &d_shape_group = scene.d_shape_groups[min_group_id]; - Shape &d_shape = scene.d_shapes[min_shape_id]; - d_compute_distance(shape_group.canvas_to_shape, - shape_group.shape_to_canvas, - shape, - pt, - closest_pt, - min_path_info, - d_abs_dist, - d_shape_group.shape_to_canvas, - d_shape, - d_translation); - } - return min_distance; -} - -// Gather d_color from d_image inside the filter kernel, normalize by -// weight_image. 
-DEVICE -Vector4f gather_d_color(const Filter &filter, - const float *d_color_image, - const float *weight_image, - int width, - int height, - const Vector2f &pt) { - auto x = int(pt.x); - auto y = int(pt.y); - auto radius = filter.radius; - assert(radius > 0); - auto ri = (int)ceil(radius); - auto d_color = Vector4f{0, 0, 0, 0}; - for (int dy = -ri; dy <= ri; dy++) { - for (int dx = -ri; dx <= ri; dx++) { - auto xx = x + dx; - auto yy = y + dy; - if (xx >= 0 && xx < width && yy >= 0 && yy < height) { - auto xc = xx + 0.5f; - auto yc = yy + 0.5f; - auto filter_weight = - compute_filter_weight(filter, xc - pt.x, yc - pt.y); - // pixel = \sum weight * color / \sum weight - auto weight_sum = weight_image[yy * width + xx]; - if (weight_sum > 0) { - d_color += (filter_weight / weight_sum) * Vector4f{ - d_color_image[4 * (yy * width + xx) + 0], - d_color_image[4 * (yy * width + xx) + 1], - d_color_image[4 * (yy * width + xx) + 2], - d_color_image[4 * (yy * width + xx) + 3], - }; - } - } - } - } - return d_color; -} - -DEVICE -float smoothstep(float d) { - auto t = clamp((d + 1.f) / 2.f, 0.f, 1.f); - return t * t * (3 - 2 * t); -} - -DEVICE -float d_smoothstep(float d, float d_ret) { - if (d < -1.f || d > 1.f) { - return 0.f; - } - auto t = (d + 1.f) / 2.f; - // ret = t * t * (3 - 2 * t) - // = 3 * t * t - 2 * t * t * t - auto d_t = d_ret * (6 * t - 6 * t * t); - return d_t / 2.f; -} - -DEVICE -Vector4f sample_color_prefiltered(const SceneData &scene, - const Vector4f *background_color, - const Vector2f &screen_pt, - const Vector4f *d_color = nullptr, - Vector4f *d_background_color = nullptr, - float *d_translation = nullptr) { - // screen_pt is in screen space ([0, 1), [0, 1)), - // need to transform to canvas space - auto pt = screen_pt; - pt.x *= scene.canvas_width; - pt.y *= scene.canvas_height; - constexpr auto max_hit_shapes = 64; - constexpr auto max_bvh_stack_size = 64; - PrefilterFragment fragments[max_hit_shapes]; - int bvh_stack[max_bvh_stack_size]; - auto stack_size = 0; - auto num_fragments = 0; - bvh_stack[stack_size++] = 2 * scene.num_shape_groups - 2; - while (stack_size > 0) { - const BVHNode &node = scene.bvh_nodes[bvh_stack[--stack_size]]; - if (node.child1 < 0) { - // leaf - auto group_id = node.child0; - const ShapeGroup &shape_group = scene.shape_groups[group_id]; - if (shape_group.stroke_color != nullptr) { - auto min_shape_id = -1; - auto closest_pt = Vector2f{0, 0}; - auto local_path_info = ClosestPointPathInfo{-1, -1, 0}; - auto d = infinity(); - compute_distance(scene, group_id, pt, infinity(), - &min_shape_id, &closest_pt, &local_path_info, &d); - assert(min_shape_id != -1); - const auto &shape = scene.shapes[min_shape_id]; - auto w = smoothstep(fabs(d) + shape.stroke_width) - - smoothstep(fabs(d) - shape.stroke_width); - if (w > 0) { - auto color_alpha = sample_color(shape_group.stroke_color_type, - shape_group.stroke_color, - pt); - color_alpha[3] *= w; - - PrefilterFragment f; - f.color = Vector3f{color_alpha[0], color_alpha[1], color_alpha[2]}; - f.alpha = color_alpha[3]; - f.group_id = group_id; - f.shape_id = min_shape_id; - f.distance = d; - f.closest_pt = closest_pt; - f.is_stroke = true; - f.path_info = local_path_info; - f.within_distance = true; - assert(num_fragments < max_hit_shapes); - fragments[num_fragments++] = f; - } - } - if (shape_group.fill_color != nullptr) { - auto min_shape_id = -1; - auto closest_pt = Vector2f{0, 0}; - auto local_path_info = ClosestPointPathInfo{-1, -1, 0}; - auto d = infinity(); - auto found = compute_distance(scene, - 
group_id, - pt, - 1.f, - &min_shape_id, - &closest_pt, - &local_path_info, - &d); - auto inside = is_inside(scene, group_id, pt, nullptr); - if (found || inside) { - if (!inside) { - d = -d; - } - auto w = smoothstep(d); - if (w > 0) { - auto color_alpha = sample_color(shape_group.fill_color_type, - shape_group.fill_color, - pt); - color_alpha[3] *= w; - - PrefilterFragment f; - f.color = Vector3f{color_alpha[0], color_alpha[1], color_alpha[2]}; - f.alpha = color_alpha[3]; - f.group_id = group_id; - f.shape_id = min_shape_id; - f.distance = d; - f.closest_pt = closest_pt; - f.is_stroke = false; - f.path_info = local_path_info; - f.within_distance = found; - assert(num_fragments < max_hit_shapes); - fragments[num_fragments++] = f; - } - } - } - } else { - assert(node.child0 >= 0 && node.child1 >= 0); - const AABB &b0 = scene.bvh_nodes[node.child0].box; - if (inside(b0, pt, scene.bvh_nodes[node.child0].max_radius)) { - bvh_stack[stack_size++] = node.child0; - } - const AABB &b1 = scene.bvh_nodes[node.child1].box; - if (inside(b1, pt, scene.bvh_nodes[node.child1].max_radius)) { - bvh_stack[stack_size++] = node.child1; - } - assert(stack_size <= max_bvh_stack_size); - } - } - if (num_fragments <= 0) { - if (background_color != nullptr) { - if (d_background_color != nullptr) { - *d_background_color = *d_color; - } - return *background_color; - } - return Vector4f{0, 0, 0, 0}; - } - // Sort the fragments from back to front (i.e. increasing order of group id) - // https://github.com/frigaut/yorick-imutil/blob/master/insort.c#L37 - for (int i = 1; i < num_fragments; i++) { - auto j = i; - auto temp = fragments[j]; - while (j > 0 && fragments[j - 1].group_id > temp.group_id) { - fragments[j] = fragments[j - 1]; - j--; - } - fragments[j] = temp; - } - // Blend the color - Vector3f accum_color[max_hit_shapes]; - float accum_alpha[max_hit_shapes]; - auto first_alpha = 0.f; - auto first_color = Vector3f{0, 0, 0}; - if (background_color != nullptr) { - first_alpha = background_color->w; - first_color = Vector3f{background_color->x, - background_color->y, - background_color->z}; - } - for (int i = 0; i < num_fragments; i++) { - const PrefilterFragment &fragment = fragments[i]; - auto new_color = fragment.color; - auto new_alpha = fragment.alpha; - auto prev_alpha = i > 0 ? accum_alpha[i - 1] : first_alpha; - auto prev_color = i > 0 ? 
accum_color[i - 1] : first_color; - // prev_color is alpha premultiplied, don't need to multiply with - // prev_alpha - accum_color[i] = prev_color * (1 - new_alpha) + new_alpha * new_color; - accum_alpha[i] = prev_alpha * (1 - new_alpha) + new_alpha; - } - auto final_color = accum_color[num_fragments - 1]; - auto final_alpha = accum_alpha[num_fragments - 1]; - if (final_alpha > 1e-6f) { - final_color /= final_alpha; - } - assert(isfinite(final_color)); - assert(isfinite(final_alpha)); - if (d_color != nullptr) { - // Backward pass - auto d_final_color = Vector3f{(*d_color)[0], (*d_color)[1], (*d_color)[2]}; - auto d_final_alpha = (*d_color)[3]; - auto d_curr_color = d_final_color; - auto d_curr_alpha = d_final_alpha; - if (final_alpha > 1e-6f) { - // final_color = curr_color / final_alpha - d_curr_color = d_final_color / final_alpha; - d_curr_alpha -= sum(d_final_color * final_color) / final_alpha; - } - assert(isfinite(*d_color)); - assert(isfinite(d_curr_color)); - assert(isfinite(d_curr_alpha)); - for (int i = num_fragments - 1; i >= 0; i--) { - // color[n] = prev_color * (1 - new_alpha) + new_alpha * new_color; - // alpha[n] = prev_alpha * (1 - new_alpha) + new_alpha; - auto prev_alpha = i > 0 ? accum_alpha[i - 1] : first_alpha; - auto prev_color = i > 0 ? accum_color[i - 1] : first_color; - auto d_prev_alpha = d_curr_alpha * (1.f - fragments[i].alpha); - auto d_alpha_i = d_curr_alpha * (1.f - prev_alpha); - d_alpha_i += sum(d_curr_color * (fragments[i].color - prev_color)); - auto d_prev_color = d_curr_color * (1 - fragments[i].alpha); - auto d_color_i = d_curr_color * fragments[i].alpha; - auto group_id = fragments[i].group_id; - if (fragments[i].is_stroke) { - const auto &shape = scene.shapes[fragments[i].shape_id]; - auto d = fragments[i].distance; - auto abs_d_plus_width = fabs(d) + shape.stroke_width; - auto abs_d_minus_width = fabs(d) - shape.stroke_width; - auto w = smoothstep(abs_d_plus_width) - - smoothstep(abs_d_minus_width); - if (w != 0) { - auto d_w = w > 0 ? (fragments[i].alpha / w) * d_alpha_i : 0.f; - d_alpha_i *= w; - - // Backprop to color - d_sample_color(scene.shape_groups[group_id].stroke_color_type, - scene.shape_groups[group_id].stroke_color, - pt, - Vector4f{d_color_i[0], d_color_i[1], d_color_i[2], d_alpha_i}, - scene.d_shape_groups[group_id].stroke_color, - d_translation); - - auto d_abs_d_plus_width = d_smoothstep(abs_d_plus_width, d_w); - auto d_abs_d_minus_width = -d_smoothstep(abs_d_minus_width, d_w); - - auto d_d = d_abs_d_plus_width + d_abs_d_minus_width; - if (d < 0) { - d_d = -d_d; - } - auto d_stroke_width = d_abs_d_plus_width - d_abs_d_minus_width; - - const auto &shape_group = scene.shape_groups[group_id]; - ShapeGroup &d_shape_group = scene.d_shape_groups[group_id]; - Shape &d_shape = scene.d_shapes[fragments[i].shape_id]; - if (fabs(d_d) > 1e-10f) { - d_compute_distance(shape_group.canvas_to_shape, - shape_group.shape_to_canvas, - shape, - pt, - fragments[i].closest_pt, - fragments[i].path_info, - d_d, - d_shape_group.shape_to_canvas, - d_shape, - d_translation); - } - atomic_add(&d_shape.stroke_width, d_stroke_width); - } - } else { - const auto &shape = scene.shapes[fragments[i].shape_id]; - auto d = fragments[i].distance; - auto w = smoothstep(d); - if (w != 0) { - // color_alpha[3] = color_alpha[3] * w; - auto d_w = w > 0 ? 
(fragments[i].alpha / w) * d_alpha_i : 0.f; - d_alpha_i *= w; - - d_sample_color(scene.shape_groups[group_id].fill_color_type, - scene.shape_groups[group_id].fill_color, - pt, - Vector4f{d_color_i[0], d_color_i[1], d_color_i[2], d_alpha_i}, - scene.d_shape_groups[group_id].fill_color, - d_translation); - - // w = smoothstep(d) - auto d_d = d_smoothstep(d, d_w); - if (d < 0) { - d_d = -d_d; - } - - const auto &shape_group = scene.shape_groups[group_id]; - ShapeGroup &d_shape_group = scene.d_shape_groups[group_id]; - Shape &d_shape = scene.d_shapes[fragments[i].shape_id]; - if (fabs(d_d) > 1e-10f && fragments[i].within_distance) { - d_compute_distance(shape_group.canvas_to_shape, - shape_group.shape_to_canvas, - shape, - pt, - fragments[i].closest_pt, - fragments[i].path_info, - d_d, - d_shape_group.shape_to_canvas, - d_shape, - d_translation); - } - } - } - d_curr_color = d_prev_color; - d_curr_alpha = d_prev_alpha; - } - if (d_background_color != nullptr) { - d_background_color->x += d_curr_color.x; - d_background_color->y += d_curr_color.y; - d_background_color->z += d_curr_color.z; - d_background_color->w += d_curr_alpha; - } - } - return Vector4f{final_color[0], final_color[1], final_color[2], final_alpha}; -} - -struct weight_kernel { - DEVICE void operator()(int idx) { - auto rng_state = init_pcg32(idx, seed); - // height * width * num_samples_y * num_samples_x - auto sx = idx % num_samples_x; - auto sy = (idx / num_samples_x) % num_samples_y; - auto x = (idx / (num_samples_x * num_samples_y)) % width; - auto y = (idx / (num_samples_x * num_samples_y * width)); - assert(y < height); - auto rx = next_pcg32_float(&rng_state); - auto ry = next_pcg32_float(&rng_state); - if (use_prefiltering) { - rx = ry = 0.5f; - } - auto pt = Vector2f{x + ((float)sx + rx) / num_samples_x, - y + ((float)sy + ry) / num_samples_y}; - auto radius = scene.filter->radius; - assert(radius >= 0); - auto ri = (int)ceil(radius); - for (int dy = -ri; dy <= ri; dy++) { - for (int dx = -ri; dx <= ri; dx++) { - auto xx = x + dx; - auto yy = y + dy; - if (xx >= 0 && xx < width && yy >= 0 && yy < height) { - auto xc = xx + 0.5f; - auto yc = yy + 0.5f; - auto filter_weight = compute_filter_weight(*scene.filter, - xc - pt.x, - yc - pt.y); - atomic_add(weight_image[yy * width + xx], filter_weight); - } - } - } - } - - SceneData scene; - float *weight_image; - int width; - int height; - int num_samples_x; - int num_samples_y; - uint64_t seed; - bool use_prefiltering; -}; - -// We use a "mega kernel" for rendering -struct render_kernel { - DEVICE void operator()(int idx) { - // height * width * num_samples_y * num_samples_x - auto pt = Vector2f{0, 0}; - auto x = 0; - auto y = 0; - if (eval_positions == nullptr) { - auto rng_state = init_pcg32(idx, seed); - auto sx = idx % num_samples_x; - auto sy = (idx / num_samples_x) % num_samples_y; - x = (idx / (num_samples_x * num_samples_y)) % width; - y = (idx / (num_samples_x * num_samples_y * width)); - assert(x < width && y < height); - auto rx = next_pcg32_float(&rng_state); - auto ry = next_pcg32_float(&rng_state); - if (use_prefiltering) { - rx = ry = 0.5f; - } - pt = Vector2f{x + ((float)sx + rx) / num_samples_x, - y + ((float)sy + ry) / num_samples_y}; - } else { - pt = Vector2f{eval_positions[2 * idx], - eval_positions[2 * idx + 1]}; - x = int(pt.x); - y = int(pt.y); - } - - // normalize pt to [0, 1] - auto npt = pt; - npt.x /= width; - npt.y /= height; - auto num_samples = num_samples_x * num_samples_y; - if (render_image != nullptr || d_render_image != nullptr) { - 
Vector4f d_color = Vector4f{0, 0, 0, 0}; - if (d_render_image != nullptr) { - // Gather d_color from d_render_image inside the filter kernel - // normalize using weight_image - d_color = gather_d_color(*scene.filter, - d_render_image, - weight_image, - width, - height, - pt); - } - auto color = Vector4f{0, 0, 0, 0}; - if (use_prefiltering) { - color = sample_color_prefiltered(scene, - background_image != nullptr ? (const Vector4f*)&background_image[4 * ((y * width) + x)] : nullptr, - npt, - d_render_image != nullptr ? &d_color : nullptr, - d_background_image != nullptr ? (Vector4f*)&d_background_image[4 * ((y * width) + x)] : nullptr, - d_translation != nullptr ? &d_translation[2 * (y * width + x)] : nullptr); - } else { - color = sample_color(scene, - background_image != nullptr ? (const Vector4f*)&background_image[4 * ((y * width) + x)] : nullptr, - npt, - d_render_image != nullptr ? &d_color : nullptr, - nullptr, - d_background_image != nullptr ? (Vector4f*)&d_background_image[4 * ((y * width) + x)] : nullptr, - d_translation != nullptr ? &d_translation[2 * (y * width + x)] : nullptr); - } - assert(isfinite(color)); - // Splat color onto render_image - auto radius = scene.filter->radius; - assert(radius >= 0); - auto ri = (int)ceil(radius); - for (int dy = -ri; dy <= ri; dy++) { - for (int dx = -ri; dx <= ri; dx++) { - auto xx = x + dx; - auto yy = y + dy; - if (xx >= 0 && xx < width && yy >= 0 && yy < height && - weight_image[yy * width + xx] > 0) { - auto weight_sum = weight_image[yy * width + xx]; - auto xc = xx + 0.5f; - auto yc = yy + 0.5f; - auto filter_weight = compute_filter_weight(*scene.filter, - xc - pt.x, - yc - pt.y); - auto weighted_color = filter_weight * color / weight_sum; - if (render_image != nullptr) { - atomic_add(render_image[4 * (yy * width + xx) + 0], - weighted_color[0]); - atomic_add(render_image[4 * (yy * width + xx) + 1], - weighted_color[1]); - atomic_add(render_image[4 * (yy * width + xx) + 2], - weighted_color[2]); - atomic_add(render_image[4 * (yy * width + xx) + 3], - weighted_color[3]); - } - if (d_render_image != nullptr) { - // Backprop to filter_weight - // pixel = \sum weight * color / \sum weight - auto d_pixel = Vector4f{ - d_render_image[4 * (yy * width + xx) + 0], - d_render_image[4 * (yy * width + xx) + 1], - d_render_image[4 * (yy * width + xx) + 2], - d_render_image[4 * (yy * width + xx) + 3], - }; - auto d_weight = - (dot(d_pixel, color) * weight_sum - - filter_weight * dot(d_pixel, color) * (weight_sum - filter_weight)) / - square(weight_sum); - d_compute_filter_weight(*scene.filter, - xc - pt.x, - yc - pt.y, - d_weight, - scene.d_filter); - } - } - } - } - } - if (sdf_image != nullptr || d_sdf_image != nullptr) { - float d_dist = 0.f; - if (d_sdf_image != nullptr) { - if (eval_positions == nullptr) { - d_dist = d_sdf_image[y * width + x]; - } else { - d_dist = d_sdf_image[idx]; - } - } - auto weight = eval_positions == nullptr ? 1.f / num_samples : 1.f; - auto dist = sample_distance(scene, npt, weight, - d_sdf_image != nullptr ? &d_dist : nullptr, - d_translation != nullptr ? 
&d_translation[2 * (y * width + x)] : nullptr); - if (sdf_image != nullptr) { - if (eval_positions == nullptr) { - atomic_add(sdf_image[y * width + x], dist); - } else { - atomic_add(sdf_image[idx], dist); - } - } - } - } - - SceneData scene; - float *background_image; - float *render_image; - float *weight_image; - float *sdf_image; - float *d_background_image; - float *d_render_image; - float *d_sdf_image; - float *d_translation; - int width; - int height; - int num_samples_x; - int num_samples_y; - uint64_t seed; - bool use_prefiltering; - float *eval_positions; -}; - -struct BoundarySample { - Vector2f pt; - Vector2f local_pt; - Vector2f normal; - int shape_group_id; - int shape_id; - float t; - BoundaryData data; - float pdf; -}; - -struct sample_boundary_kernel { - DEVICE void operator()(int idx) { - boundary_samples[idx].pt = Vector2f{0, 0}; - boundary_samples[idx].shape_id = -1; - boundary_ids[idx] = idx; - morton_codes[idx] = 0; - - auto rng_state = init_pcg32(idx, seed); - auto u = next_pcg32_float(&rng_state); - // Sample a shape - auto sample_id = sample(scene.sample_shapes_cdf, - scene.num_total_shapes, - u); - assert(sample_id >= 0 && sample_id < scene.num_total_shapes); - auto shape_id = scene.sample_shape_id[sample_id]; - assert(shape_id >= 0 && shape_id < scene.num_shapes); - auto shape_group_id = scene.sample_group_id[sample_id]; - assert(shape_group_id >= 0 && shape_group_id < scene.num_shape_groups); - auto shape_pmf = scene.sample_shapes_pmf[shape_id]; - if (shape_pmf <= 0) { - return; - } - // Sample a point on the boundary of the shape - auto boundary_pdf = 0.f; - auto normal = Vector2f{0, 0}; - auto t = next_pcg32_float(&rng_state); - BoundaryData boundary_data; - const ShapeGroup &shape_group = scene.shape_groups[shape_group_id]; - auto local_boundary_pt = sample_boundary( - scene, shape_group_id, shape_id, - t, normal, boundary_pdf, boundary_data); - if (boundary_pdf <= 0) { - return; - } - - // local_boundary_pt & normal are in shape's local space, - // transform them to canvas space - auto boundary_pt = xform_pt(shape_group.shape_to_canvas, local_boundary_pt); - normal = xform_normal(shape_group.canvas_to_shape, normal); - // Normalize boundary_pt to [0, 1) - boundary_pt.x /= scene.canvas_width; - boundary_pt.y /= scene.canvas_height; - - boundary_samples[idx].pt = boundary_pt; - boundary_samples[idx].local_pt = local_boundary_pt; - boundary_samples[idx].normal = normal; - boundary_samples[idx].shape_group_id = shape_group_id; - boundary_samples[idx].shape_id = shape_id; - boundary_samples[idx].t = t; - boundary_samples[idx].data = boundary_data; - boundary_samples[idx].pdf = shape_pmf * boundary_pdf; - TVector2 p_i{boundary_pt.x * 1023, boundary_pt.y * 1023}; - morton_codes[idx] = (expand_bits(p_i.x) << 1u) | - (expand_bits(p_i.y) << 0u); - } - - SceneData scene; - uint64_t seed; - BoundarySample *boundary_samples; - int *boundary_ids; - uint32_t *morton_codes; -}; - -struct render_edge_kernel { - DEVICE void operator()(int idx) { - auto bid = boundary_ids[idx]; - if (boundary_samples[bid].shape_id == -1) { - return; - } - auto boundary_pt = boundary_samples[bid].pt; - auto local_boundary_pt = boundary_samples[bid].local_pt; - auto normal = boundary_samples[bid].normal; - auto shape_group_id = boundary_samples[bid].shape_group_id; - auto shape_id = boundary_samples[bid].shape_id; - auto t = boundary_samples[bid].t; - auto boundary_data = boundary_samples[bid].data; - auto pdf = boundary_samples[bid].pdf; - - const ShapeGroup &shape_group = 
scene.shape_groups[shape_group_id]; - - auto bx = int(boundary_pt.x * width); - auto by = int(boundary_pt.y * height); - if (bx < 0 || bx >= width || by < 0 || by >= height) { - return; - } - - // Sample the two sides of the boundary - auto inside_query = EdgeQuery{shape_group_id, shape_id, false}; - auto outside_query = EdgeQuery{shape_group_id, shape_id, false}; - auto color_inside = sample_color(scene, - background_image != nullptr ? (const Vector4f *)&background_image[4 * ((by * width) + bx)] : nullptr, - boundary_pt - 1e-4f * normal, - nullptr, &inside_query); - auto color_outside = sample_color(scene, - background_image != nullptr ? (const Vector4f *)&background_image[4 * ((by * width) + bx)] : nullptr, - boundary_pt + 1e-4f * normal, - nullptr, &outside_query); - if (!inside_query.hit && !outside_query.hit) { - // occluded - return; - } - if (!inside_query.hit) { - normal = -normal; - swap_(inside_query, outside_query); - swap_(color_inside, color_outside); - } - // Boundary point in screen space - auto sboundary_pt = boundary_pt; - sboundary_pt.x *= width; - sboundary_pt.y *= height; - auto d_color = gather_d_color(*scene.filter, - d_render_image, - weight_image, - width, - height, - sboundary_pt); - // Normalization factor - d_color /= float(scene.canvas_width * scene.canvas_height); - - assert(isfinite(d_color)); - assert(isfinite(pdf) && pdf > 0); - auto contrib = dot(color_inside - color_outside, d_color) / pdf; - ShapeGroup &d_shape_group = scene.d_shape_groups[shape_group_id]; - accumulate_boundary_gradient(scene.shapes[shape_id], - contrib, t, normal, boundary_data, scene.d_shapes[shape_id], - shape_group.shape_to_canvas, local_boundary_pt, d_shape_group.shape_to_canvas); - // Don't need to backprop to filter weights: - // \int f'(x) g(x) dx doesn't contain discontinuities - // if f is continuous, even if g is discontinuous - if (d_translation != nullptr) { - // According to Reynold transport theorem, - // the Jacobian of the boundary integral is dot(velocity, normal) - // The velocity of the object translating x is (1, 0) - // The velocity of the object translating y is (0, 1) - atomic_add(&d_translation[2 * (by * width + bx) + 0], normal.x * contrib); - atomic_add(&d_translation[2 * (by * width + bx) + 1], normal.y * contrib); - } - } - - SceneData scene; - const float *background_image; - const BoundarySample *boundary_samples; - const int *boundary_ids; - float *weight_image; - float *d_render_image; - float *d_translation; - int width; - int height; - int num_samples_x; - int num_samples_y; -}; - -void render(std::shared_ptr scene, - ptr background_image, - ptr render_image, - ptr render_sdf, - int width, - int height, - int num_samples_x, - int num_samples_y, - uint64_t seed, - ptr d_background_image, - ptr d_render_image, - ptr d_render_sdf, - ptr d_translation, - bool use_prefiltering, - ptr eval_positions, - int num_eval_positions) { -#ifdef __NVCC__ - int old_device_id = -1; - if (scene->use_gpu) { - checkCuda(cudaGetDevice(&old_device_id)); - if (scene->gpu_index != -1) { - checkCuda(cudaSetDevice(scene->gpu_index)); - } - } -#endif - parallel_init(); - - float *weight_image = nullptr; - // Allocate and zero the weight image - if (scene->use_gpu) { -#ifdef __CUDACC__ - if (eval_positions.get() == nullptr) { - checkCuda(cudaMallocManaged(&weight_image, width * height * sizeof(float))); - cudaMemset(weight_image, 0, width * height * sizeof(float)); - } -#else - assert(false); -#endif - } else { - if (eval_positions.get() == nullptr) { - weight_image = 
(float*)malloc(width * height * sizeof(float)); - memset(weight_image, 0, width * height * sizeof(float)); - } - } - - if (render_image.get() != nullptr || d_render_image.get() != nullptr || - render_sdf.get() != nullptr || d_render_sdf.get() != nullptr) { - if (weight_image != nullptr) { - parallel_for(weight_kernel{ - get_scene_data(*scene.get()), - weight_image, - width, - height, - num_samples_x, - num_samples_y, - seed - }, width * height * num_samples_x * num_samples_y, scene->use_gpu); - } - - auto num_samples = eval_positions.get() == nullptr ? - width * height * num_samples_x * num_samples_y : num_eval_positions; - parallel_for(render_kernel{ - get_scene_data(*scene.get()), - background_image.get(), - render_image.get(), - weight_image, - render_sdf.get(), - d_background_image.get(), - d_render_image.get(), - d_render_sdf.get(), - d_translation.get(), - width, - height, - num_samples_x, - num_samples_y, - seed, - use_prefiltering, - eval_positions.get() - }, num_samples, scene->use_gpu); - } - - // Boundary sampling - if (!use_prefiltering && d_render_image.get() != nullptr) { - auto num_samples = width * height * num_samples_x * num_samples_y; - BoundarySample *boundary_samples = nullptr; - int *boundary_ids = nullptr; // for sorting - uint32_t *morton_codes = nullptr; // for sorting - // Allocate boundary samples - if (scene->use_gpu) { -#ifdef __CUDACC__ - checkCuda(cudaMallocManaged(&boundary_samples, - num_samples * sizeof(BoundarySample))); - checkCuda(cudaMallocManaged(&boundary_ids, - num_samples * sizeof(int))); - checkCuda(cudaMallocManaged(&morton_codes, - num_samples * sizeof(uint32_t))); -#else - assert(false); - #endif - } else { - boundary_samples = (BoundarySample*)malloc( - num_samples * sizeof(BoundarySample)); - boundary_ids = (int*)malloc( - num_samples * sizeof(int)); - morton_codes = (uint32_t*)malloc( - num_samples * sizeof(uint32_t)); - } - - // Edge sampling - // We sort the boundary samples for better thread coherency - parallel_for(sample_boundary_kernel{ - get_scene_data(*scene.get()), - seed, - boundary_samples, - boundary_ids, - morton_codes - }, num_samples, scene->use_gpu); - if (scene->use_gpu) { - thrust::sort_by_key(thrust::device, morton_codes, morton_codes + num_samples, boundary_ids); - } else { - // Don't need to sort for CPU, we are not using SIMD hardware anyway. 
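            // (Context for the sort above: Morton codes order boundary samples by spatial
            // locality, so on the GPU adjacent threads tend to splat gradients into nearby
            // pixels and the atomic adds in render_edge_kernel stay memory-coherent. The
            // CPU path can safely skip the sort, as noted here.)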
- // thrust::sort_by_key(thrust::host, morton_codes, morton_codes + num_samples, boundary_ids); - } - parallel_for(render_edge_kernel{ - get_scene_data(*scene.get()), - background_image.get(), - boundary_samples, - boundary_ids, - weight_image, - d_render_image.get(), - d_translation.get(), - width, - height, - num_samples_x, - num_samples_y - }, num_samples, scene->use_gpu); - if (scene->use_gpu) { -#ifdef __CUDACC__ - checkCuda(cudaFree(boundary_samples)); - checkCuda(cudaFree(boundary_ids)); - checkCuda(cudaFree(morton_codes)); -#else - assert(false); -#endif - } else { - free(boundary_samples); - free(boundary_ids); - free(morton_codes); - } - } - - // Clean up weight image - if (scene->use_gpu) { -#ifdef __CUDACC__ - checkCuda(cudaFree(weight_image)); -#else - assert(false); -#endif - } else { - free(weight_image); - } - - if (scene->use_gpu) { - cuda_synchronize(); - } - - parallel_cleanup(); -#ifdef __NVCC__ - if (old_device_id != -1) { - checkCuda(cudaSetDevice(old_device_id)); - } -#endif -} - -PYBIND11_MODULE(diffvg, m) { - m.doc() = "Differential Vector Graphics"; - - py::class_>(m, "void_ptr") - .def(py::init()) - .def("as_size_t", &ptr::as_size_t); - py::class_>(m, "float_ptr") - .def(py::init()); - py::class_>(m, "int_ptr") - .def(py::init()); - - py::class_(m, "Vector2f") - .def(py::init()) - .def_readwrite("x", &Vector2f::x) - .def_readwrite("y", &Vector2f::y); - - py::class_(m, "Vector3f") - .def(py::init()) - .def_readwrite("x", &Vector3f::x) - .def_readwrite("y", &Vector3f::y) - .def_readwrite("z", &Vector3f::z); - - py::class_(m, "Vector4f") - .def(py::init()) - .def_readwrite("x", &Vector4f::x) - .def_readwrite("y", &Vector4f::y) - .def_readwrite("z", &Vector4f::z) - .def_readwrite("w", &Vector4f::w); - - py::enum_(m, "ShapeType") - .value("circle", ShapeType::Circle) - .value("ellipse", ShapeType::Ellipse) - .value("path", ShapeType::Path) - .value("rect", ShapeType::Rect); - - py::class_(m, "Circle") - .def(py::init()) - .def("get_ptr", &Circle::get_ptr) - .def_readonly("radius", &Circle::radius) - .def_readonly("center", &Circle::center); - - py::class_(m, "Ellipse") - .def(py::init()) - .def("get_ptr", &Ellipse::get_ptr) - .def_readonly("radius", &Ellipse::radius) - .def_readonly("center", &Ellipse::center); - - py::class_(m, "Path") - .def(py::init, ptr, ptr, int, int, bool, bool>()) - .def("get_ptr", &Path::get_ptr) - .def("has_thickness", &Path::has_thickness) - .def("copy_to", &Path::copy_to) - .def_readonly("num_points", &Path::num_points); - - py::class_(m, "Rect") - .def(py::init()) - .def("get_ptr", &Rect::get_ptr) - .def_readonly("p_min", &Rect::p_min) - .def_readonly("p_max", &Rect::p_max); - - py::enum_(m, "ColorType") - .value("constant", ColorType::Constant) - .value("linear_gradient", ColorType::LinearGradient) - .value("radial_gradient", ColorType::RadialGradient); - - py::class_(m, "Constant") - .def(py::init()) - .def("get_ptr", &Constant::get_ptr) - .def_readonly("color", &Constant::color); - - py::class_(m, "LinearGradient") - .def(py::init, ptr>()) - .def("get_ptr", &LinearGradient::get_ptr) - .def("copy_to", &LinearGradient::copy_to) - .def_readonly("begin", &LinearGradient::begin) - .def_readonly("end", &LinearGradient::end) - .def_readonly("num_stops", &LinearGradient::num_stops); - - py::class_(m, "RadialGradient") - .def(py::init, ptr>()) - .def("get_ptr", &RadialGradient::get_ptr) - .def("copy_to", &RadialGradient::copy_to) - .def_readonly("center", &RadialGradient::center) - .def_readonly("radius", &RadialGradient::radius) - 
.def_readonly("num_stops", &RadialGradient::num_stops); - - py::class_(m, "Shape") - .def(py::init, float>()) - .def("as_circle", &Shape::as_circle) - .def("as_ellipse", &Shape::as_ellipse) - .def("as_path", &Shape::as_path) - .def("as_rect", &Shape::as_rect) - .def_readonly("type", &Shape::type) - .def_readonly("stroke_width", &Shape::stroke_width); - - py::class_(m, "ShapeGroup") - .def(py::init, - int, - ColorType, - ptr, - ColorType, - ptr, - bool, - ptr>()) - .def("fill_color_as_constant", &ShapeGroup::fill_color_as_constant) - .def("fill_color_as_linear_gradient", &ShapeGroup::fill_color_as_linear_gradient) - .def("fill_color_as_radial_gradient", &ShapeGroup::fill_color_as_radial_gradient) - .def("stroke_color_as_constant", &ShapeGroup::stroke_color_as_constant) - .def("stroke_color_as_linear_gradient", &ShapeGroup::stroke_color_as_linear_gradient) - .def("stroke_color_as_radial_gradient", &ShapeGroup::fill_color_as_radial_gradient) - .def("has_fill_color", &ShapeGroup::has_fill_color) - .def("has_stroke_color", &ShapeGroup::has_stroke_color) - .def("copy_to", &ShapeGroup::copy_to) - .def_readonly("fill_color_type", &ShapeGroup::fill_color_type) - .def_readonly("stroke_color_type", &ShapeGroup::stroke_color_type); - - py::enum_(m, "FilterType") - .value("box", FilterType::Box) - .value("tent", FilterType::Tent) - .value("parabolic", FilterType::RadialParabolic) - .value("hann", FilterType::Hann); - - py::class_(m, "Filter") - .def(py::init()); - - py::class_>(m, "Scene") - .def(py::init &, - const std::vector &, - const Filter &, - bool, - int>()) - .def("get_d_shape", &Scene::get_d_shape) - .def("get_d_shape_group", &Scene::get_d_shape_group) - .def("get_d_filter_radius", &Scene::get_d_filter_radius) - .def_readonly("num_shapes", &Scene::num_shapes) - .def_readonly("num_shape_groups", &Scene::num_shape_groups); - - m.def("render", &render, ""); -} diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/complex/stream.h b/spaces/CVPR/LIVE/thrust/thrust/detail/complex/stream.h deleted file mode 100644 index 9d87bbd548974a745da11521302d27524703f4a0..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/complex/stream.h +++ /dev/null @@ -1,71 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * Copyright 2013 Filipe RNC Maia - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#include - -namespace thrust -{ -template -std::basic_ostream& operator<<(std::basic_ostream& os, const complex& z) -{ - os << '(' << z.real() << ',' << z.imag() << ')'; - return os; -} - -template -std::basic_istream& -operator>>(std::basic_istream& is, complex& z) -{ - ValueType re, im; - - charT ch; - is >> ch; - - if(ch == '(') - { - is >> re >> ch; - if (ch == ',') - { - is >> im >> ch; - if (ch == ')') - { - z = complex(re, im); - } - else - { - is.setstate(std::ios_base::failbit); - } - } - else if (ch == ')') - { - z = re; - } - else - { - is.setstate(std::ios_base::failbit); - } - } - else - { - is.putback(ch); - is >> re; - z = re; - } - return is; -} - -} // namespace thrust diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/omp/pointer.h b/spaces/CVPR/LIVE/thrust/thrust/system/omp/pointer.h deleted file mode 100644 index 36b6bed12ac65b117242c291debb9e1ec9deae7d..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/omp/pointer.h +++ /dev/null @@ -1,360 +0,0 @@ -/* - * Copyright 2008-2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/*! \file thrust/system/omp/memory.h - * \brief Managing memory associated with Thrust's OpenMP system. - */ - -#pragma once - -#include -#include -#include -#include -#include - -namespace thrust -{ -namespace system -{ -namespace omp -{ - -template class pointer; - -} // end omp -} // end system -} // end thrust - - -/*! \cond - */ - -// specialize thrust::iterator_traits to avoid problems with the name of -// pointer's constructor shadowing its nested pointer type -// do this before pointer is defined so the specialization is correctly -// used inside the definition -namespace thrust -{ - -template - struct iterator_traits > -{ - private: - typedef thrust::system::omp::pointer ptr; - - public: - typedef typename ptr::iterator_category iterator_category; - typedef typename ptr::value_type value_type; - typedef typename ptr::difference_type difference_type; - typedef ptr pointer; - typedef typename ptr::reference reference; -}; // end iterator_traits - -} // end thrust - -/*! \endcond - */ - - -namespace thrust -{ -namespace system -{ - -/*! \addtogroup system_backends Systems - * \ingroup system - * \{ - */ - -/*! \namespace thrust::system::omp - * \brief \p thrust::system::omp is the namespace containing functionality for allocating, manipulating, - * and deallocating memory available to Thrust's OpenMP backend system. - * The identifiers are provided in a separate namespace underneath thrust::system - * for import convenience but are also aliased in the top-level thrust::omp - * namespace for easy access. - * - */ -namespace omp -{ - -// forward declaration of reference for pointer -template class reference; - -/*! 
\cond - */ - -// XXX nvcc + msvc have trouble instantiating reference below -// this is a workaround -namespace detail -{ - -template - struct reference_msvc_workaround -{ - typedef thrust::system::omp::reference type; -}; // end reference_msvc_workaround - -} // end detail - -/*! \endcond - */ - - -/*! \p pointer stores a pointer to an object allocated in memory available to the omp system. - * This type provides type safety when dispatching standard algorithms on ranges resident - * in omp memory. - * - * \p pointer has pointer semantics: it may be dereferenced and manipulated with pointer arithmetic. - * - * \p pointer can be created with the function \p omp::malloc, or by explicitly calling its constructor - * with a raw pointer. - * - * The raw pointer encapsulated by a \p pointer may be obtained by eiter its get member function - * or the \p raw_pointer_cast function. - * - * \note \p pointer is not a "smart" pointer; it is the programmer's responsibility to deallocate memory - * pointed to by \p pointer. - * - * \tparam T specifies the type of the pointee. - * - * \see omp::malloc - * \see omp::free - * \see raw_pointer_cast - */ -template - class pointer - : public thrust::pointer< - T, - thrust::system::omp::tag, - thrust::system::omp::reference, - thrust::system::omp::pointer - > -{ - /*! \cond - */ - - private: - typedef thrust::pointer< - T, - thrust::system::omp::tag, - //thrust::system::omp::reference, - typename detail::reference_msvc_workaround::type, - thrust::system::omp::pointer - > super_t; - - /*! \endcond - */ - - public: - // note that omp::pointer's member functions need __host__ __device__ - // to interoperate with nvcc + iterators' dereference member function - - /*! \p pointer's no-argument constructor initializes its encapsulated pointer to \c 0. - */ - __host__ __device__ - pointer() : super_t() {} - - #if THRUST_CPP_DIALECT >= 2011 - // NOTE: This is needed so that Thrust smart pointers can be used in - // `std::unique_ptr`. - __host__ __device__ - pointer(decltype(nullptr)) : super_t(nullptr) {} - #endif - - /*! This constructor allows construction of a pointer from a T*. - * - * \param ptr A raw pointer to copy from, presumed to point to a location in memory - * accessible by the \p omp system. - * \tparam OtherT \p OtherT shall be convertible to \p T. - */ - template - __host__ __device__ - explicit pointer(OtherT *ptr) : super_t(ptr) {} - - /*! This constructor allows construction from another pointer-like object with related type. - * - * \param other The \p OtherPointer to copy. - * \tparam OtherPointer The system tag associated with \p OtherPointer shall be convertible - * to \p thrust::system::omp::tag and its element type shall be convertible to \p T. - */ - template - __host__ __device__ - pointer(const OtherPointer &other, - typename thrust::detail::enable_if_pointer_is_convertible< - OtherPointer, - pointer - >::type * = 0) : super_t(other) {} - - /*! This constructor allows construction from another pointer-like object with \p void type. - * - * \param other The \p OtherPointer to copy. - * \tparam OtherPointer The system tag associated with \p OtherPointer shall be convertible - * to \p thrust::system::omp::tag and its element type shall be \p void. - */ - template - __host__ __device__ - explicit - pointer(const OtherPointer &other, - typename thrust::detail::enable_if_void_pointer_is_system_convertible< - OtherPointer, - pointer - >::type * = 0) : super_t(other) {} - - /*! 
Assignment operator allows assigning from another pointer-like object with related type. - * - * \param other The other pointer-like object to assign from. - * \tparam OtherPointer The system tag associated with \p OtherPointer shall be convertible - * to \p thrust::system::omp::tag and its element type shall be convertible to \p T. - */ - template - __host__ __device__ - typename thrust::detail::enable_if_pointer_is_convertible< - OtherPointer, - pointer, - pointer & - >::type - operator=(const OtherPointer &other) - { - return super_t::operator=(other); - } - - #if THRUST_CPP_DIALECT >= 2011 - // NOTE: This is needed so that Thrust smart pointers can be used in - // `std::unique_ptr`. - __host__ __device__ - pointer& operator=(decltype(nullptr)) - { - super_t::operator=(nullptr); - return *this; - } - #endif -}; // end pointer - - -/*! \p reference is a wrapped reference to an object stored in memory available to the \p omp system. - * \p reference is the type of the result of dereferencing a \p omp::pointer. - * - * \tparam T Specifies the type of the referenced object. - */ -template - class reference - : public thrust::reference< - T, - thrust::system::omp::pointer, - thrust::system::omp::reference - > -{ - /*! \cond - */ - - private: - typedef thrust::reference< - T, - thrust::system::omp::pointer, - thrust::system::omp::reference - > super_t; - - /*! \endcond - */ - - public: - /*! \cond - */ - - typedef typename super_t::value_type value_type; - typedef typename super_t::pointer pointer; - - /*! \endcond - */ - - /*! This constructor initializes this \p reference to refer to an object - * pointed to by the given \p pointer. After this \p reference is constructed, - * it shall refer to the object pointed to by \p ptr. - * - * \param ptr A \p pointer to copy from. - */ - __host__ __device__ - explicit reference(const pointer &ptr) - : super_t(ptr) - {} - - /*! This constructor accepts a const reference to another \p reference of related type. - * After this \p reference is constructed, it shall refer to the same object as \p other. - * - * \param other A \p reference to copy from. - * \tparam OtherT The element type of the other \p reference. - * - * \note This constructor is templated primarily to allow initialization of reference - * from reference. - */ - template - __host__ __device__ - reference(const reference &other, - typename thrust::detail::enable_if_convertible< - typename reference::pointer, - pointer - >::type * = 0) - : super_t(other) - {} - - /*! Copy assignment operator copy assigns from another \p reference of related type. - * - * \param other The other \p reference to assign from. - * \return *this - * \tparam OtherT The element type of the other \p reference. - */ - template - reference &operator=(const reference &other); - - /*! Assignment operator assigns from a \p value_type. - * - * \param x The \p value_type to assign from. - * \return *this - */ - reference &operator=(const value_type &x); -}; // end reference - -/*! Exchanges the values of two objects referred to by \p reference. - * \p x The first \p reference of interest. - * \p y The second \p reference of interest. - */ -template -__host__ __device__ -void swap(reference x, reference y); - -} // end omp - -/*! \} - */ - -} // end system - -/*! \namespace thrust::omp - * \brief \p thrust::omp is a top-level alias for thrust::system::omp. 
- */ -namespace omp -{ - -using thrust::system::omp::pointer; -using thrust::system::omp::reference; - -} // end omp - -} // end thrust - -#include - diff --git a/spaces/CVPR/MonoScene/monoscene/unet3d_nyu.py b/spaces/CVPR/MonoScene/monoscene/unet3d_nyu.py deleted file mode 100644 index e9e3b3718999248efa1b2925658465ba59801b13..0000000000000000000000000000000000000000 --- a/spaces/CVPR/MonoScene/monoscene/unet3d_nyu.py +++ /dev/null @@ -1,90 +0,0 @@ -# encoding: utf-8 -import torch -import torch.nn as nn -import torch.nn.functional as F -import numpy as np -from monoscene.CRP3D import CPMegaVoxels -from monoscene.modules import ( - Process, - Upsample, - Downsample, - SegmentationHead, - ASPP, -) - - -class UNet3D(nn.Module): - def __init__( - self, - class_num, - norm_layer, - feature, - full_scene_size, - n_relations=4, - project_res=[], - context_prior=True, - bn_momentum=0.1, - ): - super(UNet3D, self).__init__() - self.business_layer = [] - self.project_res = project_res - - self.feature_1_4 = feature - self.feature_1_8 = feature * 2 - self.feature_1_16 = feature * 4 - - self.feature_1_16_dec = self.feature_1_16 - self.feature_1_8_dec = self.feature_1_8 - self.feature_1_4_dec = self.feature_1_4 - - self.process_1_4 = nn.Sequential( - Process(self.feature_1_4, norm_layer, bn_momentum, dilations=[1, 2, 3]), - Downsample(self.feature_1_4, norm_layer, bn_momentum), - ) - self.process_1_8 = nn.Sequential( - Process(self.feature_1_8, norm_layer, bn_momentum, dilations=[1, 2, 3]), - Downsample(self.feature_1_8, norm_layer, bn_momentum), - ) - self.up_1_16_1_8 = Upsample( - self.feature_1_16_dec, self.feature_1_8_dec, norm_layer, bn_momentum - ) - self.up_1_8_1_4 = Upsample( - self.feature_1_8_dec, self.feature_1_4_dec, norm_layer, bn_momentum - ) - self.ssc_head_1_4 = SegmentationHead( - self.feature_1_4_dec, self.feature_1_4_dec, class_num, [1, 2, 3] - ) - - self.context_prior = context_prior - size_1_16 = tuple(np.ceil(i / 4).astype(int) for i in full_scene_size) - - if context_prior: - self.CP_mega_voxels = CPMegaVoxels( - self.feature_1_16, - size_1_16, - n_relations=n_relations, - bn_momentum=bn_momentum, - ) - - # - def forward(self, input_dict): - res = {} - - x3d_1_4 = input_dict["x3d"] - x3d_1_8 = self.process_1_4(x3d_1_4) - x3d_1_16 = self.process_1_8(x3d_1_8) - - if self.context_prior: - ret = self.CP_mega_voxels(x3d_1_16) - x3d_1_16 = ret["x"] - for k in ret.keys(): - res[k] = ret[k] - - x3d_up_1_8 = self.up_1_16_1_8(x3d_1_16) + x3d_1_8 - x3d_up_1_4 = self.up_1_8_1_4(x3d_up_1_8) + x3d_1_4 - - ssc_logit_1_4 = self.ssc_head_1_4(x3d_up_1_4) - - res["ssc_logit"] = ssc_logit_1_4 - - return res diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/resources/admin/index.html b/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/resources/admin/index.html deleted file mode 100644 index 4daa4f5b2759b16e919dd9b0a018aae1b856d81b..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/resources/admin/index.html +++ /dev/null @@ -1,42 +0,0 @@ -{{extend defaultLayout}} -{{block 'css'}} - -{{/block}} -{{block 'main'}} - -
-
-
ws管理面板
-
#ws设置
-
-
-{{each schema cfgGroup}} -
-
{{cfgGroup.title}}
-
    - {{each cfgGroup.cfg cfgItem cfgKey}} -
  • -
    - {{cfgItem.title}} - #ws设置{{cfgItem.key}} - {{if cfgItem.type==='num'}} {{cfgItem.def}}{{else}} 开启/关闭{{/if}} - - {{if cfgItem.type === 'num'}} -
    {{cfg[cfgKey]}}
    - {{else}} - {{if cfg[cfgKey]}} -
    已开启
    - {{else}} -
    已关闭
    - {{/if}} - {{/if}} -
    - {{if cfgItem.desc && cfgItem.showDesc!== false}} -
    {{cfgItem.desc}}
    - {{/if}} -
  • - {{/each}} -
-
-{{/each}} -{{/block}} \ No newline at end of file diff --git "a/spaces/ClearLove443/Robby-chatbot/pages/1_\360\237\223\204Robby-Chat.py" "b/spaces/ClearLove443/Robby-chatbot/pages/1_\360\237\223\204Robby-Chat.py" deleted file mode 100644 index 7605742cf7239255f169c31768b3ac72f5b70c18..0000000000000000000000000000000000000000 --- "a/spaces/ClearLove443/Robby-chatbot/pages/1_\360\237\223\204Robby-Chat.py" +++ /dev/null @@ -1,100 +0,0 @@ -import os -import streamlit as st -from io import StringIO -import re -import sys -from modules.history import ChatHistory -from modules.layout import Layout -from modules.utils import Utilities -from modules.sidebar import Sidebar - -#To be able to update the changes made to modules in localhost (press r) -def reload_module(module_name): - import importlib - import sys - if module_name in sys.modules: - importlib.reload(sys.modules[module_name]) - return sys.modules[module_name] - -history_module = reload_module('modules.history') -layout_module = reload_module('modules.layout') -utils_module = reload_module('modules.utils') -sidebar_module = reload_module('modules.sidebar') - -ChatHistory = history_module.ChatHistory -Layout = layout_module.Layout -Utilities = utils_module.Utilities -Sidebar = sidebar_module.Sidebar - -st.set_page_config(layout="wide", page_icon="💬", page_title="Robby | Chat-Bot 🤖") - -# Instantiate the main components -layout, sidebar, utils = Layout(), Sidebar(), Utilities() - -layout.show_header("PDF, TXT, CSV") - -user_api_key = utils.load_api_key() - -if not user_api_key: - layout.show_api_key_missing() -else: - os.environ["OPENAI_API_KEY"] = user_api_key - - uploaded_file = utils.handle_upload(["pdf", "txt", "csv"]) - - if uploaded_file: - - # Configure the sidebar - sidebar.show_options() - sidebar.about() - - # Initialize chat history - history = ChatHistory() - try: - chatbot = utils.setup_chatbot( - uploaded_file, st.session_state["model"], st.session_state["temperature"] - ) - st.session_state["chatbot"] = chatbot - - if st.session_state["ready"]: - # Create containers for chat responses and user prompts - response_container, prompt_container = st.container(), st.container() - - with prompt_container: - # Display the prompt form - is_ready, user_input = layout.prompt_form() - - # Initialize the chat history - history.initialize(uploaded_file) - - # Reset the chat history if button clicked - if st.session_state["reset_chat"]: - history.reset(uploaded_file) - - if is_ready: - # Update the chat history and display the chat messages - history.append("user", user_input) - - old_stdout = sys.stdout - sys.stdout = captured_output = StringIO() - - output = st.session_state["chatbot"].conversational_chat(user_input) - - sys.stdout = old_stdout - - history.append("assistant", output) - - # Clean up the agent's thoughts to remove unwanted characters - thoughts = captured_output.getvalue() - cleaned_thoughts = re.sub(r'\x1b\[[0-9;]*[a-zA-Z]', '', thoughts) - cleaned_thoughts = re.sub(r'\[1m>', '', cleaned_thoughts) - - # Display the agent's thoughts - with st.expander("Display the agent's thoughts"): - st.write(cleaned_thoughts) - - history.generate_messages(response_container) - except Exception as e: - st.error(f"Error: {str(e)}") - - diff --git a/spaces/CoPoBio/skin_cancer_risk_prediction/app_DCCPH.py b/spaces/CoPoBio/skin_cancer_risk_prediction/app_DCCPH.py deleted file mode 100644 index 428bc65eba18981634bbff873477dcd4cb5163d6..0000000000000000000000000000000000000000 --- a/spaces/CoPoBio/skin_cancer_risk_prediction/app_DCCPH.py +++ 
/dev/null @@ -1,225 +0,0 @@ -import gradio as gr - - -# -*- coding: utf-8 -*- - -import cv2 -import numpy as np -import tensorflow.compat.v1 as tf -tf.disable_v2_behavior() - - - -import imutils -import dlib -from facealigner import FaceAligner -from imutils.face_utils import rect_to_bb -from imutils import face_utils - -#-------------------------------start facial preprocessing------------------------------ -detector_size = 512 - -# construct the arguments; shape-predictor & image -shape_predictor = 'shape_predictor_68_face_landmarks.dat' - -#initialize dlib's face detector (HOG-based) and then create -# the facial landmark predictor and the face aligner -detector = dlib.get_frontal_face_detector() -predictor = dlib.shape_predictor(shape_predictor) -fa = FaceAligner(predictor, desiredFaceWidth=detector_size) - - -def face_preprocessing(image_src):#,save_name): - - image_resized = imutils.resize(image_src, width=768) - gray = cv2.cvtColor(image_resized, cv2.COLOR_BGR2GRAY) - rects = detector(gray, 2) - if len(rects) == 0: - print('no face detected') - return image_src, 0 - rect = rects[0] - #print(image_resized.shape, gray.shape, rect) - img = fa.align(image_resized, gray, rect) - #print(img.shape) - #exit(0) - gray2 = img.copy() - rects2 = detector(gray2, 2) ######### - if len(rects2) == 0: - print('no face detected after alignment') - return img, 0 - rect = rects2[0] - - lm = predictor(gray2, rect) - lm = face_utils.shape_to_np(lm) - - n_size=img.shape[0] - landmarks_points=[] - for n in range(0,17): - if n == 0: - x = 0 - y = 0 - elif n == 16: - x=n_size - y=0 - else: - x=lm[n][0] - y=lm[n][1] - - landmarks_points.append((x,y)) - #print(landmarks_points) - #exit(0) - target_gray=cv2.cvtColor(img,cv2.COLOR_RGB2GRAY) - mask=np.zeros_like(target_gray) - points=np.array(landmarks_points,np.int32) - - convexhull=cv2.convexHull(points) - - cv2.fillConvexPoly(mask,convexhull,255) - - target_face_1=cv2.bitwise_and(img,img,mask=mask) - - left = lm[0] - right = lm[16] - top = lm[20] - nosetip = lm[30] - jaw = lm[8] - n=60 - #print(left) - #print(2*(left[0]-n)-(1024-jaw[1]),jaw[1],left[0]-n,1024-(left[0]-n)) - if left[0]-n >= detector_size-(left[0]-n) or left[0]-n > detector_size-(left[0]-n): - print('not cropped') - return img, 1 - img_crop = target_face_1[left[0]-n:detector_size-(left[0]-n),left[0]-n:detector_size-(left[0]-n)] - #mg_crop = img[ 2*(left[0]-n)-(1024-jaw[1]):jaw[1],left[0]-n:1024-(left[0]-n)] - return img_crop, 1 - - - -#----------------------------end facial preprocessing------------ - - - - - - -batch_size = 1 -w,h=200,200 -ch=3 - -# -----------------build networks---------------------- - -x = tf.placeholder(tf.float32, shape=[batch_size, w, h, ch], name='x') -y_ = tf.placeholder(tf.float32, shape=[batch_size, 1], name='y_') -y_event = tf.placeholder(tf.float32, shape=[batch_size, 1], name='y_event') -y_ = tf.transpose(y_) -y_mask = tf.placeholder(tf.float32, shape=[batch_size, batch_size], name='y_mask') -y_true = tf.concat ([y_, y_mask], axis = 0) -y_true=tf.transpose(y_true) -print(tf.shape(y_true)) - - -# conv1 -conv1 = tf.layers.conv2d(inputs=x, filters=32, kernel_size=[5, 5], padding="same", activation=tf.nn.relu, - kernel_initializer=tf.truncated_normal_initializer(stddev=0.01)) -pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2) - -# conv2 -conv2 = tf.layers.conv2d(inputs=pool1, filters=64, kernel_size=[5, 5], padding="same", activation=tf.nn.relu, - kernel_initializer=tf.truncated_normal_initializer(stddev=0.01)) -pool2 = 
tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2) - -# conv3 -conv3 = tf.layers.conv2d(inputs=pool2, filters=128, kernel_size=[3, 3], padding="same", activation=tf.nn.relu, - kernel_initializer=tf.truncated_normal_initializer(stddev=0.01)) -pool3 = tf.layers.max_pooling2d(inputs=conv3, pool_size=[2, 2], strides=2) - -# conv4 -conv4 = tf.layers.conv2d(inputs=pool3, filters=256, kernel_size=[3, 3], padding="same", activation=tf.nn.relu, - kernel_initializer=tf.truncated_normal_initializer(stddev=0.01)) -pool4 = tf.layers.max_pooling2d(inputs=conv4, pool_size=[2, 2], strides=2) - -re1 = tf.reshape(pool4, [-1, 12 * 12 * 256]) - -# fully connected -dense1 = tf.layers.dense(inputs=re1, units=1024, activation=tf.nn.relu, - kernel_initializer=tf.truncated_normal_initializer(stddev=0.01)) -dense2 = tf.layers.dense(inputs=dense1, units=512, activation=tf.nn.relu, - kernel_initializer=tf.truncated_normal_initializer(stddev=0.01)) - - -keep_prob = tf.placeholder(tf.float32) # keep_prob: 1.0 means 100% keep -h_fcl_drop = tf.nn.dropout(dense2,keep_prob) # dropout -logits = tf.layers.dense(inputs=h_fcl_drop, units=1, activation=None, - kernel_initializer=tf.truncated_normal_initializer(stddev=0.01)) -#--------------------------------------end build networks - - -config = tf.ConfigProto() -config.gpu_options.allow_growth = True -saver = tf.train.Saver() - -sess = tf.Session(config=config) -sess.run(tf.global_variables_initializer()) -saver.restore(sess, tf.train.latest_checkpoint('checkpoint2/')) - - - -# Images - - -def image_classifier(image): - image_processed, processed_flag = face_preprocessing(image) - if processed_flag == 0: - results = {} - results['no face detected']= 0 - return 'no face detected' - - #image = np.array(image) - image = cv2.resize(image_processed, (w, h),interpolation=cv2.INTER_AREA) - print(image.shape) - colorimage_b = cv2.equalizeHist(image[:,:,0]) - colorimage_g = cv2.equalizeHist(image[:,:,1]) - colorimage_r = cv2.equalizeHist(image[:,:,2]) - - # Next we stack our equalized channels back into a single image - image_feed = np.stack((colorimage_b,colorimage_g,colorimage_r), axis=2) - image_feed = np.reshape(image_feed, (1, w, h, 3)) - image_feed = image_feed.astype(np.float32) - - logits_out = sess.run([logits ], feed_dict={x: image_feed, keep_prob:1.0}) - #normalized_score = (logits_out[0]+4326668288.0)/(8428149760.0+4326668288.0) - normalized_score = (logits_out[0]+40.463287353515625)/(31.093469619750977+40.463287353515625) - if normalized_score > 1: - normalized_score == 1 - if normalized_score < 0: - normalized_score == 0 - - #results[''] = (logits_out[0]+4326668288.0)/(8428149760.0+4326668288.0) - #results['risk']= normalized_score - return 'The predicted risk is:', normalized_score[0] - -title = "Demonstration of skin cancer risk prediction" -description = """ -This app is a proof-of-concept demonstration of predicting the risk of developing skin cancer\n -Please kindly note that the model was trained with facial images of participants (age > 50) from the [Rotterdam Study](http://www.epib.nl/research/ergo.htm). 
\n -Facial images were taken in a 3D imaging room with consistent ambient lighting \n -For more information, please check: https://www.medrxiv.org/content/10.1101/2023.10.04.23296549v1\n - -To start, please upload a frontal facial image: - -""" -examples=[ -['01.jpg', 'Simple Lines'], ['02.jpg', 'Simple Lines'], ['03.jpg', 'Simple Lines'] -] - -with gr.Blocks() as demo: - uploaded_image = gr.Image(type="numpy") - txt_output = gr.Textbox(value="", label="Output") - btn = gr.Button(value="Submit") - btn.click(image_classifier, inputs=[uploaded_image], outputs=[txt_output]) - -#demo = gr.Interface(fn=image_classifier, inputs=uploaded_image, outputs="label", title=title, description=description) -#demo.launch(show_api=False) -demo.launch() - -sess.close() \ No newline at end of file diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/__init__.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/imports.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/imports.py deleted file mode 100644 index 53e27e2bcfd6d9dd57579f48d42811072daf0df5..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/utils/imports.py +++ /dev/null @@ -1,23 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -import torch - -if torch._six.PY3: - import importlib - import importlib.util - import sys - - - # from https://stackoverflow.com/questions/67631/how-to-import-a-module-given-the-full-path?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa - def import_file(module_name, file_path, make_importable=False): - spec = importlib.util.spec_from_file_location(module_name, file_path) - module = importlib.util.module_from_spec(spec) - spec.loader.exec_module(module) - if make_importable: - sys.modules[module_name] = module - return module -else: - import imp - - def import_file(module_name, file_path, make_importable=None): - module = imp.load_source(module_name, file_path) - return module diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/ImageMath.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/ImageMath.py deleted file mode 100644 index ac7d36b698c2ec9839d8a771734c9f730f701534..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/ImageMath.py +++ /dev/null @@ -1,263 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# a simple math add-on for the Python Imaging Library -# -# History: -# 1999-02-15 fl Original PIL Plus release -# 2005-05-05 fl Simplified and cleaned up for PIL 1.1.6 -# 2005-09-12 fl Fixed int() and float() for Python 2.4.1 -# -# Copyright (c) 1999-2005 by Secret Labs AB -# Copyright (c) 2005 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import builtins - -from . import Image, _imagingmath - - -def _isconstant(v): - return isinstance(v, (int, float)) - - -class _Operand: - """Wraps an image operand, providing standard operators""" - - def __init__(self, im): - self.im = im - - def __fixup(self, im1): - # convert image to suitable mode - if isinstance(im1, _Operand): - # argument was an image. 
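            #   (1-bit "1" and 8-bit "L" operands are promoted to 32-bit "I" below so the
            #   integer arithmetic has headroom; "I" and "F" operands pass through as-is,
            #   and any other mode raises ValueError.)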
- if im1.im.mode in ("1", "L"): - return im1.im.convert("I") - elif im1.im.mode in ("I", "F"): - return im1.im - else: - msg = f"unsupported mode: {im1.im.mode}" - raise ValueError(msg) - else: - # argument was a constant - if _isconstant(im1) and self.im.mode in ("1", "L", "I"): - return Image.new("I", self.im.size, im1) - else: - return Image.new("F", self.im.size, im1) - - def apply(self, op, im1, im2=None, mode=None): - im1 = self.__fixup(im1) - if im2 is None: - # unary operation - out = Image.new(mode or im1.mode, im1.size, None) - im1.load() - try: - op = getattr(_imagingmath, op + "_" + im1.mode) - except AttributeError as e: - msg = f"bad operand type for '{op}'" - raise TypeError(msg) from e - _imagingmath.unop(op, out.im.id, im1.im.id) - else: - # binary operation - im2 = self.__fixup(im2) - if im1.mode != im2.mode: - # convert both arguments to floating point - if im1.mode != "F": - im1 = im1.convert("F") - if im2.mode != "F": - im2 = im2.convert("F") - if im1.size != im2.size: - # crop both arguments to a common size - size = (min(im1.size[0], im2.size[0]), min(im1.size[1], im2.size[1])) - if im1.size != size: - im1 = im1.crop((0, 0) + size) - if im2.size != size: - im2 = im2.crop((0, 0) + size) - out = Image.new(mode or im1.mode, im1.size, None) - im1.load() - im2.load() - try: - op = getattr(_imagingmath, op + "_" + im1.mode) - except AttributeError as e: - msg = f"bad operand type for '{op}'" - raise TypeError(msg) from e - _imagingmath.binop(op, out.im.id, im1.im.id, im2.im.id) - return _Operand(out) - - # unary operators - def __bool__(self): - # an image is "true" if it contains at least one non-zero pixel - return self.im.getbbox() is not None - - def __abs__(self): - return self.apply("abs", self) - - def __pos__(self): - return self - - def __neg__(self): - return self.apply("neg", self) - - # binary operators - def __add__(self, other): - return self.apply("add", self, other) - - def __radd__(self, other): - return self.apply("add", other, self) - - def __sub__(self, other): - return self.apply("sub", self, other) - - def __rsub__(self, other): - return self.apply("sub", other, self) - - def __mul__(self, other): - return self.apply("mul", self, other) - - def __rmul__(self, other): - return self.apply("mul", other, self) - - def __truediv__(self, other): - return self.apply("div", self, other) - - def __rtruediv__(self, other): - return self.apply("div", other, self) - - def __mod__(self, other): - return self.apply("mod", self, other) - - def __rmod__(self, other): - return self.apply("mod", other, self) - - def __pow__(self, other): - return self.apply("pow", self, other) - - def __rpow__(self, other): - return self.apply("pow", other, self) - - # bitwise - def __invert__(self): - return self.apply("invert", self) - - def __and__(self, other): - return self.apply("and", self, other) - - def __rand__(self, other): - return self.apply("and", other, self) - - def __or__(self, other): - return self.apply("or", self, other) - - def __ror__(self, other): - return self.apply("or", other, self) - - def __xor__(self, other): - return self.apply("xor", self, other) - - def __rxor__(self, other): - return self.apply("xor", other, self) - - def __lshift__(self, other): - return self.apply("lshift", self, other) - - def __rshift__(self, other): - return self.apply("rshift", self, other) - - # logical - def __eq__(self, other): - return self.apply("eq", self, other) - - def __ne__(self, other): - return self.apply("ne", self, other) - - def __lt__(self, other): - return 
self.apply("lt", self, other) - - def __le__(self, other): - return self.apply("le", self, other) - - def __gt__(self, other): - return self.apply("gt", self, other) - - def __ge__(self, other): - return self.apply("ge", self, other) - - -# conversions -def imagemath_int(self): - return _Operand(self.im.convert("I")) - - -def imagemath_float(self): - return _Operand(self.im.convert("F")) - - -# logical -def imagemath_equal(self, other): - return self.apply("eq", self, other, mode="I") - - -def imagemath_notequal(self, other): - return self.apply("ne", self, other, mode="I") - - -def imagemath_min(self, other): - return self.apply("min", self, other) - - -def imagemath_max(self, other): - return self.apply("max", self, other) - - -def imagemath_convert(self, mode): - return _Operand(self.im.convert(mode)) - - -ops = {} -for k, v in list(globals().items()): - if k[:10] == "imagemath_": - ops[k[10:]] = v - - -def eval(expression, _dict={}, **kw): - """ - Evaluates an image expression. - - :param expression: A string containing a Python-style expression. - :param options: Values to add to the evaluation context. You - can either use a dictionary, or one or more keyword - arguments. - :return: The evaluated expression. This is usually an image object, but can - also be an integer, a floating point value, or a pixel tuple, - depending on the expression. - """ - - # build execution namespace - args = ops.copy() - args.update(_dict) - args.update(kw) - for k, v in list(args.items()): - if hasattr(v, "im"): - args[k] = _Operand(v) - - compiled_code = compile(expression, "", "eval") - - def scan(code): - for const in code.co_consts: - if type(const) == type(compiled_code): - scan(const) - - for name in code.co_names: - if name not in args and name != "abs": - msg = f"'{name}' not allowed" - raise ValueError(msg) - - scan(compiled_code) - out = builtins.eval(expression, {"__builtins": {"abs": abs}}, args) - try: - return out.im - except AttributeError: - return out diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/PngImagePlugin.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/PngImagePlugin.py deleted file mode 100644 index bfa8cb7ac66c15e2f5d1128f4ba9a1ad69758ec1..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/PIL/PngImagePlugin.py +++ /dev/null @@ -1,1456 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# PNG support code -# -# See "PNG (Portable Network Graphics) Specification, version 1.0; -# W3C Recommendation", 1996-10-01, Thomas Boutell (ed.). 
-# -# history: -# 1996-05-06 fl Created (couldn't resist it) -# 1996-12-14 fl Upgraded, added read and verify support (0.2) -# 1996-12-15 fl Separate PNG stream parser -# 1996-12-29 fl Added write support, added getchunks -# 1996-12-30 fl Eliminated circular references in decoder (0.3) -# 1998-07-12 fl Read/write 16-bit images as mode I (0.4) -# 2001-02-08 fl Added transparency support (from Zircon) (0.5) -# 2001-04-16 fl Don't close data source in "open" method (0.6) -# 2004-02-24 fl Don't even pretend to support interlaced files (0.7) -# 2004-08-31 fl Do basic sanity check on chunk identifiers (0.8) -# 2004-09-20 fl Added PngInfo chunk container -# 2004-12-18 fl Added DPI read support (based on code by Niki Spahiev) -# 2008-08-13 fl Added tRNS support for RGB images -# 2009-03-06 fl Support for preserving ICC profiles (by Florian Hoech) -# 2009-03-08 fl Added zTXT support (from Lowell Alleman) -# 2009-03-29 fl Read interlaced PNG files (from Conrado Porto Lopes Gouvua) -# -# Copyright (c) 1997-2009 by Secret Labs AB -# Copyright (c) 1996 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import itertools -import logging -import re -import struct -import warnings -import zlib -from enum import IntEnum - -from . import Image, ImageChops, ImageFile, ImagePalette, ImageSequence -from ._binary import i16be as i16 -from ._binary import i32be as i32 -from ._binary import o8 -from ._binary import o16be as o16 -from ._binary import o32be as o32 - -logger = logging.getLogger(__name__) - -is_cid = re.compile(rb"\w\w\w\w").match - - -_MAGIC = b"\211PNG\r\n\032\n" - - -_MODES = { - # supported bits/color combinations, and corresponding modes/rawmodes - # Greyscale - (1, 0): ("1", "1"), - (2, 0): ("L", "L;2"), - (4, 0): ("L", "L;4"), - (8, 0): ("L", "L"), - (16, 0): ("I", "I;16B"), - # Truecolour - (8, 2): ("RGB", "RGB"), - (16, 2): ("RGB", "RGB;16B"), - # Indexed-colour - (1, 3): ("P", "P;1"), - (2, 3): ("P", "P;2"), - (4, 3): ("P", "P;4"), - (8, 3): ("P", "P"), - # Greyscale with alpha - (8, 4): ("LA", "LA"), - (16, 4): ("RGBA", "LA;16B"), # LA;16B->LA not yet available - # Truecolour with alpha - (8, 6): ("RGBA", "RGBA"), - (16, 6): ("RGBA", "RGBA;16B"), -} - - -_simple_palette = re.compile(b"^\xff*\x00\xff*$") - -MAX_TEXT_CHUNK = ImageFile.SAFEBLOCK -""" -Maximum decompressed size for a iTXt or zTXt chunk. -Eliminates decompression bombs where compressed chunks can expand 1000x. -See :ref:`Text in PNG File Format`. -""" -MAX_TEXT_MEMORY = 64 * MAX_TEXT_CHUNK -""" -Set the maximum total text chunk size. -See :ref:`Text in PNG File Format`. -""" - - -# APNG frame disposal modes -class Disposal(IntEnum): - OP_NONE = 0 - """ - No disposal is done on this frame before rendering the next frame. - See :ref:`Saving APNG sequences`. - """ - OP_BACKGROUND = 1 - """ - This frame’s modified region is cleared to fully transparent black before rendering - the next frame. - See :ref:`Saving APNG sequences`. - """ - OP_PREVIOUS = 2 - """ - This frame’s modified region is reverted to the previous frame’s contents before - rendering the next frame. - See :ref:`Saving APNG sequences`. - """ - - -# APNG frame blend modes -class Blend(IntEnum): - OP_SOURCE = 0 - """ - All color components of this frame, including alpha, overwrite the previous output - image contents. - See :ref:`Saving APNG sequences`. - """ - OP_OVER = 1 - """ - This frame should be alpha composited with the previous output image contents. - See :ref:`Saving APNG sequences`. 
- """ - - -def _safe_zlib_decompress(s): - dobj = zlib.decompressobj() - plaintext = dobj.decompress(s, MAX_TEXT_CHUNK) - if dobj.unconsumed_tail: - msg = "Decompressed Data Too Large" - raise ValueError(msg) - return plaintext - - -def _crc32(data, seed=0): - return zlib.crc32(data, seed) & 0xFFFFFFFF - - -# -------------------------------------------------------------------- -# Support classes. Suitable for PNG and related formats like MNG etc. - - -class ChunkStream: - def __init__(self, fp): - self.fp = fp - self.queue = [] - - def read(self): - """Fetch a new chunk. Returns header information.""" - cid = None - - if self.queue: - cid, pos, length = self.queue.pop() - self.fp.seek(pos) - else: - s = self.fp.read(8) - cid = s[4:] - pos = self.fp.tell() - length = i32(s) - - if not is_cid(cid): - if not ImageFile.LOAD_TRUNCATED_IMAGES: - msg = f"broken PNG file (chunk {repr(cid)})" - raise SyntaxError(msg) - - return cid, pos, length - - def __enter__(self): - return self - - def __exit__(self, *args): - self.close() - - def close(self): - self.queue = self.fp = None - - def push(self, cid, pos, length): - self.queue.append((cid, pos, length)) - - def call(self, cid, pos, length): - """Call the appropriate chunk handler""" - - logger.debug("STREAM %r %s %s", cid, pos, length) - return getattr(self, "chunk_" + cid.decode("ascii"))(pos, length) - - def crc(self, cid, data): - """Read and verify checksum""" - - # Skip CRC checks for ancillary chunks if allowed to load truncated - # images - # 5th byte of first char is 1 [specs, section 5.4] - if ImageFile.LOAD_TRUNCATED_IMAGES and (cid[0] >> 5 & 1): - self.crc_skip(cid, data) - return - - try: - crc1 = _crc32(data, _crc32(cid)) - crc2 = i32(self.fp.read(4)) - if crc1 != crc2: - msg = f"broken PNG file (bad header checksum in {repr(cid)})" - raise SyntaxError(msg) - except struct.error as e: - msg = f"broken PNG file (incomplete checksum in {repr(cid)})" - raise SyntaxError(msg) from e - - def crc_skip(self, cid, data): - """Read checksum""" - - self.fp.read(4) - - def verify(self, endchunk=b"IEND"): - # Simple approach; just calculate checksum for all remaining - # blocks. Must be called directly after open. - - cids = [] - - while True: - try: - cid, pos, length = self.read() - except struct.error as e: - msg = "truncated PNG file" - raise OSError(msg) from e - - if cid == endchunk: - break - self.crc(cid, ImageFile._safe_read(self.fp, length)) - cids.append(cid) - - return cids - - -class iTXt(str): - """ - Subclass of string to allow iTXt chunks to look like strings while - keeping their extra information - - """ - - @staticmethod - def __new__(cls, text, lang=None, tkey=None): - """ - :param cls: the class to use when creating the instance - :param text: value for this key - :param lang: language code - :param tkey: UTF-8 version of the key name - """ - - self = str.__new__(cls, text) - self.lang = lang - self.tkey = tkey - return self - - -class PngInfo: - """ - PNG chunk container (for use with save(pnginfo=)) - - """ - - def __init__(self): - self.chunks = [] - - def add(self, cid, data, after_idat=False): - """Appends an arbitrary chunk. Use with caution. - - :param cid: a byte string, 4 bytes long. - :param data: a byte string of the encoded data - :param after_idat: for use with private chunks. 
Whether the chunk - should be written after IDAT - - """ - - chunk = [cid, data] - if after_idat: - chunk.append(True) - self.chunks.append(tuple(chunk)) - - def add_itxt(self, key, value, lang="", tkey="", zip=False): - """Appends an iTXt chunk. - - :param key: latin-1 encodable text key name - :param value: value for this key - :param lang: language code - :param tkey: UTF-8 version of the key name - :param zip: compression flag - - """ - - if not isinstance(key, bytes): - key = key.encode("latin-1", "strict") - if not isinstance(value, bytes): - value = value.encode("utf-8", "strict") - if not isinstance(lang, bytes): - lang = lang.encode("utf-8", "strict") - if not isinstance(tkey, bytes): - tkey = tkey.encode("utf-8", "strict") - - if zip: - self.add( - b"iTXt", - key + b"\0\x01\0" + lang + b"\0" + tkey + b"\0" + zlib.compress(value), - ) - else: - self.add(b"iTXt", key + b"\0\0\0" + lang + b"\0" + tkey + b"\0" + value) - - def add_text(self, key, value, zip=False): - """Appends a text chunk. - - :param key: latin-1 encodable text key name - :param value: value for this key, text or an - :py:class:`PIL.PngImagePlugin.iTXt` instance - :param zip: compression flag - - """ - if isinstance(value, iTXt): - return self.add_itxt(key, value, value.lang, value.tkey, zip=zip) - - # The tEXt chunk stores latin-1 text - if not isinstance(value, bytes): - try: - value = value.encode("latin-1", "strict") - except UnicodeError: - return self.add_itxt(key, value, zip=zip) - - if not isinstance(key, bytes): - key = key.encode("latin-1", "strict") - - if zip: - self.add(b"zTXt", key + b"\0\0" + zlib.compress(value)) - else: - self.add(b"tEXt", key + b"\0" + value) - - -# -------------------------------------------------------------------- -# PNG image stream (IHDR/IEND) - - -class PngStream(ChunkStream): - def __init__(self, fp): - super().__init__(fp) - - # local copies of Image attributes - self.im_info = {} - self.im_text = {} - self.im_size = (0, 0) - self.im_mode = None - self.im_tile = None - self.im_palette = None - self.im_custom_mimetype = None - self.im_n_frames = None - self._seq_num = None - self.rewind_state = None - - self.text_memory = 0 - - def check_text_memory(self, chunklen): - self.text_memory += chunklen - if self.text_memory > MAX_TEXT_MEMORY: - msg = ( - "Too much memory used in text chunks: " - f"{self.text_memory}>MAX_TEXT_MEMORY" - ) - raise ValueError(msg) - - def save_rewind(self): - self.rewind_state = { - "info": self.im_info.copy(), - "tile": self.im_tile, - "seq_num": self._seq_num, - } - - def rewind(self): - self.im_info = self.rewind_state["info"] - self.im_tile = self.rewind_state["tile"] - self._seq_num = self.rewind_state["seq_num"] - - def chunk_iCCP(self, pos, length): - # ICC profile - s = ImageFile._safe_read(self.fp, length) - # according to PNG spec, the iCCP chunk contains: - # Profile name 1-79 bytes (character string) - # Null separator 1 byte (null character) - # Compression method 1 byte (0) - # Compressed profile n bytes (zlib with deflate compression) - i = s.find(b"\0") - logger.debug("iCCP profile name %r", s[:i]) - logger.debug("Compression method %s", s[i]) - comp_method = s[i] - if comp_method != 0: - msg = f"Unknown compression method {comp_method} in iCCP chunk" - raise SyntaxError(msg) - try: - icc_profile = _safe_zlib_decompress(s[i + 2 :]) - except ValueError: - if ImageFile.LOAD_TRUNCATED_IMAGES: - icc_profile = None - else: - raise - except zlib.error: - icc_profile = None # FIXME - self.im_info["icc_profile"] = icc_profile - return s - - 
def chunk_IHDR(self, pos, length): - # image header - s = ImageFile._safe_read(self.fp, length) - if length < 13: - if ImageFile.LOAD_TRUNCATED_IMAGES: - return s - msg = "Truncated IHDR chunk" - raise ValueError(msg) - self.im_size = i32(s, 0), i32(s, 4) - try: - self.im_mode, self.im_rawmode = _MODES[(s[8], s[9])] - except Exception: - pass - if s[12]: - self.im_info["interlace"] = 1 - if s[11]: - msg = "unknown filter category" - raise SyntaxError(msg) - return s - - def chunk_IDAT(self, pos, length): - # image data - if "bbox" in self.im_info: - tile = [("zip", self.im_info["bbox"], pos, self.im_rawmode)] - else: - if self.im_n_frames is not None: - self.im_info["default_image"] = True - tile = [("zip", (0, 0) + self.im_size, pos, self.im_rawmode)] - self.im_tile = tile - self.im_idat = length - raise EOFError - - def chunk_IEND(self, pos, length): - # end of PNG image - raise EOFError - - def chunk_PLTE(self, pos, length): - # palette - s = ImageFile._safe_read(self.fp, length) - if self.im_mode == "P": - self.im_palette = "RGB", s - return s - - def chunk_tRNS(self, pos, length): - # transparency - s = ImageFile._safe_read(self.fp, length) - if self.im_mode == "P": - if _simple_palette.match(s): - # tRNS contains only one full-transparent entry, - # other entries are full opaque - i = s.find(b"\0") - if i >= 0: - self.im_info["transparency"] = i - else: - # otherwise, we have a byte string with one alpha value - # for each palette entry - self.im_info["transparency"] = s - elif self.im_mode in ("1", "L", "I"): - self.im_info["transparency"] = i16(s) - elif self.im_mode == "RGB": - self.im_info["transparency"] = i16(s), i16(s, 2), i16(s, 4) - return s - - def chunk_gAMA(self, pos, length): - # gamma setting - s = ImageFile._safe_read(self.fp, length) - self.im_info["gamma"] = i32(s) / 100000.0 - return s - - def chunk_cHRM(self, pos, length): - # chromaticity, 8 unsigned ints, actual value is scaled by 100,000 - # WP x,y, Red x,y, Green x,y Blue x,y - - s = ImageFile._safe_read(self.fp, length) - raw_vals = struct.unpack(">%dI" % (len(s) // 4), s) - self.im_info["chromaticity"] = tuple(elt / 100000.0 for elt in raw_vals) - return s - - def chunk_sRGB(self, pos, length): - # srgb rendering intent, 1 byte - # 0 perceptual - # 1 relative colorimetric - # 2 saturation - # 3 absolute colorimetric - - s = ImageFile._safe_read(self.fp, length) - if length < 1: - if ImageFile.LOAD_TRUNCATED_IMAGES: - return s - msg = "Truncated sRGB chunk" - raise ValueError(msg) - self.im_info["srgb"] = s[0] - return s - - def chunk_pHYs(self, pos, length): - # pixels per unit - s = ImageFile._safe_read(self.fp, length) - if length < 9: - if ImageFile.LOAD_TRUNCATED_IMAGES: - return s - msg = "Truncated pHYs chunk" - raise ValueError(msg) - px, py = i32(s, 0), i32(s, 4) - unit = s[8] - if unit == 1: # meter - dpi = px * 0.0254, py * 0.0254 - self.im_info["dpi"] = dpi - elif unit == 0: - self.im_info["aspect"] = px, py - return s - - def chunk_tEXt(self, pos, length): - # text - s = ImageFile._safe_read(self.fp, length) - try: - k, v = s.split(b"\0", 1) - except ValueError: - # fallback for broken tEXt tags - k = s - v = b"" - if k: - k = k.decode("latin-1", "strict") - v_str = v.decode("latin-1", "replace") - - self.im_info[k] = v if k == "exif" else v_str - self.im_text[k] = v_str - self.check_text_memory(len(v_str)) - - return s - - def chunk_zTXt(self, pos, length): - # compressed text - s = ImageFile._safe_read(self.fp, length) - try: - k, v = s.split(b"\0", 1) - except ValueError: - k = s - v = b"" - if 
v: - comp_method = v[0] - else: - comp_method = 0 - if comp_method != 0: - msg = f"Unknown compression method {comp_method} in zTXt chunk" - raise SyntaxError(msg) - try: - v = _safe_zlib_decompress(v[1:]) - except ValueError: - if ImageFile.LOAD_TRUNCATED_IMAGES: - v = b"" - else: - raise - except zlib.error: - v = b"" - - if k: - k = k.decode("latin-1", "strict") - v = v.decode("latin-1", "replace") - - self.im_info[k] = self.im_text[k] = v - self.check_text_memory(len(v)) - - return s - - def chunk_iTXt(self, pos, length): - # international text - r = s = ImageFile._safe_read(self.fp, length) - try: - k, r = r.split(b"\0", 1) - except ValueError: - return s - if len(r) < 2: - return s - cf, cm, r = r[0], r[1], r[2:] - try: - lang, tk, v = r.split(b"\0", 2) - except ValueError: - return s - if cf != 0: - if cm == 0: - try: - v = _safe_zlib_decompress(v) - except ValueError: - if ImageFile.LOAD_TRUNCATED_IMAGES: - return s - else: - raise - except zlib.error: - return s - else: - return s - try: - k = k.decode("latin-1", "strict") - lang = lang.decode("utf-8", "strict") - tk = tk.decode("utf-8", "strict") - v = v.decode("utf-8", "strict") - except UnicodeError: - return s - - self.im_info[k] = self.im_text[k] = iTXt(v, lang, tk) - self.check_text_memory(len(v)) - - return s - - def chunk_eXIf(self, pos, length): - s = ImageFile._safe_read(self.fp, length) - self.im_info["exif"] = b"Exif\x00\x00" + s - return s - - # APNG chunks - def chunk_acTL(self, pos, length): - s = ImageFile._safe_read(self.fp, length) - if length < 8: - if ImageFile.LOAD_TRUNCATED_IMAGES: - return s - msg = "APNG contains truncated acTL chunk" - raise ValueError(msg) - if self.im_n_frames is not None: - self.im_n_frames = None - warnings.warn("Invalid APNG, will use default PNG image if possible") - return s - n_frames = i32(s) - if n_frames == 0 or n_frames > 0x80000000: - warnings.warn("Invalid APNG, will use default PNG image if possible") - return s - self.im_n_frames = n_frames - self.im_info["loop"] = i32(s, 4) - self.im_custom_mimetype = "image/apng" - return s - - def chunk_fcTL(self, pos, length): - s = ImageFile._safe_read(self.fp, length) - if length < 26: - if ImageFile.LOAD_TRUNCATED_IMAGES: - return s - msg = "APNG contains truncated fcTL chunk" - raise ValueError(msg) - seq = i32(s) - if (self._seq_num is None and seq != 0) or ( - self._seq_num is not None and self._seq_num != seq - 1 - ): - msg = "APNG contains frame sequence errors" - raise SyntaxError(msg) - self._seq_num = seq - width, height = i32(s, 4), i32(s, 8) - px, py = i32(s, 12), i32(s, 16) - im_w, im_h = self.im_size - if px + width > im_w or py + height > im_h: - msg = "APNG contains invalid frames" - raise SyntaxError(msg) - self.im_info["bbox"] = (px, py, px + width, py + height) - delay_num, delay_den = i16(s, 20), i16(s, 22) - if delay_den == 0: - delay_den = 100 - self.im_info["duration"] = float(delay_num) / float(delay_den) * 1000 - self.im_info["disposal"] = s[24] - self.im_info["blend"] = s[25] - return s - - def chunk_fdAT(self, pos, length): - if length < 4: - if ImageFile.LOAD_TRUNCATED_IMAGES: - s = ImageFile._safe_read(self.fp, length) - return s - msg = "APNG contains truncated fDAT chunk" - raise ValueError(msg) - s = ImageFile._safe_read(self.fp, 4) - seq = i32(s) - if self._seq_num != seq - 1: - msg = "APNG contains frame sequence errors" - raise SyntaxError(msg) - self._seq_num = seq - return self.chunk_IDAT(pos + 4, length - 4) - - -# -------------------------------------------------------------------- -# PNG reader - 
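The chunk handlers above only fill the stream's local `im_info`/`im_text` dictionaries; the `PngImageFile` reader defined next copies them onto the opened image. A minimal sketch of how that metadata surfaces through Pillow's public API — `example.png` is a hypothetical input file assumed to carry gAMA, pHYs and text chunks:

```python
from PIL import Image

im = Image.open("example.png")   # hypothetical input file
print(im.info.get("gamma"))      # populated by chunk_gAMA
print(im.info.get("dpi"))        # populated by chunk_pHYs when the unit is metres
print(im.text)                   # tEXt/zTXt/iTXt values; the .text property loads
                                 # the file so chunks placed after IDAT are read too
```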
- -def _accept(prefix): - return prefix[:8] == _MAGIC - - -## -# Image plugin for PNG images. - - -class PngImageFile(ImageFile.ImageFile): - format = "PNG" - format_description = "Portable network graphics" - - def _open(self): - if not _accept(self.fp.read(8)): - msg = "not a PNG file" - raise SyntaxError(msg) - self._fp = self.fp - self.__frame = 0 - - # - # Parse headers up to the first IDAT or fDAT chunk - - self.private_chunks = [] - self.png = PngStream(self.fp) - - while True: - # - # get next chunk - - cid, pos, length = self.png.read() - - try: - s = self.png.call(cid, pos, length) - except EOFError: - break - except AttributeError: - logger.debug("%r %s %s (unknown)", cid, pos, length) - s = ImageFile._safe_read(self.fp, length) - if cid[1:2].islower(): - self.private_chunks.append((cid, s)) - - self.png.crc(cid, s) - - # - # Copy relevant attributes from the PngStream. An alternative - # would be to let the PngStream class modify these attributes - # directly, but that introduces circular references which are - # difficult to break if things go wrong in the decoder... - # (believe me, I've tried ;-) - - self.mode = self.png.im_mode - self._size = self.png.im_size - self.info = self.png.im_info - self._text = None - self.tile = self.png.im_tile - self.custom_mimetype = self.png.im_custom_mimetype - self.n_frames = self.png.im_n_frames or 1 - self.default_image = self.info.get("default_image", False) - - if self.png.im_palette: - rawmode, data = self.png.im_palette - self.palette = ImagePalette.raw(rawmode, data) - - if cid == b"fdAT": - self.__prepare_idat = length - 4 - else: - self.__prepare_idat = length # used by load_prepare() - - if self.png.im_n_frames is not None: - self._close_exclusive_fp_after_loading = False - self.png.save_rewind() - self.__rewind_idat = self.__prepare_idat - self.__rewind = self._fp.tell() - if self.default_image: - # IDAT chunk contains default image and not first animation frame - self.n_frames += 1 - self._seek(0) - self.is_animated = self.n_frames > 1 - - @property - def text(self): - # experimental - if self._text is None: - # iTxt, tEXt and zTXt chunks may appear at the end of the file - # So load the file to ensure that they are read - if self.is_animated: - frame = self.__frame - # for APNG, seek to the final frame before loading - self.seek(self.n_frames - 1) - self.load() - if self.is_animated: - self.seek(frame) - return self._text - - def verify(self): - """Verify PNG file""" - - if self.fp is None: - msg = "verify must be called directly after open" - raise RuntimeError(msg) - - # back up to beginning of IDAT block - self.fp.seek(self.tile[0][2] - 8) - - self.png.verify() - self.png.close() - - if self._exclusive_fp: - self.fp.close() - self.fp = None - - def seek(self, frame): - if not self._seek_check(frame): - return - if frame < self.__frame: - self._seek(0, True) - - last_frame = self.__frame - for f in range(self.__frame + 1, frame + 1): - try: - self._seek(f) - except EOFError as e: - self.seek(last_frame) - msg = "no more images in APNG file" - raise EOFError(msg) from e - - def _seek(self, frame, rewind=False): - if frame == 0: - if rewind: - self._fp.seek(self.__rewind) - self.png.rewind() - self.__prepare_idat = self.__rewind_idat - self.im = None - if self.pyaccess: - self.pyaccess = None - self.info = self.png.im_info - self.tile = self.png.im_tile - self.fp = self._fp - self._prev_im = None - self.dispose = None - self.default_image = self.info.get("default_image", False) - self.dispose_op = self.info.get("disposal") - 
self.blend_op = self.info.get("blend") - self.dispose_extent = self.info.get("bbox") - self.__frame = 0 - else: - if frame != self.__frame + 1: - msg = f"cannot seek to frame {frame}" - raise ValueError(msg) - - # ensure previous frame was loaded - self.load() - - if self.dispose: - self.im.paste(self.dispose, self.dispose_extent) - self._prev_im = self.im.copy() - - self.fp = self._fp - - # advance to the next frame - if self.__prepare_idat: - ImageFile._safe_read(self.fp, self.__prepare_idat) - self.__prepare_idat = 0 - frame_start = False - while True: - self.fp.read(4) # CRC - - try: - cid, pos, length = self.png.read() - except (struct.error, SyntaxError): - break - - if cid == b"IEND": - msg = "No more images in APNG file" - raise EOFError(msg) - if cid == b"fcTL": - if frame_start: - # there must be at least one fdAT chunk between fcTL chunks - msg = "APNG missing frame data" - raise SyntaxError(msg) - frame_start = True - - try: - self.png.call(cid, pos, length) - except UnicodeDecodeError: - break - except EOFError: - if cid == b"fdAT": - length -= 4 - if frame_start: - self.__prepare_idat = length - break - ImageFile._safe_read(self.fp, length) - except AttributeError: - logger.debug("%r %s %s (unknown)", cid, pos, length) - ImageFile._safe_read(self.fp, length) - - self.__frame = frame - self.tile = self.png.im_tile - self.dispose_op = self.info.get("disposal") - self.blend_op = self.info.get("blend") - self.dispose_extent = self.info.get("bbox") - - if not self.tile: - raise EOFError - - # setup frame disposal (actual disposal done when needed in the next _seek()) - if self._prev_im is None and self.dispose_op == Disposal.OP_PREVIOUS: - self.dispose_op = Disposal.OP_BACKGROUND - - if self.dispose_op == Disposal.OP_PREVIOUS: - self.dispose = self._prev_im.copy() - self.dispose = self._crop(self.dispose, self.dispose_extent) - elif self.dispose_op == Disposal.OP_BACKGROUND: - self.dispose = Image.core.fill(self.mode, self.size) - self.dispose = self._crop(self.dispose, self.dispose_extent) - else: - self.dispose = None - - def tell(self): - return self.__frame - - def load_prepare(self): - """internal: prepare to read PNG file""" - - if self.info.get("interlace"): - self.decoderconfig = self.decoderconfig + (1,) - - self.__idat = self.__prepare_idat # used by load_read() - ImageFile.ImageFile.load_prepare(self) - - def load_read(self, read_bytes): - """internal: read more image data""" - - while self.__idat == 0: - # end of chunk, skip forward to next one - - self.fp.read(4) # CRC - - cid, pos, length = self.png.read() - - if cid not in [b"IDAT", b"DDAT", b"fdAT"]: - self.png.push(cid, pos, length) - return b"" - - if cid == b"fdAT": - try: - self.png.call(cid, pos, length) - except EOFError: - pass - self.__idat = length - 4 # sequence_num has already been read - else: - self.__idat = length # empty chunks are allowed - - # read more data from this chunk - if read_bytes <= 0: - read_bytes = self.__idat - else: - read_bytes = min(read_bytes, self.__idat) - - self.__idat = self.__idat - read_bytes - - return self.fp.read(read_bytes) - - def load_end(self): - """internal: finished reading image data""" - if self.__idat != 0: - self.fp.read(self.__idat) - while True: - self.fp.read(4) # CRC - - try: - cid, pos, length = self.png.read() - except (struct.error, SyntaxError): - break - - if cid == b"IEND": - break - elif cid == b"fcTL" and self.is_animated: - # start of the next frame, stop reading - self.__prepare_idat = 0 - self.png.push(cid, pos, length) - break - - try: - 
self.png.call(cid, pos, length) - except UnicodeDecodeError: - break - except EOFError: - if cid == b"fdAT": - length -= 4 - ImageFile._safe_read(self.fp, length) - except AttributeError: - logger.debug("%r %s %s (unknown)", cid, pos, length) - s = ImageFile._safe_read(self.fp, length) - if cid[1:2].islower(): - self.private_chunks.append((cid, s, True)) - self._text = self.png.im_text - if not self.is_animated: - self.png.close() - self.png = None - else: - if self._prev_im and self.blend_op == Blend.OP_OVER: - updated = self._crop(self.im, self.dispose_extent) - if self.im.mode == "RGB" and "transparency" in self.info: - mask = updated.convert_transparent( - "RGBA", self.info["transparency"] - ) - else: - mask = updated.convert("RGBA") - self._prev_im.paste(updated, self.dispose_extent, mask) - self.im = self._prev_im - if self.pyaccess: - self.pyaccess = None - - def _getexif(self): - if "exif" not in self.info: - self.load() - if "exif" not in self.info and "Raw profile type exif" not in self.info: - return None - return self.getexif()._get_merged_dict() - - def getexif(self): - if "exif" not in self.info: - self.load() - - return super().getexif() - - def getxmp(self): - """ - Returns a dictionary containing the XMP tags. - Requires defusedxml to be installed. - - :returns: XMP tags in a dictionary. - """ - return ( - self._getxmp(self.info["XML:com.adobe.xmp"]) - if "XML:com.adobe.xmp" in self.info - else {} - ) - - -# -------------------------------------------------------------------- -# PNG writer - -_OUTMODES = { - # supported PIL modes, and corresponding rawmodes/bits/color combinations - "1": ("1", b"\x01\x00"), - "L;1": ("L;1", b"\x01\x00"), - "L;2": ("L;2", b"\x02\x00"), - "L;4": ("L;4", b"\x04\x00"), - "L": ("L", b"\x08\x00"), - "LA": ("LA", b"\x08\x04"), - "I": ("I;16B", b"\x10\x00"), - "I;16": ("I;16B", b"\x10\x00"), - "P;1": ("P;1", b"\x01\x03"), - "P;2": ("P;2", b"\x02\x03"), - "P;4": ("P;4", b"\x04\x03"), - "P": ("P", b"\x08\x03"), - "RGB": ("RGB", b"\x08\x02"), - "RGBA": ("RGBA", b"\x08\x06"), -} - - -def putchunk(fp, cid, *data): - """Write a PNG chunk (including CRC field)""" - - data = b"".join(data) - - fp.write(o32(len(data)) + cid) - fp.write(data) - crc = _crc32(data, _crc32(cid)) - fp.write(o32(crc)) - - -class _idat: - # wrap output from the encoder in IDAT chunks - - def __init__(self, fp, chunk): - self.fp = fp - self.chunk = chunk - - def write(self, data): - self.chunk(self.fp, b"IDAT", data) - - -class _fdat: - # wrap encoder output in fdAT chunks - - def __init__(self, fp, chunk, seq_num): - self.fp = fp - self.chunk = chunk - self.seq_num = seq_num - - def write(self, data): - self.chunk(self.fp, b"fdAT", o32(self.seq_num), data) - self.seq_num += 1 - - -def _write_multiple_frames(im, fp, chunk, rawmode, default_image, append_images): - duration = im.encoderinfo.get("duration", im.info.get("duration", 0)) - loop = im.encoderinfo.get("loop", im.info.get("loop", 0)) - disposal = im.encoderinfo.get("disposal", im.info.get("disposal", Disposal.OP_NONE)) - blend = im.encoderinfo.get("blend", im.info.get("blend", Blend.OP_SOURCE)) - - if default_image: - chain = itertools.chain(append_images) - else: - chain = itertools.chain([im], append_images) - - im_frames = [] - frame_count = 0 - for im_seq in chain: - for im_frame in ImageSequence.Iterator(im_seq): - if im_frame.mode == rawmode: - im_frame = im_frame.copy() - else: - if rawmode == "P": - im_frame = im_frame.convert(rawmode, palette=im.palette) - else: - im_frame = im_frame.convert(rawmode) - encoderinfo 
= im.encoderinfo.copy() - if isinstance(duration, (list, tuple)): - encoderinfo["duration"] = duration[frame_count] - if isinstance(disposal, (list, tuple)): - encoderinfo["disposal"] = disposal[frame_count] - if isinstance(blend, (list, tuple)): - encoderinfo["blend"] = blend[frame_count] - frame_count += 1 - - if im_frames: - previous = im_frames[-1] - prev_disposal = previous["encoderinfo"].get("disposal") - prev_blend = previous["encoderinfo"].get("blend") - if prev_disposal == Disposal.OP_PREVIOUS and len(im_frames) < 2: - prev_disposal = Disposal.OP_BACKGROUND - - if prev_disposal == Disposal.OP_BACKGROUND: - base_im = previous["im"].copy() - dispose = Image.core.fill("RGBA", im.size, (0, 0, 0, 0)) - bbox = previous["bbox"] - if bbox: - dispose = dispose.crop(bbox) - else: - bbox = (0, 0) + im.size - base_im.paste(dispose, bbox) - elif prev_disposal == Disposal.OP_PREVIOUS: - base_im = im_frames[-2]["im"] - else: - base_im = previous["im"] - delta = ImageChops.subtract_modulo( - im_frame.convert("RGBA"), base_im.convert("RGBA") - ) - bbox = delta.getbbox(alpha_only=False) - if ( - not bbox - and prev_disposal == encoderinfo.get("disposal") - and prev_blend == encoderinfo.get("blend") - ): - previous["encoderinfo"]["duration"] += encoderinfo.get( - "duration", duration - ) - continue - else: - bbox = None - if "duration" not in encoderinfo: - encoderinfo["duration"] = duration - im_frames.append({"im": im_frame, "bbox": bbox, "encoderinfo": encoderinfo}) - - # animation control - chunk( - fp, - b"acTL", - o32(len(im_frames)), # 0: num_frames - o32(loop), # 4: num_plays - ) - - # default image IDAT (if it exists) - if default_image: - ImageFile._save(im, _idat(fp, chunk), [("zip", (0, 0) + im.size, 0, rawmode)]) - - seq_num = 0 - for frame, frame_data in enumerate(im_frames): - im_frame = frame_data["im"] - if not frame_data["bbox"]: - bbox = (0, 0) + im_frame.size - else: - bbox = frame_data["bbox"] - im_frame = im_frame.crop(bbox) - size = im_frame.size - encoderinfo = frame_data["encoderinfo"] - frame_duration = int(round(encoderinfo["duration"])) - frame_disposal = encoderinfo.get("disposal", disposal) - frame_blend = encoderinfo.get("blend", blend) - # frame control - chunk( - fp, - b"fcTL", - o32(seq_num), # sequence_number - o32(size[0]), # width - o32(size[1]), # height - o32(bbox[0]), # x_offset - o32(bbox[1]), # y_offset - o16(frame_duration), # delay_numerator - o16(1000), # delay_denominator - o8(frame_disposal), # dispose_op - o8(frame_blend), # blend_op - ) - seq_num += 1 - # frame data - if frame == 0 and not default_image: - # first frame must be in IDAT chunks for backwards compatibility - ImageFile._save( - im_frame, - _idat(fp, chunk), - [("zip", (0, 0) + im_frame.size, 0, rawmode)], - ) - else: - fdat_chunks = _fdat(fp, chunk, seq_num) - ImageFile._save( - im_frame, - fdat_chunks, - [("zip", (0, 0) + im_frame.size, 0, rawmode)], - ) - seq_num = fdat_chunks.seq_num - - -def _save_all(im, fp, filename): - _save(im, fp, filename, save_all=True) - - -def _save(im, fp, filename, chunk=putchunk, save_all=False): - # save an image to disk (called by the save method) - - if save_all: - default_image = im.encoderinfo.get( - "default_image", im.info.get("default_image") - ) - modes = set() - append_images = im.encoderinfo.get("append_images", []) - if default_image: - chain = itertools.chain(append_images) - else: - chain = itertools.chain([im], append_images) - for im_seq in chain: - for im_frame in ImageSequence.Iterator(im_seq): - modes.add(im_frame.mode) - for mode in 
("RGBA", "RGB", "P"): - if mode in modes: - break - else: - mode = modes.pop() - else: - mode = im.mode - - if mode == "P": - # - # attempt to minimize storage requirements for palette images - if "bits" in im.encoderinfo: - # number of bits specified by user - colors = min(1 << im.encoderinfo["bits"], 256) - else: - # check palette contents - if im.palette: - colors = max(min(len(im.palette.getdata()[1]) // 3, 256), 1) - else: - colors = 256 - - if colors <= 16: - if colors <= 2: - bits = 1 - elif colors <= 4: - bits = 2 - else: - bits = 4 - mode = f"{mode};{bits}" - - # encoder options - im.encoderconfig = ( - im.encoderinfo.get("optimize", False), - im.encoderinfo.get("compress_level", -1), - im.encoderinfo.get("compress_type", -1), - im.encoderinfo.get("dictionary", b""), - ) - - # get the corresponding PNG mode - try: - rawmode, mode = _OUTMODES[mode] - except KeyError as e: - msg = f"cannot write mode {mode} as PNG" - raise OSError(msg) from e - - # - # write minimal PNG file - - fp.write(_MAGIC) - - chunk( - fp, - b"IHDR", - o32(im.size[0]), # 0: size - o32(im.size[1]), - mode, # 8: depth/type - b"\0", # 10: compression - b"\0", # 11: filter category - b"\0", # 12: interlace flag - ) - - chunks = [b"cHRM", b"gAMA", b"sBIT", b"sRGB", b"tIME"] - - icc = im.encoderinfo.get("icc_profile", im.info.get("icc_profile")) - if icc: - # ICC profile - # according to PNG spec, the iCCP chunk contains: - # Profile name 1-79 bytes (character string) - # Null separator 1 byte (null character) - # Compression method 1 byte (0) - # Compressed profile n bytes (zlib with deflate compression) - name = b"ICC Profile" - data = name + b"\0\0" + zlib.compress(icc) - chunk(fp, b"iCCP", data) - - # You must either have sRGB or iCCP. - # Disallow sRGB chunks when an iCCP-chunk has been emitted. - chunks.remove(b"sRGB") - - info = im.encoderinfo.get("pnginfo") - if info: - chunks_multiple_allowed = [b"sPLT", b"iTXt", b"tEXt", b"zTXt"] - for info_chunk in info.chunks: - cid, data = info_chunk[:2] - if cid in chunks: - chunks.remove(cid) - chunk(fp, cid, data) - elif cid in chunks_multiple_allowed: - chunk(fp, cid, data) - elif cid[1:2].islower(): - # Private chunk - after_idat = info_chunk[2:3] - if not after_idat: - chunk(fp, cid, data) - - if im.mode == "P": - palette_byte_number = colors * 3 - palette_bytes = im.im.getpalette("RGB")[:palette_byte_number] - while len(palette_bytes) < palette_byte_number: - palette_bytes += b"\0" - chunk(fp, b"PLTE", palette_bytes) - - transparency = im.encoderinfo.get("transparency", im.info.get("transparency", None)) - - if transparency or transparency == 0: - if im.mode == "P": - # limit to actual palette size - alpha_bytes = colors - if isinstance(transparency, bytes): - chunk(fp, b"tRNS", transparency[:alpha_bytes]) - else: - transparency = max(0, min(255, transparency)) - alpha = b"\xFF" * transparency + b"\0" - chunk(fp, b"tRNS", alpha[:alpha_bytes]) - elif im.mode in ("1", "L", "I"): - transparency = max(0, min(65535, transparency)) - chunk(fp, b"tRNS", o16(transparency)) - elif im.mode == "RGB": - red, green, blue = transparency - chunk(fp, b"tRNS", o16(red) + o16(green) + o16(blue)) - else: - if "transparency" in im.encoderinfo: - # don't bother with transparency if it's an RGBA - # and it's in the info dict. It's probably just stale. 
- msg = "cannot use transparency for this mode" - raise OSError(msg) - else: - if im.mode == "P" and im.im.getpalettemode() == "RGBA": - alpha = im.im.getpalette("RGBA", "A") - alpha_bytes = colors - chunk(fp, b"tRNS", alpha[:alpha_bytes]) - - dpi = im.encoderinfo.get("dpi") - if dpi: - chunk( - fp, - b"pHYs", - o32(int(dpi[0] / 0.0254 + 0.5)), - o32(int(dpi[1] / 0.0254 + 0.5)), - b"\x01", - ) - - if info: - chunks = [b"bKGD", b"hIST"] - for info_chunk in info.chunks: - cid, data = info_chunk[:2] - if cid in chunks: - chunks.remove(cid) - chunk(fp, cid, data) - - exif = im.encoderinfo.get("exif") - if exif: - if isinstance(exif, Image.Exif): - exif = exif.tobytes(8) - if exif.startswith(b"Exif\x00\x00"): - exif = exif[6:] - chunk(fp, b"eXIf", exif) - - if save_all: - _write_multiple_frames(im, fp, chunk, rawmode, default_image, append_images) - else: - ImageFile._save(im, _idat(fp, chunk), [("zip", (0, 0) + im.size, 0, rawmode)]) - - if info: - for info_chunk in info.chunks: - cid, data = info_chunk[:2] - if cid[1:2].islower(): - # Private chunk - after_idat = info_chunk[2:3] - if after_idat: - chunk(fp, cid, data) - - chunk(fp, b"IEND", b"") - - if hasattr(fp, "flush"): - fp.flush() - - -# -------------------------------------------------------------------- -# PNG chunk converter - - -def getchunks(im, **params): - """Return a list of PNG chunks representing this image.""" - - class collector: - data = [] - - def write(self, data): - pass - - def append(self, chunk): - self.data.append(chunk) - - def append(fp, cid, *data): - data = b"".join(data) - crc = o32(_crc32(data, _crc32(cid))) - fp.append((cid, data, crc)) - - fp = collector() - - try: - im.encoderinfo = params - _save(im, fp, None, append) - finally: - del im.encoderinfo - - return fp.data - - -# -------------------------------------------------------------------- -# Registry - -Image.register_open(PngImageFile.format, PngImageFile, _accept) -Image.register_save(PngImageFile.format, _save) -Image.register_save_all(PngImageFile.format, _save_all) - -Image.register_extensions(PngImageFile.format, [".png", ".apng"]) - -Image.register_mime(PngImageFile.format, "image/png") diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/utils/_git_credential.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/utils/_git_credential.py deleted file mode 100644 index fc287b2a77236df4024b53bccc2559a99a79b8f7..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/huggingface_hub/utils/_git_credential.py +++ /dev/null @@ -1,96 +0,0 @@ -# coding=utf-8 -# Copyright 2022-present, the HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""Contains utilities to manage Git credentials.""" -import subprocess -from typing import List, Optional - -from ..constants import ENDPOINT -from ._subprocess import run_interactive_subprocess, run_subprocess - - -def list_credential_helpers(folder: Optional[str] = None) -> List[str]: - """Return the list of git credential helpers configured. - - See https://git-scm.com/docs/gitcredentials. - - Credentials are saved in all configured helpers (store, cache, macOS keychain,...). - Calls "`git credential approve`" internally. See https://git-scm.com/docs/git-credential. - - Args: - folder (`str`, *optional*): - The folder in which to check the configured helpers. - """ - try: - output = run_subprocess("git config --list", folder=folder).stdout - # NOTE: If user has set an helper for a custom URL, it will not we caught here. - # Example: `credential.https://huggingface.co.helper=store` - # See: https://github.com/huggingface/huggingface_hub/pull/1138#discussion_r1013324508 - return sorted( # Sort for nice printing - { # Might have some duplicates - line.split("=")[-1].split()[0] for line in output.split("\n") if "credential.helper" in line - } - ) - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - -def set_git_credential(token: str, username: str = "hf_user", folder: Optional[str] = None) -> None: - """Save a username/token pair in git credential for HF Hub registry. - - Credentials are saved in all configured helpers (store, cache, macOS keychain,...). - Calls "`git credential approve`" internally. See https://git-scm.com/docs/git-credential. - - Args: - username (`str`, defaults to `"hf_user"`): - A git username. Defaults to `"hf_user"`, the default user used in the Hub. - token (`str`, defaults to `"hf_user"`): - A git password. In practice, the User Access Token for the Hub. - See https://huggingface.co/settings/tokens. - folder (`str`, *optional*): - The folder in which to check the configured helpers. - """ - with run_interactive_subprocess("git credential approve", folder=folder) as ( - stdin, - _, - ): - stdin.write(f"url={ENDPOINT}\nusername={username.lower()}\npassword={token}\n\n") - stdin.flush() - - -def unset_git_credential(username: str = "hf_user", folder: Optional[str] = None) -> None: - """Erase credentials from git credential for HF Hub registry. - - Credentials are erased from the configured helpers (store, cache, macOS - keychain,...), if any. If `username` is not provided, any credential configured for - HF Hub endpoint is erased. - Calls "`git credential erase`" internally. See https://git-scm.com/docs/git-credential. - - Args: - username (`str`, defaults to `"hf_user"`): - A git username. Defaults to `"hf_user"`, the default user used in the Hub. - folder (`str`, *optional*): - The folder in which to check the configured helpers. 
- """ - with run_interactive_subprocess("git credential reject", folder=folder) as ( - stdin, - _, - ): - standard_input = f"url={ENDPOINT}\n" - if username is not None: - standard_input += f"username={username.lower()}\n" - standard_input += "\n" - - stdin.write(standard_input) - stdin.flush() diff --git a/spaces/DaleChen/AutoGPT/autogpt/commands/google_search.py b/spaces/DaleChen/AutoGPT/autogpt/commands/google_search.py deleted file mode 100644 index 7d38ce7568d2de207d521b077cfebd72527c9795..0000000000000000000000000000000000000000 --- a/spaces/DaleChen/AutoGPT/autogpt/commands/google_search.py +++ /dev/null @@ -1,87 +0,0 @@ -"""Google search command for Autogpt.""" -from __future__ import annotations - -import json - -from duckduckgo_search import ddg - -from autogpt.config import Config - -CFG = Config() - - -def google_search(query: str, num_results: int = 8) -> str: - """Return the results of a Google search - - Args: - query (str): The search query. - num_results (int): The number of results to return. - - Returns: - str: The results of the search. - """ - search_results = [] - if not query: - return json.dumps(search_results) - - results = ddg(query, max_results=num_results) - if not results: - return json.dumps(search_results) - - for j in results: - search_results.append(j) - - return json.dumps(search_results, ensure_ascii=False, indent=4) - - -def google_official_search(query: str, num_results: int = 8) -> str | list[str]: - """Return the results of a Google search using the official Google API - - Args: - query (str): The search query. - num_results (int): The number of results to return. - - Returns: - str: The results of the search. - """ - - from googleapiclient.discovery import build - from googleapiclient.errors import HttpError - - try: - # Get the Google API key and Custom Search Engine ID from the config file - api_key = CFG.google_api_key - custom_search_engine_id = CFG.custom_search_engine_id - - # Initialize the Custom Search API service - service = build("customsearch", "v1", developerKey=api_key) - - # Send the search query and retrieve the results - result = ( - service.cse() - .list(q=query, cx=custom_search_engine_id, num=num_results) - .execute() - ) - - # Extract the search result items from the response - search_results = result.get("items", []) - - # Create a list of only the URLs from the search results - search_results_links = [item["link"] for item in search_results] - - except HttpError as e: - # Handle errors in the API call - error_details = json.loads(e.content.decode()) - - # Check if the error is related to an invalid or missing API key - if error_details.get("error", {}).get( - "code" - ) == 403 and "invalid API key" in error_details.get("error", {}).get( - "message", "" - ): - return "Error: The provided Google API key is invalid or missing." 
- else: - return f"Error: {e}" - - # Return the list of search result URLs - return search_results_links diff --git a/spaces/Dao3/DaJuZi_OrangeCatTheGreat/README.md b/spaces/Dao3/DaJuZi_OrangeCatTheGreat/README.md deleted file mode 100644 index 76f630609c5e543685118504e585baa150f49ea2..0000000000000000000000000000000000000000 --- a/spaces/Dao3/DaJuZi_OrangeCatTheGreat/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: DaJuZi_OrangeCatTheGreat -emoji: 🐨 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false -license: cc-by-4.0 -duplicated_from: benjamin2023/hackathon_chatbot_openai_api ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/DarwinAnim8or/Pythia-Greentext-Playground/app.py b/spaces/DarwinAnim8or/Pythia-Greentext-Playground/app.py deleted file mode 100644 index a5d8089ea7696dd98a7e0785281da23698067431..0000000000000000000000000000000000000000 --- a/spaces/DarwinAnim8or/Pythia-Greentext-Playground/app.py +++ /dev/null @@ -1,61 +0,0 @@ -import gradio as gr -from transformers import AutoModelForCausalLM, AutoTokenizer -from optimum.intel import OVModelForCausalLM - - -model_name = "DarwinAnim8or/Pythia-Greentext-1.4b" -tokenizer = AutoTokenizer.from_pretrained(model_name) -model = OVModelForCausalLM.from_pretrained(model_name, export=True) - -def generate(text, length=100, penalty=3, temperature=0.8, topk=40): - input_text = "Write a greentext from 4chan.org. The story should be like a bullet-point list using > as the start of each line. Most greentexts are humorous or absurd in nature. Most greentexts have a twist near the end.\n" - - if not text.startswith(">"): - input_text += ">" + text + "\n>" - else: - input_text += text + "\n>" - - input_ids = tokenizer.encode(input_text, return_tensors="pt") - input_ids = input_ids[:, :-1] # remove the last token, which is ">" - - length = length + input_ids.size(1) # adjust total length - - output = model.generate( - input_ids, - max_length=length, - temperature=temperature, - top_k=topk, - do_sample=True, - pad_token_id=tokenizer.eos_token_id, - no_repeat_ngram_size=penalty, - early_stopping=True, - ) - - generated_text = tokenizer.decode(output[:, input_ids.size(1):][0], skip_special_tokens=True) - return generated_text - -examples = [ - ["be me"], - ["be going to heaven"], - #["be going to work"], - #["be baking a pie"], - #["come home after another tiring day"], - ["be a plague doctor"] -] - -demo = gr.Interface( - fn=generate, - inputs=[ - gr.inputs.Textbox(lines=5, label="Input Text"), - gr.inputs.Slider(5, 200, label='Length', default=100, step=5), - gr.inputs.Slider(1, 10, label='no repeat ngram size', default=2, step=1), - gr.inputs.Slider(0.0, 1.0, label='Temperature - control randomness', default=0.2, step=0.1), - gr.inputs.Slider(10, 100, label="top_k", default=40, step=10) - ], - outputs=gr.outputs.Textbox(label="Generated Text"), - examples=examples, - title="Pythia-Greentext Playground", - description="Using the 1.4b size model. You may need to run it a few times in order to get something good!" 
-) - -demo.launch() \ No newline at end of file diff --git a/spaces/Datasculptor/DescriptionGPT/detic/data/datasets/lvis_v1.py b/spaces/Datasculptor/DescriptionGPT/detic/data/datasets/lvis_v1.py deleted file mode 100644 index 4b9b279f17663def1c4913321efbb7490d591e90..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/DescriptionGPT/detic/data/datasets/lvis_v1.py +++ /dev/null @@ -1,155 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import logging -import os - -from fvcore.common.timer import Timer -from detectron2.structures import BoxMode -from fvcore.common.file_io import PathManager -from detectron2.data import DatasetCatalog, MetadataCatalog -from detectron2.data.datasets.lvis import get_lvis_instances_meta - -logger = logging.getLogger(__name__) - -__all__ = ["custom_load_lvis_json", "custom_register_lvis_instances"] - - -def custom_register_lvis_instances(name, metadata, json_file, image_root): - """ - """ - DatasetCatalog.register(name, lambda: custom_load_lvis_json( - json_file, image_root, name)) - MetadataCatalog.get(name).set( - json_file=json_file, image_root=image_root, - evaluator_type="lvis", **metadata - ) - - -def custom_load_lvis_json(json_file, image_root, dataset_name=None): - ''' - Modifications: - use `file_name` - convert neg_category_ids - add pos_category_ids - ''' - from lvis import LVIS - - json_file = PathManager.get_local_path(json_file) - - timer = Timer() - lvis_api = LVIS(json_file) - if timer.seconds() > 1: - logger.info("Loading {} takes {:.2f} seconds.".format( - json_file, timer.seconds())) - - catid2contid = {x['id']: i for i, x in enumerate( - sorted(lvis_api.dataset['categories'], key=lambda x: x['id']))} - if len(lvis_api.dataset['categories']) == 1203: - for x in lvis_api.dataset['categories']: - assert catid2contid[x['id']] == x['id'] - 1 - img_ids = sorted(lvis_api.imgs.keys()) - imgs = lvis_api.load_imgs(img_ids) - anns = [lvis_api.img_ann_map[img_id] for img_id in img_ids] - - ann_ids = [ann["id"] for anns_per_image in anns for ann in anns_per_image] - assert len(set(ann_ids)) == len(ann_ids), \ - "Annotation ids in '{}' are not unique".format(json_file) - - imgs_anns = list(zip(imgs, anns)) - logger.info("Loaded {} images in the LVIS v1 format from {}".format( - len(imgs_anns), json_file)) - - dataset_dicts = [] - - for (img_dict, anno_dict_list) in imgs_anns: - record = {} - if "file_name" in img_dict: - file_name = img_dict["file_name"] - if img_dict["file_name"].startswith("COCO"): - file_name = file_name[-16:] - record["file_name"] = os.path.join(image_root, file_name) - elif 'coco_url' in img_dict: - # e.g., http://images.cocodataset.org/train2017/000000391895.jpg - file_name = img_dict["coco_url"][30:] - record["file_name"] = os.path.join(image_root, file_name) - elif 'tar_index' in img_dict: - record['tar_index'] = img_dict['tar_index'] - - record["height"] = img_dict["height"] - record["width"] = img_dict["width"] - record["not_exhaustive_category_ids"] = img_dict.get( - "not_exhaustive_category_ids", []) - record["neg_category_ids"] = img_dict.get("neg_category_ids", []) - # NOTE: modified by Xingyi: convert to 0-based - record["neg_category_ids"] = [ - catid2contid[x] for x in record["neg_category_ids"]] - if 'pos_category_ids' in img_dict: - record['pos_category_ids'] = [ - catid2contid[x] for x in img_dict.get("pos_category_ids", [])] - if 'captions' in img_dict: - record['captions'] = img_dict['captions'] - if 'caption_features' in img_dict: - record['caption_features'] = img_dict['caption_features'] - 
image_id = record["image_id"] = img_dict["id"] - - objs = [] - for anno in anno_dict_list: - assert anno["image_id"] == image_id - if anno.get('iscrowd', 0) > 0: - continue - obj = {"bbox": anno["bbox"], "bbox_mode": BoxMode.XYWH_ABS} - obj["category_id"] = catid2contid[anno['category_id']] - if 'segmentation' in anno: - segm = anno["segmentation"] - valid_segm = [poly for poly in segm \ - if len(poly) % 2 == 0 and len(poly) >= 6] - # assert len(segm) == len( - # valid_segm - # ), "Annotation contains an invalid polygon with < 3 points" - if not len(segm) == len(valid_segm): - print('Annotation contains an invalid polygon with < 3 points') - assert len(segm) > 0 - obj["segmentation"] = segm - objs.append(obj) - record["annotations"] = objs - dataset_dicts.append(record) - - return dataset_dicts - -_CUSTOM_SPLITS_LVIS = { - "lvis_v1_train+coco": ("coco/", "lvis/lvis_v1_train+coco_mask.json"), - "lvis_v1_train_norare": ("coco/", "lvis/lvis_v1_train_norare.json"), -} - - -for key, (image_root, json_file) in _CUSTOM_SPLITS_LVIS.items(): - custom_register_lvis_instances( - key, - get_lvis_instances_meta(key), - os.path.join("datasets", json_file) if "://" not in json_file else json_file, - os.path.join("datasets", image_root), - ) - - -def get_lvis_22k_meta(): - from .lvis_22k_categories import CATEGORIES - cat_ids = [k["id"] for k in CATEGORIES] - assert min(cat_ids) == 1 and max(cat_ids) == len( - cat_ids - ), "Category ids are not in [1, #categories], as expected" - # Ensure that the category list is sorted by id - lvis_categories = sorted(CATEGORIES, key=lambda x: x["id"]) - thing_classes = [k["name"] for k in lvis_categories] - meta = {"thing_classes": thing_classes} - return meta - -_CUSTOM_SPLITS_LVIS_22K = { - "lvis_v1_train_22k": ("coco/", "lvis/lvis_v1_train_lvis-22k.json"), -} - -for key, (image_root, json_file) in _CUSTOM_SPLITS_LVIS_22K.items(): - custom_register_lvis_instances( - key, - get_lvis_22k_meta(), - os.path.join("datasets", json_file) if "://" not in json_file else json_file, - os.path.join("datasets", image_root), - ) \ No newline at end of file diff --git a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/stylegan_tf/run_metrics.py b/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/stylegan_tf/run_metrics.py deleted file mode 100644 index 5d1597bbd4e16a2535309ea74c3559cae2a5fa53..0000000000000000000000000000000000000000 --- a/spaces/Dinoking/Guccio-AI-Designer/models/stylegan/stylegan_tf/run_metrics.py +++ /dev/null @@ -1,105 +0,0 @@ -# Copyright (c) 2019, NVIDIA CORPORATION. All rights reserved. -# -# This work is licensed under the Creative Commons Attribution-NonCommercial -# 4.0 International License. To view a copy of this license, visit -# http://creativecommons.org/licenses/by-nc/4.0/ or send a letter to -# Creative Commons, PO Box 1866, Mountain View, CA 94042, USA. - -"""Main entry point for training StyleGAN and ProGAN networks.""" - -import dnnlib -from dnnlib import EasyDict -import dnnlib.tflib as tflib - -import config -from metrics import metric_base -from training import misc - -#---------------------------------------------------------------------------- - -def run_pickle(submit_config, metric_args, network_pkl, dataset_args, mirror_augment): - ctx = dnnlib.RunContext(submit_config) - tflib.init_tf() - print('Evaluating %s metric on network_pkl "%s"...' 
% (metric_args.name, network_pkl)) - metric = dnnlib.util.call_func_by_name(**metric_args) - print() - metric.run(network_pkl, dataset_args=dataset_args, mirror_augment=mirror_augment, num_gpus=submit_config.num_gpus) - print() - ctx.close() - -#---------------------------------------------------------------------------- - -def run_snapshot(submit_config, metric_args, run_id, snapshot): - ctx = dnnlib.RunContext(submit_config) - tflib.init_tf() - print('Evaluating %s metric on run_id %s, snapshot %s...' % (metric_args.name, run_id, snapshot)) - run_dir = misc.locate_run_dir(run_id) - network_pkl = misc.locate_network_pkl(run_dir, snapshot) - metric = dnnlib.util.call_func_by_name(**metric_args) - print() - metric.run(network_pkl, run_dir=run_dir, num_gpus=submit_config.num_gpus) - print() - ctx.close() - -#---------------------------------------------------------------------------- - -def run_all_snapshots(submit_config, metric_args, run_id): - ctx = dnnlib.RunContext(submit_config) - tflib.init_tf() - print('Evaluating %s metric on all snapshots of run_id %s...' % (metric_args.name, run_id)) - run_dir = misc.locate_run_dir(run_id) - network_pkls = misc.list_network_pkls(run_dir) - metric = dnnlib.util.call_func_by_name(**metric_args) - print() - for idx, network_pkl in enumerate(network_pkls): - ctx.update('', idx, len(network_pkls)) - metric.run(network_pkl, run_dir=run_dir, num_gpus=submit_config.num_gpus) - print() - ctx.close() - -#---------------------------------------------------------------------------- - -def main(): - submit_config = dnnlib.SubmitConfig() - - # Which metrics to evaluate? - metrics = [] - metrics += [metric_base.fid50k] - #metrics += [metric_base.ppl_zfull] - #metrics += [metric_base.ppl_wfull] - #metrics += [metric_base.ppl_zend] - #metrics += [metric_base.ppl_wend] - #metrics += [metric_base.ls] - #metrics += [metric_base.dummy] - - # Which networks to evaluate them on? - tasks = [] - tasks += [EasyDict(run_func_name='run_metrics.run_pickle', network_pkl='https://drive.google.com/uc?id=1MEGjdvVpUsu1jB4zrXZN7Y4kBBOzizDQ', dataset_args=EasyDict(tfrecord_dir='ffhq', shuffle_mb=0), mirror_augment=True)] # karras2019stylegan-ffhq-1024x1024.pkl - #tasks += [EasyDict(run_func_name='run_metrics.run_snapshot', run_id=100, snapshot=25000)] - #tasks += [EasyDict(run_func_name='run_metrics.run_all_snapshots', run_id=100)] - - # How many GPUs to use? - submit_config.num_gpus = 1 - #submit_config.num_gpus = 2 - #submit_config.num_gpus = 4 - #submit_config.num_gpus = 8 - - # Execute. 
- submit_config.run_dir_root = dnnlib.submission.submit.get_template_from_path(config.result_dir) - submit_config.run_dir_ignore += config.run_dir_ignore - for task in tasks: - for metric in metrics: - submit_config.run_desc = '%s-%s' % (task.run_func_name, metric.name) - if task.run_func_name.endswith('run_snapshot'): - submit_config.run_desc += '-%s-%s' % (task.run_id, task.snapshot) - if task.run_func_name.endswith('run_all_snapshots'): - submit_config.run_desc += '-%s' % task.run_id - submit_config.run_desc += '-%dgpu' % submit_config.num_gpus - dnnlib.submit_run(submit_config, metric_args=metric, **task) - -#---------------------------------------------------------------------------- - -if __name__ == "__main__": - main() - -#---------------------------------------------------------------------------- diff --git a/spaces/Djacon/emotion_detection/static/text_summarizer.html b/spaces/Djacon/emotion_detection/static/text_summarizer.html deleted file mode 100644 index 9828722fba804a45491881766c2cb605b3112a97..0000000000000000000000000000000000000000 --- a/spaces/Djacon/emotion_detection/static/text_summarizer.html +++ /dev/null @@ -1,309 +0,0 @@ - - - - - - - - Text2Feature | Summarizer - - - - - - - - - - -
-
- - - -
-
-
-
- - -
-
- - - - - - - - - -
- - -
-
- - -
-
- -
-
-
-

Text Summarizer

- - -
- -
-
-
    -
  • - -
  • -
  • - -
  • -
- -
-
- - -
- -
- - -
- - - - - -
- - - -
-
- - -
-
-
-
-
-
- - - - - - \ No newline at end of file diff --git a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/fma.py b/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/fma.py deleted file mode 100644 index a934ea1137d2ade6caefcbdb0476fca40fed8f0c..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/stylegan_human/torch_utils/ops/fma.py +++ /dev/null @@ -1,66 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Fused multiply-add, with slightly faster gradients than `torch.addcmul()`.""" - -import torch - -# ---------------------------------------------------------------------------- - - -def fma(a, b, c): # => a * b + c - return _FusedMultiplyAdd.apply(a, b, c) - -# ---------------------------------------------------------------------------- - - -class _FusedMultiplyAdd(torch.autograd.Function): # a * b + c - @staticmethod - def forward(ctx, a, b, c): # pylint: disable=arguments-differ - out = torch.addcmul(c, a, b) - ctx.save_for_backward(a, b) - ctx.c_shape = c.shape - return out - - @staticmethod - def backward(ctx, dout): # pylint: disable=arguments-differ - a, b = ctx.saved_tensors - c_shape = ctx.c_shape - da = None - db = None - dc = None - - if ctx.needs_input_grad[0]: - da = _unbroadcast(dout * b, a.shape) - - if ctx.needs_input_grad[1]: - db = _unbroadcast(dout * a, b.shape) - - if ctx.needs_input_grad[2]: - dc = _unbroadcast(dout, c_shape) - - return da, db, dc - -# ---------------------------------------------------------------------------- - - -def _unbroadcast(x, shape): - extra_dims = x.ndim - len(shape) - assert extra_dims >= 0 - dim = [i for i in range(x.ndim) if x.shape[i] > 1 and ( - i < extra_dims or shape[i - extra_dims] == 1)] - if len(dim): - x = x.sum(dim=dim, keepdim=True) - if extra_dims: - x = x.reshape(-1, *x.shape[extra_dims+1:]) - assert x.shape == shape - return x - -# ---------------------------------------------------------------------------- diff --git a/spaces/ECCV2022/bytetrack/exps/example/mot/yolox_x_ch.py b/spaces/ECCV2022/bytetrack/exps/example/mot/yolox_x_ch.py deleted file mode 100644 index 0e4765ef92fdfe61c9a28c4a384f156302523e24..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/exps/example/mot/yolox_x_ch.py +++ /dev/null @@ -1,138 +0,0 @@ -# encoding: utf-8 -import os -import random -import torch -import torch.nn as nn -import torch.distributed as dist - -from yolox.exp import Exp as MyExp -from yolox.data import get_yolox_datadir - -class Exp(MyExp): - def __init__(self): - super(Exp, self).__init__() - self.num_classes = 1 - self.depth = 1.33 - self.width = 1.25 - self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0] - self.train_ann = "train.json" - self.val_ann = "val_half.json" - self.input_size = (800, 1440) - self.test_size = (800, 1440) - self.random_size = (18, 32) - self.max_epoch = 80 - self.print_interval = 20 - self.eval_interval = 5 - self.test_conf = 0.1 - self.nmsthre = 0.7 - self.no_aug_epochs = 10 - self.basic_lr_per_img = 0.001 / 64.0 - self.warmup_epochs = 1 - - def get_data_loader(self, 
batch_size, is_distributed, no_aug=False): - from yolox.data import ( - MOTDataset, - TrainTransform, - YoloBatchSampler, - DataLoader, - InfiniteSampler, - MosaicDetection, - ) - - dataset = MOTDataset( - data_dir=os.path.join(get_yolox_datadir(), "ch_all"), - json_file=self.train_ann, - name='', - img_size=self.input_size, - preproc=TrainTransform( - rgb_means=(0.485, 0.456, 0.406), - std=(0.229, 0.224, 0.225), - max_labels=500, - ), - ) - - dataset = MosaicDetection( - dataset, - mosaic=not no_aug, - img_size=self.input_size, - preproc=TrainTransform( - rgb_means=(0.485, 0.456, 0.406), - std=(0.229, 0.224, 0.225), - max_labels=1000, - ), - degrees=self.degrees, - translate=self.translate, - scale=self.scale, - shear=self.shear, - perspective=self.perspective, - enable_mixup=self.enable_mixup, - ) - - self.dataset = dataset - - if is_distributed: - batch_size = batch_size // dist.get_world_size() - - sampler = InfiniteSampler( - len(self.dataset), seed=self.seed if self.seed else 0 - ) - - batch_sampler = YoloBatchSampler( - sampler=sampler, - batch_size=batch_size, - drop_last=False, - input_dimension=self.input_size, - mosaic=not no_aug, - ) - - dataloader_kwargs = {"num_workers": self.data_num_workers, "pin_memory": True} - dataloader_kwargs["batch_sampler"] = batch_sampler - train_loader = DataLoader(self.dataset, **dataloader_kwargs) - - return train_loader - - def get_eval_loader(self, batch_size, is_distributed, testdev=False): - from yolox.data import MOTDataset, ValTransform - - valdataset = MOTDataset( - data_dir=os.path.join(get_yolox_datadir(), "mot"), - json_file=self.val_ann, - img_size=self.test_size, - name='train', - preproc=ValTransform( - rgb_means=(0.485, 0.456, 0.406), - std=(0.229, 0.224, 0.225), - ), - ) - - if is_distributed: - batch_size = batch_size // dist.get_world_size() - sampler = torch.utils.data.distributed.DistributedSampler( - valdataset, shuffle=False - ) - else: - sampler = torch.utils.data.SequentialSampler(valdataset) - - dataloader_kwargs = { - "num_workers": self.data_num_workers, - "pin_memory": True, - "sampler": sampler, - } - dataloader_kwargs["batch_size"] = batch_size - val_loader = torch.utils.data.DataLoader(valdataset, **dataloader_kwargs) - - return val_loader - - def get_evaluator(self, batch_size, is_distributed, testdev=False): - from yolox.evaluators import COCOEvaluator - - val_loader = self.get_eval_loader(batch_size, is_distributed, testdev=testdev) - evaluator = COCOEvaluator( - dataloader=val_loader, - img_size=self.test_size, - confthre=self.test_conf, - nmsthre=self.nmsthre, - num_classes=self.num_classes, - testdev=testdev, - ) - return evaluator diff --git a/spaces/ECCV2022/bytetrack/yolox/utils/logger.py b/spaces/ECCV2022/bytetrack/yolox/utils/logger.py deleted file mode 100644 index 4bd51d9ec6569c452b34c1cf60ff03044842c2ee..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/yolox/utils/logger.py +++ /dev/null @@ -1,96 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding:utf-8 -*- -# Copyright (c) 2014-2021 Megvii Inc. All rights reserved. - -from loguru import logger - -import inspect -import os -import sys - - -def get_caller_name(depth=0): - """ - Args: - depth (int): Depth of caller conext, use 0 for caller depth. Default value: 0. 
- - Returns: - str: module name of the caller - """ - # the following logic is a little bit faster than inspect.stack() logic - frame = inspect.currentframe().f_back - for _ in range(depth): - frame = frame.f_back - - return frame.f_globals["__name__"] - - -class StreamToLoguru: - """ - stream object that redirects writes to a logger instance. - """ - - def __init__(self, level="INFO", caller_names=("apex", "pycocotools")): - """ - Args: - level(str): log level string of loguru. Default value: "INFO". - caller_names(tuple): caller names of redirected module. - Default value: (apex, pycocotools). - """ - self.level = level - self.linebuf = "" - self.caller_names = caller_names - - def write(self, buf): - full_name = get_caller_name(depth=1) - module_name = full_name.rsplit(".", maxsplit=-1)[0] - if module_name in self.caller_names: - for line in buf.rstrip().splitlines(): - # use caller level log - logger.opt(depth=2).log(self.level, line.rstrip()) - else: - sys.__stdout__.write(buf) - - def flush(self): - pass - - -def redirect_sys_output(log_level="INFO"): - redirect_logger = StreamToLoguru(log_level) - sys.stderr = redirect_logger - sys.stdout = redirect_logger - - -def setup_logger(save_dir, distributed_rank=0, filename="log.txt", mode="a"): - """setup logger for training and testing. - Args: - save_dir(str): location to save log file - distributed_rank(int): device rank when multi-gpu environment - filename (string): log save name. - mode(str): log file write mode, `append` or `override`. default is `a`. - - Return: - logger instance. - """ - loguru_format = ( - "{time:YYYY-MM-DD HH:mm:ss} | " - "{level: <8} | " - "{name}:{line} - {message}" - ) - - logger.remove() - save_file = os.path.join(save_dir, filename) - if mode == "o" and os.path.exists(save_file): - os.remove(save_file) - # only keep logger in rank0 process - if distributed_rank == 0: - logger.add( - sys.stderr, - format=loguru_format, - level="INFO", - enqueue=True, - ) - logger.add(save_file) - - # redirect stdout/stderr to loguru - redirect_sys_output("INFO") diff --git a/spaces/FarziBuilder/Last/README.md b/spaces/FarziBuilder/Last/README.md deleted file mode 100644 index ae5d60daa9170df2095461229573a50c5c2e03e5..0000000000000000000000000000000000000000 --- a/spaces/FarziBuilder/Last/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Last -emoji: 📚 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.8.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/FridaZuley/RVC_HFKawaii/easy_infer.py b/spaces/FridaZuley/RVC_HFKawaii/easy_infer.py deleted file mode 100644 index 81a70d3648c38120f908cdaf2ea3bd15af9dec26..0000000000000000000000000000000000000000 --- a/spaces/FridaZuley/RVC_HFKawaii/easy_infer.py +++ /dev/null @@ -1,1383 +0,0 @@ -import subprocess -import os -import sys -import errno -import shutil -import yt_dlp -from mega import Mega -import datetime -import unicodedata -import torch -import glob -import gradio as gr -import gdown -import zipfile -import traceback -import json -import mdx -from mdx_processing_script import get_model_list,id_to_ptm,prepare_mdx,run_mdx -import requests -import wget -import ffmpeg -import hashlib -now_dir = os.getcwd() -sys.path.append(now_dir) -from unidecode import unidecode -import re -import time -from lib.infer_pack.models_onnx import SynthesizerTrnMsNSFsidM -from infer.modules.vc.pipeline import Pipeline -VC = Pipeline -from 
lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from MDXNet import MDXNetDereverb -from configs.config import Config -from infer_uvr5 import _audio_pre_, _audio_pre_new -from huggingface_hub import HfApi, list_models -from huggingface_hub import login -from i18n import I18nAuto -i18n = I18nAuto() -from bs4 import BeautifulSoup -from sklearn.cluster import MiniBatchKMeans -from dotenv import load_dotenv -load_dotenv() -config = Config() -tmp = os.path.join(now_dir, "TEMP") -shutil.rmtree(tmp, ignore_errors=True) -os.environ["TEMP"] = tmp -weight_root = os.getenv("weight_root") -weight_uvr5_root = os.getenv("weight_uvr5_root") -index_root = os.getenv("index_root") -audio_root = "audios" -names = [] -for name in os.listdir(weight_root): - if name.endswith(".pth"): - names.append(name) -index_paths = [] - -global indexes_list -indexes_list = [] - -audio_paths = [] -for root, dirs, files in os.walk(index_root, topdown=False): - for name in files: - if name.endswith(".index") and "trained" not in name: - index_paths.append("%s\\%s" % (root, name)) - -for root, dirs, files in os.walk(audio_root, topdown=False): - for name in files: - audio_paths.append("%s/%s" % (root, name)) - -uvr5_names = [] -for name in os.listdir(weight_uvr5_root): - if name.endswith(".pth") or "onnx" in name: - uvr5_names.append(name.replace(".pth", "")) - -def calculate_md5(file_path): - hash_md5 = hashlib.md5() - with open(file_path, "rb") as f: - for chunk in iter(lambda: f.read(4096), b""): - hash_md5.update(chunk) - return hash_md5.hexdigest() - -def format_title(title): - formatted_title = re.sub(r'[^\w\s-]', '', title) - formatted_title = formatted_title.replace(" ", "_") - return formatted_title - -def silentremove(filename): - try: - os.remove(filename) - except OSError as e: - if e.errno != errno.ENOENT: - raise -def get_md5(temp_folder): - for root, subfolders, files in os.walk(temp_folder): - for file in files: - if not file.startswith("G_") and not file.startswith("D_") and file.endswith(".pth") and not "_G_" in file and not "_D_" in file: - md5_hash = calculate_md5(os.path.join(root, file)) - return md5_hash - - return None - -def find_parent(search_dir, file_name): - for dirpath, dirnames, filenames in os.walk(search_dir): - if file_name in filenames: - return os.path.abspath(dirpath) - return None - -def find_folder_parent(search_dir, folder_name): - for dirpath, dirnames, filenames in os.walk(search_dir): - if folder_name in dirnames: - return os.path.abspath(dirpath) - return None - - - -def download_from_url(url): - parent_path = find_folder_parent(".", "pretrained_v2") - zips_path = os.path.join(parent_path, 'zips') - - if url != '': - print(i18n("Downloading the file: ") + f"{url}") - if "drive.google.com" in url: - if "file/d/" in url: - file_id = url.split("file/d/")[1].split("/")[0] - elif "id=" in url: - file_id = url.split("id=")[1].split("&")[0] - else: - return None - - if file_id: - os.chdir('./zips') - result = subprocess.run(["gdown", f"https://drive.google.com/uc?id={file_id}", "--fuzzy"], capture_output=True, text=True, encoding='utf-8') - if "Too many users have viewed or downloaded this file recently" in str(result.stderr): - return "too much use" - if "Cannot retrieve the public link of the file." 
in str(result.stderr): - return "private link" - print(result.stderr) - - elif "/blob/" in url: - os.chdir('./zips') - url = url.replace("blob", "resolve") - response = requests.get(url) - if response.status_code == 200: - file_name = url.split('/')[-1] - with open(os.path.join(zips_path, file_name), "wb") as newfile: - newfile.write(response.content) - else: - os.chdir(parent_path) - elif "mega.nz" in url: - if "#!" in url: - file_id = url.split("#!")[1].split("!")[0] - elif "file/" in url: - file_id = url.split("file/")[1].split("/")[0] - else: - return None - if file_id: - m = Mega() - m.download_url(url, zips_path) - elif "/tree/main" in url: - response = requests.get(url) - soup = BeautifulSoup(response.content, 'html.parser') - temp_url = '' - for link in soup.find_all('a', href=True): - if link['href'].endswith('.zip'): - temp_url = link['href'] - break - if temp_url: - url = temp_url - url = url.replace("blob", "resolve") - if "huggingface.co" not in url: - url = "https://huggingface.co" + url - - wget.download(url) - else: - print("No .zip file found on the page.") - elif "cdn.discordapp.com" in url: - file = requests.get(url) - if file.status_code == 200: - name = url.split('/') - with open(os.path.join(zips_path, name[len(name)-1]), "wb") as newfile: - newfile.write(file.content) - else: - return None - elif "pixeldrain.com" in url: - try: - file_id = url.split("pixeldrain.com/u/")[1] - os.chdir('./zips') - print(file_id) - response = requests.get(f"https://pixeldrain.com/api/file/{file_id}") - if response.status_code == 200: - file_name = response.headers.get("Content-Disposition").split('filename=')[-1].strip('";') - if not os.path.exists(zips_path): - os.makedirs(zips_path) - with open(os.path.join(zips_path, file_name), "wb") as newfile: - newfile.write(response.content) - os.chdir(parent_path) - return "downloaded" - else: - os.chdir(parent_path) - return None - except Exception as e: - print(e) - os.chdir(parent_path) - return None - else: - os.chdir('./zips') - wget.download(url) - - os.chdir(parent_path) - print(i18n("Full download")) - return "downloaded" - else: - return None - -class error_message(Exception): - def __init__(self, mensaje): - self.mensaje = mensaje - super().__init__(mensaje) - -def get_vc(sid, to_return_protect0, to_return_protect1): - global n_spk, tgt_sr, net_g, vc, cpt, version - if sid == "" or sid == []: - global hubert_model - if hubert_model is not None: - print("clean_empty_cache") - del net_g, n_spk, vc, hubert_model, tgt_sr - hubert_model = net_g = n_spk = vc = hubert_model = tgt_sr = None - if torch.cuda.is_available(): - torch.cuda.empty_cache() - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del net_g, cpt - if torch.cuda.is_available(): - torch.cuda.empty_cache() - cpt = None - return ( - {"visible": False, "__type__": "update"}, - {"visible": False, "__type__": "update"}, - {"visible": False, "__type__": "update"}, - ) - person = "%s/%s" % (weight_root, sid) - print("loading %s" % person) - cpt = torch.load(person, map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] - if_f0 = cpt.get("f0", 1) - 
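# Editor's annotation (illustrative only, not part of the deleted easy_infer.py): in this
# checkpoint format, cpt.get("f0", 1) == 1 means the model was trained with pitch (F0)
# conditioning, so the NSF synthesizer variants are instantiated and the two "protect"
# controls handled below are kept visible; a value of 0 selects the *_nono variants and
# hides them. cpt.get("version") distinguishes v1 (256-dim features) from v2 (768-dim
# features), while cpt["config"][-1] and cpt["config"][-3] carry the target sample rate
# and the speaker count read further down in get_vc.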
if if_f0 == 0: - to_return_protect0 = to_return_protect1 = { - "visible": False, - "value": 0.5, - "__type__": "update", - } - else: - to_return_protect0 = { - "visible": True, - "value": to_return_protect0, - "__type__": "update", - } - to_return_protect1 = { - "visible": True, - "value": to_return_protect1, - "__type__": "update", - } - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) - net_g.eval().to(config.device) - if config.is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, config) - n_spk = cpt["config"][-3] - return ( - {"visible": True, "maximum": n_spk, "__type__": "update"}, - to_return_protect0, - to_return_protect1, - ) - -def load_downloaded_model(url): - parent_path = find_folder_parent(".", "pretrained_v2") - try: - infos = [] - logs_folders = ['0_gt_wavs','1_16k_wavs','2a_f0','2b-f0nsf','3_feature256','3_feature768'] - zips_path = os.path.join(parent_path, 'zips') - unzips_path = os.path.join(parent_path, 'unzips') - weights_path = os.path.join(parent_path, 'weights') - logs_dir = "" - - if os.path.exists(zips_path): - shutil.rmtree(zips_path) - if os.path.exists(unzips_path): - shutil.rmtree(unzips_path) - - os.mkdir(zips_path) - os.mkdir(unzips_path) - - download_file = download_from_url(url) - if not download_file: - print(i18n("The file could not be downloaded.")) - infos.append(i18n("The file could not be downloaded.")) - yield "\n".join(infos) - elif download_file == "downloaded": - print(i18n("It has been downloaded successfully.")) - infos.append(i18n("It has been downloaded successfully.")) - yield "\n".join(infos) - elif download_file == "too much use": - raise Exception(i18n("Too many users have recently viewed or downloaded this file")) - elif download_file == "private link": - raise Exception(i18n("Cannot get file from this private link")) - - for filename in os.listdir(zips_path): - if filename.endswith(".zip"): - zipfile_path = os.path.join(zips_path,filename) - print(i18n("Proceeding with the extraction...")) - infos.append(i18n("Proceeding with the extraction...")) - shutil.unpack_archive(zipfile_path, unzips_path, 'zip') - model_name = os.path.basename(zipfile_path) - logs_dir = os.path.join(parent_path,'logs', os.path.normpath(str(model_name).replace(".zip",""))) - yield "\n".join(infos) - else: - print(i18n("Unzip error.")) - infos.append(i18n("Unzip error.")) - yield "\n".join(infos) - - index_file = False - model_file = False - D_file = False - G_file = False - - for path, subdirs, files in os.walk(unzips_path): - for item in files: - item_path = os.path.join(path, item) - if not 'G_' in item and not 'D_' in item and item.endswith('.pth'): - model_file = True - model_name = item.replace(".pth","") - logs_dir = os.path.join(parent_path,'logs', model_name) - if os.path.exists(logs_dir): - shutil.rmtree(logs_dir) - os.mkdir(logs_dir) - if not os.path.exists(weights_path): - os.mkdir(weights_path) - if os.path.exists(os.path.join(weights_path, item)): - os.remove(os.path.join(weights_path, item)) - if os.path.exists(item_path): - shutil.move(item_path, weights_path) - - if not model_file and not 
os.path.exists(logs_dir): - os.mkdir(logs_dir) - for path, subdirs, files in os.walk(unzips_path): - for item in files: - item_path = os.path.join(path, item) - if item.startswith('added_') and item.endswith('.index'): - index_file = True - if os.path.exists(item_path): - if os.path.exists(os.path.join(logs_dir, item)): - os.remove(os.path.join(logs_dir, item)) - shutil.move(item_path, logs_dir) - if item.startswith('total_fea.npy') or item.startswith('events.'): - if os.path.exists(item_path): - if os.path.exists(os.path.join(logs_dir, item)): - os.remove(os.path.join(logs_dir, item)) - shutil.move(item_path, logs_dir) - - - result = "" - if model_file: - if index_file: - print(i18n("The model works for inference, and has the .index file.")) - infos.append("\n" + i18n("The model works for inference, and has the .index file.")) - yield "\n".join(infos) - else: - print(i18n("The model works for inference, but it doesn't have the .index file.")) - infos.append("\n" + i18n("The model works for inference, but it doesn't have the .index file.")) - yield "\n".join(infos) - - if not index_file and not model_file: - print(i18n("No relevant file was found to upload.")) - infos.append(i18n("No relevant file was found to upload.")) - yield "\n".join(infos) - - if os.path.exists(zips_path): - shutil.rmtree(zips_path) - if os.path.exists(unzips_path): - shutil.rmtree(unzips_path) - os.chdir(parent_path) - return result - except Exception as e: - os.chdir(parent_path) - if "too much use" in str(e): - print(i18n("Too many users have recently viewed or downloaded this file")) - yield i18n("Too many users have recently viewed or downloaded this file") - elif "private link" in str(e): - print(i18n("Cannot get file from this private link")) - yield i18n("Cannot get file from this private link") - else: - print(e) - yield i18n("An error occurred downloading") - finally: - os.chdir(parent_path) - -def load_dowloaded_dataset(url): - parent_path = find_folder_parent(".", "pretrained_v2") - infos = [] - try: - zips_path = os.path.join(parent_path, 'zips') - unzips_path = os.path.join(parent_path, 'unzips') - datasets_path = os.path.join(parent_path, 'datasets') - audio_extenions =['wav', 'mp3', 'flac', 'ogg', 'opus', - 'm4a', 'mp4', 'aac', 'alac', 'wma', - 'aiff', 'webm', 'ac3'] - - if os.path.exists(zips_path): - shutil.rmtree(zips_path) - if os.path.exists(unzips_path): - shutil.rmtree(unzips_path) - - if not os.path.exists(datasets_path): - os.mkdir(datasets_path) - - os.mkdir(zips_path) - os.mkdir(unzips_path) - - download_file = download_from_url(url) - - if not download_file: - print(i18n("An error occurred downloading")) - infos.append(i18n("An error occurred downloading")) - yield "\n".join(infos) - raise Exception(i18n("An error occurred downloading")) - elif download_file == "downloaded": - print(i18n("It has been downloaded successfully.")) - infos.append(i18n("It has been downloaded successfully.")) - yield "\n".join(infos) - elif download_file == "too much use": - raise Exception(i18n("Too many users have recently viewed or downloaded this file")) - elif download_file == "private link": - raise Exception(i18n("Cannot get file from this private link")) - - zip_path = os.listdir(zips_path) - foldername = "" - for file in zip_path: - if file.endswith('.zip'): - file_path = os.path.join(zips_path, file) - print("....") - foldername = file.replace(".zip","").replace(" ","").replace("-","_") - dataset_path = os.path.join(datasets_path, foldername) - print(i18n("Proceeding with the extraction...")) - 
infos.append(i18n("Proceeding with the extraction...")) - yield "\n".join(infos) - shutil.unpack_archive(file_path, unzips_path, 'zip') - if os.path.exists(dataset_path): - shutil.rmtree(dataset_path) - - os.mkdir(dataset_path) - - for root, subfolders, songs in os.walk(unzips_path): - for song in songs: - song_path = os.path.join(root, song) - if song.endswith(tuple(audio_extenions)): - formatted_song_name = format_title(os.path.splitext(song)[0]) - extension = os.path.splitext(song)[1] - new_song_path = os.path.join(dataset_path, f"{formatted_song_name}{extension}") - shutil.move(song_path, new_song_path) - else: - print(i18n("Unzip error.")) - infos.append(i18n("Unzip error.")) - yield "\n".join(infos) - - - - if os.path.exists(zips_path): - shutil.rmtree(zips_path) - if os.path.exists(unzips_path): - shutil.rmtree(unzips_path) - - print(i18n("The Dataset has been loaded successfully.")) - infos.append(i18n("The Dataset has been loaded successfully.")) - yield "\n".join(infos) - except Exception as e: - os.chdir(parent_path) - if "too much use" in str(e): - print(i18n("Too many users have recently viewed or downloaded this file")) - yield i18n("Too many users have recently viewed or downloaded this file") - elif "private link" in str(e): - print(i18n("Cannot get file from this private link")) - yield i18n("Cannot get file from this private link") - else: - print(e) - yield i18n("An error occurred downloading") - finally: - os.chdir(parent_path) - -def save_model(modelname, save_action): - - parent_path = find_folder_parent(".", "pretrained_v2") - zips_path = os.path.join(parent_path, 'zips') - dst = os.path.join(zips_path,modelname) - logs_path = os.path.join(parent_path, 'logs', modelname) - weights_path = os.path.join(parent_path, 'weights', f"{modelname}.pth") - save_folder = parent_path - infos = [] - - try: - if not os.path.exists(logs_path): - raise Exception("No model found.") - - if not 'content' in parent_path: - save_folder = os.path.join(parent_path, 'RVC_Backup') - else: - save_folder = '/content/drive/MyDrive/RVC_Backup' - - infos.append(i18n("Save model")) - yield "\n".join(infos) - - if not os.path.exists(save_folder): - os.mkdir(save_folder) - if not os.path.exists(os.path.join(save_folder, 'ManualTrainingBackup')): - os.mkdir(os.path.join(save_folder, 'ManualTrainingBackup')) - if not os.path.exists(os.path.join(save_folder, 'Finished')): - os.mkdir(os.path.join(save_folder, 'Finished')) - - if os.path.exists(zips_path): - shutil.rmtree(zips_path) - - os.mkdir(zips_path) - added_file = glob.glob(os.path.join(logs_path, "added_*.index")) - d_file = glob.glob(os.path.join(logs_path, "D_*.pth")) - g_file = glob.glob(os.path.join(logs_path, "G_*.pth")) - - if save_action == i18n("Choose the method"): - raise Exception("No method choosen.") - - if save_action == i18n("Save all"): - print(i18n("Save all")) - save_folder = os.path.join(save_folder, 'ManualTrainingBackup') - shutil.copytree(logs_path, dst) - else: - if not os.path.exists(dst): - os.mkdir(dst) - - if save_action == i18n("Save D and G"): - print(i18n("Save D and G")) - save_folder = os.path.join(save_folder, 'ManualTrainingBackup') - if len(d_file) > 0: - shutil.copy(d_file[0], dst) - if len(g_file) > 0: - shutil.copy(g_file[0], dst) - - if len(added_file) > 0: - shutil.copy(added_file[0], dst) - else: - infos.append(i18n("Saved without index...")) - - if save_action == i18n("Save voice"): - print(i18n("Save voice")) - save_folder = os.path.join(save_folder, 'Finished') - if len(added_file) > 0: - 
shutil.copy(added_file[0], dst) - else: - infos.append(i18n("Saved without index...")) - - yield "\n".join(infos) - if not os.path.exists(weights_path): - infos.append(i18n("Saved without inference model...")) - else: - shutil.copy(weights_path, dst) - - yield "\n".join(infos) - infos.append("\n" + i18n("This may take a few minutes, please wait...")) - yield "\n".join(infos) - - shutil.make_archive(os.path.join(zips_path,f"{modelname}"), 'zip', zips_path) - shutil.move(os.path.join(zips_path,f"{modelname}.zip"), os.path.join(save_folder, f'{modelname}.zip')) - - shutil.rmtree(zips_path) - infos.append("\n" + i18n("Model saved successfully")) - yield "\n".join(infos) - - except Exception as e: - print(e) - if "No model found." in str(e): - infos.append(i18n("The model you want to save does not exist, be sure to enter the correct name.")) - else: - infos.append(i18n("An error occurred saving the model")) - - yield "\n".join(infos) - -def load_downloaded_backup(url): - parent_path = find_folder_parent(".", "pretrained_v2") - try: - infos = [] - logs_folders = ['0_gt_wavs','1_16k_wavs','2a_f0','2b-f0nsf','3_feature256','3_feature768'] - zips_path = os.path.join(parent_path, 'zips') - unzips_path = os.path.join(parent_path, 'unzips') - weights_path = os.path.join(parent_path, 'weights') - logs_dir = os.path.join(parent_path, 'logs') - - if os.path.exists(zips_path): - shutil.rmtree(zips_path) - if os.path.exists(unzips_path): - shutil.rmtree(unzips_path) - - os.mkdir(zips_path) - os.mkdir(unzips_path) - - download_file = download_from_url(url) - if not download_file: - print(i18n("The file could not be downloaded.")) - infos.append(i18n("The file could not be downloaded.")) - yield "\n".join(infos) - elif download_file == "downloaded": - print(i18n("It has been downloaded successfully.")) - infos.append(i18n("It has been downloaded successfully.")) - yield "\n".join(infos) - elif download_file == "too much use": - raise Exception(i18n("Too many users have recently viewed or downloaded this file")) - elif download_file == "private link": - raise Exception(i18n("Cannot get file from this private link")) - - for filename in os.listdir(zips_path): - if filename.endswith(".zip"): - zipfile_path = os.path.join(zips_path,filename) - zip_dir_name = os.path.splitext(filename)[0] - unzip_dir = unzips_path - print(i18n("Proceeding with the extraction...")) - infos.append(i18n("Proceeding with the extraction...")) - shutil.unpack_archive(zipfile_path, unzip_dir, 'zip') - - if os.path.exists(os.path.join(unzip_dir, zip_dir_name)): - shutil.move(os.path.join(unzip_dir, zip_dir_name), logs_dir) - else: - new_folder_path = os.path.join(logs_dir, zip_dir_name) - os.mkdir(new_folder_path) - for item_name in os.listdir(unzip_dir): - item_path = os.path.join(unzip_dir, item_name) - if os.path.isfile(item_path): - shutil.move(item_path, new_folder_path) - elif os.path.isdir(item_path): - shutil.move(item_path, new_folder_path) - - yield "\n".join(infos) - else: - print(i18n("Unzip error.")) - infos.append(i18n("Unzip error.")) - yield "\n".join(infos) - - result = "" - - for filename in os.listdir(unzips_path): - if filename.endswith(".zip"): - silentremove(filename) - - if os.path.exists(zips_path): - shutil.rmtree(zips_path) - if os.path.exists(os.path.join(parent_path, 'unzips')): - shutil.rmtree(os.path.join(parent_path, 'unzips')) - print(i18n("The Backup has been uploaded successfully.")) - infos.append("\n" + i18n("The Backup has been uploaded successfully.")) - yield "\n".join(infos) - os.chdir(parent_path) 
- return result - except Exception as e: - os.chdir(parent_path) - if "too much use" in str(e): - print(i18n("Too many users have recently viewed or downloaded this file")) - yield i18n("Too many users have recently viewed or downloaded this file") - elif "private link" in str(e): - print(i18n("Cannot get file from this private link")) - yield i18n("Cannot get file from this private link") - else: - print(e) - yield i18n("An error occurred downloading") - finally: - os.chdir(parent_path) - -def save_to_wav(record_button): - if record_button is None: - pass - else: - path_to_file=record_button - new_name = datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")+'.wav' - new_path='./audios/'+new_name - shutil.move(path_to_file,new_path) - return new_name - - -def change_choices2(): - audio_paths=[] - for filename in os.listdir("./audios"): - if filename.endswith(('wav', 'mp3', 'flac', 'ogg', 'opus', - 'm4a', 'mp4', 'aac', 'alac', 'wma', - 'aiff', 'webm', 'ac3')): - audio_paths.append(os.path.join('./audios',filename).replace('\\', '/')) - return {"choices": sorted(audio_paths), "__type__": "update"}, {"__type__": "update"} - - - - - -def uvr(input_url, output_path, model_name, inp_root, save_root_vocal, paths, save_root_ins, agg, format0, architecture): - carpeta_a_eliminar = "yt_downloads" - if os.path.exists(carpeta_a_eliminar) and os.path.isdir(carpeta_a_eliminar): - for archivo in os.listdir(carpeta_a_eliminar): - ruta_archivo = os.path.join(carpeta_a_eliminar, archivo) - if os.path.isfile(ruta_archivo): - os.remove(ruta_archivo) - elif os.path.isdir(ruta_archivo): - shutil.rmtree(ruta_archivo) - - - - ydl_opts = { - 'no-windows-filenames': True, - 'restrict-filenames': True, - 'extract_audio': True, - 'format': 'bestaudio', - 'quiet': True, - 'no-warnings': True, - } - - try: - print(i18n("Downloading audio from the video...")) - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - info_dict = ydl.extract_info(input_url, download=False) - formatted_title = format_title(info_dict.get('title', 'default_title')) - formatted_outtmpl = output_path + '/' + formatted_title + '.wav' - ydl_opts['outtmpl'] = formatted_outtmpl - ydl = yt_dlp.YoutubeDL(ydl_opts) - ydl.download([input_url]) - print(i18n("Audio downloaded!")) - except Exception as error: - print(i18n("An error occurred:"), error) - - actual_directory = os.path.dirname(__file__) - - vocal_directory = os.path.join(actual_directory, save_root_vocal) - instrumental_directory = os.path.join(actual_directory, save_root_ins) - - vocal_formatted = f"vocal_{formatted_title}.wav.reformatted.wav_10.wav" - instrumental_formatted = f"instrument_{formatted_title}.wav.reformatted.wav_10.wav" - - vocal_audio_path = os.path.join(vocal_directory, vocal_formatted) - instrumental_audio_path = os.path.join(instrumental_directory, instrumental_formatted) - - vocal_formatted_mdx = f"{formatted_title}_vocal_.wav" - instrumental_formatted_mdx = f"{formatted_title}_instrument_.wav" - - vocal_audio_path_mdx = os.path.join(vocal_directory, vocal_formatted_mdx) - instrumental_audio_path_mdx = os.path.join(instrumental_directory, instrumental_formatted_mdx) - - if architecture == "VR": - try: - print(i18n("Starting audio conversion... 
(This might take a moment)")) - inp_root, save_root_vocal, save_root_ins = [x.strip(" ").strip('"').strip("\n").strip('"').strip(" ") for x in [inp_root, save_root_vocal, save_root_ins]] - usable_files = [os.path.join(inp_root, file) - for file in os.listdir(inp_root) - if file.endswith(tuple(sup_audioext))] - - - pre_fun = MDXNetDereverb(15) if model_name == "onnx_dereverb_By_FoxJoy" else (_audio_pre_ if "DeEcho" not in model_name else _audio_pre_new)( - agg=int(agg), - model_path=os.path.join(weight_uvr5_root, model_name + ".pth"), - device=config.device, - is_half=config.is_half, - ) - - try: - if paths != None: - paths = [path.name for path in paths] - else: - paths = usable_files - - except: - traceback.print_exc() - paths = usable_files - print(paths) - for path in paths: - inp_path = os.path.join(inp_root, path) - need_reformat, done = 1, 0 - - try: - info = ffmpeg.probe(inp_path, cmd="ffprobe") - if info["streams"][0]["channels"] == 2 and info["streams"][0]["sample_rate"] == "44100": - need_reformat = 0 - pre_fun._path_audio_(inp_path, save_root_ins, save_root_vocal, format0) - done = 1 - except: - traceback.print_exc() - - if need_reformat: - tmp_path = f"{tmp}/{os.path.basename(inp_path)}.reformatted.wav" - os.system(f"ffmpeg -i {inp_path} -vn -acodec pcm_s16le -ac 2 -ar 44100 {tmp_path} -y") - inp_path = tmp_path - - try: - if not done: - pre_fun._path_audio_(inp_path, save_root_ins, save_root_vocal, format0) - print(f"{os.path.basename(inp_path)}->Success") - except: - print(f"{os.path.basename(inp_path)}->{traceback.format_exc()}") - except: - traceback.print_exc() - finally: - try: - if model_name == "onnx_dereverb_By_FoxJoy": - del pre_fun.pred.model - del pre_fun.pred.model_ - else: - del pre_fun.model - - del pre_fun - return i18n("Finished"), vocal_audio_path, instrumental_audio_path - except: traceback.print_exc() - - if torch.cuda.is_available(): torch.cuda.empty_cache() - - elif architecture == "MDX": - try: - print(i18n("Starting audio conversion... 
(This might take a moment)")) - inp_root, save_root_vocal, save_root_ins = [x.strip(" ").strip('"').strip("\n").strip('"').strip(" ") for x in [inp_root, save_root_vocal, save_root_ins]] - - usable_files = [os.path.join(inp_root, file) - for file in os.listdir(inp_root) - if file.endswith(tuple(sup_audioext))] - try: - if paths != None: - paths = [path.name for path in paths] - else: - paths = usable_files - - except: - traceback.print_exc() - paths = usable_files - print(paths) - invert=True - denoise=True - use_custom_parameter=True - dim_f=2048 - dim_t=256 - n_fft=7680 - use_custom_compensation=True - compensation=1.025 - suffix = "vocal_" #@param ["Vocals", "Drums", "Bass", "Other"]{allow-input: true} - suffix_invert = "instrument_" #@param ["Instrumental", "Drumless", "Bassless", "Instruments"]{allow-input: true} - print_settings = True # @param{type:"boolean"} - onnx = id_to_ptm(model_name) - compensation = compensation if use_custom_compensation or use_custom_parameter else None - mdx_model = prepare_mdx(onnx,use_custom_parameter, dim_f, dim_t, n_fft, compensation=compensation) - - - for path in paths: - #inp_path = os.path.join(inp_root, path) - suffix_naming = suffix if use_custom_parameter else None - diff_suffix_naming = suffix_invert if use_custom_parameter else None - run_mdx(onnx, mdx_model, path, format0, diff=invert,suffix=suffix_naming,diff_suffix=diff_suffix_naming,denoise=denoise) - - if print_settings: - print() - print('[MDX-Net_Colab settings used]') - print(f'Model used: {onnx}') - print(f'Model MD5: {mdx.MDX.get_hash(onnx)}') - print(f'Model parameters:') - print(f' -dim_f: {mdx_model.dim_f}') - print(f' -dim_t: {mdx_model.dim_t}') - print(f' -n_fft: {mdx_model.n_fft}') - print(f' -compensation: {mdx_model.compensation}') - print() - print('[Input file]') - print('filename(s): ') - for filename in paths: - print(f' -{filename}') - print(f"{os.path.basename(filename)}->Success") - except: - traceback.print_exc() - finally: - try: - del mdx_model - return i18n("Finished"), vocal_audio_path_mdx, instrumental_audio_path_mdx - except: traceback.print_exc() - - print("clean_empty_cache") - - if torch.cuda.is_available(): torch.cuda.empty_cache() -sup_audioext = {'wav', 'mp3', 'flac', 'ogg', 'opus', - 'm4a', 'mp4', 'aac', 'alac', 'wma', - 'aiff', 'webm', 'ac3'} - -def load_downloaded_audio(url): - parent_path = find_folder_parent(".", "pretrained_v2") - try: - infos = [] - audios_path = os.path.join(parent_path, 'audios') - zips_path = os.path.join(parent_path, 'zips') - - if not os.path.exists(audios_path): - os.mkdir(audios_path) - - download_file = download_from_url(url) - if not download_file: - print(i18n("The file could not be downloaded.")) - infos.append(i18n("The file could not be downloaded.")) - yield "\n".join(infos) - elif download_file == "downloaded": - print(i18n("It has been downloaded successfully.")) - infos.append(i18n("It has been downloaded successfully.")) - yield "\n".join(infos) - elif download_file == "too much use": - raise Exception(i18n("Too many users have recently viewed or downloaded this file")) - elif download_file == "private link": - raise Exception(i18n("Cannot get file from this private link")) - - for filename in os.listdir(zips_path): - item_path = os.path.join(zips_path, filename) - if item_path.split('.')[-1] in sup_audioext: - if os.path.exists(item_path): - shutil.move(item_path, audios_path) - - result = "" - print(i18n("Audio files have been moved to the 'audios' folder.")) - infos.append(i18n("Audio files have been moved to 
the 'audios' folder.")) - yield "\n".join(infos) - - os.chdir(parent_path) - return result - except Exception as e: - os.chdir(parent_path) - if "too much use" in str(e): - print(i18n("Too many users have recently viewed or downloaded this file")) - yield i18n("Too many users have recently viewed or downloaded this file") - elif "private link" in str(e): - print(i18n("Cannot get file from this private link")) - yield i18n("Cannot get file from this private link") - else: - print(e) - yield i18n("An error occurred downloading") - finally: - os.chdir(parent_path) - - -class error_message(Exception): - def __init__(self, mensaje): - self.mensaje = mensaje - super().__init__(mensaje) - -def get_vc(sid, to_return_protect0, to_return_protect1): - global n_spk, tgt_sr, net_g, vc, cpt, version - if sid == "" or sid == []: - global hubert_model - if hubert_model is not None: - print("clean_empty_cache") - del net_g, n_spk, vc, hubert_model, tgt_sr - hubert_model = net_g = n_spk = vc = hubert_model = tgt_sr = None - if torch.cuda.is_available(): - torch.cuda.empty_cache() - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del net_g, cpt - if torch.cuda.is_available(): - torch.cuda.empty_cache() - cpt = None - return ( - {"visible": False, "__type__": "update"}, - {"visible": False, "__type__": "update"}, - {"visible": False, "__type__": "update"}, - ) - person = "%s/%s" % (weight_root, sid) - print("loading %s" % person) - cpt = torch.load(person, map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] - if_f0 = cpt.get("f0", 1) - if if_f0 == 0: - to_return_protect0 = to_return_protect1 = { - "visible": False, - "value": 0.5, - "__type__": "update", - } - else: - to_return_protect0 = { - "visible": True, - "value": to_return_protect0, - "__type__": "update", - } - to_return_protect1 = { - "visible": True, - "value": to_return_protect1, - "__type__": "update", - } - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) - net_g.eval().to(config.device) - if config.is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, config) - n_spk = cpt["config"][-3] - return ( - {"visible": True, "maximum": n_spk, "__type__": "update"}, - to_return_protect0, - to_return_protect1, - ) - -def update_model_choices(select_value): - model_ids = get_model_list() - model_ids_list = list(model_ids) - if select_value == "VR": - return {"choices": uvr5_names, "__type__": "update"} - elif select_value == "MDX": - return {"choices": model_ids_list, "__type__": "update"} - -def download_model(): - gr.Markdown(value="# " + i18n("Download Model")) - gr.Markdown(value=i18n("It is used to download your inference models.")) - with gr.Row(): - 
model_url=gr.Textbox(label=i18n("Url:")) - with gr.Row(): - download_model_status_bar=gr.Textbox(label=i18n("Status:")) - with gr.Row(): - download_button=gr.Button(i18n("Download")) - download_button.click(fn=load_downloaded_model, inputs=[model_url], outputs=[download_model_status_bar]) - -def download_backup(): - gr.Markdown(value="# " + i18n("Download Backup")) - gr.Markdown(value=i18n("It is used to download your training backups.")) - with gr.Row(): - model_url=gr.Textbox(label=i18n("Url:")) - with gr.Row(): - download_model_status_bar=gr.Textbox(label=i18n("Status:")) - with gr.Row(): - download_button=gr.Button(i18n("Download")) - download_button.click(fn=load_downloaded_backup, inputs=[model_url], outputs=[download_model_status_bar]) - -def update_dataset_list(name): - new_datasets = [] - for foldername in os.listdir("./datasets"): - if "." not in foldername: - new_datasets.append(os.path.join(find_folder_parent(".","pretrained"),"datasets",foldername)) - return gr.Dropdown.update(choices=new_datasets) - -def download_dataset(trainset_dir4): - gr.Markdown(value="# " + i18n("Download Dataset")) - gr.Markdown(value=i18n("Download the dataset with the audios in a compatible format (.wav/.flac) to train your model.")) - with gr.Row(): - dataset_url=gr.Textbox(label=i18n("Url:")) - with gr.Row(): - load_dataset_status_bar=gr.Textbox(label=i18n("Status:")) - with gr.Row(): - load_dataset_button=gr.Button(i18n("Download")) - load_dataset_button.click(fn=load_dowloaded_dataset, inputs=[dataset_url], outputs=[load_dataset_status_bar]) - load_dataset_status_bar.change(update_dataset_list, dataset_url, trainset_dir4) - -def download_audio(): - gr.Markdown(value="# " + i18n("Download Audio")) - gr.Markdown(value=i18n("Download audios of any format for use in inference (recommended for mobile users).")) - with gr.Row(): - audio_url=gr.Textbox(label=i18n("Url:")) - with gr.Row(): - download_audio_status_bar=gr.Textbox(label=i18n("Status:")) - with gr.Row(): - download_button2=gr.Button(i18n("Download")) - download_button2.click(fn=load_downloaded_audio, inputs=[audio_url], outputs=[download_audio_status_bar]) - -def youtube_separator(): - gr.Markdown(value="# " + i18n("Separate YouTube tracks")) - gr.Markdown(value=i18n("Download audio from a YouTube video and automatically separate the vocal and instrumental tracks")) - with gr.Row(): - input_url = gr.inputs.Textbox(label=i18n("Enter the YouTube link:")) - output_path = gr.Textbox( - label=i18n("Enter the path of the audio folder to be processed (copy it from the address bar of the file manager):"), - value=os.path.abspath(os.getcwd()).replace('\\', '/') + "/yt_downloads", - visible=False, - ) - advanced_settings_checkbox = gr.Checkbox( - value=False, - label=i18n("Advanced Settings"), - interactive=True, - ) - with gr.Row(label = i18n("Advanced Settings"), visible=False, variant='compact') as advanced_settings: - with gr.Column(): - model_select = gr.Radio( - label=i18n("Model Architecture:"), - choices=["VR", "MDX"], - value="VR", - interactive=True, - ) - model_choose = gr.Dropdown(label=i18n("Model: (Be aware that in some models the named vocal will be the instrumental)"), - choices=uvr5_names, - value="HP5_only_main_vocal" - ) - with gr.Row(): - agg = gr.Slider( - minimum=0, - maximum=20, - step=1, - label=i18n("Vocal Extraction Aggressive"), - value=10, - interactive=True, - ) - with gr.Row(): - opt_vocal_root = gr.Textbox( - label=i18n("Specify the output folder for vocals:"), value="audios", - ) - opt_ins_root = gr.Textbox( - 
label=i18n("Specify the output folder for accompaniment:"), value="audio-others", - ) - dir_wav_input = gr.Textbox( - label=i18n("Enter the path of the audio folder to be processed:"), - value=((os.getcwd()).replace('\\', '/') + "/yt_downloads"), - visible=False, - ) - format0 = gr.Radio( - label=i18n("Export file format"), - choices=["wav", "flac", "mp3", "m4a"], - value="wav", - visible=False, - interactive=True, - ) - wav_inputs = gr.File( - file_count="multiple", label=i18n("You can also input audio files in batches. Choose one of the two options. Priority is given to reading from the folder."), - visible=False, - ) - model_select.change( - fn=update_model_choices, - inputs=model_select, - outputs=model_choose, - ) - with gr.Row(): - vc_output4 = gr.Textbox(label=i18n("Status:")) - vc_output5 = gr.Audio(label=i18n("Vocal"), type='filepath') - vc_output6 = gr.Audio(label=i18n("Instrumental"), type='filepath') - with gr.Row(): - but2 = gr.Button(i18n("Download and Separate")) - but2.click( - uvr, - [ - input_url, - output_path, - model_choose, - dir_wav_input, - opt_vocal_root, - wav_inputs, - opt_ins_root, - agg, - format0, - model_select - ], - [vc_output4, vc_output5, vc_output6], - ) - def toggle_advanced_settings(checkbox): - return {"visible": checkbox, "__type__": "update"} - - advanced_settings_checkbox.change( - fn=toggle_advanced_settings, - inputs=[advanced_settings_checkbox], - outputs=[advanced_settings] - ) - - -def get_bark_voice(): - mensaje = """ -v2/en_speaker_0 English Male -v2/en_speaker_1 English Male -v2/en_speaker_2 English Male -v2/en_speaker_3 English Male -v2/en_speaker_4 English Male -v2/en_speaker_5 English Male -v2/en_speaker_6 English Male -v2/en_speaker_7 English Male -v2/en_speaker_8 English Male -v2/en_speaker_9 English Female -v2/zh_speaker_0 Chinese (Simplified) Male -v2/zh_speaker_1 Chinese (Simplified) Male -v2/zh_speaker_2 Chinese (Simplified) Male -v2/zh_speaker_3 Chinese (Simplified) Male -v2/zh_speaker_4 Chinese (Simplified) Female -v2/zh_speaker_5 Chinese (Simplified) Male -v2/zh_speaker_6 Chinese (Simplified) Female -v2/zh_speaker_7 Chinese (Simplified) Female -v2/zh_speaker_8 Chinese (Simplified) Male -v2/zh_speaker_9 Chinese (Simplified) Female -v2/fr_speaker_0 French Male -v2/fr_speaker_1 French Female -v2/fr_speaker_2 French Female -v2/fr_speaker_3 French Male -v2/fr_speaker_4 French Male -v2/fr_speaker_5 French Female -v2/fr_speaker_6 French Male -v2/fr_speaker_7 French Male -v2/fr_speaker_8 French Male -v2/fr_speaker_9 French Male -v2/de_speaker_0 German Male -v2/de_speaker_1 German Male -v2/de_speaker_2 German Male -v2/de_speaker_3 German Female -v2/de_speaker_4 German Male -v2/de_speaker_5 German Male -v2/de_speaker_6 German Male -v2/de_speaker_7 German Male -v2/de_speaker_8 German Female -v2/de_speaker_9 German Male -v2/hi_speaker_0 Hindi Female -v2/hi_speaker_1 Hindi Female -v2/hi_speaker_2 Hindi Male -v2/hi_speaker_3 Hindi Female -v2/hi_speaker_4 Hindi Female -v2/hi_speaker_5 Hindi Male -v2/hi_speaker_6 Hindi Male -v2/hi_speaker_7 Hindi Male -v2/hi_speaker_8 Hindi Male -v2/hi_speaker_9 Hindi Female -v2/it_speaker_0 Italian Male -v2/it_speaker_1 Italian Male -v2/it_speaker_2 Italian Female -v2/it_speaker_3 Italian Male -v2/it_speaker_4 Italian Male -v2/it_speaker_5 Italian Male -v2/it_speaker_6 Italian Male -v2/it_speaker_7 Italian Female -v2/it_speaker_8 Italian Male -v2/it_speaker_9 Italian Female -v2/ja_speaker_0 Japanese Female -v2/ja_speaker_1 Japanese Female -v2/ja_speaker_2 Japanese Male -v2/ja_speaker_3 Japanese Female 
-v2/ja_speaker_4 Japanese Female -v2/ja_speaker_5 Japanese Female -v2/ja_speaker_6 Japanese Male -v2/ja_speaker_7 Japanese Female -v2/ja_speaker_8 Japanese Female -v2/ja_speaker_9 Japanese Female -v2/ko_speaker_0 Korean Female -v2/ko_speaker_1 Korean Male -v2/ko_speaker_2 Korean Male -v2/ko_speaker_3 Korean Male -v2/ko_speaker_4 Korean Male -v2/ko_speaker_5 Korean Male -v2/ko_speaker_6 Korean Male -v2/ko_speaker_7 Korean Male -v2/ko_speaker_8 Korean Male -v2/ko_speaker_9 Korean Male -v2/pl_speaker_0 Polish Male -v2/pl_speaker_1 Polish Male -v2/pl_speaker_2 Polish Male -v2/pl_speaker_3 Polish Male -v2/pl_speaker_4 Polish Female -v2/pl_speaker_5 Polish Male -v2/pl_speaker_6 Polish Female -v2/pl_speaker_7 Polish Male -v2/pl_speaker_8 Polish Male -v2/pl_speaker_9 Polish Female -v2/pt_speaker_0 Portuguese Male -v2/pt_speaker_1 Portuguese Male -v2/pt_speaker_2 Portuguese Male -v2/pt_speaker_3 Portuguese Male -v2/pt_speaker_4 Portuguese Male -v2/pt_speaker_5 Portuguese Male -v2/pt_speaker_6 Portuguese Male -v2/pt_speaker_7 Portuguese Male -v2/pt_speaker_8 Portuguese Male -v2/pt_speaker_9 Portuguese Male -v2/ru_speaker_0 Russian Male -v2/ru_speaker_1 Russian Male -v2/ru_speaker_2 Russian Male -v2/ru_speaker_3 Russian Male -v2/ru_speaker_4 Russian Male -v2/ru_speaker_5 Russian Female -v2/ru_speaker_6 Russian Female -v2/ru_speaker_7 Russian Male -v2/ru_speaker_8 Russian Male -v2/ru_speaker_9 Russian Female -v2/es_speaker_0 Spanish Male -v2/es_speaker_1 Spanish Male -v2/es_speaker_2 Spanish Male -v2/es_speaker_3 Spanish Male -v2/es_speaker_4 Spanish Male -v2/es_speaker_5 Spanish Male -v2/es_speaker_6 Spanish Male -v2/es_speaker_7 Spanish Male -v2/es_speaker_8 Spanish Female -v2/es_speaker_9 Spanish Female -v2/tr_speaker_0 Turkish Male -v2/tr_speaker_1 Turkish Male -v2/tr_speaker_2 Turkish Male -v2/tr_speaker_3 Turkish Male -v2/tr_speaker_4 Turkish Female -v2/tr_speaker_5 Turkish Female -v2/tr_speaker_6 Turkish Male -v2/tr_speaker_7 Turkish Male -v2/tr_speaker_8 Turkish Male -v2/tr_speaker_9 Turkish Male - """ -# Dividir el mensaje en líneas - lineas = mensaje.split("\n") - datos_deseados = [] - for linea in lineas: - partes = linea.split("\t") - if len(partes) == 3: - clave, _, genero = partes - datos_deseados.append(f"{clave}-{genero}") - - return datos_deseados - - -def get_edge_voice(): - completed_process = subprocess.run(['edge-tts',"-l"], capture_output=True, text=True) - lines = completed_process.stdout.strip().split("\n") - data = [] - current_entry = {} - for line in lines: - if line.startswith("Name: "): - if current_entry: - data.append(current_entry) - current_entry = {"Name": line.split(": ")[1]} - elif line.startswith("Gender: "): - current_entry["Gender"] = line.split(": ")[1] - if current_entry: - data.append(current_entry) - tts_voice = [] - for entry in data: - name = entry["Name"] - gender = entry["Gender"] - formatted_entry = f'{name}-{gender}' - tts_voice.append(formatted_entry) - return tts_voice - - -#print(set_tts_voice) diff --git a/spaces/FritsLyneborg/kunstnerfrits/Makefile b/spaces/FritsLyneborg/kunstnerfrits/Makefile deleted file mode 100644 index e418a64d4986ed7fc6401781b9b2743fcc7d85c6..0000000000000000000000000000000000000000 --- a/spaces/FritsLyneborg/kunstnerfrits/Makefile +++ /dev/null @@ -1,5 +0,0 @@ -.PHONY: style - -style: - black . - isort . 
\ No newline at end of file diff --git a/spaces/GIZ/embedding_visualisation/apps/sdg_pd.py b/spaces/GIZ/embedding_visualisation/apps/sdg_pd.py deleted file mode 100644 index 5f0affa96bdd6645a076b921ef723cb5395ba428..0000000000000000000000000000000000000000 --- a/spaces/GIZ/embedding_visualisation/apps/sdg_pd.py +++ /dev/null @@ -1,45 +0,0 @@ -import plotly.express as px -import streamlit as st -from sentence_transformers import SentenceTransformer -import umap.umap_ as umap -import pandas as pd -import os - -def app(): - st.title("SDG Embedding Visualisation") - with st.expander("ℹ️ - About this app", expanded=True): - - st.write( - """ - Information cartography - Get your word/phrase/sentence/paragraph embedded and visualized. - The (English) sentence-transformers model "all-MiniLM-L6-v2" maps sentences & paragraphs to a 384 dimensional dense vector space This is normally used for tasks like clustering or semantic search, but in this case, we use it to place your text to a 3D map. Before plotting, the dimension needs to be reduced to three so we can actually plot it, but preserve as much information as possible. For this, we use a technology called umap. - - On this page, you find thousands of text excerpts that were labelled by the community volunteers with respect to Sustainable Development Goals, a project by OSDG.ai, embedded as described. Ready to explore. - """) - - with st.spinner("👑 load data"): - df_osdg = pd.read_csv("sdg_umap.csv", sep = "|") - - #labels = [_lab_dict[lab] for lab in df_osdg['label'] ] - keys = list(df_osdg['keys']) - #docs = list(df_osdg['text']) - - agree = st.checkbox('add labels') - - if agree: - with st.spinner("👑 create visualisation"): - fig = px.scatter_3d( - df_osdg, x='coord_x', y='coord_y', z='coord_z', - color='labels', - opacity = .5, hover_data=[keys]) - fig.update_scenes(xaxis_visible=False, yaxis_visible=False,zaxis_visible=False ) - fig.update_traces(marker_size=4) - st.plotly_chart(fig) - else: - with st.spinner("👑 create visualisation"): - fig = px.scatter_3d( - df_osdg, x='coord_x', y='coord_y', z='coord_z', - opacity = .5, hover_data=[keys]) - fig.update_scenes(xaxis_visible=False, yaxis_visible=False,zaxis_visible=False ) - fig.update_traces(marker_size=4) - st.plotly_chart(fig) \ No newline at end of file diff --git a/spaces/GMFTBY/PandaGPT/model/ImageBind/models/multimodal_preprocessors.py b/spaces/GMFTBY/PandaGPT/model/ImageBind/models/multimodal_preprocessors.py deleted file mode 100644 index 44de961053601fd288c5c92c56b799d5762b8b4c..0000000000000000000000000000000000000000 --- a/spaces/GMFTBY/PandaGPT/model/ImageBind/models/multimodal_preprocessors.py +++ /dev/null @@ -1,687 +0,0 @@ -#!/usr/bin/env python3 -# Portions Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
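As a quick illustration of the pipeline that the deleted sdg_pd.py space above describes (sentence-transformers "all-MiniLM-L6-v2" embeddings reduced to three dimensions with UMAP and plotted as a Plotly 3D scatter), here is a minimal, self-contained sketch. It is not part of the diff; the example texts are hypothetical, and the coord_x/coord_y/coord_z column names simply mirror the columns that app reads from sdg_umap.csv.

from sentence_transformers import SentenceTransformer
import umap.umap_ as umap
import plotly.express as px
import pandas as pd

# Embed a few example texts (hypothetical inputs) with the model the app names.
texts = [
    "End poverty in all its forms everywhere",
    "Take urgent action to combat climate change",
    "Ensure inclusive and equitable quality education",
]
model = SentenceTransformer("all-MiniLM-L6-v2")      # 384-dimensional sentence embeddings
embeddings = model.encode(texts)                     # shape: (len(texts), 384)

# Reduce 384 dimensions to 3 so the points can be placed on a 3D map.
coords = umap.UMAP(n_components=3).fit_transform(embeddings)

df = pd.DataFrame(coords, columns=["coord_x", "coord_y", "coord_z"])
df["keys"] = texts
fig = px.scatter_3d(df, x="coord_x", y="coord_y", z="coord_z",
                    opacity=0.5, hover_data=["keys"])
fig.update_traces(marker_size=4)
fig.show()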
- -import gzip -import html -import io -import math -from functools import lru_cache -from typing import Callable, List, Optional - -import ftfy - -import numpy as np -import regex as re -import torch -import torch.nn as nn -from iopath.common.file_io import g_pathmgr -from timm.models.layers import trunc_normal_ - -from .helpers import cast_if_src_dtype, VerboseNNModule - - -def get_sinusoid_encoding_table(n_position, d_hid): - """Sinusoid position encoding table""" - - # TODO: make it with torch instead of numpy - def get_position_angle_vec(position): - return [ - position / np.power(10000, 2 * (hid_j // 2) / d_hid) - for hid_j in range(d_hid) - ] - - sinusoid_table = np.array( - [get_position_angle_vec(pos_i) for pos_i in range(n_position)] - ) - sinusoid_table[:, 0::2] = np.sin(sinusoid_table[:, 0::2]) # dim 2i - sinusoid_table[:, 1::2] = np.cos(sinusoid_table[:, 1::2]) # dim 2i+1 - - return torch.FloatTensor(sinusoid_table).unsqueeze(0) - - -def interpolate_pos_encoding_2d(target_spatial_size, pos_embed): - N = pos_embed.shape[1] - if N == target_spatial_size: - return pos_embed - dim = pos_embed.shape[-1] - # nn.functional.interpolate doesn't work with bfloat16 so we cast to float32 - pos_embed, updated = cast_if_src_dtype(pos_embed, torch.bfloat16, torch.float32) - pos_embed = nn.functional.interpolate( - pos_embed.reshape(1, int(math.sqrt(N)), int(math.sqrt(N)), dim).permute( - 0, 3, 1, 2 - ), - scale_factor=math.sqrt(target_spatial_size / N), - mode="bicubic", - ) - if updated: - pos_embed, _ = cast_if_src_dtype(pos_embed, torch.float32, torch.bfloat16) - pos_embed = pos_embed.permute(0, 2, 3, 1).view(1, -1, dim) - return pos_embed - - -def interpolate_pos_encoding( - npatch_per_img, - pos_embed, - patches_layout, - input_shape=None, - first_patch_idx=1, -): - assert first_patch_idx == 0 or first_patch_idx == 1, "there is 1 CLS token or none" - N = pos_embed.shape[1] - first_patch_idx # since it's 1 if cls_token exists - if npatch_per_img == N: - return pos_embed - - assert ( - patches_layout[-1] == patches_layout[-2] - ), "Interpolation of pos embed not supported for non-square layouts" - - class_emb = pos_embed[:, :first_patch_idx] - pos_embed = pos_embed[:, first_patch_idx:] - - if input_shape is None or patches_layout[0] == 1: - # simple 2D pos embedding, no temporal component - pos_embed = interpolate_pos_encoding_2d(npatch_per_img, pos_embed) - elif patches_layout[0] > 1: - # pos embed has a temporal component - assert len(input_shape) == 4, "temporal interpolation not supported" - # we only support 2D interpolation in this case - num_frames = patches_layout[0] - num_spatial_tokens = patches_layout[1] * patches_layout[2] - pos_embed = pos_embed.view(1, num_frames, num_spatial_tokens, -1) - # interpolate embedding for zeroth frame - pos_embed = interpolate_pos_encoding_2d( - npatch_per_img, pos_embed[0, 0, ...].unsqueeze(0) - ) - else: - raise ValueError("This type of interpolation isn't implemented") - - return torch.cat((class_emb, pos_embed), dim=1) - - -def _get_pos_embedding( - npatch_per_img, - pos_embed, - patches_layout, - input_shape, - first_patch_idx=1, -): - pos_embed = interpolate_pos_encoding( - npatch_per_img, - pos_embed, - patches_layout, - input_shape=input_shape, - first_patch_idx=first_patch_idx, - ) - return pos_embed - - -class PatchEmbedGeneric(nn.Module): - """ - PatchEmbed from Hydra - """ - - def __init__(self, proj_stem, norm_layer: Optional[nn.Module] = None): - super().__init__() - - if len(proj_stem) > 1: - self.proj = nn.Sequential(*proj_stem) - 
else: - # Special case to be able to load pre-trained models that were - # trained with a standard stem - self.proj = proj_stem[0] - self.norm_layer = norm_layer - - def get_patch_layout(self, img_size): - with torch.no_grad(): - dummy_img = torch.zeros( - [ - 1, - ] - + img_size - ) - dummy_out = self.proj(dummy_img) - embed_dim = dummy_out.shape[1] - patches_layout = tuple(dummy_out.shape[2:]) - num_patches = np.prod(patches_layout) - return patches_layout, num_patches, embed_dim - - def forward(self, x): - x = self.proj(x) - # B C (T) H W -> B (T)HW C - x = x.flatten(2).transpose(1, 2) - if self.norm_layer is not None: - x = self.norm_layer(x) - return x - - -class SpatioTemporalPosEmbeddingHelper(VerboseNNModule): - def __init__( - self, - patches_layout: List, - num_patches: int, - num_cls_tokens: int, - embed_dim: int, - learnable: bool, - ) -> None: - super().__init__() - self.num_cls_tokens = num_cls_tokens - self.patches_layout = patches_layout - self.num_patches = num_patches - self.num_tokens = num_cls_tokens + num_patches - self.learnable = learnable - if self.learnable: - self.pos_embed = nn.Parameter(torch.zeros(1, self.num_tokens, embed_dim)) - trunc_normal_(self.pos_embed, std=0.02) - else: - self.register_buffer( - "pos_embed", get_sinusoid_encoding_table(self.num_tokens, embed_dim) - ) - - def get_pos_embedding(self, vision_input, all_vision_tokens): - input_shape = vision_input.shape - pos_embed = _get_pos_embedding( - all_vision_tokens.size(1) - self.num_cls_tokens, - pos_embed=self.pos_embed, - patches_layout=self.patches_layout, - input_shape=input_shape, - first_patch_idx=self.num_cls_tokens, - ) - return pos_embed - - -class RGBDTPreprocessor(VerboseNNModule): - def __init__( - self, - rgbt_stem: PatchEmbedGeneric, - depth_stem: PatchEmbedGeneric, - img_size: List = (3, 224, 224), - num_cls_tokens: int = 1, - pos_embed_fn: Callable = None, - use_type_embed: bool = False, - init_param_style: str = "openclip", - ) -> None: - super().__init__() - stem = rgbt_stem if rgbt_stem is not None else depth_stem - ( - self.patches_layout, - self.num_patches, - self.embed_dim, - ) = stem.get_patch_layout(img_size) - self.rgbt_stem = rgbt_stem - self.depth_stem = depth_stem - self.use_pos_embed = pos_embed_fn is not None - self.use_type_embed = use_type_embed - self.num_cls_tokens = num_cls_tokens - - if self.use_pos_embed: - self.pos_embedding_helper = pos_embed_fn( - patches_layout=self.patches_layout, - num_cls_tokens=num_cls_tokens, - num_patches=self.num_patches, - embed_dim=self.embed_dim, - ) - if self.num_cls_tokens > 0: - self.cls_token = nn.Parameter( - torch.zeros(1, self.num_cls_tokens, self.embed_dim) - ) - if self.use_type_embed: - self.type_embed = nn.Parameter(torch.zeros(1, 1, self.embed_dim)) - - self.init_parameters(init_param_style) - - @torch.no_grad() - def init_parameters(self, init_param_style): - if init_param_style == "openclip": - # OpenCLIP style initialization - scale = self.embed_dim**-0.5 - if self.use_pos_embed: - nn.init.normal_(self.pos_embedding_helper.pos_embed) - self.pos_embedding_helper.pos_embed *= scale - - if self.num_cls_tokens > 0: - nn.init.normal_(self.cls_token) - self.cls_token *= scale - elif init_param_style == "vit": - self.cls_token.data.fill_(0) - else: - raise ValueError(f"Unknown init {init_param_style}") - - if self.use_type_embed: - nn.init.normal_(self.type_embed) - - def tokenize_input_and_cls_pos(self, input, stem, mask): - # tokens is of shape B x L x D - tokens = stem(input) - assert tokens.ndim == 3 - assert 
tokens.shape[2] == self.embed_dim - B = tokens.shape[0] - if self.num_cls_tokens > 0: - class_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole class_tokens impl from Phil Wang, thanks - tokens = torch.cat((class_tokens, tokens), dim=1) - if self.use_pos_embed: - pos_embed = self.pos_embedding_helper.get_pos_embedding(input, tokens) - tokens = tokens + pos_embed - if self.use_type_embed: - tokens = tokens + self.type_embed.expand(B, -1, -1) - return tokens - - def forward(self, vision=None, depth=None, patch_mask=None): - if patch_mask is not None: - raise NotImplementedError() - - if vision is not None: - vision_tokens = self.tokenize_input_and_cls_pos( - vision, self.rgbt_stem, patch_mask - ) - - if depth is not None: - depth_tokens = self.tokenize_input_and_cls_pos( - depth, self.depth_stem, patch_mask - ) - - # aggregate tokens - if vision is not None and depth is not None: - final_tokens = vision_tokens + depth_tokens - else: - final_tokens = vision_tokens if vision is not None else depth_tokens - return_dict = { - "trunk": { - "tokens": final_tokens, - }, - "head": {}, - } - return return_dict - - -class AudioPreprocessor(RGBDTPreprocessor): - def __init__(self, audio_stem: PatchEmbedGeneric, **kwargs) -> None: - super().__init__(rgbt_stem=audio_stem, depth_stem=None, **kwargs) - - def forward(self, audio=None): - return super().forward(vision=audio) - - -class ThermalPreprocessor(RGBDTPreprocessor): - def __init__(self, thermal_stem: PatchEmbedGeneric, **kwargs) -> None: - super().__init__(rgbt_stem=thermal_stem, depth_stem=None, **kwargs) - - def forward(self, thermal=None): - return super().forward(vision=thermal) - - -def build_causal_attention_mask(context_length): - # lazily create causal attention mask, with full attention between the vision tokens - # pytorch uses additive attention mask; fill with -inf - mask = torch.empty(context_length, context_length, requires_grad=False) - mask.fill_(float("-inf")) - mask.triu_(1) # zero out the lower diagonal - return mask - - -class TextPreprocessor(VerboseNNModule): - def __init__( - self, - vocab_size: int, - context_length: int, - embed_dim: int, - causal_masking: bool, - supply_seq_len_to_head: bool = True, - num_cls_tokens: int = 0, - init_param_style: str = "openclip", - ) -> None: - super().__init__() - self.vocab_size = vocab_size - self.context_length = context_length - self.token_embedding = nn.Embedding(vocab_size, embed_dim) - self.pos_embed = nn.Parameter( - torch.empty(1, self.context_length + num_cls_tokens, embed_dim) - ) - self.causal_masking = causal_masking - if self.causal_masking: - mask = build_causal_attention_mask(self.context_length) - # register the mask as a buffer so it can be moved to the right device - self.register_buffer("mask", mask) - - self.supply_seq_len_to_head = supply_seq_len_to_head - self.num_cls_tokens = num_cls_tokens - self.embed_dim = embed_dim - if num_cls_tokens > 0: - assert self.causal_masking is False, "Masking + CLS token isn't implemented" - self.cls_token = nn.Parameter( - torch.zeros(1, self.num_cls_tokens, embed_dim) - ) - - self.init_parameters(init_param_style) - - @torch.no_grad() - def init_parameters(self, init_param_style="openclip"): - # OpenCLIP style initialization - nn.init.normal_(self.token_embedding.weight, std=0.02) - nn.init.normal_(self.pos_embed, std=0.01) - - if init_param_style == "openclip": - # OpenCLIP style initialization - scale = self.embed_dim**-0.5 - if self.num_cls_tokens > 0: - nn.init.normal_(self.cls_token) - self.cls_token *= scale - elif 
init_param_style == "vit": - self.cls_token.data.fill_(0) - else: - raise ValueError(f"Unknown init {init_param_style}") - - def forward(self, text): - # text tokens are of shape B x L x D - text_tokens = self.token_embedding(text) - # concat CLS tokens if any - if self.num_cls_tokens > 0: - B = text_tokens.shape[0] - class_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole class_tokens impl from Phil Wang, thanks - text_tokens = torch.cat((class_tokens, text_tokens), dim=1) - text_tokens = text_tokens + self.pos_embed - return_dict = { - "trunk": { - "tokens": text_tokens, - }, - "head": {}, - } - # Compute sequence length after adding CLS tokens - if self.supply_seq_len_to_head: - text_lengths = text.argmax(dim=-1) - return_dict["head"] = { - "seq_len": text_lengths, - } - if self.causal_masking: - return_dict["trunk"].update({"attn_mask": self.mask}) - return return_dict - - -class Im2Video(nn.Module): - """Convert an image into a trivial video.""" - - def __init__(self, time_dim=2): - super().__init__() - self.time_dim = time_dim - - def forward(self, x): - if x.ndim == 4: - # B, C, H, W -> B, C, T, H, W - return x.unsqueeze(self.time_dim) - elif x.ndim == 5: - return x - else: - raise ValueError(f"Dimension incorrect {x.shape}") - - -class PadIm2Video(Im2Video): - def __init__(self, ntimes, pad_type, time_dim=2): - super().__init__(time_dim=time_dim) - assert ntimes > 0 - assert pad_type in ["zero", "repeat"] - self.ntimes = ntimes - self.pad_type = pad_type - - def forward(self, x): - x = super().forward(x) - if x.shape[self.time_dim] == 1: - if self.pad_type == "repeat": - new_shape = [1] * len(x.shape) - new_shape[self.time_dim] = self.ntimes - x = x.repeat(new_shape) - elif self.pad_type == "zero": - padarg = [0, 0] * len(x.shape) - padarg[2 * self.time_dim + 1] = self.ntimes - x.shape[self.time_dim] - x = nn.functional.pad(x, padarg) - return x - - -# Modified from github.com/openai/CLIP -@lru_cache() -def bytes_to_unicode(): - """ - Returns list of utf-8 byte and a corresponding list of unicode strings. - The reversible bpe codes work on unicode strings. - This means you need a large # of unicode characters in your vocab if you want to avoid UNKs. - When you're at something like a 10B token dataset you end up needing around 5K for decent coverage. - This is a signficant percentage of your normal, say, 32K bpe vocab. - To avoid that, we want lookup tables between utf-8 bytes and unicode strings. - And avoids mapping to whitespace/control characters the bpe code barfs on. - """ - bs = ( - list(range(ord("!"), ord("~") + 1)) - + list(range(ord("¡"), ord("¬") + 1)) - + list(range(ord("®"), ord("ÿ") + 1)) - ) - cs = bs[:] - n = 0 - for b in range(2**8): - if b not in bs: - bs.append(b) - cs.append(2**8 + n) - n += 1 - cs = [chr(n) for n in cs] - return dict(zip(bs, cs)) - - -def get_pairs(word): - """Return set of symbol pairs in a word. - Word is represented as tuple of symbols (symbols being variable-length strings). 
- """ - pairs = set() - prev_char = word[0] - for char in word[1:]: - pairs.add((prev_char, char)) - prev_char = char - return pairs - - -def basic_clean(text): - text = ftfy.fix_text(text) - text = html.unescape(html.unescape(text)) - return text.strip() - - -def whitespace_clean(text): - text = re.sub(r"\s+", " ", text) - text = text.strip() - return text - - -class SimpleTokenizer(object): - def __init__(self, bpe_path: str, context_length=77): - self.byte_encoder = bytes_to_unicode() - self.byte_decoder = {v: k for k, v in self.byte_encoder.items()} - - with g_pathmgr.open(bpe_path, "rb") as fh: - bpe_bytes = io.BytesIO(fh.read()) - merges = gzip.open(bpe_bytes).read().decode("utf-8").split("\n") - merges = merges[1 : 49152 - 256 - 2 + 1] - merges = [tuple(merge.split()) for merge in merges] - vocab = list(bytes_to_unicode().values()) - vocab = vocab + [v + "" for v in vocab] - for merge in merges: - vocab.append("".join(merge)) - vocab.extend(["<|startoftext|>", "<|endoftext|>"]) - self.encoder = dict(zip(vocab, range(len(vocab)))) - self.decoder = {v: k for k, v in self.encoder.items()} - self.bpe_ranks = dict(zip(merges, range(len(merges)))) - self.cache = { - "<|startoftext|>": "<|startoftext|>", - "<|endoftext|>": "<|endoftext|>", - } - self.pat = re.compile( - r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+""", - re.IGNORECASE, - ) - self.context_length = context_length - - def bpe(self, token): - if token in self.cache: - return self.cache[token] - word = tuple(token[:-1]) + (token[-1] + "",) - pairs = get_pairs(word) - - if not pairs: - return token + "" - - while True: - bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf"))) - if bigram not in self.bpe_ranks: - break - first, second = bigram - new_word = [] - i = 0 - while i < len(word): - try: - j = word.index(first, i) - new_word.extend(word[i:j]) - i = j - except: - new_word.extend(word[i:]) - break - - if word[i] == first and i < len(word) - 1 and word[i + 1] == second: - new_word.append(first + second) - i += 2 - else: - new_word.append(word[i]) - i += 1 - new_word = tuple(new_word) - word = new_word - if len(word) == 1: - break - else: - pairs = get_pairs(word) - word = " ".join(word) - self.cache[token] = word - return word - - def encode(self, text): - bpe_tokens = [] - text = whitespace_clean(basic_clean(text)).lower() - for token in re.findall(self.pat, text): - token = "".join(self.byte_encoder[b] for b in token.encode("utf-8")) - bpe_tokens.extend( - self.encoder[bpe_token] for bpe_token in self.bpe(token).split(" ") - ) - return bpe_tokens - - def decode(self, tokens): - text = "".join([self.decoder[token] for token in tokens]) - text = ( - bytearray([self.byte_decoder[c] for c in text]) - .decode("utf-8", errors="replace") - .replace("", " ") - ) - return text - - def __call__(self, texts, context_length=None): - if not context_length: - context_length = self.context_length - - if isinstance(texts, str): - texts = [texts] - - sot_token = self.encoder["<|startoftext|>"] - eot_token = self.encoder["<|endoftext|>"] - all_tokens = [[sot_token] + self.encode(text) + [eot_token] for text in texts] - result = torch.zeros(len(all_tokens), context_length, dtype=torch.long) - - for i, tokens in enumerate(all_tokens): - tokens = tokens[:context_length] - result[i, : len(tokens)] = torch.tensor(tokens) - - if len(result) == 1: - return result[0] - return result - - -class IMUPreprocessor(VerboseNNModule): - def __init__( - self, - kernel_size: int, - 
imu_stem: PatchEmbedGeneric, - embed_dim: int, - img_size: List = (6, 2000), - num_cls_tokens: int = 1, - pos_embed_fn: Callable = None, - init_param_style: str = "openclip", - ) -> None: - super().__init__() - stem = imu_stem - self.imu_stem = imu_stem - self.embed_dim = embed_dim - self.use_pos_embed = pos_embed_fn is not None - self.num_cls_tokens = num_cls_tokens - self.kernel_size = kernel_size - self.pos_embed = nn.Parameter( - torch.empty(1, (img_size[1] // kernel_size) + num_cls_tokens, embed_dim) - ) - - if self.num_cls_tokens > 0: - self.cls_token = nn.Parameter( - torch.zeros(1, self.num_cls_tokens, self.embed_dim) - ) - - self.init_parameters(init_param_style) - - @torch.no_grad() - def init_parameters(self, init_param_style): - nn.init.normal_(self.pos_embed, std=0.01) - - if init_param_style == "openclip": - # OpenCLIP style initialization - scale = self.embed_dim**-0.5 - - if self.num_cls_tokens > 0: - nn.init.normal_(self.cls_token) - self.cls_token *= scale - elif init_param_style == "vit": - self.cls_token.data.fill_(0) - else: - raise ValueError(f"Unknown init {init_param_style}") - - def tokenize_input_and_cls_pos(self, input, stem): - # tokens is of shape B x L x D - tokens = stem.norm_layer(stem.proj(input)) - assert tokens.ndim == 3 - assert tokens.shape[2] == self.embed_dim - B = tokens.shape[0] - if self.num_cls_tokens > 0: - class_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole class_tokens impl from Phil Wang, thanks - tokens = torch.cat((class_tokens, tokens), dim=1) - if self.use_pos_embed: - tokens = tokens + self.pos_embed - return tokens - - def forward(self, imu): - # Patchify - imu = imu.unfold( - -1, - self.kernel_size, - self.kernel_size, - ).permute(0, 2, 1, 3) - imu = imu.reshape(imu.size(0), imu.size(1), -1) - - imu_tokens = self.tokenize_input_and_cls_pos( - imu, - self.imu_stem, - ) - - return_dict = { - "trunk": { - "tokens": imu_tokens, - }, - "head": {}, - } - return return_dict diff --git a/spaces/GT4SD/paccmann_gp/model_cards/description.md b/spaces/GT4SD/paccmann_gp/model_cards/description.md deleted file mode 100644 index b1e73da3c077cc3eadd3782250812fc05f81cd8c..0000000000000000000000000000000000000000 --- a/spaces/GT4SD/paccmann_gp/model_cards/description.md +++ /dev/null @@ -1,6 +0,0 @@ -logo - -[PaccMannGP](https://github.com/PaccMann/paccmann_gp) is a language-based Variational Autoencoder that is coupled with a GaussianProcess for controlled sampling. For details of the methodology, please see [Born et al., (2022), *Journal of Chemical Information & Modeling*](https://pubs.acs.org/doi/10.1021/acs.jcim.1c00889). - -For **examples** and **documentation** of the model parameters, please see below. -Moreover, we provide a **model card** ([Mitchell et al. (2019)](https://dl.acm.org/doi/abs/10.1145/3287560.3287596?casa_token=XD4eHiE2cRUAAAAA:NL11gMa1hGPOUKTAbtXnbVQBDBbjxwcjGECF_i-WC_3g1aBgU1Hbz_f2b4kI_m1in-w__1ztGeHnwHs)) at the bottom of this page. 
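To make "a language-based VAE coupled with a Gaussian Process for controlled sampling" concrete, here is a minimal illustrative sketch of GP-guided sampling in a latent space. It is not the paccmann_gp API: `encode`, `decode`, and `score` are hypothetical placeholders for the VAE encoder/decoder and a property predictor, and the single-objective upper-confidence-bound loop is an assumption made for brevity.

```python
# Illustrative sketch only (not the paccmann_gp API): a GP surrogate over VAE
# latent codes steers which points get decoded next ("controlled sampling").
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern


def controlled_sample(encode, decode, score, seeds, rounds=10, n_cand=256):
    """GP-guided search over a VAE latent space (hypothetical helpers)."""
    # Encode seed molecules into latent codes and score their decodings.
    Z = np.stack([encode(s) for s in seeds])      # shape (N, latent_dim)
    y = np.array([score(decode(z)) for z in Z])   # property to maximize

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(rounds):
        gp.fit(Z, y)
        # Rank random latent candidates with an upper-confidence-bound rule,
        # steering sampling toward regions the GP predicts to score well.
        cand = np.random.randn(n_cand, Z.shape[1])
        mu, sigma = gp.predict(cand, return_std=True)
        z_next = cand[np.argmax(mu + sigma)]
        Z = np.vstack([Z, z_next])
        y = np.append(y, score(decode(z_next)))
    return decode(Z[np.argmax(y)])
```

For the actual procedure and the objectives used, refer to the linked paper and repository.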
diff --git a/spaces/GaenKoki/voicevox/voicevox_engine/metas/Metas.py b/spaces/GaenKoki/voicevox/voicevox_engine/metas/Metas.py deleted file mode 100644 index 58c42f06765c3554a138471d83fc90800e6a8540..0000000000000000000000000000000000000000 --- a/spaces/GaenKoki/voicevox/voicevox_engine/metas/Metas.py +++ /dev/null @@ -1,83 +0,0 @@ -from enum import Enum -from typing import List, Optional - -from pydantic import BaseModel, Field - - -class SpeakerStyle(BaseModel): - """ - スピーカーのスタイル情報 - """ - - name: str = Field(title="スタイル名") - id: int = Field(title="スタイルID") - - -class SpeakerSupportPermittedSynthesisMorphing(str, Enum): - ALL = "ALL" # 全て許可 - SELF_ONLY = "SELF_ONLY" # 同じ話者内でのみ許可 - NOTHING = "NOTHING" # 全て禁止 - - @classmethod - def _missing_(cls, value: object) -> "SpeakerSupportPermittedSynthesisMorphing": - return SpeakerSupportPermittedSynthesisMorphing.ALL - - -class SpeakerSupportedFeatures(BaseModel): - """ - 話者の対応機能の情報 - """ - - permitted_synthesis_morphing: SpeakerSupportPermittedSynthesisMorphing = Field( - title="モーフィング機能への対応", default=SpeakerSupportPermittedSynthesisMorphing(None) - ) - - -class CoreSpeaker(BaseModel): - """ - コアに含まれるスピーカー情報 - """ - - name: str = Field(title="名前") - speaker_uuid: str = Field(title="スピーカーのUUID") - styles: List[SpeakerStyle] = Field(title="スピーカースタイルの一覧") - version: str = Field("スピーカーのバージョン") - - -class EngineSpeaker(BaseModel): - """ - エンジンに含まれるスピーカー情報 - """ - - supported_features: SpeakerSupportedFeatures = Field( - title="スピーカーの対応機能", default_factory=SpeakerSupportedFeatures - ) - - -class Speaker(CoreSpeaker, EngineSpeaker): - """ - スピーカー情報 - """ - - pass - - -class StyleInfo(BaseModel): - """ - スタイルの追加情報 - """ - - id: int = Field(title="スタイルID") - icon: str = Field(title="当該スタイルのアイコンをbase64エンコードしたもの") - portrait: Optional[str] = Field(title="当該スタイルのportrait.pngをbase64エンコードしたもの") - voice_samples: List[str] = Field(title="voice_sampleのwavファイルをbase64エンコードしたもの") - - -class SpeakerInfo(BaseModel): - """ - 話者の追加情報 - """ - - policy: str = Field(title="policy.md") - portrait: str = Field(title="portrait.pngをbase64エンコードしたもの") - style_infos: List[StyleInfo] = Field(title="スタイルの追加情報") diff --git a/spaces/Gen-Sim/Gen-Sim/scripts/traintest_scripts/train_test_multi_task_finetune_goal.sh b/spaces/Gen-Sim/Gen-Sim/scripts/traintest_scripts/train_test_multi_task_finetune_goal.sh deleted file mode 100644 index 82c556954a88623308d6c27f9e1cd3acce4dfe6f..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/scripts/traintest_scripts/train_test_multi_task_finetune_goal.sh +++ /dev/null @@ -1,85 +0,0 @@ -#!/bin/bash - -DATA_DIR=$1 -TRAINTASK=${2-'[rainbow-stack,bowl-ball-placement]'} -TESTTASK=${3-'[rainbow-stack,bowl-ball-placement]'} -TASKNAME=${4-'mix-two'} -STEPS=${5-'10000'} -DISP=False - -echo "Training multi-task dataset... 
Folder: $DATA_DIR Task $TRAINTASK" - -# You can parallelize these depending on how much resources you have - -############################# -## Language-Conditioned Tasks -# [align-rope,assembling-kits-seq-seen-colors,assembling-kits-seq-unseen-colors,packing-shapes,stack-block-pyramid-seq-unseen-colors, -# separating-piles-seen-colors,separating-piles-unseen-colors,towers-of-hanoi-seq-seen-colors,towers-of-hanoi-seq-unseen-colors] - -# example: sh scripts/traintest_scripts/train_test_multi_task_indistribution.sh data "[align-rope,sweeping-piles,align-box-corner,block-insertion,manipulating-rope,place-red-in-green]" 6taskindomain -# sh scripts/traintest_scripts/train_test_multi_task_goal.sh data "[align-rope,sweeping-piles,align-box-corner,block-insertion,manipulating-rope,place-red-in-green]" "[towers-of-hanoi]" 6taskgen -# sh scripts/traintest_scripts/train_test_multi_task_goal.sh data "[align-rope,sweeping-piles,align-box-corner]" "[towers-of-hanoi]" 3taskgen -# sh scripts/traintest_scripts/train_test_multi_task_goal.sh data "[align-rope]" "[towers-of-hanoi]" 1taskgen -# sh scripts/traintest_scripts/train_test_multi_task_goal.sh data "[align-rope,sweeping-piles,align-box-corner,block-insertion,manipulating-rope,place-red-in-green]" "[towers-of-hanoi]" 10taskgen - -trap "kill 0" SIGINT - -python cliport/train.py train.task=$TRAINTASK \ - train.agent=cliport \ - train.model_task=$TASKNAME \ - train.attn_stream_fusion_type=add \ - train.trans_stream_fusion_type=conv \ - train.lang_fusion_type=mult \ - train.n_demos=200 \ - train.n_steps=$STEPS \ - dataset.cache=True \ - train.exp_folder=exps/exp-$TASKNAME \ - dataset.type=multi \ - train.load_from_last_ckpt=False - - -# finetuning. todo: check if model loading is done properly. -# check if smaller lr is necessary. -python cliport/train.py train.task=$TESTTASK \ - train.agent=cliport \ - train.model_task=$TASKNAME \ - train.attn_stream_fusion_type=add \ - train.trans_stream_fusion_type=conv \ - train.lang_fusion_type=mult \ - train.n_demos=10 \ - train.lr=1e-5 \ - dataset.cache=True \ - train.exp_folder=exps/exp-$TASKNAME \ - dataset.type=multi - - - -# Convert Python list to Bash array -bash_array=$(python3 -c "import sys; print(' '.join((sys.argv[1])[1:-1].split(',')))" "$TESTTASK") - -# Convert the space-separated string to a bash array -echo "Testing multi-task dataset... Folder: $DATA_DIR Task $TESTTASK" - - -for task in $bash_array - do - echo "Testing $task" - # TEST - bash scripts/generate_gpt_datasets.sh data $task - - python cliport/eval.py model_task=$TASKNAME \ - eval_task=$task \ - agent=cliport \ - mode=test \ - n_demos=100 \ - train_demos=200 \ - checkpoint_type=test_best \ - type=single \ - exp_folder=exps/exp-$TASKNAME \ - update_results=True & - done -wait - -python notebooks/print_results.py -r=exps/exp-$TASKNAME - -echo "Finished Training." \ No newline at end of file diff --git a/spaces/Gradio-Blocks/EDSR/README.md b/spaces/Gradio-Blocks/EDSR/README.md deleted file mode 100644 index 227547321a05da19cc51856f78ffea6e11bc7413..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/EDSR/README.md +++ /dev/null @@ -1,29 +0,0 @@ ---- -title: EDSR Keras -emoji: 🚀 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.18.0 -python_version: 3.10.9 -app_file: app.py -pinned: false -license: mit ---- - -This space is the demo for the EDSR (Enhanced Deep Residual Networks for Single Image Super-Resolution) model. This model surpassed the performace of the current available SOTA models. 
- -Paper Link - https://arxiv.org/pdf/1707.02921 - -Keras Example link - https://keras.io/examples/vision/edsr/ - - -TODO: - -Hack to make this work for any image size. Currently the model takes input of image size 150 x 150. -We pad the input image with transparant pixels so that it is a square image, which is a multiple of 150 x 150 -Then we chop the image into multiple 150 x 150 sub images -Upscale it and stich it together. - -The output image might look a bit off, because each sub-image dosent have data about other sub-images. -This approach assumes that the subimage has enough data about its surroundings diff --git a/spaces/Gradio-Blocks/magnificento/README.md b/spaces/Gradio-Blocks/magnificento/README.md deleted file mode 100644 index 8fcd510a2a9ee716473c964d125543af041ae193..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/magnificento/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Magnificento -emoji: 🗣 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.0.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/ssd/ssd512_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/ssd/ssd512_coco.py deleted file mode 100644 index 44d2920f4289c351c27e0d70dc03de0deb064a54..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/ssd/ssd512_coco.py +++ /dev/null @@ -1,71 +0,0 @@ -_base_ = 'ssd300_coco.py' -input_size = 512 -model = dict( - backbone=dict(input_size=input_size), - bbox_head=dict( - in_channels=(512, 1024, 512, 256, 256, 256, 256), - anchor_generator=dict( - type='SSDAnchorGenerator', - scale_major=False, - input_size=input_size, - basesize_ratio_range=(0.1, 0.9), - strides=[8, 16, 32, 64, 128, 256, 512], - ratios=[[2], [2, 3], [2, 3], [2, 3], [2, 3], [2], [2]]))) -# dataset settings -dataset_type = 'CocoDataset' -data_root = 'data/coco/' -img_norm_cfg = dict(mean=[123.675, 116.28, 103.53], std=[1, 1, 1], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile', to_float32=True), - dict(type='LoadAnnotations', with_bbox=True), - dict( - type='PhotoMetricDistortion', - brightness_delta=32, - contrast_range=(0.5, 1.5), - saturation_range=(0.5, 1.5), - hue_delta=18), - dict( - type='Expand', - mean=img_norm_cfg['mean'], - to_rgb=img_norm_cfg['to_rgb'], - ratio_range=(1, 4)), - dict( - type='MinIoURandomCrop', - min_ious=(0.1, 0.3, 0.5, 0.7, 0.9), - min_crop_size=0.3), - dict(type='Resize', img_scale=(512, 512), keep_ratio=False), - dict(type='Normalize', **img_norm_cfg), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(512, 512), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=False), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=8, - workers_per_gpu=3, - train=dict( - _delete_=True, - type='RepeatDataset', - times=5, - dataset=dict( - type=dataset_type, - ann_file=data_root + 'annotations/instances_train2017.json', - img_prefix=data_root + 'train2017/', - pipeline=train_pipeline)), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -# optimizer -optimizer = dict(type='SGD', lr=2e-3, 
momentum=0.9, weight_decay=5e-4) -optimizer_config = dict(_delete_=True) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/assigners/atss_assigner.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/assigners/atss_assigner.py deleted file mode 100644 index d4fe9d0e3c8704bd780d493eff20a5505dbe9580..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/core/bbox/assigners/atss_assigner.py +++ /dev/null @@ -1,178 +0,0 @@ -import torch - -from ..builder import BBOX_ASSIGNERS -from ..iou_calculators import build_iou_calculator -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -@BBOX_ASSIGNERS.register_module() -class ATSSAssigner(BaseAssigner): - """Assign a corresponding gt bbox or background to each bbox. - - Each proposals will be assigned with `0` or a positive integer - indicating the ground truth index. - - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - - Args: - topk (float): number of bbox selected in each level - """ - - def __init__(self, - topk, - iou_calculator=dict(type='BboxOverlaps2D'), - ignore_iof_thr=-1): - self.topk = topk - self.iou_calculator = build_iou_calculator(iou_calculator) - self.ignore_iof_thr = ignore_iof_thr - - # https://github.com/sfzhang15/ATSS/blob/master/atss_core/modeling/rpn/atss/loss.py - - def assign(self, - bboxes, - num_level_bboxes, - gt_bboxes, - gt_bboxes_ignore=None, - gt_labels=None): - """Assign gt to bboxes. - - The assignment is done in following steps - - 1. compute iou between all bbox (bbox of all pyramid levels) and gt - 2. compute center distance between all bbox and gt - 3. on each pyramid level, for each gt, select k bbox whose center - are closest to the gt center, so we total select k*l bbox as - candidates for each gt - 4. get corresponding iou for the these candidates, and compute the - mean and std, set mean + std as the iou threshold - 5. select these candidates whose iou are greater than or equal to - the threshold as positive - 6. limit the positive sample's center in gt - - - Args: - bboxes (Tensor): Bounding boxes to be assigned, shape(n, 4). - num_level_bboxes (List): num of bboxes in each level - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - - Returns: - :obj:`AssignResult`: The assign result. 
- """ - INF = 100000000 - bboxes = bboxes[:, :4] - num_gt, num_bboxes = gt_bboxes.size(0), bboxes.size(0) - - # compute iou between all bbox and gt - overlaps = self.iou_calculator(bboxes, gt_bboxes) - - # assign 0 by default - assigned_gt_inds = overlaps.new_full((num_bboxes, ), - 0, - dtype=torch.long) - - if num_gt == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - max_overlaps = overlaps.new_zeros((num_bboxes, )) - if num_gt == 0: - # No truth, assign everything to background - assigned_gt_inds[:] = 0 - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = overlaps.new_full((num_bboxes, ), - -1, - dtype=torch.long) - return AssignResult( - num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels) - - # compute center distance between all bbox and gt - gt_cx = (gt_bboxes[:, 0] + gt_bboxes[:, 2]) / 2.0 - gt_cy = (gt_bboxes[:, 1] + gt_bboxes[:, 3]) / 2.0 - gt_points = torch.stack((gt_cx, gt_cy), dim=1) - - bboxes_cx = (bboxes[:, 0] + bboxes[:, 2]) / 2.0 - bboxes_cy = (bboxes[:, 1] + bboxes[:, 3]) / 2.0 - bboxes_points = torch.stack((bboxes_cx, bboxes_cy), dim=1) - - distances = (bboxes_points[:, None, :] - - gt_points[None, :, :]).pow(2).sum(-1).sqrt() - - if (self.ignore_iof_thr > 0 and gt_bboxes_ignore is not None - and gt_bboxes_ignore.numel() > 0 and bboxes.numel() > 0): - ignore_overlaps = self.iou_calculator( - bboxes, gt_bboxes_ignore, mode='iof') - ignore_max_overlaps, _ = ignore_overlaps.max(dim=1) - ignore_idxs = ignore_max_overlaps > self.ignore_iof_thr - distances[ignore_idxs, :] = INF - assigned_gt_inds[ignore_idxs] = -1 - - # Selecting candidates based on the center distance - candidate_idxs = [] - start_idx = 0 - for level, bboxes_per_level in enumerate(num_level_bboxes): - # on each pyramid level, for each gt, - # select k bbox whose center are closest to the gt center - end_idx = start_idx + bboxes_per_level - distances_per_level = distances[start_idx:end_idx, :] - selectable_k = min(self.topk, bboxes_per_level) - _, topk_idxs_per_level = distances_per_level.topk( - selectable_k, dim=0, largest=False) - candidate_idxs.append(topk_idxs_per_level + start_idx) - start_idx = end_idx - candidate_idxs = torch.cat(candidate_idxs, dim=0) - - # get corresponding iou for the these candidates, and compute the - # mean and std, set mean + std as the iou threshold - candidate_overlaps = overlaps[candidate_idxs, torch.arange(num_gt)] - overlaps_mean_per_gt = candidate_overlaps.mean(0) - overlaps_std_per_gt = candidate_overlaps.std(0) - overlaps_thr_per_gt = overlaps_mean_per_gt + overlaps_std_per_gt - - is_pos = candidate_overlaps >= overlaps_thr_per_gt[None, :] - - # limit the positive sample's center in gt - for gt_idx in range(num_gt): - candidate_idxs[:, gt_idx] += gt_idx * num_bboxes - ep_bboxes_cx = bboxes_cx.view(1, -1).expand( - num_gt, num_bboxes).contiguous().view(-1) - ep_bboxes_cy = bboxes_cy.view(1, -1).expand( - num_gt, num_bboxes).contiguous().view(-1) - candidate_idxs = candidate_idxs.view(-1) - - # calculate the left, top, right, bottom distance between positive - # bbox center and gt side - l_ = ep_bboxes_cx[candidate_idxs].view(-1, num_gt) - gt_bboxes[:, 0] - t_ = ep_bboxes_cy[candidate_idxs].view(-1, num_gt) - gt_bboxes[:, 1] - r_ = gt_bboxes[:, 2] - ep_bboxes_cx[candidate_idxs].view(-1, num_gt) - b_ = gt_bboxes[:, 3] - ep_bboxes_cy[candidate_idxs].view(-1, num_gt) - is_in_gts = torch.stack([l_, t_, r_, b_], dim=1).min(dim=1)[0] > 0.01 - is_pos = is_pos & is_in_gts - - # if an anchor box is assigned to 
multiple gts, - # the one with the highest IoU will be selected. - overlaps_inf = torch.full_like(overlaps, - -INF).t().contiguous().view(-1) - index = candidate_idxs.view(-1)[is_pos.view(-1)] - overlaps_inf[index] = overlaps.t().contiguous().view(-1)[index] - overlaps_inf = overlaps_inf.view(num_gt, -1).t() - - max_overlaps, argmax_overlaps = overlaps_inf.max(dim=1) - assigned_gt_inds[ - max_overlaps != -INF] = argmax_overlaps[max_overlaps != -INF] + 1 - - if gt_labels is not None: - assigned_labels = assigned_gt_inds.new_full((num_bboxes, ), -1) - pos_inds = torch.nonzero( - assigned_gt_inds > 0, as_tuple=False).squeeze() - if pos_inds.numel() > 0: - assigned_labels[pos_inds] = gt_labels[ - assigned_gt_inds[pos_inds] - 1] - else: - assigned_labels = None - return AssignResult( - num_gt, assigned_gt_inds, max_overlaps, labels=assigned_labels) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r50-d16_512x1024_80k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r50-d16_512x1024_80k_cityscapes.py deleted file mode 100644 index e4b623aca9ce1138baa259cbdd02920a47765f8d..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_d6_r50-d16_512x1024_80k_cityscapes.py +++ /dev/null @@ -1,8 +0,0 @@ -_base_ = [ - '../_base_/models/fcn_r50-d8.py', '../_base_/datasets/cityscapes.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py' -] -model = dict( - backbone=dict(dilations=(1, 1, 1, 2), strides=(1, 2, 2, 1)), - decode_head=dict(dilation=6), - auxiliary_head=dict(dilation=6)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/necks/__init__.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/necks/__init__.py deleted file mode 100644 index 9b9d3d5b3fe80247642d962edd6fb787537d01d6..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/necks/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .fpn import FPN -from .multilevel_neck import MultiLevelNeck - -__all__ = ['FPN', 'MultiLevelNeck'] diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/scripts/templates/index.html b/spaces/GrandaddyShmax/AudioCraft_Plus/scripts/templates/index.html deleted file mode 100644 index 7bd3afe9d933271bb922c1a0a534dd6b86fe67bc..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/scripts/templates/index.html +++ /dev/null @@ -1,28 +0,0 @@ -{% extends "base.html" %} -{% block content %} - -

- Welcome {{session['user']}} to the internal MOS assistant for AudioCraft.
- You can create custom surveys between your models, that you can
- evaluate yourself, or with the help of your teammates, by simply
- sharing a link!
-
-{% for error in errors %}
-  {{error}}
-{% endfor %}
- - - -{% endblock %} diff --git a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/visualize_2d_posemb.py b/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/visualize_2d_posemb.py deleted file mode 100644 index 386a1ffe10b38c00ea3343b21cedee8c6f73ece6..0000000000000000000000000000000000000000 --- a/spaces/HCMUT-GraduateThesis-HNTThinh/rgbdsod-multimae-demo/s_multimae/visualize_2d_posemb.py +++ /dev/null @@ -1,35 +0,0 @@ -import numpy as np -from torch import Tensor -import matplotlib.pyplot as plt - -from s_multimae.model.multimae import build_2d_sincos_posemb - -def visualize_2d_posemb(): - NH, NW = 14, 14 - dim_tokens = 768 - - colors = ['Greys', 'Purples', 'Blues', 'Greens', 'Oranges', 'Reds', - 'YlOrBr', 'YlOrRd', 'OrRd', 'PuRd', 'RdPu', 'BuPu', - 'GnBu', 'PuBu', 'YlGnBu', 'PuBuGn', 'BuGn', 'YlGn'] - - pos_emb: Tensor = build_2d_sincos_posemb(NH, NW, dim_tokens) - pos_emb_numpy: np.ndarray = pos_emb.squeeze(0).permute(1,2,0).numpy() # 14 x 14 x 768 - - x = np.linspace(0, NH-1, NH) - y = np.linspace(0, NW-1, NW) - X, Y = np.meshgrid(x, y) - - for color, i in zip(colors, range(0, pos_emb_numpy.shape[2], 100)): - ax = plt.axes(projection='3d') - Z = pos_emb_numpy[:, :, i] - - # plt.imshow(Z, cmap='viridis') - # plt.savefig(f'posemb_visualization/test_{i}.png') - - ax.plot_surface( - X, Y, Z, - # rstride=1, cstride=1, - cmap='viridis', edgecolor='none' - ) - plt.show() - plt.savefig(f'posemb_visualization/test_{i}.png') diff --git a/spaces/Hackatos/Smart-Shower-ATC/app/core.py b/spaces/Hackatos/Smart-Shower-ATC/app/core.py deleted file mode 100644 index e43fe6de21d16a502f4c54807d84fb00b9eadd10..0000000000000000000000000000000000000000 --- a/spaces/Hackatos/Smart-Shower-ATC/app/core.py +++ /dev/null @@ -1,99 +0,0 @@ -# Electric cost(Euros) = energy_price(Euros/kWh) * wasted_energy(kWh) -def custo(energy_price_hour, wasted_energy): - return energy_price_hour * wasted_energy - - -# caldeira 20 litros -def spent_energy( - temperatura_inicial_caldeira_t, # existente na cadeira, t sendo a hora - temperatura_objetivo_caldeira_t_plus_1, - outside_temp, - pressao_caldeira, - litros_gastos_no_banho=0, - temperatura_entrada_agua_na_caldeira=15, # Temperatura ambiente da iNOVA - capacidade_caldeira=20, # 20 litros -): - # E = (4.2 kJ/kgoC) ((90 oC) - (20 oC)) (1000 liter) (1 kg/liter) - # cp = specific heat of water (kJ/kgoC, Btu/lb oF) (4.2 kJ/kgoC, 1 Btu/lbmoF for water) - heat_capacity = 4.2 - # Energy = heat_capacity * (temperatura_saida_agua_na_caldeira - outside_temp) * capacidade_caldeira * 1\ - delta_t = temperatura_objetivo_caldeira_t_plus_1 - temperatura_inicial_caldeira_t - - energy = ( - heat_capacity - * (delta_t - outside_temp) - * (capacidade_caldeira - litros_gastos_no_banho) - * 1 - ) # isto vai ser minimo - - delta_t = ( - temperatura_objetivo_caldeira_t_plus_1 - temperatura_entrada_agua_na_caldeira - ) - energy_incoming_water = ( - heat_capacity * (delta_t - outside_temp) * litros_gastos_no_banho * 1 - ) - - # 20 Litros totais - # Joao gatou 5 litros - # Gastar energia em: - # 15 litros para manter a temperatura da caldeira -> minimo - # 5 litros para aquecer a agua que entra - - total_energy = energy + energy_incoming_water - - # TODO: Correlação entre pressão e temperatura - # https://www.engineeringtoolbox.com/boiling-points-water-altitude-d_1344.html - # https://www.engineeringtoolbox.com/boiling-point-water-d_926.html - """ - That depends on whether the pressure is held constant during the heating. 
If there is a relief valve which maintains - constant pressure as the water heats, then no, the 2 samples will heat at the same rate. However, if the pressurised sample - has no pressure relief, then it will heat faster because the pressure will increase, and that increase in pressure will increase - the heat in addition to the heat applied. - """ - # kJ - - # TODO: FIND WHAT SHOULD BE THE RELATION BETWEEN TEMPERATURE OF OUTGOING WATER AND BOILER TEMPERATURE - temperatura_saida_agua_na_caldeira = temperatura_objetivo_caldeira_t_plus_1 * 0.87 - - return total_energy, temperatura_saida_agua_na_caldeira - - -def calculate_weights_for_all_hours( - temperatura_inicial_caldeira, - temperatura_objetivo_caldeira, - outside_temp, - pressao_caldeira, - litros_gastos_no_banho, - temperatura_entrada_agua_na_caldeira, - capacidade_caldeira, -): - weights = [] - temperatures = [] - for i in range(len(temperatura_inicial_caldeira)): - energy, temperature_water = spent_energy( - temperatura_inicial_caldeira_t=temperatura_inicial_caldeira[i], - temperatura_objetivo_caldeira_t_plus_1=temperatura_objetivo_caldeira, - outside_temp=outside_temp[i], - pressao_caldeira=pressao_caldeira[i], - litros_gastos_no_banho=litros_gastos_no_banho, - temperatura_entrada_agua_na_caldeira=temperatura_entrada_agua_na_caldeira[ - i - ], - capacidade_caldeira=capacidade_caldeira, - ) - weights.append(energy) - temperatures.append(temperature_water) - return weights, temperatures - # Output energy wasted - - -def calculate_confort(temperatura_given, temperatura_ideal): - return abs(temperatura_given - temperatura_ideal) - - -# create exception for no solution found -class NoSolutionFound(Exception): - pass - -class SoltionFoundWithLargerConfortValue(Exception): - pass \ No newline at end of file diff --git a/spaces/HadiTajari/Penguins_pred_App/app.py b/spaces/HadiTajari/Penguins_pred_App/app.py deleted file mode 100644 index 766a19363bc455422c43afd5c892529f342d1706..0000000000000000000000000000000000000000 --- a/spaces/HadiTajari/Penguins_pred_App/app.py +++ /dev/null @@ -1,72 +0,0 @@ -import pandas as pd -import numpy as np -import streamlit as st -from sklearn.ensemble import RandomForestClassifier -import pickle - -st.write(""" -# Penguin Prediction App -this is a simple app to predict penguin type -# """) -st.markdown(""" -[Orginal Dataset Link](https://github.com/allisonhorst/palmerpenguins) """) - -st.sidebar.header("Input Features") -st.sidebar.markdown(""" -[Example CSV inputs file](https://raw.githubusercontent.com/dataprofessor/data/master/penguins_example.csv) """) - -uploaded_file = st.sidebar.file_uploader("Upload your input csv file", type= ["csv"]) - -if uploaded_file is not None: - input_user = pd.read_csv(uploaded_file) -else: - - def input_user_features(): - sex = st.sidebar.selectbox("sex", ("male", "female")) - island = st.sidebar.selectbox("island", ("Biscoe", "Dream", "Torgersen")) - bill_length_mm = st.sidebar.slider("bill_length_mm", 32.0, 60.0, 41.0) - bill_depth_mm = st.sidebar.slider("bill_depth_mm", 13.0, 22.0, 15.0) - flipper_length_mm = st.sidebar.slider("flipper_length_mm", 170, 235, 200) - body_mass_g = st.sidebar.slider("body_mass_g", 2700, 6300, 3500) - data = {"island": island, - "bill_length_mm": bill_length_mm, - "bill_depth_mm": bill_depth_mm, - "flipper_length_mm":flipper_length_mm, - "body_mass_g": body_mass_g, - "sex": sex,} - features = pd.DataFrame(data, index=[0]) - return features - input_user = input_user_features() - -#importing raw dataset to encondig for new sample -penguins_raws 
= pd.read_csv("penguins_cleaned.csv") -penguins = penguins_raws.drop("species" ,axis=1) - -df = pd.concat([input_user, penguins], axis=0) - -# selecting "Object" type feature -encod_col = df.select_dtypes("O").columns.values -for col in encod_col: - dummy = pd.get_dummies(df[col], prefix=col) - df = pd.concat([df, dummy], axis=1) - del df[col] -df = df[:1] - - -st.subheader("Input User Features") -if uploaded_file is not None: - st.write(df) -else: - st.write("Awaiting CSV file to be uploded. Currently using example input parametrs.(Shown below)") - st.write(df) - -# importing fitted model -model_pickle = pickle.load(open ("penguins_clf.pkl", "rb")) -prediction_df = model_pickle.predict(df) -prediction_df_prob = model_pickle.predict_proba(df) - -st.subheader("Prediction type of penguin") -st.write( prediction_df ) - -st.subheader("Prediction Probability of Penguin Types") -st.write( prediction_df_prob ) \ No newline at end of file diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/bart/README.summarization.md b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/bart/README.summarization.md deleted file mode 100644 index 8727584f2b2bdd880c6cd3abbf39b75dfbf4a67c..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/bart/README.summarization.md +++ /dev/null @@ -1,102 +0,0 @@ -# Fine-tuning BART on CNN-Dailymail summarization task - -### 1) Download the CNN and Daily Mail data and preprocess it into data files with non-tokenized cased samples. - -Follow the instructions [here](https://github.com/abisee/cnn-dailymail) to download the original CNN and Daily Mail datasets. To preprocess the data, refer to the pointers in [this issue](https://github.com/pytorch/fairseq/issues/1391) or check out the code [here](https://github.com/artmatsak/cnn-dailymail). - -Follow the instructions [here](https://github.com/EdinburghNLP/XSum) to download the original Extreme Summarization datasets, or check out the code [here](https://github.com/EdinburghNLP/XSum/tree/master/XSum-Dataset), Please keep the raw dataset and make sure no tokenization nor BPE on the dataset. 
- -### 2) BPE preprocess: - -```bash -wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/encoder.json' -wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/vocab.bpe' -wget -N 'https://dl.fbaipublicfiles.com/fairseq/gpt2_bpe/dict.txt' - -TASK=cnn_dm -for SPLIT in train val -do - for LANG in source target - do - python -m examples.roberta.multiprocessing_bpe_encoder \ - --encoder-json encoder.json \ - --vocab-bpe vocab.bpe \ - --inputs "$TASK/$SPLIT.$LANG" \ - --outputs "$TASK/$SPLIT.bpe.$LANG" \ - --workers 60 \ - --keep-empty; - done -done -``` - -### 3) Binarize dataset: -```bash -fairseq-preprocess \ - --source-lang "source" \ - --target-lang "target" \ - --trainpref "${TASK}/train.bpe" \ - --validpref "${TASK}/val.bpe" \ - --destdir "${TASK}-bin/" \ - --workers 60 \ - --srcdict dict.txt \ - --tgtdict dict.txt; -``` - -### 4) Fine-tuning on CNN-DM summarization task: -Example fine-tuning CNN-DM -```bash -TOTAL_NUM_UPDATES=20000 -WARMUP_UPDATES=500 -LR=3e-05 -MAX_TOKENS=2048 -UPDATE_FREQ=4 -BART_PATH=/path/to/bart/model.pt - -CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 fairseq-train cnn_dm-bin \ - --restore-file $BART_PATH \ - --max-tokens $MAX_TOKENS \ - --task translation \ - --source-lang source --target-lang target \ - --truncate-source \ - --layernorm-embedding \ - --share-all-embeddings \ - --share-decoder-input-output-embed \ - --reset-optimizer --reset-dataloader --reset-meters \ - --required-batch-size-multiple 1 \ - --arch bart_large \ - --criterion label_smoothed_cross_entropy \ - --label-smoothing 0.1 \ - --dropout 0.1 --attention-dropout 0.1 \ - --weight-decay 0.01 --optimizer adam --adam-betas "(0.9, 0.999)" --adam-eps 1e-08 \ - --clip-norm 0.1 \ - --lr-scheduler polynomial_decay --lr $LR --total-num-update $TOTAL_NUM_UPDATES --warmup-updates $WARMUP_UPDATES \ - --fp16 --update-freq $UPDATE_FREQ \ - --skip-invalid-size-inputs-valid-test \ - --find-unused-parameters; -``` -Above is expected to run on `1` node with `8 32gb-V100`. -Expected training time is about `5 hours`. Training time can be reduced with distributed training on `4` nodes and `--update-freq 1`. - -Use TOTAL_NUM_UPDATES=15000 UPDATE_FREQ=2 for Xsum task - -### Inference for CNN-DM test data using above trained checkpoint. -After training the model as mentioned in previous step, you can perform inference with checkpoints in `checkpoints/` directory using `eval_cnn.py`, for example - -```bash -cp data-bin/cnn_dm/dict.source.txt checkpoints/ -python examples/bart/summarize.py \ - --model-dir checkpoints \ - --model-file checkpoint_best.pt \ - --src cnn_dm/test.source \ - --out cnn_dm/test.hypo -``` -For XSUM, which uses beam=6, lenpen=1.0, max_len_b=60, min_len=10: -```bash -cp data-bin/cnn_dm/dict.source.txt checkpoints/ -python examples/bart/summarize.py \ - --model-dir checkpoints \ - --model-file checkpoint_best.pt \ - --src cnn_dm/test.source \ - --out cnn_dm/test.hypo \ - --xsum-kwargs -``` diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/data/random_input_dataset.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/data/random_input_dataset.py deleted file mode 100644 index 886505616cc7f7a515ecebf34fae5c2bc541de03..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/data/random_input_dataset.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import random -from typing import List - -from fairseq.data import BaseWrapperDataset, data_utils - - -class RandomInputDataset(BaseWrapperDataset): - def __init__( - self, - dataset, - random_input_dataset, - input_key_path: List[str], - add_to_input, - pad_idx, - ): - super().__init__(dataset) - self.random_input_dataset = random_input_dataset - if isinstance(input_key_path, str): - input_key_path = [input_key_path] - assert len(input_key_path) > 0 - self.input_key_path = input_key_path - self.add_to_input = add_to_input - self.pad_idx = pad_idx - - def get_target(self, item): - target_loc = item - for p in self.input_key_path[:-1]: - target_loc = target_loc[p] - return self.input_key_path[-1], target_loc - - def get_target_value(self, item): - k, target_loc = self.get_target(item) - return target_loc[k] - - def __getitem__(self, index): - item = self.dataset[index] - k, target_loc = self.get_target(item) - target_loc[k] = random.choice(self.random_input_dataset) - return item - - def collater(self, samples): - collated = self.dataset.collater(samples) - if len(collated) == 0: - return collated - indices = set(collated["id"].tolist()) - - random_inputs = data_utils.collate_tokens( - [self.get_target_value(s) for s in samples if s["id"] in indices], - pad_idx=self.pad_idx, - left_pad=False, - ) - k, target_loc = self.get_target( - collated if not self.add_to_input else collated["net_input"] - ) - target_loc[k] = random_inputs - - return collated diff --git a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/glow_tts/commons.py b/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/glow_tts/commons.py deleted file mode 100644 index 8da7b35049d768a29de6f66cbe8795a825967818..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/glow_tts/commons.py +++ /dev/null @@ -1,273 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from librosa.filters import mel as librosa_mel_fn -from audio_processing import dynamic_range_compression -from audio_processing import dynamic_range_decompression -from stft import STFT - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def mle_loss(z, m, logs, logdet, mask): - l = torch.sum(logs) + 0.5 * torch.sum( - torch.exp(-2 * logs) * ((z - m) ** 2) - ) # neg normal likelihood w/o the constant term - l = l - torch.sum(logdet) # log jacobian determinant - l = l / torch.sum( - torch.ones_like(z) * mask - ) # averaging across batch, channel and time axes - l = l + 0.5 * math.log(2 * math.pi) # add the remaining constant term - return l - - -def duration_loss(logw, logw_, lengths): - l = torch.sum((logw - logw_) ** 2) / torch.sum(lengths) - return l - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() 
- x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def maximum_path(value, mask, max_neg_val=-np.inf): - """Numpy-friendly version. It's about 4 times faster than torch version. - value: [b, t_x, t_y] - mask: [b, t_x, t_y] - """ - value = value * mask - - device = value.device - dtype = value.dtype - value = value.cpu().detach().numpy() - mask = mask.cpu().detach().numpy().astype(np.bool) - - b, t_x, t_y = value.shape - direction = np.zeros(value.shape, dtype=np.int64) - v = np.zeros((b, t_x), dtype=np.float32) - x_range = np.arange(t_x, dtype=np.float32).reshape(1, -1) - for j in range(t_y): - v0 = np.pad(v, [[0, 0], [1, 0]], mode="constant", constant_values=max_neg_val)[ - :, :-1 - ] - v1 = v - max_mask = v1 >= v0 - v_max = np.where(max_mask, v1, v0) - direction[:, :, j] = max_mask - - index_mask = x_range <= j - v = np.where(index_mask, v_max + value[:, :, j], max_neg_val) - direction = np.where(mask, direction, 1) - - path = np.zeros(value.shape, dtype=np.float32) - index = mask[:, :, 0].sum(1).astype(np.int64) - 1 - index_range = np.arange(b) - for j in reversed(range(t_y)): - path[index_range, index, j] = 1 - index = index + direction[index_range, index, j] - 1 - path = path * mask.astype(np.float32) - path = torch.from_numpy(path).to(device=device, dtype=dtype) - return path - - -def generate_path(duration, mask): - """ - duration: [b, t_x] - mask: [b, t_x, t_y] - """ - device = duration.device - - b, t_x, t_y = mask.shape - cum_duration = torch.cumsum(duration, 1) - path = torch.zeros(b, t_x, t_y, dtype=mask.dtype).to(device=device) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path * mask - return path - - -class Adam: - def __init__( - self, - params, - scheduler, - dim_model, - warmup_steps=4000, - lr=1e0, - betas=(0.9, 0.98), - eps=1e-9, - ): - self.params = params - self.scheduler = scheduler - self.dim_model = dim_model - self.warmup_steps = warmup_steps - self.lr = lr - self.betas = betas - self.eps = eps - - self.step_num = 1 - self.cur_lr = lr * self._get_lr_scale() - - self._optim = torch.optim.Adam(params, lr=self.cur_lr, betas=betas, eps=eps) - - def _get_lr_scale(self): - if self.scheduler == "noam": - return np.power(self.dim_model, -0.5) * np.min( - [ - np.power(self.step_num, -0.5), - self.step_num * np.power(self.warmup_steps, -1.5), - ] - ) - else: - return 1 - - def _update_learning_rate(self): - self.step_num += 1 - if self.scheduler == "noam": - self.cur_lr = self.lr * self._get_lr_scale() - for param_group in self._optim.param_groups: - param_group["lr"] = self.cur_lr - - def get_lr(self): - return self.cur_lr - - def step(self): - self._optim.step() - self._update_learning_rate() - - def zero_grad(self): - self._optim.zero_grad() - - def load_state_dict(self, d): - self._optim.load_state_dict(d) - - def state_dict(self): - return self._optim.state_dict() - - -class TacotronSTFT(nn.Module): - def __init__( - self, - filter_length=1024, - hop_length=256, - win_length=1024, - n_mel_channels=80, - sampling_rate=22050, - mel_fmin=0.0, - mel_fmax=8000.0, - ): - super(TacotronSTFT, self).__init__() - self.n_mel_channels = n_mel_channels - self.sampling_rate = sampling_rate - self.stft_fn = STFT(filter_length, hop_length, win_length) - mel_basis = librosa_mel_fn( - sampling_rate, filter_length, n_mel_channels, 
mel_fmin, mel_fmax - ) - mel_basis = torch.from_numpy(mel_basis).float() - self.register_buffer("mel_basis", mel_basis) - - def spectral_normalize(self, magnitudes): - output = dynamic_range_compression(magnitudes) - return output - - def spectral_de_normalize(self, magnitudes): - output = dynamic_range_decompression(magnitudes) - return output - - def mel_spectrogram(self, y): - """Computes mel-spectrograms from a batch of waves - PARAMS - ------ - y: Variable(torch.FloatTensor) with shape (B, T) in range [-1, 1] - - RETURNS - ------- - mel_output: torch.FloatTensor of shape (B, n_mel_channels, T) - """ - assert torch.min(y.data) >= -1 - assert torch.max(y.data) <= 1 - - magnitudes, phases = self.stft_fn.transform(y) - magnitudes = magnitudes.data - mel_output = torch.matmul(self.mel_basis, magnitudes) - mel_output = self.spectral_normalize(mel_output) - return mel_output - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm - - -def squeeze(x, x_mask=None, n_sqz=2): - b, c, t = x.size() - - t = (t // n_sqz) * n_sqz - x = x[:, :, :t] - x_sqz = x.view(b, c, t // n_sqz, n_sqz) - x_sqz = x_sqz.permute(0, 3, 1, 2).contiguous().view(b, c * n_sqz, t // n_sqz) - - if x_mask is not None: - x_mask = x_mask[:, :, n_sqz - 1 :: n_sqz] - else: - x_mask = torch.ones(b, 1, t // n_sqz).to(device=x.device, dtype=x.dtype) - return x_sqz * x_mask, x_mask - - -def unsqueeze(x, x_mask=None, n_sqz=2): - b, c, t = x.size() - - x_unsqz = x.view(b, n_sqz, c // n_sqz, t) - x_unsqz = x_unsqz.permute(0, 2, 3, 1).contiguous().view(b, c // n_sqz, t * n_sqz) - - if x_mask is not None: - x_mask = x_mask.unsqueeze(-1).repeat(1, 1, 1, n_sqz).view(b, 1, t * n_sqz) - else: - x_mask = torch.ones(b, 1, t * n_sqz).to(device=x.device, dtype=x.dtype) - return x_unsqz * x_mask, x_mask diff --git a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/utils/hifi/prepare_iitm_data_hifi.py b/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/utils/hifi/prepare_iitm_data_hifi.py deleted file mode 100644 index 1e1de2e28735143aeef8ddb10bc5a4672c02564b..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/utils/hifi/prepare_iitm_data_hifi.py +++ /dev/null @@ -1,64 +0,0 @@ - -import glob -import random -import sys -import os -import argparse - - - - -def process_data(args): - - path = args.input_path - valid_files = args.valid_files - test_files = args.test_files - dest_path = args.dest_path - - list_paths = path.split(',') - - valid_set = [] - training_set = [] - test_set = [] - - for local_path in list_paths: - files = glob.glob(local_path+'/*.wav') - print(f"Total files: {len(files)}") - - valid_set_local = random.sample(files, valid_files) - - test_set_local = random.sample(valid_set_local, test_files) - valid_set.extend(list(set(valid_set_local) - set(test_set_local))) - test_set.extend(test_set_local) - - print(len(valid_set_local)) - - training_set_local = set(files) - set(valid_set_local) - print(len(training_set_local)) - training_set.extend(training_set_local) - - - valid_set = random.sample(valid_set, len(valid_set)) - test_set = 
random.sample(test_set, len(test_set)) - training_set = random.sample(training_set, len(training_set)) - - with open(os.path.join(dest_path , 'valid.txt'), mode = 'w+') as file: - file.write("\n".join(list(valid_set))) - - with open(os.path.join(dest_path , 'train.txt'), mode = 'w+') as file: - file.write("\n".join(list(training_set))) - - with open(os.path.join(dest_path , 'test.txt'), mode = 'w+') as file: - file.write("\n".join(list(test_set))) - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument('-i','--input-path',type=str,help='path to input wav files') - parser.add_argument('-v','--valid-files',type=int,help='number of valid files') - parser.add_argument('-t','--test-files',type=int,help='number of test files') - parser.add_argument('-d','--dest-path',type=str,help='destination path to output filelists') - - args = parser.parse_args() - - process_data(args) \ No newline at end of file diff --git a/spaces/Heber/google-flan-t5-xl/app.py b/spaces/Heber/google-flan-t5-xl/app.py deleted file mode 100644 index 3cb2a6ad9e93335abdc2db6c100e8358fdd85249..0000000000000000000000000000000000000000 --- a/spaces/Heber/google-flan-t5-xl/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/google/flan-t5-base").launch() \ No newline at end of file diff --git a/spaces/Hoodady/3DFuse/pc_project.py b/spaces/Hoodady/3DFuse/pc_project.py deleted file mode 100644 index 846c5e70b0fc582233e516a59226c9950e0c4e2d..0000000000000000000000000000000000000000 --- a/spaces/Hoodady/3DFuse/pc_project.py +++ /dev/null @@ -1,218 +0,0 @@ -import os -import numpy as np -import torch -import torch.nn as nn - -from PIL import Image - -from my.utils import tqdm - -from pytorch3d.structures import Pointclouds -from pytorch3d.renderer.cameras import PerspectiveCameras -from pytorch3d.renderer import ( - PointsRasterizer, - AlphaCompositor, - look_at_view_transform, -) - -import torch.nn.functional as F - -from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config -from point_e.diffusion.sampler import PointCloudSampler -from point_e.models.download import load_checkpoint -from point_e.models.configs import MODEL_CONFIGS, model_from_config - - -class PointsRenderer(nn.Module): - """ - Modified version of Pytorch3D PointsRenderer - """ - - def __init__(self, rasterizer, compositor) -> None: - super().__init__() - self.rasterizer = rasterizer - self.compositor = compositor - - def to(self, device): - # Manually move to device rasterizer as the cameras - # within the class are not of type nn.Module - self.rasterizer = self.rasterizer.to(device) - self.compositor = self.compositor.to(device) - return self - - def forward(self, point_clouds, **kwargs) -> torch.Tensor: - fragments = self.rasterizer(point_clouds, **kwargs) - - # import pdb; pdb.set_trace() - - depth_map = fragments[1][0,...,:1] - - # Construct weights based on the distance of a point to the true point. - # However, this could be done differently: e.g. predicted as opposed - # to a function of the weights. 
- r = self.rasterizer.raster_settings.radius - - dists2 = fragments.dists.permute(0, 3, 1, 2) - weights = 1 - dists2 / (r * r) - images = self.compositor( - fragments.idx.long().permute(0, 3, 1, 2), - weights, - point_clouds.features_packed().permute(1, 0), - **kwargs, - ) - - # permute so image comes at the end - images = images.permute(0, 2, 3, 1) - - return images, depth_map - - -def render_depth_from_cloud(points, angles, raster_settings, device,calibration_value=0): - - radius = 2.3 - - horizontal = angles[0]+calibration_value - elevation = angles[1] - FoV = angles[2] - - - camera = py3d_camera(radius, elevation, horizontal, FoV, device) - - point_loc = torch.tensor(points.coords).to(device) - colors = torch.tensor(np.stack([points.channels["R"], points.channels["G"], points.channels["B"]], axis=-1)).to(device) - - matching_rotation = torch.tensor([[[1.0, 0.0, 0.0], - [0.0, 0.0, 1.0], - [0.0, -1.0, 0.0]]]).to(device) - - rot_points = (matching_rotation @ point_loc[...,None]).squeeze() - - point_cloud = Pointclouds(points=[rot_points], features=[colors]) - - _, raw_depth_map = pointcloud_renderer(point_cloud, camera, raster_settings, device) - - disparity = camera.focal_length[0,0] / (raw_depth_map + 1e-9) - - max_disp = torch.max(disparity) - min_disp = torch.min(disparity[disparity > 0]) - - norm_disparity = (disparity - min_disp) / (max_disp - min_disp) - - mask = norm_disparity > 0 - norm_disparity = norm_disparity * mask - - depth_map = F.interpolate(norm_disparity.permute(2,0,1)[None,...],size=512,mode='bilinear')[0] - depth_map = depth_map.repeat(3,1,1) - - return depth_map - - -def py3d_camera(radius, elevation, horizontal, FoV, device, img_size=800): - - fov_rad = torch.deg2rad(torch.tensor(FoV)) - focal = 1 / torch.tan(fov_rad / 2) * (2. 
/ 2) - - focal_length = torch.tensor([[focal,focal]]).float() - image_size = torch.tensor([[img_size,img_size]]).double() - - - R, T = look_at_view_transform(dist=radius, elev=elevation, azim=horizontal, degrees=True) - - - camera = PerspectiveCameras( - R=R, - T=T, - focal_length=focal_length, - image_size=image_size, - device=device, - ) - - return camera - -def pointcloud_renderer(point_cloud, camera, raster_settings, device): - - camera = camera.to(device) - - rasterizer = PointsRasterizer(cameras=camera, raster_settings=raster_settings) - renderer = PointsRenderer( - rasterizer=rasterizer, - compositor=AlphaCompositor() - ).to(device) - - image = renderer(point_cloud) - - return image - -def point_e(device,exp_dir): - print('creating base model...') - base_name = 'base1B' # use base300M or base1B for better results - base_model = model_from_config(MODEL_CONFIGS[base_name], device) - base_model.eval() - base_diffusion = diffusion_from_config(DIFFUSION_CONFIGS[base_name]) - - print('creating upsample model...') - upsampler_model = model_from_config(MODEL_CONFIGS['upsample'], device) - upsampler_model.eval() - upsampler_diffusion = diffusion_from_config(DIFFUSION_CONFIGS['upsample']) - - print('downloading base checkpoint...') - base_model.load_state_dict(load_checkpoint(base_name, device)) - - print('downloading upsampler checkpoint...') - upsampler_model.load_state_dict(load_checkpoint('upsample', device)) - - sampler = PointCloudSampler( - device=device, - models=[base_model, upsampler_model], - diffusions=[base_diffusion, upsampler_diffusion], - num_points=[1024, 4096 - 1024], - aux_channels=['R', 'G', 'B'], - guidance_scale=[3.0, 3.0], - ) - - img = Image.open(os.path.join(exp_dir,'initial_image','instance0.png')) - - samples = None - for x in tqdm(sampler.sample_batch_progressive(batch_size=1, model_kwargs=dict(images=[img]))): - samples = x - - pc = sampler.output_to_point_clouds(samples)[0] - - return pc - - -def point_e_gradio(img,device): - print('creating base model...') - base_name = 'base1B' # use base300M or base1B for better results - base_model = model_from_config(MODEL_CONFIGS[base_name], device) - base_model.eval() - base_diffusion = diffusion_from_config(DIFFUSION_CONFIGS[base_name]) - - print('creating upsample model...') - upsampler_model = model_from_config(MODEL_CONFIGS['upsample'], device) - upsampler_model.eval() - upsampler_diffusion = diffusion_from_config(DIFFUSION_CONFIGS['upsample']) - - print('downloading base checkpoint...') - base_model.load_state_dict(load_checkpoint(base_name, device)) - - print('downloading upsampler checkpoint...') - upsampler_model.load_state_dict(load_checkpoint('upsample', device)) - - sampler = PointCloudSampler( - device=device, - models=[base_model, upsampler_model], - diffusions=[base_diffusion, upsampler_diffusion], - num_points=[1024, 4096 - 1024], - aux_channels=['R', 'G', 'B'], - guidance_scale=[3.0, 3.0], - ) - - - samples = None - for x in tqdm(sampler.sample_batch_progressive(batch_size=1, model_kwargs=dict(images=[img]))): - samples = x - - pc = sampler.output_to_point_clouds(samples)[0] - - return pc \ No newline at end of file diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/list_dataset.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/list_dataset.py deleted file mode 100644 index 12f00aa43661d6bad701c9e72653ba8779136906..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/data/list_dataset.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from . import BaseWrapperDataset - - -class ListDataset(BaseWrapperDataset): - def __init__(self, dataset, sizes=None): - super().__init__(dataset) - self._sizes = sizes - - def __iter__(self): - for x in self.dataset: - yield x - - def collater(self, samples): - return samples - - @property - def sizes(self): - return self._sizes - - def num_tokens(self, index): - return self.sizes[index] - - def size(self, index): - return self.sizes[index] - - def set_epoch(self, epoch): - pass diff --git a/spaces/Illumotion/Koboldcpp/examples/jeopardy/README.md b/spaces/Illumotion/Koboldcpp/examples/jeopardy/README.md deleted file mode 100644 index 4c42e3cdbf5264805981e9ebe88a437e6cdd4aec..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/examples/jeopardy/README.md +++ /dev/null @@ -1,21 +0,0 @@ -# llama.cpp/example/jeopardy - -This is pretty much just a straight port of aigoopy/llm-jeopardy/ with an added graph viewer. - -The jeopardy test can be used to compare the fact knowledge of different models and compare them to eachother. This is in contrast to some other tests, which test logical deduction, creativity, writing skills, etc. - - -Step 1: Open jeopardy.sh and modify the following: -``` -MODEL=(path to your model) -MODEL_NAME=(name of your model) -prefix=(basically, if you use vicuna it's Human: , if you use something else it might be User: , etc) -opts=(add -instruct here if needed for your model, or anything else you want to test out) -``` -Step 2: Run `jeopardy.sh` from the llama.cpp folder - -Step 3: Repeat steps 1 and 2 until you have all the results you need. - -Step 4: Run `graph.py`, and follow the instructions. At the end, it will generate your final graph. - -Note: The Human bar is based off of the full, original 100 sample questions. If you modify the question count or questions, it will not be valid. diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/segment_anything/modeling/mask_decoder.py b/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/segment_anything/modeling/mask_decoder.py deleted file mode 100644 index 5d2fdb03d535a91fa725d1ec4e92a7a1f217dfe0..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/segment_anything/modeling/mask_decoder.py +++ /dev/null @@ -1,176 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch -from torch import nn -from torch.nn import functional as F - -from typing import List, Tuple, Type - -from .common import LayerNorm2d - - -class MaskDecoder(nn.Module): - def __init__( - self, - *, - transformer_dim: int, - transformer: nn.Module, - num_multimask_outputs: int = 3, - activation: Type[nn.Module] = nn.GELU, - iou_head_depth: int = 3, - iou_head_hidden_dim: int = 256, - ) -> None: - """ - Predicts masks given an image and prompt embeddings, using a - transformer architecture. 
- - Arguments: - transformer_dim (int): the channel dimension of the transformer - transformer (nn.Module): the transformer used to predict masks - num_multimask_outputs (int): the number of masks to predict - when disambiguating masks - activation (nn.Module): the type of activation to use when - upscaling masks - iou_head_depth (int): the depth of the MLP used to predict - mask quality - iou_head_hidden_dim (int): the hidden dimension of the MLP - used to predict mask quality - """ - super().__init__() - self.transformer_dim = transformer_dim - self.transformer = transformer - - self.num_multimask_outputs = num_multimask_outputs - - self.iou_token = nn.Embedding(1, transformer_dim) - self.num_mask_tokens = num_multimask_outputs + 1 - self.mask_tokens = nn.Embedding(self.num_mask_tokens, transformer_dim) - - self.output_upscaling = nn.Sequential( - nn.ConvTranspose2d(transformer_dim, transformer_dim // 4, kernel_size=2, stride=2), - LayerNorm2d(transformer_dim // 4), - activation(), - nn.ConvTranspose2d(transformer_dim // 4, transformer_dim // 8, kernel_size=2, stride=2), - activation(), - ) - self.output_hypernetworks_mlps = nn.ModuleList( - [ - MLP(transformer_dim, transformer_dim, transformer_dim // 8, 3) - for i in range(self.num_mask_tokens) - ] - ) - - self.iou_prediction_head = MLP( - transformer_dim, iou_head_hidden_dim, self.num_mask_tokens, iou_head_depth - ) - - def forward( - self, - image_embeddings: torch.Tensor, - image_pe: torch.Tensor, - sparse_prompt_embeddings: torch.Tensor, - dense_prompt_embeddings: torch.Tensor, - multimask_output: bool, - ) -> Tuple[torch.Tensor, torch.Tensor]: - """ - Predict masks given image and prompt embeddings. - - Arguments: - image_embeddings (torch.Tensor): the embeddings from the image encoder - image_pe (torch.Tensor): positional encoding with the shape of image_embeddings - sparse_prompt_embeddings (torch.Tensor): the embeddings of the points and boxes - dense_prompt_embeddings (torch.Tensor): the embeddings of the mask inputs - multimask_output (bool): Whether to return multiple masks or a single - mask. - - Returns: - torch.Tensor: batched predicted masks - torch.Tensor: batched predictions of mask quality - """ - masks, iou_pred = self.predict_masks( - image_embeddings=image_embeddings, - image_pe=image_pe, - sparse_prompt_embeddings=sparse_prompt_embeddings, - dense_prompt_embeddings=dense_prompt_embeddings, - ) - - # Select the correct mask or masks for output - if multimask_output: - mask_slice = slice(1, None) - else: - mask_slice = slice(0, 1) - masks = masks[:, mask_slice, :, :] - iou_pred = iou_pred[:, mask_slice] - - # Prepare output - return masks, iou_pred - - def predict_masks( - self, - image_embeddings: torch.Tensor, - image_pe: torch.Tensor, - sparse_prompt_embeddings: torch.Tensor, - dense_prompt_embeddings: torch.Tensor, - ) -> Tuple[torch.Tensor, torch.Tensor]: - """Predicts masks. 
See 'forward' for more details.""" - # Concatenate output tokens - output_tokens = torch.cat([self.iou_token.weight, self.mask_tokens.weight], dim=0) - output_tokens = output_tokens.unsqueeze(0).expand(sparse_prompt_embeddings.size(0), -1, -1) - tokens = torch.cat((output_tokens, sparse_prompt_embeddings), dim=1) - - # Expand per-image data in batch direction to be per-mask - src = torch.repeat_interleave(image_embeddings, tokens.shape[0], dim=0) - src = src + dense_prompt_embeddings - pos_src = torch.repeat_interleave(image_pe, tokens.shape[0], dim=0) - b, c, h, w = src.shape - - # Run the transformer - hs, src = self.transformer(src, pos_src, tokens) - iou_token_out = hs[:, 0, :] - mask_tokens_out = hs[:, 1 : (1 + self.num_mask_tokens), :] - - # Upscale mask embeddings and predict masks using the mask tokens - src = src.transpose(1, 2).view(b, c, h, w) - upscaled_embedding = self.output_upscaling(src) - hyper_in_list: List[torch.Tensor] = [] - for i in range(self.num_mask_tokens): - hyper_in_list.append(self.output_hypernetworks_mlps[i](mask_tokens_out[:, i, :])) - hyper_in = torch.stack(hyper_in_list, dim=1) - b, c, h, w = upscaled_embedding.shape - masks = (hyper_in @ upscaled_embedding.view(b, c, h * w)).view(b, -1, h, w) - - # Generate mask quality predictions - iou_pred = self.iou_prediction_head(iou_token_out) - - return masks, iou_pred - - -# Lightly adapted from -# https://github.com/facebookresearch/MaskFormer/blob/main/mask_former/modeling/transformer/transformer_predictor.py # noqa -class MLP(nn.Module): - def __init__( - self, - input_dim: int, - hidden_dim: int, - output_dim: int, - num_layers: int, - sigmoid_output: bool = False, - ) -> None: - super().__init__() - self.num_layers = num_layers - h = [hidden_dim] * (num_layers - 1) - self.layers = nn.ModuleList( - nn.Linear(n, k) for n, k in zip([input_dim] + h, h + [output_dim]) - ) - self.sigmoid_output = sigmoid_output - - def forward(self, x): - for i, layer in enumerate(self.layers): - x = F.relu(layer(x)) if i < self.num_layers - 1 else layer(x) - if self.sigmoid_output: - x = F.sigmoid(x) - return x diff --git a/spaces/Isuru623/CardioScanPro/train.py b/spaces/Isuru623/CardioScanPro/train.py deleted file mode 100644 index ed83d7f97fbb5f469c5b1bed09c5f62e0625163f..0000000000000000000000000000000000000000 --- a/spaces/Isuru623/CardioScanPro/train.py +++ /dev/null @@ -1,65 +0,0 @@ -import utils -import pandas as pd -import random -import numpy as np -from sklearn.model_selection import train_test_split -import model as mdl - -data = pd.read_csv('Dx_map.csv') - -df = utils.create_dataframes('training') - - -srce_files_df = ['cpsc_2018_df', 'cpsc_2018_extra_df', 'georgia_df', 'ptb_df', 'ptb-xl_df', 'st_petersburg_incart_df'] - - -srce_files = ['cpsc_2018', 'cpsc_2018_extra', 'georgia', 'ptb', 'ptb-xl', 'st_petersburg_incart'] - -#================================================================================================================ -X,lengths = utils.create_y_array(srce_files) -Y = utils.create_y_array(df,data,srce_files_df) - - -#================================================================================================================ -# Removing the outliers / Unwanted data -new_sizes = [] -for i in range(len(lengths)): - if(lengths[i] < 1000 or lengths[i] > 5000): - Y[i] = 0 - X[i] = 0 - else: - new_sizes.append(lengths[i]) - -# Modifying the arrays after removing unwanted values -X = [item for item in X if type(item) != int] -Y = [item for item in Y if type(item) != int] - - -# Adding noice to the data to 
make it 2617 points long -X = utils.equalizing_wave_array(X) - -# Convering the list of arrays to numpy arrays -for i in range(len(X)): - X[i] = np.array(X[i]) - -for i in range(len(Y)): - Y[i] = np.array(Y[i]) - -# Splitting the data into train and test -X_train, X_test, y_train, y_test = train_test_split(np.array(X), np.array(Y), test_size=0.1, random_state=42) - - -# Getting the input shape and number of classes(output) -input_shape = (X_train.shape[1], X_train.shape[2]) # Shape: (sequence_length, num_leads) -num_classes = y_train.shape[1] # Number of anomaly classes - - -# Creating the model -resnet_model = mdl.ResNet_model(input_shape,num_classes) - -# Training the model -trained_model,accuracy_results_loss_results = mdl.model_train(X_train,y_train,resnet_model,5,15) - - -# Saving the model -trained_model.save('CardioScanPro_resnet_model.h5') \ No newline at end of file diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/README.md b/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/README.md deleted file mode 100644 index 9494e357fd43465d5a1aa4da1bf784e1fcc40039..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/schedulers/README.md +++ /dev/null @@ -1,3 +0,0 @@ -# Schedulers - -For more information on the schedulers, please refer to the [docs](https://huggingface.co/docs/diffusers/api/schedulers). \ No newline at end of file diff --git a/spaces/JonatanGk/cyberbullying-detector/README.md b/spaces/JonatanGk/cyberbullying-detector/README.md deleted file mode 100644 index f2ef8ebc9fc097550963a13c5ef23a3a2da7ad34..0000000000000000000000000000000000000000 --- a/spaces/JonatanGk/cyberbullying-detector/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: CyberBullying Detector -emoji: 😭 -colorFrom: purple -colorTo: indigo -sdk: gradio -app_file: app.py -pinned: true ---- - - diff --git a/spaces/Josekutty/project_01/app.py b/spaces/Josekutty/project_01/app.py deleted file mode 100644 index cb8ecc966f37e224a9d9691dffd83efb8a1be8bb..0000000000000000000000000000000000000000 --- a/spaces/Josekutty/project_01/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import gradio as gr -import os -import torch -from model import create_effnet_b2 - -class_names = ["cat","dog"] -model,transform = create_effnet_b2() -model.load_state_dict(torch.load( - f="best_model.pth", - map_location=torch.device("cpu"))) -def pred(img): - transformed_image = transform(img).unsqueeze(dim=0) - model.eval() - with torch.inference_mode(): - y_logit = model(transformed_image) - y_pred = torch.round(torch.sigmoid(y_logit)).squeeze().item() - result = class_names[int(y_pred)] - return result -example_list = [["examples/" + example] for example in os.listdir("examples")] -title = "Dog or Cat...." -description = "An Efficent net B2 model to predict on cat and dog images." -demo = gr.Interface(fn=pred, - inputs=gr.Image(type="pil"), - outputs=gr.Label(label="Prediction"), - examples=example_list, - title=title, - description=description) -demo.launch() diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/coders/tblr_bbox_coder.py b/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/coders/tblr_bbox_coder.py deleted file mode 100644 index 74b388f7bad6ebc1911cee5b0b7d73bbd04de17a..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/coders/tblr_bbox_coder.py +++ /dev/null @@ -1,228 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from typing import Optional, Sequence, Union - -import torch -from torch import Tensor - -from mmdet.registry import TASK_UTILS -from mmdet.structures.bbox import BaseBoxes, HorizontalBoxes, get_box_tensor -from .base_bbox_coder import BaseBBoxCoder - - -@TASK_UTILS.register_module() -class TBLRBBoxCoder(BaseBBoxCoder): - """TBLR BBox coder. - - Following the practice in `FSAF `_, - this coder encodes gt bboxes (x1, y1, x2, y2) into (top, bottom, left, - right) and decode it back to the original. - - Args: - normalizer (list | float): Normalization factor to be - divided with when coding the coordinates. If it is a list, it should - have length of 4 indicating normalization factor in tblr dims. - Otherwise it is a unified float factor for all dims. Default: 4.0 - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Defaults to True. - """ - - def __init__(self, - normalizer: Union[Sequence[float], float] = 4.0, - clip_border: bool = True, - **kwargs) -> None: - super().__init__(**kwargs) - self.normalizer = normalizer - self.clip_border = clip_border - - def encode(self, bboxes: Union[Tensor, BaseBoxes], - gt_bboxes: Union[Tensor, BaseBoxes]) -> Tensor: - """Get box regression transformation deltas that can be used to - transform the ``bboxes`` into the ``gt_bboxes`` in the (top, left, - bottom, right) order. - - Args: - bboxes (torch.Tensor or :obj:`BaseBoxes`): source boxes, - e.g., object proposals. - gt_bboxes (torch.Tensor or :obj:`BaseBoxes`): target of the - transformation, e.g., ground truth boxes. - - Returns: - torch.Tensor: Box transformation deltas - """ - bboxes = get_box_tensor(bboxes) - gt_bboxes = get_box_tensor(gt_bboxes) - assert bboxes.size(0) == gt_bboxes.size(0) - assert bboxes.size(-1) == gt_bboxes.size(-1) == 4 - encoded_bboxes = bboxes2tblr( - bboxes, gt_bboxes, normalizer=self.normalizer) - return encoded_bboxes - - def decode( - self, - bboxes: Union[Tensor, BaseBoxes], - pred_bboxes: Tensor, - max_shape: Optional[Union[Sequence[int], Tensor, - Sequence[Sequence[int]]]] = None - ) -> Union[Tensor, BaseBoxes]: - """Apply transformation `pred_bboxes` to `boxes`. - - Args: - bboxes (torch.Tensor or :obj:`BaseBoxes`): Basic boxes.Shape - (B, N, 4) or (N, 4) - pred_bboxes (torch.Tensor): Encoded boxes with shape - (B, N, 4) or (N, 4) - max_shape (Sequence[int] or torch.Tensor or Sequence[ - Sequence[int]],optional): Maximum bounds for boxes, specifies - (H, W, C) or (H, W). If bboxes shape is (B, N, 4), then - the max_shape should be a Sequence[Sequence[int]] - and the length of max_shape should also be B. - - Returns: - Union[torch.Tensor, :obj:`BaseBoxes`]: Decoded boxes. - """ - bboxes = get_box_tensor(bboxes) - decoded_bboxes = tblr2bboxes( - bboxes, - pred_bboxes, - normalizer=self.normalizer, - max_shape=max_shape, - clip_border=self.clip_border) - - if self.use_box_type: - decoded_bboxes = HorizontalBoxes(decoded_bboxes) - return decoded_bboxes - - -def bboxes2tblr(priors: Tensor, - gts: Tensor, - normalizer: Union[Sequence[float], float] = 4.0, - normalize_by_wh: bool = True) -> Tensor: - """Encode ground truth boxes to tblr coordinate. - - It first convert the gt coordinate to tblr format, - (top, bottom, left, right), relative to prior box centers. - The tblr coordinate may be normalized by the side length of prior bboxes - if `normalize_by_wh` is specified as True, and it is then normalized by - the `normalizer` factor. - - Args: - priors (Tensor): Prior boxes in point form - Shape: (num_proposals,4). 
- gts (Tensor): Coords of ground truth for each prior in point-form - Shape: (num_proposals, 4). - normalizer (Sequence[float] | float): normalization parameter of - encoded boxes. If it is a list, it has to have length = 4. - Default: 4.0 - normalize_by_wh (bool): Whether to normalize tblr coordinate by the - side length (wh) of prior bboxes. - - Return: - encoded boxes (Tensor), Shape: (num_proposals, 4) - """ - - # dist b/t match center and prior's center - if not isinstance(normalizer, float): - normalizer = torch.tensor(normalizer, device=priors.device) - assert len(normalizer) == 4, 'Normalizer must have length = 4' - assert priors.size(0) == gts.size(0) - prior_centers = (priors[:, 0:2] + priors[:, 2:4]) / 2 - xmin, ymin, xmax, ymax = gts.split(1, dim=1) - top = prior_centers[:, 1].unsqueeze(1) - ymin - bottom = ymax - prior_centers[:, 1].unsqueeze(1) - left = prior_centers[:, 0].unsqueeze(1) - xmin - right = xmax - prior_centers[:, 0].unsqueeze(1) - loc = torch.cat((top, bottom, left, right), dim=1) - if normalize_by_wh: - # Normalize tblr by anchor width and height - wh = priors[:, 2:4] - priors[:, 0:2] - w, h = torch.split(wh, 1, dim=1) - loc[:, :2] /= h # tb is normalized by h - loc[:, 2:] /= w # lr is normalized by w - # Normalize tblr by the given normalization factor - return loc / normalizer - - -def tblr2bboxes(priors: Tensor, - tblr: Tensor, - normalizer: Union[Sequence[float], float] = 4.0, - normalize_by_wh: bool = True, - max_shape: Optional[Union[Sequence[int], Tensor, - Sequence[Sequence[int]]]] = None, - clip_border: bool = True) -> Tensor: - """Decode tblr outputs to prediction boxes. - - The process includes 3 steps: 1) De-normalize tblr coordinates by - multiplying it with `normalizer`; 2) De-normalize tblr coordinates by the - prior bbox width and height if `normalize_by_wh` is `True`; 3) Convert - tblr (top, bottom, left, right) pair relative to the center of priors back - to (xmin, ymin, xmax, ymax) coordinate. - - Args: - priors (Tensor): Prior boxes in point form (x0, y0, x1, y1) - Shape: (N,4) or (B, N, 4). - tblr (Tensor): Coords of network output in tblr form - Shape: (N, 4) or (B, N, 4). - normalizer (Sequence[float] | float): Normalization parameter of - encoded boxes. By list, it represents the normalization factors at - tblr dims. By float, it is the unified normalization factor at all - dims. Default: 4.0 - normalize_by_wh (bool): Whether the tblr coordinates have been - normalized by the side length (wh) of prior bboxes. - max_shape (Sequence[int] or torch.Tensor or Sequence[ - Sequence[int]],optional): Maximum bounds for boxes, specifies - (H, W, C) or (H, W). If priors shape is (B, N, 4), then - the max_shape should be a Sequence[Sequence[int]] - and the length of max_shape should also be B. - clip_border (bool, optional): Whether clip the objects outside the - border of the image. Defaults to True. 
- - Return: - encoded boxes (Tensor): Boxes with shape (N, 4) or (B, N, 4) - """ - if not isinstance(normalizer, float): - normalizer = torch.tensor(normalizer, device=priors.device) - assert len(normalizer) == 4, 'Normalizer must have length = 4' - assert priors.size(0) == tblr.size(0) - if priors.ndim == 3: - assert priors.size(1) == tblr.size(1) - - loc_decode = tblr * normalizer - prior_centers = (priors[..., 0:2] + priors[..., 2:4]) / 2 - if normalize_by_wh: - wh = priors[..., 2:4] - priors[..., 0:2] - w, h = torch.split(wh, 1, dim=-1) - # Inplace operation with slice would failed for exporting to ONNX - th = h * loc_decode[..., :2] # tb - tw = w * loc_decode[..., 2:] # lr - loc_decode = torch.cat([th, tw], dim=-1) - # Cannot be exported using onnx when loc_decode.split(1, dim=-1) - top, bottom, left, right = loc_decode.split((1, 1, 1, 1), dim=-1) - xmin = prior_centers[..., 0].unsqueeze(-1) - left - xmax = prior_centers[..., 0].unsqueeze(-1) + right - ymin = prior_centers[..., 1].unsqueeze(-1) - top - ymax = prior_centers[..., 1].unsqueeze(-1) + bottom - - bboxes = torch.cat((xmin, ymin, xmax, ymax), dim=-1) - - if clip_border and max_shape is not None: - # clip bboxes with dynamic `min` and `max` for onnx - if torch.onnx.is_in_onnx_export(): - from mmdet.core.export import dynamic_clip_for_onnx - xmin, ymin, xmax, ymax = dynamic_clip_for_onnx( - xmin, ymin, xmax, ymax, max_shape) - bboxes = torch.cat([xmin, ymin, xmax, ymax], dim=-1) - return bboxes - if not isinstance(max_shape, torch.Tensor): - max_shape = priors.new_tensor(max_shape) - max_shape = max_shape[..., :2].type_as(priors) - if max_shape.ndim == 2: - assert bboxes.ndim == 3 - assert max_shape.size(0) == bboxes.size(0) - - min_xy = priors.new_tensor(0) - max_xy = torch.cat([max_shape, max_shape], - dim=-1).flip(-1).unsqueeze(-2) - bboxes = torch.where(bboxes < min_xy, min_xy, bboxes) - bboxes = torch.where(bboxes > max_xy, max_xy, bboxes) - - return bboxes diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/samplers/combined_sampler.py b/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/samplers/combined_sampler.py deleted file mode 100644 index 8e0560e372efffe865fa32028d823280a8bd5d87..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/samplers/combined_sampler.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from mmdet.registry import TASK_UTILS -from .base_sampler import BaseSampler - - -@TASK_UTILS.register_module() -class CombinedSampler(BaseSampler): - """A sampler that combines positive sampler and negative sampler.""" - - def __init__(self, pos_sampler, neg_sampler, **kwargs): - super(CombinedSampler, self).__init__(**kwargs) - self.pos_sampler = TASK_UTILS.build(pos_sampler, default_args=kwargs) - self.neg_sampler = TASK_UTILS.build(neg_sampler, default_args=kwargs) - - def _sample_pos(self, **kwargs): - """Sample positive samples.""" - raise NotImplementedError - - def _sample_neg(self, **kwargs): - """Sample negative samples.""" - raise NotImplementedError diff --git a/spaces/LanguageBind/LanguageBind/d_cls/precision.py b/spaces/LanguageBind/LanguageBind/d_cls/precision.py deleted file mode 100644 index a63b92256518d13afd57261df1568e26b1622201..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/d_cls/precision.py +++ /dev/null @@ -1,12 +0,0 @@ -import torch -from contextlib import suppress - - -def get_autocast(precision): - if precision == 'amp': - return torch.cuda.amp.autocast - elif precision == 'amp_bfloat16' or precision == 'amp_bf16': - # amp_bfloat16 is more stable than amp float16 for clip training - return lambda: torch.cuda.amp.autocast(dtype=torch.bfloat16) - else: - return suppress diff --git a/spaces/LaynzKunz/RCVAICOVER/src/mdx.py b/spaces/LaynzKunz/RCVAICOVER/src/mdx.py deleted file mode 100644 index 575c456fbdf9e5b5955401f41a3a58ddc27267b3..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/RCVAICOVER/src/mdx.py +++ /dev/null @@ -1,287 +0,0 @@ -import gc -import hashlib -import os -import queue -import threading -import warnings - -import librosa -import numpy as np -import onnxruntime as ort -import soundfile as sf -import torch -from tqdm import tqdm - -warnings.filterwarnings("ignore") -stem_naming = {'Vocals': 'Instrumental', 'Other': 'Instruments', 'Instrumental': 'Vocals', 'Drums': 'Drumless', 'Bass': 'Bassless'} - - -class MDXModel: - def __init__(self, device, dim_f, dim_t, n_fft, hop=1024, stem_name=None, compensation=1.000): - self.dim_f = dim_f - self.dim_t = dim_t - self.dim_c = 4 - self.n_fft = n_fft - self.hop = hop - self.stem_name = stem_name - self.compensation = compensation - - self.n_bins = self.n_fft // 2 + 1 - self.chunk_size = hop * (self.dim_t - 1) - self.window = torch.hann_window(window_length=self.n_fft, periodic=True).to(device) - - out_c = self.dim_c - - self.freq_pad = torch.zeros([1, out_c, self.n_bins - self.dim_f, self.dim_t]).to(device) - - def stft(self, x): - x = x.reshape([-1, self.chunk_size]) - x = torch.stft(x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True, return_complex=True) - x = torch.view_as_real(x) - x = x.permute([0, 3, 1, 2]) - x = x.reshape([-1, 2, 2, self.n_bins, self.dim_t]).reshape([-1, 4, self.n_bins, self.dim_t]) - return x[:, :, :self.dim_f] - - def istft(self, x, freq_pad=None): - freq_pad = self.freq_pad.repeat([x.shape[0], 1, 1, 1]) if freq_pad is None else freq_pad - x = torch.cat([x, freq_pad], -2) - # c = 4*2 if self.target_name=='*' else 2 - x = x.reshape([-1, 2, 2, self.n_bins, self.dim_t]).reshape([-1, 2, self.n_bins, self.dim_t]) - x = x.permute([0, 2, 3, 1]) - x = x.contiguous() - x = torch.view_as_complex(x) - x = torch.istft(x, n_fft=self.n_fft, hop_length=self.hop, window=self.window, center=True) - return x.reshape([-1, 2, self.chunk_size]) - - -class MDX: - DEFAULT_SR = 44100 - # Unit: seconds - DEFAULT_CHUNK_SIZE = 0 * DEFAULT_SR 
- DEFAULT_MARGIN_SIZE = 1 * DEFAULT_SR - - DEFAULT_PROCESSOR = 0 - - def __init__(self, model_path: str, params: MDXModel, processor=DEFAULT_PROCESSOR): - - # Set the device and the provider (CPU or CUDA) - self.device = torch.device(f'cuda:{processor}') if processor >= 0 else torch.device('cpu') - self.provider = ['CUDAExecutionProvider'] if processor >= 0 else ['CPUExecutionProvider'] - - self.model = params - - # Load the ONNX model using ONNX Runtime - self.ort = ort.InferenceSession(model_path, providers=self.provider) - # Preload the model for faster performance - self.ort.run(None, {'input': torch.rand(1, 4, params.dim_f, params.dim_t).numpy()}) - self.process = lambda spec: self.ort.run(None, {'input': spec.cpu().numpy()})[0] - - self.prog = None - - @staticmethod - def get_hash(model_path): - try: - with open(model_path, 'rb') as f: - f.seek(- 10000 * 1024, 2) - model_hash = hashlib.md5(f.read()).hexdigest() - except: - model_hash = hashlib.md5(open(model_path, 'rb').read()).hexdigest() - - return model_hash - - @staticmethod - def segment(wave, combine=True, chunk_size=DEFAULT_CHUNK_SIZE, margin_size=DEFAULT_MARGIN_SIZE): - """ - Segment or join segmented wave array - - Args: - wave: (np.array) Wave array to be segmented or joined - combine: (bool) If True, combines segmented wave array. If False, segments wave array. - chunk_size: (int) Size of each segment (in samples) - margin_size: (int) Size of margin between segments (in samples) - - Returns: - numpy array: Segmented or joined wave array - """ - - if combine: - processed_wave = None # Initializing as None instead of [] for later numpy array concatenation - for segment_count, segment in enumerate(wave): - start = 0 if segment_count == 0 else margin_size - end = None if segment_count == len(wave) - 1 else -margin_size - if margin_size == 0: - end = None - if processed_wave is None: # Create array for first segment - processed_wave = segment[:, start:end] - else: # Concatenate to existing array for subsequent segments - processed_wave = np.concatenate((processed_wave, segment[:, start:end]), axis=-1) - - else: - processed_wave = [] - sample_count = wave.shape[-1] - - if chunk_size <= 0 or chunk_size > sample_count: - chunk_size = sample_count - - if margin_size > chunk_size: - margin_size = chunk_size - - for segment_count, skip in enumerate(range(0, sample_count, chunk_size)): - - margin = 0 if segment_count == 0 else margin_size - end = min(skip + chunk_size + margin_size, sample_count) - start = skip - margin - - cut = wave[:, start:end].copy() - processed_wave.append(cut) - - if end == sample_count: - break - - return processed_wave - - def pad_wave(self, wave): - """ - Pad the wave array to match the required chunk size - - Args: - wave: (np.array) Wave array to be padded - - Returns: - tuple: (padded_wave, pad, trim) - - padded_wave: Padded wave array - - pad: Number of samples that were padded - - trim: Number of samples that were trimmed - """ - n_sample = wave.shape[1] - trim = self.model.n_fft // 2 - gen_size = self.model.chunk_size - 2 * trim - pad = gen_size - n_sample % gen_size - - # Padded wave - wave_p = np.concatenate((np.zeros((2, trim)), wave, np.zeros((2, pad)), np.zeros((2, trim))), 1) - - mix_waves = [] - for i in range(0, n_sample + pad, gen_size): - waves = np.array(wave_p[:, i:i + self.model.chunk_size]) - mix_waves.append(waves) - - mix_waves = torch.tensor(mix_waves, dtype=torch.float32).to(self.device) - - return mix_waves, pad, trim - - def _process_wave(self, mix_waves, trim, pad, q: queue.Queue, 
_id: int): - """ - Process each wave segment in a multi-threaded environment - - Args: - mix_waves: (torch.Tensor) Wave segments to be processed - trim: (int) Number of samples trimmed during padding - pad: (int) Number of samples padded during padding - q: (queue.Queue) Queue to hold the processed wave segments - _id: (int) Identifier of the processed wave segment - - Returns: - numpy array: Processed wave segment - """ - mix_waves = mix_waves.split(1) - with torch.no_grad(): - pw = [] - for mix_wave in mix_waves: - self.prog.update() - spec = self.model.stft(mix_wave) - processed_spec = torch.tensor(self.process(spec)) - processed_wav = self.model.istft(processed_spec.to(self.device)) - processed_wav = processed_wav[:, :, trim:-trim].transpose(0, 1).reshape(2, -1).cpu().numpy() - pw.append(processed_wav) - processed_signal = np.concatenate(pw, axis=-1)[:, :-pad] - q.put({_id: processed_signal}) - return processed_signal - - def process_wave(self, wave: np.array, mt_threads=1): - """ - Process the wave array in a multi-threaded environment - - Args: - wave: (np.array) Wave array to be processed - mt_threads: (int) Number of threads to be used for processing - - Returns: - numpy array: Processed wave array - """ - self.prog = tqdm(total=0) - chunk = wave.shape[-1] // mt_threads - waves = self.segment(wave, False, chunk) - - # Create a queue to hold the processed wave segments - q = queue.Queue() - threads = [] - for c, batch in enumerate(waves): - mix_waves, pad, trim = self.pad_wave(batch) - self.prog.total = len(mix_waves) * mt_threads - thread = threading.Thread(target=self._process_wave, args=(mix_waves, trim, pad, q, c)) - thread.start() - threads.append(thread) - for thread in threads: - thread.join() - self.prog.close() - - processed_batches = [] - while not q.empty(): - processed_batches.append(q.get()) - processed_batches = [list(wave.values())[0] for wave in - sorted(processed_batches, key=lambda d: list(d.keys())[0])] - assert len(processed_batches) == len(waves), 'Incomplete processed batches, please reduce batch size!' 
- return self.segment(processed_batches, True, chunk) - - -def run_mdx(model_params, output_dir, model_path, filename, exclude_main=False, exclude_inversion=False, suffix=None, invert_suffix=None, denoise=False, keep_orig=True, m_threads=2): - device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu') - - device_properties = torch.cuda.get_device_properties(device) - vram_gb = device_properties.total_memory / 1024**3 - m_threads = 1 if vram_gb < 8 else 2 - - model_hash = MDX.get_hash(model_path) - mp = model_params.get(model_hash) - model = MDXModel( - device, - dim_f=mp["mdx_dim_f_set"], - dim_t=2 ** mp["mdx_dim_t_set"], - n_fft=mp["mdx_n_fft_scale_set"], - stem_name=mp["primary_stem"], - compensation=mp["compensate"] - ) - - mdx_sess = MDX(model_path, model) - wave, sr = librosa.load(filename, mono=False, sr=44100) - # normalizing input wave gives better output - peak = max(np.max(wave), abs(np.min(wave))) - wave /= peak - if denoise: - wave_processed = -(mdx_sess.process_wave(-wave, m_threads)) + (mdx_sess.process_wave(wave, m_threads)) - wave_processed *= 0.5 - else: - wave_processed = mdx_sess.process_wave(wave, m_threads) - # return to previous peak - wave_processed *= peak - stem_name = model.stem_name if suffix is None else suffix - - main_filepath = None - if not exclude_main: - main_filepath = os.path.join(output_dir, f"{os.path.basename(os.path.splitext(filename)[0])}_{stem_name}.wav") - sf.write(main_filepath, wave_processed.T, sr) - - invert_filepath = None - if not exclude_inversion: - diff_stem_name = stem_naming.get(stem_name) if invert_suffix is None else invert_suffix - stem_name = f"{stem_name}_diff" if diff_stem_name is None else diff_stem_name - invert_filepath = os.path.join(output_dir, f"{os.path.basename(os.path.splitext(filename)[0])}_{stem_name}.wav") - sf.write(invert_filepath, (-wave_processed.T * model.compensation) + wave.T, sr) - - if not keep_orig: - os.remove(filename) - - del mdx_sess, wave_processed, wave - gc.collect() - return main_filepath, invert_filepath diff --git a/spaces/Lianjd/stock_dashboard/backtrader/analyzers/positions.py b/spaces/Lianjd/stock_dashboard/backtrader/analyzers/positions.py deleted file mode 100644 index 17d817fa3c990e31f117df44715143bdc16944d9..0000000000000000000000000000000000000000 --- a/spaces/Lianjd/stock_dashboard/backtrader/analyzers/positions.py +++ /dev/null @@ -1,85 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8; py-indent-offset:4 -*- -############################################################################### -# -# Copyright (C) 2015-2020 Daniel Rodriguez -# -# This program is free software: you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation, either version 3 of the License, or -# (at your option) any later version. -# -# This program is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the -# GNU General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with this program. If not, see . 
-# -############################################################################### -from __future__ import (absolute_import, division, print_function, - unicode_literals) - - -import backtrader as bt - - -class PositionsValue(bt.Analyzer): - '''This analyzer reports the value of the positions of the current set of - datas - - Params: - - - timeframe (default: ``None``) - If ``None`` then the timeframe of the 1st data of the system will be - used - - - compression (default: ``None``) - - Only used for sub-day timeframes to for example work on an hourly - timeframe by specifying "TimeFrame.Minutes" and 60 as compression - - If ``None`` then the compression of the 1st data of the system will be - used - - - headers (default: ``False``) - - Add an initial key to the dictionary holding the results with the names - of the datas ('Datetime' as key - - - cash (default: ``False``) - - Include the actual cash as an extra position (for the header 'cash' - will be used as name) - - Methods: - - - get_analysis - - Returns a dictionary with returns as values and the datetime points for - each return as keys - ''' - params = ( - ('headers', False), - ('cash', False), - ) - - def start(self): - if self.p.headers: - headers = [d._name or 'Data%d' % i - for i, d in enumerate(self.datas)] - self.rets['Datetime'] = headers + ['cash'] * self.p.cash - - tf = min(d._timeframe for d in self.datas) - self._usedate = tf >= bt.TimeFrame.Days - - def next(self): - pvals = [self.strategy.broker.get_value([d]) for d in self.datas] - if self.p.cash: - pvals.append(self.strategy.broker.get_cash()) - - if self._usedate: - self.rets[self.strategy.datetime.date()] = pvals - else: - self.rets[self.strategy.datetime.datetime()] = pvals diff --git a/spaces/Linly-AI/Linly-ChatFlow/models/rope.py b/spaces/Linly-AI/Linly-ChatFlow/models/rope.py deleted file mode 100644 index 34d6cc911c6f99612fd285c74380b0eb73756af6..0000000000000000000000000000000000000000 --- a/spaces/Linly-AI/Linly-ChatFlow/models/rope.py +++ /dev/null @@ -1,30 +0,0 @@ -import torch -from typing import Tuple - -def precompute_freqs_cis(dim: int, end: int, theta: float = 10000.0): - freqs = 1.0 / (theta ** (torch.arange(0, dim, 2)[: (dim // 2)].float() / dim)) - t = torch.arange(end, device=freqs.device) # type: ignore - freqs = torch.outer(t, freqs).float() # type: ignore - freqs_cis = torch.polar(torch.ones_like(freqs), freqs) # complex64 - return freqs_cis - - -def reshape_for_broadcast(freqs_cis: torch.Tensor, x: torch.Tensor): - ndim = x.ndim - assert 0 <= 1 < ndim - assert freqs_cis.shape == (x.shape[1], x.shape[-1]) - shape = [d if i == 1 or i == ndim - 1 else 1 for i, d in enumerate(x.shape)] - return freqs_cis.view(*shape) - - -def apply_rotary_emb( - xq: torch.Tensor, - xk: torch.Tensor, - freqs_cis: torch.Tensor, -) -> Tuple[torch.Tensor, torch.Tensor]: - xq_ = torch.view_as_complex(xq.float().reshape(*xq.shape[:-1], -1, 2)) - xk_ = torch.view_as_complex(xk.float().reshape(*xk.shape[:-1], -1, 2)) - freqs_cis = reshape_for_broadcast(freqs_cis, xq_) - xq_out = torch.view_as_real(xq_ * freqs_cis).flatten(3) - xk_out = torch.view_as_real(xk_ * freqs_cis).flatten(3) - return xq_out.type_as(xq), xk_out.type_as(xk) diff --git a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/det_models/panet_r18_fpem_ffm.py b/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/det_models/panet_r18_fpem_ffm.py deleted file mode 100644 index a69a4d87603275bc1f89b5f58c722d79274e4fd7..0000000000000000000000000000000000000000 --- 
a/spaces/Loren/Streamlit_OCR_comparator/configs/_base_/det_models/panet_r18_fpem_ffm.py +++ /dev/null @@ -1,43 +0,0 @@ -model_poly = dict( - type='PANet', - backbone=dict( - type='mmdet.ResNet', - depth=18, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - norm_cfg=dict(type='SyncBN', requires_grad=True), - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet18'), - norm_eval=True, - style='caffe'), - neck=dict(type='FPEM_FFM', in_channels=[64, 128, 256, 512]), - bbox_head=dict( - type='PANHead', - in_channels=[128, 128, 128, 128], - out_channels=6, - loss=dict(type='PANLoss'), - postprocessor=dict(type='PANPostprocessor', text_repr_type='poly')), - train_cfg=None, - test_cfg=None) - -model_quad = dict( - type='PANet', - backbone=dict( - type='mmdet.ResNet', - depth=18, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=-1, - norm_cfg=dict(type='SyncBN', requires_grad=True), - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet18'), - norm_eval=True, - style='caffe'), - neck=dict(type='FPEM_FFM', in_channels=[64, 128, 256, 512]), - bbox_head=dict( - type='PANHead', - in_channels=[128, 128, 128, 128], - out_channels=6, - loss=dict(type='PANLoss'), - postprocessor=dict(type='PANPostprocessor', text_repr_type='quad')), - train_cfg=None, - test_cfg=None) diff --git a/spaces/Margaret/mazzuma-sentiment-engine/README.md b/spaces/Margaret/mazzuma-sentiment-engine/README.md deleted file mode 100644 index 46d090df5b558a1c03b9b293d920de605f283eed..0000000000000000000000000000000000000000 --- a/spaces/Margaret/mazzuma-sentiment-engine/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Mazzuma Sentiment Engine -emoji: 🐢 -colorFrom: purple -colorTo: gray -sdk: gradio -sdk_version: 3.0.20 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Marshalls/testmtd/analysis/sandbox0.py b/spaces/Marshalls/testmtd/analysis/sandbox0.py deleted file mode 100644 index 444e5b8cebbb526184ec7b853492c556fc5ab3e2..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/analysis/sandbox0.py +++ /dev/null @@ -1,182 +0,0 @@ -from analysis.pymo.parsers import BVHParser -from analysis.pymo.data import Joint, MocapData -from analysis.pymo.preprocessing import * -from analysis.pymo.viz_tools import * -from analysis.pymo.writers import * -import analysis.pymo -import imp;imp.reload(analysis.pymo) -import imp;imp.reload(analysis.pymo.preprocessing) -from sklearn.pipeline import Pipeline -from analysis.pymo.rotation_tools import euler2expmap - -import matplotlib.pyplot as plt - -#%% -p = BVHParser() - -# f="data/dance_full/aistpp_bvh/bvh/gWA_sFM_cAll_d26_mWA4_ch12.bvh" -# f="data/dance_full/shadermotion_data2_retarget/bvh/VRChat_Dance_2.bvh" -# f="data/dance_full/shadermotion_data2_retarget/bvh/VRChat_Dance_8.bvh" -# f="data/dance_full/kth_streetdance_data/bvh/Streetdance_001.bvh" -# f="data/dance_full/shadermotion_justdance/bvh/justdance_0.bvh" -# f="data/dance_full/vibe_dance/bvh/Take1.bvh" -# f="data/dance_full/shadermotion_data2_retarget/bvh/VRChat_Dance_0.bvh" -f1="data/dance_full/kth_streetdance_data/bvh/Streetdance_001.bvh" -f2="data/dance_full/shadermotion_justdance/bvh/justdance_0.bvh" -# f2="data/dance_full/shadermotion_justdance/bvh/justdance_1.bvh" -f1="/media/guillefix/SAMSUNG/mt-lightning-stuff/dance_full/shadermotion_justdance/bvh/justdance_1.bvh" -# f="data/dance_full/tmp/bvh/VRChat_Dance_0.bvh" -# 
f="data/dance_full/testing/VRChat_Dance_0.bvh" -# f="data/dance_full/tmp/bvh/VRChat_Dance_0.bvh" -# f="data/dance_full/testing/VRChat_Dance_0.bvh" - -data = p.parse(f1) -# data2 = p.parse(f2) - -len(data.skeleton.items()) -#%% - -# print_skel(data) - -# f="analysis/mixamo.bvh" -# -# data = p.parse(f) -# -# print_skel(data) -data.values["LeftFoot_Zrotation"][2] -data2.values["LeftFoot_Zrotation"][2] -data2.values["LeftFoot_Zrotation"] = data.values["LeftFoot_Zrotation"].values[:13250] -data2.values["LeftFoot_Xrotation"] = data.values["LeftFoot_Xrotation"].values[:13250] -data2.values["LeftFoot_Yrotation"] = data.values["LeftFoot_Yrotation"].values[:13250] -euler2expmap((data.values["LeftFoot_Zrotation"][1], data.values["LeftFoot_Xrotation"][1], data.values["LeftFoot_Yrotation"][1]), 'ZXY', True) -e1=euler2expmap((data2.values["LeftFoot_Zrotation"][0], data2.values["LeftFoot_Xrotation"][0], data2.values["LeftFoot_Yrotation"][0]), 'ZXY', True) -e2=euler2expmap((data2.values["LeftFoot_Zrotation"][1], data2.values["LeftFoot_Xrotation"][1], data2.values["LeftFoot_Yrotation"][1]), 'ZXY', True) -e3=euler2expmap((data2.values["LeftFoot_Zrotation"][2], data2.values["LeftFoot_Xrotation"][2], data2.values["LeftFoot_Yrotation"][2]), 'ZXY', True) - -np.linalg.norm(e1) - np.linalg.norm(e2) -np.linalg.norm(e2) - np.linalg.norm(e3) -(2*np.pi - np.linalg.norm(e2)) - (np.linalg.norm(e3)) - -data.values["LeftFoot_Zrotation"].mean() -data2.values["LeftFoot_Zrotation"].mean() -list(data2.values.std()) -list(data2.values.mean()) -list(data.values.mean()) - -data.values - -data.skeleton - -#%% - -# fps=60 -# p = BVHParser() -data_pipe = Pipeline([ - # ('dwnsampl', DownSampler(tgt_fps=fps, keep_all=False)), - ('mir', Mirror(axis='X', append=True)), - ('root', RootTransformer('pos_rot_deltas')), - ('jtsel', JointSelector(['Spine', 'Spine1', 'Neck', 'Head', 'RightShoulder', 'RightArm', 'RightForeArm', 'RightHand', 'LeftShoulder', 'LeftArm', 'LeftForeArm', 'LeftHand', 'RightUpLeg', 'RightLeg', 'RightFoot', 'RightToeBase', 'LeftUpLeg', 'LeftLeg', 'LeftFoot', 'LeftToeBase'], include_root=True)), - # ('jtsel', JointSelector(['Spine1', 'Spine', 'Neck', 'Head', 'RightShoulder', 'RightArm', 'RightForeArm', 'RightHand', 'LeftShoulder', 'LeftArm', 'LeftForeArm', 'LeftHand', 'RightUpLeg', 'RightLeg', 'RightFoot', 'RightToeBase', 'LeftUpLeg', 'LeftLeg', 'LeftFoot', 'LeftToeBase'], include_root=True)), - # ('exp', MocapParameterizer('position')), - ('exp', MocapParameterizer('expmap')), - ('cnst', ConstantsRemover(only_cols=["Hips_Xposition", "Hips_Zposition"])), - # ('np', Numpyfier()) -]) - - -out_data = data_pipe.fit_transform([data]) -out_data2 = data_pipe.fit_transform([data2]) -out_data[0].values.columns.size -out_data2[0].values.columns.size - -out_data[0].values.columns[17] -out_data[0].values -out_data2[0].values -out_data2[0].values["LeftFoot_beta"].std() -out_data2[0].values["LeftFoot_beta"].max() -out_data2[0].values["LeftFoot_beta"].mean() -out_data[0].values["LeftFoot_beta"].std() -out_data[0].values["LeftFoot_beta"].max() -out_data[0].values["LeftFoot_beta"].mean() -(out_data[0].values["LeftFoot_alpha"]**2 + out_data[0].values["LeftFoot_beta"]**2 + out_data[0].values["LeftFoot_gamma"]**2).mean() -(out_data2[0].values["LeftFoot_alpha"]**2 + out_data2[0].values["LeftFoot_beta"]**2 + out_data2[0].values["LeftFoot_gamma"]**2).mean() -out_data[0].values["LeftFoot_gamma"][1] -out_data2[0].values["LeftFoot_gamma"][3] - -(out_data[0].values["RightFoot_alpha"]**2 + out_data[0].values["RightFoot_beta"]**2 + 
out_data[0].values["RightFoot_gamma"]**2).mean() -(out_data2[0].values["RightFoot_alpha"][10:]**2 + out_data2[0].values["RightFoot_beta"][10:]**2 + out_data2[0].values["RightFoot_gamma"][10:]**2).mean() -(out_data2[0].values["RightFoot_alpha"][10:]**2 + out_data2[0].values["RightFoot_beta"][10:]**2 + out_data2[0].values["RightFoot_gamma"][10:]**2).diff().max() - -np.diff(out_data2[0].values["LeftFoot_beta"]).max() -np.diff(out_data[0].values["LeftFoot_beta"]).max() - -out_data[0].shape -inv_data = data_pipe.inverse_transform(out_data) -inv_data[0] == data - -data.values -inv_data[0].values - -# out_data[0][0] -# out_data[0].values.columns - -# video_file = "analysis/tmp/Streetdance_001.mp4" -# video_file = "analysis/tmp/sm01.mp4" -video_file = "analysis/tmp/sm01b.mp4" -render_mp4(out_data[0], video_file, axis_scale=3, elev=45, azim=45) -# render_mp4(out_data[0], video_file, axis_scale=100, elev=45, azim=45) -# audio_file = "data/dance_full/kth_streetdance_data/music/Streetdance_001.wav" -# audio_file = "data/dance_full/vibe_dance/audio/audio_001.wav" -# audio_file = "data/dance_full/shadermotion_data2_retarget/audio/VRChat\ Dance_0.wav" -audio_file = "data/dance_full/testing/VRChat_Dance_0.mp3" -# audio_file = "data/dance_full/tmp/audio/VRChat\ Dance_0.wav" -from analysis.visualization.utils import generate_video_from_images, join_video_and_audio -join_video_and_audio(video_file, audio_file, 0) - -yposs = list(filter(lambda x: x.split("_")[1]=="Yposition", out_data[0].values.columns)) - -out_data[0].values[yposs].iloc[100:].min().min() -out_data[0].values[yposs].min() -out_data[0].values[yposs].iloc[10:] -out_data[0].values["Hips_Yposition"].iloc[52] - -# out_data[0].values -out_data.shape -out_data[0,:10,-1] - -bvh_data=data_pipe.inverse_transform(out_data) - -writer = BVHWriter() -with open('analysis/tmp/test.bvh','w') as f: - writer.write(bvh_data[0], f) - - -#### -last_index = data.values[(data.values["Hips_Xposition"] > 100000) | (data.values["Hips_Xposition"] < -100000)].index[-1] - -data.values.loc[last_index:].iloc[1:] - - -################## - -import numpy as np - -a = np.load("inference/generated_1/transflower_expmap_finetune2/predicted_mods/aistpp_gBR_sBM_cAll_d04_mBR0_ch10.expmap_scaled_20.generated.npy") - -a[:2,0,-9:] - -######################## -#%% - -# import pickle -import joblib as jl -data_pipe = jl.load(open("data/dance_combined/motion_expmap_cr_scaled_20_data_pipe.sav", "rb")) - -data = np.load("data/dance_combined/justdance_0_mirrored.bvh_expmap_cr.npy") -data = np.load("data/dance_combined/justdance_0.bvh_expmap_cr.npy") - -bvh_data=data_pipe.inverse_transform([data]) - -writer = BVHWriter() -with open('analysis/tmp/test.bvh','w') as f: - writer.write(bvh_data[0], f) diff --git a/spaces/Meena/table-question-answering-space/app.py b/spaces/Meena/table-question-answering-space/app.py deleted file mode 100644 index e001d6c4916acdf547d2fe691ec6509490f61d08..0000000000000000000000000000000000000000 --- a/spaces/Meena/table-question-answering-space/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import pandas as pd -import torch -from io import StringIO -import streamlit as st -from app.tapas import execute_query - -query = st.text_input(label='Enter your query') -st.caption('Multiple queries separated by comma(,)') -# st.write('The current movie title is', title) -uploaded_file = st.file_uploader("Choose a csv file") - -if uploaded_file is not None: - dataframe = pd.read_csv(uploaded_file) - if query: - query_results = execute_query(query, dataframe) - 
st.markdown('**Prediction**') - for query_result in query_results: - st.markdown('_'+query_result+'_') - st.dataframe(dataframe) - - - - - - - - - \ No newline at end of file diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/decode_heads/da_head.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/decode_heads/da_head.py deleted file mode 100644 index 5cd49fcfdc7c0a70f9485cc71843dcf3e0cb1774..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/models/decode_heads/da_head.py +++ /dev/null @@ -1,178 +0,0 @@ -import torch -import torch.nn.functional as F -from annotator.uniformer.mmcv.cnn import ConvModule, Scale -from torch import nn - -from annotator.uniformer.mmseg.core import add_prefix -from ..builder import HEADS -from ..utils import SelfAttentionBlock as _SelfAttentionBlock -from .decode_head import BaseDecodeHead - - -class PAM(_SelfAttentionBlock): - """Position Attention Module (PAM) - - Args: - in_channels (int): Input channels of key/query feature. - channels (int): Output channels of key/query transform. - """ - - def __init__(self, in_channels, channels): - super(PAM, self).__init__( - key_in_channels=in_channels, - query_in_channels=in_channels, - channels=channels, - out_channels=in_channels, - share_key_query=False, - query_downsample=None, - key_downsample=None, - key_query_num_convs=1, - key_query_norm=False, - value_out_num_convs=1, - value_out_norm=False, - matmul_norm=False, - with_out=False, - conv_cfg=None, - norm_cfg=None, - act_cfg=None) - - self.gamma = Scale(0) - - def forward(self, x): - """Forward function.""" - out = super(PAM, self).forward(x, x) - - out = self.gamma(out) + x - return out - - -class CAM(nn.Module): - """Channel Attention Module (CAM)""" - - def __init__(self): - super(CAM, self).__init__() - self.gamma = Scale(0) - - def forward(self, x): - """Forward function.""" - batch_size, channels, height, width = x.size() - proj_query = x.view(batch_size, channels, -1) - proj_key = x.view(batch_size, channels, -1).permute(0, 2, 1) - energy = torch.bmm(proj_query, proj_key) - energy_new = torch.max( - energy, -1, keepdim=True)[0].expand_as(energy) - energy - attention = F.softmax(energy_new, dim=-1) - proj_value = x.view(batch_size, channels, -1) - - out = torch.bmm(attention, proj_value) - out = out.view(batch_size, channels, height, width) - - out = self.gamma(out) + x - return out - - -@HEADS.register_module() -class DAHead(BaseDecodeHead): - """Dual Attention Network for Scene Segmentation. - - This head is the implementation of `DANet - `_. - - Args: - pam_channels (int): The channels of Position Attention Module(PAM). 
- """ - - def __init__(self, pam_channels, **kwargs): - super(DAHead, self).__init__(**kwargs) - self.pam_channels = pam_channels - self.pam_in_conv = ConvModule( - self.in_channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.pam = PAM(self.channels, pam_channels) - self.pam_out_conv = ConvModule( - self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.pam_conv_seg = nn.Conv2d( - self.channels, self.num_classes, kernel_size=1) - - self.cam_in_conv = ConvModule( - self.in_channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.cam = CAM() - self.cam_out_conv = ConvModule( - self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - self.cam_conv_seg = nn.Conv2d( - self.channels, self.num_classes, kernel_size=1) - - def pam_cls_seg(self, feat): - """PAM feature classification.""" - if self.dropout is not None: - feat = self.dropout(feat) - output = self.pam_conv_seg(feat) - return output - - def cam_cls_seg(self, feat): - """CAM feature classification.""" - if self.dropout is not None: - feat = self.dropout(feat) - output = self.cam_conv_seg(feat) - return output - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - pam_feat = self.pam_in_conv(x) - pam_feat = self.pam(pam_feat) - pam_feat = self.pam_out_conv(pam_feat) - pam_out = self.pam_cls_seg(pam_feat) - - cam_feat = self.cam_in_conv(x) - cam_feat = self.cam(cam_feat) - cam_feat = self.cam_out_conv(cam_feat) - cam_out = self.cam_cls_seg(cam_feat) - - feat_sum = pam_feat + cam_feat - pam_cam_out = self.cls_seg(feat_sum) - - return pam_cam_out, pam_out, cam_out - - def forward_test(self, inputs, img_metas, test_cfg): - """Forward function for testing, only ``pam_cam`` is used.""" - return self.forward(inputs)[0] - - def losses(self, seg_logit, seg_label): - """Compute ``pam_cam``, ``pam``, ``cam`` loss.""" - pam_cam_seg_logit, pam_seg_logit, cam_seg_logit = seg_logit - loss = dict() - loss.update( - add_prefix( - super(DAHead, self).losses(pam_cam_seg_logit, seg_label), - 'pam_cam')) - loss.update( - add_prefix( - super(DAHead, self).losses(pam_seg_logit, seg_label), 'pam')) - loss.update( - add_prefix( - super(DAHead, self).losses(cam_seg_logit, seg_label), 'cam')) - return loss diff --git a/spaces/MilesCranmer/PySR/install_pysr.sh b/spaces/MilesCranmer/PySR/install_pysr.sh deleted file mode 100644 index 3885cfc03c9e8b957318d6b9e699963cdc6587f1..0000000000000000000000000000000000000000 --- a/spaces/MilesCranmer/PySR/install_pysr.sh +++ /dev/null @@ -1,14 +0,0 @@ -import os - -# Install Julia: -if [ ! -f "/home/user/.local/bin/julia" ]; then - wget https://raw.githubusercontent.com/abelsiqueira/jill/main/jill.sh - chmod a+x jill.sh - ./jill.sh --version 1.8.2 -y -fi - -# Need to install PySR in separate python instance: -if [ ! 
-d "/home/user/.julia/environments/pysr-0.11.9" ]; then - export PATH="$HOME/.local/bin:$PATH" - python -c 'import pysr; pysr.install()' -fi \ No newline at end of file diff --git a/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/lib/renderer/__init__.py b/spaces/MisterZee/PIFu-Clothed-Human-Digitization/PIFu/lib/renderer/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/layers/conv_layer.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/layers/conv_layer.py deleted file mode 100644 index a60f2f5599318e29fd3e97b6079fa6db388a507e..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/layers/conv_layer.py +++ /dev/null @@ -1,182 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn -from mmcv.cnn import build_plugin_layer - - -def conv3x3(in_planes, out_planes, stride=1): - """3x3 convolution with padding.""" - return nn.Conv2d( - in_planes, - out_planes, - kernel_size=3, - stride=stride, - padding=1, - bias=False) - - -def conv1x1(in_planes, out_planes): - """1x1 convolution with padding.""" - return nn.Conv2d( - in_planes, out_planes, kernel_size=1, stride=1, padding=0, bias=False) - - -class BasicBlock(nn.Module): - - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - downsample=None, - use_conv1x1=False, - plugins=None): - super().__init__() - - if use_conv1x1: - self.conv1 = conv1x1(inplanes, planes) - self.conv2 = conv3x3(planes, planes * self.expansion, stride) - else: - self.conv1 = conv3x3(inplanes, planes, stride) - self.conv2 = conv3x3(planes, planes * self.expansion) - - self.with_plugins = False - if plugins: - if isinstance(plugins, dict): - plugins = [plugins] - self.with_plugins = True - # collect plugins for conv1/conv2/ - self.before_conv1_plugin = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'before_conv1' - ] - self.after_conv1_plugin = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv1' - ] - self.after_conv2_plugin = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv2' - ] - self.after_shortcut_plugin = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_shortcut' - ] - - self.planes = planes - self.bn1 = nn.BatchNorm2d(planes) - self.relu = nn.ReLU(inplace=True) - self.bn2 = nn.BatchNorm2d(planes * self.expansion) - self.downsample = downsample - self.stride = stride - - if self.with_plugins: - self.before_conv1_plugin_names = self.make_block_plugins( - inplanes, self.before_conv1_plugin) - self.after_conv1_plugin_names = self.make_block_plugins( - planes, self.after_conv1_plugin) - self.after_conv2_plugin_names = self.make_block_plugins( - planes, self.after_conv2_plugin) - self.after_shortcut_plugin_names = self.make_block_plugins( - planes, self.after_shortcut_plugin) - - def make_block_plugins(self, in_channels, plugins): - """make plugins for block. - - Args: - in_channels (int): Input channels of plugin. - plugins (list[dict]): List of plugins cfg to build. - - Returns: - list[str]: List of the names of plugin. 
- """ - assert isinstance(plugins, list) - plugin_names = [] - for plugin in plugins: - plugin = plugin.copy() - name, layer = build_plugin_layer( - plugin, - in_channels=in_channels, - out_channels=in_channels, - postfix=plugin.pop('postfix', '')) - assert not hasattr(self, name), f'duplicate plugin {name}' - self.add_module(name, layer) - plugin_names.append(name) - return plugin_names - - def forward_plugin(self, x, plugin_names): - out = x - for name in plugin_names: - out = getattr(self, name)(x) - return out - - def forward(self, x): - if self.with_plugins: - x = self.forward_plugin(x, self.before_conv1_plugin_names) - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - out = self.conv2(out) - out = self.bn2(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_shortcut_plugin_names) - - return out - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1, downsample=False): - super().__init__() - self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) - self.bn1 = nn.BatchNorm2d(planes) - self.conv2 = nn.Conv2d(planes, planes, 3, stride, 1, bias=False) - self.bn2 = nn.BatchNorm2d(planes) - self.conv3 = nn.Conv2d( - planes, planes * self.expansion, kernel_size=1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * self.expansion) - self.relu = nn.ReLU(inplace=True) - if downsample: - self.downsample = nn.Sequential( - nn.Conv2d( - inplanes, planes * self.expansion, 1, stride, bias=False), - nn.BatchNorm2d(planes * self.expansion), - ) - else: - self.downsample = nn.Sequential() - - def forward(self, x): - residual = self.downsample(x) - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - out += residual - out = self.relu(out) - - return out diff --git a/spaces/NCTCMumbai/NCTC/models/official/staging/training/__init__.py b/spaces/NCTCMumbai/NCTC/models/official/staging/training/__init__.py deleted file mode 100644 index 931c2ef11db4a949e6c2e95bca44e36bac1241e9..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/staging/training/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== diff --git a/spaces/Nikhil0987/omm/chat.py b/spaces/Nikhil0987/omm/chat.py deleted file mode 100644 index cf69df793404f2c0c11e16b1e5d08fa862ac2cbe..0000000000000000000000000000000000000000 --- a/spaces/Nikhil0987/omm/chat.py +++ /dev/null @@ -1,31 +0,0 @@ -# from transformers import pipeline, Conversation -# # import streamlit_option_menu -# import streamlit as st - -# def Chat(): - -# query = st.chat_input("Enter your query") -# convo = pipeline("conversational") -# oracle = pipeline(task="zero-shot-classification", model="facebook/bart-large-mnli") -# usrinput = Conversation(query) -# chitchat = convo(usrinput) -# ans = oracle( -# query, -# candidate_labels=["logout"]) - -# if ans["scores"][0] > 0.85: -# st.session_state["user"] = "visitor" -# with st.chat_message("assistant"): -# "You are now living in dream" -# st.experimental_rerun() -# else: -# with st.chat_message("assistant"): -# chitchat - - - - - - - - diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/laser/laser_src/multitask_data_utils.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/laser/laser_src/multitask_data_utils.py deleted file mode 100644 index b05caea26793bf5112a7abc29d76225f578f3ebe..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/laser/laser_src/multitask_data_utils.py +++ /dev/null @@ -1,143 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from collections import OrderedDict - -import numpy as np - -from fairseq.data import BaseWrapperDataset, FairseqDataset, iterators - - -class MultiItr(object): - def __init__(self, itr): - self.itr = itr - self._counts = [0 for x in itr] - - def __len__(self): - return sum(len(itr) for itr in self.itr) - - def __iter__(self): - return self - - def __next__(self): - ratios = [count / len(itr) for count, itr in zip(self._counts, self.itr)] - idx = ratios.index(min(ratios)) - self._counts[idx] += 1 - return next(self.itr[idx]) - - -class MultidatasetEpochBatchIterator(iterators.EpochBatchIterating): - """A wrapper around multiple epoch batch iterators.""" - - def __init__( - self, - dataset, - batch_sampler, - seed=1, - num_shards=1, - shard_id=0, - num_workers=0, - epoch=1, - ): - - assert isinstance(dataset, OrderedDict) - assert len(dataset) - assert isinstance(dataset[next(iter(dataset))], FairseqDataset) - - self.iterators = [] - - self.epoch = epoch - for key, dt in dataset.items(): - epoch_iter = iterators.EpochBatchIterator( - dataset=dt, - collate_fn=dt.collater, - batch_sampler=batch_sampler[key], - seed=seed, - num_shards=num_shards, - shard_id=shard_id, - num_workers=0, - epoch=epoch, - ) - self.iterators.append(epoch_iter) - - def __len__(self): - return sum(len(itr) for itr in self.iterators) - - def next_epoch_itr(self, shuffle=True, fix_batches_to_gpus=False): - # `self.epoch += 1` should be handled by underlying `EpochBatchIterator`s. 
- return MultiItr( - [ - itr.next_epoch_itr( - shuffle=shuffle, fix_batches_to_gpus=fix_batches_to_gpus - ) - for itr in self.iterators - ] - ) - - def end_of_epoch(self): - return all(itr.end_of_epoch() for itr in self.iterators) - - @property - def next_epoch_idx(self): - """Return the epoch index after *next_epoch_itr* is called.""" - - epochs = [itr.next_epoch_idx for itr in self.iterators] - self.epoch = epochs[0] - assert all(epoch == self.epoch for epoch in epochs) - - return self.epoch - - @property - def iterations_in_epoch(self): - return sum(itr.iterations_in_epoch for itr in self.iterators) - - def state_dict(self): - return { - "iterators": [it.state_dict() for it in self.iterators], - "epoch": self.epoch, - } - - def load_state_dict(self, state_dict): - self.epoch = state_dict["epoch"] - for it, d in zip(self.iterators, state_dict["iterators"]): - it.load_state_dict(d) - - -class MultitaskDatasetWrapper(BaseWrapperDataset): - """A wrapper for a multitask dataset.""" - - def __init__(self, dataset, target_language_id, sample=1.0, name=""): - super().__init__(dataset) - self.target_language_id = target_language_id - self.sample = sample - self.name = name - - def collater(self, *args, **kwargs): - ans = self.dataset.collater(*args, **kwargs) - if "net_input" in ans: - ans["net_input"]["target_language_id"] = self.target_language_id - ans["net_input"]["dataset_name"] = self.name - return ans - - def num_tokens(self, *args, **kwargs): - return self.dataset.num_tokens(*args, **kwargs) - - def ordered_indices(self, *args, **kwargs): - indices = self.dataset.ordered_indices(*args, **kwargs) - # Hacky solution for sampling - size = int(self.sample * indices.shape[0]) - - return indices.take(np.sort(np.random.permutation(indices.shape[0])[:size])) - - def size(self, index: int): - return self.dataset.size(index) - - @property - def supports_prefetch(self): - """Whether this dataset supports prefetching.""" - return getattr(self.dataset, "supports_prefetch", False) - - def prefetch(self, indices): - return self.dataset.prefetch(indices) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_memory_efficient_fp16.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_memory_efficient_fp16.py deleted file mode 100644 index 2bf2f29888d6027896128930626b1aafe7f18475..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/tests/test_memory_efficient_fp16.py +++ /dev/null @@ -1,78 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import logging -import unittest - -import torch -from fairseq.optim.adam import FairseqAdam -from fairseq.optim.fp16_optimizer import MemoryEfficientFP16Optimizer -from omegaconf import OmegaConf - - -@unittest.skipIf(not torch.cuda.is_available(), "test requires a GPU") -class TestMemoryEfficientFP16(unittest.TestCase): - def setUp(self): - logging.disable(logging.CRITICAL) - - def tearDown(self): - logging.disable(logging.NOTSET) - - def test_load_state_dict(self): - # define simple FP16 model - model = torch.nn.Linear(5, 5).cuda().half() - params = list(model.parameters()) - - # initialize memory efficient FP16 optimizer - # with pseudo DictConfigs - optimizer = FairseqAdam( - cfg=OmegaConf.create( - vars( - argparse.Namespace( - adam_betas="(0.9, 0.999)", - adam_eps=1e-8, - weight_decay=0.0, - lr=[0.00001], - ) - ) - ), - params=params, - ) - me_optimizer = MemoryEfficientFP16Optimizer( - cfg=OmegaConf.create( - { - "common": vars( - argparse.Namespace( - fp16_init_scale=1, - fp16_scale_window=1, - fp16_scale_tolerance=1, - threshold_loss_scale=1, - min_loss_scale=1e-4, - ) - ) - } - ), - params=params, - optimizer=optimizer, - ) - - # optimizer state is created in the first step - loss = model(torch.rand(5).cuda().half()).sum() - me_optimizer.backward(loss) - me_optimizer.step() - - # reload state - state = me_optimizer.state_dict() - me_optimizer.load_state_dict(state) - for k, v in me_optimizer.optimizer.state.items(): - self.assertTrue(k.dtype == torch.float16) - for v_i in v.values(): - if torch.is_tensor(v_i): - self.assertTrue(v_i.dtype == torch.float32) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_text_joint_to_text/README.md b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_text_joint_to_text/README.md deleted file mode 100644 index e071d241e0e02b35d3aac777ac09b4ef3be9119f..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_text_joint_to_text/README.md +++ /dev/null @@ -1,46 +0,0 @@ -# Joint Speech Text training in Fairseq -An extension of Fairseq s2t project with the speech to text task enhanced by the co-trained text to text mapping task. More details about Fairseq s2t can be found [here](../speech_to_text/README.md) - -## Examples -Examples of speech text joint training in fairseq -- [English-to-German MuST-C model](docs/ende-mustc.md) -- [IWSLT 2021 Multilingual Speech Translation](docs/iwslt2021.md) - -## Citation -Please cite as: -``` -@inproceedings{Tang2021AGM, - title={A General Multi-Task Learning Framework to Leverage Text Data for Speech to Text Tasks}, - author={Yun Tang and J. 
Pino and Changhan Wang and Xutai Ma and Dmitriy Genzel}, - booktitle={ICASSP}, - year={2021} -} - -@inproceedings{Tang2021IST, - title = {Improving Speech Translation by Understanding and Learning from the Auxiliary Text Translation Task}, - author = {Yun Tang and Juan Pino and Xian Li and Changhan Wang and Dmitriy Genzel}, - booktitle = {ACL}, - year = {2021}, -} - -@inproceedings{Tang2021FST, - title = {FST: the FAIR Speech Translation System for the IWSLT21 Multilingual Shared Task}, - author = {Yun Tang and Hongyu Gong and Xian Li and Changhan Wang and Juan Pino and Holger Schwenk and Naman Goyal}, - booktitle = {IWSLT}, - year = {2021}, -} - -@inproceedings{wang2020fairseqs2t, - title = {fairseq S2T: Fast Speech-to-Text Modeling with fairseq}, - author = {Changhan Wang and Yun Tang and Xutai Ma and Anne Wu and Dmytro Okhonko and Juan Pino}, - booktitle = {Proceedings of the 2020 Conference of the Asian Chapter of the Association for Computational Linguistics (AACL): System Demonstrations}, - year = {2020}, -} - -@inproceedings{ott2019fairseq, - title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling}, - author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli}, - booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations}, - year = {2019}, -} -``` diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/audio/data_cfg.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/audio/data_cfg.py deleted file mode 100644 index 95b403ad9c617afb5656131693c92b9cc3befd3b..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/data/audio/data_cfg.py +++ /dev/null @@ -1,139 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from pathlib import Path -from typing import Dict, Optional - - -class S2TDataConfig(object): - """Wrapper class for data config YAML""" - - def __init__(self, yaml_path: Path): - try: - import yaml - except ImportError: - print("Please install PyYAML: pip install PyYAML") - self.config = {} - if yaml_path.is_file(): - try: - with open(yaml_path) as f: - self.config = yaml.load(f, Loader=yaml.FullLoader) - except Exception as e: - raise Exception( - f"Failed to load config from {yaml_path.as_posix()}: {e}" - ) - else: - raise FileNotFoundError(f"{yaml_path.as_posix()} not found") - self.root = yaml_path.parent - - def _auto_convert_to_abs_path(self, x): - if isinstance(x, str): - if not Path(x).exists() and (self.root / x).exists(): - return (self.root / x).as_posix() - elif isinstance(x, dict): - return {k: self._auto_convert_to_abs_path(v) for k, v in x.items()} - return x - - @property - def vocab_filename(self): - """fairseq vocabulary file under data root""" - return self.config.get("vocab_filename", "dict.txt") - - @property - def speaker_set_filename(self): - """fairseq vocabulary file under data root""" - return self.config.get("speaker_set_filename", None) - - @property - def shuffle(self) -> bool: - """Shuffle dataset samples before batching""" - return self.config.get("shuffle", False) - - @property - def pre_tokenizer(self) -> Dict: - """Pre-tokenizer to apply before subword tokenization. Returning - a dictionary with `tokenizer` providing the tokenizer name and - the other items providing the tokenizer-specific arguments. 
- Tokenizers are defined in `fairseq.data.encoders.*`""" - tokenizer = self.config.get("pre_tokenizer", {"tokenizer": None}) - return self._auto_convert_to_abs_path(tokenizer) - - @property - def bpe_tokenizer(self) -> Dict: - """Subword tokenizer to apply after pre-tokenization. Returning - a dictionary with `bpe` providing the tokenizer name and - the other items providing the tokenizer-specific arguments. - Tokenizers are defined in `fairseq.data.encoders.*`""" - tokenizer = self.config.get("bpe_tokenizer", {"bpe": None}) - return self._auto_convert_to_abs_path(tokenizer) - - @property - def prepend_tgt_lang_tag(self) -> bool: - """Prepend target lang ID token as the target BOS (e.g. for to-many - multilingual setting). During inference, this requires `--prefix-size 1` - to force BOS to be lang ID token.""" - return self.config.get("prepend_tgt_lang_tag", False) - - @property - def input_feat_per_channel(self): - """The dimension of input features (per audio channel)""" - return self.config.get("input_feat_per_channel", 80) - - @property - def input_channels(self): - """The number of channels in the input audio""" - return self.config.get("input_channels", 1) - - @property - def sample_rate(self): - return self.config.get("sample_rate", 16_000) - - @property - def sampling_alpha(self): - """Hyper-parameter alpha = 1/T for temperature-based resampling. - (alpha = 1 for no resampling)""" - return self.config.get("sampling_alpha", 1.0) - - @property - def use_audio_input(self): - """Needed by the dataset loader to see if the model requires - raw audio as inputs.""" - return self.config.get("use_audio_input", False) - - @property - def use_sample_rate(self): - """Needed by the dataset loader to see if the model requires - raw audio with specific sample rate as inputs.""" - return self.config.get("use_sample_rate", 16000) - - @property - def audio_root(self): - """Audio paths in the manifest TSV can be relative and this provides - the root path. Set this to empty string when using absolute paths.""" - return self.config.get("audio_root", "") - - def get_feature_transforms(self, split, is_train): - """Split-specific feature transforms. 
Allowing train set - wildcard `_train`, evaluation set wildcard `_eval` and general - wildcard `*` for matching.""" - from copy import deepcopy - - cfg = deepcopy(self.config) - _cur = cfg.get("transforms", {}) - cur = _cur.get(split) - cur = _cur.get("_train") if cur is None and is_train else cur - cur = _cur.get("_eval") if cur is None and not is_train else cur - cur = _cur.get("*") if cur is None else cur - cfg["transforms"] = cur - return cfg - - @property - def global_cmvn_stats_npz(self) -> Optional[str]: - path = self.config.get("global_cmvn", {}).get("stats_npz_path", None) - return self._auto_convert_to_abs_path(path) - - @property - def vocoder(self) -> Optional[Dict[str, str]]: - return self.config.get("vocoder", None) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/modeling/text/load_text_token.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/modeling/text/load_text_token.py deleted file mode 100644 index 8491021bf5d7d23d7f3826395f270dccad30df36..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/grit/modeling/text/load_text_token.py +++ /dev/null @@ -1,80 +0,0 @@ -import torch - - -class LoadTextTokens(object): - def __init__(self, tokenizer, max_text_len=40, padding='do_not_pad'): - self.tokenizer = tokenizer - self.max_text_len = max_text_len - self.padding = padding - - def descriptions_to_text_tokens(self, target, begin_token): - target_encoding = self.tokenizer( - target, padding=self.padding, - add_special_tokens=False, - truncation=True, max_length=self.max_text_len) - - need_predict = [1] * len(target_encoding['input_ids']) - payload = target_encoding['input_ids'] - if len(payload) > self.max_text_len - 2: - payload = payload[-(self.max_text_len - 2):] - need_predict = payload[-(self.max_text_len - 2):] - - input_ids = [begin_token] + payload + [self.tokenizer.sep_token_id] - - need_predict = [0] + need_predict + [1] - data = { - 'text_tokens': torch.tensor(input_ids), - 'text_lengths': len(input_ids), - 'need_predict': torch.tensor(need_predict), - } - - return data - - def __call__(self, object_descriptions, box_features, begin_token): - text_tokens = [] - text_lengths = [] - need_predict = [] - for description in object_descriptions: - tokens = self.descriptions_to_text_tokens(description, begin_token) - text_tokens.append(tokens['text_tokens']) - text_lengths.append(tokens['text_lengths']) - need_predict.append(tokens['need_predict']) - - text_tokens = torch.cat(self.collate(text_tokens), dim=0).to(box_features.device) - text_lengths = torch.tensor(text_lengths).to(box_features.device) - need_predict = torch.cat(self.collate(need_predict), dim=0).to(box_features.device) - - assert text_tokens.dim() == 2 and need_predict.dim() == 2 - data = {'text_tokens': text_tokens, - 'text_lengths': text_lengths, - 'need_predict': need_predict} - - return data - - def collate(self, batch): - if all(isinstance(b, torch.Tensor) for b in batch) and len(batch) > 0: - if not all(b.shape == batch[0].shape for b in batch[1:]): - assert all(len(b.shape) == len(batch[0].shape) for b in batch[1:]) - shape = torch.tensor([b.shape for b in batch]) - max_shape = tuple(shape.max(dim=0)[0].tolist()) - batch2 = [] - for b in batch: - if any(c < m for c, m in zip(b.shape, max_shape)): - b2 = torch.zeros(max_shape, dtype=b.dtype, device=b.device) - if b.dim() == 1: - b2[:b.shape[0]] = b - elif b.dim() == 2: - b2[:b.shape[0], :b.shape[1]] = b - elif b.dim() == 3: - b2[:b.shape[0], :b.shape[1], :b.shape[2]] = b - else: - raise 
NotImplementedError - b = b2 - batch2.append(b[None, ...]) - else: - batch2 = [] - for b in batch: - batch2.append(b[None, ...]) - return batch2 - else: - raise NotImplementedError diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/utils/analysis.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/utils/analysis.py deleted file mode 100644 index 178da7968cc08c29ec61b823bba8b74e8d97e1d6..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/utils/analysis.py +++ /dev/null @@ -1,188 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -*- coding: utf-8 -*- - -import typing -from typing import Any, List -import fvcore -from fvcore.nn import activation_count, flop_count, parameter_count, parameter_count_table -from torch import nn - -from detectron2.export import TracingAdapter - -__all__ = [ - "activation_count_operators", - "flop_count_operators", - "parameter_count_table", - "parameter_count", - "FlopCountAnalysis", -] - -FLOPS_MODE = "flops" -ACTIVATIONS_MODE = "activations" - - -# Some extra ops to ignore from counting, including elementwise and reduction ops -_IGNORED_OPS = { - "aten::add", - "aten::add_", - "aten::argmax", - "aten::argsort", - "aten::batch_norm", - "aten::constant_pad_nd", - "aten::div", - "aten::div_", - "aten::exp", - "aten::log2", - "aten::max_pool2d", - "aten::meshgrid", - "aten::mul", - "aten::mul_", - "aten::neg", - "aten::nonzero_numpy", - "aten::reciprocal", - "aten::repeat_interleave", - "aten::rsub", - "aten::sigmoid", - "aten::sigmoid_", - "aten::softmax", - "aten::sort", - "aten::sqrt", - "aten::sub", - "torchvision::nms", # TODO estimate flop for nms -} - - -class FlopCountAnalysis(fvcore.nn.FlopCountAnalysis): - """ - Same as :class:`fvcore.nn.FlopCountAnalysis`, but supports detectron2 models. - """ - - def __init__(self, model, inputs): - """ - Args: - model (nn.Module): - inputs (Any): inputs of the given model. Does not have to be tuple of tensors. - """ - wrapper = TracingAdapter(model, inputs, allow_non_tensor=True) - super().__init__(wrapper, wrapper.flattened_inputs) - self.set_op_handle(**{k: None for k in _IGNORED_OPS}) - - -def flop_count_operators(model: nn.Module, inputs: list) -> typing.DefaultDict[str, float]: - """ - Implement operator-level flops counting using jit. - This is a wrapper of :func:`fvcore.nn.flop_count` and adds supports for standard - detection models in detectron2. - Please use :class:`FlopCountAnalysis` for more advanced functionalities. - - Note: - The function runs the input through the model to compute flops. - The flops of a detection model is often input-dependent, for example, - the flops of box & mask head depends on the number of proposals & - the number of detected objects. - Therefore, the flops counting using a single input may not accurately - reflect the computation cost of a model. It's recommended to average - across a number of inputs. - - Args: - model: a detectron2 model that takes `list[dict]` as input. - inputs (list[dict]): inputs to model, in detectron2's standard format. - Only "image" key will be used. 
- supported_ops (dict[str, Handle]): see documentation of :func:`fvcore.nn.flop_count` - - Returns: - Counter: Gflop count per operator - """ - old_train = model.training - model.eval() - ret = FlopCountAnalysis(model, inputs).by_operator() - model.train(old_train) - return {k: v / 1e9 for k, v in ret.items()} - - -def activation_count_operators( - model: nn.Module, inputs: list, **kwargs -) -> typing.DefaultDict[str, float]: - """ - Implement operator-level activations counting using jit. - This is a wrapper of fvcore.nn.activation_count, that supports standard detection models - in detectron2. - - Note: - The function runs the input through the model to compute activations. - The activations of a detection model is often input-dependent, for example, - the activations of box & mask head depends on the number of proposals & - the number of detected objects. - - Args: - model: a detectron2 model that takes `list[dict]` as input. - inputs (list[dict]): inputs to model, in detectron2's standard format. - Only "image" key will be used. - - Returns: - Counter: activation count per operator - """ - return _wrapper_count_operators(model=model, inputs=inputs, mode=ACTIVATIONS_MODE, **kwargs) - - -def _wrapper_count_operators( - model: nn.Module, inputs: list, mode: str, **kwargs -) -> typing.DefaultDict[str, float]: - # ignore some ops - supported_ops = {k: lambda *args, **kwargs: {} for k in _IGNORED_OPS} - supported_ops.update(kwargs.pop("supported_ops", {})) - kwargs["supported_ops"] = supported_ops - - assert len(inputs) == 1, "Please use batch size=1" - tensor_input = inputs[0]["image"] - inputs = [{"image": tensor_input}] # remove other keys, in case there are any - - old_train = model.training - if isinstance(model, (nn.parallel.distributed.DistributedDataParallel, nn.DataParallel)): - model = model.module - wrapper = TracingAdapter(model, inputs) - wrapper.eval() - if mode == FLOPS_MODE: - ret = flop_count(wrapper, (tensor_input,), **kwargs) - elif mode == ACTIVATIONS_MODE: - ret = activation_count(wrapper, (tensor_input,), **kwargs) - else: - raise NotImplementedError("Count for mode {} is not supported yet.".format(mode)) - # compatible with change in fvcore - if isinstance(ret, tuple): - ret = ret[0] - model.train(old_train) - return ret - - -def find_unused_parameters(model: nn.Module, inputs: Any) -> List[str]: - """ - Given a model, find parameters that do not contribute - to the loss. - - Args: - model: a model in training mode that returns losses - inputs: argument or a tuple of arguments. Inputs of the model - - Returns: - list[str]: the name of unused parameters - """ - assert model.training - for _, prm in model.named_parameters(): - prm.grad = None - - if isinstance(inputs, tuple): - losses = model(*inputs) - else: - losses = model(inputs) - - if isinstance(losses, dict): - losses = sum(losses.values()) - losses.backward() - - unused: List[str] = [] - for name, prm in model.named_parameters(): - if prm.grad is None: - unused.append(name) - prm.grad = None - return unused diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/test_registry.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/test_registry.py deleted file mode 100644 index 4e425a6ec44c7c47a5a106bfdf5ce8062c2110c9..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/test_registry.py +++ /dev/null @@ -1,45 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import unittest -import torch - -from detectron2.modeling.meta_arch import GeneralizedRCNN -from detectron2.utils.registry import _convert_target_to_string, locate - - -class A: - class B: - pass - - -class TestLocate(unittest.TestCase): - def _test_obj(self, obj): - name = _convert_target_to_string(obj) - newobj = locate(name) - self.assertIs(obj, newobj) - - def test_basic(self): - self._test_obj(GeneralizedRCNN) - - def test_inside_class(self): - # requires using __qualname__ instead of __name__ - self._test_obj(A.B) - - def test_builtin(self): - self._test_obj(len) - self._test_obj(dict) - - def test_pytorch_optim(self): - # pydoc.locate does not work for it - self._test_obj(torch.optim.SGD) - - def test_failure(self): - with self.assertRaises(ImportError): - locate("asdf") - - def test_compress_target(self): - from detectron2.data.transforms import RandomCrop - - name = _convert_target_to_string(RandomCrop) - # name shouldn't contain 'augmentation_impl' - self.assertEqual(name, "detectron2.data.transforms.RandomCrop") - self.assertIs(RandomCrop, locate(name)) diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/engine/__init__.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/engine/__init__.py deleted file mode 100644 index 3193b7f664e19ce2458d81c836597fa22e4bb082..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/engine/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .test import (collect_results_cpu, collect_results_gpu, multi_gpu_test, - single_gpu_test) - -__all__ = [ - 'collect_results_cpu', 'collect_results_gpu', 'multi_gpu_test', - 'single_gpu_test' -] diff --git a/spaces/PKUWilliamYang/VToonify/vtoonify/model/stylegan/op/upfirdn2d.py b/spaces/PKUWilliamYang/VToonify/vtoonify/model/stylegan/op/upfirdn2d.py deleted file mode 100644 index d509eb5e11e8cd01468dded5e5b53f5326057706..0000000000000000000000000000000000000000 --- a/spaces/PKUWilliamYang/VToonify/vtoonify/model/stylegan/op/upfirdn2d.py +++ /dev/null @@ -1,61 +0,0 @@ -from collections import abc - -import torch -from torch.nn import functional as F - - -def upfirdn2d(inputs, kernel, up=1, down=1, pad=(0, 0)): - if not isinstance(up, abc.Iterable): - up = (up, up) - - if not isinstance(down, abc.Iterable): - down = (down, down) - - if len(pad) == 2: - pad = (pad[0], pad[1], pad[0], pad[1]) - - return upfirdn2d_native(inputs, kernel, *up, *down, *pad) - - -def upfirdn2d_native( - inputs, kernel, up_x, up_y, down_x, down_y, pad_x0, pad_x1, pad_y0, pad_y1 -): - _, channel, in_h, in_w = inputs.shape - inputs = inputs.reshape(-1, in_h, in_w, 1) - - _, in_h, in_w, minor = inputs.shape - kernel_h, kernel_w = kernel.shape - - out = inputs.view(-1, in_h, 1, in_w, 1, minor) - out = F.pad(out, [0, 0, 0, up_x - 1, 0, 0, 0, up_y - 1]) - out = out.view(-1, in_h * up_y, in_w * up_x, minor) - - out = F.pad( - out, [0, 0, max(pad_x0, 0), max(pad_x1, 0), max(pad_y0, 0), max(pad_y1, 0)] - ) - out = out[ - :, - max(-pad_y0, 0): out.shape[1] - max(-pad_y1, 0), - max(-pad_x0, 0): out.shape[2] - max(-pad_x1, 0), - :, - ] - - out = out.permute(0, 3, 1, 2) - out = out.reshape( - [-1, 1, in_h * up_y + pad_y0 + pad_y1, in_w * up_x + pad_x0 + pad_x1] - ) - w = torch.flip(kernel, [0, 1]).view(1, 1, kernel_h, kernel_w) - out = F.conv2d(out, w) - out = out.reshape( - -1, - minor, - in_h * up_y + pad_y0 + pad_y1 - kernel_h + 1, - in_w * up_x + pad_x0 + pad_x1 - kernel_w + 1, - ) - out = out.permute(0, 2, 3, 1) - out = out[:, 
::down_y, ::down_x, :] - - out_h = (in_h * up_y + pad_y0 + pad_y1 - kernel_h + down_y) // down_y - out_w = (in_w * up_x + pad_x0 + pad_x1 - kernel_w + down_x) // down_x - - return out.view(-1, channel, out_h, out_w) \ No newline at end of file diff --git a/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/dsd/stylegan2.py b/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/dsd/stylegan2.py deleted file mode 100644 index ed1e26280f0ea16bd67adcbe0f9bf23ab66cc2d5..0000000000000000000000000000000000000000 --- a/spaces/PSLD/PSLD/diffusion-posterior-sampling/bkse/models/dsd/stylegan2.py +++ /dev/null @@ -1,621 +0,0 @@ -import math -import random - -import torch -from models.dsd.op.fused_act import FusedLeakyReLU, fused_leaky_relu -from models.dsd.op.upfirdn2d import upfirdn2d -from torch import nn -from torch.nn import functional as F - - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - - if k.ndim == 1: - k = k[None, :] * k[:, None] - - k /= k.sum() - - return k - - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer("kernel", kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad) - - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer("kernel", kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad) - - return out - - -class Blur(nn.Module): - def __init__(self, kernel, pad, upsample_factor=1): - super().__init__() - - kernel = make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer("kernel", kernel) - - self.pad = pad - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad) - - return out - - -class EqualConv2d(nn.Module): - def __init__(self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_channel, in_channel, kernel_size, kernel_size)) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, input): - out = F.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - - return out - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]}," - f" {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})" - ) - - -class EqualLinear(nn.Module): - def __init__(self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - 
self.activation = activation - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - if self.activation: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, self.bias * self.lr_mul) - - else: - out = F.linear(input, self.weight * self.scale, bias=self.bias * self.lr_mul) - - return out - - def __repr__(self): - return f"{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})" - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1)) - - fan_in = in_channel * kernel_size ** 2 - self.scale = 1 / math.sqrt(fan_in) - self.padding = kernel_size // 2 - - self.weight = nn.Parameter(torch.randn(1, out_channel, in_channel, kernel_size, kernel_size)) - - self.modulation = EqualLinear(style_dim, in_channel, bias_init=1) - - self.demodulate = demodulate - - def __repr__(self): - return ( - f"{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, " - f"upsample={self.upsample}, downsample={self.downsample})" - ) - - def forward(self, input, style): - batch, in_channel, height, width = input.shape - - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - weight = self.scale * self.weight * style - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 1) - - weight = weight.view(batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size) - - if self.upsample: - input = input.view(1, batch * in_channel, height, width) - weight = weight.view(batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - _, _, height, width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - else: - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=self.padding, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out - - -class NoiseInjection(nn.Module): - def __init__(self): - super().__init__() - - self.weight = nn.Parameter(torch.zeros(1)) - - def forward(self, image, noise=None): - if noise is None: - batch, _, height, width = image.shape - noise = image.new_empty(batch, 1, height, width).normal_() - 
- return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, channel, size=4): - super().__init__() - - self.input = nn.Parameter(torch.randn(1, channel, size, size)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, 1, 1, 1) - - return out - - -class StyledConv(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - ): - super().__init__() - - self.conv = ModulatedConv2d( - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=upsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - ) - - self.noise = NoiseInjection() - # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1)) - # self.activate = ScaledLeakyReLU(0.2) - self.activate = FusedLeakyReLU(out_channel) - - def forward(self, input, style, noise=None): - out = self.conv(input, style) - out = self.noise(out, noise=noise) - # out = out + self.bias - out = self.activate(out) - - return out - - -class ToRGB(nn.Module): - def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - if upsample: - self.upsample = Upsample(blur_kernel) - - self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, input, style, skip=None): - out = self.conv(input, style) - out = out + self.bias - - if skip is not None: - skip = self.upsample(skip) - - out = out + skip - - return out - - -class Generator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - ): - super().__init__() - - self.size = size - - self.style_dim = style_dim - - layers = [PixelNorm()] - - for i in range(n_mlp): - layers.append(EqualLinear(style_dim, style_dim, lr_mul=lr_mlp, activation="fused_lrelu")) - - self.style = nn.Sequential(*layers) - - self.channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - self.input = ConstantInput(self.channels[4]) - self.conv1 = StyledConv(self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel) - self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False) - - self.log_size = int(math.log(size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 1 - - self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channel = self.channels[4] - - for layer_idx in range(self.num_layers): - res = (layer_idx + 5) // 2 - shape = [1, 1, 2 ** res, 2 ** res] - self.noises.register_buffer(f"noise_{layer_idx}", torch.randn(*shape)) - - for i in range(3, self.log_size + 1): - out_channel = self.channels[2 ** i] - - self.convs.append( - StyledConv( - in_channel, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - ) - ) - - self.convs.append(StyledConv(out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel)) - - self.to_rgbs.append(ToRGB(out_channel, style_dim)) - - in_channel = out_channel - - self.n_latent = self.log_size * 2 - 2 - - def make_noise(self): - device = self.input.input.device - - noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2 ** i, 2 ** 
i, device=device)) - - return noises - - def mean_latent(self, n_latent): - latent_in = torch.randn(n_latent, self.style_dim, device=self.input.input.device) - latent = self.style(latent_in).mean(0, keepdim=True) - - return latent - - def get_latent(self, input): - return self.style(input) - - def forward( - self, - styles, - return_latents=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_latent=False, - noise=None, - randomize_noise=True, - ): - if not input_is_latent: - styles = [self.style(s) for s in styles] - - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers - else: - noise = [getattr(self.noises, f"noise_{i}") for i in range(self.num_layers)] - - if truncation < 1: - style_t = [] - - for style in styles: - style_t.append(truncation_latent + truncation * (style - truncation_latent)) - - styles = style_t - - if len(styles) < 2: - inject_index = self.n_latent - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - - else: - latent = styles[0] - - else: - if inject_index is None: - inject_index = random.randint(1, self.n_latent - 1) - - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1) - - latent = torch.cat([latent, latent2], 1) - - out = self.input(latent) - out = self.conv1(out, latent[:, 0], noise=noise[0]) - - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs - ): - out = conv1(out, latent[:, i], noise=noise1) - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, i + 2], skip) - - i += 2 - - image = skip - - if return_latents: - return image, latent - - else: - return image, None - - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - ): - layers = [] - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1))) - - stride = 2 - self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - ) - ) - - if activate: - layers.append(FusedLeakyReLU(out_channel, bias=bias)) - - super().__init__(*layers) - - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - self.conv1 = ConvLayer(in_channel, in_channel, 3) - self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True) - - self.skip = ConvLayer(in_channel, out_channel, 1, downsample=True, activate=False, bias=False) - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - - return out - - -class Discriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = 
channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], activation="fused_lrelu"), - EqualLinear(channels[4], 1), - ) - - def forward(self, input): - out = self.convs(input) - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view(group, -1, self.stddev_feat, channel // self.stddev_feat, height, width) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - - out = out.view(batch, -1) - out = self.final_linear(out) - - return out diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/define-music-types.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/define-music-types.go deleted file mode 100644 index a00b13cb4a5c35f62e816b5f1203ca2850451b33..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/define-music-types.go and /dev/null differ diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/roi_align_rotated.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/roi_align_rotated.py deleted file mode 100644 index 0ce4961a3555d4da8bc3e32f1f7d5ad50036587d..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmcv/ops/roi_align_rotated.py +++ /dev/null @@ -1,177 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch.nn as nn -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['roi_align_rotated_forward', 'roi_align_rotated_backward']) - - -class RoIAlignRotatedFunction(Function): - - @staticmethod - def symbolic(g, features, rois, out_size, spatial_scale, sample_num, - aligned, clockwise): - if isinstance(out_size, int): - out_h = out_size - out_w = out_size - elif isinstance(out_size, tuple): - assert len(out_size) == 2 - assert isinstance(out_size[0], int) - assert isinstance(out_size[1], int) - out_h, out_w = out_size - else: - raise TypeError( - '"out_size" must be an integer or tuple of integers') - return g.op( - 'mmcv::MMCVRoIAlignRotated', - features, - rois, - output_height_i=out_h, - output_width_i=out_h, - spatial_scale_f=spatial_scale, - sampling_ratio_i=sample_num, - aligned_i=aligned, - clockwise_i=clockwise) - - @staticmethod - def forward(ctx, - features, - rois, - out_size, - spatial_scale, - sample_num=0, - aligned=True, - clockwise=False): - if isinstance(out_size, int): - out_h = out_size - out_w = out_size - elif isinstance(out_size, tuple): - assert len(out_size) == 2 - assert isinstance(out_size[0], int) - assert isinstance(out_size[1], int) - out_h, out_w = out_size - else: - raise TypeError( - '"out_size" must be an integer or tuple of integers') - ctx.spatial_scale = spatial_scale - ctx.sample_num = sample_num - ctx.aligned = aligned - ctx.clockwise = clockwise - ctx.save_for_backward(rois) - ctx.feature_size = features.size() - - batch_size, num_channels, data_height, data_width = features.size() - num_rois = rois.size(0) - - output = features.new_zeros(num_rois, num_channels, out_h, out_w) - ext_module.roi_align_rotated_forward( - features, - rois, - output, - pooled_height=out_h, - pooled_width=out_w, - spatial_scale=spatial_scale, - sample_num=sample_num, - aligned=aligned, - clockwise=clockwise) - return output - - @staticmethod - def backward(ctx, grad_output): - feature_size = ctx.feature_size - spatial_scale = ctx.spatial_scale - aligned = ctx.aligned - clockwise = ctx.clockwise - sample_num = ctx.sample_num - rois = ctx.saved_tensors[0] - assert feature_size is not None - batch_size, num_channels, data_height, data_width = feature_size - - out_w = grad_output.size(3) - out_h = grad_output.size(2) - - grad_input = grad_rois = None - - if ctx.needs_input_grad[0]: - grad_input = rois.new_zeros(batch_size, num_channels, data_height, - data_width) - ext_module.roi_align_rotated_backward( - grad_output.contiguous(), - rois, - grad_input, - pooled_height=out_h, - pooled_width=out_w, - spatial_scale=spatial_scale, - sample_num=sample_num, - aligned=aligned, - clockwise=clockwise) - return grad_input, grad_rois, None, None, None, None, None - - -roi_align_rotated = RoIAlignRotatedFunction.apply - - -class RoIAlignRotated(nn.Module): - """RoI align pooling layer for rotated proposals. - - It accepts a feature map of shape (N, C, H, W) and rois with shape - (n, 6) with each roi decoded as (batch_index, center_x, center_y, - w, h, angle). The angle is in radian. - - Args: - out_size (tuple): h, w - spatial_scale (float): scale the input boxes by this number - sample_num (int): number of inputs samples to take for each - output sample. 0 to take samples densely for current models. - aligned (bool): if False, use the legacy implementation in - MMDetection. If True, align the results more perfectly. - Default: True. 
- clockwise (bool): If True, the angle in each proposal follows a - clockwise fashion in image space, otherwise, the angle is - counterclockwise. Default: False. - - Note: - The implementation of RoIAlign when aligned=True is modified from - https://github.com/facebookresearch/detectron2/ - - The meaning of aligned=True: - - Given a continuous coordinate c, its two neighboring pixel - indices (in our pixel model) are computed by floor(c - 0.5) and - ceil(c - 0.5). For example, c=1.3 has pixel neighbors with discrete - indices [0] and [1] (which are sampled from the underlying signal - at continuous coordinates 0.5 and 1.5). But the original roi_align - (aligned=False) does not subtract the 0.5 when computing - neighboring pixel indices and therefore it uses pixels with a - slightly incorrect alignment (relative to our pixel model) when - performing bilinear interpolation. - - With `aligned=True`, - we first appropriately scale the ROI and then shift it by -0.5 - prior to calling roi_align. This produces the correct neighbors; - - The difference does not make a difference to the model's - performance if ROIAlign is used together with conv layers. - """ - - def __init__(self, - out_size, - spatial_scale, - sample_num=0, - aligned=True, - clockwise=False): - super(RoIAlignRotated, self).__init__() - - self.out_size = out_size - self.spatial_scale = float(spatial_scale) - self.sample_num = int(sample_num) - self.aligned = aligned - self.clockwise = clockwise - - def forward(self, features, rois): - return RoIAlignRotatedFunction.apply(features, rois, self.out_size, - self.spatial_scale, - self.sample_num, self.aligned, - self.clockwise) diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/metrics/visqol.py b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/metrics/visqol.py deleted file mode 100644 index 44f4b0a2c3c6c726857db8386491823dd85dde51..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/audiocraft/metrics/visqol.py +++ /dev/null @@ -1,216 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import csv -import json -import logging -from pathlib import Path -import tempfile -import typing as tp -import subprocess -import shutil - -import torch -import torchaudio - -logger = logging.getLogger(__name__) - - -class ViSQOL: - """ViSQOL wrapper to run ViSQOL from Python using a pre-installed binary. - - To learn more about ViSQOL and how to build ViSQOL binary using bazel, please refer to the - instructions available in the open source repository: https://github.com/google/visqol - - ViSQOL is capable of running in two modes: - - Audio Mode: - When running in audio mode, input signals must have a 48kHz sample rate. Input should be resampled to 48kHz. - Input signals can be multi-channel, but they will be down-mixed to mono for performing the comparison. - Audio mode uses support vector regression, with the maximum range at ~4.75. - - Speech Mode: - When running in speech mode, ViSQOL uses a wideband model. It therefore expects input sample rates of 16kHz. - Input should be resampled to 16kHz. - As part of the speech mode processing, a root mean square implementation for voice activity detection - is performed on the reference signal to determine what parts of the signal have voice activity and - should therefore be included in the comparison. 
The signal is normalized before performing the voice - activity detection. - Input signals can be multi-channel, but they will be down-mixed to mono for performing the comparison. - Speech mode is scaled to have a maximum MOS of 5.0 to match previous version behavior. - - For more details, check the guidelines: https://github.com/google/visqol#general-guidelines-for-input - - Args: - visqol_bin (str): Path to the ViSQOL binary. - mode (str): ViSQOL computation mode, expecting "audio" or "speech". - model (str): Name of the model to use for similarity to quality model. - debug (bool): Whether to also get debug metrics from ViSQOL or not. - """ - SAMPLE_RATES_MODES = {"audio": 48_000, "speech": 16_000} - ALLOWED_SAMPLE_RATES = frozenset(SAMPLE_RATES_MODES.values()) - - def __init__(self, bin: tp.Union[Path, str], mode: str = "audio", - model: str = "libsvm_nu_svr_model.txt", debug: bool = False): - assert bin is not None and Path(bin).exists(), f"Could not find ViSQOL binary in specified path: {bin}" - self.visqol_bin = str(bin) - self.visqol_mode = mode - self.target_sr = self._get_target_sr(self.visqol_mode) - self.model = model - self.debug = debug - assert Path(self.visqol_model).exists(), \ - f"Could not find the specified model in ViSQOL install: {self.visqol_model}" - - def _get_target_sr(self, mode: str) -> int: - # returns target sampling rate for the corresponding ViSQOL mode. - if mode not in ViSQOL.SAMPLE_RATES_MODES: - raise ValueError( - f"Unsupported mode! Allowed are: {', '.join(ViSQOL.SAMPLE_RATES_MODES.keys())}" - ) - return ViSQOL.SAMPLE_RATES_MODES[mode] - - def _prepare_files( - self, ref_sig: torch.Tensor, deg_sig: torch.Tensor, sr: int, target_sr: int, pad_with_silence: bool = False - ): - # prepare files for ViSQOL evaluation. 
- assert target_sr in ViSQOL.ALLOWED_SAMPLE_RATES - assert len(ref_sig) == len(deg_sig), ( - "Expects same number of ref and degraded inputs", - f" but ref len {len(ref_sig)} != deg len {len(deg_sig)}" - ) - # resample audio if needed - if sr != target_sr: - transform = torchaudio.transforms.Resample(sr, target_sr) - pad = int(0.5 * target_sr) - rs_ref = [] - rs_deg = [] - for i in range(len(ref_sig)): - rs_ref_i = transform(ref_sig[i]) - rs_deg_i = transform(deg_sig[i]) - if pad_with_silence: - rs_ref_i = torch.nn.functional.pad(rs_ref_i, (pad, pad), mode='constant', value=0) - rs_deg_i = torch.nn.functional.pad(rs_deg_i, (pad, pad), mode='constant', value=0) - rs_ref.append(rs_ref_i) - rs_deg.append(rs_deg_i) - ref_sig = torch.stack(rs_ref) - deg_sig = torch.stack(rs_deg) - # save audio chunks to tmp dir and create csv - tmp_dir = Path(tempfile.mkdtemp()) - try: - tmp_input_csv_path = tmp_dir / "input.csv" - tmp_results_csv_path = tmp_dir / "results.csv" - tmp_debug_json_path = tmp_dir / "debug.json" - with open(tmp_input_csv_path, "w") as csv_file: - csv_writer = csv.writer(csv_file) - csv_writer.writerow(["reference", "degraded"]) - for i in range(len(ref_sig)): - tmp_ref_filename = tmp_dir / f"ref_{i}.wav" - tmp_deg_filename = tmp_dir / f"deg_{i}.wav" - torchaudio.save( - tmp_ref_filename, - torch.clamp(ref_sig[i], min=-0.99, max=0.99), - sample_rate=target_sr, - bits_per_sample=16, - encoding="PCM_S" - ) - torchaudio.save( - tmp_deg_filename, - torch.clamp(deg_sig[i], min=-0.99, max=0.99), - sample_rate=target_sr, - bits_per_sample=16, - encoding="PCM_S" - ) - csv_writer.writerow([str(tmp_ref_filename), str(tmp_deg_filename)]) - return tmp_dir, tmp_input_csv_path, tmp_results_csv_path, tmp_debug_json_path - except Exception as e: - logger.error("Exception occurred when preparing files for ViSQOL: %s", e) - return tmp_dir, None, None, None - - def _flush_files(self, tmp_dir: tp.Union[Path, str]): - # flush tmp files used to compute ViSQOL. - shutil.rmtree(str(tmp_dir)) - - def _collect_moslqo_score(self, results_csv_path: tp.Union[Path, str]) -> float: - # collect results for each evaluated pair and return averaged moslqo score. - with open(results_csv_path, "r") as csv_file: - reader = csv.DictReader(csv_file) - moslqo_scores = [float(row["moslqo"]) for row in reader] - if len(moslqo_scores) > 0: - return sum(moslqo_scores) / len(moslqo_scores) - else: - return 0.0 - - def _collect_debug_data(self, debug_json_path: tp.Union[Path, str]) -> dict: - # collect debug data for the visqol inference. 
- with open(debug_json_path, "r") as f: - data = json.load(f) - return data - - @property - def visqol_model(self): - return f'{self.visqol_bin}/model/{self.model}' - - def _run_visqol( - self, - input_csv_path: tp.Union[Path, str], - results_csv_path: tp.Union[Path, str], - debug_csv_path: tp.Optional[tp.Union[Path, str]], - ): - input_csv_path = str(input_csv_path) - results_csv_path = str(results_csv_path) - debug_csv_path = str(debug_csv_path) - cmd = [ - f'{self.visqol_bin}/bazel-bin/visqol', - '--batch_input_csv', f'{input_csv_path}', - '--results_csv', f'{results_csv_path}' - ] - if debug_csv_path is not None: - cmd += ['--output_debug', f'{debug_csv_path}'] - if self.visqol_mode == "speech": - cmd += ['--use_speech_mode'] - cmd += ['--similarity_to_quality_model', f'{self.visqol_model}'] - result = subprocess.run(cmd, capture_output=True) - if result.returncode: - logger.error("Error with visqol: \n %s \n %s", result.stdout.decode(), result.stderr.decode()) - raise RuntimeError("Error while executing visqol") - result.check_returncode() - - def __call__( - self, - ref_sig: torch.Tensor, - deg_sig: torch.Tensor, - sr: int, - pad_with_silence: bool = False, - ): - """Calculate the ViSQOL metric for a pair of audio signals at a given sample rate. - Args: - ref_sig (torch.Tensor): Reference signals as [B, C, T]. - deg_sig (torch.Tensor): Degraded signals as [B, C, T]. - sr (int): Sample rate of the two audio signals. - pad_with_silence (bool): Whether to pad the file with silences as recommended - in visqol guidelines (see: https://github.com/google/visqol#general-guidelines-for-input). - Returns: - float: The ViSQOL score or mean score for the batch. - """ - logger.debug(f"Calculating visqol with mode={self.visqol_mode} on {len(ref_sig)} samples") - tmp_dir, input_csv, results_csv, debug_json = self._prepare_files( - ref_sig, deg_sig, sr, self.target_sr, pad_with_silence - ) - try: - if input_csv and results_csv: - self._run_visqol( - input_csv, - results_csv, - debug_json if self.debug else None, - ) - mosqol = self._collect_moslqo_score(results_csv) - return mosqol - else: - raise RuntimeError("Something unexpected happened when running VISQOL!") - except Exception as e: - logger.error("Exception occurred when running ViSQOL: %s", e) - finally: - self._flush_files(tmp_dir) diff --git a/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/examples/submit_example_2.sh b/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/examples/submit_example_2.sh deleted file mode 100644 index bf4a4f5960deb884d447281de1a93b6eb27fef31..0000000000000000000000000000000000000000 --- a/spaces/ProteinDesignLab/protpardelle/ProteinMPNN/examples/submit_example_2.sh +++ /dev/null @@ -1,34 +0,0 @@ -#!/bin/bash -#SBATCH -p gpu -#SBATCH --mem=32g -#SBATCH --gres=gpu:rtx2080:1 -#SBATCH -c 2 -#SBATCH --output=example_2.out - -source activate mlfold - - -folder_with_pdbs="../inputs/PDB_complexes/pdbs/" - -output_dir="../outputs/example_2_outputs" -if [ ! 
-d $output_dir ] -then - mkdir -p $output_dir -fi - -path_for_parsed_chains=$output_dir"/parsed_pdbs.jsonl" -path_for_assigned_chains=$output_dir"/assigned_pdbs.jsonl" -chains_to_design="A B" - -python ../helper_scripts/parse_multiple_chains.py --input_path=$folder_with_pdbs --output_path=$path_for_parsed_chains - -python ../helper_scripts/assign_fixed_chains.py --input_path=$path_for_parsed_chains --output_path=$path_for_assigned_chains --chain_list "$chains_to_design" - -python ../protein_mpnn_run.py \ - --jsonl_path $path_for_parsed_chains \ - --chain_id_jsonl $path_for_assigned_chains \ - --out_folder $output_dir \ - --num_seq_per_target 2 \ - --sampling_temp "0.1" \ - --seed 37 \ - --batch_size 1 diff --git a/spaces/RMXK/RVC_HFF/extract_locale.py b/spaces/RMXK/RVC_HFF/extract_locale.py deleted file mode 100644 index a4ff5ea3ddd7c612c640544099ab98a861b8fe35..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/extract_locale.py +++ /dev/null @@ -1,34 +0,0 @@ -import json -import re - -# Define regular expression patterns -pattern = r"""i18n\([\s\n\t]*(["'][^"']+["'])[\s\n\t]*\)""" - -# Initialize the dictionary to store key-value pairs -data = {} - - -def process(fn: str): - global data - with open(fn, "r", encoding="utf-8") as f: - contents = f.read() - matches = re.findall(pattern, contents) - for key in matches: - key = eval(key) - print("extract:", key) - data[key] = key - - -print("processing infer-web.py") -process("infer-web.py") - -print("processing gui_v0.py") -process("gui_v0.py") - -print("processing gui_v1.py") -process("gui_v1.py") - -# Save as a JSON file -with open("./i18n/en_US.json", "w", encoding="utf-8") as f: - json.dump(data, f, ensure_ascii=False, indent=4) - f.write("\n") diff --git a/spaces/RamAnanth1/ZoeDepth/app.py b/spaces/RamAnanth1/ZoeDepth/app.py deleted file mode 100644 index 915ae4f3d750e29eebc1e71fdfd14f7acf8daa1e..0000000000000000000000000000000000000000 --- a/spaces/RamAnanth1/ZoeDepth/app.py +++ /dev/null @@ -1,30 +0,0 @@ -import gradio as gr -import torch -from utils import get_image_from_url, colorize -from PIL import Image -import matplotlib.pyplot as plt - -title = "Interactive demo: ZoeDepth" -description = "Unofficial Gradio Demo for using ZoeDepth: Zero-shot Transfer by Combining Relative and Metric Depth. ZoeDepth is a technique that lets you perform metric depth estimation from a single image. For more information, please refer to the paper or the Github implementation.

To use it, simply upload an image or use one of the examples below and click 'Submit'. Results will show up in a few seconds." -examples = [["example.png"],["example_2.png"]] -repo = "isl-org/ZoeDepth" -# Zoe_N -model_zoe_n = torch.hub.load(repo, "ZoeD_NK", pretrained=True) -DEVICE = "cuda" if torch.cuda.is_available() else "cpu" -zoe = model_zoe_n.to(DEVICE) - -def process_image(image): - depth = zoe.infer_pil(image) # as numpy - colored_depth = colorize(depth, cmap = 'magma_r') - return colored_depth - -interface = gr.Interface(fn=process_image, - inputs=[gr.Image(type="pil")], - outputs=[gr.Image(type="pil", label ="Depth") - ], - title=title, - description=description, - examples = examples - ) - -interface.launch(debug=True) \ No newline at end of file diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/platformdirs/android.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/platformdirs/android.py deleted file mode 100644 index eda80935123cb5db7e18d7fb82fe5f71991d7af8..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/platformdirs/android.py +++ /dev/null @@ -1,120 +0,0 @@ -from __future__ import annotations - -import os -import re -import sys -from functools import lru_cache -from typing import cast - -from .api import PlatformDirsABC - - -class Android(PlatformDirsABC): - """ - Follows the guidance `from here `_. Makes use of the - `appname ` and - `version `. - """ - - @property - def user_data_dir(self) -> str: - """:return: data directory tied to the user, e.g. ``/data/user///files/``""" - return self._append_app_name_and_version(cast(str, _android_folder()), "files") - - @property - def site_data_dir(self) -> str: - """:return: data directory shared by users, same as `user_data_dir`""" - return self.user_data_dir - - @property - def user_config_dir(self) -> str: - """ - :return: config directory tied to the user, e.g. ``/data/user///shared_prefs/`` - """ - return self._append_app_name_and_version(cast(str, _android_folder()), "shared_prefs") - - @property - def site_config_dir(self) -> str: - """:return: config directory shared by the users, same as `user_config_dir`""" - return self.user_config_dir - - @property - def user_cache_dir(self) -> str: - """:return: cache directory tied to the user, e.g. e.g. ``/data/user///cache/``""" - return self._append_app_name_and_version(cast(str, _android_folder()), "cache") - - @property - def user_state_dir(self) -> str: - """:return: state directory tied to the user, same as `user_data_dir`""" - return self.user_data_dir - - @property - def user_log_dir(self) -> str: - """ - :return: log directory tied to the user, same as `user_cache_dir` if not opinionated else ``log`` in it, - e.g. ``/data/user///cache//log`` - """ - path = self.user_cache_dir - if self.opinion: - path = os.path.join(path, "log") - return path - - @property - def user_documents_dir(self) -> str: - """ - :return: documents directory tied to the user e.g. ``/storage/emulated/0/Documents`` - """ - return _android_documents_folder() - - @property - def user_runtime_dir(self) -> str: - """ - :return: runtime directory tied to the user, same as `user_cache_dir` if not opinionated else ``tmp`` in it, - e.g. 
``/data/user///cache//tmp`` - """ - path = self.user_cache_dir - if self.opinion: - path = os.path.join(path, "tmp") - return path - - -@lru_cache(maxsize=1) -def _android_folder() -> str | None: - """:return: base folder for the Android OS or None if cannot be found""" - try: - # First try to get path to android app via pyjnius - from jnius import autoclass - - Context = autoclass("android.content.Context") # noqa: N806 - result: str | None = Context.getFilesDir().getParentFile().getAbsolutePath() - except Exception: - # if fails find an android folder looking path on the sys.path - pattern = re.compile(r"/data/(data|user/\d+)/(.+)/files") - for path in sys.path: - if pattern.match(path): - result = path.split("/files")[0] - break - else: - result = None - return result - - -@lru_cache(maxsize=1) -def _android_documents_folder() -> str: - """:return: documents folder for the Android OS""" - # Get directories with pyjnius - try: - from jnius import autoclass - - Context = autoclass("android.content.Context") # noqa: N806 - Environment = autoclass("android.os.Environment") # noqa: N806 - documents_dir: str = Context.getExternalFilesDir(Environment.DIRECTORY_DOCUMENTS).getAbsolutePath() - except Exception: - documents_dir = "/storage/emulated/0/Documents" - - return documents_dir - - -__all__ = [ - "Android", -] diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/_msvccompiler.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/_msvccompiler.py deleted file mode 100644 index 729c2dd5217528d7b3f9220cc2c7981f95c6f6e1..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/_msvccompiler.py +++ /dev/null @@ -1,572 +0,0 @@ -"""distutils._msvccompiler - -Contains MSVCCompiler, an implementation of the abstract CCompiler class -for Microsoft Visual Studio 2015. - -The module is compatible with VS 2015 and later. You can find legacy support -for older versions in distutils.msvc9compiler and distutils.msvccompiler. 
-""" - -# Written by Perry Stoll -# hacked by Robin Becker and Thomas Heller to do a better job of -# finding DevStudio (through the registry) -# ported to VS 2005 and VS 2008 by Christian Heimes -# ported to VS 2015 by Steve Dower - -import os -import subprocess -import contextlib -import warnings -import unittest.mock as mock - -with contextlib.suppress(ImportError): - import winreg - -from distutils.errors import ( - DistutilsExecError, - DistutilsPlatformError, - CompileError, - LibError, - LinkError, -) -from distutils.ccompiler import CCompiler, gen_lib_options -from distutils import log -from distutils.util import get_platform - -from itertools import count - - -def _find_vc2015(): - try: - key = winreg.OpenKeyEx( - winreg.HKEY_LOCAL_MACHINE, - r"Software\Microsoft\VisualStudio\SxS\VC7", - access=winreg.KEY_READ | winreg.KEY_WOW64_32KEY, - ) - except OSError: - log.debug("Visual C++ is not registered") - return None, None - - best_version = 0 - best_dir = None - with key: - for i in count(): - try: - v, vc_dir, vt = winreg.EnumValue(key, i) - except OSError: - break - if v and vt == winreg.REG_SZ and os.path.isdir(vc_dir): - try: - version = int(float(v)) - except (ValueError, TypeError): - continue - if version >= 14 and version > best_version: - best_version, best_dir = version, vc_dir - return best_version, best_dir - - -def _find_vc2017(): - """Returns "15, path" based on the result of invoking vswhere.exe - If no install is found, returns "None, None" - - The version is returned to avoid unnecessarily changing the function - result. It may be ignored when the path is not None. - - If vswhere.exe is not available, by definition, VS 2017 is not - installed. - """ - root = os.environ.get("ProgramFiles(x86)") or os.environ.get("ProgramFiles") - if not root: - return None, None - - try: - path = subprocess.check_output( - [ - os.path.join( - root, "Microsoft Visual Studio", "Installer", "vswhere.exe" - ), - "-latest", - "-prerelease", - "-requires", - "Microsoft.VisualStudio.Component.VC.Tools.x86.x64", - "-property", - "installationPath", - "-products", - "*", - ], - encoding="mbcs", - errors="strict", - ).strip() - except (subprocess.CalledProcessError, OSError, UnicodeDecodeError): - return None, None - - path = os.path.join(path, "VC", "Auxiliary", "Build") - if os.path.isdir(path): - return 15, path - - return None, None - - -PLAT_SPEC_TO_RUNTIME = { - 'x86': 'x86', - 'x86_amd64': 'x64', - 'x86_arm': 'arm', - 'x86_arm64': 'arm64', -} - - -def _find_vcvarsall(plat_spec): - # bpo-38597: Removed vcruntime return value - _, best_dir = _find_vc2017() - - if not best_dir: - best_version, best_dir = _find_vc2015() - - if not best_dir: - log.debug("No suitable Visual C++ version found") - return None, None - - vcvarsall = os.path.join(best_dir, "vcvarsall.bat") - if not os.path.isfile(vcvarsall): - log.debug("%s cannot be found", vcvarsall) - return None, None - - return vcvarsall, None - - -def _get_vc_env(plat_spec): - if os.getenv("DISTUTILS_USE_SDK"): - return {key.lower(): value for key, value in os.environ.items()} - - vcvarsall, _ = _find_vcvarsall(plat_spec) - if not vcvarsall: - raise DistutilsPlatformError("Unable to find vcvarsall.bat") - - try: - out = subprocess.check_output( - f'cmd /u /c "{vcvarsall}" {plat_spec} && set', - stderr=subprocess.STDOUT, - ).decode('utf-16le', errors='replace') - except subprocess.CalledProcessError as exc: - log.error(exc.output) - raise DistutilsPlatformError(f"Error executing {exc.cmd}") - - env = { - key.lower(): value - for key, _, value 
in (line.partition('=') for line in out.splitlines()) - if key and value - } - - return env - - -def _find_exe(exe, paths=None): - """Return path to an MSVC executable program. - - Tries to find the program in several places: first, one of the - MSVC program search paths from the registry; next, the directories - in the PATH environment variable. If any of those work, return an - absolute path that is known to exist. If none of them work, just - return the original program name, 'exe'. - """ - if not paths: - paths = os.getenv('path').split(os.pathsep) - for p in paths: - fn = os.path.join(os.path.abspath(p), exe) - if os.path.isfile(fn): - return fn - return exe - - -# A map keyed by get_platform() return values to values accepted by -# 'vcvarsall.bat'. Always cross-compile from x86 to work with the -# lighter-weight MSVC installs that do not include native 64-bit tools. -PLAT_TO_VCVARS = { - 'win32': 'x86', - 'win-amd64': 'x86_amd64', - 'win-arm32': 'x86_arm', - 'win-arm64': 'x86_arm64', -} - - -class MSVCCompiler(CCompiler): - """Concrete class that implements an interface to Microsoft Visual C++, - as defined by the CCompiler abstract class.""" - - compiler_type = 'msvc' - - # Just set this so CCompiler's constructor doesn't barf. We currently - # don't use the 'set_executables()' bureaucracy provided by CCompiler, - # as it really isn't necessary for this sort of single-compiler class. - # Would be nice to have a consistent interface with UnixCCompiler, - # though, so it's worth thinking about. - executables = {} - - # Private class data (need to distinguish C from C++ source for compiler) - _c_extensions = ['.c'] - _cpp_extensions = ['.cc', '.cpp', '.cxx'] - _rc_extensions = ['.rc'] - _mc_extensions = ['.mc'] - - # Needed for the filename generation methods provided by the - # base class, CCompiler. - src_extensions = _c_extensions + _cpp_extensions + _rc_extensions + _mc_extensions - res_extension = '.res' - obj_extension = '.obj' - static_lib_extension = '.lib' - shared_lib_extension = '.dll' - static_lib_format = shared_lib_format = '%s%s' - exe_extension = '.exe' - - def __init__(self, verbose=0, dry_run=0, force=0): - super().__init__(verbose, dry_run, force) - # target platform (.plat_name is consistent with 'bdist') - self.plat_name = None - self.initialized = False - - @classmethod - def _configure(cls, vc_env): - """ - Set class-level include/lib dirs. - """ - cls.include_dirs = cls._parse_path(vc_env.get('include', '')) - cls.library_dirs = cls._parse_path(vc_env.get('lib', '')) - - @staticmethod - def _parse_path(val): - return [dir.rstrip(os.sep) for dir in val.split(os.pathsep) if dir] - - def initialize(self, plat_name=None): - # multi-init means we would need to check platform same each time... - assert not self.initialized, "don't init multiple times" - if plat_name is None: - plat_name = get_platform() - # sanity check for platforms to prevent obscure errors later. - if plat_name not in PLAT_TO_VCVARS: - raise DistutilsPlatformError( - f"--plat-name must be one of {tuple(PLAT_TO_VCVARS)}" - ) - - # Get the vcvarsall.bat spec for the requested platform. - plat_spec = PLAT_TO_VCVARS[plat_name] - - vc_env = _get_vc_env(plat_spec) - if not vc_env: - raise DistutilsPlatformError( - "Unable to find a compatible " "Visual Studio installation." 
- ) - self._configure(vc_env) - - self._paths = vc_env.get('path', '') - paths = self._paths.split(os.pathsep) - self.cc = _find_exe("cl.exe", paths) - self.linker = _find_exe("link.exe", paths) - self.lib = _find_exe("lib.exe", paths) - self.rc = _find_exe("rc.exe", paths) # resource compiler - self.mc = _find_exe("mc.exe", paths) # message compiler - self.mt = _find_exe("mt.exe", paths) # message compiler - - self.preprocess_options = None - # bpo-38597: Always compile with dynamic linking - # Future releases of Python 3.x will include all past - # versions of vcruntime*.dll for compatibility. - self.compile_options = ['/nologo', '/O2', '/W3', '/GL', '/DNDEBUG', '/MD'] - - self.compile_options_debug = [ - '/nologo', - '/Od', - '/MDd', - '/Zi', - '/W3', - '/D_DEBUG', - ] - - ldflags = ['/nologo', '/INCREMENTAL:NO', '/LTCG'] - - ldflags_debug = ['/nologo', '/INCREMENTAL:NO', '/LTCG', '/DEBUG:FULL'] - - self.ldflags_exe = [*ldflags, '/MANIFEST:EMBED,ID=1'] - self.ldflags_exe_debug = [*ldflags_debug, '/MANIFEST:EMBED,ID=1'] - self.ldflags_shared = [ - *ldflags, - '/DLL', - '/MANIFEST:EMBED,ID=2', - '/MANIFESTUAC:NO', - ] - self.ldflags_shared_debug = [ - *ldflags_debug, - '/DLL', - '/MANIFEST:EMBED,ID=2', - '/MANIFESTUAC:NO', - ] - self.ldflags_static = [*ldflags] - self.ldflags_static_debug = [*ldflags_debug] - - self._ldflags = { - (CCompiler.EXECUTABLE, None): self.ldflags_exe, - (CCompiler.EXECUTABLE, False): self.ldflags_exe, - (CCompiler.EXECUTABLE, True): self.ldflags_exe_debug, - (CCompiler.SHARED_OBJECT, None): self.ldflags_shared, - (CCompiler.SHARED_OBJECT, False): self.ldflags_shared, - (CCompiler.SHARED_OBJECT, True): self.ldflags_shared_debug, - (CCompiler.SHARED_LIBRARY, None): self.ldflags_static, - (CCompiler.SHARED_LIBRARY, False): self.ldflags_static, - (CCompiler.SHARED_LIBRARY, True): self.ldflags_static_debug, - } - - self.initialized = True - - # -- Worker methods ------------------------------------------------ - - @property - def out_extensions(self): - return { - **super().out_extensions, - **{ - ext: self.res_extension - for ext in self._rc_extensions + self._mc_extensions - }, - } - - def compile( # noqa: C901 - self, - sources, - output_dir=None, - macros=None, - include_dirs=None, - debug=0, - extra_preargs=None, - extra_postargs=None, - depends=None, - ): - - if not self.initialized: - self.initialize() - compile_info = self._setup_compile( - output_dir, macros, include_dirs, sources, depends, extra_postargs - ) - macros, objects, extra_postargs, pp_opts, build = compile_info - - compile_opts = extra_preargs or [] - compile_opts.append('/c') - if debug: - compile_opts.extend(self.compile_options_debug) - else: - compile_opts.extend(self.compile_options) - - add_cpp_opts = False - - for obj in objects: - try: - src, ext = build[obj] - except KeyError: - continue - if debug: - # pass the full pathname to MSVC in debug mode, - # this allows the debugger to find the source file - # without asking the user to browse for it - src = os.path.abspath(src) - - if ext in self._c_extensions: - input_opt = "/Tc" + src - elif ext in self._cpp_extensions: - input_opt = "/Tp" + src - add_cpp_opts = True - elif ext in self._rc_extensions: - # compile .RC to .RES file - input_opt = src - output_opt = "/fo" + obj - try: - self.spawn([self.rc] + pp_opts + [output_opt, input_opt]) - except DistutilsExecError as msg: - raise CompileError(msg) - continue - elif ext in self._mc_extensions: - # Compile .MC to .RC file to .RES file. 
- # * '-h dir' specifies the directory for the - # generated include file - # * '-r dir' specifies the target directory of the - # generated RC file and the binary message resource - # it includes - # - # For now (since there are no options to change this), - # we use the source-directory for the include file and - # the build directory for the RC file and message - # resources. This works at least for win32all. - h_dir = os.path.dirname(src) - rc_dir = os.path.dirname(obj) - try: - # first compile .MC to .RC and .H file - self.spawn([self.mc, '-h', h_dir, '-r', rc_dir, src]) - base, _ = os.path.splitext(os.path.basename(src)) - rc_file = os.path.join(rc_dir, base + '.rc') - # then compile .RC to .RES file - self.spawn([self.rc, "/fo" + obj, rc_file]) - - except DistutilsExecError as msg: - raise CompileError(msg) - continue - else: - # how to handle this file? - raise CompileError(f"Don't know how to compile {src} to {obj}") - - args = [self.cc] + compile_opts + pp_opts - if add_cpp_opts: - args.append('/EHsc') - args.append(input_opt) - args.append("/Fo" + obj) - args.extend(extra_postargs) - - try: - self.spawn(args) - except DistutilsExecError as msg: - raise CompileError(msg) - - return objects - - def create_static_lib( - self, objects, output_libname, output_dir=None, debug=0, target_lang=None - ): - - if not self.initialized: - self.initialize() - objects, output_dir = self._fix_object_args(objects, output_dir) - output_filename = self.library_filename(output_libname, output_dir=output_dir) - - if self._need_link(objects, output_filename): - lib_args = objects + ['/OUT:' + output_filename] - if debug: - pass # XXX what goes here? - try: - log.debug('Executing "%s" %s', self.lib, ' '.join(lib_args)) - self.spawn([self.lib] + lib_args) - except DistutilsExecError as msg: - raise LibError(msg) - else: - log.debug("skipping %s (up-to-date)", output_filename) - - def link( - self, - target_desc, - objects, - output_filename, - output_dir=None, - libraries=None, - library_dirs=None, - runtime_library_dirs=None, - export_symbols=None, - debug=0, - extra_preargs=None, - extra_postargs=None, - build_temp=None, - target_lang=None, - ): - - if not self.initialized: - self.initialize() - objects, output_dir = self._fix_object_args(objects, output_dir) - fixed_args = self._fix_lib_args(libraries, library_dirs, runtime_library_dirs) - libraries, library_dirs, runtime_library_dirs = fixed_args - - if runtime_library_dirs: - self.warn( - "I don't know what to do with 'runtime_library_dirs': " - + str(runtime_library_dirs) - ) - - lib_opts = gen_lib_options(self, library_dirs, runtime_library_dirs, libraries) - if output_dir is not None: - output_filename = os.path.join(output_dir, output_filename) - - if self._need_link(objects, output_filename): - ldflags = self._ldflags[target_desc, debug] - - export_opts = ["/EXPORT:" + sym for sym in (export_symbols or [])] - - ld_args = ( - ldflags + lib_opts + export_opts + objects + ['/OUT:' + output_filename] - ) - - # The MSVC linker generates .lib and .exp files, which cannot be - # suppressed by any linker switches. The .lib files may even be - # needed! Make sure they are generated in the temporary build - # directory. Since they have different names for debug and release - # builds, they can go into the same directory. 
- build_temp = os.path.dirname(objects[0]) - if export_symbols is not None: - (dll_name, dll_ext) = os.path.splitext( - os.path.basename(output_filename) - ) - implib_file = os.path.join(build_temp, self.library_filename(dll_name)) - ld_args.append('/IMPLIB:' + implib_file) - - if extra_preargs: - ld_args[:0] = extra_preargs - if extra_postargs: - ld_args.extend(extra_postargs) - - output_dir = os.path.dirname(os.path.abspath(output_filename)) - self.mkpath(output_dir) - try: - log.debug('Executing "%s" %s', self.linker, ' '.join(ld_args)) - self.spawn([self.linker] + ld_args) - except DistutilsExecError as msg: - raise LinkError(msg) - else: - log.debug("skipping %s (up-to-date)", output_filename) - - def spawn(self, cmd): - env = dict(os.environ, PATH=self._paths) - with self._fallback_spawn(cmd, env) as fallback: - return super().spawn(cmd, env=env) - return fallback.value - - @contextlib.contextmanager - def _fallback_spawn(self, cmd, env): - """ - Discovered in pypa/distutils#15, some tools monkeypatch the compiler, - so the 'env' kwarg causes a TypeError. Detect this condition and - restore the legacy, unsafe behavior. - """ - bag = type('Bag', (), {})() - try: - yield bag - except TypeError as exc: - if "unexpected keyword argument 'env'" not in str(exc): - raise - else: - return - warnings.warn("Fallback spawn triggered. Please update distutils monkeypatch.") - with mock.patch.dict('os.environ', env): - bag.value = super().spawn(cmd) - - # -- Miscellaneous methods ----------------------------------------- - # These are all used by the 'gen_lib_options() function, in - # ccompiler.py. - - def library_dir_option(self, dir): - return "/LIBPATH:" + dir - - def runtime_library_dir_option(self, dir): - raise DistutilsPlatformError( - "don't know how to set runtime library search path for MSVC" - ) - - def library_option(self, lib): - return self.library_filename(lib) - - def find_library_file(self, dirs, lib, debug=0): - # Prefer a debugging library if found (and requested), but deal - # with it if we don't have one. 
- if debug: - try_names = [lib + "_d", lib] - else: - try_names = [lib] - for dir in dirs: - for name in try_names: - libfile = os.path.join(dir, self.library_filename(name)) - if os.path.isfile(libfile): - return libfile - else: - # Oops, didn't find it in *any* of 'dirs' - return None diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/archive_util.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/archive_util.py deleted file mode 100644 index 5dfe2a16ffbf5dc907aa3ce315757f4f9a055a82..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_distutils/archive_util.py +++ /dev/null @@ -1,280 +0,0 @@ -"""distutils.archive_util - -Utility functions for creating archive files (tarballs, zip files, -that sort of thing).""" - -import os -from warnings import warn -import sys - -try: - import zipfile -except ImportError: - zipfile = None - - -from distutils.errors import DistutilsExecError -from distutils.spawn import spawn -from distutils.dir_util import mkpath -from distutils import log - -try: - from pwd import getpwnam -except ImportError: - getpwnam = None - -try: - from grp import getgrnam -except ImportError: - getgrnam = None - - -def _get_gid(name): - """Returns a gid, given a group name.""" - if getgrnam is None or name is None: - return None - try: - result = getgrnam(name) - except KeyError: - result = None - if result is not None: - return result[2] - return None - - -def _get_uid(name): - """Returns an uid, given a user name.""" - if getpwnam is None or name is None: - return None - try: - result = getpwnam(name) - except KeyError: - result = None - if result is not None: - return result[2] - return None - - -def make_tarball( - base_name, base_dir, compress="gzip", verbose=0, dry_run=0, owner=None, group=None -): - """Create a (possibly compressed) tar file from all the files under - 'base_dir'. - - 'compress' must be "gzip" (the default), "bzip2", "xz", "compress", or - None. ("compress" will be deprecated in Python 3.2) - - 'owner' and 'group' can be used to define an owner and a group for the - archive that is being built. If not provided, the current owner and group - will be used. - - The output tar file will be named 'base_dir' + ".tar", possibly plus - the appropriate compression extension (".gz", ".bz2", ".xz" or ".Z"). - - Returns the output filename. 
- """ - tar_compression = { - 'gzip': 'gz', - 'bzip2': 'bz2', - 'xz': 'xz', - None: '', - 'compress': '', - } - compress_ext = {'gzip': '.gz', 'bzip2': '.bz2', 'xz': '.xz', 'compress': '.Z'} - - # flags for compression program, each element of list will be an argument - if compress is not None and compress not in compress_ext.keys(): - raise ValueError( - "bad value for 'compress': must be None, 'gzip', 'bzip2', " - "'xz' or 'compress'" - ) - - archive_name = base_name + '.tar' - if compress != 'compress': - archive_name += compress_ext.get(compress, '') - - mkpath(os.path.dirname(archive_name), dry_run=dry_run) - - # creating the tarball - import tarfile # late import so Python build itself doesn't break - - log.info('Creating tar archive') - - uid = _get_uid(owner) - gid = _get_gid(group) - - def _set_uid_gid(tarinfo): - if gid is not None: - tarinfo.gid = gid - tarinfo.gname = group - if uid is not None: - tarinfo.uid = uid - tarinfo.uname = owner - return tarinfo - - if not dry_run: - tar = tarfile.open(archive_name, 'w|%s' % tar_compression[compress]) - try: - tar.add(base_dir, filter=_set_uid_gid) - finally: - tar.close() - - # compression using `compress` - if compress == 'compress': - warn("'compress' is deprecated.", DeprecationWarning) - # the option varies depending on the platform - compressed_name = archive_name + compress_ext[compress] - if sys.platform == 'win32': - cmd = [compress, archive_name, compressed_name] - else: - cmd = [compress, '-f', archive_name] - spawn(cmd, dry_run=dry_run) - return compressed_name - - return archive_name - - -def make_zipfile(base_name, base_dir, verbose=0, dry_run=0): # noqa: C901 - """Create a zip file from all the files under 'base_dir'. - - The output zip file will be named 'base_name' + ".zip". Uses either the - "zipfile" Python module (if available) or the InfoZIP "zip" utility - (if installed and found on the default search path). If neither tool is - available, raises DistutilsExecError. Returns the name of the output zip - file. - """ - zip_filename = base_name + ".zip" - mkpath(os.path.dirname(zip_filename), dry_run=dry_run) - - # If zipfile module is not available, try spawning an external - # 'zip' command. - if zipfile is None: - if verbose: - zipoptions = "-r" - else: - zipoptions = "-rq" - - try: - spawn(["zip", zipoptions, zip_filename, base_dir], dry_run=dry_run) - except DistutilsExecError: - # XXX really should distinguish between "couldn't find - # external 'zip' command" and "zip failed". 
- raise DistutilsExecError( - ( - "unable to create zip file '%s': " - "could neither import the 'zipfile' module nor " - "find a standalone zip utility" - ) - % zip_filename - ) - - else: - log.info("creating '%s' and adding '%s' to it", zip_filename, base_dir) - - if not dry_run: - try: - zip = zipfile.ZipFile( - zip_filename, "w", compression=zipfile.ZIP_DEFLATED - ) - except RuntimeError: - zip = zipfile.ZipFile(zip_filename, "w", compression=zipfile.ZIP_STORED) - - with zip: - if base_dir != os.curdir: - path = os.path.normpath(os.path.join(base_dir, '')) - zip.write(path, path) - log.info("adding '%s'", path) - for dirpath, dirnames, filenames in os.walk(base_dir): - for name in dirnames: - path = os.path.normpath(os.path.join(dirpath, name, '')) - zip.write(path, path) - log.info("adding '%s'", path) - for name in filenames: - path = os.path.normpath(os.path.join(dirpath, name)) - if os.path.isfile(path): - zip.write(path, path) - log.info("adding '%s'", path) - - return zip_filename - - -ARCHIVE_FORMATS = { - 'gztar': (make_tarball, [('compress', 'gzip')], "gzip'ed tar-file"), - 'bztar': (make_tarball, [('compress', 'bzip2')], "bzip2'ed tar-file"), - 'xztar': (make_tarball, [('compress', 'xz')], "xz'ed tar-file"), - 'ztar': (make_tarball, [('compress', 'compress')], "compressed tar file"), - 'tar': (make_tarball, [('compress', None)], "uncompressed tar file"), - 'zip': (make_zipfile, [], "ZIP file"), -} - - -def check_archive_formats(formats): - """Returns the first format from the 'format' list that is unknown. - - If all formats are known, returns None - """ - for format in formats: - if format not in ARCHIVE_FORMATS: - return format - return None - - -def make_archive( - base_name, - format, - root_dir=None, - base_dir=None, - verbose=0, - dry_run=0, - owner=None, - group=None, -): - """Create an archive file (eg. zip or tar). - - 'base_name' is the name of the file to create, minus any format-specific - extension; 'format' is the archive format: one of "zip", "tar", "gztar", - "bztar", "xztar", or "ztar". - - 'root_dir' is a directory that will be the root directory of the - archive; ie. we typically chdir into 'root_dir' before creating the - archive. 'base_dir' is the directory where we start archiving from; - ie. 'base_dir' will be the common prefix of all files and - directories in the archive. 'root_dir' and 'base_dir' both default - to the current directory. Returns the name of the archive file. - - 'owner' and 'group' are used when creating a tar archive. By default, - uses the current owner and group. 
- """ - save_cwd = os.getcwd() - if root_dir is not None: - log.debug("changing into '%s'", root_dir) - base_name = os.path.abspath(base_name) - if not dry_run: - os.chdir(root_dir) - - if base_dir is None: - base_dir = os.curdir - - kwargs = {'dry_run': dry_run} - - try: - format_info = ARCHIVE_FORMATS[format] - except KeyError: - raise ValueError("unknown archive format '%s'" % format) - - func = format_info[0] - for arg, val in format_info[1]: - kwargs[arg] = val - - if format != 'zip': - kwargs['owner'] = owner - kwargs['group'] = group - - try: - filename = func(base_name, base_dir, **kwargs) - finally: - if root_dir is not None: - log.debug("changing back to '%s'", save_cwd) - os.chdir(save_cwd) - - return filename diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/results.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/results.py deleted file mode 100644 index 00c9421d3b0362526b8f90dc01e8db73841e0b61..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools/_vendor/pyparsing/results.py +++ /dev/null @@ -1,760 +0,0 @@ -# results.py -from collections.abc import MutableMapping, Mapping, MutableSequence, Iterator -import pprint -from weakref import ref as wkref -from typing import Tuple, Any - -str_type: Tuple[type, ...] = (str, bytes) -_generator_type = type((_ for _ in ())) - - -class _ParseResultsWithOffset: - __slots__ = ["tup"] - - def __init__(self, p1, p2): - self.tup = (p1, p2) - - def __getitem__(self, i): - return self.tup[i] - - def __getstate__(self): - return self.tup - - def __setstate__(self, *args): - self.tup = args[0] - - -class ParseResults: - """Structured parse results, to provide multiple means of access to - the parsed data: - - - as a list (``len(results)``) - - by list index (``results[0], results[1]``, etc.) - - by attribute (``results.`` - see :class:`ParserElement.set_results_name`) - - Example:: - - integer = Word(nums) - date_str = (integer.set_results_name("year") + '/' - + integer.set_results_name("month") + '/' - + integer.set_results_name("day")) - # equivalent form: - # date_str = (integer("year") + '/' - # + integer("month") + '/' - # + integer("day")) - - # parse_string returns a ParseResults object - result = date_str.parse_string("1999/12/31") - - def test(s, fn=repr): - print("{} -> {}".format(s, fn(eval(s)))) - test("list(result)") - test("result[0]") - test("result['month']") - test("result.day") - test("'month' in result") - test("'minutes' in result") - test("result.dump()", str) - - prints:: - - list(result) -> ['1999', '/', '12', '/', '31'] - result[0] -> '1999' - result['month'] -> '12' - result.day -> '31' - 'month' in result -> True - 'minutes' in result -> False - result.dump() -> ['1999', '/', '12', '/', '31'] - - day: '31' - - month: '12' - - year: '1999' - """ - - _null_values: Tuple[Any, ...] 
= (None, [], "", ()) - - __slots__ = [ - "_name", - "_parent", - "_all_names", - "_modal", - "_toklist", - "_tokdict", - "__weakref__", - ] - - class List(list): - """ - Simple wrapper class to distinguish parsed list results that should be preserved - as actual Python lists, instead of being converted to :class:`ParseResults`: - - LBRACK, RBRACK = map(pp.Suppress, "[]") - element = pp.Forward() - item = ppc.integer - element_list = LBRACK + pp.delimited_list(element) + RBRACK - - # add parse actions to convert from ParseResults to actual Python collection types - def as_python_list(t): - return pp.ParseResults.List(t.as_list()) - element_list.add_parse_action(as_python_list) - - element <<= item | element_list - - element.run_tests(''' - 100 - [2,3,4] - [[2, 1],3,4] - [(2, 1),3,4] - (2,3,4) - ''', post_parse=lambda s, r: (r[0], type(r[0]))) - - prints: - - 100 - (100, ) - - [2,3,4] - ([2, 3, 4], ) - - [[2, 1],3,4] - ([[2, 1], 3, 4], ) - - (Used internally by :class:`Group` when `aslist=True`.) - """ - - def __new__(cls, contained=None): - if contained is None: - contained = [] - - if not isinstance(contained, list): - raise TypeError( - "{} may only be constructed with a list," - " not {}".format(cls.__name__, type(contained).__name__) - ) - - return list.__new__(cls) - - def __new__(cls, toklist=None, name=None, **kwargs): - if isinstance(toklist, ParseResults): - return toklist - self = object.__new__(cls) - self._name = None - self._parent = None - self._all_names = set() - - if toklist is None: - self._toklist = [] - elif isinstance(toklist, (list, _generator_type)): - self._toklist = ( - [toklist[:]] - if isinstance(toklist, ParseResults.List) - else list(toklist) - ) - else: - self._toklist = [toklist] - self._tokdict = dict() - return self - - # Performance tuning: we construct a *lot* of these, so keep this - # constructor as small and fast as possible - def __init__( - self, toklist=None, name=None, asList=True, modal=True, isinstance=isinstance - ): - self._modal = modal - if name is not None and name != "": - if isinstance(name, int): - name = str(name) - if not modal: - self._all_names = {name} - self._name = name - if toklist not in self._null_values: - if isinstance(toklist, (str_type, type)): - toklist = [toklist] - if asList: - if isinstance(toklist, ParseResults): - self[name] = _ParseResultsWithOffset( - ParseResults(toklist._toklist), 0 - ) - else: - self[name] = _ParseResultsWithOffset( - ParseResults(toklist[0]), 0 - ) - self[name]._name = name - else: - try: - self[name] = toklist[0] - except (KeyError, TypeError, IndexError): - if toklist is not self: - self[name] = toklist - else: - self._name = name - - def __getitem__(self, i): - if isinstance(i, (int, slice)): - return self._toklist[i] - else: - if i not in self._all_names: - return self._tokdict[i][-1][0] - else: - return ParseResults([v[0] for v in self._tokdict[i]]) - - def __setitem__(self, k, v, isinstance=isinstance): - if isinstance(v, _ParseResultsWithOffset): - self._tokdict[k] = self._tokdict.get(k, list()) + [v] - sub = v[0] - elif isinstance(k, (int, slice)): - self._toklist[k] = v - sub = v - else: - self._tokdict[k] = self._tokdict.get(k, list()) + [ - _ParseResultsWithOffset(v, 0) - ] - sub = v - if isinstance(sub, ParseResults): - sub._parent = wkref(self) - - def __delitem__(self, i): - if isinstance(i, (int, slice)): - mylen = len(self._toklist) - del self._toklist[i] - - # convert int to slice - if isinstance(i, int): - if i < 0: - i += mylen - i = slice(i, i + 1) - # get removed indices - 
removed = list(range(*i.indices(mylen))) - removed.reverse() - # fixup indices in token dictionary - for name, occurrences in self._tokdict.items(): - for j in removed: - for k, (value, position) in enumerate(occurrences): - occurrences[k] = _ParseResultsWithOffset( - value, position - (position > j) - ) - else: - del self._tokdict[i] - - def __contains__(self, k) -> bool: - return k in self._tokdict - - def __len__(self) -> int: - return len(self._toklist) - - def __bool__(self) -> bool: - return not not (self._toklist or self._tokdict) - - def __iter__(self) -> Iterator: - return iter(self._toklist) - - def __reversed__(self) -> Iterator: - return iter(self._toklist[::-1]) - - def keys(self): - return iter(self._tokdict) - - def values(self): - return (self[k] for k in self.keys()) - - def items(self): - return ((k, self[k]) for k in self.keys()) - - def haskeys(self) -> bool: - """ - Since ``keys()`` returns an iterator, this method is helpful in bypassing - code that looks for the existence of any defined results names.""" - return bool(self._tokdict) - - def pop(self, *args, **kwargs): - """ - Removes and returns item at specified index (default= ``last``). - Supports both ``list`` and ``dict`` semantics for ``pop()``. If - passed no argument or an integer argument, it will use ``list`` - semantics and pop tokens from the list of parsed tokens. If passed - a non-integer argument (most likely a string), it will use ``dict`` - semantics and pop the corresponding value from any defined results - names. A second default return value argument is supported, just as in - ``dict.pop()``. - - Example:: - - numlist = Word(nums)[...] - print(numlist.parse_string("0 123 321")) # -> ['0', '123', '321'] - - def remove_first(tokens): - tokens.pop(0) - numlist.add_parse_action(remove_first) - print(numlist.parse_string("0 123 321")) # -> ['123', '321'] - - label = Word(alphas) - patt = label("LABEL") + Word(nums)[1, ...] - print(patt.parse_string("AAB 123 321").dump()) - - # Use pop() in a parse action to remove named result (note that corresponding value is not - # removed from list form of results) - def remove_LABEL(tokens): - tokens.pop("LABEL") - return tokens - patt.add_parse_action(remove_LABEL) - print(patt.parse_string("AAB 123 321").dump()) - - prints:: - - ['AAB', '123', '321'] - - LABEL: 'AAB' - - ['AAB', '123', '321'] - """ - if not args: - args = [-1] - for k, v in kwargs.items(): - if k == "default": - args = (args[0], v) - else: - raise TypeError( - "pop() got an unexpected keyword argument {!r}".format(k) - ) - if isinstance(args[0], int) or len(args) == 1 or args[0] in self: - index = args[0] - ret = self[index] - del self[index] - return ret - else: - defaultvalue = args[1] - return defaultvalue - - def get(self, key, default_value=None): - """ - Returns named result matching the given key, or if there is no - such name, then returns the given ``default_value`` or ``None`` if no - ``default_value`` is specified. - - Similar to ``dict.get()``. - - Example:: - - integer = Word(nums) - date_str = integer("year") + '/' + integer("month") + '/' + integer("day") - - result = date_str.parse_string("1999/12/31") - print(result.get("year")) # -> '1999' - print(result.get("hour", "not specified")) # -> 'not specified' - print(result.get("hour")) # -> None - """ - if key in self: - return self[key] - else: - return default_value - - def insert(self, index, ins_string): - """ - Inserts new element at location index in the list of parsed tokens. - - Similar to ``list.insert()``. 
- - Example:: - - numlist = Word(nums)[...] - print(numlist.parse_string("0 123 321")) # -> ['0', '123', '321'] - - # use a parse action to insert the parse location in the front of the parsed results - def insert_locn(locn, tokens): - tokens.insert(0, locn) - numlist.add_parse_action(insert_locn) - print(numlist.parse_string("0 123 321")) # -> [0, '0', '123', '321'] - """ - self._toklist.insert(index, ins_string) - # fixup indices in token dictionary - for name, occurrences in self._tokdict.items(): - for k, (value, position) in enumerate(occurrences): - occurrences[k] = _ParseResultsWithOffset( - value, position + (position > index) - ) - - def append(self, item): - """ - Add single element to end of ``ParseResults`` list of elements. - - Example:: - - numlist = Word(nums)[...] - print(numlist.parse_string("0 123 321")) # -> ['0', '123', '321'] - - # use a parse action to compute the sum of the parsed integers, and add it to the end - def append_sum(tokens): - tokens.append(sum(map(int, tokens))) - numlist.add_parse_action(append_sum) - print(numlist.parse_string("0 123 321")) # -> ['0', '123', '321', 444] - """ - self._toklist.append(item) - - def extend(self, itemseq): - """ - Add sequence of elements to end of ``ParseResults`` list of elements. - - Example:: - - patt = Word(alphas)[1, ...] - - # use a parse action to append the reverse of the matched strings, to make a palindrome - def make_palindrome(tokens): - tokens.extend(reversed([t[::-1] for t in tokens])) - return ''.join(tokens) - patt.add_parse_action(make_palindrome) - print(patt.parse_string("lskdj sdlkjf lksd")) # -> 'lskdjsdlkjflksddsklfjkldsjdksl' - """ - if isinstance(itemseq, ParseResults): - self.__iadd__(itemseq) - else: - self._toklist.extend(itemseq) - - def clear(self): - """ - Clear all elements and results names. - """ - del self._toklist[:] - self._tokdict.clear() - - def __getattr__(self, name): - try: - return self[name] - except KeyError: - if name.startswith("__"): - raise AttributeError(name) - return "" - - def __add__(self, other) -> "ParseResults": - ret = self.copy() - ret += other - return ret - - def __iadd__(self, other) -> "ParseResults": - if other._tokdict: - offset = len(self._toklist) - addoffset = lambda a: offset if a < 0 else a + offset - otheritems = other._tokdict.items() - otherdictitems = [ - (k, _ParseResultsWithOffset(v[0], addoffset(v[1]))) - for k, vlist in otheritems - for v in vlist - ] - for k, v in otherdictitems: - self[k] = v - if isinstance(v[0], ParseResults): - v[0]._parent = wkref(self) - - self._toklist += other._toklist - self._all_names |= other._all_names - return self - - def __radd__(self, other) -> "ParseResults": - if isinstance(other, int) and other == 0: - # useful for merging many ParseResults using sum() builtin - return self.copy() - else: - # this may raise a TypeError - so be it - return other + self - - def __repr__(self) -> str: - return "{}({!r}, {})".format(type(self).__name__, self._toklist, self.as_dict()) - - def __str__(self) -> str: - return ( - "[" - + ", ".join( - [ - str(i) if isinstance(i, ParseResults) else repr(i) - for i in self._toklist - ] - ) - + "]" - ) - - def _asStringList(self, sep=""): - out = [] - for item in self._toklist: - if out and sep: - out.append(sep) - if isinstance(item, ParseResults): - out += item._asStringList() - else: - out.append(str(item)) - return out - - def as_list(self) -> list: - """ - Returns the parse results as a nested list of matching tokens, all converted to strings. 
- - Example:: - - patt = Word(alphas)[1, ...] - result = patt.parse_string("sldkj lsdkj sldkj") - # even though the result prints in string-like form, it is actually a pyparsing ParseResults - print(type(result), result) # -> ['sldkj', 'lsdkj', 'sldkj'] - - # Use as_list() to create an actual list - result_list = result.as_list() - print(type(result_list), result_list) # -> ['sldkj', 'lsdkj', 'sldkj'] - """ - return [ - res.as_list() if isinstance(res, ParseResults) else res - for res in self._toklist - ] - - def as_dict(self) -> dict: - """ - Returns the named parse results as a nested dictionary. - - Example:: - - integer = Word(nums) - date_str = integer("year") + '/' + integer("month") + '/' + integer("day") - - result = date_str.parse_string('12/31/1999') - print(type(result), repr(result)) # -> (['12', '/', '31', '/', '1999'], {'day': [('1999', 4)], 'year': [('12', 0)], 'month': [('31', 2)]}) - - result_dict = result.as_dict() - print(type(result_dict), repr(result_dict)) # -> {'day': '1999', 'year': '12', 'month': '31'} - - # even though a ParseResults supports dict-like access, sometime you just need to have a dict - import json - print(json.dumps(result)) # -> Exception: TypeError: ... is not JSON serializable - print(json.dumps(result.as_dict())) # -> {"month": "31", "day": "1999", "year": "12"} - """ - - def to_item(obj): - if isinstance(obj, ParseResults): - return obj.as_dict() if obj.haskeys() else [to_item(v) for v in obj] - else: - return obj - - return dict((k, to_item(v)) for k, v in self.items()) - - def copy(self) -> "ParseResults": - """ - Returns a new copy of a :class:`ParseResults` object. - """ - ret = ParseResults(self._toklist) - ret._tokdict = self._tokdict.copy() - ret._parent = self._parent - ret._all_names |= self._all_names - ret._name = self._name - return ret - - def get_name(self): - r""" - Returns the results name for this token expression. Useful when several - different expressions might match at a particular location. - - Example:: - - integer = Word(nums) - ssn_expr = Regex(r"\d\d\d-\d\d-\d\d\d\d") - house_number_expr = Suppress('#') + Word(nums, alphanums) - user_data = (Group(house_number_expr)("house_number") - | Group(ssn_expr)("ssn") - | Group(integer)("age")) - user_info = user_data[1, ...] - - result = user_info.parse_string("22 111-22-3333 #221B") - for item in result: - print(item.get_name(), ':', item[0]) - - prints:: - - age : 22 - ssn : 111-22-3333 - house_number : 221B - """ - if self._name: - return self._name - elif self._parent: - par = self._parent() - - def find_in_parent(sub): - return next( - ( - k - for k, vlist in par._tokdict.items() - for v, loc in vlist - if sub is v - ), - None, - ) - - return find_in_parent(self) if par else None - elif ( - len(self) == 1 - and len(self._tokdict) == 1 - and next(iter(self._tokdict.values()))[0][1] in (0, -1) - ): - return next(iter(self._tokdict.keys())) - else: - return None - - def dump(self, indent="", full=True, include_list=True, _depth=0) -> str: - """ - Diagnostic method for listing out the contents of - a :class:`ParseResults`. Accepts an optional ``indent`` argument so - that this string can be embedded in a nested display of other data. 
- - Example:: - - integer = Word(nums) - date_str = integer("year") + '/' + integer("month") + '/' + integer("day") - - result = date_str.parse_string('1999/12/31') - print(result.dump()) - - prints:: - - ['1999', '/', '12', '/', '31'] - - day: '31' - - month: '12' - - year: '1999' - """ - out = [] - NL = "\n" - out.append(indent + str(self.as_list()) if include_list else "") - - if full: - if self.haskeys(): - items = sorted((str(k), v) for k, v in self.items()) - for k, v in items: - if out: - out.append(NL) - out.append("{}{}- {}: ".format(indent, (" " * _depth), k)) - if isinstance(v, ParseResults): - if v: - out.append( - v.dump( - indent=indent, - full=full, - include_list=include_list, - _depth=_depth + 1, - ) - ) - else: - out.append(str(v)) - else: - out.append(repr(v)) - if any(isinstance(vv, ParseResults) for vv in self): - v = self - for i, vv in enumerate(v): - if isinstance(vv, ParseResults): - out.append( - "\n{}{}[{}]:\n{}{}{}".format( - indent, - (" " * (_depth)), - i, - indent, - (" " * (_depth + 1)), - vv.dump( - indent=indent, - full=full, - include_list=include_list, - _depth=_depth + 1, - ), - ) - ) - else: - out.append( - "\n%s%s[%d]:\n%s%s%s" - % ( - indent, - (" " * (_depth)), - i, - indent, - (" " * (_depth + 1)), - str(vv), - ) - ) - - return "".join(out) - - def pprint(self, *args, **kwargs): - """ - Pretty-printer for parsed results as a list, using the - `pprint `_ module. - Accepts additional positional or keyword args as defined for - `pprint.pprint `_ . - - Example:: - - ident = Word(alphas, alphanums) - num = Word(nums) - func = Forward() - term = ident | num | Group('(' + func + ')') - func <<= ident + Group(Optional(delimited_list(term))) - result = func.parse_string("fna a,b,(fnb c,d,200),100") - result.pprint(width=40) - - prints:: - - ['fna', - ['a', - 'b', - ['(', 'fnb', ['c', 'd', '200'], ')'], - '100']] - """ - pprint.pprint(self.as_list(), *args, **kwargs) - - # add support for pickle protocol - def __getstate__(self): - return ( - self._toklist, - ( - self._tokdict.copy(), - self._parent is not None and self._parent() or None, - self._all_names, - self._name, - ), - ) - - def __setstate__(self, state): - self._toklist, (self._tokdict, par, inAccumNames, self._name) = state - self._all_names = set(inAccumNames) - if par is not None: - self._parent = wkref(par) - else: - self._parent = None - - def __getnewargs__(self): - return self._toklist, self._name - - def __dir__(self): - return dir(type(self)) + list(self.keys()) - - @classmethod - def from_dict(cls, other, name=None) -> "ParseResults": - """ - Helper classmethod to construct a ``ParseResults`` from a ``dict``, preserving the - name-value relations as results names. If an optional ``name`` argument is - given, a nested ``ParseResults`` will be returned. 
- """ - - def is_iterable(obj): - try: - iter(obj) - except Exception: - return False - else: - return not isinstance(obj, str_type) - - ret = cls([]) - for k, v in other.items(): - if isinstance(v, Mapping): - ret += cls.from_dict(v, name=k) - else: - ret += cls([v], name=k, asList=is_iterable(v)) - if name is not None: - ret = cls([ret], name=name) - return ret - - asList = as_list - asDict = as_dict - getName = get_name - - -MutableMapping.register(ParseResults) -MutableSequence.register(ParseResults) diff --git a/spaces/Realcat/image-matching-webui/third_party/d2net/hpatches_sequences/download_cache.sh b/spaces/Realcat/image-matching-webui/third_party/d2net/hpatches_sequences/download_cache.sh deleted file mode 100644 index 7a5a34acc75af5c2f398d3ec8cea367be404cdeb..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/d2net/hpatches_sequences/download_cache.sh +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env bash - -wget https://dsmn.ml/files/d2-net/hpatches-sequences-cache.tar.gz -tar xvzf hpatches-sequences-cache.tar.gz -rm -rf hpatches-sequences-cache.tar.gz - -wget https://dsmn.ml/files/d2-net/hpatches-sequences-cache-top.tar.gz -tar xvzf hpatches-sequences-cache-top.tar.gz -rm -rf hpatches-sequences-cache-top.tar.gz - diff --git a/spaces/RichardMB1217/blip/data/coco_karpathy_dataset.py b/spaces/RichardMB1217/blip/data/coco_karpathy_dataset.py deleted file mode 100644 index a34d29205f42aa09695b160ac9c91958ba041bb3..0000000000000000000000000000000000000000 --- a/spaces/RichardMB1217/blip/data/coco_karpathy_dataset.py +++ /dev/null @@ -1,126 +0,0 @@ -import os -import json - -from torch.utils.data import Dataset -from torchvision.datasets.utils import download_url - -from PIL import Image - -from data.utils import pre_caption - -class coco_karpathy_train(Dataset): - def __init__(self, transform, image_root, ann_root, max_words=30, prompt=''): - ''' - image_root (string): Root directory of images (e.g. coco/images/) - ann_root (string): directory to store the annotation file - ''' - url = 'https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_train.json' - filename = 'coco_karpathy_train.json' - - download_url(url,ann_root) - - self.annotation = json.load(open(os.path.join(ann_root,filename),'r')) - self.transform = transform - self.image_root = image_root - self.max_words = max_words - self.prompt = prompt - - self.img_ids = {} - n = 0 - for ann in self.annotation: - img_id = ann['image_id'] - if img_id not in self.img_ids.keys(): - self.img_ids[img_id] = n - n += 1 - - def __len__(self): - return len(self.annotation) - - def __getitem__(self, index): - - ann = self.annotation[index] - - image_path = os.path.join(self.image_root,ann['image']) - image = Image.open(image_path).convert('RGB') - image = self.transform(image) - - caption = self.prompt+pre_caption(ann['caption'], self.max_words) - - return image, caption, self.img_ids[ann['image_id']] - - -class coco_karpathy_caption_eval(Dataset): - def __init__(self, transform, image_root, ann_root, split): - ''' - image_root (string): Root directory of images (e.g. 
coco/images/) - ann_root (string): directory to store the annotation file - split (string): val or test - ''' - urls = {'val':'https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_val.json', - 'test':'https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_test.json'} - filenames = {'val':'coco_karpathy_val.json','test':'coco_karpathy_test.json'} - - download_url(urls[split],ann_root) - - self.annotation = json.load(open(os.path.join(ann_root,filenames[split]),'r')) - self.transform = transform - self.image_root = image_root - - def __len__(self): - return len(self.annotation) - - def __getitem__(self, index): - - ann = self.annotation[index] - - image_path = os.path.join(self.image_root,ann['image']) - image = Image.open(image_path).convert('RGB') - image = self.transform(image) - - img_id = ann['image'].split('/')[-1].strip('.jpg').split('_')[-1] - - return image, int(img_id) - - -class coco_karpathy_retrieval_eval(Dataset): - def __init__(self, transform, image_root, ann_root, split, max_words=30): - ''' - image_root (string): Root directory of images (e.g. coco/images/) - ann_root (string): directory to store the annotation file - split (string): val or test - ''' - urls = {'val':'https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_val.json', - 'test':'https://storage.googleapis.com/sfr-vision-language-research/datasets/coco_karpathy_test.json'} - filenames = {'val':'coco_karpathy_val.json','test':'coco_karpathy_test.json'} - - download_url(urls[split],ann_root) - - self.annotation = json.load(open(os.path.join(ann_root,filenames[split]),'r')) - self.transform = transform - self.image_root = image_root - - self.text = [] - self.image = [] - self.txt2img = {} - self.img2txt = {} - - txt_id = 0 - for img_id, ann in enumerate(self.annotation): - self.image.append(ann['image']) - self.img2txt[img_id] = [] - for i, caption in enumerate(ann['caption']): - self.text.append(pre_caption(caption,max_words)) - self.img2txt[img_id].append(txt_id) - self.txt2img[txt_id] = img_id - txt_id += 1 - - def __len__(self): - return len(self.annotation) - - def __getitem__(self, index): - - image_path = os.path.join(self.image_root, self.annotation[index]['image']) - image = Image.open(image_path).convert('RGB') - image = self.transform(image) - - return image, index \ No newline at end of file diff --git a/spaces/Ritori/TTS_Yui/hifi-gan/env.py b/spaces/Ritori/TTS_Yui/hifi-gan/env.py deleted file mode 100644 index 2bdbc95d4f7a8bad8fd4f5eef657e2b51d946056..0000000000000000000000000000000000000000 --- a/spaces/Ritori/TTS_Yui/hifi-gan/env.py +++ /dev/null @@ -1,15 +0,0 @@ -import os -import shutil - - -class AttrDict(dict): - def __init__(self, *args, **kwargs): - super(AttrDict, self).__init__(*args, **kwargs) - self.__dict__ = self - - -def build_env(config, config_name, path): - t_path = os.path.join(path, config_name) - if config != t_path: - os.makedirs(path, exist_ok=True) - shutil.copyfile(config, os.path.join(path, config_name)) diff --git a/spaces/Ritori/TTS_Yui/hifi-gan/stft.py b/spaces/Ritori/TTS_Yui/hifi-gan/stft.py deleted file mode 100644 index edfc44ae8bdec2887920a1ffab012432ca09a33d..0000000000000000000000000000000000000000 --- a/spaces/Ritori/TTS_Yui/hifi-gan/stft.py +++ /dev/null @@ -1,141 +0,0 @@ -""" -BSD 3-Clause License - -Copyright (c) 2017, Prem Seetharaman -All rights reserved. 
- -* Redistribution and use in source and binary forms, with or without - modification, are permitted provided that the following conditions are met: - -* Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - -* Redistributions in binary form must reproduce the above copyright notice, this - list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - -* Neither the name of the copyright holder nor the names of its - contributors may be used to endorse or promote products derived from this - software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND -ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR -ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES -(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON -ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. -""" - -import torch -import numpy as np -import torch.nn.functional as F -from torch.autograd import Variable -from scipy.signal import get_window -from librosa.util import pad_center, tiny -from audio_processing import window_sumsquare - - -class STFT(torch.nn.Module): - """adapted from Prem Seetharaman's https://github.com/pseeth/pytorch-stft""" - def __init__(self, filter_length=800, hop_length=200, win_length=800, - window='hann'): - super(STFT, self).__init__() - self.filter_length = filter_length - self.hop_length = hop_length - self.win_length = win_length - self.window = window - self.forward_transform = None - scale = self.filter_length / self.hop_length - fourier_basis = np.fft.fft(np.eye(self.filter_length)) - - cutoff = int((self.filter_length / 2 + 1)) - fourier_basis = np.vstack([np.real(fourier_basis[:cutoff, :]), - np.imag(fourier_basis[:cutoff, :])]) - - forward_basis = torch.FloatTensor(fourier_basis[:, None, :]) - inverse_basis = torch.FloatTensor( - np.linalg.pinv(scale * fourier_basis).T[:, None, :]) - - if window is not None: - assert(filter_length >= win_length) - # get window and zero center pad it to filter_length - fft_window = get_window(window, win_length, fftbins=True) - fft_window = pad_center(fft_window, filter_length) - fft_window = torch.from_numpy(fft_window).float() - - # window the bases - forward_basis *= fft_window - inverse_basis *= fft_window - - self.register_buffer('forward_basis', forward_basis.float()) - self.register_buffer('inverse_basis', inverse_basis.float()) - - def transform(self, input_data): - num_batches = input_data.size(0) - num_samples = input_data.size(1) - - self.num_samples = num_samples - - # similar to librosa, reflect-pad the input - input_data = input_data.view(num_batches, 1, num_samples) - input_data = F.pad( - input_data.unsqueeze(1), - (int(self.filter_length / 2), int(self.filter_length / 2), 0, 0), - mode='reflect') - input_data = input_data.squeeze(1) - - forward_transform = F.conv1d( - input_data, - Variable(self.forward_basis, requires_grad=False), - stride=self.hop_length, - padding=0) 
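-
-        # Note: `forward_basis` stacks the real-part DFT filters in the first
-        # (filter_length // 2 + 1) output channels and the imaginary-part filters
-        # in the remaining channels, so the conv1d result below is split at that
-        # cutoff and converted to magnitude/phase form.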
- - cutoff = int((self.filter_length / 2) + 1) - real_part = forward_transform[:, :cutoff, :] - imag_part = forward_transform[:, cutoff:, :] - - magnitude = torch.sqrt(real_part**2 + imag_part**2) - phase = torch.autograd.Variable( - torch.atan2(imag_part.data, real_part.data)) - - return magnitude, phase - - def inverse(self, magnitude, phase): - recombine_magnitude_phase = torch.cat( - [magnitude*torch.cos(phase), magnitude*torch.sin(phase)], dim=1) - - inverse_transform = F.conv_transpose1d( - recombine_magnitude_phase, - Variable(self.inverse_basis, requires_grad=False), - stride=self.hop_length, - padding=0) - - if self.window is not None: - window_sum = window_sumsquare( - self.window, magnitude.size(-1), hop_length=self.hop_length, - win_length=self.win_length, n_fft=self.filter_length, - dtype=np.float32) - # remove modulation effects - approx_nonzero_indices = torch.from_numpy( - np.where(window_sum > tiny(window_sum))[0]) - window_sum = torch.autograd.Variable( - torch.from_numpy(window_sum), requires_grad=False) - window_sum = window_sum.cuda() if magnitude.is_cuda else window_sum - inverse_transform[:, :, approx_nonzero_indices] /= window_sum[approx_nonzero_indices] - - # scale by hop ratio - inverse_transform *= float(self.filter_length) / self.hop_length - - inverse_transform = inverse_transform[:, :, int(self.filter_length/2):] - inverse_transform = inverse_transform[:, :, :-int(self.filter_length/2):] - - return inverse_transform - - def forward(self, input_data): - self.magnitude, self.phase = self.transform(input_data) - reconstruction = self.inverse(self.magnitude, self.phase) - return reconstruction diff --git a/spaces/RobPruzan/automaticlitassesment/training.py b/spaces/RobPruzan/automaticlitassesment/training.py deleted file mode 100644 index 1c0ebe1b25edde7ee114a71cc79643a9b654bfce..0000000000000000000000000000000000000000 --- a/spaces/RobPruzan/automaticlitassesment/training.py +++ /dev/null @@ -1,309 +0,0 @@ -import random - -import matplotlib.pyplot as plt -import nltk -import numpy as np -import pandas as pd -import torch -import torch.nn -from nltk.tokenize import word_tokenize -from torch.utils.data import DataLoader, RandomSampler, SequentialSampler -from torch.utils.data import TensorDataset, random_split -from transformers import DistilBertForSequenceClassification, AdamW -from transformers import DistilBertTokenizer -from transformers import get_linear_schedule_with_warmup - -nltk.download('punkt') - -# %matplotlib inline - -df = pd.read_csv('/content/train.csv') - -print(f'Number of training samples: {df.shape[0]}') - -df.sample(100) - -tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased') - -excerpts = df.excerpt.values -targets = df.target.values.astype('float32') - -plt.hist(df['target']) -plt.show() - -max_len = 0 - -for i in excerpts: - input_ids = tokenizer.encode(i, add_special_tokens=True) - - max_len = max(max_len, len(input_ids)) - -print(max_len) - -input_ids = [] -attention_masks = [] - -for i in excerpts: - encoded_text = tokenizer.encode_plus( - i, - add_special_tokens=True, - max_length=315, - pad_to_max_length=True, - return_attention_mask=True, - return_tensors='pt' - ) - - input_ids.append(encoded_text['input_ids']) - - attention_masks.append(encoded_text['attention_mask']) - -input_ids = torch.cat(input_ids, dim=0) -attention_masks = torch.cat(attention_masks, dim=0) -labels = torch.tensor(targets) - -labels = labels.float() - -# Combine the training inputs into a TensorDataset. 
-dataset = TensorDataset(input_ids, attention_masks, labels) - -# Create a 80-20 train-validation split. - -# Calculate the number of samples to include in each set. -train_size = int(0.8 * len(dataset)) -val_size = len(dataset) - train_size - -# Divide the dataset by randomly selecting samples. -train_dataset, val_dataset = random_split(dataset, [train_size, val_size]) - -print('{:>5,} training samples'.format(train_size)) -print('{:>5,} validation samples'.format(val_size)) - -batch_size = 8 - -train_dataloader = DataLoader( - train_dataset, - sampler=RandomSampler(train_dataset), - batch_size=batch_size -) - -validation_dataloader = DataLoader( - val_dataset, - sampler=SequentialSampler(val_dataset), - batch_size=batch_size -) - -model = DistilBertForSequenceClassification.from_pretrained( - "distilbert-base-uncased", - num_labels=1, - output_attentions=False, - output_hidden_states=False -) -torch.cuda.empty_cache() -model.cuda() - -optimizer = AdamW(model.parameters(), - lr=2e-5, - eps=1e-8) - -EPOCHS = 4 - -total_steps = len(train_dataloader) * EPOCHS - -scheduler = get_linear_schedule_with_warmup(optimizer, - num_warmup_steps=0, - num_training_steps=total_steps) - -device = torch.device('cuda' if torch.cuda.is_available else 'cpu') - -seed = 42 - -random.seed(seed) -np.random.seed(seed) -torch.manual_seed(seed) -torch.cuda.manual_seed_all(seed) - -training_stats = [] - -for epoch in range(EPOCHS): - total_train_loss = 0 - model.train() - - for step, batch in enumerate(train_dataloader): - - b_input_ids = batch[0].to(device) - b_input_mask = batch[1].to(device) - b_labels = batch[2].to(device) - - model.zero_grad() - result = model(b_input_ids, - - attention_mask=b_input_mask, - labels=b_labels, - return_dict=True - ) - - loss = result.loss - - logits = result.logits - - total_train_loss += loss.item() - - loss = loss.to(torch.float32) - - loss.backward() - - torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0) - - optimizer.step() - - scheduler.step() - if step % 40 == 0: - print(f'epoch: {epoch + 1} / {EPOCHS}, step {step + 1} / {len(train_dataloader)}, loss = {loss.item():.4f}') - avg_train_loss = total_train_loss / len(train_dataloader) - print(f'MSE {avg_train_loss:.2f}') - print("Running Validation...") - - model.eval() - total_eval_accuracy = 0 - total_eval_loss = 0 - nb_eval_steps = 0 - - for batch in validation_dataloader: - b_input_ids = batch[0].to(device) - b_input_mask = batch[1].to(device) - b_labels = batch[2].to(device) - - with torch.no_grad(): - result = model( - b_input_ids, - attention_mask=b_input_mask, - labels=b_labels, - return_dict=True, - ) - - loss = loss.to(torch.float32) - logits = result.logits - - total_eval_loss += loss.item() - - logits = logits.detach().cpu().numpy() - label_ids = b_labels.to('cpu').numpy() - - avg_val_loss = total_eval_loss / len(validation_dataloader) - print(f'Validation Loss {avg_val_loss:.2f}') - training_stats.append( - { - 'epoch': epoch + 1, - 'Training Loss': avg_train_loss, - 'MSE': avg_val_loss, - - } - ) - -print("") -print("Training complete!") - -torch.save(model, '/content/untitled') - -PATH = '/content/pytorchBERTmodel' -model = torch.load(PATH) -model.eval() -model.to(device) - - -def predict(text, tokenizer): - model.eval() - model.to(device) - - def prepare_data(text, tokenizer): - input_ids = [] - attention_masks = [] - - encoded_text = tokenizer.encode_plus( - text, - add_special_tokens=True, - max_length=315, - padding=True, - return_attention_mask=True, - return_tensors='pt' - ) - - 
input_ids.append(encoded_text['input_ids']) - attention_masks.append(encoded_text['attention_mask']) - - input_ids = torch.cat(input_ids, dim=0) - attention_masks = torch.cat(attention_masks, dim=0) - return {'input_ids': input_ids, 'attention_masks': attention_masks} - - tokenized_example_text = prepare_data(text, tokenizer) - with torch.no_grad(): - result = model( - tokenized_example_text['input_ids'].to(device), - attention_mask=tokenized_example_text['attention_masks'].to(device), - return_dict=True - ).logits - - return result - - -sen = """ -Recent JWST observations suggest an excess of 𝑧 & 10 galaxy candidates above most theoretical models. Here, we explore how -the interplay between halo formation timescales, star formation efficiency and dust attenuation affects the properties and number -densities of galaxies we can detect in the early universe. We calculate the theoretical upper limit on the UV luminosity function, -assuming star formation is 100% efficient and all gas in halos is converted into stars, and that galaxies are at the peak age for -UV emission (∼ 10 Myr). This upper limit is ∼ 4 orders of magnitude greater than current observations, implying these are -fully consistent with star formation in ΛCDM cosmology. One day, a woman was walking her two dogs. One was a big, friendly labrador -and the other was a little yappy dog. As they walked, the little dog started to bark at a cat. The cat hissed and ran away. The -labrador just stood there wagging his tail. The woman scolded the little dog, "You're supposed to be my protector! Why didn't you -chase that cat away?" The labrador just looked at her and said, "I'm sorry, but I just don't see the point. -""" -sen_2 = """ -Interstellar chemistry is important for galaxy formation, as it determines the rate at which gas can cool, and enables -us to make predictions for observable spectroscopic lines from ions and molecules. We explore two central aspects -of modelling the chemistry of the interstellar medium (ISM): (1) the effects of local stellar radiation, which ionises -and heats the gas, and (2) the depletion of metals onto dust grains, which reduces the abundance of metals in the -gas phase. We run high-resolution (400 M per baryonic particle) simulations of isolated disc galaxies, from dwarfs -to Milky Way-mass, using the fire galaxy formation models together with the chimes non-equilibrium chemistry -and cooling module. In our fiducial model, we couple the chemistry to the stellar fluxes calculated from star particles -using an approximate radiative transfer scheme, and we implement an empirical density-dependent prescription for -metal depletion. For comparison, we also run simulations with a spatially uniform radiation field, and without metal -depletion. Our fiducial model broadly reproduces observed trends in Hi and H2 mass with stellar mass, and in line -luminosity versus star formation rate for [Cii]158µm, [Oi]63µm, [Oiii]88µm, [Nii]122µm and Hα6563˚A. 
Our simulations -""" -windows_2 = [] -words = word_tokenize(sen_2) -for idx, text in enumerate(words): - if idx <= len(words) - 21: - x = ' '.join(words[idx: idx + 20]) - windows_2.append(x) - -win_preds_2 = [] -for text in windows_2: - win_preds_2.append(predict(text, tokenizer).item()) - -windows = [] -words = word_tokenize(sen) -for idx, text in enumerate(words): - if idx <= len(words) - 21: - x = ' '.join(words[idx: idx + 20]) - - windows.append(x) - -win_preds = [] -for text in windows: - win_preds.append(predict(text, tokenizer).item()) - -plt.style.use('seaborn-notebook') -# Data -x = list(range(len(win_preds))) -y = win_preds -x2 = list(range(len(win_preds_2))) -y2 = win_preds_2 -# Plot -plt.plot(x, y, color='#ff0000') -plt.plot(x2, y2, color='blue') -plt.grid(color='#cccccc', linestyle='--', linewidth=1) -plt.xlabel('Window Sequence') -plt.ylabel('Difficulty Score') -plt.suptitle('Difficulty Score Over Time', fontsize=14, fontweight='bold') -plt.show() \ No newline at end of file diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/utils/flops_counter.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/utils/flops_counter.py deleted file mode 100644 index d10af5feca7f4b8c0ba359b7b1c826f754e048be..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/cnn/utils/flops_counter.py +++ /dev/null @@ -1,599 +0,0 @@ -# Modified from flops-counter.pytorch by Vladislav Sovrasov -# original repo: https://github.com/sovrasov/flops-counter.pytorch - -# MIT License - -# Copyright (c) 2018 Vladislav Sovrasov - -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: - -# The above copyright notice and this permission notice shall be included in -# all copies or substantial portions of the Software. - -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. - -import sys -from functools import partial - -import numpy as np -import torch -import torch.nn as nn - -import annotator.uniformer.mmcv as mmcv - - -def get_model_complexity_info(model, - input_shape, - print_per_layer_stat=True, - as_strings=True, - input_constructor=None, - flush=False, - ost=sys.stdout): - """Get complexity information of a model. - - This method can calculate FLOPs and parameter counts of a model with - corresponding input shape. It can also print complexity information for - each layer in a model. - - Supported layers are listed as below: - - Convolutions: ``nn.Conv1d``, ``nn.Conv2d``, ``nn.Conv3d``. - - Activations: ``nn.ReLU``, ``nn.PReLU``, ``nn.ELU``, ``nn.LeakyReLU``, - ``nn.ReLU6``. 
- - Poolings: ``nn.MaxPool1d``, ``nn.MaxPool2d``, ``nn.MaxPool3d``, - ``nn.AvgPool1d``, ``nn.AvgPool2d``, ``nn.AvgPool3d``, - ``nn.AdaptiveMaxPool1d``, ``nn.AdaptiveMaxPool2d``, - ``nn.AdaptiveMaxPool3d``, ``nn.AdaptiveAvgPool1d``, - ``nn.AdaptiveAvgPool2d``, ``nn.AdaptiveAvgPool3d``. - - BatchNorms: ``nn.BatchNorm1d``, ``nn.BatchNorm2d``, - ``nn.BatchNorm3d``, ``nn.GroupNorm``, ``nn.InstanceNorm1d``, - ``InstanceNorm2d``, ``InstanceNorm3d``, ``nn.LayerNorm``. - - Linear: ``nn.Linear``. - - Deconvolution: ``nn.ConvTranspose2d``. - - Upsample: ``nn.Upsample``. - - Args: - model (nn.Module): The model for complexity calculation. - input_shape (tuple): Input shape used for calculation. - print_per_layer_stat (bool): Whether to print complexity information - for each layer in a model. Default: True. - as_strings (bool): Output FLOPs and params counts in a string form. - Default: True. - input_constructor (None | callable): If specified, it takes a callable - method that generates input. otherwise, it will generate a random - tensor with input shape to calculate FLOPs. Default: None. - flush (bool): same as that in :func:`print`. Default: False. - ost (stream): same as ``file`` param in :func:`print`. - Default: sys.stdout. - - Returns: - tuple[float | str]: If ``as_strings`` is set to True, it will return - FLOPs and parameter counts in a string format. otherwise, it will - return those in a float number format. - """ - assert type(input_shape) is tuple - assert len(input_shape) >= 1 - assert isinstance(model, nn.Module) - flops_model = add_flops_counting_methods(model) - flops_model.eval() - flops_model.start_flops_count() - if input_constructor: - input = input_constructor(input_shape) - _ = flops_model(**input) - else: - try: - batch = torch.ones(()).new_empty( - (1, *input_shape), - dtype=next(flops_model.parameters()).dtype, - device=next(flops_model.parameters()).device) - except StopIteration: - # Avoid StopIteration for models which have no parameters, - # like `nn.Relu()`, `nn.AvgPool2d`, etc. - batch = torch.ones(()).new_empty((1, *input_shape)) - - _ = flops_model(batch) - - flops_count, params_count = flops_model.compute_average_flops_cost() - if print_per_layer_stat: - print_model_with_flops( - flops_model, flops_count, params_count, ost=ost, flush=flush) - flops_model.stop_flops_count() - - if as_strings: - return flops_to_string(flops_count), params_to_string(params_count) - - return flops_count, params_count - - -def flops_to_string(flops, units='GFLOPs', precision=2): - """Convert FLOPs number into a string. - - Note that Here we take a multiply-add counts as one FLOP. - - Args: - flops (float): FLOPs number to be converted. - units (str | None): Converted FLOPs units. Options are None, 'GFLOPs', - 'MFLOPs', 'KFLOPs', 'FLOPs'. If set to None, it will automatically - choose the most suitable unit for FLOPs. Default: 'GFLOPs'. - precision (int): Digit number after the decimal point. Default: 2. - - Returns: - str: The converted FLOPs number with units. 
- - Examples: - >>> flops_to_string(1e9) - '1.0 GFLOPs' - >>> flops_to_string(2e5, 'MFLOPs') - '0.2 MFLOPs' - >>> flops_to_string(3e-9, None) - '3e-09 FLOPs' - """ - if units is None: - if flops // 10**9 > 0: - return str(round(flops / 10.**9, precision)) + ' GFLOPs' - elif flops // 10**6 > 0: - return str(round(flops / 10.**6, precision)) + ' MFLOPs' - elif flops // 10**3 > 0: - return str(round(flops / 10.**3, precision)) + ' KFLOPs' - else: - return str(flops) + ' FLOPs' - else: - if units == 'GFLOPs': - return str(round(flops / 10.**9, precision)) + ' ' + units - elif units == 'MFLOPs': - return str(round(flops / 10.**6, precision)) + ' ' + units - elif units == 'KFLOPs': - return str(round(flops / 10.**3, precision)) + ' ' + units - else: - return str(flops) + ' FLOPs' - - -def params_to_string(num_params, units=None, precision=2): - """Convert parameter number into a string. - - Args: - num_params (float): Parameter number to be converted. - units (str | None): Converted FLOPs units. Options are None, 'M', - 'K' and ''. If set to None, it will automatically choose the most - suitable unit for Parameter number. Default: None. - precision (int): Digit number after the decimal point. Default: 2. - - Returns: - str: The converted parameter number with units. - - Examples: - >>> params_to_string(1e9) - '1000.0 M' - >>> params_to_string(2e5) - '200.0 k' - >>> params_to_string(3e-9) - '3e-09' - """ - if units is None: - if num_params // 10**6 > 0: - return str(round(num_params / 10**6, precision)) + ' M' - elif num_params // 10**3: - return str(round(num_params / 10**3, precision)) + ' k' - else: - return str(num_params) - else: - if units == 'M': - return str(round(num_params / 10.**6, precision)) + ' ' + units - elif units == 'K': - return str(round(num_params / 10.**3, precision)) + ' ' + units - else: - return str(num_params) - - -def print_model_with_flops(model, - total_flops, - total_params, - units='GFLOPs', - precision=3, - ost=sys.stdout, - flush=False): - """Print a model with FLOPs for each layer. - - Args: - model (nn.Module): The model to be printed. - total_flops (float): Total FLOPs of the model. - total_params (float): Total parameter counts of the model. - units (str | None): Converted FLOPs units. Default: 'GFLOPs'. - precision (int): Digit number after the decimal point. Default: 3. - ost (stream): same as `file` param in :func:`print`. - Default: sys.stdout. - flush (bool): same as that in :func:`print`. Default: False. 
- - Example: - >>> class ExampleModel(nn.Module): - - >>> def __init__(self): - >>> super().__init__() - >>> self.conv1 = nn.Conv2d(3, 8, 3) - >>> self.conv2 = nn.Conv2d(8, 256, 3) - >>> self.conv3 = nn.Conv2d(256, 8, 3) - >>> self.avg_pool = nn.AdaptiveAvgPool2d((1, 1)) - >>> self.flatten = nn.Flatten() - >>> self.fc = nn.Linear(8, 1) - - >>> def forward(self, x): - >>> x = self.conv1(x) - >>> x = self.conv2(x) - >>> x = self.conv3(x) - >>> x = self.avg_pool(x) - >>> x = self.flatten(x) - >>> x = self.fc(x) - >>> return x - - >>> model = ExampleModel() - >>> x = (3, 16, 16) - to print the complexity information state for each layer, you can use - >>> get_model_complexity_info(model, x) - or directly use - >>> print_model_with_flops(model, 4579784.0, 37361) - ExampleModel( - 0.037 M, 100.000% Params, 0.005 GFLOPs, 100.000% FLOPs, - (conv1): Conv2d(0.0 M, 0.600% Params, 0.0 GFLOPs, 0.959% FLOPs, 3, 8, kernel_size=(3, 3), stride=(1, 1)) # noqa: E501 - (conv2): Conv2d(0.019 M, 50.020% Params, 0.003 GFLOPs, 58.760% FLOPs, 8, 256, kernel_size=(3, 3), stride=(1, 1)) - (conv3): Conv2d(0.018 M, 49.356% Params, 0.002 GFLOPs, 40.264% FLOPs, 256, 8, kernel_size=(3, 3), stride=(1, 1)) - (avg_pool): AdaptiveAvgPool2d(0.0 M, 0.000% Params, 0.0 GFLOPs, 0.017% FLOPs, output_size=(1, 1)) - (flatten): Flatten(0.0 M, 0.000% Params, 0.0 GFLOPs, 0.000% FLOPs, ) - (fc): Linear(0.0 M, 0.024% Params, 0.0 GFLOPs, 0.000% FLOPs, in_features=8, out_features=1, bias=True) - ) - """ - - def accumulate_params(self): - if is_supported_instance(self): - return self.__params__ - else: - sum = 0 - for m in self.children(): - sum += m.accumulate_params() - return sum - - def accumulate_flops(self): - if is_supported_instance(self): - return self.__flops__ / model.__batch_counter__ - else: - sum = 0 - for m in self.children(): - sum += m.accumulate_flops() - return sum - - def flops_repr(self): - accumulated_num_params = self.accumulate_params() - accumulated_flops_cost = self.accumulate_flops() - return ', '.join([ - params_to_string( - accumulated_num_params, units='M', precision=precision), - '{:.3%} Params'.format(accumulated_num_params / total_params), - flops_to_string( - accumulated_flops_cost, units=units, precision=precision), - '{:.3%} FLOPs'.format(accumulated_flops_cost / total_flops), - self.original_extra_repr() - ]) - - def add_extra_repr(m): - m.accumulate_flops = accumulate_flops.__get__(m) - m.accumulate_params = accumulate_params.__get__(m) - flops_extra_repr = flops_repr.__get__(m) - if m.extra_repr != flops_extra_repr: - m.original_extra_repr = m.extra_repr - m.extra_repr = flops_extra_repr - assert m.extra_repr != m.original_extra_repr - - def del_extra_repr(m): - if hasattr(m, 'original_extra_repr'): - m.extra_repr = m.original_extra_repr - del m.original_extra_repr - if hasattr(m, 'accumulate_flops'): - del m.accumulate_flops - - model.apply(add_extra_repr) - print(model, file=ost, flush=flush) - model.apply(del_extra_repr) - - -def get_model_parameters_number(model): - """Calculate parameter number of a model. - - Args: - model (nn.module): The model for parameter number calculation. - - Returns: - float: Parameter number of the model. 
- """ - num_params = sum(p.numel() for p in model.parameters() if p.requires_grad) - return num_params - - -def add_flops_counting_methods(net_main_module): - # adding additional methods to the existing module object, - # this is done this way so that each function has access to self object - net_main_module.start_flops_count = start_flops_count.__get__( - net_main_module) - net_main_module.stop_flops_count = stop_flops_count.__get__( - net_main_module) - net_main_module.reset_flops_count = reset_flops_count.__get__( - net_main_module) - net_main_module.compute_average_flops_cost = compute_average_flops_cost.__get__( # noqa: E501 - net_main_module) - - net_main_module.reset_flops_count() - - return net_main_module - - -def compute_average_flops_cost(self): - """Compute average FLOPs cost. - - A method to compute average FLOPs cost, which will be available after - `add_flops_counting_methods()` is called on a desired net object. - - Returns: - float: Current mean flops consumption per image. - """ - batches_count = self.__batch_counter__ - flops_sum = 0 - for module in self.modules(): - if is_supported_instance(module): - flops_sum += module.__flops__ - params_sum = get_model_parameters_number(self) - return flops_sum / batches_count, params_sum - - -def start_flops_count(self): - """Activate the computation of mean flops consumption per image. - - A method to activate the computation of mean flops consumption per image. - which will be available after ``add_flops_counting_methods()`` is called on - a desired net object. It should be called before running the network. - """ - add_batch_counter_hook_function(self) - - def add_flops_counter_hook_function(module): - if is_supported_instance(module): - if hasattr(module, '__flops_handle__'): - return - - else: - handle = module.register_forward_hook( - get_modules_mapping()[type(module)]) - - module.__flops_handle__ = handle - - self.apply(partial(add_flops_counter_hook_function)) - - -def stop_flops_count(self): - """Stop computing the mean flops consumption per image. - - A method to stop computing the mean flops consumption per image, which will - be available after ``add_flops_counting_methods()`` is called on a desired - net object. It can be called to pause the computation whenever. - """ - remove_batch_counter_hook_function(self) - self.apply(remove_flops_counter_hook_function) - - -def reset_flops_count(self): - """Reset statistics computed so far. - - A method to Reset computed statistics, which will be available after - `add_flops_counting_methods()` is called on a desired net object. 
- """ - add_batch_counter_variables_or_reset(self) - self.apply(add_flops_counter_variable_or_reset) - - -# ---- Internal functions -def empty_flops_counter_hook(module, input, output): - module.__flops__ += 0 - - -def upsample_flops_counter_hook(module, input, output): - output_size = output[0] - batch_size = output_size.shape[0] - output_elements_count = batch_size - for val in output_size.shape[1:]: - output_elements_count *= val - module.__flops__ += int(output_elements_count) - - -def relu_flops_counter_hook(module, input, output): - active_elements_count = output.numel() - module.__flops__ += int(active_elements_count) - - -def linear_flops_counter_hook(module, input, output): - input = input[0] - output_last_dim = output.shape[ - -1] # pytorch checks dimensions, so here we don't care much - module.__flops__ += int(np.prod(input.shape) * output_last_dim) - - -def pool_flops_counter_hook(module, input, output): - input = input[0] - module.__flops__ += int(np.prod(input.shape)) - - -def norm_flops_counter_hook(module, input, output): - input = input[0] - - batch_flops = np.prod(input.shape) - if (getattr(module, 'affine', False) - or getattr(module, 'elementwise_affine', False)): - batch_flops *= 2 - module.__flops__ += int(batch_flops) - - -def deconv_flops_counter_hook(conv_module, input, output): - # Can have multiple inputs, getting the first one - input = input[0] - - batch_size = input.shape[0] - input_height, input_width = input.shape[2:] - - kernel_height, kernel_width = conv_module.kernel_size - in_channels = conv_module.in_channels - out_channels = conv_module.out_channels - groups = conv_module.groups - - filters_per_channel = out_channels // groups - conv_per_position_flops = ( - kernel_height * kernel_width * in_channels * filters_per_channel) - - active_elements_count = batch_size * input_height * input_width - overall_conv_flops = conv_per_position_flops * active_elements_count - bias_flops = 0 - if conv_module.bias is not None: - output_height, output_width = output.shape[2:] - bias_flops = out_channels * batch_size * output_height * output_height - overall_flops = overall_conv_flops + bias_flops - - conv_module.__flops__ += int(overall_flops) - - -def conv_flops_counter_hook(conv_module, input, output): - # Can have multiple inputs, getting the first one - input = input[0] - - batch_size = input.shape[0] - output_dims = list(output.shape[2:]) - - kernel_dims = list(conv_module.kernel_size) - in_channels = conv_module.in_channels - out_channels = conv_module.out_channels - groups = conv_module.groups - - filters_per_channel = out_channels // groups - conv_per_position_flops = int( - np.prod(kernel_dims)) * in_channels * filters_per_channel - - active_elements_count = batch_size * int(np.prod(output_dims)) - - overall_conv_flops = conv_per_position_flops * active_elements_count - - bias_flops = 0 - - if conv_module.bias is not None: - - bias_flops = out_channels * active_elements_count - - overall_flops = overall_conv_flops + bias_flops - - conv_module.__flops__ += int(overall_flops) - - -def batch_counter_hook(module, input, output): - batch_size = 1 - if len(input) > 0: - # Can have multiple inputs, getting the first one - input = input[0] - batch_size = len(input) - else: - pass - print('Warning! 
No positional inputs found for a module, ' - 'assuming batch size is 1.') - module.__batch_counter__ += batch_size - - -def add_batch_counter_variables_or_reset(module): - - module.__batch_counter__ = 0 - - -def add_batch_counter_hook_function(module): - if hasattr(module, '__batch_counter_handle__'): - return - - handle = module.register_forward_hook(batch_counter_hook) - module.__batch_counter_handle__ = handle - - -def remove_batch_counter_hook_function(module): - if hasattr(module, '__batch_counter_handle__'): - module.__batch_counter_handle__.remove() - del module.__batch_counter_handle__ - - -def add_flops_counter_variable_or_reset(module): - if is_supported_instance(module): - if hasattr(module, '__flops__') or hasattr(module, '__params__'): - print('Warning: variables __flops__ or __params__ are already ' - 'defined for the module' + type(module).__name__ + - ' ptflops can affect your code!') - module.__flops__ = 0 - module.__params__ = get_model_parameters_number(module) - - -def is_supported_instance(module): - if type(module) in get_modules_mapping(): - return True - return False - - -def remove_flops_counter_hook_function(module): - if is_supported_instance(module): - if hasattr(module, '__flops_handle__'): - module.__flops_handle__.remove() - del module.__flops_handle__ - - -def get_modules_mapping(): - return { - # convolutions - nn.Conv1d: conv_flops_counter_hook, - nn.Conv2d: conv_flops_counter_hook, - mmcv.cnn.bricks.Conv2d: conv_flops_counter_hook, - nn.Conv3d: conv_flops_counter_hook, - mmcv.cnn.bricks.Conv3d: conv_flops_counter_hook, - # activations - nn.ReLU: relu_flops_counter_hook, - nn.PReLU: relu_flops_counter_hook, - nn.ELU: relu_flops_counter_hook, - nn.LeakyReLU: relu_flops_counter_hook, - nn.ReLU6: relu_flops_counter_hook, - # poolings - nn.MaxPool1d: pool_flops_counter_hook, - nn.AvgPool1d: pool_flops_counter_hook, - nn.AvgPool2d: pool_flops_counter_hook, - nn.MaxPool2d: pool_flops_counter_hook, - mmcv.cnn.bricks.MaxPool2d: pool_flops_counter_hook, - nn.MaxPool3d: pool_flops_counter_hook, - mmcv.cnn.bricks.MaxPool3d: pool_flops_counter_hook, - nn.AvgPool3d: pool_flops_counter_hook, - nn.AdaptiveMaxPool1d: pool_flops_counter_hook, - nn.AdaptiveAvgPool1d: pool_flops_counter_hook, - nn.AdaptiveMaxPool2d: pool_flops_counter_hook, - nn.AdaptiveAvgPool2d: pool_flops_counter_hook, - nn.AdaptiveMaxPool3d: pool_flops_counter_hook, - nn.AdaptiveAvgPool3d: pool_flops_counter_hook, - # normalizations - nn.BatchNorm1d: norm_flops_counter_hook, - nn.BatchNorm2d: norm_flops_counter_hook, - nn.BatchNorm3d: norm_flops_counter_hook, - nn.GroupNorm: norm_flops_counter_hook, - nn.InstanceNorm1d: norm_flops_counter_hook, - nn.InstanceNorm2d: norm_flops_counter_hook, - nn.InstanceNorm3d: norm_flops_counter_hook, - nn.LayerNorm: norm_flops_counter_hook, - # FC - nn.Linear: linear_flops_counter_hook, - mmcv.cnn.bricks.Linear: linear_flops_counter_hook, - # Upscale - nn.Upsample: upsample_flops_counter_hook, - # Deconvolution - nn.ConvTranspose2d: deconv_flops_counter_hook, - mmcv.cnn.bricks.ConvTranspose2d: deconv_flops_counter_hook, - } diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/iou3d.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/iou3d.py deleted file mode 100644 index 6fc71979190323f44c09f8b7e1761cf49cd2d76b..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/iou3d.py +++ /dev/null @@ -1,85 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', [ - 'iou3d_boxes_iou_bev_forward', 'iou3d_nms_forward', - 'iou3d_nms_normal_forward' -]) - - -def boxes_iou_bev(boxes_a, boxes_b): - """Calculate boxes IoU in the Bird's Eye View. - - Args: - boxes_a (torch.Tensor): Input boxes a with shape (M, 5). - boxes_b (torch.Tensor): Input boxes b with shape (N, 5). - - Returns: - ans_iou (torch.Tensor): IoU result with shape (M, N). - """ - ans_iou = boxes_a.new_zeros( - torch.Size((boxes_a.shape[0], boxes_b.shape[0]))) - - ext_module.iou3d_boxes_iou_bev_forward(boxes_a.contiguous(), - boxes_b.contiguous(), ans_iou) - - return ans_iou - - -def nms_bev(boxes, scores, thresh, pre_max_size=None, post_max_size=None): - """NMS function GPU implementation (for BEV boxes). The overlap of two - boxes for IoU calculation is defined as the exact overlapping area of the - two boxes. In this function, one can also set ``pre_max_size`` and - ``post_max_size``. - - Args: - boxes (torch.Tensor): Input boxes with the shape of [N, 5] - ([x1, y1, x2, y2, ry]). - scores (torch.Tensor): Scores of boxes with the shape of [N]. - thresh (float): Overlap threshold of NMS. - pre_max_size (int, optional): Max size of boxes before NMS. - Default: None. - post_max_size (int, optional): Max size of boxes after NMS. - Default: None. - - Returns: - torch.Tensor: Indexes after NMS. - """ - assert boxes.size(1) == 5, 'Input boxes shape should be [N, 5]' - order = scores.sort(0, descending=True)[1] - - if pre_max_size is not None: - order = order[:pre_max_size] - boxes = boxes[order].contiguous() - - keep = torch.zeros(boxes.size(0), dtype=torch.long) - num_out = ext_module.iou3d_nms_forward(boxes, keep, thresh) - keep = order[keep[:num_out].cuda(boxes.device)].contiguous() - if post_max_size is not None: - keep = keep[:post_max_size] - return keep - - -def nms_normal_bev(boxes, scores, thresh): - """Normal NMS function GPU implementation (for BEV boxes). The overlap of - two boxes for IoU calculation is defined as the exact overlapping area of - the two boxes WITH their yaw angle set to 0. - - Args: - boxes (torch.Tensor): Input boxes with shape (N, 5). - scores (torch.Tensor): Scores of predicted boxes with shape (N). - thresh (float): Overlap threshold of NMS. - - Returns: - torch.Tensor: Remaining indices with scores in descending order. 
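-
-    Example:
-        >>> # illustrative sketch; needs the compiled `_ext` CUDA ops and a GPU
-        >>> boxes = torch.tensor([[0., 0., 2., 2., 0.], [.5, .5, 2.5, 2.5, 0.]]).cuda()
-        >>> scores = torch.tensor([0.9, 0.8]).cuda()
-        >>> keep = nms_normal_bev(boxes, scores, thresh=0.5)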
- """ - assert boxes.shape[1] == 5, 'Input boxes shape should be [N, 5]' - order = scores.sort(0, descending=True)[1] - - boxes = boxes[order].contiguous() - - keep = torch.zeros(boxes.size(0), dtype=torch.long) - num_out = ext_module.iou3d_nms_normal_forward(boxes, keep, thresh) - return order[keep[:num_out].cuda(boxes.device)].contiguous() diff --git a/spaces/Roixy/hakurei-waifu-diffusion/README.md b/spaces/Roixy/hakurei-waifu-diffusion/README.md deleted file mode 100644 index dc0b7d24a57eb4346a15820018e6caf9cbfb2ebc..0000000000000000000000000000000000000000 --- a/spaces/Roixy/hakurei-waifu-diffusion/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Hakurei Waifu Diffusion -emoji: 💩 -colorFrom: purple -colorTo: pink -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SNKRWRLD/SNKR_WRLD_Shoe_Picker/README.md b/spaces/SNKRWRLD/SNKR_WRLD_Shoe_Picker/README.md deleted file mode 100644 index 1a88baa4a5dd9055911eda427cfda67581432502..0000000000000000000000000000000000000000 --- a/spaces/SNKRWRLD/SNKR_WRLD_Shoe_Picker/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: SNKR WRLD Shoe Picker -emoji: 🏃 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/SalahZa/Tunisian-ASR-v0/partly_frozen_splitted_wavlm/1986/lm_tunisian.py b/spaces/SalahZa/Tunisian-ASR-v0/partly_frozen_splitted_wavlm/1986/lm_tunisian.py deleted file mode 100644 index ca1096f64b8a3d9eb7e2b670c5d1ec241d1a3ba3..0000000000000000000000000000000000000000 --- a/spaces/SalahZa/Tunisian-ASR-v0/partly_frozen_splitted_wavlm/1986/lm_tunisian.py +++ /dev/null @@ -1,361 +0,0 @@ -#!/usr/bin/env/python3 -"""Recipe for training a wav2vec-based ctc ASR system with librispeech. -The system employs wav2vec as its encoder. Decoding is performed with -ctc greedy decoder. -To run this recipe, do the following: -> python train_with_wav2vec.py hparams/train_with_wav2vec.yaml -The neural network is trained on CTC likelihood target and character units -are used as basic recognition tokens. Training is performed on the full -LibriSpeech dataset (960 h). 
- -Authors - * Sung-Lin Yeh 2021 - * Titouan Parcollet 2021 - * Ju-Chieh Chou 2020 - * Mirco Ravanelli 2020 - * Abdel Heba 2020 - * Peter Plantinga 2020 - * Samuele Cornell 2020 -""" - -import os -import sys -import torch -import logging -import speechbrain as sb -from speechbrain.utils.distributed import run_on_main -from hyperpyyaml import load_hyperpyyaml -from pathlib import Path -import torchaudio.transforms as T - -from pyctcdecode import build_ctcdecoder -logger = logging.getLogger(__name__) - -# Define training procedure -class ASR(sb.Brain): - def compute_forward(self, batch, stage): - """Forward computations from the waveform batches to the output probabilities.""" - batch = batch.to(self.device) - wavs, wav_lens = batch.sig - tokens_bos, _ = batch.tokens_bos - wavs, wav_lens = wavs.to(self.device), wav_lens.to(self.device) - - # Forward pass - feats = self.modules.wav2vec2(wavs) - x = self.modules.enc(feats) - # Compute outputs - p_tokens = None - logits = self.modules.ctc_lin(x) - p_ctc = self.hparams.log_softmax(logits) - if stage != sb.Stage.TRAIN: - p_tokens = sb.decoders.ctc_greedy_decode( - p_ctc, wav_lens, blank_id=self.hparams.blank_index - ) - return p_ctc, wav_lens, p_tokens - - def compute_objectives(self, predictions, batch, stage): - """Computes the loss (CTC+NLL) given predictions and targets.""" - - p_ctc, wav_lens, predicted_tokens = predictions - - ids = batch.id - tokens_eos, tokens_eos_lens = batch.tokens_eos - tokens, tokens_lens = batch.tokens - - if hasattr(self.modules, "env_corrupt") and stage == sb.Stage.TRAIN: - tokens_eos = torch.cat([tokens_eos, tokens_eos], dim=0) - tokens_eos_lens = torch.cat( - [tokens_eos_lens, tokens_eos_lens], dim=0 - ) - tokens = torch.cat([tokens, tokens], dim=0) - tokens_lens = torch.cat([tokens_lens, tokens_lens], dim=0) - - loss_ctc = self.hparams.ctc_cost(p_ctc, tokens, wav_lens, tokens_lens) - loss = loss_ctc - if stage != sb.Stage.TRAIN: - # Decode token terms to words - predicted_words =[] - for logs in p_ctc: - text = decoder.decode(logs.detach().cpu().numpy()) - predicted_words.append(text.split(" ")) - - - target_words = [wrd.split(" ") for wrd in batch.wrd] - self.wer_metric.append(ids, predicted_words, target_words) - self.cer_metric.append(ids, predicted_words, target_words) - - return loss - - def fit_batch(self, batch): - """Train the parameters given a single batch in input""" - predictions = self.compute_forward(batch, sb.Stage.TRAIN) - loss = self.compute_objectives(predictions, batch, sb.Stage.TRAIN) - loss.backward() - if self.check_gradients(loss): - self.wav2vec_optimizer.step() - self.model_optimizer.step() - - self.wav2vec_optimizer.zero_grad() - self.model_optimizer.zero_grad() - - return loss.detach() - - def evaluate_batch(self, batch, stage): - """Computations needed for validation/test batches""" - predictions = self.compute_forward(batch, stage=stage) - with torch.no_grad(): - loss = self.compute_objectives(predictions, batch, stage=stage) - return loss.detach() - - def on_stage_start(self, stage, epoch): - """Gets called at the beginning of each epoch""" - if stage != sb.Stage.TRAIN: - self.cer_metric = self.hparams.cer_computer() - self.wer_metric = self.hparams.error_rate_computer() - - def on_stage_end(self, stage, stage_loss, epoch): - """Gets called at the end of an epoch.""" - # Compute/store important stats - stage_stats = {"loss": stage_loss} - if stage == sb.Stage.TRAIN: - self.train_stats = stage_stats - else: - stage_stats["CER"] = self.cer_metric.summarize("error_rate") - 
stage_stats["WER"] = self.wer_metric.summarize("error_rate") - - # Perform end-of-iteration things, like annealing, logging, etc. - if stage == sb.Stage.VALID: - old_lr_model, new_lr_model = self.hparams.lr_annealing_model( - stage_stats["loss"] - ) - old_lr_wav2vec, new_lr_wav2vec = self.hparams.lr_annealing_wav2vec( - stage_stats["loss"] - ) - sb.nnet.schedulers.update_learning_rate( - self.model_optimizer, new_lr_model - ) - sb.nnet.schedulers.update_learning_rate( - self.wav2vec_optimizer, new_lr_wav2vec - ) - self.hparams.train_logger.log_stats( - stats_meta={ - "epoch": epoch, - "lr_model": old_lr_model, - "lr_wav2vec": old_lr_wav2vec, - }, - train_stats=self.train_stats, - valid_stats=stage_stats, - ) - self.checkpointer.save_and_keep_only( - meta={"WER": stage_stats["WER"]}, min_keys=["WER"], - ) - elif stage == sb.Stage.TEST: - self.hparams.train_logger.log_stats( - stats_meta={"Epoch loaded": self.hparams.epoch_counter.current}, - test_stats=stage_stats, - ) - with open(self.hparams.wer_file, "w") as w: - self.wer_metric.write_stats(w) - - def init_optimizers(self): - "Initializes the wav2vec2 optimizer and model optimizer" - self.wav2vec_optimizer = self.hparams.wav2vec_opt_class( - self.modules.wav2vec2.parameters() - ) - self.model_optimizer = self.hparams.model_opt_class( - self.hparams.model.parameters() - ) - - if self.checkpointer is not None: - self.checkpointer.add_recoverable( - "wav2vec_opt", self.wav2vec_optimizer - ) - self.checkpointer.add_recoverable("modelopt", self.model_optimizer) - - -def dataio_prepare(hparams): - """This function prepares the datasets to be used in the brain class. - It also defines the data processing pipeline through user-defined functions.""" - data_folder = hparams["data_folder"] - - train_data = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=hparams["train_csv"], replacements={"data_root": data_folder}, - ) - - if hparams["sorting"] == "ascending": - # we sort training data to speed up training and get better results. - train_data = train_data.filtered_sorted(sort_key="duration") - # when sorting do not shuffle in dataloader ! otherwise is pointless - hparams["train_dataloader_opts"]["shuffle"] = False - - elif hparams["sorting"] == "descending": - train_data = train_data.filtered_sorted( - sort_key="duration", reverse=True - ) - # when sorting do not shuffle in dataloader ! otherwise is pointless - hparams["train_dataloader_opts"]["shuffle"] = False - - elif hparams["sorting"] == "random": - pass - - else: - raise NotImplementedError( - "sorting must be random, ascending or descending" - ) - - valid_data = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=hparams["valid_csv"], replacements={"data_root": data_folder}, - ) - valid_data = valid_data.filtered_sorted(sort_key="duration") - - # test is separate - test_datasets = {} - for csv_file in hparams["test_csv"]: - name = Path(csv_file).stem - test_datasets[name] = sb.dataio.dataset.DynamicItemDataset.from_csv( - csv_path=csv_file, replacements={"data_root": data_folder} - ) - test_datasets[name] = test_datasets[name].filtered_sorted( - sort_key="duration" - ) - - datasets = [train_data, valid_data] + [i for k, i in test_datasets.items()] - - # 2. 
Define audio pipeline: - @sb.utils.data_pipeline.takes("wav", "sr") - @sb.utils.data_pipeline.provides("sig") - def audio_pipeline(wav, sr): - sig = sb.dataio.dataio.read_audio(wav) - sig = resamplers[sr](sig) - return sig - - sb.dataio.dataset.add_dynamic_item(datasets, audio_pipeline) - label_encoder = sb.dataio.encoder.CTCTextEncoder() - - # 3. Define text pipeline: - @sb.utils.data_pipeline.takes("wrd") - @sb.utils.data_pipeline.provides( - "wrd", "char_list", "tokens_list", "tokens_bos", "tokens_eos", "tokens" - ) - def text_pipeline(wrd): - yield wrd - char_list = list(wrd) - yield char_list - tokens_list = label_encoder.encode_sequence(char_list) - yield tokens_list - tokens_bos = torch.LongTensor([hparams["bos_index"]] + (tokens_list)) - yield tokens_bos - tokens_eos = torch.LongTensor(tokens_list + [hparams["eos_index"]]) - yield tokens_eos - tokens = torch.LongTensor(tokens_list) - yield tokens - - sb.dataio.dataset.add_dynamic_item(datasets, text_pipeline) - - lab_enc_file = os.path.join(hparams["save_folder"], "label_encoder.txt") - special_labels = { - "bos_label": hparams["bos_index"], - "eos_label": hparams["eos_index"], - "blank_label": hparams["blank_index"], - } - label_encoder.load_or_create( - path=lab_enc_file, - from_didatasets=[train_data], - output_key="char_list", - special_labels=special_labels, - sequence_input=True, - ) - - # 4. Set output: - sb.dataio.dataset.set_output_keys( - datasets, - ["id", "sig", "wrd", "char_list", "tokens_bos", "tokens_eos", "tokens"], - ) - return train_data, valid_data, test_datasets, label_encoder - - -if __name__ == "__main__": - - # CLI: - hparams_file, run_opts, overrides = sb.parse_arguments(sys.argv[1:]) - - # If distributed_launch=True then - # create ddp_group with the right communication protocol - sb.utils.distributed.ddp_init_group(run_opts) - - with open(hparams_file) as fin: - hparams = load_hyperpyyaml(fin, overrides) - - # Create experiment directory - sb.create_experiment_directory( - experiment_directory=hparams["output_folder"], - hyperparams_to_save=hparams_file, - overrides=overrides, - ) - def read_labels_file(labels_file): - with open(labels_file, "r") as lf: - lines = lf.read().splitlines() - division = "===" - numbers = {} - for line in lines : - if division in line : - break - string, number = line.split("=>") - number = int(number) - string = string[1:-2] - numbers[number] = string - return [numbers[x] for x in range(len(numbers))] - labels = read_labels_file(os.path.join(hparams["save_folder"], "label_encoder.txt")) - print(labels) - labels = [""] + labels[1:] - print(len(labels)) - decoder = build_ctcdecoder( - labels, - kenlm_model_path="tunisian.arpa", # either .arpa or .bin file - alpha=0.5, # tuned on a val set - beta=1.0, # tuned on a val set - ) - - # Dataset prep (parsing Librispeech) - - resampler_8000 = T.Resample(8000, 16000, dtype=torch.float) - - resampler_44100 =T.Resample(44100, 16000, dtype=torch.float) - resampler_48000 =T.Resample(48000, 16000, dtype=torch.float) - resamplers = {"8000": resampler_8000, "44100":resampler_44100, "48000": resampler_48000} - - # here we create the datasets objects as well as tokenization and encoding - train_data, valid_data, test_datasets, label_encoder = dataio_prepare( - hparams - ) - - # Trainer initialization - asr_brain = ASR( - modules=hparams["modules"], - hparams=hparams, - run_opts=run_opts, - checkpointer=hparams["checkpointer"], - ) - asr_brain.device= "cpu" - asr_brain.modules.to("cpu") - # We dynamicaly add the tokenizer to our brain class. 
- # NB: This tokenizer corresponds to the one used for the LM!! - asr_brain.tokenizer = label_encoder - - # Training - asr_brain.fit( - asr_brain.hparams.epoch_counter, - train_data, - valid_data, - train_loader_kwargs=hparams["train_dataloader_opts"], - valid_loader_kwargs=hparams["valid_dataloader_opts"], - ) - - # Testing - for k in test_datasets.keys(): # keys are test_clean, test_other etc - asr_brain.hparams.wer_file = os.path.join( - hparams["output_folder"], "wer_{}.txt".format(k) - ) - asr_brain.evaluate( - test_datasets[k], test_loader_kwargs=hparams["test_dataloader_opts"] - ) diff --git a/spaces/ServerX/PorcoDiaz/demucs/train.py b/spaces/ServerX/PorcoDiaz/demucs/train.py deleted file mode 100644 index 6bd221279dc986a6df1a8d7b4d4444bb822a1cb3..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/demucs/train.py +++ /dev/null @@ -1,127 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import sys - -import tqdm -from torch.utils.data import DataLoader -from torch.utils.data.distributed import DistributedSampler - -from .utils import apply_model, average_metric, center_trim - - -def train_model(epoch, - dataset, - model, - criterion, - optimizer, - augment, - quantizer=None, - diffq=0, - repeat=1, - device="cpu", - seed=None, - workers=4, - world_size=1, - batch_size=16): - - if world_size > 1: - sampler = DistributedSampler(dataset) - sampler_epoch = epoch * repeat - if seed is not None: - sampler_epoch += seed * 1000 - sampler.set_epoch(sampler_epoch) - batch_size //= world_size - loader = DataLoader(dataset, batch_size=batch_size, sampler=sampler, num_workers=workers) - else: - loader = DataLoader(dataset, batch_size=batch_size, num_workers=workers, shuffle=True) - current_loss = 0 - model_size = 0 - for repetition in range(repeat): - tq = tqdm.tqdm(loader, - ncols=120, - desc=f"[{epoch:03d}] train ({repetition + 1}/{repeat})", - leave=False, - file=sys.stdout, - unit=" batch") - total_loss = 0 - for idx, sources in enumerate(tq): - if len(sources) < batch_size: - # skip uncomplete batch for augment.Remix to work properly - continue - sources = sources.to(device) - sources = augment(sources) - mix = sources.sum(dim=1) - - estimates = model(mix) - sources = center_trim(sources, estimates) - loss = criterion(estimates, sources) - model_size = 0 - if quantizer is not None: - model_size = quantizer.model_size() - - train_loss = loss + diffq * model_size - train_loss.backward() - grad_norm = 0 - for p in model.parameters(): - if p.grad is not None: - grad_norm += p.grad.data.norm()**2 - grad_norm = grad_norm**0.5 - optimizer.step() - optimizer.zero_grad() - - if quantizer is not None: - model_size = model_size.item() - - total_loss += loss.item() - current_loss = total_loss / (1 + idx) - tq.set_postfix(loss=f"{current_loss:.4f}", ms=f"{model_size:.2f}", - grad=f"{grad_norm:.5f}") - - # free some space before next round - del sources, mix, estimates, loss, train_loss - - if world_size > 1: - sampler.epoch += 1 - - if world_size > 1: - current_loss = average_metric(current_loss) - return current_loss, model_size - - -def validate_model(epoch, - dataset, - model, - criterion, - device="cpu", - rank=0, - world_size=1, - shifts=0, - overlap=0.25, - split=False): - indexes = range(rank, len(dataset), world_size) - tq = tqdm.tqdm(indexes, - ncols=120, - desc=f"[{epoch:03d}] valid", - leave=False, - 
file=sys.stdout, - unit=" track") - current_loss = 0 - for index in tq: - streams = dataset[index] - # first five minutes to avoid OOM on --upsample models - streams = streams[..., :15_000_000] - streams = streams.to(device) - sources = streams[1:] - mix = streams[0] - estimates = apply_model(model, mix, shifts=shifts, split=split, overlap=overlap) - loss = criterion(estimates, sources) - current_loss += loss.item() / len(indexes) - del estimates, streams, sources - - if world_size > 1: - current_loss = average_metric(current_loss, len(indexes)) - return current_loss diff --git a/spaces/Skyler123/TangGPT/run_Windows.bat b/spaces/Skyler123/TangGPT/run_Windows.bat deleted file mode 100644 index 4c18f9ccaeea0af972301ffdf48778641221f76d..0000000000000000000000000000000000000000 --- a/spaces/Skyler123/TangGPT/run_Windows.bat +++ /dev/null @@ -1,5 +0,0 @@ -@echo off -echo Opening ChuanhuChatGPT... - -REM Open powershell via bat -start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py" diff --git a/spaces/SophiaGaogao/sophia/README.md b/spaces/SophiaGaogao/sophia/README.md deleted file mode 100644 index d64a2bcc16f6f77670bff275f7f516b5ee0791d7..0000000000000000000000000000000000000000 --- a/spaces/SophiaGaogao/sophia/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sophia -emoji: 📈 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/db/base.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/db/base.py deleted file mode 100644 index b7c991b6155072fae9264c31fc2a3f5363741f8e..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/db/base.py +++ /dev/null @@ -1,192 +0,0 @@ -from typing import Any, Optional, Sequence, Tuple, Type -from types import TracebackType -from typing_extensions import Protocol, Self, Literal -from abc import ABC, abstractmethod -from threading import local -from overrides import override, EnforceOverrides -import pypika -import pypika.queries -from chromadb.config import System, Component -from uuid import UUID -from itertools import islice, count - - -class NotFoundError(Exception): - """Raised when a delete or update operation affects no rows""" - - pass - - -class UniqueConstraintError(Exception): - """Raised when an insert operation would violate a unique constraint""" - - pass - - -class Cursor(Protocol): - """Reifies methods we use from a DBAPI2 Cursor since DBAPI2 is not typed.""" - - def execute(self, sql: str, params: Optional[Tuple[Any, ...]] = None) -> Self: - ... - - def executescript(self, script: str) -> Self: - ... - - def executemany( - self, sql: str, params: Optional[Sequence[Tuple[Any, ...]]] = None - ) -> Self: - ... - - def fetchone(self) -> Tuple[Any, ...]: - ... - - def fetchall(self) -> Sequence[Tuple[Any, ...]]: - ... - - -class TxWrapper(ABC, EnforceOverrides): - """Wrapper class for DBAPI 2.0 Connection objects, with which clients can implement transactions. 
- Makes two guarantees that basic DBAPI 2.0 connections do not: - - - __enter__ returns a Cursor object consistently (instead of a Connection like some do) - - Always re-raises an exception if one was thrown from the body - """ - - @abstractmethod - def __enter__(self) -> Cursor: - pass - - @abstractmethod - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_value: Optional[BaseException], - traceback: Optional[TracebackType], - ) -> Literal[False]: - pass - - -class SqlDB(Component): - """DBAPI 2.0 interface wrapper to ensure consistent behavior between implementations""" - - def __init__(self, system: System): - super().__init__(system) - - @abstractmethod - def tx(self) -> TxWrapper: - """Return a transaction wrapper""" - pass - - @staticmethod - @abstractmethod - def querybuilder() -> Type[pypika.Query]: - """Return a PyPika Query builder of an appropriate subtype for this database - implementation (see - https://pypika.readthedocs.io/en/latest/3_advanced.html#handling-different-database-platforms) - """ - pass - - @staticmethod - @abstractmethod - def parameter_format() -> str: - """Return the appropriate parameter format for this database implementation. - Will be called with str.format(i) where i is the numeric index of the parameter. - """ - pass - - @staticmethod - @abstractmethod - def uuid_to_db(uuid: Optional[UUID]) -> Optional[Any]: - """Convert a UUID to a value that can be passed to the DB driver""" - pass - - @staticmethod - @abstractmethod - def uuid_from_db(value: Optional[Any]) -> Optional[UUID]: - """Convert a value from the DB driver to a UUID""" - pass - - @staticmethod - @abstractmethod - def unique_constraint_error() -> Type[BaseException]: - """Return the exception type that the DB raises when a unique constraint is - violated""" - pass - - def param(self, idx: int) -> pypika.Parameter: - """Return a PyPika Parameter object for the given index""" - return pypika.Parameter(self.parameter_format().format(idx)) - - -_context = local() - - -class ParameterValue(pypika.Parameter): # type: ignore - """ - Wrapper class for PyPika paramters that allows the values for Parameters - to be expressed inline while building a query. See get_sql() for - detailed usage information. - """ - - def __init__(self, value: Any): - self.value = value - - @override - def get_sql(self, **kwargs: Any) -> str: - if isinstance(self.value, (list, tuple)): - _context.values.extend(self.value) - indexes = islice(_context.generator, len(self.value)) - placeholders = ", ".join(_context.formatstr.format(i) for i in indexes) - val = f"({placeholders})" - else: - _context.values.append(self.value) - val = _context.formatstr.format(next(_context.generator)) - - return str(val) - - -def get_sql( - query: pypika.queries.QueryBuilder, formatstr: str = "?" -) -> Tuple[str, Tuple[Any, ...]]: - """ - Wrapper for pypika's get_sql method that allows the values for Parameters - to be expressed inline while building a query, and that returns a tuple of the - SQL string and parameters. This makes it easier to construct complex queries - programmatically and automatically matches up the generated SQL with the required - parameter vector. - - Doing so requires using the ParameterValue class defined in this module instead - of the base pypika.Parameter class. 
- - Usage Example: - - q = ( - pypika.Query().from_("table") - .select("col1") - .where("col2"==ParameterValue("foo")) - .where("col3"==ParameterValue("bar")) - ) - - sql, params = get_sql(q) - - cursor.execute(sql, params) - - Note how it is not necessary to construct the parameter vector manually... it - will always be generated with the parameter values in the same order as emitted - SQL string. - - The format string should match the parameter format for the database being used. - It will be called with str.format(i) where i is the numeric index of the parameter. - For example, Postgres requires parameters like `:1`, `:2`, etc. so the format string - should be `":{}"`. - - See https://pypika.readthedocs.io/en/latest/2_tutorial.html#parametrized-queries for more - information on parameterized queries in PyPika. - """ - - _context.values = [] - _context.generator = count(1) - _context.formatstr = formatstr - sql = query.get_sql() - params = tuple(_context.values) - return sql, params diff --git a/spaces/Superlang/ImageProcessor/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/mobilenetv3.py b/spaces/Superlang/ImageProcessor/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/mobilenetv3.py deleted file mode 100644 index b5966c28f7207e98ee50745b1bc8f3663c650f9d..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/mobilenetv3.py +++ /dev/null @@ -1,364 +0,0 @@ -""" MobileNet-V3 - -A PyTorch impl of MobileNet-V3, compatible with TF weights from official impl. - -Paper: Searching for MobileNetV3 - https://arxiv.org/abs/1905.02244 - -Hacked together by / Copyright 2020 Ross Wightman -""" -import torch.nn as nn -import torch.nn.functional as F - -from .activations import get_act_fn, get_act_layer, HardSwish -from .config import layer_config_kwargs -from .conv2d_layers import select_conv2d -from .helpers import load_pretrained -from .efficientnet_builder import * - -__all__ = ['mobilenetv3_rw', 'mobilenetv3_large_075', 'mobilenetv3_large_100', 'mobilenetv3_large_minimal_100', - 'mobilenetv3_small_075', 'mobilenetv3_small_100', 'mobilenetv3_small_minimal_100', - 'tf_mobilenetv3_large_075', 'tf_mobilenetv3_large_100', 'tf_mobilenetv3_large_minimal_100', - 'tf_mobilenetv3_small_075', 'tf_mobilenetv3_small_100', 'tf_mobilenetv3_small_minimal_100'] - -model_urls = { - 'mobilenetv3_rw': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv3_100-35495452.pth', - 'mobilenetv3_large_075': None, - 'mobilenetv3_large_100': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv3_large_100_ra-f55367f5.pth', - 'mobilenetv3_large_minimal_100': None, - 'mobilenetv3_small_075': None, - 'mobilenetv3_small_100': None, - 'mobilenetv3_small_minimal_100': None, - 'tf_mobilenetv3_large_075': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_mobilenetv3_large_075-150ee8b0.pth', - 'tf_mobilenetv3_large_100': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_mobilenetv3_large_100-427764d5.pth', - 'tf_mobilenetv3_large_minimal_100': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_mobilenetv3_large_minimal_100-8596ae28.pth', - 'tf_mobilenetv3_small_075': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_mobilenetv3_small_075-da427f52.pth', - 'tf_mobilenetv3_small_100': - 
'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_mobilenetv3_small_100-37f49e2b.pth', - 'tf_mobilenetv3_small_minimal_100': - 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/tf_mobilenetv3_small_minimal_100-922a7843.pth', -} - - -class MobileNetV3(nn.Module): - """ MobileNet-V3 - - A this model utilizes the MobileNet-v3 specific 'efficient head', where global pooling is done before the - head convolution without a final batch-norm layer before the classifier. - - Paper: https://arxiv.org/abs/1905.02244 - """ - - def __init__(self, block_args, num_classes=1000, in_chans=3, stem_size=16, num_features=1280, head_bias=True, - channel_multiplier=1.0, pad_type='', act_layer=HardSwish, drop_rate=0., drop_connect_rate=0., - se_kwargs=None, norm_layer=nn.BatchNorm2d, norm_kwargs=None, weight_init='goog'): - super(MobileNetV3, self).__init__() - self.drop_rate = drop_rate - - stem_size = round_channels(stem_size, channel_multiplier) - self.conv_stem = select_conv2d(in_chans, stem_size, 3, stride=2, padding=pad_type) - self.bn1 = nn.BatchNorm2d(stem_size, **norm_kwargs) - self.act1 = act_layer(inplace=True) - in_chs = stem_size - - builder = EfficientNetBuilder( - channel_multiplier, pad_type=pad_type, act_layer=act_layer, se_kwargs=se_kwargs, - norm_layer=norm_layer, norm_kwargs=norm_kwargs, drop_connect_rate=drop_connect_rate) - self.blocks = nn.Sequential(*builder(in_chs, block_args)) - in_chs = builder.in_chs - - self.global_pool = nn.AdaptiveAvgPool2d(1) - self.conv_head = select_conv2d(in_chs, num_features, 1, padding=pad_type, bias=head_bias) - self.act2 = act_layer(inplace=True) - self.classifier = nn.Linear(num_features, num_classes) - - for m in self.modules(): - if weight_init == 'goog': - initialize_weight_goog(m) - else: - initialize_weight_default(m) - - def as_sequential(self): - layers = [self.conv_stem, self.bn1, self.act1] - layers.extend(self.blocks) - layers.extend([ - self.global_pool, self.conv_head, self.act2, - nn.Flatten(), nn.Dropout(self.drop_rate), self.classifier]) - return nn.Sequential(*layers) - - def features(self, x): - x = self.conv_stem(x) - x = self.bn1(x) - x = self.act1(x) - x = self.blocks(x) - x = self.global_pool(x) - x = self.conv_head(x) - x = self.act2(x) - return x - - def forward(self, x): - x = self.features(x) - x = x.flatten(1) - if self.drop_rate > 0.: - x = F.dropout(x, p=self.drop_rate, training=self.training) - return self.classifier(x) - - -def _create_model(model_kwargs, variant, pretrained=False): - as_sequential = model_kwargs.pop('as_sequential', False) - model = MobileNetV3(**model_kwargs) - if pretrained and model_urls[variant]: - load_pretrained(model, model_urls[variant]) - if as_sequential: - model = model.as_sequential() - return model - - -def _gen_mobilenet_v3_rw(variant, channel_multiplier=1.0, pretrained=False, **kwargs): - """Creates a MobileNet-V3 model (RW variant). - - Paper: https://arxiv.org/abs/1905.02244 - - This was my first attempt at reproducing the MobileNet-V3 from paper alone. It came close to the - eventual Tensorflow reference impl but has a few differences: - 1. This model has no bias on the head convolution - 2. This model forces no residual (noskip) on the first DWS block, this is different than MnasNet - 3. This model always uses ReLU for the SE activation layer, other models in the family inherit their act layer - from their parent block - 4. 
This model does not enforce divisible by 8 limitation on the SE reduction channel count - - Overall the changes are fairly minor and result in a very small parameter count difference and no - top-1/5 - - Args: - channel_multiplier: multiplier to number of channels per layer. - """ - arch_def = [ - # stage 0, 112x112 in - ['ds_r1_k3_s1_e1_c16_nre_noskip'], # relu - # stage 1, 112x112 in - ['ir_r1_k3_s2_e4_c24_nre', 'ir_r1_k3_s1_e3_c24_nre'], # relu - # stage 2, 56x56 in - ['ir_r3_k5_s2_e3_c40_se0.25_nre'], # relu - # stage 3, 28x28 in - ['ir_r1_k3_s2_e6_c80', 'ir_r1_k3_s1_e2.5_c80', 'ir_r2_k3_s1_e2.3_c80'], # hard-swish - # stage 4, 14x14in - ['ir_r2_k3_s1_e6_c112_se0.25'], # hard-swish - # stage 5, 14x14in - ['ir_r3_k5_s2_e6_c160_se0.25'], # hard-swish - # stage 6, 7x7 in - ['cn_r1_k1_s1_c960'], # hard-swish - ] - with layer_config_kwargs(kwargs): - model_kwargs = dict( - block_args=decode_arch_def(arch_def), - head_bias=False, # one of my mistakes - channel_multiplier=channel_multiplier, - act_layer=resolve_act_layer(kwargs, 'hard_swish'), - se_kwargs=dict(gate_fn=get_act_fn('hard_sigmoid'), reduce_mid=True), - norm_kwargs=resolve_bn_args(kwargs), - **kwargs, - ) - model = _create_model(model_kwargs, variant, pretrained) - return model - - -def _gen_mobilenet_v3(variant, channel_multiplier=1.0, pretrained=False, **kwargs): - """Creates a MobileNet-V3 large/small/minimal models. - - Ref impl: https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet_v3.py - Paper: https://arxiv.org/abs/1905.02244 - - Args: - channel_multiplier: multiplier to number of channels per layer. - """ - if 'small' in variant: - num_features = 1024 - if 'minimal' in variant: - act_layer = 'relu' - arch_def = [ - # stage 0, 112x112 in - ['ds_r1_k3_s2_e1_c16'], - # stage 1, 56x56 in - ['ir_r1_k3_s2_e4.5_c24', 'ir_r1_k3_s1_e3.67_c24'], - # stage 2, 28x28 in - ['ir_r1_k3_s2_e4_c40', 'ir_r2_k3_s1_e6_c40'], - # stage 3, 14x14 in - ['ir_r2_k3_s1_e3_c48'], - # stage 4, 14x14in - ['ir_r3_k3_s2_e6_c96'], - # stage 6, 7x7 in - ['cn_r1_k1_s1_c576'], - ] - else: - act_layer = 'hard_swish' - arch_def = [ - # stage 0, 112x112 in - ['ds_r1_k3_s2_e1_c16_se0.25_nre'], # relu - # stage 1, 56x56 in - ['ir_r1_k3_s2_e4.5_c24_nre', 'ir_r1_k3_s1_e3.67_c24_nre'], # relu - # stage 2, 28x28 in - ['ir_r1_k5_s2_e4_c40_se0.25', 'ir_r2_k5_s1_e6_c40_se0.25'], # hard-swish - # stage 3, 14x14 in - ['ir_r2_k5_s1_e3_c48_se0.25'], # hard-swish - # stage 4, 14x14in - ['ir_r3_k5_s2_e6_c96_se0.25'], # hard-swish - # stage 6, 7x7 in - ['cn_r1_k1_s1_c576'], # hard-swish - ] - else: - num_features = 1280 - if 'minimal' in variant: - act_layer = 'relu' - arch_def = [ - # stage 0, 112x112 in - ['ds_r1_k3_s1_e1_c16'], - # stage 1, 112x112 in - ['ir_r1_k3_s2_e4_c24', 'ir_r1_k3_s1_e3_c24'], - # stage 2, 56x56 in - ['ir_r3_k3_s2_e3_c40'], - # stage 3, 28x28 in - ['ir_r1_k3_s2_e6_c80', 'ir_r1_k3_s1_e2.5_c80', 'ir_r2_k3_s1_e2.3_c80'], - # stage 4, 14x14in - ['ir_r2_k3_s1_e6_c112'], - # stage 5, 14x14in - ['ir_r3_k3_s2_e6_c160'], - # stage 6, 7x7 in - ['cn_r1_k1_s1_c960'], - ] - else: - act_layer = 'hard_swish' - arch_def = [ - # stage 0, 112x112 in - ['ds_r1_k3_s1_e1_c16_nre'], # relu - # stage 1, 112x112 in - ['ir_r1_k3_s2_e4_c24_nre', 'ir_r1_k3_s1_e3_c24_nre'], # relu - # stage 2, 56x56 in - ['ir_r3_k5_s2_e3_c40_se0.25_nre'], # relu - # stage 3, 28x28 in - ['ir_r1_k3_s2_e6_c80', 'ir_r1_k3_s1_e2.5_c80', 'ir_r2_k3_s1_e2.3_c80'], # hard-swish - # stage 4, 14x14in - ['ir_r2_k3_s1_e6_c112_se0.25'], # hard-swish - # stage 5, 14x14in - 
['ir_r3_k5_s2_e6_c160_se0.25'], # hard-swish - # stage 6, 7x7 in - ['cn_r1_k1_s1_c960'], # hard-swish - ] - with layer_config_kwargs(kwargs): - model_kwargs = dict( - block_args=decode_arch_def(arch_def), - num_features=num_features, - stem_size=16, - channel_multiplier=channel_multiplier, - act_layer=resolve_act_layer(kwargs, act_layer), - se_kwargs=dict( - act_layer=get_act_layer('relu'), gate_fn=get_act_fn('hard_sigmoid'), reduce_mid=True, divisor=8), - norm_kwargs=resolve_bn_args(kwargs), - **kwargs, - ) - model = _create_model(model_kwargs, variant, pretrained) - return model - - -def mobilenetv3_rw(pretrained=False, **kwargs): - """ MobileNet-V3 RW - Attn: See note in gen function for this variant. - """ - # NOTE for train set drop_rate=0.2 - if pretrained: - # pretrained model trained with non-default BN epsilon - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - model = _gen_mobilenet_v3_rw('mobilenetv3_rw', 1.0, pretrained=pretrained, **kwargs) - return model - - -def mobilenetv3_large_075(pretrained=False, **kwargs): - """ MobileNet V3 Large 0.75""" - # NOTE for train set drop_rate=0.2 - model = _gen_mobilenet_v3('mobilenetv3_large_075', 0.75, pretrained=pretrained, **kwargs) - return model - - -def mobilenetv3_large_100(pretrained=False, **kwargs): - """ MobileNet V3 Large 1.0 """ - # NOTE for train set drop_rate=0.2 - model = _gen_mobilenet_v3('mobilenetv3_large_100', 1.0, pretrained=pretrained, **kwargs) - return model - - -def mobilenetv3_large_minimal_100(pretrained=False, **kwargs): - """ MobileNet V3 Large (Minimalistic) 1.0 """ - # NOTE for train set drop_rate=0.2 - model = _gen_mobilenet_v3('mobilenetv3_large_minimal_100', 1.0, pretrained=pretrained, **kwargs) - return model - - -def mobilenetv3_small_075(pretrained=False, **kwargs): - """ MobileNet V3 Small 0.75 """ - model = _gen_mobilenet_v3('mobilenetv3_small_075', 0.75, pretrained=pretrained, **kwargs) - return model - - -def mobilenetv3_small_100(pretrained=False, **kwargs): - """ MobileNet V3 Small 1.0 """ - model = _gen_mobilenet_v3('mobilenetv3_small_100', 1.0, pretrained=pretrained, **kwargs) - return model - - -def mobilenetv3_small_minimal_100(pretrained=False, **kwargs): - """ MobileNet V3 Small (Minimalistic) 1.0 """ - model = _gen_mobilenet_v3('mobilenetv3_small_minimal_100', 1.0, pretrained=pretrained, **kwargs) - return model - - -def tf_mobilenetv3_large_075(pretrained=False, **kwargs): - """ MobileNet V3 Large 0.75. Tensorflow compat variant. """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_mobilenet_v3('tf_mobilenetv3_large_075', 0.75, pretrained=pretrained, **kwargs) - return model - - -def tf_mobilenetv3_large_100(pretrained=False, **kwargs): - """ MobileNet V3 Large 1.0. Tensorflow compat variant. """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_mobilenet_v3('tf_mobilenetv3_large_100', 1.0, pretrained=pretrained, **kwargs) - return model - - -def tf_mobilenetv3_large_minimal_100(pretrained=False, **kwargs): - """ MobileNet V3 Large Minimalistic 1.0. Tensorflow compat variant. """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_mobilenet_v3('tf_mobilenetv3_large_minimal_100', 1.0, pretrained=pretrained, **kwargs) - return model - - -def tf_mobilenetv3_small_075(pretrained=False, **kwargs): - """ MobileNet V3 Small 0.75. Tensorflow compat variant. 
""" - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_mobilenet_v3('tf_mobilenetv3_small_075', 0.75, pretrained=pretrained, **kwargs) - return model - - -def tf_mobilenetv3_small_100(pretrained=False, **kwargs): - """ MobileNet V3 Small 1.0. Tensorflow compat variant.""" - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_mobilenet_v3('tf_mobilenetv3_small_100', 1.0, pretrained=pretrained, **kwargs) - return model - - -def tf_mobilenetv3_small_minimal_100(pretrained=False, **kwargs): - """ MobileNet V3 Small Minimalistic 1.0. Tensorflow compat variant. """ - kwargs['bn_eps'] = BN_EPS_TF_DEFAULT - kwargs['pad_type'] = 'same' - model = _gen_mobilenet_v3('tf_mobilenetv3_small_minimal_100', 1.0, pretrained=pretrained, **kwargs) - return model diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.h b/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.h deleted file mode 100644 index 51bb27e9ee828f967e8aa854c2d55574040c6d7e..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/src/cpu/ms_deform_attn_cpu.h +++ /dev/null @@ -1,38 +0,0 @@ -/*! -************************************************************************************************** -* Deformable DETR -* Copyright (c) 2020 SenseTime. All Rights Reserved. -* Licensed under the Apache License, Version 2.0 [see LICENSE for details] -************************************************************************************************** -* Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -************************************************************************************************** -*/ - -/*! -* Copyright (c) Facebook, Inc. and its affiliates. 
-* Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR -*/ - -#pragma once -#include - -at::Tensor -ms_deform_attn_cpu_forward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const int im2col_step); - -std::vector -ms_deform_attn_cpu_backward( - const at::Tensor &value, - const at::Tensor &spatial_shapes, - const at::Tensor &level_start_index, - const at::Tensor &sampling_loc, - const at::Tensor &attn_weight, - const at::Tensor &grad_output, - const int im2col_step); - - diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/datasets/pascal_context_59.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/datasets/pascal_context_59.py deleted file mode 100644 index 37585abab89834b95cd5bdd993b994fca1db65f6..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/configs/_base_/datasets/pascal_context_59.py +++ /dev/null @@ -1,60 +0,0 @@ -# dataset settings -dataset_type = 'PascalContextDataset59' -data_root = 'data/VOCdevkit/VOC2010/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - -img_scale = (520, 520) -crop_size = (480, 480) - -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', reduce_zero_label=True), - dict(type='Resize', img_scale=img_scale, ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=img_scale, - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClassContext', - split='ImageSets/SegmentationContext/train.txt', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClassContext', - split='ImageSets/SegmentationContext/val.txt', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClassContext', - split='ImageSets/SegmentationContext/val.txt', - pipeline=test_pipeline)) diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/cnn/bricks/context_block.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/cnn/bricks/context_block.py deleted file mode 100644 index d60fdb904c749ce3b251510dff3cc63cea70d42e..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/cnn/bricks/context_block.py +++ /dev/null @@ -1,125 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import torch -from torch import nn - -from ..utils import constant_init, kaiming_init -from .registry import PLUGIN_LAYERS - - -def last_zero_init(m): - if isinstance(m, nn.Sequential): - constant_init(m[-1], val=0) - else: - constant_init(m, val=0) - - -@PLUGIN_LAYERS.register_module() -class ContextBlock(nn.Module): - """ContextBlock module in GCNet. - - See 'GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond' - (https://arxiv.org/abs/1904.11492) for details. - - Args: - in_channels (int): Channels of the input feature map. - ratio (float): Ratio of channels of transform bottleneck - pooling_type (str): Pooling method for context modeling. - Options are 'att' and 'avg', stand for attention pooling and - average pooling respectively. Default: 'att'. - fusion_types (Sequence[str]): Fusion method for feature fusion, - Options are 'channels_add', 'channel_mul', stand for channelwise - addition and multiplication respectively. Default: ('channel_add',) - """ - - _abbr_ = 'context_block' - - def __init__(self, - in_channels, - ratio, - pooling_type='att', - fusion_types=('channel_add', )): - super(ContextBlock, self).__init__() - assert pooling_type in ['avg', 'att'] - assert isinstance(fusion_types, (list, tuple)) - valid_fusion_types = ['channel_add', 'channel_mul'] - assert all([f in valid_fusion_types for f in fusion_types]) - assert len(fusion_types) > 0, 'at least one fusion should be used' - self.in_channels = in_channels - self.ratio = ratio - self.planes = int(in_channels * ratio) - self.pooling_type = pooling_type - self.fusion_types = fusion_types - if pooling_type == 'att': - self.conv_mask = nn.Conv2d(in_channels, 1, kernel_size=1) - self.softmax = nn.Softmax(dim=2) - else: - self.avg_pool = nn.AdaptiveAvgPool2d(1) - if 'channel_add' in fusion_types: - self.channel_add_conv = nn.Sequential( - nn.Conv2d(self.in_channels, self.planes, kernel_size=1), - nn.LayerNorm([self.planes, 1, 1]), - nn.ReLU(inplace=True), # yapf: disable - nn.Conv2d(self.planes, self.in_channels, kernel_size=1)) - else: - self.channel_add_conv = None - if 'channel_mul' in fusion_types: - self.channel_mul_conv = nn.Sequential( - nn.Conv2d(self.in_channels, self.planes, kernel_size=1), - nn.LayerNorm([self.planes, 1, 1]), - nn.ReLU(inplace=True), # yapf: disable - nn.Conv2d(self.planes, self.in_channels, kernel_size=1)) - else: - self.channel_mul_conv = None - self.reset_parameters() - - def reset_parameters(self): - if self.pooling_type == 'att': - kaiming_init(self.conv_mask, mode='fan_in') - self.conv_mask.inited = True - - if self.channel_add_conv is not None: - last_zero_init(self.channel_add_conv) - if self.channel_mul_conv is not None: - last_zero_init(self.channel_mul_conv) - - def spatial_pool(self, x): - batch, channel, height, width = x.size() - if self.pooling_type == 'att': - input_x = x - # [N, C, H * W] - input_x = input_x.view(batch, channel, height * width) - # [N, 1, C, H * W] - input_x = input_x.unsqueeze(1) - # [N, 1, H, W] - context_mask = self.conv_mask(x) - # [N, 1, H * W] - context_mask = context_mask.view(batch, 1, height * width) - # [N, 1, H * W] - context_mask = self.softmax(context_mask) - # [N, 1, H * W, 1] - context_mask = context_mask.unsqueeze(-1) - # [N, 1, C, 1] - context = torch.matmul(input_x, context_mask) - # [N, C, 1, 1] - context = context.view(batch, channel, 1, 1) - else: - # [N, C, 1, 1] - context = self.avg_pool(x) - - return context - - def forward(self, x): - # [N, C, 1, 1] - context = self.spatial_pool(x) - - out = x - if self.channel_mul_conv 
is not None: - # [N, C, 1, 1] - channel_mul_term = torch.sigmoid(self.channel_mul_conv(context)) - out = out * channel_mul_term - if self.channel_add_conv is not None: - # [N, C, 1, 1] - channel_add_term = self.channel_add_conv(context) - out = out + channel_add_term - - return out diff --git a/spaces/Surn/UnlimitedMusicGen/audiocraft/modules/streaming.py b/spaces/Surn/UnlimitedMusicGen/audiocraft/modules/streaming.py deleted file mode 100644 index fdbdf5e90fc0c6560873d66bf273460b38e5ed7e..0000000000000000000000000000000000000000 --- a/spaces/Surn/UnlimitedMusicGen/audiocraft/modules/streaming.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Streaming module API that should be implemented by all Streaming components, -""" - -from contextlib import contextmanager -import typing as tp -from torch import nn -import torch - - -State = tp.Dict[str, torch.Tensor] - - -class StreamingModule(nn.Module): - """Common API for streaming components. - - Each streaming component has a streaming state, which is just a dict[str, Tensor]. - By convention, the first dim of each tensor must be the batch size. - Don't use dots in the key names, as this would clash with submodules - (like in state_dict). - - If `self._is_streaming` is True, the component should use and remember - the proper state inside `self._streaming_state`. - - To set a streaming component in streaming state, use - - with module.streaming(): - ... - - This will automatically reset the streaming state when exiting the context manager. - This also automatically propagates to all streaming children module. - - Some module might also implement the `StreamingModule.flush` method, although - this one is trickier, as all parents module must be StreamingModule and implement - it as well for it to work properly. See `StreamingSequential` after. - """ - def __init__(self) -> None: - super().__init__() - self._streaming_state: State = {} - self._is_streaming = False - - def _apply_named_streaming(self, fn: tp.Any): - for name, module in self.named_modules(): - if isinstance(module, StreamingModule): - fn(name, module) - - def _set_streaming(self, streaming: bool): - def _set_streaming(name, module): - module._is_streaming = streaming - self._apply_named_streaming(_set_streaming) - - @contextmanager - def streaming(self): - """Context manager to enter streaming mode. Reset streaming state on exit. - """ - self._set_streaming(True) - try: - yield - finally: - self._set_streaming(False) - self.reset_streaming() - - def reset_streaming(self): - """Reset the streaming state. - """ - def _reset(name: str, module: StreamingModule): - module._streaming_state.clear() - - self._apply_named_streaming(_reset) - - def get_streaming_state(self) -> State: - """Return the streaming state, including that of sub-modules. - """ - state: State = {} - - def _add(name: str, module: StreamingModule): - if name: - name += "." - for key, value in module._streaming_state.items(): - state[name + key] = value - - self._apply_named_streaming(_add) - return state - - def set_streaming_state(self, state: State): - """Set the streaming state, including that of sub-modules. - """ - state = dict(state) - - def _set(name: str, module: StreamingModule): - if name: - name += "." 
- module._streaming_state.clear() - for key, value in list(state.items()): - # complexity is not ideal here, but probably fine. - if key.startswith(name): - local_key = key[len(name):] - if '.' not in local_key: - module._streaming_state[local_key] = value - del state[key] - - self._apply_named_streaming(_set) - assert len(state) == 0, list(state.keys()) - - def flush(self, x: tp.Optional[torch.Tensor] = None): - """Flush any remaining outputs that were waiting for completion. - Typically, for convolutions, this will add the final padding - and process the last buffer. - - This should take an optional argument `x`, which will be provided - if a module before this one in the streaming pipeline has already - spitted out a flushed out buffer. - """ - if x is None: - return None - else: - return self(x) - - -class StreamingSequential(StreamingModule, nn.Sequential): - """A streaming compatible alternative of `nn.Sequential`. - """ - def flush(self, x: tp.Optional[torch.Tensor] = None): - for module in self: - if isinstance(module, StreamingModule): - x = module.flush(x) - elif x is not None: - x = module(x) - return x diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/models.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/models.py deleted file mode 100644 index b6bb21a8b26680b38c3af8278ed139b6628356c5..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/models.py +++ /dev/null @@ -1,39 +0,0 @@ -"""Utilities for defining models -""" - -import operator -from typing import Any, Callable, Type - - -class KeyBasedCompareMixin: - """Provides comparison capabilities that is based on a key""" - - __slots__ = ["_compare_key", "_defining_class"] - - def __init__(self, key: Any, defining_class: Type["KeyBasedCompareMixin"]) -> None: - self._compare_key = key - self._defining_class = defining_class - - def __hash__(self) -> int: - return hash(self._compare_key) - - def __lt__(self, other: Any) -> bool: - return self._compare(other, operator.__lt__) - - def __le__(self, other: Any) -> bool: - return self._compare(other, operator.__le__) - - def __gt__(self, other: Any) -> bool: - return self._compare(other, operator.__gt__) - - def __ge__(self, other: Any) -> bool: - return self._compare(other, operator.__ge__) - - def __eq__(self, other: Any) -> bool: - return self._compare(other, operator.__eq__) - - def _compare(self, other: Any, method: Callable[[Any, Any], bool]) -> bool: - if not isinstance(other, self._defining_class): - return NotImplemented - - return method(self._compare_key, other._compare_key) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/version.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/version.py deleted file mode 100644 index c5e9d85cd75884b129d4ab8d0453c0e50d0c1f68..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/chardet/version.py +++ /dev/null @@ -1,9 +0,0 @@ -""" -This module exists only to simplify retrieving the version number of chardet -from within setuptools and from chardet subpackages. 
- -:author: Dan Blanchard (dan.blanchard@gmail.com) -""" - -__version__ = "5.1.0" -VERSION = __version__.split(".") diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/lexers/__init__.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/lexers/__init__.py deleted file mode 100644 index d97c3e395ed89825b2d6ec29abcbf82292bbebab..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/pygments/lexers/__init__.py +++ /dev/null @@ -1,362 +0,0 @@ -""" - pygments.lexers - ~~~~~~~~~~~~~~~ - - Pygments lexers. - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -import re -import sys -import types -import fnmatch -from os.path import basename - -from pip._vendor.pygments.lexers._mapping import LEXERS -from pip._vendor.pygments.modeline import get_filetype_from_buffer -from pip._vendor.pygments.plugin import find_plugin_lexers -from pip._vendor.pygments.util import ClassNotFound, guess_decode - -COMPAT = { - 'Python3Lexer': 'PythonLexer', - 'Python3TracebackLexer': 'PythonTracebackLexer', -} - -__all__ = ['get_lexer_by_name', 'get_lexer_for_filename', 'find_lexer_class', - 'guess_lexer', 'load_lexer_from_file'] + list(LEXERS) + list(COMPAT) - -_lexer_cache = {} -_pattern_cache = {} - - -def _fn_matches(fn, glob): - """Return whether the supplied file name fn matches pattern filename.""" - if glob not in _pattern_cache: - pattern = _pattern_cache[glob] = re.compile(fnmatch.translate(glob)) - return pattern.match(fn) - return _pattern_cache[glob].match(fn) - - -def _load_lexers(module_name): - """Load a lexer (and all others in the module too).""" - mod = __import__(module_name, None, None, ['__all__']) - for lexer_name in mod.__all__: - cls = getattr(mod, lexer_name) - _lexer_cache[cls.name] = cls - - -def get_all_lexers(plugins=True): - """Return a generator of tuples in the form ``(name, aliases, - filenames, mimetypes)`` of all know lexers. - - If *plugins* is true (the default), plugin lexers supplied by entrypoints - are also returned. Otherwise, only builtin ones are considered. - """ - for item in LEXERS.values(): - yield item[1:] - if plugins: - for lexer in find_plugin_lexers(): - yield lexer.name, lexer.aliases, lexer.filenames, lexer.mimetypes - - -def find_lexer_class(name): - """ - Return the `Lexer` subclass that with the *name* attribute as given by - the *name* argument. - """ - if name in _lexer_cache: - return _lexer_cache[name] - # lookup builtin lexers - for module_name, lname, aliases, _, _ in LEXERS.values(): - if name == lname: - _load_lexers(module_name) - return _lexer_cache[name] - # continue with lexers from setuptools entrypoints - for cls in find_plugin_lexers(): - if cls.name == name: - return cls - - -def find_lexer_class_by_name(_alias): - """ - Return the `Lexer` subclass that has `alias` in its aliases list, without - instantiating it. - - Like `get_lexer_by_name`, but does not instantiate the class. - - Will raise :exc:`pygments.util.ClassNotFound` if no lexer with that alias is - found. - - .. 
versionadded:: 2.2 - """ - if not _alias: - raise ClassNotFound('no lexer for alias %r found' % _alias) - # lookup builtin lexers - for module_name, name, aliases, _, _ in LEXERS.values(): - if _alias.lower() in aliases: - if name not in _lexer_cache: - _load_lexers(module_name) - return _lexer_cache[name] - # continue with lexers from setuptools entrypoints - for cls in find_plugin_lexers(): - if _alias.lower() in cls.aliases: - return cls - raise ClassNotFound('no lexer for alias %r found' % _alias) - - -def get_lexer_by_name(_alias, **options): - """ - Return an instance of a `Lexer` subclass that has `alias` in its - aliases list. The lexer is given the `options` at its - instantiation. - - Will raise :exc:`pygments.util.ClassNotFound` if no lexer with that alias is - found. - """ - if not _alias: - raise ClassNotFound('no lexer for alias %r found' % _alias) - - # lookup builtin lexers - for module_name, name, aliases, _, _ in LEXERS.values(): - if _alias.lower() in aliases: - if name not in _lexer_cache: - _load_lexers(module_name) - return _lexer_cache[name](**options) - # continue with lexers from setuptools entrypoints - for cls in find_plugin_lexers(): - if _alias.lower() in cls.aliases: - return cls(**options) - raise ClassNotFound('no lexer for alias %r found' % _alias) - - -def load_lexer_from_file(filename, lexername="CustomLexer", **options): - """Load a lexer from a file. - - This method expects a file located relative to the current working - directory, which contains a Lexer class. By default, it expects the - Lexer to be name CustomLexer; you can specify your own class name - as the second argument to this function. - - Users should be very careful with the input, because this method - is equivalent to running eval on the input file. - - Raises ClassNotFound if there are any problems importing the Lexer. - - .. versionadded:: 2.2 - """ - try: - # This empty dict will contain the namespace for the exec'd file - custom_namespace = {} - with open(filename, 'rb') as f: - exec(f.read(), custom_namespace) - # Retrieve the class `lexername` from that namespace - if lexername not in custom_namespace: - raise ClassNotFound('no valid %s class found in %s' % - (lexername, filename)) - lexer_class = custom_namespace[lexername] - # And finally instantiate it with the options - return lexer_class(**options) - except OSError as err: - raise ClassNotFound('cannot read %s: %s' % (filename, err)) - except ClassNotFound: - raise - except Exception as err: - raise ClassNotFound('error when loading custom lexer: %s' % err) - - -def find_lexer_class_for_filename(_fn, code=None): - """Get a lexer for a filename. - - If multiple lexers match the filename pattern, use ``analyse_text()`` to - figure out which one is more appropriate. - - Returns None if not found. - """ - matches = [] - fn = basename(_fn) - for modname, name, _, filenames, _ in LEXERS.values(): - for filename in filenames: - if _fn_matches(fn, filename): - if name not in _lexer_cache: - _load_lexers(modname) - matches.append((_lexer_cache[name], filename)) - for cls in find_plugin_lexers(): - for filename in cls.filenames: - if _fn_matches(fn, filename): - matches.append((cls, filename)) - - if isinstance(code, bytes): - # decode it, since all analyse_text functions expect unicode - code = guess_decode(code) - - def get_rating(info): - cls, filename = info - # explicit patterns get a bonus - bonus = '*' not in filename and 0.5 or 0 - # The class _always_ defines analyse_text because it's included in - # the Lexer class. 
The default implementation returns None which - # gets turned into 0.0. Run scripts/detect_missing_analyse_text.py - # to find lexers which need it overridden. - if code: - return cls.analyse_text(code) + bonus, cls.__name__ - return cls.priority + bonus, cls.__name__ - - if matches: - matches.sort(key=get_rating) - # print "Possible lexers, after sort:", matches - return matches[-1][0] - - -def get_lexer_for_filename(_fn, code=None, **options): - """Get a lexer for a filename. - - Return a `Lexer` subclass instance that has a filename pattern - matching `fn`. The lexer is given the `options` at its - instantiation. - - Raise :exc:`pygments.util.ClassNotFound` if no lexer for that filename - is found. - - If multiple lexers match the filename pattern, use their ``analyse_text()`` - methods to figure out which one is more appropriate. - """ - res = find_lexer_class_for_filename(_fn, code) - if not res: - raise ClassNotFound('no lexer for filename %r found' % _fn) - return res(**options) - - -def get_lexer_for_mimetype(_mime, **options): - """ - Return a `Lexer` subclass instance that has `mime` in its mimetype - list. The lexer is given the `options` at its instantiation. - - Will raise :exc:`pygments.util.ClassNotFound` if not lexer for that mimetype - is found. - """ - for modname, name, _, _, mimetypes in LEXERS.values(): - if _mime in mimetypes: - if name not in _lexer_cache: - _load_lexers(modname) - return _lexer_cache[name](**options) - for cls in find_plugin_lexers(): - if _mime in cls.mimetypes: - return cls(**options) - raise ClassNotFound('no lexer for mimetype %r found' % _mime) - - -def _iter_lexerclasses(plugins=True): - """Return an iterator over all lexer classes.""" - for key in sorted(LEXERS): - module_name, name = LEXERS[key][:2] - if name not in _lexer_cache: - _load_lexers(module_name) - yield _lexer_cache[name] - if plugins: - yield from find_plugin_lexers() - - -def guess_lexer_for_filename(_fn, _text, **options): - """ - As :func:`guess_lexer()`, but only lexers which have a pattern in `filenames` - or `alias_filenames` that matches `filename` are taken into consideration. - - :exc:`pygments.util.ClassNotFound` is raised if no lexer thinks it can - handle the content. - """ - fn = basename(_fn) - primary = {} - matching_lexers = set() - for lexer in _iter_lexerclasses(): - for filename in lexer.filenames: - if _fn_matches(fn, filename): - matching_lexers.add(lexer) - primary[lexer] = True - for filename in lexer.alias_filenames: - if _fn_matches(fn, filename): - matching_lexers.add(lexer) - primary[lexer] = False - if not matching_lexers: - raise ClassNotFound('no lexer for filename %r found' % fn) - if len(matching_lexers) == 1: - return matching_lexers.pop()(**options) - result = [] - for lexer in matching_lexers: - rv = lexer.analyse_text(_text) - if rv == 1.0: - return lexer(**options) - result.append((rv, lexer)) - - def type_sort(t): - # sort by: - # - analyse score - # - is primary filename pattern? - # - priority - # - last resort: class name - return (t[0], primary[t[1]], t[1].priority, t[1].__name__) - result.sort(key=type_sort) - - return result[-1][1](**options) - - -def guess_lexer(_text, **options): - """ - Return a `Lexer` subclass instance that's guessed from the text in - `text`. For that, the :meth:`.analyse_text()` method of every known lexer - class is called with the text as argument, and the lexer which returned the - highest value will be instantiated and returned. 
- - :exc:`pygments.util.ClassNotFound` is raised if no lexer thinks it can - handle the content. - """ - - if not isinstance(_text, str): - inencoding = options.get('inencoding', options.get('encoding')) - if inencoding: - _text = _text.decode(inencoding or 'utf8') - else: - _text, _ = guess_decode(_text) - - # try to get a vim modeline first - ft = get_filetype_from_buffer(_text) - - if ft is not None: - try: - return get_lexer_by_name(ft, **options) - except ClassNotFound: - pass - - best_lexer = [0.0, None] - for lexer in _iter_lexerclasses(): - rv = lexer.analyse_text(_text) - if rv == 1.0: - return lexer(**options) - if rv > best_lexer[0]: - best_lexer[:] = (rv, lexer) - if not best_lexer[0] or best_lexer[1] is None: - raise ClassNotFound('no lexer matching the text found') - return best_lexer[1](**options) - - -class _automodule(types.ModuleType): - """Automatically import lexers.""" - - def __getattr__(self, name): - info = LEXERS.get(name) - if info: - _load_lexers(info[0]) - cls = _lexer_cache[info[1]] - setattr(self, name, cls) - return cls - if name in COMPAT: - return getattr(self, COMPAT[name]) - raise AttributeError(name) - - -oldmod = sys.modules[__name__] -newmod = _automodule(__name__) -newmod.__dict__.update(oldmod.__dict__) -sys.modules[__name__] = newmod -del newmod.newmod, newmod.oldmod, newmod.sys, newmod.types diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/contrib/__init__.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/contrib/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/packaging/_structures.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/packaging/_structures.py deleted file mode 100644 index 90a6465f9682c886363eea5327dac64bf623a6ff..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_vendor/packaging/_structures.py +++ /dev/null @@ -1,61 +0,0 @@ -# This file is dual licensed under the terms of the Apache License, Version -# 2.0, and the BSD License. See the LICENSE file in the root of this repository -# for complete details. 
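A short usage sketch for the Pygments lexer lookup helpers shown above (`get_lexer_by_name`, `guess_lexer`); it assumes the stock `pygments` package rather than the vendored `pip._vendor.pygments` copy.

from pygments.lexers import get_lexer_by_name, guess_lexer

lexer = get_lexer_by_name("python")   # resolved through the alias tables above
print(lexer.name)                     # "Python"
guessed = guess_lexer("import os\nprint(os.getcwd())\n")
print(guessed.name)                   # whichever lexer scored highest in analyse_text()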
- - -class InfinityType: - def __repr__(self) -> str: - return "Infinity" - - def __hash__(self) -> int: - return hash(repr(self)) - - def __lt__(self, other: object) -> bool: - return False - - def __le__(self, other: object) -> bool: - return False - - def __eq__(self, other: object) -> bool: - return isinstance(other, self.__class__) - - def __gt__(self, other: object) -> bool: - return True - - def __ge__(self, other: object) -> bool: - return True - - def __neg__(self: object) -> "NegativeInfinityType": - return NegativeInfinity - - -Infinity = InfinityType() - - -class NegativeInfinityType: - def __repr__(self) -> str: - return "-Infinity" - - def __hash__(self) -> int: - return hash(repr(self)) - - def __lt__(self, other: object) -> bool: - return True - - def __le__(self, other: object) -> bool: - return True - - def __eq__(self, other: object) -> bool: - return isinstance(other, self.__class__) - - def __gt__(self, other: object) -> bool: - return False - - def __ge__(self, other: object) -> bool: - return False - - def __neg__(self: object) -> InfinityType: - return Infinity - - -NegativeInfinity = NegativeInfinityType() diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/build.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/build.py deleted file mode 100644 index 0f1d688e1797bb506139def5c6833afae8a62bf3..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/command/build.py +++ /dev/null @@ -1,149 +0,0 @@ -import sys -from typing import TYPE_CHECKING, List, Dict -from distutils.command.build import build as _build - -from ..warnings import SetuptoolsDeprecationWarning - -if sys.version_info >= (3, 8): - from typing import Protocol -elif TYPE_CHECKING: - from typing_extensions import Protocol -else: - from abc import ABC as Protocol - - -_ORIGINAL_SUBCOMMANDS = {"build_py", "build_clib", "build_ext", "build_scripts"} - - -class build(_build): - # copy to avoid sharing the object with parent class - sub_commands = _build.sub_commands[:] - - def get_sub_commands(self): - subcommands = {cmd[0] for cmd in _build.sub_commands} - if subcommands - _ORIGINAL_SUBCOMMANDS: - SetuptoolsDeprecationWarning.emit( - "Direct usage of `distutils` commands", - """ - It seems that you are using `distutils.command.build` to add - new subcommands. Using `distutils` directly is considered deprecated, - please use `setuptools.command.build`. - """, - due_date=(2023, 12, 13), # Warning introduced in 13 Jun 2022. - see_url="https://peps.python.org/pep-0632/", - ) - self.sub_commands = _build.sub_commands - return super().get_sub_commands() - - -class SubCommand(Protocol): - """In order to support editable installations (see :pep:`660`) all - build subcommands **SHOULD** implement this protocol. They also **MUST** inherit - from ``setuptools.Command``. - - When creating an :pep:`editable wheel <660>`, ``setuptools`` will try to evaluate - custom ``build`` subcommands using the following procedure: - - 1. ``setuptools`` will set the ``editable_mode`` attribute to ``True`` - 2. ``setuptools`` will execute the ``run()`` command. - - .. important:: - Subcommands **SHOULD** take advantage of ``editable_mode=True`` to adequate - its behaviour or perform optimisations. 
- - For example, if a subcommand doesn't need to generate an extra file and - all it does is to copy a source file into the build directory, - ``run()`` **SHOULD** simply "early return". - - Similarly, if the subcommand creates files that would be placed alongside - Python files in the final distribution, during an editable install - the command **SHOULD** generate these files "in place" (i.e. write them to - the original source directory, instead of using the build directory). - Note that ``get_output_mapping()`` should reflect that and include mappings - for "in place" builds accordingly. - - 3. ``setuptools`` use any knowledge it can derive from the return values of - ``get_outputs()`` and ``get_output_mapping()`` to create an editable wheel. - When relevant ``setuptools`` **MAY** attempt to use file links based on the value - of ``get_output_mapping()``. Alternatively, ``setuptools`` **MAY** attempt to use - :doc:`import hooks ` to redirect any attempt to import - to the directory with the original source code and other files built in place. - - Please note that custom sub-commands **SHOULD NOT** rely on ``run()`` being - executed (or not) to provide correct return values for ``get_outputs()``, - ``get_output_mapping()`` or ``get_source_files()``. The ``get_*`` methods should - work independently of ``run()``. - """ - - editable_mode: bool = False - """Boolean flag that will be set to ``True`` when setuptools is used for an - editable installation (see :pep:`660`). - Implementations **SHOULD** explicitly set the default value of this attribute to - ``False``. - When subcommands run, they can use this flag to perform optimizations or change - their behaviour accordingly. - """ - - build_lib: str - """String representing the directory where the build artifacts should be stored, - e.g. ``build/lib``. - For example, if a distribution wants to provide a Python module named ``pkg.mod``, - then a corresponding file should be written to ``{build_lib}/package/module.py``. - A way of thinking about this is that the files saved under ``build_lib`` - would be eventually copied to one of the directories in :obj:`site.PREFIXES` - upon installation. - - A command that produces platform-independent files (e.g. compiling text templates - into Python functions), **CAN** initialize ``build_lib`` by copying its value from - the ``build_py`` command. On the other hand, a command that produces - platform-specific files **CAN** initialize ``build_lib`` by copying its value from - the ``build_ext`` command. In general this is done inside the ``finalize_options`` - method with the help of the ``set_undefined_options`` command:: - - def finalize_options(self): - self.set_undefined_options("build_py", ("build_lib", "build_lib")) - ... - """ - - def initialize_options(self): - """(Required by the original :class:`setuptools.Command` interface)""" - - def finalize_options(self): - """(Required by the original :class:`setuptools.Command` interface)""" - - def run(self): - """(Required by the original :class:`setuptools.Command` interface)""" - - def get_source_files(self) -> List[str]: - """ - Return a list of all files that are used by the command to create the expected - outputs. - For example, if your build command transpiles Java files into Python, you should - list here all the Java files. - The primary purpose of this function is to help populating the ``sdist`` - with all the files necessary to build the distribution. - All files should be strings relative to the project root directory. 
- """ - - def get_outputs(self) -> List[str]: - """ - Return a list of files intended for distribution as they would have been - produced by the build. - These files should be strings in the form of - ``"{build_lib}/destination/file/path"``. - - .. note:: - The return value of ``get_output()`` should include all files used as keys - in ``get_output_mapping()`` plus files that are generated during the build - and don't correspond to any source file already present in the project. - """ - - def get_output_mapping(self) -> Dict[str, str]: - """ - Return a mapping between destination files as they would be produced by the - build (dict keys) into the respective existing (source) files (dict values). - Existing (source) files should be represented as strings relative to the project - root directory. - Destination files should be strings in the form of - ``"{build_lib}/destination/file/path"``. - """ diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/dev/run_inference_tests.sh b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/dev/run_inference_tests.sh deleted file mode 100644 index bc9dcc56f06f79fc5efa42c04ffdc07c2787e3ac..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/dev/run_inference_tests.sh +++ /dev/null @@ -1,44 +0,0 @@ -#!/bin/bash -e -# Copyright (c) Facebook, Inc. and its affiliates. - -BIN="python tools/train_net.py" -OUTPUT="inference_test_output" -NUM_GPUS=2 - -CFG_LIST=( "${@:1}" ) - -if [ ${#CFG_LIST[@]} -eq 0 ]; then - CFG_LIST=( ./configs/quick_schedules/*inference_acc_test.yaml ) -fi - -echo "========================================================================" -echo "Configs to run:" -echo "${CFG_LIST[@]}" -echo "========================================================================" - - -for cfg in "${CFG_LIST[@]}"; do - echo "========================================================================" - echo "Running $cfg ..." - echo "========================================================================" - $BIN \ - --eval-only \ - --num-gpus $NUM_GPUS \ - --config-file "$cfg" \ - OUTPUT_DIR $OUTPUT - rm -rf $OUTPUT -done - - -echo "========================================================================" -echo "Running demo.py ..." -echo "========================================================================" -DEMO_BIN="python demo/demo.py" -COCO_DIR=datasets/coco/val2014 -mkdir -pv $OUTPUT - -set -v - -$DEMO_BIN --config-file ./configs/quick_schedules/panoptic_fpn_R_50_inference_acc_test.yaml \ - --input $COCO_DIR/COCO_val2014_0000001933* --output $OUTPUT -rm -rf $OUTPUT diff --git a/spaces/ThirdEyeData/Health-Insurance-Cross-Sell-Prediction/app.py b/spaces/ThirdEyeData/Health-Insurance-Cross-Sell-Prediction/app.py deleted file mode 100644 index 7e5cf0780a24649548558eae999ce4feabbd7e6b..0000000000000000000000000000000000000000 --- a/spaces/ThirdEyeData/Health-Insurance-Cross-Sell-Prediction/app.py +++ /dev/null @@ -1,164 +0,0 @@ -import pandas as pd -import numpy as np -import matplotlib.pyplot as plt -import seaborn as sns -import streamlit as st -import pickle -from sklearn.preprocessing import StandardScaler -from xgboost import XGBClassifier -from sklearn.model_selection import train_test_split -from sklearn.preprocessing import LabelEncoder - - - -st.title("Health Insurance Cross Sell Prediction") -st.write("""This application uses XGBoost Classifier to perform cross cell predictin. Now, question arises what is Cross Sell Prediction? 
-So, Cross-selling involves selling complementary products to existing customers. It is one of the highly effective techniques in the marketing industry. - -The project uses the dataset of customers of Health Insurance company and the problem the statement is as follows: - -to build a model to predict whether a customer would be interested in Vehicle Insurance is extremely helpful for the company because -it can then accordingly plan its communication strategy to reach out to those customers and optimize its business model and revenue.""") -st.sidebar.header('Customer Data') - -#df = pd.read_csv('health_insurance.csv') - - -# DATA from user -def user_report(): - gender = st.sidebar.selectbox("Gender", - ("Male", "Female" )) - if gender=='Female': - gender=0 - else: - gender=1 - age = st.sidebar.slider('Age of Customer', 20,85, 28 ) - license = st.sidebar.selectbox('has Driving_License?', ("YES","NO") ) - if license=='NO': - license=0 - else: - license=1 - regioncode = st.sidebar.number_input('Enter the Region Code (any number between 0 to 52 )',min_value=0,max_value=52,step=1) - is_previously_insured = st.sidebar.selectbox('is_previously_insured', ("YES","NO") ) - if is_previously_insured=='YES': - is_previously_insured=1 - else: - is_previously_insured=0 - vechile_age = st.sidebar.selectbox('Vechile Age',('<1 year','1-2 year','>2 years')) - if vechile_age=='1-2 year': - vechile_age=0 - elif vechile_age=='<1 year': - vechile_age=1 - else: - vechile_age=2 - is_your_vechile_damaged = st.sidebar.selectbox('Is your Vechile Damaged',("YES","NO")) - if is_your_vechile_damaged =='NO': - is_your_vechile_damaged=0 - else: - is_your_vechile_damaged=1 - annual_premium = st.sidebar.slider('Enter Annual premium you pay', 2000,60000, 5000 ) - policy_sales_channel= st.sidebar.number_input("Policy Sales Channel(Enter any number between 1 to 160)",step =1,min_value=1,max_value=160) - number_of_days_company = st.sidebar.number_input("Enter the number of days Associaed with company(Vintage)",step=1) - - user_report_data = { - 'Gender':gender, - 'Age':age, - 'Driving_License':license, - 'Region_Code':regioncode, - 'Previously_Insured': is_previously_insured, - 'Vehicle_Age':vechile_age, - 'Vehicle_Damage':is_your_vechile_damaged, - 'Annual_Premium': annual_premium, - 'Policy_Sales_Channel':policy_sales_channel, - 'Vintage':number_of_days_company, - } - report_data = pd.DataFrame(user_report_data, index=[0]) - return report_data - - -#Customer Data -user_data = user_report() -st.header("Customer Data") -st.write(user_data) - - -def prediction(report_data): - # Importing data from csv - - df = pd.read_csv('health_insurance.csv') - - # Label Encoder - - le_gender = LabelEncoder() - df['Gender'] = le_gender.fit_transform(df['Gender']) - - le_vAge = LabelEncoder() - df['Vehicle_Age'] = le_vAge.fit_transform(df['Vehicle_Age']) - - le_vDamage = LabelEncoder() - df['Vehicle_Damage'] = le_vDamage.fit_transform(df['Vehicle_Damage']) - - x = df.drop(columns=['id','Response'], axis = 1) - y = df['Response'] - - #balancing the data for Target column - from imblearn.over_sampling import SMOTE - smt = SMOTE(k_neighbors=8, random_state=10) - x_new, y_new = smt.fit_resample(x, y) - - #Splitting the data into train and test datasets - xtrain, xtest, ytrain, ytest = train_test_split(x_new, y_new, test_size =.30, random_state = 0) - - #Xg boost model building - - model_xgb = XGBClassifier() - model_xgb.fit(xtrain, ytrain) - - # using Standard Scaler - scaler = StandardScaler() - - xtrain = scaler.fit_transform(xtrain) - #scaling 
the user data - report_data=scaler.transform(report_data) - response = model_xgb.predict(report_data) - if response==1: - return 'Status of Customer. This customer willing to buy a vehicle insurance' - else: - return 'Status of Customer. This customer will not buy a vehicle insurance' - -y_pred = prediction(user_data) - -if st.button("Predict"): - st.subheader(y_pred) - - -st.write("""Features Used: - -The following are the input Varibles of a customer which Company needs to be enter, and then the application will predict whether that particular -person/customer will be willing to buy Vehicle Insurance or not - -1) Gender : Gender of the customer - -2) Age : Age of the customer - -3) Driving_License : 0 - Customer does not have DL,1 - Customer already has DL - -4) Region_Code : Unique code for the region of the customer - -5) Previously_Insured : 1 - Customer already has Vehicle Insurance, 0-Customer doesn't have Vehicle Insurance - -6) Vehicle_Age : Age of the Vehicle - -7) Vehicle_Damage : 1 - Customer got his/her vehicle damaged in the past. 0 -Customer didn't get his/her vehicle damaged in the past. - -8) Annual_Premium : The amount customer needs to pay as premium in the year - -9) PolicySalesChannel : Anonymized Code for the channel of outreaching to the customer ie. Different Agents, Over Mail, Over Phone, In Person, etc. - -10) Vintage : Number of Days, Customer has been associated with the company - -Target Column/Prediction - -Response : 1 - Customer is interested, 0 - Customer is not interested""") - - diff --git a/spaces/Vignesh2496/project/README.md b/spaces/Vignesh2496/project/README.md deleted file mode 100644 index a373690bad045c367b633cebba3e41324c60dcd0..0000000000000000000000000000000000000000 --- a/spaces/Vignesh2496/project/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Project -emoji: 📊 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Wanlau/sovits-4.0_datealive/vdecoder/hifigan/utils.py b/spaces/Wanlau/sovits-4.0_datealive/vdecoder/hifigan/utils.py deleted file mode 100644 index 9c93c996d3cc73c30d71c1fc47056e4230f35c0f..0000000000000000000000000000000000000000 --- a/spaces/Wanlau/sovits-4.0_datealive/vdecoder/hifigan/utils.py +++ /dev/null @@ -1,68 +0,0 @@ -import glob -import os -import matplotlib -import torch -from torch.nn.utils import weight_norm -# matplotlib.use("Agg") -import matplotlib.pylab as plt - - -def plot_spectrogram(spectrogram): - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - - fig.canvas.draw() - plt.close() - - return fig - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def apply_weight_norm(m): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - weight_norm(m) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def load_checkpoint(filepath, device): - assert os.path.isfile(filepath) - print("Loading '{}'".format(filepath)) - checkpoint_dict = torch.load(filepath, map_location=device) - print("Complete.") - return checkpoint_dict - - -def save_checkpoint(filepath, obj): - print("Saving checkpoint to {}".format(filepath)) - torch.save(obj, filepath) - print("Complete.") - - -def 
del_old_checkpoints(cp_dir, prefix, n_models=2): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) # get checkpoint paths - cp_list = sorted(cp_list)# sort by iter - if len(cp_list) > n_models: # if more than n_models models are found - for cp in cp_list[:-n_models]:# delete the oldest models other than lastest n_models - open(cp, 'w').close()# empty file contents - os.unlink(cp)# delete file (move to trash when using Colab) - - -def scan_checkpoint(cp_dir, prefix): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) - if len(cp_list) == 0: - return None - return sorted(cp_list)[-1] - diff --git a/spaces/WindVChen/INR-Harmon/app.py b/spaces/WindVChen/INR-Harmon/app.py deleted file mode 100644 index de4198e9603061e2a8cfd3e615f226ab0fa3723a..0000000000000000000000000000000000000000 --- a/spaces/WindVChen/INR-Harmon/app.py +++ /dev/null @@ -1,324 +0,0 @@ -import os - -import cv2 - -import gradio as gr -import numpy as np -import sys -import io - - -class Logger: - def __init__(self): - self.terminal = sys.stdout - self.log = io.BytesIO() - - def write(self, message): - self.terminal.write(message) - self.log.write(bytes(message, encoding='utf-8')) - - def flush(self): - self.terminal.flush() - self.log.flush() - - def isatty(self): - return False - - -log = Logger() -sys.stdout = log - - -def read_logs(): - out = log.log.getvalue().decode() - if out.count("\n") >= 30: - log.log = io.BytesIO() - sys.stdout.flush() - return out - - -with gr.Blocks(css=".output-image, .input-image, .image-preview {height: 600px !important}") as app: - gr.Markdown(""" -# HINet (or INR-Harmonization) - A novel image Harmonization method based on Implicit neural Networks -## Harmonize any image you want! Arbitrary resolution, and arbitrary aspect ratio! -### Official Gradio Demo. See here for [**How to play with this Space**](https://github.com/WindVChen/INR-Harmonization/blob/main/assets/demo.gif) -**Since Gradio Space only support CPU, the speed may kind of slow. You may better download the code to run locally with a GPU.** -* Official Repo: [INR-Harmonization](https://github.com/WindVChen/INR-Harmonization) -""") - - gr.HTML(""" - (Notice: Sometimes it will encounter CONFLICTs when multiple users access this space at the same time, so we highly recommend you to duplicate this space and run in your private space for no queue on your own hardware - -Duplicate Space - """) - - - gr.Markdown(""" - ## Quick Start - 1. Select desired `Pretrained Model`. - 2. Select a composite image, and then a mask with the same size. - 3. Select the inference mode (for non-square image, only `Arbitrary Image` support). Also note that `Square Image` mode will be much faster than `Arbitrary Image` mode. - 4. Set `Split Resolution` (Patches' resolution) or `Split Number` (How many patches, about N*N) according to the inference mode. - 5. Click `Start` and enjoy it! - 6. Click `Stop` if you want to stop the current process. You can also click `Reset` button any time to reinitialize the GUI. 
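The Quick Start above maps onto the calls this app makes further down (parse_args and main_process from efficient_inference_for_square_image). A minimal non-GUI sketch of the same "Square Image" flow, assuming those modules and a downloaded checkpoint are available; the image file names, the output path, and the checkpoint choice are placeholders, and any mask preprocessing beyond what the app itself does is left to the repository's inference code:

import os
import cv2
import numpy as np
from PIL import Image
from efficient_inference_for_square_image import parse_args, main_process

opt = parse_args()
opt.transform_mean = [.5, .5, .5]
opt.transform_var = [.5, .5, .5]
opt.pretrained = os.path.join("./pretrained_models", "Resolution_RAW_iHarmony4.pth")
opt.split_resolution = 256   # "Split Resolution" from step 4 of the Quick Start
opt.save_path = None
opt.workers = 0
opt.device = "cpu"

composite = np.asarray(Image.open("composite.jpg").convert("RGB"))  # placeholder inputs
mask = np.asarray(Image.open("mask.png"))

result_bgr = main_process(opt, composite_image=composite, mask=mask)  # returns a BGR array
cv2.imwrite("harmonized.png", result_bgr)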
- - """) - - valid_checkpoints_dict = {"Resolution_256_iHarmony4": "Resolution_256_iHarmony4.pth", - "Resolution_1024_HAdobe5K": "Resolution_1024_HAdobe5K.pth", - "Resolution_2048_HAdobe5K": "Resolution_2048_HAdobe5K.pth", - "Resolution_RAW_HAdobe5K": "Resolution_RAW_HAdobe5K.pth", - "Resolution_RAW_iHarmony4": "Resolution_RAW_iHarmony4.pth"} - - global_state = gr.State({ - 'pretrained_weight': valid_checkpoints_dict["Resolution_RAW_iHarmony4"], - - }) - with gr.Row(): - with gr.Column(): - form_composite_image = gr.Image(label='Input Composite image', type='pil').style(height=512) - gr.Examples(examples=sorted([os.path.join("demo", i) for i in os.listdir("demo") if "composite" in i]), - label="Composite Examples", inputs=form_composite_image, cache_examples=False) - with gr.Column(): - form_mask_image = gr.Image(label='Input Mask image', type='pil', interactive=False).style(height=512) - gr.Examples(examples=sorted([os.path.join("demo", i) for i in os.listdir("demo") if "mask" in i]), - label="Mask Examples", inputs=form_mask_image, cache_examples=False) - with gr.Row(): - with gr.Column(scale=4): - with gr.Row(): - with gr.Column(scale=2, min_width=10): - gr.Markdown(value='Model Selection', show_label=False) - - with gr.Column(scale=4, min_width=10): - form_pretrained_dropdown = gr.Dropdown( - choices=list(valid_checkpoints_dict.values()), - label="Pretrained Model", - value=valid_checkpoints_dict["Resolution_RAW_iHarmony4"], - interactive=True - ) - - with gr.Row(): - with gr.Column(scale=2, min_width=10): - gr.Markdown(value='Inference Mode', show_label=False) - - with gr.Column(scale=4, min_width=10): - form_inference_mode = gr.Radio( - ['Square Image', 'Arbitrary Image'], - value='Arbitrary Image', - interactive=False, - label='Mode', - ) - - with gr.Row(): - with gr.Column(scale=2, min_width=10): - gr.Markdown(value='Split Parameter', show_label=False) - - with gr.Column(scale=4, min_width=10): - form_split_res = gr.Slider( - minimum=0, - maximum=2048, - step=128, - value=256, - interactive=False, - label="Split Resolution", - ) - form_split_num = gr.Number( - value=2, - interactive=False, - label="Split Number") - with gr.Row(): - form_log = gr.Textbox(read_logs, label="Logs", interactive=False, type="text", every=1) - - with gr.Column(scale=4): - form_harmonized_image = gr.Image(label='Harmonized Result', type='numpy', interactive=False).style(height=512) - form_start_btn = gr.Button("Start Harmonization", interactive=False) - form_reset_btn = gr.Button("Reset", interactive=True) - form_stop_btn = gr.Button("Stop", interactive=True) - - - def on_change_form_composite_image(form_composite_image): - if form_composite_image is None: - return gr.update(interactive=False, value=None), gr.update(value=None) - return gr.update(interactive=True, value=None), gr.update(value=None) - - - def on_change_form_mask_image(form_composite_image, form_mask_image): - if form_mask_image is None: - return gr.update(interactive=False), gr.update( - interactive=False if form_composite_image is None else True), gr.update(interactive=False), gr.update( - interactive=False), gr.update(interactive=False), gr.update(value=None) - - if form_composite_image.size[:2] != form_mask_image.size[:2]: - raise gr.Error("Composite image and mask image should have the same resolution!") - else: - w, h = form_composite_image.size[:2] - if h != w or (h % 16 != 0): - return gr.update(value='Arbitrary Image', interactive=False), gr.update(interactive=True), gr.update( - interactive=True), gr.update(interactive=True, 
visible=True), gr.update(interactive=False, - value=-1, visible=False), gr.update(value=None) - else: - return gr.update(value='Square Image', interactive=True), gr.update(interactive=True), gr.update( - interactive=True), gr.update(interactive=False, visible=False), gr.update(interactive=True, - value=h // 2, - maximum=h, - minimum=h // 16, - step=h // 16, visible=True), gr.update(value=None) - - - form_composite_image.change( - on_change_form_composite_image, - inputs=[form_composite_image], - outputs=[form_mask_image, form_harmonized_image] - ) - - form_mask_image.change( - on_change_form_mask_image, - inputs=[form_composite_image, form_mask_image], - outputs=[form_inference_mode, form_mask_image, form_start_btn, form_split_num, form_split_res, - form_harmonized_image] - ) - - - def on_change_form_split_num(form_composite_image, form_split_num): - w, h = form_composite_image.size[:2] - if form_split_num < 1: - return gr.update(value=1) - elif form_split_num > min(w, h): - return gr.update(value=min(w, h)) - else: - return gr.update(value=form_split_num) - - - form_split_num.change( - on_change_form_split_num, - inputs=[form_composite_image, form_split_num], - outputs=[form_split_num] - ) - - - def on_change_form_inference_mode(form_inference_mode): - if form_inference_mode == "Square Image": - return gr.update(interactive=True, visible=True), gr.update(interactive=False, visible=False) - else: - return gr.update(interactive=False, visible=False), gr.update(interactive=True, visible=True) - - - form_inference_mode.change(on_change_form_inference_mode, inputs=[form_inference_mode], - outputs=[form_split_res, form_split_num]) - - - def on_click_form_start_btn(form_composite_image, form_mask_image, form_pretrained_dropdown, form_inference_mode, - form_split_res, form_split_num): - log.log = io.BytesIO() - print(f"Harmonizing image with {form_composite_image.size[1]}*{form_composite_image.size[0]}...") - if form_inference_mode == "Square Image": - from efficient_inference_for_square_image import parse_args, main_process, global_state - global_state[0] = 1 - - opt = parse_args() - opt.transform_mean = [.5, .5, .5] - opt.transform_var = [.5, .5, .5] - opt.pretrained = os.path.join("./pretrained_models", form_pretrained_dropdown) - opt.split_resolution = form_split_res - opt.save_path = None - opt.workers = 0 - opt.device = "cpu" - - composite_image = np.asarray(form_composite_image) - mask = np.asarray(form_mask_image) - - try: - return cv2.cvtColor( - main_process(opt, composite_image=composite_image, mask=mask), - cv2.COLOR_BGR2RGB) - except: - raise gr.Error("Patches too big. Try to reduce the `split_res`!") - - else: - from inference_for_arbitrary_resolution_image import parse_args, main_process, global_state - global_state[0] = 1 - - opt = parse_args() - opt.transform_mean = [.5, .5, .5] - opt.transform_var = [.5, .5, .5] - opt.pretrained = os.path.join("./pretrained_models", form_pretrained_dropdown) - opt.split_num = int(form_split_num) - opt.save_path = None - opt.workers = 0 - opt.device = "cpu" - - composite_image = np.asarray(form_composite_image) - mask = np.asarray(form_mask_image) - - try: - return cv2.cvtColor( - main_process(opt, composite_image=composite_image, mask=mask), - cv2.COLOR_BGR2RGB) - except: - raise gr.Error("Patches too big. 
Try to increase the `split_num`!") - - - generate = form_start_btn.click(on_click_form_start_btn, - inputs=[form_composite_image, form_mask_image, form_pretrained_dropdown, - form_inference_mode, - form_split_res, form_split_num], outputs=[form_harmonized_image]) - - - def on_click_form_reset_btn(form_inference_mode): - if form_inference_mode == "Square Image": - from efficient_inference_for_square_image import global_state - global_state[0] = 0 - else: - from inference_for_arbitrary_resolution_image import global_state - global_state[0] = 0 - - log.log = io.BytesIO() - return gr.update(value=None), gr.update(value=None, interactive=True), gr.update(value=None, - interactive=False), gr.update( - interactive=False) - - - form_reset_btn.click(on_click_form_reset_btn, - inputs=[form_inference_mode], - outputs=[form_log, form_composite_image, form_mask_image, form_start_btn], cancels=generate) - - - def on_click_form_stop(form_inference_mode): - if form_inference_mode == "Square Image": - from efficient_inference_for_square_image import global_state - global_state[0] = 0 - else: - from inference_for_arbitrary_resolution_image import global_state - global_state[0] = 0 - - log.log = io.BytesIO() - return gr.update(value=None), gr.update(value=None, interactive=True), gr.update(value=None, - interactive=False), gr.update( - interactive=False) - - - form_stop_btn.click(on_click_form_stop, - inputs=[form_inference_mode], - outputs=[form_log, form_composite_image, form_mask_image, form_start_btn], cancels=generate) - - gr.HTML(""" - -
- Gradio demo supported by - WindVChen -
- """) - -gr.close_all() - -app.queue(concurrency_count=1, max_size=1, api_open=False) - -app.launch(show_api=False) diff --git a/spaces/XzJosh/Gun-Bert-VITS2/text/tone_sandhi.py b/spaces/XzJosh/Gun-Bert-VITS2/text/tone_sandhi.py deleted file mode 100644 index 0f45b7a72c5d858bcaab19ac85cfa686bf9a74da..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Gun-Bert-VITS2/text/tone_sandhi.py +++ /dev/null @@ -1,351 +0,0 @@ -# Copyright (c) 2021 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from typing import List -from typing import Tuple - -import jieba -from pypinyin import lazy_pinyin -from pypinyin import Style - - -class ToneSandhi(): - def __init__(self): - self.must_neural_tone_words = { - '麻烦', '麻利', '鸳鸯', '高粱', '骨头', '骆驼', '马虎', '首饰', '馒头', '馄饨', '风筝', - '难为', '队伍', '阔气', '闺女', '门道', '锄头', '铺盖', '铃铛', '铁匠', '钥匙', '里脊', - '里头', '部分', '那么', '道士', '造化', '迷糊', '连累', '这么', '这个', '运气', '过去', - '软和', '转悠', '踏实', '跳蚤', '跟头', '趔趄', '财主', '豆腐', '讲究', '记性', '记号', - '认识', '规矩', '见识', '裁缝', '补丁', '衣裳', '衣服', '衙门', '街坊', '行李', '行当', - '蛤蟆', '蘑菇', '薄荷', '葫芦', '葡萄', '萝卜', '荸荠', '苗条', '苗头', '苍蝇', '芝麻', - '舒服', '舒坦', '舌头', '自在', '膏药', '脾气', '脑袋', '脊梁', '能耐', '胳膊', '胭脂', - '胡萝', '胡琴', '胡同', '聪明', '耽误', '耽搁', '耷拉', '耳朵', '老爷', '老实', '老婆', - '老头', '老太', '翻腾', '罗嗦', '罐头', '编辑', '结实', '红火', '累赘', '糨糊', '糊涂', - '精神', '粮食', '簸箕', '篱笆', '算计', '算盘', '答应', '笤帚', '笑语', '笑话', '窟窿', - '窝囊', '窗户', '稳当', '稀罕', '称呼', '秧歌', '秀气', '秀才', '福气', '祖宗', '砚台', - '码头', '石榴', '石头', '石匠', '知识', '眼睛', '眯缝', '眨巴', '眉毛', '相声', '盘算', - '白净', '痢疾', '痛快', '疟疾', '疙瘩', '疏忽', '畜生', '生意', '甘蔗', '琵琶', '琢磨', - '琉璃', '玻璃', '玫瑰', '玄乎', '狐狸', '状元', '特务', '牲口', '牙碜', '牌楼', '爽快', - '爱人', '热闹', '烧饼', '烟筒', '烂糊', '点心', '炊帚', '灯笼', '火候', '漂亮', '滑溜', - '溜达', '温和', '清楚', '消息', '浪头', '活泼', '比方', '正经', '欺负', '模糊', '槟榔', - '棺材', '棒槌', '棉花', '核桃', '栅栏', '柴火', '架势', '枕头', '枇杷', '机灵', '本事', - '木头', '木匠', '朋友', '月饼', '月亮', '暖和', '明白', '时候', '新鲜', '故事', '收拾', - '收成', '提防', '挖苦', '挑剔', '指甲', '指头', '拾掇', '拳头', '拨弄', '招牌', '招呼', - '抬举', '护士', '折腾', '扫帚', '打量', '打算', '打点', '打扮', '打听', '打发', '扎实', - '扁担', '戒指', '懒得', '意识', '意思', '情形', '悟性', '怪物', '思量', '怎么', '念头', - '念叨', '快活', '忙活', '志气', '心思', '得罪', '张罗', '弟兄', '开通', '应酬', '庄稼', - '干事', '帮手', '帐篷', '希罕', '师父', '师傅', '巴结', '巴掌', '差事', '工夫', '岁数', - '屁股', '尾巴', '少爷', '小气', '小伙', '将就', '对头', '对付', '寡妇', '家伙', '客气', - '实在', '官司', '学问', '学生', '字号', '嫁妆', '媳妇', '媒人', '婆家', '娘家', '委屈', - '姑娘', '姐夫', '妯娌', '妥当', '妖精', '奴才', '女婿', '头发', '太阳', '大爷', '大方', - '大意', '大夫', '多少', '多么', '外甥', '壮实', '地道', '地方', '在乎', '困难', '嘴巴', - '嘱咐', '嘟囔', '嘀咕', '喜欢', '喇嘛', '喇叭', '商量', '唾沫', '哑巴', '哈欠', '哆嗦', - '咳嗽', '和尚', '告诉', '告示', '含糊', '吓唬', '后头', '名字', '名堂', '合同', '吆喝', - '叫唤', '口袋', '厚道', '厉害', '千斤', '包袱', '包涵', '匀称', '勤快', '动静', '动弹', - '功夫', '力气', '前头', '刺猬', '刺激', '别扭', '利落', '利索', '利害', '分析', '出息', - '凑合', '凉快', '冷战', '冤枉', '冒失', '养活', '关系', '先生', '兄弟', '便宜', '使唤', - '佩服', '作坊', '体面', '位置', '似的', '伙计', '休息', '什么', '人家', '亲戚', '亲家', - '交情', '云彩', '事情', '买卖', '主意', '丫头', '丧气', '两口', 
'东西', '东家', '世故', - '不由', '不在', '下水', '下巴', '上头', '上司', '丈夫', '丈人', '一辈', '那个', '菩萨', - '父亲', '母亲', '咕噜', '邋遢', '费用', '冤家', '甜头', '介绍', '荒唐', '大人', '泥鳅', - '幸福', '熟悉', '计划', '扑腾', '蜡烛', '姥爷', '照顾', '喉咙', '吉他', '弄堂', '蚂蚱', - '凤凰', '拖沓', '寒碜', '糟蹋', '倒腾', '报复', '逻辑', '盘缠', '喽啰', '牢骚', '咖喱', - '扫把', '惦记' - } - self.must_not_neural_tone_words = { - "男子", "女子", "分子", "原子", "量子", "莲子", "石子", "瓜子", "电子", "人人", "虎虎" - } - self.punc = ":,;。?!“”‘’':,;.?!" - - # the meaning of jieba pos tag: https://blog.csdn.net/weixin_44174352/article/details/113731041 - # e.g. - # word: "家里" - # pos: "s" - # finals: ['ia1', 'i3'] - def _neural_sandhi(self, word: str, pos: str, - finals: List[str]) -> List[str]: - - # reduplication words for n. and v. e.g. 奶奶, 试试, 旺旺 - for j, item in enumerate(word): - if j - 1 >= 0 and item == word[j - 1] and pos[0] in { - "n", "v", "a" - } and word not in self.must_not_neural_tone_words: - finals[j] = finals[j][:-1] + "5" - ge_idx = word.find("个") - if len(word) >= 1 and word[-1] in "吧呢啊呐噻嘛吖嗨呐哦哒额滴哩哟喽啰耶喔诶": - finals[-1] = finals[-1][:-1] + "5" - elif len(word) >= 1 and word[-1] in "的地得": - finals[-1] = finals[-1][:-1] + "5" - # e.g. 走了, 看着, 去过 - # elif len(word) == 1 and word in "了着过" and pos in {"ul", "uz", "ug"}: - # finals[-1] = finals[-1][:-1] + "5" - elif len(word) > 1 and word[-1] in "们子" and pos in { - "r", "n" - } and word not in self.must_not_neural_tone_words: - finals[-1] = finals[-1][:-1] + "5" - # e.g. 桌上, 地下, 家里 - elif len(word) > 1 and word[-1] in "上下里" and pos in {"s", "l", "f"}: - finals[-1] = finals[-1][:-1] + "5" - # e.g. 上来, 下去 - elif len(word) > 1 and word[-1] in "来去" and word[-2] in "上下进出回过起开": - finals[-1] = finals[-1][:-1] + "5" - # 个做量词 - elif (ge_idx >= 1 and - (word[ge_idx - 1].isnumeric() or - word[ge_idx - 1] in "几有两半多各整每做是")) or word == '个': - finals[ge_idx] = finals[ge_idx][:-1] + "5" - else: - if word in self.must_neural_tone_words or word[ - -2:] in self.must_neural_tone_words: - finals[-1] = finals[-1][:-1] + "5" - - word_list = self._split_word(word) - finals_list = [finals[:len(word_list[0])], finals[len(word_list[0]):]] - for i, word in enumerate(word_list): - # conventional neural in Chinese - if word in self.must_neural_tone_words or word[ - -2:] in self.must_neural_tone_words: - finals_list[i][-1] = finals_list[i][-1][:-1] + "5" - finals = sum(finals_list, []) - return finals - - def _bu_sandhi(self, word: str, finals: List[str]) -> List[str]: - # e.g. 看不懂 - if len(word) == 3 and word[1] == "不": - finals[1] = finals[1][:-1] + "5" - else: - for i, char in enumerate(word): - # "不" before tone4 should be bu2, e.g. 不怕 - if char == "不" and i + 1 < len(word) and finals[i + - 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - return finals - - def _yi_sandhi(self, word: str, finals: List[str]) -> List[str]: - # "一" in number sequences, e.g. 一零零, 二一零 - if word.find("一") != -1 and all( - [item.isnumeric() for item in word if item != "一"]): - return finals - # "一" between reduplication words shold be yi5, e.g. 看一看 - elif len(word) == 3 and word[1] == "一" and word[0] == word[-1]: - finals[1] = finals[1][:-1] + "5" - # when "一" is ordinal word, it should be yi1 - elif word.startswith("第一"): - finals[1] = finals[1][:-1] + "1" - else: - for i, char in enumerate(word): - if char == "一" and i + 1 < len(word): - # "一" before tone4 should be yi2, e.g. 一段 - if finals[i + 1][-1] == "4": - finals[i] = finals[i][:-1] + "2" - # "一" before non-tone4 should be yi4, e.g. 
一天 - else: - # "一" 后面如果是标点,还读一声 - if word[i + 1] not in self.punc: - finals[i] = finals[i][:-1] + "4" - return finals - - def _split_word(self, word: str) -> List[str]: - word_list = jieba.cut_for_search(word) - word_list = sorted(word_list, key=lambda i: len(i), reverse=False) - first_subword = word_list[0] - first_begin_idx = word.find(first_subword) - if first_begin_idx == 0: - second_subword = word[len(first_subword):] - new_word_list = [first_subword, second_subword] - else: - second_subword = word[:-len(first_subword)] - new_word_list = [second_subword, first_subword] - return new_word_list - - def _three_sandhi(self, word: str, finals: List[str]) -> List[str]: - if len(word) == 2 and self._all_tone_three(finals): - finals[0] = finals[0][:-1] + "2" - elif len(word) == 3: - word_list = self._split_word(word) - if self._all_tone_three(finals): - # disyllabic + monosyllabic, e.g. 蒙古/包 - if len(word_list[0]) == 2: - finals[0] = finals[0][:-1] + "2" - finals[1] = finals[1][:-1] + "2" - # monosyllabic + disyllabic, e.g. 纸/老虎 - elif len(word_list[0]) == 1: - finals[1] = finals[1][:-1] + "2" - else: - finals_list = [ - finals[:len(word_list[0])], finals[len(word_list[0]):] - ] - if len(finals_list) == 2: - for i, sub in enumerate(finals_list): - # e.g. 所有/人 - if self._all_tone_three(sub) and len(sub) == 2: - finals_list[i][0] = finals_list[i][0][:-1] + "2" - # e.g. 好/喜欢 - elif i == 1 and not self._all_tone_three(sub) and finals_list[i][0][-1] == "3" and \ - finals_list[0][-1][-1] == "3": - - finals_list[0][-1] = finals_list[0][-1][:-1] + "2" - finals = sum(finals_list, []) - # split idiom into two words who's length is 2 - elif len(word) == 4: - finals_list = [finals[:2], finals[2:]] - finals = [] - for sub in finals_list: - if self._all_tone_three(sub): - sub[0] = sub[0][:-1] + "2" - finals += sub - - return finals - - def _all_tone_three(self, finals: List[str]) -> bool: - return all(x[-1] == "3" for x in finals) - - # merge "不" and the word behind it - # if don't merge, "不" sometimes appears alone according to jieba, which may occur sandhi error - def _merge_bu(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - last_word = "" - for word, pos in seg: - if last_word == "不": - word = last_word + word - if word != "不": - new_seg.append((word, pos)) - last_word = word[:] - if last_word == "不": - new_seg.append((last_word, 'd')) - last_word = "" - return new_seg - - # function 1: merge "一" and reduplication words in it's left and right, e.g. "听","一","听" ->"听一听" - # function 2: merge single "一" and the word behind it - # if don't merge, "一" sometimes appears alone according to jieba, which may occur sandhi error - # e.g. 
- # input seg: [('听', 'v'), ('一', 'm'), ('听', 'v')] - # output seg: [['听一听', 'v']] - def _merge_yi(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - # function 1 - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and word == "一" and i + 1 < len(seg) and seg[i - 1][ - 0] == seg[i + 1][0] and seg[i - 1][1] == "v": - new_seg[i - 1][0] = new_seg[i - 1][0] + "一" + new_seg[i - 1][0] - else: - if i - 2 >= 0 and seg[i - 1][0] == "一" and seg[i - 2][ - 0] == word and pos == "v": - continue - else: - new_seg.append([word, pos]) - seg = new_seg - new_seg = [] - # function 2 - for i, (word, pos) in enumerate(seg): - if new_seg and new_seg[-1][0] == "一": - new_seg[-1][0] = new_seg[-1][0] + word - else: - new_seg.append([word, pos]) - return new_seg - - # the first and the second words are all_tone_three - def _merge_continuous_three_tones( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - sub_finals_list = [ - lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and self._all_tone_three( - sub_finals_list[i - 1]) and self._all_tone_three( - sub_finals_list[i]) and not merge_last[i - 1]: - # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi - if not self._is_reduplication(seg[i - 1][0]) and len( - seg[i - 1][0]) + len(seg[i][0]) <= 3: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - - return new_seg - - def _is_reduplication(self, word: str) -> bool: - return len(word) == 2 and word[0] == word[1] - - # the last char of first word and the first char of second word is tone_three - def _merge_continuous_three_tones_2( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - sub_finals_list = [ - lazy_pinyin( - word, neutral_tone_with_five=True, style=Style.FINALS_TONE3) - for (word, pos) in seg - ] - assert len(sub_finals_list) == len(seg) - merge_last = [False] * len(seg) - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and sub_finals_list[i - 1][-1][-1] == "3" and sub_finals_list[i][0][-1] == "3" and not \ - merge_last[i - 1]: - # if the last word is reduplication, not merge, because reduplication need to be _neural_sandhi - if not self._is_reduplication(seg[i - 1][0]) and len( - seg[i - 1][0]) + len(seg[i][0]) <= 3: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - merge_last[i] = True - else: - new_seg.append([word, pos]) - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_er(self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if i - 1 >= 0 and word == "儿" and seg[i-1][0] != "#": - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def _merge_reduplication( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - new_seg = [] - for i, (word, pos) in enumerate(seg): - if new_seg and word == new_seg[-1][0]: - new_seg[-1][0] = new_seg[-1][0] + seg[i][0] - else: - new_seg.append([word, pos]) - return new_seg - - def pre_merge_for_modify( - self, seg: List[Tuple[str, str]]) -> List[Tuple[str, str]]: - seg = self._merge_bu(seg) - try: - seg = self._merge_yi(seg) - except: - print("_merge_yi failed") - seg = self._merge_reduplication(seg) - seg = 
self._merge_continuous_three_tones(seg) - seg = self._merge_continuous_three_tones_2(seg) - seg = self._merge_er(seg) - return seg - - def modified_tone(self, word: str, pos: str, - finals: List[str]) -> List[str]: - finals = self._bu_sandhi(word, finals) - finals = self._yi_sandhi(word, finals) - finals = self._neural_sandhi(word, pos, finals) - finals = self._three_sandhi(word, finals) - return finals diff --git a/spaces/XzJosh/LAPLACE-Bert-VITS2/commons.py b/spaces/XzJosh/LAPLACE-Bert-VITS2/commons.py deleted file mode 100644 index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/LAPLACE-Bert-VITS2/commons.py +++ /dev/null @@ -1,161 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. * logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - 
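A short usage sketch for the helpers defined above (an illustration, not part of the original file), assuming this commons.py is importable as commons and that torch is installed; shapes and lengths are arbitrary:

import torch
import commons

x = torch.randn(2, 192, 100)                      # (batch, channels, frames)
x = commons.add_timing_signal_1d(x)               # add the sinusoidal timing signal
lengths = torch.tensor([100, 80])
segments, ids_str = commons.rand_slice_segments(x, lengths, segment_size=32)
print(segments.shape)                             # torch.Size([2, 192, 32])
print(commons.intersperse([1, 2, 3], 0))          # [0, 1, 0, 2, 0, 3, 0]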
- -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. 
/ norm_type) - return total_norm diff --git a/spaces/YONG627/456123/yolov5-code-main/utils/flask_rest_api/example_request.py b/spaces/YONG627/456123/yolov5-code-main/utils/flask_rest_api/example_request.py deleted file mode 100644 index 952e5dcb90fa5ad58628596ed8866b8b7a521d22..0000000000000000000000000000000000000000 --- a/spaces/YONG627/456123/yolov5-code-main/utils/flask_rest_api/example_request.py +++ /dev/null @@ -1,19 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Perform test request -""" - -import pprint - -import requests - -DETECTION_URL = 'http://localhost:5000/v1/object-detection/yolov5s' -IMAGE = 'zidane.jpg' - -# Read image -with open(IMAGE, 'rb') as f: - image_data = f.read() - -response = requests.post(DETECTION_URL, files={'image': image_data}).json() - -pprint.pprint(response) diff --git a/spaces/YouLiXiya/Mobile-SAM/GroundingDINO/demo/gradio_app.py b/spaces/YouLiXiya/Mobile-SAM/GroundingDINO/demo/gradio_app.py deleted file mode 100644 index 15e08323f485291df8b53eefd4691c087d7863f7..0000000000000000000000000000000000000000 --- a/spaces/YouLiXiya/Mobile-SAM/GroundingDINO/demo/gradio_app.py +++ /dev/null @@ -1,125 +0,0 @@ -import argparse -from functools import partial -import cv2 -import requests -import os -from io import BytesIO -from PIL import Image -import numpy as np -from pathlib import Path - - -import warnings - -import torch - -# prepare the environment -os.system("python setup.py build develop --user") -os.system("pip install packaging==21.3") -os.system("pip install gradio") - - -warnings.filterwarnings("ignore") - -import gradio as gr - -from groundingdino.models import build_model -from groundingdino.util.slconfig import SLConfig -from groundingdino.util.utils import clean_state_dict -from groundingdino.util.inference import annotate, load_image, predict -import groundingdino.datasets.transforms as T - -from huggingface_hub import hf_hub_download - - - -# Use this command for evaluate the GLIP-T model -config_file = "groundingdino/config/GroundingDINO_SwinT_OGC.py" -ckpt_repo_id = "ShilongLiu/GroundingDINO" -ckpt_filenmae = "groundingdino_swint_ogc.pth" - - -def load_model_hf(model_config_path, repo_id, filename, device='cpu'): - args = SLConfig.fromfile(model_config_path) - model = build_model(args) - args.device = device - - cache_file = hf_hub_download(repo_id=repo_id, filename=filename) - checkpoint = torch.load(cache_file, map_location='cpu') - log = model.load_state_dict(clean_state_dict(checkpoint['model']), strict=False) - print("Model loaded from {} \n => {}".format(cache_file, log)) - _ = model.eval() - return model - -def image_transform_grounding(init_image): - transform = T.Compose([ - T.RandomResize([800], max_size=1333), - T.ToTensor(), - T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) - ]) - image, _ = transform(init_image, None) # 3, h, w - return init_image, image - -def image_transform_grounding_for_vis(init_image): - transform = T.Compose([ - T.RandomResize([800], max_size=1333), - ]) - image, _ = transform(init_image, None) # 3, h, w - return image - -model = load_model_hf(config_file, ckpt_repo_id, ckpt_filenmae) - -def run_grounding(input_image, grounding_caption, box_threshold, text_threshold): - init_image = input_image.convert("RGB") - original_size = init_image.size - - _, image_tensor = image_transform_grounding(init_image) - image_pil: Image = image_transform_grounding_for_vis(init_image) - - # run grounidng - boxes, logits, phrases = predict(model, image_tensor, grounding_caption, box_threshold, 
text_threshold, device='cpu') - annotated_frame = annotate(image_source=np.asarray(image_pil), boxes=boxes, logits=logits, phrases=phrases) - image_with_box = Image.fromarray(cv2.cvtColor(annotated_frame, cv2.COLOR_BGR2RGB)) - - - return image_with_box - -if __name__ == "__main__": - - parser = argparse.ArgumentParser("Grounding DINO demo", add_help=True) - parser.add_argument("--debug", action="store_true", help="using debug mode") - parser.add_argument("--share", action="store_true", help="share the app") - args = parser.parse_args() - - block = gr.Blocks().queue() - with block: - gr.Markdown("# [Grounding DINO](https://github.com/IDEA-Research/GroundingDINO)") - gr.Markdown("### Open-World Detection with Grounding DINO") - - with gr.Row(): - with gr.Column(): - input_image = gr.Image(source='upload', type="pil") - grounding_caption = gr.Textbox(label="Detection Prompt") - run_button = gr.Button(label="Run") - with gr.Accordion("Advanced options", open=False): - box_threshold = gr.Slider( - label="Box Threshold", minimum=0.0, maximum=1.0, value=0.25, step=0.001 - ) - text_threshold = gr.Slider( - label="Text Threshold", minimum=0.0, maximum=1.0, value=0.25, step=0.001 - ) - - with gr.Column(): - gallery = gr.outputs.Image( - type="pil", - # label="grounding results" - ).style(full_width=True, full_height=True) - # gallery = gr.Gallery(label="Generated images", show_label=False).style( - # grid=[1], height="auto", container=True, full_width=True, full_height=True) - - run_button.click(fn=run_grounding, inputs=[ - input_image, grounding_caption, box_threshold, text_threshold], outputs=[gallery]) - - - block.launch(server_name='0.0.0.0', server_port=7579, debug=args.debug, share=args.share) - diff --git a/spaces/Zaixi/ICLR_FLAG/models/flag.py b/spaces/Zaixi/ICLR_FLAG/models/flag.py deleted file mode 100644 index d2eb7529c1193ea20c2eb28570a51a97104f84e8..0000000000000000000000000000000000000000 --- a/spaces/Zaixi/ICLR_FLAG/models/flag.py +++ /dev/null @@ -1,268 +0,0 @@ -import sys -sys.path.append("..") -import torch -import torch.nn as nn -from torch.nn import Module, Linear, Embedding -from torch.nn import functional as F -from torch_scatter import scatter_add, scatter_mean -from torch_geometric.data import Data, Batch -from copy import deepcopy - -from .encoders import get_encoder, GNN_graphpred, MLP -from .common import * -from utils import dihedral_utils, chemutils - - -class FLAG(Module): - - def __init__(self, config, protein_atom_feature_dim, ligand_atom_feature_dim, vocab): - super().__init__() - self.config = config - self.vocab = vocab - self.protein_atom_emb = Linear(protein_atom_feature_dim, config.hidden_channels) - self.ligand_atom_emb = Linear(ligand_atom_feature_dim, config.hidden_channels) - self.embedding = nn.Embedding(vocab.size() + 1, config.hidden_channels) - self.W = nn.Linear(2 * config.hidden_channels, config.hidden_channels) - self.W_o = nn.Linear(config.hidden_channels, self.vocab.size()) - self.encoder = get_encoder(config.encoder) - self.comb_head = GNN_graphpred(num_layer=3, emb_dim=config.hidden_channels, num_tasks=1, JK='last', - drop_ratio=0.5, graph_pooling='mean', gnn_type='gin') - if config.random_alpha: - self.alpha_mlp = MLP(in_dim=config.hidden_channels * 4, out_dim=1, num_layers=2) - else: - self.alpha_mlp = MLP(in_dim=config.hidden_channels * 3, out_dim=1, num_layers=2) - self.focal_mlp_ligand = MLP(in_dim=config.hidden_channels, out_dim=1, num_layers=1) - self.focal_mlp_protein = MLP(in_dim=config.hidden_channels, out_dim=1, num_layers=1) - 
self.dist_mlp = MLP(in_dim=protein_atom_feature_dim + ligand_atom_feature_dim, out_dim=1, num_layers=2) - if config.refinement: - self.refine_protein = MLP(in_dim=config.hidden_channels * 2 + config.encoder.edge_channels, out_dim=1, num_layers=2) - self.refine_ligand = MLP(in_dim=config.hidden_channels * 2 + config.encoder.edge_channels, out_dim=1, num_layers=2) - - self.smooth_cross_entropy = SmoothCrossEntropyLoss(reduction='mean', smoothing=0.1) - self.pred_loss = nn.CrossEntropyLoss() - self.comb_loss = nn.BCEWithLogitsLoss() - self.three_hop_loss = torch.nn.MSELoss() - self.focal_loss = nn.BCEWithLogitsLoss() - self.dist_loss = torch.nn.MSELoss(reduction='mean') - - def forward(self, protein_pos, protein_atom_feature, ligand_pos, ligand_atom_feature, batch_protein, batch_ligand): - h_protein = self.protein_atom_emb(protein_atom_feature) - h_ligand = self.ligand_atom_emb(ligand_atom_feature) - - h_ctx, pos_ctx, batch_ctx, protein_mask = compose_context_stable(h_protein=h_protein, h_ligand=h_ligand, - pos_protein=protein_pos, pos_ligand=ligand_pos, - batch_protein=batch_protein, - batch_ligand=batch_ligand) - h_ctx = self.encoder(node_attr=h_ctx, pos=pos_ctx, batch=batch_ctx) # (N_p+N_l, H) - focal_pred = torch.cat([self.focal_mlp_protein(h_ctx[protein_mask]), self.focal_mlp_ligand(h_ctx[~protein_mask])], dim=0) - - return focal_pred, protein_mask, h_ctx - - def forward_motif(self, h_ctx_focal, current_wid, current_atoms_batch, n_samples=1): - node_hiddens = scatter_add(h_ctx_focal, dim=0, index=current_atoms_batch) - motif_hiddens = self.embedding(current_wid) - pred_vecs = torch.cat([node_hiddens, motif_hiddens], dim=1) - pred_vecs = nn.ReLU()(self.W(pred_vecs)) - pred_scores = self.W_o(pred_vecs) - pred_scores = F.softmax(pred_scores, dim=-1) - _, preds = torch.max(pred_scores, dim=1) - # random select n_samples in topk - k = 5*n_samples - select_pool = torch.topk(pred_scores, k, dim=1)[1] - index = torch.randint(k, (select_pool.shape[0], n_samples)) - preds = torch.cat([select_pool[i][index[i]] for i in range(len(index))]) - - idx_parent = torch.repeat_interleave(torch.arange(pred_scores.shape[0]), n_samples, dim=0).to(pred_scores.device) - prob = pred_scores[idx_parent, preds] - return preds, prob - - def forward_attach(self, mol_list, next_motif_smiles, device): - cand_mols, cand_batch, new_atoms, one_atom_attach, intersection, attach_fail = chemutils.assemble(mol_list, next_motif_smiles) - graph_data = Batch.from_data_list([chemutils.mol_to_graph_data_obj_simple(mol) for mol in cand_mols]).to(device) - comb_pred = self.comb_head(graph_data.x, graph_data.edge_index, graph_data.edge_attr, graph_data.batch).reshape(-1) - slice_idx = torch.cat([torch.tensor([0]), torch.cumsum(cand_batch.bincount(), dim=0)], dim=0) - select = [(torch.argmax(comb_pred[slice_idx[i]:slice_idx[i + 1]]) + slice_idx[i]).item() for i in - range(len(slice_idx) - 1)] - ''' - select = [] - for k in range(len(slice_idx) - 1): - id = torch.multinomial(torch.exp(comb_pred[slice_idx[k]:slice_idx[k + 1]]).reshape(-1).float(), 1) - select.append((id+slice_idx[k]).item())''' - - select_mols = [cand_mols[i] for i in select] - new_atoms = [new_atoms[i] for i in select] - one_atom_attach = [one_atom_attach[i] for i in select] - intersection = [intersection[i] for i in select] - return select_mols, new_atoms, one_atom_attach, intersection, attach_fail - - def forward_alpha(self, protein_pos, protein_atom_feature, ligand_pos, ligand_atom_feature, batch_protein, - batch_ligand, xy_index, rotatable): - # encode again - 
h_protein = self.protein_atom_emb(protein_atom_feature) - h_ligand = self.ligand_atom_emb(ligand_atom_feature) - - h_ctx, pos_ctx, batch_ctx, protein_mask = compose_context_stable(h_protein=h_protein, h_ligand=h_ligand, - pos_protein=protein_pos, pos_ligand=ligand_pos, - batch_protein=batch_protein, - batch_ligand=batch_ligand) - h_ctx = self.encoder(node_attr=h_ctx, pos=pos_ctx, batch=batch_ctx) # (N_p+N_l, H) - h_ctx_ligand = h_ctx[~protein_mask] - hx, hy = h_ctx_ligand[xy_index[:, 0]], h_ctx_ligand[xy_index[:, 1]] - h_mol = scatter_add(h_ctx_ligand, dim=0, index=batch_ligand) - h_mol = h_mol[rotatable] - if self.config.random_alpha: - rand_dist = torch.distributions.normal.Normal(loc=0, scale=1) - rand_alpha = rand_dist.sample(hx.shape).to(hx.device) - alpha = self.alpha_mlp(torch.cat([hx, hy, h_mol, rand_alpha], dim=-1)) - else: - alpha = self.alpha_mlp(torch.cat([hx, hy, h_mol], dim=-1)) - return alpha - - def get_loss(self, protein_pos, protein_atom_feature, ligand_pos, ligand_atom_feature, ligand_pos_torsion, - ligand_atom_feature_torsion, batch_protein, batch_ligand, batch_ligand_torsion, batch): - self.device = protein_pos.device - h_protein = self.protein_atom_emb(protein_atom_feature) - h_ligand = self.ligand_atom_emb(ligand_atom_feature) - - loss_list = [0, 0, 0, 0, 0, 0] - - # Encode for motif prediction - h_ctx, pos_ctx, batch_ctx, mask_protein = compose_context_stable(h_protein=h_protein, h_ligand=h_ligand, - pos_protein=protein_pos, pos_ligand=ligand_pos, - batch_protein=batch_protein, - batch_ligand=batch_ligand) - h_ctx = self.encoder(node_attr=h_ctx, pos=pos_ctx, batch=batch_ctx) # (N_p+N_l, H) - h_ctx_ligand = h_ctx[~mask_protein] - h_ctx_protein = h_ctx[mask_protein] - h_ctx_focal = h_ctx[batch['current_atoms']] - - # Encode for torsion prediction - if len(batch['y_pos']) > 0: - h_ligand_torsion = self.ligand_atom_emb(ligand_atom_feature_torsion) - h_ctx_torison, pos_ctx_torison, batch_ctx_torsion, mask_protein = compose_context_stable(h_protein=h_protein, - h_ligand=h_ligand_torsion, - pos_protein=protein_pos, - pos_ligand=ligand_pos_torsion, - batch_protein=batch_protein, - batch_ligand=batch_ligand_torsion) - h_ctx_torsion = self.encoder(node_attr=h_ctx_torison, pos=pos_ctx_torison, batch=batch_ctx_torsion) # (N_p+N_l, H) - h_ctx_ligand_torsion = h_ctx_torsion[~mask_protein] - - # next motif prediction - - node_hiddens = scatter_add(h_ctx_focal, dim=0, index=batch['current_atoms_batch']) - motif_hiddens = self.embedding(batch['current_wid']) - pred_vecs = torch.cat([node_hiddens, motif_hiddens], dim=1) - pred_vecs = nn.ReLU()(self.W(pred_vecs)) - pred_scores = self.W_o(pred_vecs) - pred_loss = self.pred_loss(pred_scores, batch['next_wid']) - loss_list[0] = pred_loss.item() - - # attachment prediction - if len(batch['cand_labels']) > 0: - cand_mols = batch['cand_mols'] - comb_pred = self.comb_head(cand_mols.x, cand_mols.edge_index, cand_mols.edge_attr, cand_mols.batch) - comb_loss = self.comb_loss(comb_pred, batch['cand_labels'].view(comb_pred.shape).float()) - loss_list[1] = comb_loss.item() - else: - comb_loss = 0 - - # focal prediction - focal_ligand_pred, focal_protein_pred = self.focal_mlp_ligand(h_ctx_ligand), self.focal_mlp_protein(h_ctx_protein) - focal_loss = self.focal_loss(focal_ligand_pred.reshape(-1), batch['ligand_frontier'].float()) +\ - self.focal_loss(focal_protein_pred.reshape(-1), batch['protein_contact'].float()) - loss_list[2] = focal_loss.item() - - # distance matrix prediction - if len(batch['true_dm']) > 0: - input = 
torch.cat([protein_atom_feature[batch['dm_protein_idx']], ligand_atom_feature[batch['dm_ligand_idx']]], dim=-1) - pred_dist = self.dist_mlp(input) - dm_target = batch['true_dm'].unsqueeze(-1) - dm_loss = self.dist_loss(pred_dist, dm_target) - loss_list[3] = dm_loss.item() - else: - dm_loss = 0 - - # structure refinement loss - if self.config.refinement and len(batch['true_dm']) > 0: - true_distance_alpha = torch.norm(batch['ligand_context_pos'][batch['sr_ligand_idx']] - batch['protein_pos'][batch['sr_protein_idx']], dim=1) - true_distance_intra = torch.norm(batch['ligand_context_pos'][batch['sr_ligand_idx0']] - batch['ligand_context_pos'][batch['sr_ligand_idx1']], dim=1) - input_distance_alpha = ligand_pos[batch['sr_ligand_idx']] - protein_pos[batch['sr_protein_idx']] - input_distance_intra = ligand_pos[batch['sr_ligand_idx0']] - ligand_pos[batch['sr_ligand_idx1']] - distance_emb1 = self.encoder.distance_expansion(torch.norm(input_distance_alpha, dim=1)) - distance_emb2 = self.encoder.distance_expansion(torch.norm(input_distance_intra, dim=1)) - input1 = torch.cat([h_ctx_ligand[batch['sr_ligand_idx']], h_ctx_protein[batch['sr_protein_idx']], distance_emb1], dim=-1)[true_distance_alpha<=10.0] - input2 = torch.cat([h_ctx_ligand[batch['sr_ligand_idx0']], h_ctx_ligand[batch['sr_ligand_idx1']], distance_emb2], dim=-1)[true_distance_intra<=10.0] - #distance cut_off - norm_dir1 = F.normalize(input_distance_alpha, p=2, dim=1)[true_distance_alpha<=10.0] - norm_dir2 = F.normalize(input_distance_intra, p=2, dim=1)[true_distance_intra<=10.0] - force1 = scatter_mean(self.refine_protein(input1)*norm_dir1, dim=0, index=batch['sr_ligand_idx'][true_distance_alpha<=10.0], dim_size=ligand_pos.size(0)) - force2 = scatter_mean(self.refine_ligand(input2)*norm_dir2, dim=0, index=batch['sr_ligand_idx0'][true_distance_intra<=10.0], dim_size=ligand_pos.size(0)) - new_ligand_pos = deepcopy(ligand_pos) - new_ligand_pos += force1 - new_ligand_pos += force2 - refine_dist1 = torch.norm(new_ligand_pos[batch['sr_ligand_idx']] - protein_pos[batch['sr_protein_idx']], dim=1) - refine_dist2 = torch.norm(new_ligand_pos[batch['sr_ligand_idx0']] - new_ligand_pos[batch['sr_ligand_idx1']], dim=1) - sr_loss = (self.dist_loss(refine_dist1, true_distance_alpha) + self.dist_loss(refine_dist2, true_distance_intra)) - loss_list[5] = sr_loss.item() - else: - sr_loss = 0 - - # torsion prediction - if len(batch['y_pos']) > 0: - Hx = dihedral_utils.rotation_matrix_v2(batch['y_pos']) - xn_pos = torch.matmul(Hx, batch['xn_pos'].permute(0, 2, 1)).permute(0, 2, 1) - yn_pos = torch.matmul(Hx, batch['yn_pos'].permute(0, 2, 1)).permute(0, 2, 1) - y_pos = torch.matmul(Hx, batch['y_pos'].unsqueeze(1).permute(0, 2, 1)).squeeze(-1) - - hx, hy = h_ctx_ligand_torsion[batch['ligand_torsion_xy_index'][:, 0]], h_ctx_ligand_torsion[batch['ligand_torsion_xy_index'][:, 1]] - h_mol = scatter_add(h_ctx_ligand_torsion, dim=0, index=batch['ligand_element_torsion_batch']) - if self.config.random_alpha: - rand_dist = torch.distributions.normal.Normal(loc=0, scale=1) - rand_alpha = rand_dist.sample(hx.shape).to(self.device) - alpha = self.alpha_mlp(torch.cat([hx, hy, h_mol, rand_alpha], dim=-1)) - else: - alpha = self.alpha_mlp(torch.cat([hx, hy, h_mol], dim=-1)) - # rotate xn - R_alpha = self.build_alpha_rotation(torch.sin(alpha).squeeze(-1), torch.cos(alpha).squeeze(-1)) - xn_pos = torch.matmul(R_alpha, xn_pos.permute(0, 2, 1)).permute(0, 2, 1) - - p_idx, q_idx = torch.cartesian_prod(torch.arange(3), torch.arange(3)).chunk(2, dim=-1) - p_idx, q_idx = 
p_idx.squeeze(-1), q_idx.squeeze(-1) - pred_sin, pred_cos = dihedral_utils.batch_dihedrals(xn_pos[:, p_idx], - torch.zeros_like(y_pos).unsqueeze(1).repeat(1, 9, 1), - y_pos.unsqueeze(1).repeat(1, 9, 1), - yn_pos[:, q_idx]) - dihedral_loss = torch.mean(dihedral_utils.von_Mises_loss(batch['true_cos'], pred_cos.reshape(-1), batch['true_sin'], pred_cos.reshape(-1))[batch['dihedral_mask']]) - torsion_loss = -dihedral_loss - loss_list[4] = torsion_loss.item() - else: - torsion_loss = 0 - - # dm: distance matrix - loss = pred_loss + comb_loss + focal_loss + dm_loss + torsion_loss + sr_loss - - return loss, loss_list - - def build_alpha_rotation(self, alpha, alpha_cos=None): - """ - Builds the alpha rotation matrix - - :param alpha: predicted values of torsion parameter alpha (n_dihedral_pairs) - :return: alpha rotation matrix (n_dihedral_pairs, 3, 3) - """ - H_alpha = torch.FloatTensor([[[1, 0, 0], [0, 0, 0], [0, 0, 0]]]).repeat(alpha.shape[0], 1, 1).to(self.device) - - if torch.is_tensor(alpha_cos): - H_alpha[:, 1, 1] = alpha_cos - H_alpha[:, 1, 2] = -alpha - H_alpha[:, 2, 1] = alpha - H_alpha[:, 2, 2] = alpha_cos - else: - H_alpha[:, 1, 1] = torch.cos(alpha) - H_alpha[:, 1, 2] = -torch.sin(alpha) - H_alpha[:, 2, 1] = torch.sin(alpha) - H_alpha[:, 2, 2] = torch.cos(alpha) - - return H_alpha - diff --git a/spaces/a-v-bely/spanish-task-generator/utilities_language_w2v/esp_main_workflow_w2v.py b/spaces/a-v-bely/spanish-task-generator/utilities_language_w2v/esp_main_workflow_w2v.py deleted file mode 100644 index 4902a720652270b0f6ca6751a1534b43bb5af241..0000000000000000000000000000000000000000 --- a/spaces/a-v-bely/spanish-task-generator/utilities_language_w2v/esp_main_workflow_w2v.py +++ /dev/null @@ -1,259 +0,0 @@ -import datetime -from io import StringIO -from random import sample -from collections import defaultdict -from streamlit import progress as st_progress -from streamlit.elements import WIDGETS as ST_WIDGETS -from utilities_language_general.esp_constants import st -from utilities_language_w2v.esp_sentence_w2v import TASK -from utilities_language_w2v.esp_sentence_w2v import SENTENCE -from utilities_language_general.esp_constants import load_w2v -from utilities_language_general.esp_utils import prepare_tasks -from streamlit.runtime.uploaded_file_manager import UploadedFile -import utilities_language_general.esp_constants as esp_constants -from utilities_language_general.esp_constants import w2v_model_1_path -from utilities_language_general.esp_constants import w2v_model_2_path -from utilities_language_general.esp_utils import prepare_target_words -from utilities_language_general.esp_utils import compute_frequency_dict -from utilities_language_general.esp_constants import BAD_USER_TARGET_WORDS - - -def main_workflow( - file: UploadedFile or None, - text: str, - logs: ST_WIDGETS, - progress: st_progress, - progress_d: st_progress, - level: str, - tw_mode_automatic_mode: str, - target_words: str, - num_distractors: int, - save_name: str, - model_name: str, - global_bad_target_words=BAD_USER_TARGET_WORDS): - """ - This is the main course of the program. - All processes and changes take place here. - Partially works with the interface, displaying the success messages and download buttons. 
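A hypothetical call-site sketch for main_workflow (not taken from the repository), assuming the module path utilities_language_w2v.esp_main_workflow_w2v and a Streamlit page that supplies the status and progress widgets the function updates; the argument values follow the checks visible in the function body, e.g. any value other than 'Самостоятельно' selects automatic target-word mode and 'Модель-1' selects the first word2vec model:

import streamlit as st
from utilities_language_w2v.esp_main_workflow_w2v import main_workflow

logs = st.status("Generating tasks...", state="running")  # provides .update() and .error()
progress = st.progress(0)
progress_d = st.progress(0)

task_data = main_workflow(
    file=None,                               # or the result of st.file_uploader()
    text="Texto de ejemplo para generar huecos.",
    logs=logs,
    progress=progress,
    progress_d=progress_d,
    level="B1",                              # A1..C2 or 'Без уровня'
    tw_mode_automatic_mode="Автоматически",  # not 'Самостоятельно' -> automatic mode
    target_words="",
    num_distractors=4,
    save_name="demo",
    model_name="Модель-1",
)
# Per the docstring above, task_data is a dictionary with the gapped text, tasks and answers.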
- - :param file: user's file to generate tasks in - :param text: user's text input to generate tasks in - :param logs: widget to output logs to - :param progress: progress bar - :param progress_d: distractors progress bar - :param target_words: how target words are chosen: by user or automatically - :param tw_mode_automatic_mode: - :param level: user's specification of CEFR level of text - :param num_distractors: how many distractors does the user want the task to contain - :param save_name: user specifies name to save file in cloud - :param global_bad_target_words:global_bad_target_words - :param model_name - :return: Dictionary with output data: filename, amount_mode, text_with_gaps, tasks_as_list, correct_answers, - student_out, teacher_out, total_out, original_text - """ - - # Clear bad target_words each time - if global_bad_target_words: - global_bad_target_words = [] - - # Define main global variables - GLOBAL_DISTRACTORS = set() - MAX_FREQUENCY = 0 - - # Get input text - if file is not None: - stringio = StringIO(file.getvalue().decode("utf-8")) - current_text = stringio.read() - elif text != '': - current_text = text - else: - esp_constants.st.warning('Вы и текст не вставили, и файл не выбрали 😢') - current_text = '' - esp_constants.st.stop() - - # Process target words - if tw_mode_automatic_mode == 'Самостоятельно': - if target_words == '': - esp_constants.st.warning('Вы не ввели целевые слова') - esp_constants.st.stop() - # Cannot make up paradigm, so only USER_TARGET_WORDS is used - USER_TARGET_WORDS = prepare_target_words(target_words) - tw_mode_automatic_mode = False - else: - USER_TARGET_WORDS = None - tw_mode_automatic_mode = True - - # Text preprocessing - original_text = current_text - current_text = (current_text.replace('.', '. ').replace('. . 
.', '...') - .replace(' ', ' ').replace('…', '...').replace('…', '...') - .replace('—', '-').replace('\u2014', '-').replace('—', '-') - .replace('-\n', '').replace('\n', '%^&*')) - current_text_sentences = [sent.text.strip() for sent in esp_constants.nlp(current_text).sents] - logs.update(label='Получили Ваш текст!', state='running') - progress.progress(10) - - # Compute frequency dict - FREQ_DICT = compute_frequency_dict(current_text) - - # Get maximum frequency (top 5% barrier) - _frequency_barrier_percent = 0.05 - for j, tp in enumerate(FREQ_DICT.items()): - if j < len(FREQ_DICT) * _frequency_barrier_percent: - MAX_FREQUENCY = tp[1] - MAX_FREQUENCY = 3 if MAX_FREQUENCY < 3 else MAX_FREQUENCY - logs.update(label="Посчитали немного статистики!", state='running') - progress.progress(15) - - # Choose necessary language minimum according to user's input - if level == 'A1': - target_minimum = esp_constants.a1_target_set - distractor_minimum = esp_constants.a1_distractor_set - elif level == 'A2': - target_minimum = esp_constants.a2_target_set - distractor_minimum = esp_constants.a2_distractor_set - elif level == 'B1': - target_minimum = esp_constants.b1_target_set - distractor_minimum = esp_constants.b1_distractor_set - elif level == 'B2': - target_minimum = esp_constants.b2_target_set - distractor_minimum = esp_constants.b2_distractor_set - elif level == 'C1': - target_minimum = esp_constants.c1_target_set - distractor_minimum = esp_constants.c1_distractor_set - elif level == 'C2': - target_minimum = esp_constants.c2_target_set - distractor_minimum = esp_constants.c2_distractor_set - elif level == 'Без уровня': - target_minimum = None - distractor_minimum = None - else: - target_minimum = None - distractor_minimum = None - logs.error('Вы не выбрали языковой уровень!') - st.stop() - - # Define which model is used for distractor generation - logs.update(label='Загружаем языковые модели и другие данные', state='running') - if model_name == 'Модель-1': - mask_filler = load_w2v(w2v_model_1_path) - else: - mask_filler = load_w2v(w2v_model_2_path) - - # Start generation process - workflow = [SENTENCE(original=sent.strip(), n_sentence=num, max_num_distractors=num_distractors) - for num, sent in enumerate(current_text_sentences)] - logs.update(label="Запускаем процесс генерации заданий!", state='running') - progress.progress(20) - - for sentence in workflow: - sentence.lemmatize_sentence() - - for sentence in workflow: - sentence.bind_phrases() - logs.update(label="Подготовили предложения для дальнейшей работы!", state='running') - progress.progress(30) - - for j, sentence in enumerate(workflow): - sentence.search_target_words(model=mask_filler, - target_words_automatic_mode=tw_mode_automatic_mode, - target_minimum=target_minimum, - user_target_words=USER_TARGET_WORDS, - frequency_dict=FREQ_DICT) - progress.progress(int(30 + (j * (30 / len(workflow))))) - progress.progress(60) - DUPLICATE_TARGET_WORDS = defaultdict(list) - for sentence in workflow: - for target_word in sentence.target_words: - DUPLICATE_TARGET_WORDS[target_word['lemma']].append(target_word) - RESULT_TW = [] - for tw_lemma, tw_data in DUPLICATE_TARGET_WORDS.items(): - RESULT_TW.append(sample(tw_data, 1)[0]) - for sentence in workflow: - for target_word in sentence.target_words: - if target_word not in RESULT_TW: - global_bad_target_words.append(target_word['original_text']) - sentence.target_words.remove(target_word) - progress.progress(65) - logs.update(label='Выбрали слова-пропуски!', state='running') - - for sentence in workflow: - 
sentence.attach_distractors_to_target_word(model=mask_filler, - global_distractors=GLOBAL_DISTRACTORS, - distractor_minimum=distractor_minimum, - level_name=level, - max_frequency=MAX_FREQUENCY, - logs=logs, progress=progress_d) - progress.progress(70) - logs.update(label='Подобрали неправильные варианты!', state='running') - for sentence in workflow: - sentence.inflect_distractors() - progress.progress(80) - logs.update(label='Просклоняли и проспрягали неправильные варианты!', state='running') - - for sentence in workflow: - sentence.filter_target_words(target_words_automatic_mode=tw_mode_automatic_mode) - - for sentence in workflow: - sentence.sample_distractors(num_distractors=num_distractors) - progress.progress(90) - logs.update(label='Отобрали лучшие задания!', state='running') - - RESULT_TASKS = [] - for sentence in workflow: - for target_word in sentence.target_words: - task = TASK(task_data=target_word) - RESULT_TASKS.append(task) - del workflow - - # Compute number of final tasks - if len(RESULT_TASKS) >= 20: - NUMBER_TASKS = 20 - else: - if len(RESULT_TASKS) >= 15: - NUMBER_TASKS = 15 - else: - if len(RESULT_TASKS) >= 10: - NUMBER_TASKS = 10 - else: - NUMBER_TASKS = len(RESULT_TASKS) - RESULT_TASKS = sample(RESULT_TASKS, NUMBER_TASKS) - RESULT_TASKS = sorted(RESULT_TASKS, key=lambda t: (t.sentence_number, t.position_in_sentence)) - - for task in RESULT_TASKS: - task.compile_task(max_num_distractors=num_distractors) - - TEXT_WITH_GAPS = [] - VARIANTS = [] - tasks_counter = 1 - for i, sentence in enumerate(current_text_sentences): - for task in filter(lambda t: t.sentence_number == i, RESULT_TASKS): - sentence = sentence.replace(task.original_text, f'__________({tasks_counter})') - VARIANTS.append(task.variants) - tasks_counter += 1 - TEXT_WITH_GAPS.append(sentence) - del RESULT_TASKS - - TEXT_WITH_GAPS = ' '.join([sentence for sentence in TEXT_WITH_GAPS]).replace('%^&*', '\n') - PREPARED_TASKS = prepare_tasks(VARIANTS) - STUDENT_OUT = f'{TEXT_WITH_GAPS}\n\n{"=" * 70}\n\n{PREPARED_TASKS["TASKS_STUDENT"]}' - TEACHER_OUT = f'{TEXT_WITH_GAPS}\n\n{"=" * 70}\n\n{PREPARED_TASKS["TASKS_TEACHER"]}\n\n{"=" * 70}\n\n' \ - f'{PREPARED_TASKS["KEYS_ONLY"]}' - TOTAL_OUT = f'{original_text}\n\n{"$" * 70}\n\n{STUDENT_OUT}\n\n{"=" * 70}\n\n{PREPARED_TASKS["TASKS_TEACHER"]}' \ - f'\n\n{"$" * 70}\n\n{PREPARED_TASKS["KEYS_ONLY"]}' - logs.update(label='Сейчас все будет готово!', state='running') - progress.progress(90) - save_name = save_name if save_name != '' else f'{str(datetime.datetime.now())[:-7]}_{original_text[:20]}' - out = { - 'name': save_name, - 'STUDENT_OUT': STUDENT_OUT, - 'TEACHER_OUT': TEACHER_OUT, - 'TEXT_WITH_GAPS': TEXT_WITH_GAPS, - 'TASKS_ONLY': PREPARED_TASKS["RAW_TASKS"], - 'KEYS_ONLY': PREPARED_TASKS["KEYS_ONLY"], - 'KEYS_ONLY_RAW': PREPARED_TASKS["RAW_KEYS_ONLY"], - 'TOTAL_OUT': TOTAL_OUT, - 'ORIGINAL': original_text, - 'BAD_USER_TARGET_WORDS': sorted(set(global_bad_target_words)) - } - return out diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/tin_shift.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/tin_shift.py deleted file mode 100644 index 472c9fcfe45a124e819b7ed5653e585f94a8811e..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/ops/tin_shift.py +++ /dev/null @@ -1,68 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-# Code reference from "Temporal Interlacing Network" -# https://github.com/deepcs233/TIN/blob/master/cuda_shift/rtc_wrap.py -# Hao Shao, Shengju Qian, Yu Liu -# shaoh19@mails.tsinghua.edu.cn, sjqian@cse.cuhk.edu.hk, yuliu@ee.cuhk.edu.hk - -import torch -import torch.nn as nn -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', - ['tin_shift_forward', 'tin_shift_backward']) - - -class TINShiftFunction(Function): - - @staticmethod - def forward(ctx, input, shift): - C = input.size(2) - num_segments = shift.size(1) - if C // num_segments <= 0 or C % num_segments != 0: - raise ValueError('C should be a multiple of num_segments, ' - f'but got C={C} and num_segments={num_segments}.') - - ctx.save_for_backward(shift) - - out = torch.zeros_like(input) - ext_module.tin_shift_forward(input, shift, out) - - return out - - @staticmethod - def backward(ctx, grad_output): - - shift = ctx.saved_tensors[0] - data_grad_input = grad_output.new(*grad_output.size()).zero_() - shift_grad_input = shift.new(*shift.size()).zero_() - ext_module.tin_shift_backward(grad_output, shift, data_grad_input) - - return data_grad_input, shift_grad_input - - -tin_shift = TINShiftFunction.apply - - -class TINShift(nn.Module): - """Temporal Interlace Shift. - - Temporal Interlace shift is a differentiable temporal-wise frame shifting - which is proposed in "Temporal Interlacing Network" - - Please refer to https://arxiv.org/abs/2001.06499 for more details. - Code is modified from https://github.com/mit-han-lab/temporal-shift-module - """ - - def forward(self, input, shift): - """Perform temporal interlace shift. - - Args: - input (Tensor): Feature map with shape [N, num_segments, C, H * W]. - shift (Tensor): Shift tensor with shape [N, num_segments]. - - Returns: - Feature map after temporal interlace shift. - """ - return tin_shift(input, shift) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/dense_heads/transformer_head.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/dense_heads/transformer_head.py deleted file mode 100644 index 820fd069fcca295f6102f0d27366158a8c640249..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/dense_heads/transformer_head.py +++ /dev/null @@ -1,654 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import Conv2d, Linear, build_activation_layer -from mmcv.runner import force_fp32 - -from mmdet.core import (bbox_cxcywh_to_xyxy, bbox_xyxy_to_cxcywh, - build_assigner, build_sampler, multi_apply, - reduce_mean) -from mmdet.models.utils import (FFN, build_positional_encoding, - build_transformer) -from ..builder import HEADS, build_loss -from .anchor_free_head import AnchorFreeHead - - -@HEADS.register_module() -class TransformerHead(AnchorFreeHead): - """Implements the DETR transformer head. - - See `paper: End-to-End Object Detection with Transformers - `_ for details. - - Args: - num_classes (int): Number of categories excluding the background. - in_channels (int): Number of channels in the input feature map. - num_fcs (int, optional): Number of fully-connected layers used in - `FFN`, which is then used for the regression head. Default 2. - transformer (dict, optional): Config for transformer. - positional_encoding (dict, optional): Config for position encoding. - loss_cls (dict, optional): Config of the classification loss. - Default `CrossEntropyLoss`. 
- loss_bbox (dict, optional): Config of the regression loss. - Default `L1Loss`. - loss_iou (dict, optional): Config of the regression iou loss. - Default `GIoULoss`. - tran_cfg (dict, optional): Training config of transformer head. - test_cfg (dict, optional): Testing config of transformer head. - - Example: - >>> import torch - >>> self = TransformerHead(80, 2048) - >>> x = torch.rand(1, 2048, 32, 32) - >>> mask = torch.ones(1, 32, 32).to(x.dtype) - >>> mask[:, :16, :15] = 0 - >>> all_cls_scores, all_bbox_preds = self(x, mask) - """ - - def __init__(self, - num_classes, - in_channels, - num_fcs=2, - transformer=dict( - type='Transformer', - embed_dims=256, - num_heads=8, - num_encoder_layers=6, - num_decoder_layers=6, - feedforward_channels=2048, - dropout=0.1, - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - num_fcs=2, - pre_norm=False, - return_intermediate_dec=True), - positional_encoding=dict( - type='SinePositionalEncoding', - num_feats=128, - normalize=True), - loss_cls=dict( - type='CrossEntropyLoss', - bg_cls_weight=0.1, - use_sigmoid=False, - loss_weight=1.0, - class_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=5.0), - loss_iou=dict(type='GIoULoss', loss_weight=2.0), - train_cfg=dict( - assigner=dict( - type='HungarianAssigner', - cls_cost=dict(type='ClassificationCost', weight=1.), - reg_cost=dict(type='BBoxL1Cost', weight=5.0), - iou_cost=dict( - type='IoUCost', iou_mode='giou', weight=2.0))), - test_cfg=dict(max_per_img=100), - **kwargs): - # NOTE here use `AnchorFreeHead` instead of `TransformerHead`, - # since it brings inconvenience when the initialization of - # `AnchorFreeHead` is called. - super(AnchorFreeHead, self).__init__() - use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - assert not use_sigmoid_cls, 'setting use_sigmoid_cls as True is ' \ - 'not supported in DETR, since background is needed for the ' \ - 'matching process.' - assert 'embed_dims' in transformer \ - and 'num_feats' in positional_encoding - num_feats = positional_encoding['num_feats'] - embed_dims = transformer['embed_dims'] - assert num_feats * 2 == embed_dims, 'embed_dims should' \ - f' be exactly 2 times of num_feats. Found {embed_dims}' \ - f' and {num_feats}.' - assert test_cfg is not None and 'max_per_img' in test_cfg - - class_weight = loss_cls.get('class_weight', None) - if class_weight is not None: - assert isinstance(class_weight, float), 'Expected ' \ - 'class_weight to have type float. Found ' \ - f'{type(class_weight)}.' - # NOTE following the official DETR rep0, bg_cls_weight means - # relative classification weight of the no-object class. - bg_cls_weight = loss_cls.get('bg_cls_weight', class_weight) - assert isinstance(bg_cls_weight, float), 'Expected ' \ - 'bg_cls_weight to have type float. Found ' \ - f'{type(bg_cls_weight)}.' - class_weight = torch.ones(num_classes + 1) * class_weight - # set background class as the last indice - class_weight[num_classes] = bg_cls_weight - loss_cls.update({'class_weight': class_weight}) - if 'bg_cls_weight' in loss_cls: - loss_cls.pop('bg_cls_weight') - self.bg_cls_weight = bg_cls_weight - - if train_cfg: - assert 'assigner' in train_cfg, 'assigner should be provided '\ - 'when train_cfg is set.' - assigner = train_cfg['assigner'] - assert loss_cls['loss_weight'] == assigner['cls_cost']['weight'], \ - 'The classification weight for loss and matcher should be' \ - 'exactly the same.' 
- assert loss_bbox['loss_weight'] == assigner['reg_cost'][ - 'weight'], 'The regression L1 weight for loss and matcher ' \ - 'should be exactly the same.' - assert loss_iou['loss_weight'] == assigner['iou_cost']['weight'], \ - 'The regression iou weight for loss and matcher should be' \ - 'exactly the same.' - self.assigner = build_assigner(assigner) - # DETR sampling=False, so use PseudoSampler - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.num_classes = num_classes - self.cls_out_channels = num_classes + 1 - self.in_channels = in_channels - self.num_fcs = num_fcs - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.use_sigmoid_cls = use_sigmoid_cls - self.embed_dims = embed_dims - self.num_query = test_cfg['max_per_img'] - self.fp16_enabled = False - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - self.loss_iou = build_loss(loss_iou) - self.act_cfg = transformer.get('act_cfg', - dict(type='ReLU', inplace=True)) - self.activate = build_activation_layer(self.act_cfg) - self.positional_encoding = build_positional_encoding( - positional_encoding) - self.transformer = build_transformer(transformer) - self._init_layers() - - def _init_layers(self): - """Initialize layers of the transformer head.""" - self.input_proj = Conv2d( - self.in_channels, self.embed_dims, kernel_size=1) - self.fc_cls = Linear(self.embed_dims, self.cls_out_channels) - self.reg_ffn = FFN( - self.embed_dims, - self.embed_dims, - self.num_fcs, - self.act_cfg, - dropout=0.0, - add_residual=False) - self.fc_reg = Linear(self.embed_dims, 4) - self.query_embedding = nn.Embedding(self.num_query, self.embed_dims) - - def init_weights(self, distribution='uniform'): - """Initialize weights of the transformer head.""" - # The initialization for transformer is important - self.transformer.init_weights() - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - """load checkpoints.""" - # NOTE here use `AnchorFreeHead` instead of `TransformerHead`, - # since `AnchorFreeHead._load_from_state_dict` should not be - # called here. Invoking the default `Module._load_from_state_dict` - # is enough. - super(AnchorFreeHead, - self)._load_from_state_dict(state_dict, prefix, local_metadata, - strict, missing_keys, - unexpected_keys, error_msgs) - - def forward(self, feats, img_metas): - """Forward function. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - img_metas (list[dict]): List of image information. - - Returns: - tuple[list[Tensor], list[Tensor]]: Outputs for all scale levels. - - - all_cls_scores_list (list[Tensor]): Classification scores \ - for each scale level. Each is a 4D-tensor with shape \ - [nb_dec, bs, num_query, cls_out_channels]. Note \ - `cls_out_channels` should includes background. - - all_bbox_preds_list (list[Tensor]): Sigmoid regression \ - outputs for each scale level. Each is a 4D-tensor with \ - normalized coordinate format (cx, cy, w, h) and shape \ - [nb_dec, bs, num_query, 4]. - """ - num_levels = len(feats) - img_metas_list = [img_metas for _ in range(num_levels)] - return multi_apply(self.forward_single, feats, img_metas_list) - - def forward_single(self, x, img_metas): - """"Forward function for a single feature level. - - Args: - x (Tensor): Input feature from backbone's single stage, shape - [bs, c, h, w]. - img_metas (list[dict]): List of image information. 
- - Returns: - all_cls_scores (Tensor): Outputs from the classification head, - shape [nb_dec, bs, num_query, cls_out_channels]. Note - cls_out_channels should includes background. - all_bbox_preds (Tensor): Sigmoid outputs from the regression - head with normalized coordinate format (cx, cy, w, h). - Shape [nb_dec, bs, num_query, 4]. - """ - # construct binary masks which used for the transformer. - # NOTE following the official DETR repo, non-zero values representing - # ignored positions, while zero values means valid positions. - batch_size = x.size(0) - input_img_h, input_img_w = img_metas[0]['batch_input_shape'] - masks = x.new_ones((batch_size, input_img_h, input_img_w)) - for img_id in range(batch_size): - img_h, img_w, _ = img_metas[img_id]['img_shape'] - masks[img_id, :img_h, :img_w] = 0 - - x = self.input_proj(x) - # interpolate masks to have the same spatial shape with x - masks = F.interpolate( - masks.unsqueeze(1), size=x.shape[-2:]).to(torch.bool).squeeze(1) - # position encoding - pos_embed = self.positional_encoding(masks) # [bs, embed_dim, h, w] - # outs_dec: [nb_dec, bs, num_query, embed_dim] - outs_dec, _ = self.transformer(x, masks, self.query_embedding.weight, - pos_embed) - - all_cls_scores = self.fc_cls(outs_dec) - all_bbox_preds = self.fc_reg(self.activate( - self.reg_ffn(outs_dec))).sigmoid() - return all_cls_scores, all_bbox_preds - - @force_fp32(apply_to=('all_cls_scores_list', 'all_bbox_preds_list')) - def loss(self, - all_cls_scores_list, - all_bbox_preds_list, - gt_bboxes_list, - gt_labels_list, - img_metas, - gt_bboxes_ignore=None): - """"Loss function. - - Only outputs from the last feature level are used for computing - losses by default. - - Args: - all_cls_scores_list (list[Tensor]): Classification outputs - for each feature level. Each is a 4D-tensor with shape - [nb_dec, bs, num_query, cls_out_channels]. - all_bbox_preds_list (list[Tensor]): Sigmoid regression - outputs for each feature level. Each is a 4D-tensor with - normalized coordinate format (cx, cy, w, h) and shape - [nb_dec, bs, num_query, 4]. - gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image - with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels_list (list[Tensor]): Ground truth class indices for each - image with shape (num_gts, ). - img_metas (list[dict]): List of image meta information. - gt_bboxes_ignore (list[Tensor], optional): Bounding boxes - which can be ignored for each image. Default None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - # NOTE defaultly only the outputs from the last feature scale is used. - all_cls_scores = all_cls_scores_list[-1] - all_bbox_preds = all_bbox_preds_list[-1] - assert gt_bboxes_ignore is None, \ - 'Only supports for gt_bboxes_ignore setting to None.' 
- - num_dec_layers = len(all_cls_scores) - all_gt_bboxes_list = [gt_bboxes_list for _ in range(num_dec_layers)] - all_gt_labels_list = [gt_labels_list for _ in range(num_dec_layers)] - all_gt_bboxes_ignore_list = [ - gt_bboxes_ignore for _ in range(num_dec_layers) - ] - img_metas_list = [img_metas for _ in range(num_dec_layers)] - - losses_cls, losses_bbox, losses_iou = multi_apply( - self.loss_single, all_cls_scores, all_bbox_preds, - all_gt_bboxes_list, all_gt_labels_list, img_metas_list, - all_gt_bboxes_ignore_list) - - loss_dict = dict() - # loss from the last decoder layer - loss_dict['loss_cls'] = losses_cls[-1] - loss_dict['loss_bbox'] = losses_bbox[-1] - loss_dict['loss_iou'] = losses_iou[-1] - # loss from other decoder layers - num_dec_layer = 0 - for loss_cls_i, loss_bbox_i, loss_iou_i in zip(losses_cls[:-1], - losses_bbox[:-1], - losses_iou[:-1]): - loss_dict[f'd{num_dec_layer}.loss_cls'] = loss_cls_i - loss_dict[f'd{num_dec_layer}.loss_bbox'] = loss_bbox_i - loss_dict[f'd{num_dec_layer}.loss_iou'] = loss_iou_i - num_dec_layer += 1 - return loss_dict - - def loss_single(self, - cls_scores, - bbox_preds, - gt_bboxes_list, - gt_labels_list, - img_metas, - gt_bboxes_ignore_list=None): - """"Loss function for outputs from a single decoder layer of a single - feature level. - - Args: - cls_scores (Tensor): Box score logits from a single decoder layer - for all images. Shape [bs, num_query, cls_out_channels]. - bbox_preds (Tensor): Sigmoid outputs from a single decoder layer - for all images, with normalized coordinate (cx, cy, w, h) and - shape [bs, num_query, 4]. - gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image - with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels_list (list[Tensor]): Ground truth class indices for each - image with shape (num_gts, ). - img_metas (list[dict]): List of image meta information. - gt_bboxes_ignore_list (list[Tensor], optional): Bounding - boxes which can be ignored for each image. Default None. - - Returns: - dict[str, Tensor]: A dictionary of loss components for outputs from - a single decoder layer. 
- """ - num_imgs = cls_scores.size(0) - cls_scores_list = [cls_scores[i] for i in range(num_imgs)] - bbox_preds_list = [bbox_preds[i] for i in range(num_imgs)] - cls_reg_targets = self.get_targets(cls_scores_list, bbox_preds_list, - gt_bboxes_list, gt_labels_list, - img_metas, gt_bboxes_ignore_list) - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - labels = torch.cat(labels_list, 0) - label_weights = torch.cat(label_weights_list, 0) - bbox_targets = torch.cat(bbox_targets_list, 0) - bbox_weights = torch.cat(bbox_weights_list, 0) - - # classification loss - cls_scores = cls_scores.reshape(-1, self.cls_out_channels) - # construct weighted avg_factor to match with the official DETR repo - cls_avg_factor = num_total_pos * 1.0 + \ - num_total_neg * self.bg_cls_weight - loss_cls = self.loss_cls( - cls_scores, labels, label_weights, avg_factor=cls_avg_factor) - - # Compute the average number of gt boxes accross all gpus, for - # normalization purposes - num_total_pos = loss_cls.new_tensor([num_total_pos]) - num_total_pos = torch.clamp(reduce_mean(num_total_pos), min=1).item() - - # construct factors used for rescale bboxes - factors = [] - for img_meta, bbox_pred in zip(img_metas, bbox_preds): - img_h, img_w, _ = img_meta['img_shape'] - factor = bbox_pred.new_tensor([img_w, img_h, img_w, - img_h]).unsqueeze(0).repeat( - bbox_pred.size(0), 1) - factors.append(factor) - factors = torch.cat(factors, 0) - - # DETR regress the relative position of boxes (cxcywh) in the image, - # thus the learning target is normalized by the image size. So here - # we need to re-scale them for calculating IoU loss - bbox_preds = bbox_preds.reshape(-1, 4) - bboxes = bbox_cxcywh_to_xyxy(bbox_preds) * factors - bboxes_gt = bbox_cxcywh_to_xyxy(bbox_targets) * factors - - # regression IoU loss, defaultly GIoU loss - loss_iou = self.loss_iou( - bboxes, bboxes_gt, bbox_weights, avg_factor=num_total_pos) - - # regression L1 loss - loss_bbox = self.loss_bbox( - bbox_preds, bbox_targets, bbox_weights, avg_factor=num_total_pos) - return loss_cls, loss_bbox, loss_iou - - def get_targets(self, - cls_scores_list, - bbox_preds_list, - gt_bboxes_list, - gt_labels_list, - img_metas, - gt_bboxes_ignore_list=None): - """"Compute regression and classification targets for a batch image. - - Outputs from a single decoder layer of a single feature level are used. - - Args: - cls_scores_list (list[Tensor]): Box score logits from a single - decoder layer for each image with shape [num_query, - cls_out_channels]. - bbox_preds_list (list[Tensor]): Sigmoid outputs from a single - decoder layer for each image, with normalized coordinate - (cx, cy, w, h) and shape [num_query, 4]. - gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image - with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels_list (list[Tensor]): Ground truth class indices for each - image with shape (num_gts, ). - img_metas (list[dict]): List of image meta information. - gt_bboxes_ignore_list (list[Tensor], optional): Bounding - boxes which can be ignored for each image. Default None. - - Returns: - tuple: a tuple containing the following targets. - - - labels_list (list[Tensor]): Labels for all images. - - label_weights_list (list[Tensor]): Label weights for all \ - images. - - bbox_targets_list (list[Tensor]): BBox targets for all \ - images. - - bbox_weights_list (list[Tensor]): BBox weights for all \ - images. 
- - num_total_pos (int): Number of positive samples in all \ - images. - - num_total_neg (int): Number of negative samples in all \ - images. - """ - assert gt_bboxes_ignore_list is None, \ - 'Only supports for gt_bboxes_ignore setting to None.' - num_imgs = len(cls_scores_list) - gt_bboxes_ignore_list = [ - gt_bboxes_ignore_list for _ in range(num_imgs) - ] - - (labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, pos_inds_list, neg_inds_list) = multi_apply( - self._get_target_single, cls_scores_list, bbox_preds_list, - gt_bboxes_list, gt_labels_list, img_metas, gt_bboxes_ignore_list) - num_total_pos = sum((inds.numel() for inds in pos_inds_list)) - num_total_neg = sum((inds.numel() for inds in neg_inds_list)) - return (labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) - - def _get_target_single(self, - cls_score, - bbox_pred, - gt_bboxes, - gt_labels, - img_meta, - gt_bboxes_ignore=None): - """"Compute regression and classification targets for one image. - - Outputs from a single decoder layer of a single feature level are used. - - Args: - cls_score (Tensor): Box score logits from a single decoder layer - for one image. Shape [num_query, cls_out_channels]. - bbox_pred (Tensor): Sigmoid outputs from a single decoder layer - for one image, with normalized coordinate (cx, cy, w, h) and - shape [num_query, 4]. - gt_bboxes (Tensor): Ground truth bboxes for one image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (Tensor): Ground truth class indices for one image - with shape (num_gts, ). - img_meta (dict): Meta information for one image. - gt_bboxes_ignore (Tensor, optional): Bounding boxes - which can be ignored. Default None. - - Returns: - tuple[Tensor]: a tuple containing the following for one image. - - - labels (Tensor): Labels of each image. - - label_weights (Tensor]): Label weights of each image. - - bbox_targets (Tensor): BBox targets of each image. - - bbox_weights (Tensor): BBox weights of each image. - - pos_inds (Tensor): Sampled positive indices for each image. - - neg_inds (Tensor): Sampled negative indices for each image. - """ - - num_bboxes = bbox_pred.size(0) - # assigner and sampler - assign_result = self.assigner.assign(bbox_pred, cls_score, gt_bboxes, - gt_labels, img_meta, - gt_bboxes_ignore) - sampling_result = self.sampler.sample(assign_result, bbox_pred, - gt_bboxes) - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - - # label targets - labels = gt_bboxes.new_full((num_bboxes, ), - self.num_classes, - dtype=torch.long) - labels[pos_inds] = gt_labels[sampling_result.pos_assigned_gt_inds] - label_weights = gt_bboxes.new_ones(num_bboxes) - - # bbox targets - bbox_targets = torch.zeros_like(bbox_pred) - bbox_weights = torch.zeros_like(bbox_pred) - bbox_weights[pos_inds] = 1.0 - img_h, img_w, _ = img_meta['img_shape'] - - # DETR regress the relative position of boxes (cxcywh) in the image. - # Thus the learning target should be normalized by the image size, also - # the box format should be converted from defaultly x1y1x2y2 to cxcywh. 
- factor = bbox_pred.new_tensor([img_w, img_h, img_w, - img_h]).unsqueeze(0) - pos_gt_bboxes_normalized = sampling_result.pos_gt_bboxes / factor - pos_gt_bboxes_targets = bbox_xyxy_to_cxcywh(pos_gt_bboxes_normalized) - bbox_targets[pos_inds] = pos_gt_bboxes_targets - return (labels, label_weights, bbox_targets, bbox_weights, pos_inds, - neg_inds) - - # over-write because img_metas are needed as inputs for bbox_head. - def forward_train(self, - x, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=None, - proposal_cfg=None, - **kwargs): - """Forward function for training mode. - - Args: - x (list[Tensor]): Features from backbone. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - proposal_cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert proposal_cfg is None, '"proposal_cfg" must be None' - outs = self(x, img_metas) - if gt_labels is None: - loss_inputs = outs + (gt_bboxes, img_metas) - else: - loss_inputs = outs + (gt_bboxes, gt_labels, img_metas) - losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - return losses - - @force_fp32(apply_to=('all_cls_scores_list', 'all_bbox_preds_list')) - def get_bboxes(self, - all_cls_scores_list, - all_bbox_preds_list, - img_metas, - rescale=False): - """Transform network outputs for a batch into bbox predictions. - - Args: - all_cls_scores_list (list[Tensor]): Classification outputs - for each feature level. Each is a 4D-tensor with shape - [nb_dec, bs, num_query, cls_out_channels]. - all_bbox_preds_list (list[Tensor]): Sigmoid regression - outputs for each feature level. Each is a 4D-tensor with - normalized coordinate format (cx, cy, w, h) and shape - [nb_dec, bs, num_query, 4]. - img_metas (list[dict]): Meta information of each image. - rescale (bool, optional): If True, return boxes in original - image space. Default False. - - Returns: - list[list[Tensor, Tensor]]: Each item in result_list is 2-tuple. \ - The first item is an (n, 5) tensor, where the first 4 columns \ - are bounding box positions (tl_x, tl_y, br_x, br_y) and the \ - 5-th column is a score between 0 and 1. The second item is a \ - (n,) tensor where each item is the predicted class label of \ - the corresponding box. - """ - # NOTE defaultly only using outputs from the last feature level, - # and only the outputs from the last decoder layer is used. - cls_scores = all_cls_scores_list[-1][-1] - bbox_preds = all_bbox_preds_list[-1][-1] - - result_list = [] - for img_id in range(len(img_metas)): - cls_score = cls_scores[img_id] - bbox_pred = bbox_preds[img_id] - img_shape = img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - proposals = self._get_bboxes_single(cls_score, bbox_pred, - img_shape, scale_factor, - rescale) - result_list.append(proposals) - return result_list - - def _get_bboxes_single(self, - cls_score, - bbox_pred, - img_shape, - scale_factor, - rescale=False): - """Transform outputs from the last decoder layer into bbox predictions - for each image. - - Args: - cls_score (Tensor): Box score logits from the last decoder layer - for each image. Shape [num_query, cls_out_channels]. 
- bbox_pred (Tensor): Sigmoid outputs from the last decoder layer - for each image, with coordinate format (cx, cy, w, h) and - shape [num_query, 4]. - img_shape (tuple[int]): Shape of input image, (height, width, 3). - scale_factor (ndarray, optional): Scale factor of the image arange - as (w_scale, h_scale, w_scale, h_scale). - rescale (bool, optional): If True, return boxes in original image - space. Default False. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. - - - det_bboxes: Predicted bboxes with shape [num_query, 5], \ - where the first 4 columns are bounding box positions \ - (tl_x, tl_y, br_x, br_y) and the 5-th column are scores \ - between 0 and 1. - - det_labels: Predicted labels of the corresponding box with \ - shape [num_query]. - """ - assert len(cls_score) == len(bbox_pred) - # exclude background - scores, det_labels = F.softmax(cls_score, dim=-1)[..., :-1].max(-1) - det_bboxes = bbox_cxcywh_to_xyxy(bbox_pred) - det_bboxes[:, 0::2] = det_bboxes[:, 0::2] * img_shape[1] - det_bboxes[:, 1::2] = det_bboxes[:, 1::2] * img_shape[0] - det_bboxes[:, 0::2].clamp_(min=0, max=img_shape[1]) - det_bboxes[:, 1::2].clamp_(min=0, max=img_shape[0]) - if rescale: - det_bboxes /= det_bboxes.new_tensor(scale_factor) - det_bboxes = torch.cat((det_bboxes, scores.unsqueeze(1)), -1) - return det_bboxes, det_labels diff --git a/spaces/abidlabs/voice-verification/README.md b/spaces/abidlabs/voice-verification/README.md deleted file mode 100644 index 3abcd73eba3629a81a7adfddc50bc90c02ffcd1a..0000000000000000000000000000000000000000 --- a/spaces/abidlabs/voice-verification/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Unispeech Speaker Verification -emoji: 💻 -colorFrom: blue -colorTo: purple -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
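
For reference, a complete front-matter block that uses every field documented above might look like the sketch below. The title, emoji, colours, version number, and file name are made-up illustrative values rather than settings taken from this Space.

```yaml
---
title: My Example Space        # display title shown on the Space page
emoji: 🚀                      # emoji-only thumbnail character
colorFrom: green               # thumbnail gradient start colour
colorTo: indigo                # thumbnail gradient end colour
sdk: streamlit                 # either gradio or streamlit
sdk_version: 1.21.0            # illustrative; per the note above, only read for the streamlit SDK
app_file: app.py               # main application file, relative to the repository root
pinned: true                   # keep the Space at the top of your list
---
```

Since `sdk_version` is only applicable to the `streamlit` SDK, a Gradio Space such as this one can simply omit that line, as the front-matter at the top of this README does.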
diff --git a/spaces/adirik/stylemc-demo/encoder4editing/utils/common.py b/spaces/adirik/stylemc-demo/encoder4editing/utils/common.py deleted file mode 100644 index b19e18ddcb78b06678fa18e4a76da44fc511b789..0000000000000000000000000000000000000000 --- a/spaces/adirik/stylemc-demo/encoder4editing/utils/common.py +++ /dev/null @@ -1,55 +0,0 @@ -from PIL import Image -import matplotlib.pyplot as plt - - -# Log images -def log_input_image(x, opts): - return tensor2im(x) - - -def tensor2im(var): - # var shape: (3, H, W) - var = var.cpu().detach().transpose(0, 2).transpose(0, 1).numpy() - var = ((var + 1) / 2) - var[var < 0] = 0 - var[var > 1] = 1 - var = var * 255 - return Image.fromarray(var.astype('uint8')) - - -def vis_faces(log_hooks): - display_count = len(log_hooks) - fig = plt.figure(figsize=(8, 4 * display_count)) - gs = fig.add_gridspec(display_count, 3) - for i in range(display_count): - hooks_dict = log_hooks[i] - fig.add_subplot(gs[i, 0]) - if 'diff_input' in hooks_dict: - vis_faces_with_id(hooks_dict, fig, gs, i) - else: - vis_faces_no_id(hooks_dict, fig, gs, i) - plt.tight_layout() - return fig - - -def vis_faces_with_id(hooks_dict, fig, gs, i): - plt.imshow(hooks_dict['input_face']) - plt.title('Input\nOut Sim={:.2f}'.format(float(hooks_dict['diff_input']))) - fig.add_subplot(gs[i, 1]) - plt.imshow(hooks_dict['target_face']) - plt.title('Target\nIn={:.2f}, Out={:.2f}'.format(float(hooks_dict['diff_views']), - float(hooks_dict['diff_target']))) - fig.add_subplot(gs[i, 2]) - plt.imshow(hooks_dict['output_face']) - plt.title('Output\n Target Sim={:.2f}'.format(float(hooks_dict['diff_target']))) - - -def vis_faces_no_id(hooks_dict, fig, gs, i): - plt.imshow(hooks_dict['input_face'], cmap="gray") - plt.title('Input') - fig.add_subplot(gs[i, 1]) - plt.imshow(hooks_dict['target_face']) - plt.title('Target') - fig.add_subplot(gs[i, 2]) - plt.imshow(hooks_dict['output_face']) - plt.title('Output') diff --git a/spaces/aidinro/qqqqqqqqqqqqq/app.py b/spaces/aidinro/qqqqqqqqqqqqq/app.py deleted file mode 100644 index db68cfdce67fb1f934c55923541e98f47ac1b521..0000000000000000000000000000000000000000 --- a/spaces/aidinro/qqqqqqqqqqqqq/app.py +++ /dev/null @@ -1,37 +0,0 @@ -from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline -import streamlit -model_name = "deepset/roberta-base-squad2" - - -nlp = pipeline('question-answering', model=model_name, tokenizer=model_name) -QA_input = { - 'question': 'hi', - 'context': 'The onve gives freedom to the user and let people easily switch between frameworks.' -} - - -model = AutoModelForQuestionAnswering.from_pretrained(model_name) -tokenizer = AutoTokenizer.from_pretrained(model_name) -data = """Polyester resins are synthetic resins formed by the reaction of dibasic organic acids and polyhydric alcohols. Maleic anhydride is a commonly used raw material with diacid functionality in unsaturated polyester resins.[1] Unsaturated polyester resins are used in sheet moulding compound, bulk moulding compound and the toner of laser printers. Wall panels fabricated from polyester resins reinforced with fiberglass—so-called fiberglass reinforced plastic (FRP)—are typically used in restaurants, kitchens, restrooms and other areas that require washable low-maintenance walls. They are also used extensively in cured-in-place pipe applications. Departments of Transportation in the USA also specify them for use as overlays on roads and bridges. In this application they are known AS Polyester Concrete Overlays (PCO). 
These are usually based on isophthalic acid and cut with styrene at high levels—usually up to 50%.[2] Polyesters are also used in anchor bolt adhesives though epoxy based materials are also used.[3] Many companies have and continue to introduce styrene free systems mainly due to odor issues, but also over concerns that styrene is a potential carcinogen. Drinking water applications also prefer styrene free. Most polyester resins are viscous, pale coloured liquids consisting of a solution of a polyester in a reactive diluent which is usually styrene,[4] but can also include vinyl toluene and various acrylates.The classic variety is epoxy resin, manufactured through polymerization-polyaddition or polycondensation reactions, used as a thermoset polymer for adhesives and composites.[4] Epoxy resin is two times stronger than concrete, seamless, and waterproof.[citation needed] Accordingly, it has been mainly in use for industrial flooring purposes since the 1960s. Since 2000, however, epoxy and polyurethane resins are used in interiors as well, mainly in Western Europe. -Synthetic casting "resin" for embedding display objects in Plexiglas/Lucite (PMMA) is simply methyl methacrylate liquid, into which a polymerization catalyst is added and mixed, causing it to "set" (polymerize). The polymerization creates a block of PMMA plastic ("acrylic glass") which holds the display object inside a transparent block. -Another synthetic polymer, sometimes called by the same general category, is acetal resin. By contrast with the other synthetics, however, it has a simple chain structure with the repeat unit of form [CH2O] -Ion-exchange resins are used in water purification and catalysis of organic reactions. (See also AT-10 resin, melamine resin.) Certain ion-exchange resins are also used pharmaceutically as bile acid sequestrants, mainly as hypolipidemic agents, although they may be used for purposes other than lowering cholesterol. -Solvent impregnated resins (SIRs) are porous resin particles which contain an additional liquid extractant inside the porous matrix. The contained extractant is supposed to enhance the capacity of the resin particles. -A large category of resins, which constitutes 75% of resins used,[citation needed] is that of the unsaturated polyester resins. -The production of PVC entails the production of "vinyl chloride resins", which differ in the degree of polymerization. -Silicone resins -Silicone resins are silicone-based polymers that exhibit various useful properties like weatherability (durability), dielectricity, water repellency, thermal stability, and chemical inertness. -Health hazards -Health hazards potentially associated with synthetic resins are typically less of a concern than the hazards associated with the cured products, which are more commonly in contact with consumers. Issues of interest include the effects of unconsumed monomers, oligomers, and solvent carriers. -Dental restorative materials based on bis-GMA-containing resins[7] can break down into or be contaminated with the related compound bisphenol A, a potential endocrine disruptor. 
However, no negative health effects of bis-GMA use in dental resins have been found""" - - -while True: - inputfromuser = streamlit.text_input("what about polymers?") - QA_input = { - 'question': inputfromuser, - 'context': data, - } - - res = nlp(QA_input) - print(res) \ No newline at end of file diff --git a/spaces/akhaliq/Pop_Music_Transformer/finetune.py b/spaces/akhaliq/Pop_Music_Transformer/finetune.py deleted file mode 100644 index ea914b5986caadf9be5f5c1913b7ada942e51461..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Pop_Music_Transformer/finetune.py +++ /dev/null @@ -1,45 +0,0 @@ -from model import PopMusicTransformer -from glob import glob -import os -os.environ['CUDA_VISIBLE_DEVICES'] = '0' - -def main(): - # declare model - model = PopMusicTransformer( - checkpoint='REMI-tempo-checkpoint', - is_training=True) - # prepare data - midi_paths = glob('YOUR PERSOANL FOLDER/*.midi') # you need to revise it - training_data = model.prepare_data(midi_paths=midi_paths) - - # check output checkpoint folder - #################################### - # if you use "REMI-tempo-chord-checkpoint" for the pre-trained checkpoint - # please name your output folder as something with "chord" - # for example: my-love-chord, cute-doggy-chord, ... - # if use "REMI-tempo-checkpoint" - # for example: my-love, cute-doggy, ... - #################################### - output_checkpoint_folder = 'REMI-finetune' # your decision - if not os.path.exists(output_checkpoint_folder): - os.mkdir(output_checkpoint_folder) - - # finetune - model.finetune( - training_data=training_data, - output_checkpoint_folder=output_checkpoint_folder) - - #################################### - # after finetuning, please choose which checkpoint you want to try - # and change the checkpoint names you choose into "model" - # and copy the "dictionary.pkl" into the your output_checkpoint_folder - # ***** the same as the content format in "REMI-tempo-checkpoint" ***** - # and then, you can use "main.py" to generate your own music! - # (do not forget to revise the checkpoint path to your own in "main.py") - #################################### - - # close - model.close() - -if __name__ == '__main__': - main() diff --git a/spaces/akhaliq/deeplab2/model/builder.py b/spaces/akhaliq/deeplab2/model/builder.py deleted file mode 100644 index 9983b3e7f38597a384aa99e9ab9a32158c3eef46..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/deeplab2/model/builder.py +++ /dev/null @@ -1,174 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Deeplab2 Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -"""This file contains functions to build encoder and decoder.""" -import tensorflow as tf - -from deeplab2 import config_pb2 -from deeplab2.model.decoder import deeplabv3 -from deeplab2.model.decoder import deeplabv3plus -from deeplab2.model.decoder import max_deeplab -from deeplab2.model.decoder import motion_deeplab_decoder -from deeplab2.model.decoder import panoptic_deeplab -from deeplab2.model.decoder import vip_deeplab_decoder -from deeplab2.model.encoder import axial_resnet_instances -from deeplab2.model.encoder import mobilenet - - -def create_encoder(backbone_options: config_pb2.ModelOptions.BackboneOptions, - bn_layer: tf.keras.layers.Layer, - conv_kernel_weight_decay: float = 0.0) -> tf.keras.Model: - """Creates an encoder. - - Args: - backbone_options: A proto config of type - config_pb2.ModelOptions.BackboneOptions. - bn_layer: A tf.keras.layers.Layer that computes the normalization. - conv_kernel_weight_decay: A float, the weight decay for convolution kernels. - - Returns: - An instance of tf.keras.Model containing the encoder. - - Raises: - ValueError: An error occurs when the specified encoder meta architecture is - not supported. - """ - if ('resnet' in backbone_options.name or - 'swidernet' in backbone_options.name or - 'axial_deeplab' in backbone_options.name or - 'max_deeplab' in backbone_options.name): - return create_resnet_encoder( - backbone_options, - bn_layer=bn_layer, - conv_kernel_weight_decay=conv_kernel_weight_decay) - elif 'mobilenet' in backbone_options.name: - return create_mobilenet_encoder( - backbone_options, - bn_layer=bn_layer, - conv_kernel_weight_decay=conv_kernel_weight_decay) - raise ValueError('The specified encoder %s is not a valid encoder.' % - backbone_options.name) - - -def create_mobilenet_encoder( - backbone_options: config_pb2.ModelOptions.BackboneOptions, - bn_layer: tf.keras.layers.Layer, - conv_kernel_weight_decay: float = 0.0) -> tf.keras.Model: - """Creates a MobileNet encoder specified by name. - - Args: - backbone_options: A proto config of type - config_pb2.ModelOptions.BackboneOptions. - bn_layer: A tf.keras.layers.Layer that computes the normalization. - conv_kernel_weight_decay: A float, the weight decay for convolution kernels. - - Returns: - An instance of tf.keras.Model containing the MobileNet encoder. - """ - if backbone_options.name.lower() == 'mobilenet_v3_large': - backbone = mobilenet.MobileNetV3Large - elif backbone_options.name.lower() == 'mobilenet_v3_small': - backbone = mobilenet.MobileNetV3Small - else: - raise ValueError('The specified encoder %s is not a valid encoder.' % - backbone_options.name) - assert backbone_options.use_squeeze_and_excite - assert backbone_options.drop_path_keep_prob == 1 - assert backbone_options.use_sac_beyond_stride == -1 - assert backbone_options.backbone_layer_multiplier == 1 - return backbone( - output_stride=backbone_options.output_stride, - width_multiplier=backbone_options.backbone_width_multiplier, - bn_layer=bn_layer, - conv_kernel_weight_decay=conv_kernel_weight_decay) - - -def create_resnet_encoder( - backbone_options: config_pb2.ModelOptions.BackboneOptions, - bn_layer: tf.keras.layers.Layer, - conv_kernel_weight_decay: float = 0.0) -> tf.keras.Model: - """Creates a ResNet encoder specified by name. - - Args: - backbone_options: A proto config of type - config_pb2.ModelOptions.BackboneOptions. - bn_layer: A tf.keras.layers.Layer that computes the normalization. - conv_kernel_weight_decay: A float, the weight decay for convolution kernels. 
- - Returns: - An instance of tf.keras.Model containing the ResNet encoder. - """ - return axial_resnet_instances.get_model( - backbone_options.name, - output_stride=backbone_options.output_stride, - stem_width_multiplier=backbone_options.stem_width_multiplier, - width_multiplier=backbone_options.backbone_width_multiplier, - backbone_layer_multiplier=backbone_options.backbone_layer_multiplier, - block_group_config={ - 'use_squeeze_and_excite': backbone_options.use_squeeze_and_excite, - 'drop_path_keep_prob': backbone_options.drop_path_keep_prob, - 'drop_path_schedule': backbone_options.drop_path_schedule, - 'use_sac_beyond_stride': backbone_options.use_sac_beyond_stride}, - bn_layer=bn_layer, - conv_kernel_weight_decay=conv_kernel_weight_decay) - - -def create_decoder(model_options: config_pb2.ModelOptions, - bn_layer: tf.keras.layers.Layer, - ignore_label: int) -> tf.keras.Model: - """Creates a DeepLab decoder. - - Args: - model_options: A proto config of type config_pb2.ModelOptions. - bn_layer: A tf.keras.layers.Layer that computes the normalization. - ignore_label: An integer specifying the ignore label. - - Returns: - An instance of tf.keras.layers.Layer containing the decoder. - - Raises: - ValueError: An error occurs when the specified meta architecture is not - supported. - """ - meta_architecture = model_options.WhichOneof('meta_architecture') - if meta_architecture == 'deeplab_v3': - return deeplabv3.DeepLabV3( - model_options.decoder, model_options.deeplab_v3, bn_layer=bn_layer) - elif meta_architecture == 'deeplab_v3_plus': - return deeplabv3plus.DeepLabV3Plus( - model_options.decoder, model_options.deeplab_v3_plus, bn_layer=bn_layer) - elif meta_architecture == 'panoptic_deeplab': - return panoptic_deeplab.PanopticDeepLab( - model_options.decoder, - model_options.panoptic_deeplab, - bn_layer=bn_layer) - elif meta_architecture == 'motion_deeplab': - return motion_deeplab_decoder.MotionDeepLabDecoder( - model_options.decoder, - model_options.motion_deeplab, - bn_layer=bn_layer) - elif meta_architecture == 'vip_deeplab': - return vip_deeplab_decoder.ViPDeepLabDecoder( - model_options.decoder, - model_options.vip_deeplab, - bn_layer=bn_layer) - elif meta_architecture == 'max_deeplab': - return max_deeplab.MaXDeepLab( - model_options.decoder, - model_options.max_deeplab, - ignore_label=ignore_label, - bn_layer=bn_layer) - raise ValueError('The specified meta architecture %s is not implemented.' % - meta_architecture) diff --git a/spaces/akhaliq/pedalboard/app.py b/spaces/akhaliq/pedalboard/app.py deleted file mode 100644 index c059b0c713743602f560dfaca1baf2446c2f4954..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/pedalboard/app.py +++ /dev/null @@ -1,36 +0,0 @@ -import librosa -import gradio as gr -import pedalboard -import soundfile as sf - -from pedalboard import Compressor, Gain, Phaser, Reverb - - - - -def inference(audio): - y, sr = librosa.load(audio.name, sr=44100) - board = pedalboard.Pedalboard([ - Compressor(ratio=10, threshold_db=-20), - Gain(gain_db=20), - Phaser(), - Reverb() - ], sample_rate=sr) - - effected = board.process(y) - with sf.SoundFile('./processed-output-stereo.wav', 'w', samplerate=sr, channels=len(effected.shape)) as f: - f.write(effected) - return './processed-output-stereo.wav' - - -inputs = gr.inputs.Audio(label="Input Audio", type="file") -outputs = gr.outputs.Audio(label="Output Audio", type="file") - - -title = "PedalBoard" -description = "Gradio demo for PedalBoard: A Python library for adding effects to audio. 
To use it, simply add your audio, or click one of the examples to load them. Read more at the links below." -article = "

Introducing Pedalboard: Spotify’s Audio Effects Library for Python | Github Repo

" - -examples = [['sample.wav']] - -gr.Interface(inference, inputs, outputs, title=title, description=description, article=article, examples=examples, enable_queue=True).launch(debug=True) \ No newline at end of file diff --git a/spaces/alamin655/websurfx/public/templates/search.html b/spaces/alamin655/websurfx/public/templates/search.html deleted file mode 100644 index 8a79d697d8ff461463f878e4a833a8b395be7ca7..0000000000000000000000000000000000000000 --- a/spaces/alamin655/websurfx/public/templates/search.html +++ /dev/null @@ -1,71 +0,0 @@ -{{>header this.style}} -
-  {{>search_bar this}}
-  <div class="results_aggregated">
-    {{#if results}} {{#each results}}
-    <div class="result">
-      <h1><a href="{{{this.url}}}">{{{this.title}}}</a></h1>
-      <small>{{{this.url}}}</small>
-      <p>{{{this.description}}}</p>
-      <div class="upstream_engines">
-        {{#each engine}}
-        <span>{{{this}}}</span>
-        {{/each}}
-      </div>
-    </div>
-    {{/each}} {{else}} {{#if disallowed}}
-    <div class="result_disallowed">
-      <div class="description">
-        <p>
-          Your search - <span class="user_query">{{{this.pageQuery}}}</span> -
-          has been disallowed.
-        </p>
-        <p class="description_paragraph">Dear user,</p>
-        <p class="description_paragraph">
-          The query - <span class="user_query">{{{this.pageQuery}}}</span> - has
-          been blacklisted via server configuration and hence disallowed by the
-          server. Henceforth no results could be displayed for your query.
-        </p>
-      </div>
-      <img src="./images/barricade.png" alt="Image of a Barricade" />
-    </div>
-    {{else}} {{#if filtered}}
-    <div class="result_filtered">
-      <div class="description">
-        <p>
-          Your search - <span class="user_query">{{{this.pageQuery}}}</span> -
-          has been filtered.
-        </p>
-        <p class="description_paragraph">Dear user,</p>
-        <p class="description_paragraph">
-          All the search results contain results that has been configured to be
-          filtered out via server configuration and henceforth has been
-          completely filtered out.
-        </p>
-      </div>
-      <img src="./images/filter.png" alt="Image of a paper inside a funnel" />
-    </div>
-    {{else}}
-    <div class="result_not_found">
-      <p>Your search - {{{this.pageQuery}}} - did not match any documents.</p>
-      <p class="suggestions">Suggestions:</p>
-      <ul>
-        <li>Make sure that all words are spelled correctly.</li>
-        <li>Try different keywords.</li>
-        <li>Try more general keywords.</li>
-      </ul>
-      <img src="./images/no_results.gif" alt="Man fishing gif" />
-    </div>
-    {{/if}} {{/if}} {{/if}}
- - - -{{>footer}} diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/req/req_file.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/req/req_file.py deleted file mode 100644 index 03ae50492c585028425beed7a047905dd2f60299..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/req/req_file.py +++ /dev/null @@ -1,536 +0,0 @@ -""" -Requirements file parsing -""" - -import optparse -import os -import re -import shlex -import urllib.parse -from optparse import Values -from typing import ( - TYPE_CHECKING, - Any, - Callable, - Dict, - Iterable, - Iterator, - List, - Optional, - Tuple, -) - -from pip._internal.cli import cmdoptions -from pip._internal.exceptions import InstallationError, RequirementsFileParseError -from pip._internal.models.search_scope import SearchScope -from pip._internal.network.session import PipSession -from pip._internal.network.utils import raise_for_status -from pip._internal.utils.encoding import auto_decode -from pip._internal.utils.urls import get_url_scheme - -if TYPE_CHECKING: - # NoReturn introduced in 3.6.2; imported only for type checking to maintain - # pip compatibility with older patch versions of Python 3.6 - from typing import NoReturn - - from pip._internal.index.package_finder import PackageFinder - -__all__ = ["parse_requirements"] - -ReqFileLines = Iterable[Tuple[int, str]] - -LineParser = Callable[[str], Tuple[str, Values]] - -SCHEME_RE = re.compile(r"^(http|https|file):", re.I) -COMMENT_RE = re.compile(r"(^|\s+)#.*$") - -# Matches environment variable-style values in '${MY_VARIABLE_1}' with the -# variable name consisting of only uppercase letters, digits or the '_' -# (underscore). This follows the POSIX standard defined in IEEE Std 1003.1, -# 2013 Edition. 
-ENV_VAR_RE = re.compile(r"(?P\$\{(?P[A-Z0-9_]+)\})") - -SUPPORTED_OPTIONS: List[Callable[..., optparse.Option]] = [ - cmdoptions.index_url, - cmdoptions.extra_index_url, - cmdoptions.no_index, - cmdoptions.constraints, - cmdoptions.requirements, - cmdoptions.editable, - cmdoptions.find_links, - cmdoptions.no_binary, - cmdoptions.only_binary, - cmdoptions.prefer_binary, - cmdoptions.require_hashes, - cmdoptions.pre, - cmdoptions.trusted_host, - cmdoptions.use_new_feature, -] - -# options to be passed to requirements -SUPPORTED_OPTIONS_REQ: List[Callable[..., optparse.Option]] = [ - cmdoptions.install_options, - cmdoptions.global_options, - cmdoptions.hash, -] - -# the 'dest' string values -SUPPORTED_OPTIONS_REQ_DEST = [str(o().dest) for o in SUPPORTED_OPTIONS_REQ] - - -class ParsedRequirement: - def __init__( - self, - requirement: str, - is_editable: bool, - comes_from: str, - constraint: bool, - options: Optional[Dict[str, Any]] = None, - line_source: Optional[str] = None, - ) -> None: - self.requirement = requirement - self.is_editable = is_editable - self.comes_from = comes_from - self.options = options - self.constraint = constraint - self.line_source = line_source - - -class ParsedLine: - def __init__( - self, - filename: str, - lineno: int, - args: str, - opts: Values, - constraint: bool, - ) -> None: - self.filename = filename - self.lineno = lineno - self.opts = opts - self.constraint = constraint - - if args: - self.is_requirement = True - self.is_editable = False - self.requirement = args - elif opts.editables: - self.is_requirement = True - self.is_editable = True - # We don't support multiple -e on one line - self.requirement = opts.editables[0] - else: - self.is_requirement = False - - -def parse_requirements( - filename: str, - session: PipSession, - finder: Optional["PackageFinder"] = None, - options: Optional[optparse.Values] = None, - constraint: bool = False, -) -> Iterator[ParsedRequirement]: - """Parse a requirements file and yield ParsedRequirement instances. - - :param filename: Path or url of requirements file. - :param session: PipSession instance. - :param finder: Instance of pip.index.PackageFinder. - :param options: cli options. - :param constraint: If true, parsing a constraint file rather than - requirements file. - """ - line_parser = get_line_parser(finder) - parser = RequirementsFileParser(session, line_parser) - - for parsed_line in parser.parse(filename, constraint): - parsed_req = handle_line( - parsed_line, options=options, finder=finder, session=session - ) - if parsed_req is not None: - yield parsed_req - - -def preprocess(content: str) -> ReqFileLines: - """Split, filter, and join lines, and return a line iterator - - :param content: the content of the requirements file - """ - lines_enum: ReqFileLines = enumerate(content.splitlines(), start=1) - lines_enum = join_lines(lines_enum) - lines_enum = ignore_comments(lines_enum) - lines_enum = expand_env_variables(lines_enum) - return lines_enum - - -def handle_requirement_line( - line: ParsedLine, - options: Optional[optparse.Values] = None, -) -> ParsedRequirement: - - # preserve for the nested code path - line_comes_from = "{} {} (line {})".format( - "-c" if line.constraint else "-r", - line.filename, - line.lineno, - ) - - assert line.is_requirement - - if line.is_editable: - # For editable requirements, we don't support per-requirement - # options, so just return the parsed requirement. 
- return ParsedRequirement( - requirement=line.requirement, - is_editable=line.is_editable, - comes_from=line_comes_from, - constraint=line.constraint, - ) - else: - if options: - # Disable wheels if the user has specified build options - cmdoptions.check_install_build_global(options, line.opts) - - # get the options that apply to requirements - req_options = {} - for dest in SUPPORTED_OPTIONS_REQ_DEST: - if dest in line.opts.__dict__ and line.opts.__dict__[dest]: - req_options[dest] = line.opts.__dict__[dest] - - line_source = f"line {line.lineno} of {line.filename}" - return ParsedRequirement( - requirement=line.requirement, - is_editable=line.is_editable, - comes_from=line_comes_from, - constraint=line.constraint, - options=req_options, - line_source=line_source, - ) - - -def handle_option_line( - opts: Values, - filename: str, - lineno: int, - finder: Optional["PackageFinder"] = None, - options: Optional[optparse.Values] = None, - session: Optional[PipSession] = None, -) -> None: - - if options: - # percolate options upward - if opts.require_hashes: - options.require_hashes = opts.require_hashes - if opts.features_enabled: - options.features_enabled.extend( - f for f in opts.features_enabled if f not in options.features_enabled - ) - - # set finder options - if finder: - find_links = finder.find_links - index_urls = finder.index_urls - if opts.index_url: - index_urls = [opts.index_url] - if opts.no_index is True: - index_urls = [] - if opts.extra_index_urls: - index_urls.extend(opts.extra_index_urls) - if opts.find_links: - # FIXME: it would be nice to keep track of the source - # of the find_links: support a find-links local path - # relative to a requirements file. - value = opts.find_links[0] - req_dir = os.path.dirname(os.path.abspath(filename)) - relative_to_reqs_file = os.path.join(req_dir, value) - if os.path.exists(relative_to_reqs_file): - value = relative_to_reqs_file - find_links.append(value) - - if session: - # We need to update the auth urls in session - session.update_index_urls(index_urls) - - search_scope = SearchScope( - find_links=find_links, - index_urls=index_urls, - ) - finder.search_scope = search_scope - - if opts.pre: - finder.set_allow_all_prereleases() - - if opts.prefer_binary: - finder.set_prefer_binary() - - if session: - for host in opts.trusted_hosts or []: - source = f"line {lineno} of {filename}" - session.add_trusted_host(host, source=source) - - -def handle_line( - line: ParsedLine, - options: Optional[optparse.Values] = None, - finder: Optional["PackageFinder"] = None, - session: Optional[PipSession] = None, -) -> Optional[ParsedRequirement]: - """Handle a single parsed requirements line; This can result in - creating/yielding requirements, or updating the finder. - - :param line: The parsed line to be processed. - :param options: CLI options. - :param finder: The finder - updated by non-requirement lines. - :param session: The session - updated by non-requirement lines. - - Returns a ParsedRequirement object if the line is a requirement line, - otherwise returns None. - - For lines that contain requirements, the only options that have an effect - are from SUPPORTED_OPTIONS_REQ, and they are scoped to the - requirement. Other options from SUPPORTED_OPTIONS may be present, but are - ignored. - - For lines that do not contain requirements, the only options that have an - effect are from SUPPORTED_OPTIONS. Options from SUPPORTED_OPTIONS_REQ may - be present, but are ignored. 
These lines may contain multiple options - (although our docs imply only one is supported), and all our parsed and - affect the finder. - """ - - if line.is_requirement: - parsed_req = handle_requirement_line(line, options) - return parsed_req - else: - handle_option_line( - line.opts, - line.filename, - line.lineno, - finder, - options, - session, - ) - return None - - -class RequirementsFileParser: - def __init__( - self, - session: PipSession, - line_parser: LineParser, - ) -> None: - self._session = session - self._line_parser = line_parser - - def parse(self, filename: str, constraint: bool) -> Iterator[ParsedLine]: - """Parse a given file, yielding parsed lines.""" - yield from self._parse_and_recurse(filename, constraint) - - def _parse_and_recurse( - self, filename: str, constraint: bool - ) -> Iterator[ParsedLine]: - for line in self._parse_file(filename, constraint): - if not line.is_requirement and ( - line.opts.requirements or line.opts.constraints - ): - # parse a nested requirements file - if line.opts.requirements: - req_path = line.opts.requirements[0] - nested_constraint = False - else: - req_path = line.opts.constraints[0] - nested_constraint = True - - # original file is over http - if SCHEME_RE.search(filename): - # do a url join so relative paths work - req_path = urllib.parse.urljoin(filename, req_path) - # original file and nested file are paths - elif not SCHEME_RE.search(req_path): - # do a join so relative paths work - req_path = os.path.join( - os.path.dirname(filename), - req_path, - ) - - yield from self._parse_and_recurse(req_path, nested_constraint) - else: - yield line - - def _parse_file(self, filename: str, constraint: bool) -> Iterator[ParsedLine]: - _, content = get_file_content(filename, self._session) - - lines_enum = preprocess(content) - - for line_number, line in lines_enum: - try: - args_str, opts = self._line_parser(line) - except OptionParsingError as e: - # add offending line - msg = f"Invalid requirement: {line}\n{e.msg}" - raise RequirementsFileParseError(msg) - - yield ParsedLine( - filename, - line_number, - args_str, - opts, - constraint, - ) - - -def get_line_parser(finder: Optional["PackageFinder"]) -> LineParser: - def parse_line(line: str) -> Tuple[str, Values]: - # Build new parser for each line since it accumulates appendable - # options. - parser = build_parser() - defaults = parser.get_default_values() - defaults.index_url = None - if finder: - defaults.format_control = finder.format_control - - args_str, options_str = break_args_options(line) - - opts, _ = parser.parse_args(shlex.split(options_str), defaults) - - return args_str, opts - - return parse_line - - -def break_args_options(line: str) -> Tuple[str, str]: - """Break up the line into an args and options string. We only want to shlex - (and then optparse) the options, not the args. args can contain markers - which are corrupted by shlex. 
- """ - tokens = line.split(" ") - args = [] - options = tokens[:] - for token in tokens: - if token.startswith("-") or token.startswith("--"): - break - else: - args.append(token) - options.pop(0) - return " ".join(args), " ".join(options) - - -class OptionParsingError(Exception): - def __init__(self, msg: str) -> None: - self.msg = msg - - -def build_parser() -> optparse.OptionParser: - """ - Return a parser for parsing requirement lines - """ - parser = optparse.OptionParser(add_help_option=False) - - option_factories = SUPPORTED_OPTIONS + SUPPORTED_OPTIONS_REQ - for option_factory in option_factories: - option = option_factory() - parser.add_option(option) - - # By default optparse sys.exits on parsing errors. We want to wrap - # that in our own exception. - def parser_exit(self: Any, msg: str) -> "NoReturn": - raise OptionParsingError(msg) - - # NOTE: mypy disallows assigning to a method - # https://github.com/python/mypy/issues/2427 - parser.exit = parser_exit # type: ignore - - return parser - - -def join_lines(lines_enum: ReqFileLines) -> ReqFileLines: - """Joins a line ending in '\' with the previous line (except when following - comments). The joined line takes on the index of the first line. - """ - primary_line_number = None - new_line: List[str] = [] - for line_number, line in lines_enum: - if not line.endswith("\\") or COMMENT_RE.match(line): - if COMMENT_RE.match(line): - # this ensures comments are always matched later - line = " " + line - if new_line: - new_line.append(line) - assert primary_line_number is not None - yield primary_line_number, "".join(new_line) - new_line = [] - else: - yield line_number, line - else: - if not new_line: - primary_line_number = line_number - new_line.append(line.strip("\\")) - - # last line contains \ - if new_line: - assert primary_line_number is not None - yield primary_line_number, "".join(new_line) - - # TODO: handle space after '\'. - - -def ignore_comments(lines_enum: ReqFileLines) -> ReqFileLines: - """ - Strips comments and filter empty lines. - """ - for line_number, line in lines_enum: - line = COMMENT_RE.sub("", line) - line = line.strip() - if line: - yield line_number, line - - -def expand_env_variables(lines_enum: ReqFileLines) -> ReqFileLines: - """Replace all environment variables that can be retrieved via `os.getenv`. - - The only allowed format for environment variables defined in the - requirement file is `${MY_VARIABLE_1}` to ensure two things: - - 1. Strings that contain a `$` aren't accidentally (partially) expanded. - 2. Ensure consistency across platforms for requirement files. - - These points are the result of a discussion on the `github pull - request #3514 `_. - - Valid characters in variable names follow the `POSIX standard - `_ and are limited - to uppercase letter, digits and the `_` (underscore). - """ - for line_number, line in lines_enum: - for env_var, var_name in ENV_VAR_RE.findall(line): - value = os.getenv(var_name) - if not value: - continue - - line = line.replace(env_var, value) - - yield line_number, line - - -def get_file_content(url: str, session: PipSession) -> Tuple[str, str]: - """Gets the content of a file; it may be a filename, file: URL, or - http: URL. Returns (location, content). Content is unicode. - Respects # -*- coding: declarations on the retrieved files. - - :param url: File path or url. - :param session: PipSession instance. - """ - scheme = get_url_scheme(url) - - # Pip has special support for file:// URLs (LocalFSAdapter). 
- if scheme in ["http", "https", "file"]: - resp = session.get(url) - raise_for_status(resp) - return resp.url, resp.text - - # Assume this is a bare path. - try: - with open(url, "rb") as f: - content = auto_decode(f.read()) - except OSError as exc: - raise InstallationError(f"Could not open requirements file: {exc}") - return url, content diff --git a/spaces/allandclive/Uganda_MMS/vits/text/symbols.py b/spaces/allandclive/Uganda_MMS/vits/text/symbols.py deleted file mode 100644 index 869a53e763ae825bc02921842280ac9efe7f85dd..0000000000000000000000000000000000000000 --- a/spaces/allandclive/Uganda_MMS/vits/text/symbols.py +++ /dev/null @@ -1,16 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Defines the set of symbols used in text input to the model. -''' -_pad = '_' -_punctuation = ';:,.!?¡¿—…"«»“” ' -_letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz' -_letters_ipa = "ɑɐɒæɓʙβɔɕçɗɖðʤəɘɚɛɜɝɞɟʄɡɠɢʛɦɧħɥʜɨɪʝɭɬɫɮʟɱɯɰŋɳɲɴøɵɸθœɶʘɹɺɾɻʀʁɽʂʃʈʧʉʊʋⱱʌɣɤʍχʎʏʑʐʒʔʡʕʢǀǁǂǃˈˌːˑʼʴʰʱʲʷˠˤ˞↓↑→↗↘'̩'ᵻ" - - -# Export all symbols: -symbols = [_pad] + list(_punctuation) + list(_letters) + list(_letters_ipa) - -# Special symbol ids -SPACE_ID = symbols.index(" ") diff --git a/spaces/allknowingroger/Image-Models-Test93/app.py b/spaces/allknowingroger/Image-Models-Test93/app.py deleted file mode 100644 index 126eab14d58bb9377688ce2e1a373812cf52ab76..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test93/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "LinoyTsaban/lora-trained-xl-colab-3d-icon-0.0001-1500-1", - "HZ0504/huahua_pet_dreambooth", - "varunsingh2191/sdxl-training-demo", - "bebechien/lora-trained-xl-colab", - "AVIIAX/majic6", - "Yntec/epiCVision", - "LinoyTsaban/lora-trained-xl-colab-shiny-sneaker-0.0001-500-2", - "sunyijia97/lora-trained-xl-colab-dog-v2", - "kimnice/lora-from-stable-diffusion-xl-base-1.0", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 
完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/alvanlii/domain-expansion/torch_utils/ops/grid_sample_gradfix.py b/spaces/alvanlii/domain-expansion/torch_utils/ops/grid_sample_gradfix.py deleted file mode 100644 index ca6b3413ea72a734703c34382c023b84523601fd..0000000000000000000000000000000000000000 --- a/spaces/alvanlii/domain-expansion/torch_utils/ops/grid_sample_gradfix.py +++ /dev/null @@ -1,83 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Custom replacement for `torch.nn.functional.grid_sample` that -supports arbitrarily high order gradients between the input and output. -Only works on 2D images and assumes -`mode='bilinear'`, `padding_mode='zeros'`, `align_corners=False`.""" - -import warnings -import torch - -# pylint: disable=redefined-builtin -# pylint: disable=arguments-differ -# pylint: disable=protected-access - -#---------------------------------------------------------------------------- - -enabled = False # Enable the custom op by setting this to true. 
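For orientation on the module now being removed (`grid_sample_gradfix`): its docstring says it wraps `torch.nn.functional.grid_sample` so that higher-order gradients work, gated by the module-level `enabled` flag shown above. A minimal usage sketch, assuming the repo's `torch_utils.ops` package is on the import path (the wrapper function itself is defined just below in this diff):

```python
# Minimal sketch, not part of the deleted file: bilinear, zero-padded,
# align_corners=False sampling routed through the gradfix wrapper.
import torch
from torch_utils.ops import grid_sample_gradfix

grid_sample_gradfix.enabled = True   # opt in to the custom op

image = torch.randn(1, 3, 8, 8, requires_grad=True)   # NCHW input
grid = torch.rand(1, 8, 8, 2) * 2 - 1                 # sampling grid in [-1, 1]

out = grid_sample_gradfix.grid_sample(image, grid)
out.sum().backward()                                   # gradients reach `image`
```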
- -#---------------------------------------------------------------------------- - -def grid_sample(input, grid): - if _should_use_custom_op(): - return _GridSample2dForward.apply(input, grid) - return torch.nn.functional.grid_sample(input=input, grid=grid, mode='bilinear', padding_mode='zeros', align_corners=False) - -#---------------------------------------------------------------------------- - -def _should_use_custom_op(): - if not enabled: - return False - if any(torch.__version__.startswith(x) for x in ['1.7.', '1.8.', '1.9']): - return True - warnings.warn(f'grid_sample_gradfix not supported on PyTorch {torch.__version__}. Falling back to torch.nn.functional.grid_sample().') - return False - -#---------------------------------------------------------------------------- - -class _GridSample2dForward(torch.autograd.Function): - @staticmethod - def forward(ctx, input, grid): - assert input.ndim == 4 - assert grid.ndim == 4 - output = torch.nn.functional.grid_sample(input=input, grid=grid, mode='bilinear', padding_mode='zeros', align_corners=False) - ctx.save_for_backward(input, grid) - return output - - @staticmethod - def backward(ctx, grad_output): - input, grid = ctx.saved_tensors - grad_input, grad_grid = _GridSample2dBackward.apply(grad_output, input, grid) - return grad_input, grad_grid - -#---------------------------------------------------------------------------- - -class _GridSample2dBackward(torch.autograd.Function): - @staticmethod - def forward(ctx, grad_output, input, grid): - op = torch._C._jit_get_operation('aten::grid_sampler_2d_backward') - grad_input, grad_grid = op(grad_output, input, grid, 0, 0, False) - ctx.save_for_backward(grid) - return grad_input, grad_grid - - @staticmethod - def backward(ctx, grad2_grad_input, grad2_grad_grid): - _ = grad2_grad_grid # unused - grid, = ctx.saved_tensors - grad2_grad_output = None - grad2_input = None - grad2_grid = None - - if ctx.needs_input_grad[0]: - grad2_grad_output = _GridSample2dForward.apply(grad2_grad_input, grid) - - assert not ctx.needs_input_grad[2] - return grad2_grad_output, grad2_input, grad2_grid - -#---------------------------------------------------------------------------- diff --git a/spaces/annt/mrc_uit_squadv2/retro_reader/args/retro_args.py b/spaces/annt/mrc_uit_squadv2/retro_reader/args/retro_args.py deleted file mode 100644 index 6792ca0c7d9e6d8e87a36cd758d6d918fb59ca02..0000000000000000000000000000000000000000 --- a/spaces/annt/mrc_uit_squadv2/retro_reader/args/retro_args.py +++ /dev/null @@ -1,152 +0,0 @@ -from dataclasses import dataclass, field -from .. 
import models - - -@dataclass -class RetroDataModelArguments: - pass - - -@dataclass -class DataArguments(RetroDataModelArguments): - max_seq_length: int = field( - default=512, - metadata={"help": ""}, - ) - max_answer_length: int = field( - default=30, - metadata={"help": ""}, - ) - doc_stride: int = field( - default=128, - metadata={"help": ""}, - ) - return_token_type_ids: bool = field( - default=True, - metadata={"help": ""}, - ) - pad_to_max_length: bool = field( - default=True, - metadata={"help": ""}, - ) - preprocessing_num_workers: int = field( - default=5, - metadata={"help": ""}, - ) - overwrite_cache: bool = field( - default=False, - metadata={"help": ""}, - ) - version_2_with_negative: bool = field( - default=True, - metadata={"help": ""}, - ) - null_score_diff_threshold: float = field( - default=0.0, - metadata={"help": ""}, - ) - rear_threshold: float = field( - default=0.0, - metadata={"help": ""}, - ) - n_best_size: int = field( - default=20, - metadata={"help": ""}, - ) - use_choice_logits: bool = field( - default=False, - metadata={"help": ""}, - ) - start_n_top: int = field( - default=-1, - metadata={"help": ""}, - ) - end_n_top: int = field( - default=-1, - metadata={"help": ""}, - ) - beta1: int = field( - default=1, - metadata={"help": ""}, - ) - beta2: int = field( - default=1, - metadata={"help": ""}, - ) - best_cof: int = field( - default=1, - metadata={"help": ""}, - ) - - -@dataclass -class ModelArguments(RetroDataModelArguments): - use_auth_token: bool = field( - default=False, - metadata={"help": ""}, - ) - - -@dataclass -class SketchModelArguments(ModelArguments): - sketch_revision: str = field( - default="main", - metadata={"help": ""}, - ) - sketch_model_name: str = field( - default="monologg/koelectra-small-v3-discriminator", - metadata={"help": ""}, - ) - sketch_tokenizer_name: str = field( - default=None, - metadata={"help": ""}, - ) - sketch_architectures: str = field( - default="ElectraForSequenceClassification", - metadata={"help": ""}, - ) - - -@dataclass -class IntensiveModelArguments(ModelArguments): - intensive_revision: str = field( - default="main", - metadata={"help": ""}, - ) - intensive_model_name: str = field( - default="monologg/koelectra-small-v3-discriminator", - metadata={"help": ""}, - ) - intensive_tokenizer_name: str = field( - default=None, - metadata={"help": ""}, - ) - intensive_architectures: str = field( - default="ElectraForQuestionAnsweringAVPool", - metadata={"help": ""}, - ) - - -@dataclass -class RetroArguments( - DataArguments, - SketchModelArguments, - IntensiveModelArguments, -): - def __post_init__(self): - # Sketch - model_cls = getattr(models, self.sketch_architectures, None) - if model_cls is None: - raise AttributeError - self.sketch_model_cls = model_cls - self.sketch_model_type = model_cls.model_type - if self.sketch_tokenizer_name is None: - self.sketch_tokenizer_name = self.sketch_model_name - # Intensive - model_cls = getattr(models, self.intensive_architectures, None) - if model_cls is None: - raise AttributeError - self.intensive_model_cls = model_cls - self.intensive_model_type = model_cls.model_type - if self.intensive_tokenizer_name is None: - self.intensive_tokenizer_name = self.intensive_model_name \ No newline at end of file diff --git a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/extensions/sd_api_pictures/script.py b/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/extensions/sd_api_pictures/script.py deleted file mode 100644 index 
1189a593f775f814731c04afaa3b73bbb0cb1ec4..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/extensions/sd_api_pictures/script.py +++ /dev/null @@ -1,321 +0,0 @@ -import base64 -import io -import re -import time -from datetime import date -from pathlib import Path - -import gradio as gr -import modules.shared as shared -import requests -import torch -from modules.models import reload_model, unload_model -from PIL import Image - -torch._C._jit_set_profiling_mode(False) - -# parameters which can be customized in settings.json of webui -params = { - 'address': 'http://127.0.0.1:7860', - 'mode': 0, # modes of operation: 0 (Manual only), 1 (Immersive/Interactive - looks for words to trigger), 2 (Picturebook Adventure - Always on) - 'manage_VRAM': False, - 'save_img': False, - 'SD_model': 'NeverEndingDream', # not used right now - 'prompt_prefix': '(Masterpiece:1.1), detailed, intricate, colorful', - 'negative_prompt': '(worst quality, low quality:1.3)', - 'width': 512, - 'height': 512, - 'denoising_strength': 0.61, - 'restore_faces': False, - 'enable_hr': False, - 'hr_upscaler': 'ESRGAN_4x', - 'hr_scale': '1.0', - 'seed': -1, - 'sampler_name': 'DDIM', - 'steps': 32, - 'cfg_scale': 7 -} - - -def give_VRAM_priority(actor): - global shared, params - - if actor == 'SD': - unload_model() - print("Requesting Auto1111 to re-load last checkpoint used...") - response = requests.post(url=f'{params["address"]}/sdapi/v1/reload-checkpoint', json='') - response.raise_for_status() - - elif actor == 'LLM': - print("Requesting Auto1111 to vacate VRAM...") - response = requests.post(url=f'{params["address"]}/sdapi/v1/unload-checkpoint', json='') - response.raise_for_status() - reload_model() - - elif actor == 'set': - print("VRAM mangement activated -- requesting Auto1111 to vacate VRAM...") - response = requests.post(url=f'{params["address"]}/sdapi/v1/unload-checkpoint', json='') - response.raise_for_status() - - elif actor == 'reset': - print("VRAM mangement deactivated -- requesting Auto1111 to reload checkpoint") - response = requests.post(url=f'{params["address"]}/sdapi/v1/reload-checkpoint', json='') - response.raise_for_status() - - else: - raise RuntimeError(f'Managing VRAM: "{actor}" is not a known state!') - - response.raise_for_status() - del response - - -if params['manage_VRAM']: - give_VRAM_priority('set') - -samplers = ['DDIM', 'DPM++ 2M Karras'] # TODO: get the availible samplers with http://{address}}/sdapi/v1/samplers -SD_models = ['NeverEndingDream'] # TODO: get with http://{address}}/sdapi/v1/sd-models and allow user to select - -streaming_state = shared.args.no_stream # remember if chat streaming was enabled -picture_response = False # specifies if the next model response should appear as a picture - -def remove_surrounded_chars(string): - # this expression matches to 'as few symbols as possible (0 upwards) between any asterisks' OR - # 'as few symbols as possible (0 upwards) between an asterisk and the end of the string' - return re.sub('\*[^\*]*?(\*|$)', '', string) - - -def triggers_are_in(string): - string = remove_surrounded_chars(string) - # regex searches for send|main|message|me (at the end of the word) followed by - # a whole word of image|pic|picture|photo|snap|snapshot|selfie|meme(s), - # (?aims) are regex parser flags - return bool(re.search('(?aims)(send|mail|message|me)\\b.+?\\b(image|pic(ture)?|photo|snap(shot)?|selfie|meme)s?\\b', string)) - - -def input_modifier(string): - """ - This function is applied 
to your text inputs before - they are fed into the model. - """ - - global params - - if not params['mode'] == 1: # if not in immersive/interactive mode, do nothing - return string - - if triggers_are_in(string): # if we're in it, check for trigger words - toggle_generation(True) - string = string.lower() - if "of" in string: - subject = string.split('of', 1)[1] # subdivide the string once by the first 'of' instance and get what's coming after it - string = "Please provide a detailed and vivid description of " + subject - else: - string = "Please provide a detailed description of your appearance, your surroundings and what you are doing right now" - - return string - -# Get and save the Stable Diffusion-generated picture -def get_SD_pictures(description): - - global params - - if params['manage_VRAM']: - give_VRAM_priority('SD') - - payload = { - "prompt": params['prompt_prefix'] + description, - "seed": params['seed'], - "sampler_name": params['sampler_name'], - "enable_hr": params['enable_hr'], - "hr_scale": params['hr_scale'], - "hr_upscaler": params['hr_upscaler'], - "denoising_strength": params['denoising_strength'], - "steps": params['steps'], - "cfg_scale": params['cfg_scale'], - "width": params['width'], - "height": params['height'], - "restore_faces": params['restore_faces'], - "override_settings_restore_afterwards": True, - "negative_prompt": params['negative_prompt'] - } - - print(f'Prompting the image generator via the API on {params["address"]}...') - response = requests.post(url=f'{params["address"]}/sdapi/v1/txt2img', json=payload) - response.raise_for_status() - r = response.json() - - visible_result = "" - for img_str in r['images']: - if params['save_img']: - img_data = base64.b64decode(img_str) - - variadic = f'{date.today().strftime("%Y_%m_%d")}/{shared.character}_{int(time.time())}' - output_file = Path(f'extensions/sd_api_pictures/outputs/{variadic}.png') - output_file.parent.mkdir(parents=True, exist_ok=True) - - with open(output_file.as_posix(), 'wb') as f: - f.write(img_data) - - visible_result = visible_result + f'{description}\n' - else: - image = Image.open(io.BytesIO(base64.b64decode(img_str.split(",", 1)[0]))) - # lower the resolution of received images for the chat, otherwise the log size gets out of control quickly with all the base64 values in visible history - image.thumbnail((300, 300)) - buffered = io.BytesIO() - image.save(buffered, format="JPEG") - buffered.seek(0) - image_bytes = buffered.getvalue() - img_str = "data:image/jpeg;base64," + base64.b64encode(image_bytes).decode() - visible_result = visible_result + f'{description}\n' - - if params['manage_VRAM']: - give_VRAM_priority('LLM') - - return visible_result - -# TODO: how do I make the UI history ignore the resulting pictures (I don't want HTML to appear in history) -# and replace it with 'text' for the purposes of logging? -def output_modifier(string): - """ - This function is applied to the model outputs. 
- """ - - global picture_response, params - - if not picture_response: - return string - - string = remove_surrounded_chars(string) - string = string.replace('"', '') - string = string.replace('“', '') - string = string.replace('\n', ' ') - string = string.strip() - - if string == '': - string = 'no viable description in reply, try regenerating' - return string - - text = "" - if (params['mode'] < 2): - toggle_generation(False) - text = f'*Sends a picture which portrays: “{string}”*' - else: - text = string - - string = get_SD_pictures(string) + "\n" + text - - return string - - -def bot_prefix_modifier(string): - """ - This function is only applied in chat mode. It modifies - the prefix text for the Bot and can be used to bias its - behavior. - """ - - return string - - -def toggle_generation(*args): - global picture_response, shared, streaming_state - - if not args: - picture_response = not picture_response - else: - picture_response = args[0] - - shared.args.no_stream = True if picture_response else streaming_state # Disable streaming cause otherwise the SD-generated picture would return as a dud - shared.processing_message = "*Is sending a picture...*" if picture_response else "*Is typing...*" - - -def filter_address(address): - address = address.strip() - # address = re.sub('http(s)?:\/\/|\/$','',address) # remove starting http:// OR https:// OR trailing slash - address = re.sub('\/$', '', address) # remove trailing /s - if not address.startswith('http'): - address = 'http://' + address - return address - - -def SD_api_address_update(address): - - global params - - msg = "✔️ SD API is found on:" - address = filter_address(address) - params.update({"address": address}) - try: - response = requests.get(url=f'{params["address"]}/sdapi/v1/sd-models') - response.raise_for_status() - # r = response.json() - except: - msg = "❌ No SD API endpoint on:" - - return gr.Textbox.update(label=msg) - -def ui(): - - # Gradio elements - # gr.Markdown('### Stable Diffusion API Pictures') # Currently the name of extension is shown as the title - with gr.Accordion("Parameters", open=True, elem_classes="SDAP"): - with gr.Row(): - address = gr.Textbox(placeholder=params['address'], value=params['address'], label='Auto1111\'s WebUI address') - modes_list = ["Manual", "Immersive/Interactive", "Picturebook/Adventure"] - mode = gr.Dropdown(modes_list, value=modes_list[params['mode']], label="Mode of operation", type="index") - with gr.Column(scale=1, min_width=300): - manage_VRAM = gr.Checkbox(value=params['manage_VRAM'], label='Manage VRAM') - save_img = gr.Checkbox(value=params['save_img'], label='Keep original images and use them in chat') - - force_pic = gr.Button("Force the picture response") - suppr_pic = gr.Button("Suppress the picture response") - - with gr.Accordion("Generation parameters", open=False): - prompt_prefix = gr.Textbox(placeholder=params['prompt_prefix'], value=params['prompt_prefix'], label='Prompt Prefix (best used to describe the look of the character)') - negative_prompt = gr.Textbox(placeholder=params['negative_prompt'], value=params['negative_prompt'], label='Negative Prompt') - with gr.Row(): - with gr.Column(): - width = gr.Slider(256, 768, value=params['width'], step=64, label='Width') - height = gr.Slider(256, 768, value=params['height'], step=64, label='Height') - with gr.Column(): - sampler_name = gr.Textbox(placeholder=params['sampler_name'], value=params['sampler_name'], label='Sampling method', elem_id="sampler_box") - steps = gr.Slider(1, 150, value=params['steps'], step=1, 
label="Sampling steps") - with gr.Row(): - seed = gr.Number(label="Seed", value=params['seed'], elem_id="seed_box") - cfg_scale = gr.Number(label="CFG Scale", value=params['cfg_scale'], elem_id="cfg_box") - with gr.Column() as hr_options: - restore_faces = gr.Checkbox(value=params['restore_faces'], label='Restore faces') - enable_hr = gr.Checkbox(value=params['enable_hr'], label='Hires. fix') - with gr.Row(visible=params['enable_hr'], elem_classes="hires_opts") as hr_options: - hr_scale = gr.Slider(1, 4, value=params['hr_scale'], step=0.1, label='Upscale by') - denoising_strength = gr.Slider(0, 1, value=params['denoising_strength'], step=0.01, label='Denoising strength') - hr_upscaler = gr.Textbox(placeholder=params['hr_upscaler'], value=params['hr_upscaler'], label='Upscaler') - - - # Event functions to update the parameters in the backend - address.change(lambda x: params.update({"address": filter_address(x)}), address, None) - mode.select(lambda x: params.update({"mode": x}), mode, None) - mode.select(lambda x: toggle_generation(x > 1), inputs=mode, outputs=None) - manage_VRAM.change(lambda x: params.update({"manage_VRAM": x}), manage_VRAM, None) - manage_VRAM.change(lambda x: give_VRAM_priority('set' if x else 'reset'), inputs=manage_VRAM, outputs=None) - save_img.change(lambda x: params.update({"save_img": x}), save_img, None) - - address.submit(fn=SD_api_address_update, inputs=address, outputs=address) - prompt_prefix.change(lambda x: params.update({"prompt_prefix": x}), prompt_prefix, None) - negative_prompt.change(lambda x: params.update({"negative_prompt": x}), negative_prompt, None) - width.change(lambda x: params.update({"width": x}), width, None) - height.change(lambda x: params.update({"height": x}), height, None) - hr_scale.change(lambda x: params.update({"hr_scale": x}), hr_scale, None) - denoising_strength.change(lambda x: params.update({"denoising_strength": x}), denoising_strength, None) - restore_faces.change(lambda x: params.update({"restore_faces": x}), restore_faces, None) - hr_upscaler.change(lambda x: params.update({"hr_upscaler": x}), hr_upscaler, None) - enable_hr.change(lambda x: params.update({"enable_hr": x}), enable_hr, None) - enable_hr.change(lambda x: hr_options.update(visible=params["enable_hr"]), enable_hr, hr_options) - - sampler_name.change(lambda x: params.update({"sampler_name": x}), sampler_name, None) - steps.change(lambda x: params.update({"steps": x}), steps, None) - seed.change(lambda x: params.update({"seed": x}), seed, None) - cfg_scale.change(lambda x: params.update({"cfg_scale": x}), cfg_scale, None) - - force_pic.click(lambda x: toggle_generation(True), inputs=force_pic, outputs=None) - suppr_pic.click(lambda x: toggle_generation(False), inputs=suppr_pic, outputs=None) diff --git a/spaces/aodianyun/panoptic-segment-anything/segment_anything/segment_anything/utils/transforms.py b/spaces/aodianyun/panoptic-segment-anything/segment_anything/segment_anything/utils/transforms.py deleted file mode 100644 index 3ad346661f84b0647026e130a552c4b38b83e2ac..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/panoptic-segment-anything/segment_anything/segment_anything/utils/transforms.py +++ /dev/null @@ -1,102 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
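The file removed next is SAM's resize and coordinate-transform helper. As a hedged sketch of how the `ResizeLongestSide` class defined below is typically used (it assumes the `segment_anything` package is importable and uses made-up image and box sizes):

```python
# Hedged usage sketch for ResizeLongestSide; not part of the deleted file.
import numpy as np
from segment_anything.utils.transforms import ResizeLongestSide

transform = ResizeLongestSide(1024)                    # target length for the longest side
image = np.zeros((600, 800, 3), dtype=np.uint8)        # HxWxC uint8 image
resized = transform.apply_image(image)                 # longest side becomes 1024

boxes = np.array([[10.0, 20.0, 200.0, 300.0]])         # xyxy boxes in original pixels
boxes_scaled = transform.apply_boxes(boxes, image.shape[:2])
```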
- -import numpy as np -import torch -from torch.nn import functional as F -from torchvision.transforms.functional import resize, to_pil_image # type: ignore - -from copy import deepcopy -from typing import Tuple - - -class ResizeLongestSide: - """ - Resizes images to longest side 'target_length', as well as provides - methods for resizing coordinates and boxes. Provides methods for - transforming both numpy array and batched torch tensors. - """ - - def __init__(self, target_length: int) -> None: - self.target_length = target_length - - def apply_image(self, image: np.ndarray) -> np.ndarray: - """ - Expects a numpy array with shape HxWxC in uint8 format. - """ - target_size = self.get_preprocess_shape(image.shape[0], image.shape[1], self.target_length) - return np.array(resize(to_pil_image(image), target_size)) - - def apply_coords(self, coords: np.ndarray, original_size: Tuple[int, ...]) -> np.ndarray: - """ - Expects a numpy array of length 2 in the final dimension. Requires the - original image size in (H, W) format. - """ - old_h, old_w = original_size - new_h, new_w = self.get_preprocess_shape( - original_size[0], original_size[1], self.target_length - ) - coords = deepcopy(coords).astype(float) - coords[..., 0] = coords[..., 0] * (new_w / old_w) - coords[..., 1] = coords[..., 1] * (new_h / old_h) - return coords - - def apply_boxes(self, boxes: np.ndarray, original_size: Tuple[int, ...]) -> np.ndarray: - """ - Expects a numpy array shape Bx4. Requires the original image size - in (H, W) format. - """ - boxes = self.apply_coords(boxes.reshape(-1, 2, 2), original_size) - return boxes.reshape(-1, 4) - - def apply_image_torch(self, image: torch.Tensor) -> torch.Tensor: - """ - Expects batched images with shape BxCxHxW and float format. This - transformation may not exactly match apply_image. apply_image is - the transformation expected by the model. - """ - # Expects an image in BCHW format. May not exactly match apply_image. - target_size = self.get_preprocess_shape(image.shape[0], image.shape[1], self.target_length) - return F.interpolate( - image, target_size, mode="bilinear", align_corners=False, antialias=True - ) - - def apply_coords_torch( - self, coords: torch.Tensor, original_size: Tuple[int, ...] - ) -> torch.Tensor: - """ - Expects a torch tensor with length 2 in the last dimension. Requires the - original image size in (H, W) format. - """ - old_h, old_w = original_size - new_h, new_w = self.get_preprocess_shape( - original_size[0], original_size[1], self.target_length - ) - coords = deepcopy(coords).to(torch.float) - coords[..., 0] = coords[..., 0] * (new_w / old_w) - coords[..., 1] = coords[..., 1] * (new_h / old_h) - return coords - - def apply_boxes_torch( - self, boxes: torch.Tensor, original_size: Tuple[int, ...] - ) -> torch.Tensor: - """ - Expects a torch tensor with shape Bx4. Requires the original image - size in (H, W) format. - """ - boxes = self.apply_coords_torch(boxes.reshape(-1, 2, 2), original_size) - return boxes.reshape(-1, 4) - - @staticmethod - def get_preprocess_shape(oldh: int, oldw: int, long_side_length: int) -> Tuple[int, int]: - """ - Compute the output size given input size and target long side length. 
- """ - scale = long_side_length * 1.0 / max(oldh, oldw) - newh, neww = oldh * scale, oldw * scale - neww = int(neww + 0.5) - newh = int(newh + 0.5) - return (newh, neww) diff --git a/spaces/aodianyun/stable-diffusion-webui/script.js b/spaces/aodianyun/stable-diffusion-webui/script.js deleted file mode 100644 index 97e0bfcf9fa3cc6b5823d86b0949ab9c947c6418..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/script.js +++ /dev/null @@ -1,102 +0,0 @@ -function gradioApp() { - const elems = document.getElementsByTagName('gradio-app') - const gradioShadowRoot = elems.length == 0 ? null : elems[0].shadowRoot - return !!gradioShadowRoot ? gradioShadowRoot : document; -} - -function get_uiCurrentTab() { - return gradioApp().querySelector('#tabs button:not(.border-transparent)') -} - -function get_uiCurrentTabContent() { - return gradioApp().querySelector('.tabitem[id^=tab_]:not([style*="display: none"])') -} - -uiUpdateCallbacks = [] -uiLoadedCallbacks = [] -uiTabChangeCallbacks = [] -optionsChangedCallbacks = [] -let uiCurrentTab = null - -function onUiUpdate(callback){ - uiUpdateCallbacks.push(callback) -} -function onUiLoaded(callback){ - uiLoadedCallbacks.push(callback) -} -function onUiTabChange(callback){ - uiTabChangeCallbacks.push(callback) -} -function onOptionsChanged(callback){ - optionsChangedCallbacks.push(callback) -} - -function runCallback(x, m){ - try { - x(m) - } catch (e) { - (console.error || console.log).call(console, e.message, e); - } -} -function executeCallbacks(queue, m) { - queue.forEach(function(x){runCallback(x, m)}) -} - -var executedOnLoaded = false; - -document.addEventListener("DOMContentLoaded", function() { - var mutationObserver = new MutationObserver(function(m){ - if(!executedOnLoaded && gradioApp().querySelector('#txt2img_prompt')){ - executedOnLoaded = true; - executeCallbacks(uiLoadedCallbacks); - } - - executeCallbacks(uiUpdateCallbacks, m); - const newTab = get_uiCurrentTab(); - if ( newTab && ( newTab !== uiCurrentTab ) ) { - uiCurrentTab = newTab; - executeCallbacks(uiTabChangeCallbacks); - } - }); - mutationObserver.observe( gradioApp(), { childList:true, subtree:true }) -}); - -/** - * Add a ctrl+enter as a shortcut to start a generation - */ -document.addEventListener('keydown', function(e) { - var handled = false; - if (e.key !== undefined) { - if((e.key == "Enter" && (e.metaKey || e.ctrlKey || e.altKey))) handled = true; - } else if (e.keyCode !== undefined) { - if((e.keyCode == 13 && (e.metaKey || e.ctrlKey || e.altKey))) handled = true; - } - if (handled) { - button = get_uiCurrentTabContent().querySelector('button[id$=_generate]'); - if (button) { - button.click(); - } - e.preventDefault(); - } -}) - -/** - * checks that a UI element is not in another hidden element or tab content - */ -function uiElementIsVisible(el) { - let isVisible = !el.closest('.\\!hidden'); - if ( ! isVisible ) { - return false; - } - - while( isVisible = el.closest('.tabitem')?.style.display !== 'none' ) { - if ( ! 
isVisible ) { - return false; - } else if ( el.parentElement ) { - el = el.parentElement - } else { - break; - } - } - return isVisible; -} diff --git a/spaces/aphenx/bingo/src/components/chat-history.tsx b/spaces/aphenx/bingo/src/components/chat-history.tsx deleted file mode 100644 index feb81de66562edda8f40d3c0cc717202c92b6509..0000000000000000000000000000000000000000 --- a/spaces/aphenx/bingo/src/components/chat-history.tsx +++ /dev/null @@ -1,48 +0,0 @@ -import { IconEdit, IconTrash, IconMore, IconDownload } from "./ui/icons" - -export function ChatHistory() { - return ( -
-
- History -
-
-
-
-
-
-
- -
-

Untitled chat

-
-

1:42 AM

-
- - - - - - - - -
-
-
-
-
-
-
-
- ) -} diff --git a/spaces/artificialguybr/video-dubbing/TTS/docs/Makefile b/spaces/artificialguybr/video-dubbing/TTS/docs/Makefile deleted file mode 100644 index b1d20a99ed037c92d31a927f2bb01fb801b59bf2..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/docs/Makefile +++ /dev/null @@ -1,20 +0,0 @@ -# Minimal makefile for Sphinx documentation -# - -# You can set these variables from the command line, and also -# from the environment for the first two. -SPHINXOPTS ?= -j auto -WT --keep-going -SPHINXBUILD ?= sphinx-build -SOURCEDIR = source -BUILDDIR = _build - -# Put it first so that "make" without argument is like "make help". -help: - @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) - -.PHONY: help Makefile - -# Catch-all target: route all unknown targets to Sphinx using the new -# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). -%: Makefile - @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) diff --git a/spaces/artificialguybr/video-dubbing/TTS/tests/vocoder_tests/test_vocoder_pqmf.py b/spaces/artificialguybr/video-dubbing/TTS/tests/vocoder_tests/test_vocoder_pqmf.py deleted file mode 100644 index afe8d1dc8f8bf462cb3f030d3d8f113ed547c7d9..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/tests/vocoder_tests/test_vocoder_pqmf.py +++ /dev/null @@ -1,26 +0,0 @@ -import os - -import soundfile as sf -import torch -from librosa.core import load - -from tests import get_tests_input_path, get_tests_output_path, get_tests_path -from TTS.vocoder.layers.pqmf import PQMF - -TESTS_PATH = get_tests_path() -WAV_FILE = os.path.join(get_tests_input_path(), "example_1.wav") - - -def test_pqmf(): - w, sr = load(WAV_FILE) - - layer = PQMF(N=4, taps=62, cutoff=0.15, beta=9.0) - w, sr = load(WAV_FILE) - w2 = torch.from_numpy(w[None, None, :]) - b2 = layer.analysis(w2) - w2_ = layer.synthesis(b2) - - print(w2_.max()) - print(w2_.min()) - print(w2_.mean()) - sf.write(os.path.join(get_tests_output_path(), "pqmf_output.wav"), w2_.flatten().detach(), sr) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/SHA256.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/SHA256.py deleted file mode 100644 index 957aa37e0aa8f922971464c59bb9a2bb06dc72bb..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/Hash/SHA256.py +++ /dev/null @@ -1,185 +0,0 @@ -# -*- coding: utf-8 -*- -# -# =================================================================== -# The contents of this file are dedicated to the public domain. To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. 
-# =================================================================== - -from Crypto.Util.py3compat import bord - -from Crypto.Util._raw_api import (load_pycryptodome_raw_lib, - VoidPointer, SmartPointer, - create_string_buffer, - get_raw_buffer, c_size_t, - c_uint8_ptr) - -_raw_sha256_lib = load_pycryptodome_raw_lib("Crypto.Hash._SHA256", - """ - int SHA256_init(void **shaState); - int SHA256_destroy(void *shaState); - int SHA256_update(void *hs, - const uint8_t *buf, - size_t len); - int SHA256_digest(const void *shaState, - uint8_t *digest, - size_t digest_size); - int SHA256_copy(const void *src, void *dst); - - int SHA256_pbkdf2_hmac_assist(const void *inner, - const void *outer, - const uint8_t *first_digest, - uint8_t *final_digest, - size_t iterations, - size_t digest_size); - """) - -class SHA256Hash(object): - """A SHA-256 hash object. - Do not instantiate directly. Use the :func:`new` function. - - :ivar oid: ASN.1 Object ID - :vartype oid: string - - :ivar block_size: the size in bytes of the internal message block, - input to the compression function - :vartype block_size: integer - - :ivar digest_size: the size in bytes of the resulting hash - :vartype digest_size: integer - """ - - # The size of the resulting hash in bytes. - digest_size = 32 - # The internal block size of the hash algorithm in bytes. - block_size = 64 - # ASN.1 Object ID - oid = "2.16.840.1.101.3.4.2.1" - - def __init__(self, data=None): - state = VoidPointer() - result = _raw_sha256_lib.SHA256_init(state.address_of()) - if result: - raise ValueError("Error %d while instantiating SHA256" - % result) - self._state = SmartPointer(state.get(), - _raw_sha256_lib.SHA256_destroy) - if data: - self.update(data) - - def update(self, data): - """Continue hashing of a message by consuming the next chunk of data. - - Args: - data (byte string/byte array/memoryview): The next chunk of the message being hashed. - """ - - result = _raw_sha256_lib.SHA256_update(self._state.get(), - c_uint8_ptr(data), - c_size_t(len(data))) - if result: - raise ValueError("Error %d while hashing data with SHA256" - % result) - - def digest(self): - """Return the **binary** (non-printable) digest of the message that has been hashed so far. - - :return: The hash digest, computed over the data processed so far. - Binary form. - :rtype: byte string - """ - - bfr = create_string_buffer(self.digest_size) - result = _raw_sha256_lib.SHA256_digest(self._state.get(), - bfr, - c_size_t(self.digest_size)) - if result: - raise ValueError("Error %d while making SHA256 digest" - % result) - - return get_raw_buffer(bfr) - - def hexdigest(self): - """Return the **printable** digest of the message that has been hashed so far. - - :return: The hash digest, computed over the data processed so far. - Hexadecimal encoded. - :rtype: string - """ - - return "".join(["%02x" % bord(x) for x in self.digest()]) - - def copy(self): - """Return a copy ("clone") of the hash object. - - The copy will have the same internal state as the original hash - object. - This can be used to efficiently compute the digests of strings that - share a common initial substring. - - :return: A hash object of the same type - """ - - clone = SHA256Hash() - result = _raw_sha256_lib.SHA256_copy(self._state.get(), - clone._state.get()) - if result: - raise ValueError("Error %d while copying SHA256" % result) - return clone - - def new(self, data=None): - """Create a fresh SHA-256 hash object.""" - - return SHA256Hash(data) - -def new(data=None): - """Create a new hash object. 
- - :parameter data: - Optional. The very first chunk of the message to hash. - It is equivalent to an early call to :meth:`SHA256Hash.update`. - :type data: byte string/byte array/memoryview - - :Return: A :class:`SHA256Hash` hash object - """ - - return SHA256Hash().new(data) - - -# The size of the resulting hash in bytes. -digest_size = SHA256Hash.digest_size - -# The internal block size of the hash algorithm in bytes. -block_size = SHA256Hash.block_size - - -def _pbkdf2_hmac_assist(inner, outer, first_digest, iterations): - """Compute the expensive inner loop in PBKDF-HMAC.""" - - assert iterations > 0 - - bfr = create_string_buffer(len(first_digest)); - result = _raw_sha256_lib.SHA256_pbkdf2_hmac_assist( - inner._state.get(), - outer._state.get(), - first_digest, - bfr, - c_size_t(iterations), - c_size_t(len(first_digest))) - - if result: - raise ValueError("Error %d with PBKDF2-HMAC assist for SHA256" % result) - - return get_raw_buffer(bfr) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/PyAccess.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/PyAccess.py deleted file mode 100644 index 9a2ec48fc60bdec98b4baa9a9c2fc3b1f818c1af..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/PyAccess.py +++ /dev/null @@ -1,358 +0,0 @@ -# -# The Python Imaging Library -# Pillow fork -# -# Python implementation of the PixelAccess Object -# -# Copyright (c) 1997-2009 by Secret Labs AB. All rights reserved. -# Copyright (c) 1995-2009 by Fredrik Lundh. -# Copyright (c) 2013 Eric Soroos -# -# See the README file for information on usage and redistribution -# - -# Notes: -# -# * Implements the pixel access object following Access. -# * Does not implement the line functions, as they don't appear to be used -# * Taking only the tuple form, which is used from python. -# * Fill.c uses the integer form, but it's still going to use the old -# Access.c implementation. -# - -import logging -import sys - -try: - from cffi import FFI - - defs = """ - struct Pixel_RGBA { - unsigned char r,g,b,a; - }; - struct Pixel_I16 { - unsigned char l,r; - }; - """ - ffi = FFI() - ffi.cdef(defs) -except ImportError as ex: - # Allow error import for doc purposes, but error out when accessing - # anything in core. - from ._util import DeferredError - - FFI = ffi = DeferredError(ex) - -logger = logging.getLogger(__name__) - - -class PyAccess: - def __init__(self, img, readonly=False): - vals = dict(img.im.unsafe_ptrs) - self.readonly = readonly - self.image8 = ffi.cast("unsigned char **", vals["image8"]) - self.image32 = ffi.cast("int **", vals["image32"]) - self.image = ffi.cast("unsigned char **", vals["image"]) - self.xsize, self.ysize = img.im.size - self._img = img - - # Keep pointer to im object to prevent dereferencing. - self._im = img.im - if self._im.mode in ("P", "PA"): - self._palette = img.palette - - # Debugging is polluting test traces, only useful here - # when hacking on PyAccess - # logger.debug("%s", vals) - self._post_init() - - def _post_init(self): - pass - - def __setitem__(self, xy, color): - """ - Modifies the pixel at x,y. The color is given as a single - numerical value for single band images, and a tuple for - multi-band images - - :param xy: The pixel coordinate, given as (x, y). See - :ref:`coordinate-system`. - :param color: The pixel value. 
- """ - if self.readonly: - raise ValueError("Attempt to putpixel a read only image") - (x, y) = xy - if x < 0: - x = self.xsize + x - if y < 0: - y = self.ysize + y - (x, y) = self.check_xy((x, y)) - - if ( - self._im.mode in ("P", "PA") - and isinstance(color, (list, tuple)) - and len(color) in [3, 4] - ): - # RGB or RGBA value for a P or PA image - if self._im.mode == "PA": - alpha = color[3] if len(color) == 4 else 255 - color = color[:3] - color = self._palette.getcolor(color, self._img) - if self._im.mode == "PA": - color = (color, alpha) - - return self.set_pixel(x, y, color) - - def __getitem__(self, xy): - """ - Returns the pixel at x,y. The pixel is returned as a single - value for single band images or a tuple for multiple band - images - - :param xy: The pixel coordinate, given as (x, y). See - :ref:`coordinate-system`. - :returns: a pixel value for single band images, a tuple of - pixel values for multiband images. - """ - (x, y) = xy - if x < 0: - x = self.xsize + x - if y < 0: - y = self.ysize + y - (x, y) = self.check_xy((x, y)) - return self.get_pixel(x, y) - - putpixel = __setitem__ - getpixel = __getitem__ - - def check_xy(self, xy): - (x, y) = xy - if not (0 <= x < self.xsize and 0 <= y < self.ysize): - raise ValueError("pixel location out of range") - return xy - - -class _PyAccess32_2(PyAccess): - """PA, LA, stored in first and last bytes of a 32 bit word""" - - def _post_init(self, *args, **kwargs): - self.pixels = ffi.cast("struct Pixel_RGBA **", self.image32) - - def get_pixel(self, x, y): - pixel = self.pixels[y][x] - return pixel.r, pixel.a - - def set_pixel(self, x, y, color): - pixel = self.pixels[y][x] - # tuple - pixel.r = min(color[0], 255) - pixel.a = min(color[1], 255) - - -class _PyAccess32_3(PyAccess): - """RGB and friends, stored in the first three bytes of a 32 bit word""" - - def _post_init(self, *args, **kwargs): - self.pixels = ffi.cast("struct Pixel_RGBA **", self.image32) - - def get_pixel(self, x, y): - pixel = self.pixels[y][x] - return pixel.r, pixel.g, pixel.b - - def set_pixel(self, x, y, color): - pixel = self.pixels[y][x] - # tuple - pixel.r = min(color[0], 255) - pixel.g = min(color[1], 255) - pixel.b = min(color[2], 255) - pixel.a = 255 - - -class _PyAccess32_4(PyAccess): - """RGBA etc, all 4 bytes of a 32 bit word""" - - def _post_init(self, *args, **kwargs): - self.pixels = ffi.cast("struct Pixel_RGBA **", self.image32) - - def get_pixel(self, x, y): - pixel = self.pixels[y][x] - return pixel.r, pixel.g, pixel.b, pixel.a - - def set_pixel(self, x, y, color): - pixel = self.pixels[y][x] - # tuple - pixel.r = min(color[0], 255) - pixel.g = min(color[1], 255) - pixel.b = min(color[2], 255) - pixel.a = min(color[3], 255) - - -class _PyAccess8(PyAccess): - """1, L, P, 8 bit images stored as uint8""" - - def _post_init(self, *args, **kwargs): - self.pixels = self.image8 - - def get_pixel(self, x, y): - return self.pixels[y][x] - - def set_pixel(self, x, y, color): - try: - # integer - self.pixels[y][x] = min(color, 255) - except TypeError: - # tuple - self.pixels[y][x] = min(color[0], 255) - - -class _PyAccessI16_N(PyAccess): - """I;16 access, native bitendian without conversion""" - - def _post_init(self, *args, **kwargs): - self.pixels = ffi.cast("unsigned short **", self.image) - - def get_pixel(self, x, y): - return self.pixels[y][x] - - def set_pixel(self, x, y, color): - try: - # integer - self.pixels[y][x] = min(color, 65535) - except TypeError: - # tuple - self.pixels[y][x] = min(color[0], 65535) - - -class _PyAccessI16_L(PyAccess): - 
"""I;16L access, with conversion""" - - def _post_init(self, *args, **kwargs): - self.pixels = ffi.cast("struct Pixel_I16 **", self.image) - - def get_pixel(self, x, y): - pixel = self.pixels[y][x] - return pixel.l + pixel.r * 256 - - def set_pixel(self, x, y, color): - pixel = self.pixels[y][x] - try: - color = min(color, 65535) - except TypeError: - color = min(color[0], 65535) - - pixel.l = color & 0xFF # noqa: E741 - pixel.r = color >> 8 - - -class _PyAccessI16_B(PyAccess): - """I;16B access, with conversion""" - - def _post_init(self, *args, **kwargs): - self.pixels = ffi.cast("struct Pixel_I16 **", self.image) - - def get_pixel(self, x, y): - pixel = self.pixels[y][x] - return pixel.l * 256 + pixel.r - - def set_pixel(self, x, y, color): - pixel = self.pixels[y][x] - try: - color = min(color, 65535) - except Exception: - color = min(color[0], 65535) - - pixel.l = color >> 8 # noqa: E741 - pixel.r = color & 0xFF - - -class _PyAccessI32_N(PyAccess): - """Signed Int32 access, native endian""" - - def _post_init(self, *args, **kwargs): - self.pixels = self.image32 - - def get_pixel(self, x, y): - return self.pixels[y][x] - - def set_pixel(self, x, y, color): - self.pixels[y][x] = color - - -class _PyAccessI32_Swap(PyAccess): - """I;32L/B access, with byteswapping conversion""" - - def _post_init(self, *args, **kwargs): - self.pixels = self.image32 - - def reverse(self, i): - orig = ffi.new("int *", i) - chars = ffi.cast("unsigned char *", orig) - chars[0], chars[1], chars[2], chars[3] = chars[3], chars[2], chars[1], chars[0] - return ffi.cast("int *", chars)[0] - - def get_pixel(self, x, y): - return self.reverse(self.pixels[y][x]) - - def set_pixel(self, x, y, color): - self.pixels[y][x] = self.reverse(color) - - -class _PyAccessF(PyAccess): - """32 bit float access""" - - def _post_init(self, *args, **kwargs): - self.pixels = ffi.cast("float **", self.image32) - - def get_pixel(self, x, y): - return self.pixels[y][x] - - def set_pixel(self, x, y, color): - try: - # not a tuple - self.pixels[y][x] = color - except TypeError: - # tuple - self.pixels[y][x] = color[0] - - -mode_map = { - "1": _PyAccess8, - "L": _PyAccess8, - "P": _PyAccess8, - "LA": _PyAccess32_2, - "La": _PyAccess32_2, - "PA": _PyAccess32_2, - "RGB": _PyAccess32_3, - "LAB": _PyAccess32_3, - "HSV": _PyAccess32_3, - "YCbCr": _PyAccess32_3, - "RGBA": _PyAccess32_4, - "RGBa": _PyAccess32_4, - "RGBX": _PyAccess32_4, - "CMYK": _PyAccess32_4, - "F": _PyAccessF, - "I": _PyAccessI32_N, -} - -if sys.byteorder == "little": - mode_map["I;16"] = _PyAccessI16_N - mode_map["I;16L"] = _PyAccessI16_N - mode_map["I;16B"] = _PyAccessI16_B - - mode_map["I;32L"] = _PyAccessI32_N - mode_map["I;32B"] = _PyAccessI32_Swap -else: - mode_map["I;16"] = _PyAccessI16_L - mode_map["I;16L"] = _PyAccessI16_L - mode_map["I;16B"] = _PyAccessI16_N - - mode_map["I;32L"] = _PyAccessI32_Swap - mode_map["I;32B"] = _PyAccessI32_N - - -def new(img, readonly=False): - access_type = mode_map.get(img.mode, None) - if not access_type: - logger.debug("PyAccess Not Implemented: %s", img.mode) - return None - return access_type(img, readonly) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/charset_normalizer/legacy.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/charset_normalizer/legacy.py deleted file mode 100644 index cdebe2b81cf1b706994ae3dd9c48adc42bf9b357..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/charset_normalizer/legacy.py +++ /dev/null @@ -1,95 +0,0 @@ -import 
warnings -from typing import Dict, Optional, Union - -from .api import from_bytes, from_fp, from_path, normalize -from .constant import CHARDET_CORRESPONDENCE -from .models import CharsetMatch, CharsetMatches - - -def detect(byte_str: bytes) -> Dict[str, Optional[Union[str, float]]]: - """ - chardet legacy method - Detect the encoding of the given byte string. It should be mostly backward-compatible. - Encoding name will match Chardet own writing whenever possible. (Not on encoding name unsupported by it) - This function is deprecated and should be used to migrate your project easily, consult the documentation for - further information. Not planned for removal. - - :param byte_str: The byte sequence to examine. - """ - if not isinstance(byte_str, (bytearray, bytes)): - raise TypeError( # pragma: nocover - "Expected object of type bytes or bytearray, got: " - "{0}".format(type(byte_str)) - ) - - if isinstance(byte_str, bytearray): - byte_str = bytes(byte_str) - - r = from_bytes(byte_str).best() - - encoding = r.encoding if r is not None else None - language = r.language if r is not None and r.language != "Unknown" else "" - confidence = 1.0 - r.chaos if r is not None else None - - # Note: CharsetNormalizer does not return 'UTF-8-SIG' as the sig get stripped in the detection/normalization process - # but chardet does return 'utf-8-sig' and it is a valid codec name. - if r is not None and encoding == "utf_8" and r.bom: - encoding += "_sig" - - return { - "encoding": encoding - if encoding not in CHARDET_CORRESPONDENCE - else CHARDET_CORRESPONDENCE[encoding], - "language": language, - "confidence": confidence, - } - - -class CharsetNormalizerMatch(CharsetMatch): - pass - - -class CharsetNormalizerMatches(CharsetMatches): - @staticmethod - def from_fp(*args, **kwargs): # type: ignore - warnings.warn( # pragma: nocover - "staticmethod from_fp, from_bytes, from_path and normalize are deprecated " - "and scheduled to be removed in 3.0", - DeprecationWarning, - ) - return from_fp(*args, **kwargs) # pragma: nocover - - @staticmethod - def from_bytes(*args, **kwargs): # type: ignore - warnings.warn( # pragma: nocover - "staticmethod from_fp, from_bytes, from_path and normalize are deprecated " - "and scheduled to be removed in 3.0", - DeprecationWarning, - ) - return from_bytes(*args, **kwargs) # pragma: nocover - - @staticmethod - def from_path(*args, **kwargs): # type: ignore - warnings.warn( # pragma: nocover - "staticmethod from_fp, from_bytes, from_path and normalize are deprecated " - "and scheduled to be removed in 3.0", - DeprecationWarning, - ) - return from_path(*args, **kwargs) # pragma: nocover - - @staticmethod - def normalize(*args, **kwargs): # type: ignore - warnings.warn( # pragma: nocover - "staticmethod from_fp, from_bytes, from_path and normalize are deprecated " - "and scheduled to be removed in 3.0", - DeprecationWarning, - ) - return normalize(*args, **kwargs) # pragma: nocover - - -class CharsetDetector(CharsetNormalizerMatches): - pass - - -class CharsetDoctor(CharsetNormalizerMatches): - pass diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/click/_winconsole.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/click/_winconsole.py deleted file mode 100644 index 6b20df315b23ecd1e3d0ec32c11c0b5ced577efe..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/click/_winconsole.py +++ /dev/null @@ -1,279 +0,0 @@ -# This module is based on the excellent work by Adam Bartoš who -# provided a lot of what 
went into the implementation here in -# the discussion to issue1602 in the Python bug tracker. -# -# There are some general differences in regards to how this works -# compared to the original patches as we do not need to patch -# the entire interpreter but just work in our little world of -# echo and prompt. -import io -import sys -import time -import typing as t -from ctypes import byref -from ctypes import c_char -from ctypes import c_char_p -from ctypes import c_int -from ctypes import c_ssize_t -from ctypes import c_ulong -from ctypes import c_void_p -from ctypes import POINTER -from ctypes import py_object -from ctypes import Structure -from ctypes.wintypes import DWORD -from ctypes.wintypes import HANDLE -from ctypes.wintypes import LPCWSTR -from ctypes.wintypes import LPWSTR - -from ._compat import _NonClosingTextIOWrapper - -assert sys.platform == "win32" -import msvcrt # noqa: E402 -from ctypes import windll # noqa: E402 -from ctypes import WINFUNCTYPE # noqa: E402 - -c_ssize_p = POINTER(c_ssize_t) - -kernel32 = windll.kernel32 -GetStdHandle = kernel32.GetStdHandle -ReadConsoleW = kernel32.ReadConsoleW -WriteConsoleW = kernel32.WriteConsoleW -GetConsoleMode = kernel32.GetConsoleMode -GetLastError = kernel32.GetLastError -GetCommandLineW = WINFUNCTYPE(LPWSTR)(("GetCommandLineW", windll.kernel32)) -CommandLineToArgvW = WINFUNCTYPE(POINTER(LPWSTR), LPCWSTR, POINTER(c_int))( - ("CommandLineToArgvW", windll.shell32) -) -LocalFree = WINFUNCTYPE(c_void_p, c_void_p)(("LocalFree", windll.kernel32)) - -STDIN_HANDLE = GetStdHandle(-10) -STDOUT_HANDLE = GetStdHandle(-11) -STDERR_HANDLE = GetStdHandle(-12) - -PyBUF_SIMPLE = 0 -PyBUF_WRITABLE = 1 - -ERROR_SUCCESS = 0 -ERROR_NOT_ENOUGH_MEMORY = 8 -ERROR_OPERATION_ABORTED = 995 - -STDIN_FILENO = 0 -STDOUT_FILENO = 1 -STDERR_FILENO = 2 - -EOF = b"\x1a" -MAX_BYTES_WRITTEN = 32767 - -try: - from ctypes import pythonapi -except ImportError: - # On PyPy we cannot get buffers so our ability to operate here is - # severely limited. 
- get_buffer = None -else: - - class Py_buffer(Structure): - _fields_ = [ - ("buf", c_void_p), - ("obj", py_object), - ("len", c_ssize_t), - ("itemsize", c_ssize_t), - ("readonly", c_int), - ("ndim", c_int), - ("format", c_char_p), - ("shape", c_ssize_p), - ("strides", c_ssize_p), - ("suboffsets", c_ssize_p), - ("internal", c_void_p), - ] - - PyObject_GetBuffer = pythonapi.PyObject_GetBuffer - PyBuffer_Release = pythonapi.PyBuffer_Release - - def get_buffer(obj, writable=False): - buf = Py_buffer() - flags = PyBUF_WRITABLE if writable else PyBUF_SIMPLE - PyObject_GetBuffer(py_object(obj), byref(buf), flags) - - try: - buffer_type = c_char * buf.len - return buffer_type.from_address(buf.buf) - finally: - PyBuffer_Release(byref(buf)) - - -class _WindowsConsoleRawIOBase(io.RawIOBase): - def __init__(self, handle): - self.handle = handle - - def isatty(self): - super().isatty() - return True - - -class _WindowsConsoleReader(_WindowsConsoleRawIOBase): - def readable(self): - return True - - def readinto(self, b): - bytes_to_be_read = len(b) - if not bytes_to_be_read: - return 0 - elif bytes_to_be_read % 2: - raise ValueError( - "cannot read odd number of bytes from UTF-16-LE encoded console" - ) - - buffer = get_buffer(b, writable=True) - code_units_to_be_read = bytes_to_be_read // 2 - code_units_read = c_ulong() - - rv = ReadConsoleW( - HANDLE(self.handle), - buffer, - code_units_to_be_read, - byref(code_units_read), - None, - ) - if GetLastError() == ERROR_OPERATION_ABORTED: - # wait for KeyboardInterrupt - time.sleep(0.1) - if not rv: - raise OSError(f"Windows error: {GetLastError()}") - - if buffer[0] == EOF: - return 0 - return 2 * code_units_read.value - - -class _WindowsConsoleWriter(_WindowsConsoleRawIOBase): - def writable(self): - return True - - @staticmethod - def _get_error_message(errno): - if errno == ERROR_SUCCESS: - return "ERROR_SUCCESS" - elif errno == ERROR_NOT_ENOUGH_MEMORY: - return "ERROR_NOT_ENOUGH_MEMORY" - return f"Windows error {errno}" - - def write(self, b): - bytes_to_be_written = len(b) - buf = get_buffer(b) - code_units_to_be_written = min(bytes_to_be_written, MAX_BYTES_WRITTEN) // 2 - code_units_written = c_ulong() - - WriteConsoleW( - HANDLE(self.handle), - buf, - code_units_to_be_written, - byref(code_units_written), - None, - ) - bytes_written = 2 * code_units_written.value - - if bytes_written == 0 and bytes_to_be_written > 0: - raise OSError(self._get_error_message(GetLastError())) - return bytes_written - - -class ConsoleStream: - def __init__(self, text_stream: t.TextIO, byte_stream: t.BinaryIO) -> None: - self._text_stream = text_stream - self.buffer = byte_stream - - @property - def name(self) -> str: - return self.buffer.name - - def write(self, x: t.AnyStr) -> int: - if isinstance(x, str): - return self._text_stream.write(x) - try: - self.flush() - except Exception: - pass - return self.buffer.write(x) - - def writelines(self, lines: t.Iterable[t.AnyStr]) -> None: - for line in lines: - self.write(line) - - def __getattr__(self, name: str) -> t.Any: - return getattr(self._text_stream, name) - - def isatty(self) -> bool: - return self.buffer.isatty() - - def __repr__(self): - return f"" - - -def _get_text_stdin(buffer_stream: t.BinaryIO) -> t.TextIO: - text_stream = _NonClosingTextIOWrapper( - io.BufferedReader(_WindowsConsoleReader(STDIN_HANDLE)), - "utf-16-le", - "strict", - line_buffering=True, - ) - return t.cast(t.TextIO, ConsoleStream(text_stream, buffer_stream)) - - -def _get_text_stdout(buffer_stream: t.BinaryIO) -> t.TextIO: - text_stream = 
_NonClosingTextIOWrapper( - io.BufferedWriter(_WindowsConsoleWriter(STDOUT_HANDLE)), - "utf-16-le", - "strict", - line_buffering=True, - ) - return t.cast(t.TextIO, ConsoleStream(text_stream, buffer_stream)) - - -def _get_text_stderr(buffer_stream: t.BinaryIO) -> t.TextIO: - text_stream = _NonClosingTextIOWrapper( - io.BufferedWriter(_WindowsConsoleWriter(STDERR_HANDLE)), - "utf-16-le", - "strict", - line_buffering=True, - ) - return t.cast(t.TextIO, ConsoleStream(text_stream, buffer_stream)) - - -_stream_factories: t.Mapping[int, t.Callable[[t.BinaryIO], t.TextIO]] = { - 0: _get_text_stdin, - 1: _get_text_stdout, - 2: _get_text_stderr, -} - - -def _is_console(f: t.TextIO) -> bool: - if not hasattr(f, "fileno"): - return False - - try: - fileno = f.fileno() - except (OSError, io.UnsupportedOperation): - return False - - handle = msvcrt.get_osfhandle(fileno) - return bool(GetConsoleMode(handle, byref(DWORD()))) - - -def _get_windows_console_stream( - f: t.TextIO, encoding: t.Optional[str], errors: t.Optional[str] -) -> t.Optional[t.TextIO]: - if ( - get_buffer is not None - and encoding in {"utf-16-le", None} - and errors in {"strict", None} - and _is_console(f) - ): - func = _stream_factories.get(f.fileno()) - if func is not None: - b = getattr(f, "buffer", None) - - if b is None: - return None - - return func(b) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/audio/feature_transforms/__init__.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/audio/feature_transforms/__init__.py deleted file mode 100644 index 359fa069716cba0dd615ce0959368b20828c31f7..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/audio/feature_transforms/__init__.py +++ /dev/null @@ -1,82 +0,0 @@ -import importlib -import os -from abc import ABC, abstractmethod -from typing import Dict, Optional - - -class AudioFeatureTransform(ABC): - @classmethod - @abstractmethod - def from_config_dict(cls, config: Optional[Dict] = None): - pass - - -AUDIO_FEATURE_TRANSFORM_REGISTRY = {} -AUDIO_FEATURE_TRANSFORM_CLASS_NAMES = set() - - -def register_audio_feature_transform(name): - def register_audio_feature_transform_cls(cls): - if name in AUDIO_FEATURE_TRANSFORM_REGISTRY: - raise ValueError(f"Cannot register duplicate transform ({name})") - if not issubclass(cls, AudioFeatureTransform): - raise ValueError( - f"Transform ({name}: {cls.__name__}) must extend " - "AudioFeatureTransform" - ) - if cls.__name__ in AUDIO_FEATURE_TRANSFORM_CLASS_NAMES: - raise ValueError( - f"Cannot register audio feature transform with duplicate " - f"class name ({cls.__name__})" - ) - AUDIO_FEATURE_TRANSFORM_REGISTRY[name] = cls - AUDIO_FEATURE_TRANSFORM_CLASS_NAMES.add(cls.__name__) - return cls - - return register_audio_feature_transform_cls - - -def get_audio_feature_transform(name): - return AUDIO_FEATURE_TRANSFORM_REGISTRY[name] - - -transforms_dir = os.path.dirname(__file__) -for file in os.listdir(transforms_dir): - path = os.path.join(transforms_dir, file) - if ( - not file.startswith("_") - and not file.startswith(".") - and (file.endswith(".py") or os.path.isdir(path)) - ): - name = file[: file.find(".py")] if file.endswith(".py") else file - importlib.import_module("fairseq.data.audio.feature_transforms." 
+ name) - - -class CompositeAudioFeatureTransform(AudioFeatureTransform): - @classmethod - def from_config_dict(cls, config=None): - _config = {} if config is None else config - _transforms = _config.get("transforms") - if _transforms is None: - return None - transforms = [ - get_audio_feature_transform(_t).from_config_dict(_config.get(_t)) - for _t in _transforms - ] - return CompositeAudioFeatureTransform(transforms) - - def __init__(self, transforms): - self.transforms = [t for t in transforms if t is not None] - - def __call__(self, x): - for t in self.transforms: - x = t(x) - return x - - def __repr__(self): - format_string = ( - [self.__class__.__name__ + "("] - + [f" {t.__repr__()}" for t in self.transforms] - + [")"] - ) - return "\n".join(format_string) diff --git a/spaces/asd123Xiao/kafuu_chino_sovits4.0/onnx/model_onnx.py b/spaces/asd123Xiao/kafuu_chino_sovits4.0/onnx/model_onnx.py deleted file mode 100644 index 1567d28875c8a6620d5db8114daa0f073ddb145c..0000000000000000000000000000000000000000 --- a/spaces/asd123Xiao/kafuu_chino_sovits4.0/onnx/model_onnx.py +++ /dev/null @@ -1,328 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import modules.attentions as attentions -import modules.commons as commons -import modules.modules as modules - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from modules.commons import init_weights, get_padding -from vdecoder.hifigan.models import Generator -from utils import f0_to_coarse - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class Encoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - # print(x.shape,x_lengths.shape) - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - 
return z, m, logs, x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - filter_channels=None, - n_heads=None, - p_dropout=None): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - self.f0_emb = nn.Embedding(256, hidden_channels) - - self.enc_ = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - - def forward(self, x, x_lengths, f0=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = x + self.f0_emb(f0.long()).transpose(1,2) - x = self.enc_(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - - return z, m, logs, x_mask - - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - 
super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class SpeakerEncoder(torch.nn.Module): - def __init__(self, mel_n_channels=80, model_num_layers=3, model_hidden_size=256, model_embedding_size=256): - super(SpeakerEncoder, self).__init__() - self.lstm = nn.LSTM(mel_n_channels, model_hidden_size, model_num_layers, batch_first=True) - self.linear = nn.Linear(model_hidden_size, model_embedding_size) - self.relu = nn.ReLU() - - def forward(self, mels): - self.lstm.flatten_parameters() - _, (hidden, _) = self.lstm(mels) - embeds_raw = self.relu(self.linear(hidden[-1])) - return embeds_raw / torch.norm(embeds_raw, dim=1, keepdim=True) - - def compute_partial_slices(self, total_frames, partial_frames, partial_hop): - mel_slices = [] - for i in range(0, total_frames-partial_frames, partial_hop): - mel_range = torch.arange(i, i+partial_frames) - mel_slices.append(mel_range) - - return mel_slices - - def embed_utterance(self, mel, partial_frames=128, partial_hop=64): - mel_len = mel.size(1) - last_mel = mel[:,-partial_frames:] - - if mel_len > partial_frames: - mel_slices = self.compute_partial_slices(mel_len, partial_frames, partial_hop) - mels = list(mel[:,s] for s in mel_slices) - mels.append(last_mel) - mels = torch.stack(tuple(mels), 0).squeeze(1) - - with torch.no_grad(): - partial_embeds = self(mels) - embed = torch.mean(partial_embeds, axis=0).unsqueeze(0) - #embed = embed / torch.linalg.norm(embed, 2) - else: - with torch.no_grad(): - embed = self(last_mel) - - return embed - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - ssl_dim, - n_speakers, - **kwargs): - - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - self.ssl_dim = ssl_dim - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - self.enc_p_ = TextEncoder(ssl_dim, inter_channels, hidden_channels, 5, 1, 16,0, filter_channels, n_heads, p_dropout) - hps = { - "sampling_rate": 32000, - "inter_channels": 192, - "resblock": "1", - "resblock_kernel_sizes": [3, 7, 11], - "resblock_dilation_sizes": [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - "upsample_rates": [10, 8, 2, 
2], - "upsample_initial_channel": 512, - "upsample_kernel_sizes": [16, 16, 4, 4], - "gin_channels": 256, - } - self.dec = Generator(h=hps) - self.enc_q = Encoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - def forward(self, c, c_lengths, f0, g=None): - g = self.emb_g(g.unsqueeze(0)).transpose(1,2) - z_p, m_p, logs_p, c_mask = self.enc_p_(c.transpose(1,2), c_lengths, f0=f0_to_coarse(f0)) - z = self.flow(z_p, c_mask, g=g, reverse=True) - o = self.dec(z * c_mask, g=g, f0=f0.float()) - return o - diff --git a/spaces/asd123Xiao/kafuu_chino_sovits4.0/utils.py b/spaces/asd123Xiao/kafuu_chino_sovits4.0/utils.py deleted file mode 100644 index f13d3526d514be71c77bebb17a5af8831b9c6a36..0000000000000000000000000000000000000000 --- a/spaces/asd123Xiao/kafuu_chino_sovits4.0/utils.py +++ /dev/null @@ -1,508 +0,0 @@ -import os -import glob -import re -import sys -import argparse -import logging -import json -import subprocess -import random - -import librosa -import numpy as np -from scipy.io.wavfile import read -import torch -from torch.nn import functional as F -from modules.commons import sequence_mask -from hubert import hubert_model -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - -f0_bin = 256 -f0_max = 1100.0 -f0_min = 50.0 -f0_mel_min = 1127 * np.log(1 + f0_min / 700) -f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - -# def normalize_f0(f0, random_scale=True): -# f0_norm = f0.clone() # create a copy of the input Tensor -# batch_size, _, frame_length = f0_norm.shape -# for i in range(batch_size): -# means = torch.mean(f0_norm[i, 0, :]) -# if random_scale: -# factor = random.uniform(0.8, 1.2) -# else: -# factor = 1 -# f0_norm[i, 0, :] = (f0_norm[i, 0, :] - means) * factor -# return f0_norm -# def normalize_f0(f0, random_scale=True): -# means = torch.mean(f0[:, 0, :], dim=1, keepdim=True) -# if random_scale: -# factor = torch.Tensor(f0.shape[0],1).uniform_(0.8, 1.2).to(f0.device) -# else: -# factor = torch.ones(f0.shape[0], 1, 1).to(f0.device) -# f0_norm = (f0 - means.unsqueeze(-1)) * factor.unsqueeze(-1) -# return f0_norm -def normalize_f0(f0, x_mask, uv, random_scale=True): - # calculate means based on x_mask - uv_sum = torch.sum(uv, dim=1, keepdim=True) - uv_sum[uv_sum == 0] = 9999 - means = torch.sum(f0[:, 0, :] * uv, dim=1, keepdim=True) / uv_sum - - if random_scale: - factor = torch.Tensor(f0.shape[0], 1).uniform_(0.8, 1.2).to(f0.device) - else: - factor = torch.ones(f0.shape[0], 1).to(f0.device) - # normalize f0 based on means and factor - f0_norm = (f0 - means.unsqueeze(-1)) * factor.unsqueeze(-1) - if torch.isnan(f0_norm).any(): - exit(0) - return f0_norm * x_mask - - -def plot_data_to_numpy(x, y): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - plt.plot(x) - plt.plot(y) - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - - -def interpolate_f0(f0): - ''' - 对F0进行插值处理 - ''' - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - 
vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] - last_value = data[i] - - return ip_data[:,0], vuv_vector[:,0] - - -def compute_f0_parselmouth(wav_numpy, p_len=None, sampling_rate=44100, hop_length=512): - import parselmouth - x = wav_numpy - if p_len is None: - p_len = x.shape[0]//hop_length - else: - assert abs(p_len-x.shape[0]//hop_length) < 4, "pad length error" - time_step = hop_length / sampling_rate * 1000 - f0_min = 50 - f0_max = 1100 - f0 = parselmouth.Sound(x, sampling_rate).to_pitch_ac( - time_step=time_step / 1000, voicing_threshold=0.6, - pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency'] - - pad_size=(p_len - len(f0) + 1) // 2 - if(pad_size>0 or p_len - len(f0) - pad_size>0): - f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant') - return f0 - -def resize_f0(x, target_len): - source = np.array(x) - source[source<0.001] = np.nan - target = np.interp(np.arange(0, len(source)*target_len, len(source))/ target_len, np.arange(0, len(source)), source) - res = np.nan_to_num(target) - return res - -def compute_f0_dio(wav_numpy, p_len=None, sampling_rate=44100, hop_length=512): - import pyworld - if p_len is None: - p_len = wav_numpy.shape[0]//hop_length - f0, t = pyworld.dio( - wav_numpy.astype(np.double), - fs=sampling_rate, - f0_ceil=800, - frame_period=1000 * hop_length / sampling_rate, - ) - f0 = pyworld.stonemask(wav_numpy.astype(np.double), f0, t, sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return resize_f0(f0, p_len) - -def f0_to_coarse(f0): - is_torch = isinstance(f0, torch.Tensor) - f0_mel = 1127 * (1 + f0 / 700).log() if is_torch else 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * (f0_bin - 2) / (f0_mel_max - f0_mel_min) + 1 - - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > f0_bin - 1] = f0_bin - 1 - f0_coarse = (f0_mel + 0.5).long() if is_torch else np.rint(f0_mel).astype(np.int) - assert f0_coarse.max() <= 255 and f0_coarse.min() >= 1, (f0_coarse.max(), f0_coarse.min()) - return f0_coarse - - -def get_hubert_model(): - vec_path = "hubert/checkpoint_best_legacy_500.pt" - print("load model(s) from {}".format(vec_path)) - from fairseq import checkpoint_utils - models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [vec_path], - suffix="", - ) - model = models[0] - model.eval() - return model - -def get_hubert_content(hmodel, wav_16k_tensor): - feats = wav_16k_tensor - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - inputs = { - "source": feats.to(wav_16k_tensor.device), - "padding_mask": padding_mask.to(wav_16k_tensor.device), - "output_layer": 9, # layer 9 - } - with torch.no_grad(): - logits = hmodel.extract_features(**inputs) - feats = hmodel.final_proj(logits[0]) - return feats.transpose(1, 2) - - -def get_content(cmodel, y): - with torch.no_grad(): - c = 
cmodel.extract_features(y.squeeze(1))[0] - c = c.transpose(1, 2) - return c - - - -def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None and not skip_optimizer: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - # assert "dec" in k or "disc" in k - # print("load", k) - new_state_dict[k] = saved_state_dict[k] - assert saved_state_dict[k].shape == v.shape, (saved_state_dict[k].shape, v.shape) - except: - print("error, %s is not in the checkpoint" % k) - logger.info("%s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict) - else: - model.load_state_dict(new_state_dict) - print("load ") - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path, val_steps, current_step): - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate}, checkpoint_path) - if current_step >= val_steps * 3: - to_del_ckptname = checkpoint_path.replace(str(current_step), str(current_step - val_steps * 3)) - if os.path.exists(to_del_ckptname): - os.remove(to_del_ckptname) - print("Removing ", to_del_ckptname) - - -def clean_checkpoints(path_to_models='logs/48k/', n_ckpts_to_keep=2, sort_by_time=True): - """Freeing up space by deleting saved ckpts - - Arguments: - path_to_models -- Path to the model directory - n_ckpts_to_keep -- Number of ckpts to keep, excluding G_0.pth and D_0.pth - sort_by_time -- True -> chronologically delete ckpts - False -> lexicographically delete ckpts - """ - ckpts_files = [f for f in os.listdir(path_to_models) if os.path.isfile(os.path.join(path_to_models, f))] - name_key = (lambda _f: int(re.compile('._(\d+)\.pth').match(_f).group(1))) - time_key = (lambda _f: os.path.getmtime(os.path.join(path_to_models, _f))) - sort_key = time_key if sort_by_time else name_key - x_sorted = lambda _x: sorted([f for f in ckpts_files if f.startswith(_x) and not f.endswith('_0.pth')], key=sort_key) - to_del = [os.path.join(path_to_models, fn) for fn in - (x_sorted('G')[:-n_ckpts_to_keep] + x_sorted('D')[:-n_ckpts_to_keep])] - del_info = lambda fn: logger.info(f".. 
Free up space by deleting ckpt {fn}") - del_routine = lambda x: [os.remove(x), del_info(x)] - rs = [del_routine(fn) for fn in to_del] - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10,2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, required=True, - help='Model name') - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_dir(model_dir): - config_save_path = 
os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams =HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -def repeat_expand_2d(content, target_len): - # content : [h, t] - - src_len = content.shape[-1] - target = torch.zeros([content.shape[0], target_len], dtype=torch.float).to(content.device) - temp = torch.arange(src_len+1) * target_len / src_len - current_pos = 0 - for i in range(target_len): - if i < temp[current_pos+1]: - target[:, i] = content[:, current_pos] - else: - current_pos += 1 - target[:, i] = content[:, current_pos] - - return target - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() - diff --git a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Parham Hamouni.html b/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Parham Hamouni.html deleted file mode 100644 index 0c266a809f180a1b5916903ce672b0d6aecea111..0000000000000000000000000000000000000000 --- a/spaces/at2507/SM_NLP_RecoSys/Data/Mentor_interviews/Parham Hamouni.html +++ /dev/null @@ -1,132 +0,0 @@ - - - - Parham Hamouni - - - - -
-

Parham Hamouni

- -
-
Mentee to Mentor. 

1- What is your motivation to become a mentor with SharpestMinds?
- Motivation is twofold: helping others, plus the financial incentive tied to eventually helping a mentee land a job. 

2- Do you have previous mentorship experience?
- No previous mentorship experience, but was a TA at university. 

 3- What's your data science career journey been like, and what motivated you to get into the data field?
- Has a transportation engineering background. Wanted to move to a more scientific/research-oriented role. Sought mentorship on SM to make the switch. 
- Landed a job as an Applied Research Scientist at Crater Lab, working on vague problems to come up with concrete solutions. Built proofs of concept, some involving government projects, and created reports to classify and detect risks using NLP and graph theory.
- Currently working as a Senior Data Scientist at Kraft. 
- Day-to-day work includes CV techniques, NLP, graph theory, and statistical techniques. 


4- According to you, what's the biggest challenge a newcomer faces when landing a job in the data field? How can you help them with this?
- This is kind of a chicken-and-egg problem: entry is actually the most difficult part. Having the right expectations of what the job will be, and understanding the business side of their knowledge, is crucial. Being technically sound is also important. 
- Try to get mentees through the basics first and gauge their level of understanding. Work on real-world scenarios and data sets (unlabelled data) and help them understand how to tackle these problems. Also, help them stay humble and firm enough to keep tackling them. 

5- Do you have any questions for me regarding the Platform?
- What is the procedure like?
-
- -
- - - \ No newline at end of file diff --git a/spaces/awacke1/Generative-AI-Writers-Dashboard/style.css b/spaces/awacke1/Generative-AI-Writers-Dashboard/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Generative-AI-Writers-Dashboard/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/awacke1/Human.Feedback.Dynamic.JSONL.Dataset.Download/README.md b/spaces/awacke1/Human.Feedback.Dynamic.JSONL.Dataset.Download/README.md deleted file mode 100644 index fd251b68d0495b6899c0d2504e5407c80318a331..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Human.Feedback.Dynamic.JSONL.Dataset.Download/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Human.Feedback.Dynamic.JSONL.Dataset.Download -emoji: 📈 -colorFrom: blue -colorTo: yellow -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/Markdown.Streamlit.EDA.Generic.Loader.Presenter.Memory/README.md b/spaces/awacke1/Markdown.Streamlit.EDA.Generic.Loader.Presenter.Memory/README.md deleted file mode 100644 index 510d3bbd0d40392798b19cf422a86846ddcb5f39..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Markdown.Streamlit.EDA.Generic.Loader.Presenter.Memory/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Azure.Streamlit.Github.Actions.Azure.Container.Registry.Docker.AKS -emoji: 🌖 -colorFrom: yellow -colorTo: pink -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: awacke1/Azure.Streamlit.Github.Actions.Azure.Container.Registry.Docker.AKS ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/PandasDataframeAutoFilterStreamlit/README.md b/spaces/awacke1/PandasDataframeAutoFilterStreamlit/README.md deleted file mode 100644 index 4132ab9e81ae2b07d6be2e55ff4a85f7570edd04..0000000000000000000000000000000000000000 --- a/spaces/awacke1/PandasDataframeAutoFilterStreamlit/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: PandasDataframeAutoFilterStreamlit -emoji: 🐠 -colorFrom: purple -colorTo: yellow -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/axart-software/simple-beat-generator/beatgenerator.py b/spaces/axart-software/simple-beat-generator/beatgenerator.py deleted file mode 100644 index a08247ac2f4b3cab11e07a6af3577a4d0ea0420d..0000000000000000000000000000000000000000 --- a/spaces/axart-software/simple-beat-generator/beatgenerator.py +++ /dev/null @@ -1,81 +0,0 @@ -from midiutil import MIDIFile -import base64 -from io import BytesIO -from transformers import GPT2LMHeadModel, GPT2Tokenizer -from customtokenencoderdecoder import CustomTokenEncoderDecoder - -class BeatGenerator: - STEP_SIZE = 0.25 - STEPS_PER_SEQUENCE = 32 - - def __init__(self, model: GPT2LMHeadModel, tokenizer: 
GPT2Tokenizer): - self.__model = model - self.__tokenizer = tokenizer - self.__sections = ["a", "b", "c", "d"] - - def generate_beat(self, user_prompt: [[int]], temperature: float, tempo: float) -> [str, str]: - # pitches = [36, 38, 42] - pitches = [36, 38, 39, 42, 45, 46, 47, 49, 51] - assert len(user_prompt) == len(pitches), "User prompt length must be equal to the number of pitches" - - user_events: [[int, int]] = [] - for pitch_id, pitch in enumerate(pitches): - for step in user_prompt[pitch_id]: - user_events.append((step, pitch)) - - custom_token_encoder_decoder = CustomTokenEncoderDecoder( - events=user_events, - sections=self.__sections, - steps_per_section=self.STEPS_PER_SEQUENCE, - model=self.__model, - tokenizer=self.__tokenizer, - ) - - result = custom_token_encoder_decoder.generate_events(temperature=temperature) - - genre = result["genre"] - events = result["events"] - - midi_buffer = self.__make_midi_buffer( - data_container=events, - tempo=tempo, - verbose=False - ) - midi_base64 = base64.b64encode(midi_buffer.read()).decode("utf-8") - - return genre, midi_base64 - - def __make_midi_buffer(self, data_container: [(int, int)], tempo: int, verbose: bool = False) -> BytesIO: - track_count = 1 - out_midi_file = MIDIFile(1) - out_midi_file.addTempo(0, 0, tempo) - - for data in data_container: - step = data[0] - pitch = data[1] - velocity = 100 - - if verbose is True: - print("Processing: {0} in step range: {1}".format(data, step_ranges[section_id])) - - if step >= 0 and step < 128 and pitch >= 0 and pitch < 128: - start_time = float(step) * self.STEP_SIZE - volume = int(velocity) - - out_midi_file.addNote( - track=0, - channel=9, - pitch=pitch, - time=start_time, - duration=self.STEP_SIZE, - volume=volume - ) - - buffer = BytesIO() - out_midi_file.writeFile(buffer) - buffer.seek(0) - - with open("out.mid", "wb") as output_file: - out_midi_file.writeFile(output_file) - - return buffer \ No newline at end of file diff --git a/spaces/azusarang/so-vits-svc-models-ba_P/vencoder/whisper/tokenizer.py b/spaces/azusarang/so-vits-svc-models-ba_P/vencoder/whisper/tokenizer.py deleted file mode 100644 index a27cb359ee891590d3f793624f9f8ec768a26cc3..0000000000000000000000000000000000000000 --- a/spaces/azusarang/so-vits-svc-models-ba_P/vencoder/whisper/tokenizer.py +++ /dev/null @@ -1,331 +0,0 @@ -import os -from dataclasses import dataclass -from functools import lru_cache -from typing import List, Optional, Tuple, Union - -import numpy as np -import torch -from transformers import GPT2TokenizerFast - -LANGUAGES = { - "en": "english", - "zh": "chinese", - "de": "german", - "es": "spanish", - "ru": "russian", - "ko": "korean", - "fr": "french", - "ja": "japanese", - "pt": "portuguese", - "tr": "turkish", - "pl": "polish", - "ca": "catalan", - "nl": "dutch", - "ar": "arabic", - "sv": "swedish", - "it": "italian", - "id": "indonesian", - "hi": "hindi", - "fi": "finnish", - "vi": "vietnamese", - "he": "hebrew", - "uk": "ukrainian", - "el": "greek", - "ms": "malay", - "cs": "czech", - "ro": "romanian", - "da": "danish", - "hu": "hungarian", - "ta": "tamil", - "no": "norwegian", - "th": "thai", - "ur": "urdu", - "hr": "croatian", - "bg": "bulgarian", - "lt": "lithuanian", - "la": "latin", - "mi": "maori", - "ml": "malayalam", - "cy": "welsh", - "sk": "slovak", - "te": "telugu", - "fa": "persian", - "lv": "latvian", - "bn": "bengali", - "sr": "serbian", - "az": "azerbaijani", - "sl": "slovenian", - "kn": "kannada", - "et": "estonian", - "mk": "macedonian", - "br": "breton", - "eu": "basque", - 
"is": "icelandic", - "hy": "armenian", - "ne": "nepali", - "mn": "mongolian", - "bs": "bosnian", - "kk": "kazakh", - "sq": "albanian", - "sw": "swahili", - "gl": "galician", - "mr": "marathi", - "pa": "punjabi", - "si": "sinhala", - "km": "khmer", - "sn": "shona", - "yo": "yoruba", - "so": "somali", - "af": "afrikaans", - "oc": "occitan", - "ka": "georgian", - "be": "belarusian", - "tg": "tajik", - "sd": "sindhi", - "gu": "gujarati", - "am": "amharic", - "yi": "yiddish", - "lo": "lao", - "uz": "uzbek", - "fo": "faroese", - "ht": "haitian creole", - "ps": "pashto", - "tk": "turkmen", - "nn": "nynorsk", - "mt": "maltese", - "sa": "sanskrit", - "lb": "luxembourgish", - "my": "myanmar", - "bo": "tibetan", - "tl": "tagalog", - "mg": "malagasy", - "as": "assamese", - "tt": "tatar", - "haw": "hawaiian", - "ln": "lingala", - "ha": "hausa", - "ba": "bashkir", - "jw": "javanese", - "su": "sundanese", -} - -# language code lookup by name, with a few language aliases -TO_LANGUAGE_CODE = { - **{language: code for code, language in LANGUAGES.items()}, - "burmese": "my", - "valencian": "ca", - "flemish": "nl", - "haitian": "ht", - "letzeburgesch": "lb", - "pushto": "ps", - "panjabi": "pa", - "moldavian": "ro", - "moldovan": "ro", - "sinhalese": "si", - "castilian": "es", -} - - -@dataclass(frozen=True) -class Tokenizer: - """A thin wrapper around `GPT2TokenizerFast` providing quick access to special tokens""" - - tokenizer: "GPT2TokenizerFast" - language: Optional[str] - sot_sequence: Tuple[int] - - def encode(self, text, **kwargs): - return self.tokenizer.encode(text, **kwargs) - - def decode(self, token_ids: Union[int, List[int], np.ndarray, torch.Tensor], **kwargs): - return self.tokenizer.decode(token_ids, **kwargs) - - def decode_with_timestamps(self, tokens) -> str: - """ - Timestamp tokens are above the special tokens' id range and are ignored by `decode()`. - This method decodes given tokens with timestamps tokens annotated, e.g. "<|1.08|>". 
- """ - outputs = [[]] - for token in tokens: - if token >= self.timestamp_begin: - timestamp = f"<|{(token - self.timestamp_begin) * 0.02:.2f}|>" - outputs.append(timestamp) - outputs.append([]) - else: - outputs[-1].append(token) - outputs = [s if isinstance(s, str) else self.tokenizer.decode(s) for s in outputs] - return "".join(outputs) - - @property - @lru_cache() - def eot(self) -> int: - return self.tokenizer.eos_token_id - - @property - @lru_cache() - def sot(self) -> int: - return self._get_single_token_id("<|startoftranscript|>") - - @property - @lru_cache() - def sot_lm(self) -> int: - return self._get_single_token_id("<|startoflm|>") - - @property - @lru_cache() - def sot_prev(self) -> int: - return self._get_single_token_id("<|startofprev|>") - - @property - @lru_cache() - def no_speech(self) -> int: - return self._get_single_token_id("<|nospeech|>") - - @property - @lru_cache() - def no_timestamps(self) -> int: - return self._get_single_token_id("<|notimestamps|>") - - @property - @lru_cache() - def timestamp_begin(self) -> int: - return self.tokenizer.all_special_ids[-1] + 1 - - @property - @lru_cache() - def language_token(self) -> int: - """Returns the token id corresponding to the value of the `language` field""" - if self.language is None: - raise ValueError(f"This tokenizer does not have language token configured") - - additional_tokens = dict( - zip( - self.tokenizer.additional_special_tokens, - self.tokenizer.additional_special_tokens_ids, - ) - ) - candidate = f"<|{self.language}|>" - if candidate in additional_tokens: - return additional_tokens[candidate] - - raise KeyError(f"Language {self.language} not found in tokenizer.") - - @property - @lru_cache() - def all_language_tokens(self) -> Tuple[int]: - result = [] - for token, token_id in zip( - self.tokenizer.additional_special_tokens, - self.tokenizer.additional_special_tokens_ids, - ): - if token.strip("<|>") in LANGUAGES: - result.append(token_id) - return tuple(result) - - @property - @lru_cache() - def all_language_codes(self) -> Tuple[str]: - return tuple(self.decode([l]).strip("<|>") for l in self.all_language_tokens) - - @property - @lru_cache() - def sot_sequence_including_notimestamps(self) -> Tuple[int]: - return tuple(list(self.sot_sequence) + [self.no_timestamps]) - - @property - @lru_cache() - def non_speech_tokens(self) -> Tuple[int]: - """ - Returns the list of tokens to suppress in order to avoid any speaker tags or non-speech - annotations, to prevent sampling texts that are not actually spoken in the audio, e.g. - - - ♪♪♪ - - ( SPEAKING FOREIGN LANGUAGE ) - - [DAVID] Hey there, - - keeping basic punctuations like commas, periods, question marks, exclamation points, etc. - """ - symbols = list("\"#()*+/:;<=>@[\\]^_`{|}~「」『』") - symbols += "<< >> <<< >>> -- --- -( -[ (' (\" (( )) ((( ))) [[ ]] {{ }} ♪♪ ♪♪♪".split() - - # symbols that may be a single token or multiple tokens depending on the tokenizer. - # In case they're multiple tokens, suppress the first token, which is safe because: - # These are between U+2640 and U+267F miscellaneous symbols that are okay to suppress - # in generations, and in the 3-byte UTF-8 representation they share the first two bytes. 
- miscellaneous = set("♩♪♫♬♭♮♯") - assert all(0x2640 <= ord(c) <= 0x267F for c in miscellaneous) - - # allow hyphens "-" and single quotes "'" between words, but not at the beginning of a word - result = {self.tokenizer.encode(" -")[0], self.tokenizer.encode(" '")[0]} - for symbol in symbols + list(miscellaneous): - for tokens in [self.tokenizer.encode(symbol), self.tokenizer.encode(" " + symbol)]: - if len(tokens) == 1 or symbol in miscellaneous: - result.add(tokens[0]) - - return tuple(sorted(result)) - - def _get_single_token_id(self, text) -> int: - tokens = self.tokenizer.encode(text) - assert len(tokens) == 1, f"{text} is not encoded as a single token" - return tokens[0] - - -@lru_cache(maxsize=None) -def build_tokenizer(name: str = "gpt2"): - os.environ["TOKENIZERS_PARALLELISM"] = "false" - path = os.path.join(os.path.dirname(__file__), "assets", name) - tokenizer = GPT2TokenizerFast.from_pretrained(path) - - specials = [ - "<|startoftranscript|>", - *[f"<|{lang}|>" for lang in LANGUAGES.keys()], - "<|translate|>", - "<|transcribe|>", - "<|startoflm|>", - "<|startofprev|>", - "<|nospeech|>", - "<|notimestamps|>", - ] - - tokenizer.add_special_tokens(dict(additional_special_tokens=specials)) - return tokenizer - - -@lru_cache(maxsize=None) -def get_tokenizer( - multilingual: bool, - *, - task: Optional[str] = None, # Literal["transcribe", "translate", None] - language: Optional[str] = None, -) -> Tokenizer: - if language is not None: - language = language.lower() - if language not in LANGUAGES: - if language in TO_LANGUAGE_CODE: - language = TO_LANGUAGE_CODE[language] - else: - raise ValueError(f"Unsupported language: {language}") - - if multilingual: - tokenizer_name = "multilingual" - task = task or "transcribe" - language = language or "en" - else: - tokenizer_name = "gpt2" - task = None - language = None - - tokenizer = build_tokenizer(name=tokenizer_name) - all_special_ids: List[int] = tokenizer.all_special_ids - sot: int = all_special_ids[1] - translate: int = all_special_ids[-6] - transcribe: int = all_special_ids[-5] - - langs = tuple(LANGUAGES.keys()) - sot_sequence = [sot] - if language is not None: - sot_sequence.append(sot + 1 + langs.index(language)) - if task is not None: - sot_sequence.append(transcribe if task == "transcribe" else translate) - - return Tokenizer(tokenizer=tokenizer, language=language, sot_sequence=tuple(sot_sequence)) diff --git a/spaces/banana-projects/web3d/node_modules/three/src/helpers/SpotLightHelper.d.ts b/spaces/banana-projects/web3d/node_modules/three/src/helpers/SpotLightHelper.d.ts deleted file mode 100644 index 6b2d04234b2acd8dbcf3c3db7fff5a3793abd50f..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/helpers/SpotLightHelper.d.ts +++ /dev/null @@ -1,16 +0,0 @@ -import { Light } from './../lights/Light'; -import { Color } from './../math/Color'; -import { Matrix4 } from './../math/Matrix4'; -import { Object3D } from './../core/Object3D'; - -export class SpotLightHelper extends Object3D { - constructor(light: Light, color?: Color | string | number); - - light: Light; - matrix: Matrix4; - matrixAutoUpdate: boolean; - color: Color | string | number | undefined; - - dispose(): void; - update(): void; -} diff --git a/spaces/bankholdup/stylegan_petbreeder/e4e/datasets/gt_res_dataset.py b/spaces/bankholdup/stylegan_petbreeder/e4e/datasets/gt_res_dataset.py deleted file mode 100644 index c0beacfee5335aa10aa7e8b7cabe206d7f9a56f7..0000000000000000000000000000000000000000 --- 
a/spaces/bankholdup/stylegan_petbreeder/e4e/datasets/gt_res_dataset.py +++ /dev/null @@ -1,32 +0,0 @@ -#!/usr/bin/python -# encoding: utf-8 -import os -from torch.utils.data import Dataset -from PIL import Image -import torch - -class GTResDataset(Dataset): - - def __init__(self, root_path, gt_dir=None, transform=None, transform_train=None): - self.pairs = [] - for f in os.listdir(root_path): - image_path = os.path.join(root_path, f) - gt_path = os.path.join(gt_dir, f) - if f.endswith(".jpg") or f.endswith(".png"): - self.pairs.append([image_path, gt_path.replace('.png', '.jpg'), None]) - self.transform = transform - self.transform_train = transform_train - - def __len__(self): - return len(self.pairs) - - def __getitem__(self, index): - from_path, to_path, _ = self.pairs[index] - from_im = Image.open(from_path).convert('RGB') - to_im = Image.open(to_path).convert('RGB') - - if self.transform: - to_im = self.transform(to_im) - from_im = self.transform(from_im) - - return from_im, to_im diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327221823.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327221823.py deleted file mode 100644 index 18a629e6b327749e47282c436a35297e5fc78174..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327221823.py +++ /dev/null @@ -1,67 +0,0 @@ -import os -os.system("pip install gfpgan") - -#os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANv1.3.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - #return Image.fromarray(restored_faces[0][:,:,::-1]) - return Image.fromarray(restored_img[:, :, ::-1]) - -title = "让美好回忆更清晰" - - -description = "上传老照片,点击Submit,稍等片刻,右侧Output将照片另存为即可。" - -article = "

This project is cloned from akhaliq@huggingface | Github Repo

visitor badge
" - -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True,share=True) - - diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/utils/ui.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/utils/ui.py deleted file mode 100644 index 68fcbe0af257bdbaad767708843b545064d9b219..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/utils/ui.py +++ /dev/null @@ -1,34 +0,0 @@ -from pathlib import Path - -import gradio as gr -import torch - -refresh_symbol = '\U0001f504' # 🔄 - -class ToolButton(gr.Button, gr.components.IOComponent): - """Small button with single emoji as text, fits inside gradio forms""" - - def __init__(self, **kwargs): - super().__init__(**kwargs) - - def get_block_name(self): - return "button" - - -def create_refresh_button(refresh_component, refresh_method, refreshed_args, elem_class): - def refresh(): - refresh_method() - args = refreshed_args() if callable(refreshed_args) else refreshed_args - - for k, v in args.items(): - setattr(refresh_component, k, v) - - return gr.update(**(args or {})) - - refresh_button = ToolButton(value=refresh_symbol, elem_classes=elem_class, scale=1, size="sm", container=False) - refresh_button.click( - fn=refresh, - inputs=[], - outputs=[refresh_component] - ) - return refresh_button \ No newline at end of file diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/test_time_augmentation.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/test_time_augmentation.py deleted file mode 100644 index ec2022ed16727f538993d2c7db60a60a1183b90d..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/test_time_augmentation.py +++ /dev/null @@ -1,207 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import copy -import numpy as np -import torch -from fvcore.transforms import HFlipTransform, TransformList -from torch.nn import functional as F - -from detectron2.data.transforms import RandomRotation, RotationTransform, apply_transform_gens -from detectron2.modeling.postprocessing import detector_postprocess -from detectron2.modeling.test_time_augmentation import DatasetMapperTTA, GeneralizedRCNNWithTTA - -from ..converters import HFlipConverter - - -class DensePoseDatasetMapperTTA(DatasetMapperTTA): - def __init__(self, cfg): - super().__init__(cfg=cfg) - self.angles = cfg.TEST.AUG.ROTATION_ANGLES - - def __call__(self, dataset_dict): - ret = super().__call__(dataset_dict=dataset_dict) - numpy_image = dataset_dict["image"].permute(1, 2, 0).numpy() - for angle in self.angles: - rotate = RandomRotation(angle=angle, expand=True) - new_numpy_image, tfms = apply_transform_gens([rotate], np.copy(numpy_image)) - torch_image = torch.from_numpy(np.ascontiguousarray(new_numpy_image.transpose(2, 0, 1))) - dic = copy.deepcopy(dataset_dict) - # In DatasetMapperTTA, there is a pre_tfm transform (resize or no-op) that is - # added at the beginning of each TransformList. That's '.transforms[0]'. 
- dic["transforms"] = TransformList( - [ret[-1]["transforms"].transforms[0]] + tfms.transforms - ) - dic["image"] = torch_image - ret.append(dic) - return ret - - -class DensePoseGeneralizedRCNNWithTTA(GeneralizedRCNNWithTTA): - def __init__(self, cfg, model, transform_data, tta_mapper=None, batch_size=1): - """ - Args: - cfg (CfgNode): - model (GeneralizedRCNN): a GeneralizedRCNN to apply TTA on. - transform_data (DensePoseTransformData): contains symmetry label - transforms used for horizontal flip - tta_mapper (callable): takes a dataset dict and returns a list of - augmented versions of the dataset dict. Defaults to - `DatasetMapperTTA(cfg)`. - batch_size (int): batch the augmented images into this batch size for inference. - """ - self._transform_data = transform_data.to(model.device) - super().__init__(cfg=cfg, model=model, tta_mapper=tta_mapper, batch_size=batch_size) - - # the implementation follows closely the one from detectron2/modeling - def _inference_one_image(self, input): - """ - Args: - input (dict): one dataset dict with "image" field being a CHW tensor - - Returns: - dict: one output dict - """ - orig_shape = (input["height"], input["width"]) - # For some reason, resize with uint8 slightly increases box AP but decreases densepose AP - input["image"] = input["image"].to(torch.uint8) - augmented_inputs, tfms = self._get_augmented_inputs(input) - # Detect boxes from all augmented versions - with self._turn_off_roi_heads(["mask_on", "keypoint_on", "densepose_on"]): - # temporarily disable roi heads - all_boxes, all_scores, all_classes = self._get_augmented_boxes(augmented_inputs, tfms) - merged_instances = self._merge_detections(all_boxes, all_scores, all_classes, orig_shape) - - if self.cfg.MODEL.MASK_ON or self.cfg.MODEL.DENSEPOSE_ON: - # Use the detected boxes to obtain new fields - augmented_instances = self._rescale_detected_boxes( - augmented_inputs, merged_instances, tfms - ) - # run forward on the detected boxes - outputs = self._batch_inference(augmented_inputs, augmented_instances) - # Delete now useless variables to avoid being out of memory - del augmented_inputs, augmented_instances - # average the predictions - if self.cfg.MODEL.MASK_ON: - merged_instances.pred_masks = self._reduce_pred_masks(outputs, tfms) - if self.cfg.MODEL.DENSEPOSE_ON: - merged_instances.pred_densepose = self._reduce_pred_densepose(outputs, tfms) - # postprocess - merged_instances = detector_postprocess(merged_instances, *orig_shape) - return {"instances": merged_instances} - else: - return {"instances": merged_instances} - - def _get_augmented_boxes(self, augmented_inputs, tfms): - # Heavily based on detectron2/modeling/test_time_augmentation.py - # Only difference is that RotationTransform is excluded from bbox computation - # 1: forward with all augmented images - outputs = self._batch_inference(augmented_inputs) - # 2: union the results - all_boxes = [] - all_scores = [] - all_classes = [] - for output, tfm in zip(outputs, tfms): - # Need to inverse the transforms on boxes, to obtain results on original image - if not any(isinstance(t, RotationTransform) for t in tfm.transforms): - # Some transforms can't compute bbox correctly - pred_boxes = output.pred_boxes.tensor - original_pred_boxes = tfm.inverse().apply_box(pred_boxes.cpu().numpy()) - all_boxes.append(torch.from_numpy(original_pred_boxes).to(pred_boxes.device)) - all_scores.extend(output.scores) - all_classes.extend(output.pred_classes) - all_boxes = torch.cat(all_boxes, dim=0) - return all_boxes, all_scores, all_classes - - def 
_reduce_pred_densepose(self, outputs, tfms): - # Should apply inverse transforms on densepose preds. - # We assume only rotation, resize & flip are used. pred_masks is a scale-invariant - # representation, so we handle the other ones specially - for idx, (output, tfm) in enumerate(zip(outputs, tfms)): - for t in tfm.transforms: - for attr in ["coarse_segm", "fine_segm", "u", "v"]: - setattr( - output.pred_densepose, - attr, - _inverse_rotation( - getattr(output.pred_densepose, attr), output.pred_boxes.tensor, t - ), - ) - if any(isinstance(t, HFlipTransform) for t in tfm.transforms): - output.pred_densepose = HFlipConverter.convert( - output.pred_densepose, self._transform_data - ) - self._incremental_avg_dp(outputs[0].pred_densepose, output.pred_densepose, idx) - return outputs[0].pred_densepose - - # incrementally computed average: u_(n + 1) = u_n + (x_(n+1) - u_n) / (n + 1). - def _incremental_avg_dp(self, avg, new_el, idx): - for attr in ["coarse_segm", "fine_segm", "u", "v"]: - setattr(avg, attr, (getattr(avg, attr) * idx + getattr(new_el, attr)) / (idx + 1)) - if idx: - # Deletion of the > 0 index intermediary values to prevent GPU OOM - setattr(new_el, attr, None) - return avg - - -def _inverse_rotation(densepose_attrs, boxes, transform): - # resample outputs to image size and rotate back the densepose preds - # on the rotated images to the space of the original image - if len(boxes) == 0 or not isinstance(transform, RotationTransform): - return densepose_attrs - boxes = boxes.int().cpu().numpy() - wh_boxes = boxes[:, 2:] - boxes[:, :2] # bboxes in the rotated space - inv_boxes = rotate_box_inverse(transform, boxes).astype(int) # bboxes in original image - wh_diff = (inv_boxes[:, 2:] - inv_boxes[:, :2] - wh_boxes) // 2 # diff between new/old bboxes - rotation_matrix = torch.tensor([transform.rm_image]).to(device=densepose_attrs.device).float() - rotation_matrix[:, :, -1] = 0 - # To apply grid_sample for rotation, we need to have enough space to fit the original and - # rotated bboxes. l_bds and r_bds are the left/right bounds that will be used to - # crop the difference once the rotation is done - l_bds = np.maximum(0, -wh_diff) - for i in range(len(densepose_attrs)): - if min(wh_boxes[i]) <= 0: - continue - densepose_attr = densepose_attrs[[i]].clone() - # 1. Interpolate densepose attribute to size of the rotated bbox - densepose_attr = F.interpolate(densepose_attr, wh_boxes[i].tolist()[::-1], mode="bilinear") - # 2. Pad the interpolated attribute so it has room for the original + rotated bbox - densepose_attr = F.pad(densepose_attr, tuple(np.repeat(np.maximum(0, wh_diff[i]), 2))) - # 3. Compute rotation grid and transform - grid = F.affine_grid(rotation_matrix, size=densepose_attr.shape) - densepose_attr = F.grid_sample(densepose_attr, grid) - # 4. 
Compute right bounds and crop the densepose_attr to the size of the original bbox - r_bds = densepose_attr.shape[2:][::-1] - l_bds[i] - densepose_attr = densepose_attr[:, :, l_bds[i][1] : r_bds[1], l_bds[i][0] : r_bds[0]] - if min(densepose_attr.shape) > 0: - # Interpolate back to the original size of the densepose attribute - densepose_attr = F.interpolate( - densepose_attr, densepose_attrs.shape[-2:], mode="bilinear" - ) - # Adding a very small probability to the background class to fill padded zones - densepose_attr[:, 0] += 1e-10 - densepose_attrs[i] = densepose_attr - return densepose_attrs - - -def rotate_box_inverse(rot_tfm, rotated_box): - """ - rotated_box is a N * 4 array of [x0, y0, x1, y1] boxes - When a bbox is rotated, it gets bigger, because we need to surround the tilted bbox - So when a bbox is rotated then inverse-rotated, it is much bigger than the original - This function aims to invert the rotation on the box, but also resize it to its original size - """ - # 1. Compute the inverse rotation of the rotated bboxes (bigger than it ) - invrot_box = rot_tfm.inverse().apply_box(rotated_box) - h, w = rotated_box[:, 3] - rotated_box[:, 1], rotated_box[:, 2] - rotated_box[:, 0] - ih, iw = invrot_box[:, 3] - invrot_box[:, 1], invrot_box[:, 2] - invrot_box[:, 0] - assert 2 * rot_tfm.abs_sin**2 != 1, "45 degrees angle can't be inverted" - # 2. Inverse the corresponding computation in the rotation transform - # to get the original height/width of the rotated boxes - orig_h = (h * rot_tfm.abs_cos - w * rot_tfm.abs_sin) / (1 - 2 * rot_tfm.abs_sin**2) - orig_w = (w * rot_tfm.abs_cos - h * rot_tfm.abs_sin) / (1 - 2 * rot_tfm.abs_sin**2) - # 3. Resize the inverse-rotated bboxes to their original size - invrot_box[:, 0] += (iw - orig_w) / 2 - invrot_box[:, 1] += (ih - orig_h) / 2 - invrot_box[:, 2] -= (iw - orig_w) / 2 - invrot_box[:, 3] -= (ih - orig_h) / 2 - - return invrot_box diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/PointRend/point_rend/__init__.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/PointRend/point_rend/__init__.py deleted file mode 100644 index e3050cbddb92f4ec3acf091cc7aed0ea70484927..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/PointRend/point_rend/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .config import add_pointrend_config -from .mask_head import PointRendMaskHead, ImplicitPointRendMaskHead -from .semantic_seg import PointRendSemSegHead -from .color_augmentation import ColorAugSSDTransform - -from . 
import roi_heads as _ # only registration diff --git a/spaces/bupenghui/123/README.md b/spaces/bupenghui/123/README.md deleted file mode 100644 index 90580a247e55b84c43140f9f819650106dd6bb3b..0000000000000000000000000000000000000000 --- a/spaces/bupenghui/123/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: 123 -emoji: 📊 -colorFrom: gray -colorTo: yellow -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/caffeinum/VToonify/vtoonify/model/stylegan/lpips/base_model.py b/spaces/caffeinum/VToonify/vtoonify/model/stylegan/lpips/base_model.py deleted file mode 100644 index 8de1d16f0c7fa52d8067139abc6e769e96d0a6a1..0000000000000000000000000000000000000000 --- a/spaces/caffeinum/VToonify/vtoonify/model/stylegan/lpips/base_model.py +++ /dev/null @@ -1,58 +0,0 @@ -import os -import numpy as np -import torch -from torch.autograd import Variable -from pdb import set_trace as st -from IPython import embed - -class BaseModel(): - def __init__(self): - pass; - - def name(self): - return 'BaseModel' - - def initialize(self, use_gpu=True, gpu_ids=[0]): - self.use_gpu = use_gpu - self.gpu_ids = gpu_ids - - def forward(self): - pass - - def get_image_paths(self): - pass - - def optimize_parameters(self): - pass - - def get_current_visuals(self): - return self.input - - def get_current_errors(self): - return {} - - def save(self, label): - pass - - # helper saving function that can be used by subclasses - def save_network(self, network, path, network_label, epoch_label): - save_filename = '%s_net_%s.pth' % (epoch_label, network_label) - save_path = os.path.join(path, save_filename) - torch.save(network.state_dict(), save_path) - - # helper loading function that can be used by subclasses - def load_network(self, network, network_label, epoch_label): - save_filename = '%s_net_%s.pth' % (epoch_label, network_label) - save_path = os.path.join(self.save_dir, save_filename) - print('Loading network from %s'%save_path) - network.load_state_dict(torch.load(save_path)) - - def update_learning_rate(): - pass - - def get_image_paths(self): - return self.image_paths - - def save_done(self, flag=False): - np.save(os.path.join(self.save_dir, 'done_flag'),flag) - np.savetxt(os.path.join(self.save_dir, 'done_flag'),[flag,],fmt='%i') diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/proposal_generator/rrpn.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/proposal_generator/rrpn.py deleted file mode 100644 index 1a3cd282c2d1ede5c60a7c2c84846cbeed7808f0..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/modeling/proposal_generator/rrpn.py +++ /dev/null @@ -1,209 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import itertools -import logging -from typing import Dict, List -import torch - -from detectron2.config import configurable -from detectron2.layers import ShapeSpec, batched_nms_rotated, cat -from detectron2.structures import Instances, RotatedBoxes, pairwise_iou_rotated -from detectron2.utils.memory import retry_if_cuda_oom - -from ..box_regression import Box2BoxTransformRotated -from .build import PROPOSAL_GENERATOR_REGISTRY -from .proposal_utils import _is_tracing -from .rpn import RPN - -logger = logging.getLogger(__name__) - - -def find_top_rrpn_proposals( - proposals, - pred_objectness_logits, - image_sizes, - nms_thresh, - pre_nms_topk, - post_nms_topk, - min_box_size, - training, -): - """ - For each feature map, select the `pre_nms_topk` highest scoring proposals, - apply NMS, clip proposals, and remove small boxes. Return the `post_nms_topk` - highest scoring proposals among all the feature maps if `training` is True, - otherwise, returns the highest `post_nms_topk` scoring proposals for each - feature map. - - Args: - proposals (list[Tensor]): A list of L tensors. Tensor i has shape (N, Hi*Wi*A, 5). - All proposal predictions on the feature maps. - pred_objectness_logits (list[Tensor]): A list of L tensors. Tensor i has shape (N, Hi*Wi*A). - image_sizes (list[tuple]): sizes (h, w) for each image - nms_thresh (float): IoU threshold to use for NMS - pre_nms_topk (int): number of top k scoring proposals to keep before applying NMS. - When RRPN is run on multiple feature maps (as in FPN) this number is per - feature map. - post_nms_topk (int): number of top k scoring proposals to keep after applying NMS. - When RRPN is run on multiple feature maps (as in FPN) this number is total, - over all feature maps. - min_box_size(float): minimum proposal box side length in pixels (absolute units wrt - input images). - training (bool): True if proposals are to be used in training, otherwise False. - This arg exists only to support a legacy bug; look for the "NB: Legacy bug ..." - comment. - - Returns: - proposals (list[Instances]): list of N Instances. The i-th Instances - stores post_nms_topk object proposals for image i. - """ - num_images = len(image_sizes) - device = proposals[0].device - - # 1. Select top-k anchor for every level and every image - topk_scores = [] # #lvl Tensor, each of shape N x topk - topk_proposals = [] - level_ids = [] # #lvl Tensor, each of shape (topk,) - batch_idx = torch.arange(num_images, device=device) - for level_id, proposals_i, logits_i in zip( - itertools.count(), proposals, pred_objectness_logits - ): - Hi_Wi_A = logits_i.shape[1] - if isinstance(Hi_Wi_A, torch.Tensor): # it's a tensor in tracing - num_proposals_i = torch.clamp(Hi_Wi_A, max=pre_nms_topk) - else: - num_proposals_i = min(Hi_Wi_A, pre_nms_topk) - - topk_scores_i, topk_idx = logits_i.topk(num_proposals_i, dim=1) - - # each is N x topk - topk_proposals_i = proposals_i[batch_idx[:, None], topk_idx] # N x topk x 5 - - topk_proposals.append(topk_proposals_i) - topk_scores.append(topk_scores_i) - level_ids.append(torch.full((num_proposals_i,), level_id, dtype=torch.int64, device=device)) - - # 2. Concat all levels together - topk_scores = cat(topk_scores, dim=1) - topk_proposals = cat(topk_proposals, dim=1) - level_ids = cat(level_ids, dim=0) - - # 3. For each image, run a per-level NMS, and choose topk results. 
- results = [] - for n, image_size in enumerate(image_sizes): - boxes = RotatedBoxes(topk_proposals[n]) - scores_per_img = topk_scores[n] - lvl = level_ids - - valid_mask = torch.isfinite(boxes.tensor).all(dim=1) & torch.isfinite(scores_per_img) - if not valid_mask.all(): - if training: - raise FloatingPointError( - "Predicted boxes or scores contain Inf/NaN. Training has diverged." - ) - boxes = boxes[valid_mask] - scores_per_img = scores_per_img[valid_mask] - lvl = lvl[valid_mask] - boxes.clip(image_size) - - # filter empty boxes - keep = boxes.nonempty(threshold=min_box_size) - if _is_tracing() or keep.sum().item() != len(boxes): - boxes, scores_per_img, lvl = (boxes[keep], scores_per_img[keep], lvl[keep]) - - keep = batched_nms_rotated(boxes.tensor, scores_per_img, lvl, nms_thresh) - # In Detectron1, there was different behavior during training vs. testing. - # (https://github.com/facebookresearch/Detectron/issues/459) - # During training, topk is over the proposals from *all* images in the training batch. - # During testing, it is over the proposals for each image separately. - # As a result, the training behavior becomes batch-dependent, - # and the configuration "POST_NMS_TOPK_TRAIN" end up relying on the batch size. - # This bug is addressed in Detectron2 to make the behavior independent of batch size. - keep = keep[:post_nms_topk] - - res = Instances(image_size) - res.proposal_boxes = boxes[keep] - res.objectness_logits = scores_per_img[keep] - results.append(res) - return results - - -@PROPOSAL_GENERATOR_REGISTRY.register() -class RRPN(RPN): - """ - Rotated Region Proposal Network described in :paper:`RRPN`. - """ - - @configurable - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - if self.anchor_boundary_thresh >= 0: - raise NotImplementedError( - "anchor_boundary_thresh is a legacy option not implemented for RRPN." - ) - - @classmethod - def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]): - ret = super().from_config(cfg, input_shape) - ret["box2box_transform"] = Box2BoxTransformRotated(weights=cfg.MODEL.RPN.BBOX_REG_WEIGHTS) - return ret - - @torch.no_grad() - def label_and_sample_anchors(self, anchors: List[RotatedBoxes], gt_instances: List[Instances]): - """ - Args: - anchors (list[RotatedBoxes]): anchors for each feature map. - gt_instances: the ground-truth instances for each image. - - Returns: - list[Tensor]: - List of #img tensors. i-th element is a vector of labels whose length is - the total number of anchors across feature maps. Label values are in {-1, 0, 1}, - with meanings: -1 = ignore; 0 = negative class; 1 = positive class. - list[Tensor]: - i-th element is a Nx5 tensor, where N is the total number of anchors across - feature maps. The values are the matched gt boxes for each anchor. - Values are undefined for those anchors not labeled as 1. - """ - anchors = RotatedBoxes.cat(anchors) - - gt_boxes = [x.gt_boxes for x in gt_instances] - del gt_instances - - gt_labels = [] - matched_gt_boxes = [] - for gt_boxes_i in gt_boxes: - """ - gt_boxes_i: ground-truth boxes for i-th image - """ - match_quality_matrix = retry_if_cuda_oom(pairwise_iou_rotated)(gt_boxes_i, anchors) - matched_idxs, gt_labels_i = retry_if_cuda_oom(self.anchor_matcher)(match_quality_matrix) - # Matching is memory-expensive and may result in CPU tensors. 
But the result is small - gt_labels_i = gt_labels_i.to(device=gt_boxes_i.device) - - # A vector of labels (-1, 0, 1) for each anchor - gt_labels_i = self._subsample_labels(gt_labels_i) - - if len(gt_boxes_i) == 0: - # These values won't be used anyway since the anchor is labeled as background - matched_gt_boxes_i = torch.zeros_like(anchors.tensor) - else: - # TODO wasted indexing computation for ignored boxes - matched_gt_boxes_i = gt_boxes_i[matched_idxs].tensor - - gt_labels.append(gt_labels_i) # N,AHW - matched_gt_boxes.append(matched_gt_boxes_i) - return gt_labels, matched_gt_boxes - - @torch.no_grad() - def predict_proposals(self, anchors, pred_objectness_logits, pred_anchor_deltas, image_sizes): - pred_proposals = self._decode_proposals(anchors, pred_anchor_deltas) - return find_top_rrpn_proposals( - pred_proposals, - pred_objectness_logits, - image_sizes, - self.nms_thresh, - self.pre_nms_topk[self.training], - self.post_nms_topk[self.training], - self.min_box_size, - self.training, - ) diff --git a/spaces/catasaurus/text2int/README.md b/spaces/catasaurus/text2int/README.md deleted file mode 100644 index 1051c5ca02049cad7da8e7614b8e5f2ea5c960e2..0000000000000000000000000000000000000000 --- a/spaces/catasaurus/text2int/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text2int -emoji: 📉 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/celise88/Pathfinder/templates/find_match.html b/spaces/celise88/Pathfinder/templates/find_match.html deleted file mode 100644 index a7ac64e0edf5cdfc5434495938cfd08d429cd6ac..0000000000000000000000000000000000000000 --- a/spaces/celise88/Pathfinder/templates/find_match.html +++ /dev/null @@ -1,50 +0,0 @@ - - - - - - - Dashboard - - - - -
-

Matching Jobs

- {% if jobpostings %} -

Here are the top 10 {{ jobtitle }} jobs in {{ state }} that are a great match for your skillset and interests!

- -
-
- - {% else %} -

We're sorry! This page is currently under construction.

-

Please check back soon to get {{ jobselection }} jobs that are a great match for your skillset and interests!

- {% endif %} -
-
-
- - - diff --git a/spaces/chainyo/optimum-text-classification/utils.py b/spaces/chainyo/optimum-text-classification/utils.py deleted file mode 100644 index d0c0aa43f435786830d75f59d464eb6d4164662c..0000000000000000000000000000000000000000 --- a/spaces/chainyo/optimum-text-classification/utils.py +++ /dev/null @@ -1,27 +0,0 @@ -"""⭐ Text Classification with Optimum and ONNXRuntime - -Utils functions. - -Author: - - @ChainYo - https://github.com/ChainYo -""" - -from contextlib import contextmanager -from time import time -from typing import List - - -@contextmanager -def calculate_inference_time(buffer: List[int]): - """Calculate inference time. - - Args: - buffer (list): List of inference times. - - Returns: - float: Average inference time. - """ - start = time() - yield - end = time() - buffer.append(end - start) diff --git a/spaces/chasemcdo/hf_localai/examples/langchain-chroma/query.py b/spaces/chasemcdo/hf_localai/examples/langchain-chroma/query.py deleted file mode 100644 index 3384881899b52e18a9eb7120349ca7b9fa9257e5..0000000000000000000000000000000000000000 --- a/spaces/chasemcdo/hf_localai/examples/langchain-chroma/query.py +++ /dev/null @@ -1,23 +0,0 @@ - -import os -from langchain.vectorstores import Chroma -from langchain.embeddings import OpenAIEmbeddings -from langchain.chat_models import ChatOpenAI -from langchain.chains import RetrievalQA -from langchain.vectorstores.base import VectorStoreRetriever - -base_path = os.environ.get('OPENAI_API_BASE', 'http://localhost:8080/v1') - -# Load and process the text -embedding = OpenAIEmbeddings() -persist_directory = 'db' - -# Now we can load the persisted database from disk, and use it as normal. -llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo", openai_api_base=base_path) -vectordb = Chroma(persist_directory=persist_directory, embedding_function=embedding) -retriever = VectorStoreRetriever(vectorstore=vectordb) -qa = RetrievalQA.from_llm(llm=llm, retriever=retriever) - -query = "What the president said about taxes ?" -print(qa.run(query)) - diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/docs/Makefile b/spaces/chendl/compositional_test/multimodal/YOLOX/docs/Makefile deleted file mode 100644 index ce61fb6a84ca97d25d833b64fa1b66d8f3e6ae7f..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/YOLOX/docs/Makefile +++ /dev/null @@ -1,19 +0,0 @@ -# Minimal makefile for Sphinx documentation -# Copyright (c) Facebook, Inc. and its affiliates. - -# You can set these variables from the command line. -SPHINXOPTS = -SPHINXBUILD = sphinx-build -SOURCEDIR = . -BUILDDIR = _build - -# Put it first so that "make" without argument is like "make help". -help: - @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) - -.PHONY: help Makefile - -# Catch-all target: route all unknown targets to Sphinx using the new -# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). 
-%: Makefile - @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) \ No newline at end of file diff --git a/spaces/chronopt-research/ViTExCo/src/scheduler.py b/spaces/chronopt-research/ViTExCo/src/scheduler.py deleted file mode 100644 index 87a9a1c5fcbf2df2a9263d49a5d3f5ba87ccb48d..0000000000000000000000000000000000000000 --- a/spaces/chronopt-research/ViTExCo/src/scheduler.py +++ /dev/null @@ -1,40 +0,0 @@ -from torch.optim.lr_scheduler import _LRScheduler - -class PolynomialLR(_LRScheduler): - def __init__( - self, - optimizer, - step_size, - iter_warmup, - iter_max, - power, - min_lr=0, - last_epoch=-1, - ): - self.step_size = step_size - self.iter_warmup = int(iter_warmup) - self.iter_max = int(iter_max) - self.power = power - self.min_lr = min_lr - super(PolynomialLR, self).__init__(optimizer, last_epoch) - - def polynomial_decay(self, lr): - iter_cur = float(self.last_epoch) - if iter_cur < self.iter_warmup: - coef = iter_cur / self.iter_warmup - coef *= (1 - self.iter_warmup / self.iter_max) ** self.power - else: - coef = (1 - iter_cur / self.iter_max) ** self.power - return (lr - self.min_lr) * coef + self.min_lr - - def get_lr(self): - if ( - (self.last_epoch == 0) - or (self.last_epoch % self.step_size != 0) - or (self.last_epoch > self.iter_max) - ): - return [group["lr"] for group in self.optimizer.param_groups] - return [self.polynomial_decay(lr) for lr in self.base_lrs] - - def step_update(self, num_updates): - self.step() \ No newline at end of file diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/payload_streamer.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/payload_streamer.py deleted file mode 100644 index 9f8b8bc57cc22fc693da1646bf806c2a6ca8d797..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/aiohttp/payload_streamer.py +++ /dev/null @@ -1,75 +0,0 @@ -""" -Payload implemenation for coroutines as data provider. 
- -As a simple case, you can upload data from file:: - - @aiohttp.streamer - async def file_sender(writer, file_name=None): - with open(file_name, 'rb') as f: - chunk = f.read(2**16) - while chunk: - await writer.write(chunk) - - chunk = f.read(2**16) - -Then you can use `file_sender` like this: - - async with session.post('http://httpbin.org/post', - data=file_sender(file_name='huge_file')) as resp: - print(await resp.text()) - -..note:: Coroutine must accept `writer` as first argument - -""" - -import types -import warnings -from typing import Any, Awaitable, Callable, Dict, Tuple - -from .abc import AbstractStreamWriter -from .payload import Payload, payload_type - -__all__ = ("streamer",) - - -class _stream_wrapper: - def __init__( - self, - coro: Callable[..., Awaitable[None]], - args: Tuple[Any, ...], - kwargs: Dict[str, Any], - ) -> None: - self.coro = types.coroutine(coro) - self.args = args - self.kwargs = kwargs - - async def __call__(self, writer: AbstractStreamWriter) -> None: - await self.coro(writer, *self.args, **self.kwargs) # type: ignore[operator] - - -class streamer: - def __init__(self, coro: Callable[..., Awaitable[None]]) -> None: - warnings.warn( - "@streamer is deprecated, use async generators instead", - DeprecationWarning, - stacklevel=2, - ) - self.coro = coro - - def __call__(self, *args: Any, **kwargs: Any) -> _stream_wrapper: - return _stream_wrapper(self.coro, args, kwargs) - - -@payload_type(_stream_wrapper) -class StreamWrapperPayload(Payload): - async def write(self, writer: AbstractStreamWriter) -> None: - await self._value(writer) - - -@payload_type(streamer) -class StreamPayload(StreamWrapperPayload): - def __init__(self, value: Any, *args: Any, **kwargs: Any) -> None: - super().__init__(value(), *args, **kwargs) - - async def write(self, writer: AbstractStreamWriter) -> None: - await self._value(writer) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/test/property/test_cross_version_persist.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/test/property/test_cross_version_persist.py deleted file mode 100644 index f4b87aecee2171b54dfb73b5ed34958f02b40bd3..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/test/property/test_cross_version_persist.py +++ /dev/null @@ -1,270 +0,0 @@ -from multiprocessing.connection import Connection -import sys -import os -import shutil -import subprocess -import tempfile -from types import ModuleType -from typing import Callable, Generator, List, Tuple -from hypothesis import given, settings -import hypothesis.strategies as st -import pytest -import json -from urllib import request -from chromadb.api import API -from chromadb.api.types import Documents, EmbeddingFunction, Embeddings -import chromadb.test.property.strategies as strategies -import chromadb.test.property.invariants as invariants -from packaging import version as packaging_version -import re -import multiprocessing -from chromadb import Client -from chromadb.config import Settings - -MINIMUM_VERSION = "0.3.20" -COLLECTION_NAME_LOWERCASE_VERSION = "0.3.21" -version_re = re.compile(r"^[0-9]+\.[0-9]+\.[0-9]+$") - - -def _patch_uppercase_coll_name( - collection: strategies.Collection, embeddings: strategies.RecordSet -) -> None: - """Old versions didn't handle uppercase characters in collection names""" - collection.name = collection.name.lower() - - -def _patch_empty_dict_metadata( - collection: 
strategies.Collection, embeddings: strategies.RecordSet -) -> None: - """Old versions do the wrong thing when metadata is a single empty dict""" - if embeddings["metadatas"] == {}: - embeddings["metadatas"] = None - - -version_patches: List[ - Tuple[str, Callable[[strategies.Collection, strategies.RecordSet], None]] -] = [ - ("0.3.21", _patch_uppercase_coll_name), - ("0.3.21", _patch_empty_dict_metadata), -] - - -def patch_for_version( - version: str, collection: strategies.Collection, embeddings: strategies.RecordSet -) -> None: - """Override aspects of the collection and embeddings, before testing, to account for - breaking changes in old versions.""" - - for patch_version, patch in version_patches: - if packaging_version.Version(version) <= packaging_version.Version( - patch_version - ): - patch(collection, embeddings) - - -def versions() -> List[str]: - """Returns the pinned minimum version and the latest version of chromadb.""" - url = "https://pypi.org/pypi/chromadb/json" - data = json.load(request.urlopen(request.Request(url))) - versions = list(data["releases"].keys()) - # Older versions on pypi contain "devXYZ" suffixes - versions = [v for v in versions if version_re.match(v)] - versions.sort(key=packaging_version.Version) - return [MINIMUM_VERSION, versions[-1]] - - -def configurations(versions: List[str]) -> List[Tuple[str, Settings]]: - return [ - ( - version, - Settings( - chroma_api_impl="local", - chroma_db_impl="duckdb+parquet", - persist_directory=tempfile.gettempdir() + "/tests/" + version + "/", - ), - ) - for version in versions - ] - - -test_old_versions = versions() -base_install_dir = tempfile.gettempdir() + "/persistence_test_chromadb_versions" - - -# This fixture is not shared with the rest of the tests because it is unique in how it -# installs the versions of chromadb -@pytest.fixture(scope="module", params=configurations(test_old_versions)) # type: ignore -def version_settings(request) -> Generator[Tuple[str, Settings], None, None]: - configuration = request.param - version = configuration[0] - install_version(version) - yield configuration - # Cleanup the installed version - path = get_path_to_version_install(version) - shutil.rmtree(path) - # Cleanup the persisted data - data_path = configuration[1].persist_directory - if os.path.exists(data_path): - shutil.rmtree(data_path) - - -def get_path_to_version_install(version: str) -> str: - return base_install_dir + "/" + version - - -def get_path_to_version_library(version: str) -> str: - return get_path_to_version_install(version) + "/chromadb/__init__.py" - - -def install_version(version: str) -> None: - # Check if already installed - version_library = get_path_to_version_library(version) - if os.path.exists(version_library): - return - path = get_path_to_version_install(version) - install(f"chromadb=={version}", path) - - -def install(pkg: str, path: str) -> int: - # -q -q to suppress pip output to ERROR level - # https://pip.pypa.io/en/stable/cli/pip/#quiet - print(f"Installing chromadb version {pkg} to {path}") - return subprocess.check_call( - [ - sys.executable, - "-m", - "pip", - "-q", - "-q", - "install", - pkg, - "--target={}".format(path), - ] - ) - - -def switch_to_version(version: str) -> ModuleType: - module_name = "chromadb" - # Remove old version from sys.modules, except test modules - old_modules = { - n: m - for n, m in sys.modules.items() - if n == module_name or (n.startswith(module_name + ".")) - } - for n in old_modules: - del sys.modules[n] - - # Load the target version and override the path 
to the installed version - # https://docs.python.org/3/library/importlib.html#importing-a-source-file-directly - sys.path.insert(0, get_path_to_version_install(version)) - import chromadb - - assert chromadb.__version__ == version - return chromadb - - -class not_implemented_ef(EmbeddingFunction): - def __call__(self, texts: Documents) -> Embeddings: - assert False, "Embedding function should not be called" - - -def persist_generated_data_with_old_version( - version: str, - settings: Settings, - collection_strategy: strategies.Collection, - embeddings_strategy: strategies.RecordSet, - conn: Connection, -) -> None: - try: - old_module = switch_to_version(version) - api: API = old_module.Client(settings) - api.reset() - coll = api.create_collection( - name=collection_strategy.name, - metadata=collection_strategy.metadata, - # In order to test old versions, we can't rely on the not_implemented function - embedding_function=not_implemented_ef(), - ) - coll.add(**embeddings_strategy) - # We can't use the invariants module here because it uses the current version - # Just use some basic checks for sanity and manual testing where you break the new - # version - - check_embeddings = invariants.wrap_all(embeddings_strategy) - # Check count - assert coll.count() == len(check_embeddings["embeddings"] or []) - # Check ids - result = coll.get() - actual_ids = result["ids"] - embedding_id_to_index = {id: i for i, id in enumerate(check_embeddings["ids"])} - actual_ids = sorted(actual_ids, key=lambda id: embedding_id_to_index[id]) - assert actual_ids == check_embeddings["ids"] - api.persist() - except Exception as e: - conn.send(e) - raise e - - -# Since we can't pickle the embedding function, we always generate record sets with embeddings -collection_st: st.SearchStrategy[strategies.Collection] = st.shared( - strategies.collections(with_hnsw_params=True, has_embeddings=True), key="coll" -) - - -@given( - collection_strategy=collection_st, - embeddings_strategy=strategies.recordsets(collection_st), -) -@pytest.mark.skipif( - sys.version_info.major < 3 - or (sys.version_info.major == 3 and sys.version_info.minor <= 7), - reason="The mininum supported versions of chroma do not work with python <= 3.7", -) -@pytest.mark.xfail( - reason="As we migrate to sqlite, we will not support old versions of chromadb and instead require manual migration. The minimum version will be increased to 0.4.0 and this test will be expected to pass." -) -@settings(deadline=None) -def test_cycle_versions( - version_settings: Tuple[str, Settings], - collection_strategy: strategies.Collection, - embeddings_strategy: strategies.RecordSet, -) -> None: - # # Test backwards compatibility - # # For the current version, ensure that we can load a collection from - # # the previous versions - version, settings = version_settings - - patch_for_version(version, collection_strategy, embeddings_strategy) - - # Can't pickle a function, and we won't need them - collection_strategy.embedding_function = None - collection_strategy.known_metadata_keys = {} - - # Run the task in a separate process to avoid polluting the current process - # with the old version. 
Using spawn instead of fork to avoid sharing the - # current process memory which would cause the old version to be loaded - ctx = multiprocessing.get_context("spawn") - conn1, conn2 = multiprocessing.Pipe() - p = ctx.Process( - target=persist_generated_data_with_old_version, - args=(version, settings, collection_strategy, embeddings_strategy, conn2), - ) - p.start() - p.join() - - if conn1.poll(): - e = conn1.recv() - raise e - - # Switch to the current version (local working directory) and check the invariants - # are preserved for the collection - api = Client(settings) - coll = api.get_collection( - name=collection_strategy.name, - embedding_function=not_implemented_ef(), - ) - invariants.count(coll, embeddings_strategy) - invariants.metadatas_match(coll, embeddings_strategy) - invariants.documents_match(coll, embeddings_strategy) - invariants.ids_match(coll, embeddings_strategy) - invariants.ann_accuracy(coll, embeddings_strategy) diff --git a/spaces/cihyFjudo/fairness-paper-search/Kiseki No Gyakuten Shouri!! Uchuu O Sukutta Gokuu 720p Torrent.md b/spaces/cihyFjudo/fairness-paper-search/Kiseki No Gyakuten Shouri!! Uchuu O Sukutta Gokuu 720p Torrent.md deleted file mode 100644 index 8861db5e14f67b8efcbd5a183139aee00ba2e589..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Kiseki No Gyakuten Shouri!! Uchuu O Sukutta Gokuu 720p Torrent.md +++ /dev/null @@ -1,6 +0,0 @@ -

Kiseki No Gyakuten Shouri!! Uchuu O Sukutta Gokuu 720p Torrent


Download Zip https://tinurli.com/2uwkqj



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/cihyFjudo/fairness-paper-search/[Altssima pobreza (review Giorgio Agamben s book) - Academia.edu](2).md b/spaces/cihyFjudo/fairness-paper-search/[Altssima pobreza (review Giorgio Agamben s book) - Academia.edu](2).md deleted file mode 100644 index b7d49445e7cad4da0e53eb433d580685a2af4a7d..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/[Altssima pobreza (review Giorgio Agamben s book) - Academia.edu](2).md +++ /dev/null @@ -1,6 +0,0 @@ -

Altisima Pobreza Agamben Pdf Download


DOWNLOAD ->>->>->> https://tinurli.com/2uwk36



- - aaccfb2cb3
-
-
-

diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/SpiderImagePlugin.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/SpiderImagePlugin.py deleted file mode 100644 index 5614957c176685c24f0c4cfebb4661d7c856b053..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/SpiderImagePlugin.py +++ /dev/null @@ -1,318 +0,0 @@ -# -# The Python Imaging Library. -# -# SPIDER image file handling -# -# History: -# 2004-08-02 Created BB -# 2006-03-02 added save method -# 2006-03-13 added support for stack images -# -# Copyright (c) 2004 by Health Research Inc. (HRI) RENSSELAER, NY 12144. -# Copyright (c) 2004 by William Baxter. -# Copyright (c) 2004 by Secret Labs AB. -# Copyright (c) 2004 by Fredrik Lundh. -# - -## -# Image plugin for the Spider image format. This format is used -# by the SPIDER software, in processing image data from electron -# microscopy and tomography. -## - -# -# SpiderImagePlugin.py -# -# The Spider image format is used by SPIDER software, in processing -# image data from electron microscopy and tomography. -# -# Spider home page: -# https://spider.wadsworth.org/spider_doc/spider/docs/spider.html -# -# Details about the Spider image format: -# https://spider.wadsworth.org/spider_doc/spider/docs/image_doc.html -# -import os -import struct -import sys - -from . import Image, ImageFile - - -def isInt(f): - try: - i = int(f) - if f - i == 0: - return 1 - else: - return 0 - except (ValueError, OverflowError): - return 0 - - -iforms = [1, 3, -11, -12, -21, -22] - - -# There is no magic number to identify Spider files, so just check a -# series of header locations to see if they have reasonable values. -# Returns no. of bytes in the header, if it is a valid Spider header, -# otherwise returns 0 - - -def isSpiderHeader(t): - h = (99,) + t # add 1 value so can use spider header index start=1 - # header values 1,2,5,12,13,22,23 should be integers - for i in [1, 2, 5, 12, 13, 22, 23]: - if not isInt(h[i]): - return 0 - # check iform - iform = int(h[5]) - if iform not in iforms: - return 0 - # check other header values - labrec = int(h[13]) # no. records in file header - labbyt = int(h[22]) # total no. 
of bytes in header - lenbyt = int(h[23]) # record length in bytes - if labbyt != (labrec * lenbyt): - return 0 - # looks like a valid header - return labbyt - - -def isSpiderImage(filename): - with open(filename, "rb") as fp: - f = fp.read(92) # read 23 * 4 bytes - t = struct.unpack(">23f", f) # try big-endian first - hdrlen = isSpiderHeader(t) - if hdrlen == 0: - t = struct.unpack("<23f", f) # little-endian - hdrlen = isSpiderHeader(t) - return hdrlen - - -class SpiderImageFile(ImageFile.ImageFile): - format = "SPIDER" - format_description = "Spider 2D image" - _close_exclusive_fp_after_loading = False - - def _open(self): - # check header - n = 27 * 4 # read 27 float values - f = self.fp.read(n) - - try: - self.bigendian = 1 - t = struct.unpack(">27f", f) # try big-endian first - hdrlen = isSpiderHeader(t) - if hdrlen == 0: - self.bigendian = 0 - t = struct.unpack("<27f", f) # little-endian - hdrlen = isSpiderHeader(t) - if hdrlen == 0: - msg = "not a valid Spider file" - raise SyntaxError(msg) - except struct.error as e: - msg = "not a valid Spider file" - raise SyntaxError(msg) from e - - h = (99,) + t # add 1 value : spider header index starts at 1 - iform = int(h[5]) - if iform != 1: - msg = "not a Spider 2D image" - raise SyntaxError(msg) - - self._size = int(h[12]), int(h[2]) # size in pixels (width, height) - self.istack = int(h[24]) - self.imgnumber = int(h[27]) - - if self.istack == 0 and self.imgnumber == 0: - # stk=0, img=0: a regular 2D image - offset = hdrlen - self._nimages = 1 - elif self.istack > 0 and self.imgnumber == 0: - # stk>0, img=0: Opening the stack for the first time - self.imgbytes = int(h[12]) * int(h[2]) * 4 - self.hdrlen = hdrlen - self._nimages = int(h[26]) - # Point to the first image in the stack - offset = hdrlen * 2 - self.imgnumber = 1 - elif self.istack == 0 and self.imgnumber > 0: - # stk=0, img>0: an image within the stack - offset = hdrlen + self.stkoffset - self.istack = 2 # So Image knows it's still a stack - else: - msg = "inconsistent stack header values" - raise SyntaxError(msg) - - if self.bigendian: - self.rawmode = "F;32BF" - else: - self.rawmode = "F;32F" - self.mode = "F" - - self.tile = [("raw", (0, 0) + self.size, offset, (self.rawmode, 0, 1))] - self._fp = self.fp # FIXME: hack - - @property - def n_frames(self): - return self._nimages - - @property - def is_animated(self): - return self._nimages > 1 - - # 1st image index is zero (although SPIDER imgnumber starts at 1) - def tell(self): - if self.imgnumber < 1: - return 0 - else: - return self.imgnumber - 1 - - def seek(self, frame): - if self.istack == 0: - msg = "attempt to seek in a non-stack file" - raise EOFError(msg) - if not self._seek_check(frame): - return - self.stkoffset = self.hdrlen + frame * (self.hdrlen + self.imgbytes) - self.fp = self._fp - self.fp.seek(self.stkoffset) - self._open() - - # returns a byte image after rescaling to 0..255 - def convert2byte(self, depth=255): - (minimum, maximum) = self.getextrema() - m = 1 - if maximum != minimum: - m = depth / (maximum - minimum) - b = -m * minimum - return self.point(lambda i, m=m, b=b: i * m + b).convert("L") - - # returns a ImageTk.PhotoImage object, after rescaling to 0..255 - def tkPhotoImage(self): - from . 
import ImageTk - - return ImageTk.PhotoImage(self.convert2byte(), palette=256) - - -# -------------------------------------------------------------------- -# Image series - - -# given a list of filenames, return a list of images -def loadImageSeries(filelist=None): - """create a list of :py:class:`~PIL.Image.Image` objects for use in a montage""" - if filelist is None or len(filelist) < 1: - return - - imglist = [] - for img in filelist: - if not os.path.exists(img): - print(f"unable to find {img}") - continue - try: - with Image.open(img) as im: - im = im.convert2byte() - except Exception: - if not isSpiderImage(img): - print(img + " is not a Spider image file") - continue - im.info["filename"] = img - imglist.append(im) - return imglist - - -# -------------------------------------------------------------------- -# For saving images in Spider format - - -def makeSpiderHeader(im): - nsam, nrow = im.size - lenbyt = nsam * 4 # There are labrec records in the header - labrec = int(1024 / lenbyt) - if 1024 % lenbyt != 0: - labrec += 1 - labbyt = labrec * lenbyt - nvalues = int(labbyt / 4) - if nvalues < 23: - return [] - - hdr = [] - for i in range(nvalues): - hdr.append(0.0) - - # NB these are Fortran indices - hdr[1] = 1.0 # nslice (=1 for an image) - hdr[2] = float(nrow) # number of rows per slice - hdr[3] = float(nrow) # number of records in the image - hdr[5] = 1.0 # iform for 2D image - hdr[12] = float(nsam) # number of pixels per line - hdr[13] = float(labrec) # number of records in file header - hdr[22] = float(labbyt) # total number of bytes in header - hdr[23] = float(lenbyt) # record length in bytes - - # adjust for Fortran indexing - hdr = hdr[1:] - hdr.append(0.0) - # pack binary data into a string - return [struct.pack("f", v) for v in hdr] - - -def _save(im, fp, filename): - if im.mode[0] != "F": - im = im.convert("F") - - hdr = makeSpiderHeader(im) - if len(hdr) < 256: - msg = "Error creating Spider header" - raise OSError(msg) - - # write the SPIDER header - fp.writelines(hdr) - - rawmode = "F;32NF" # 32-bit native floating point - ImageFile._save(im, fp, [("raw", (0, 0) + im.size, 0, (rawmode, 0, 1))]) - - -def _save_spider(im, fp, filename): - # get the filename extension and register it with Image - ext = os.path.splitext(filename)[1] - Image.register_extension(SpiderImageFile.format, ext) - _save(im, fp, filename) - - -# -------------------------------------------------------------------- - - -Image.register_open(SpiderImageFile.format, SpiderImageFile) -Image.register_save(SpiderImageFile.format, _save_spider) - -if __name__ == "__main__": - if len(sys.argv) < 2: - print("Syntax: python3 SpiderImagePlugin.py [infile] [outfile]") - sys.exit() - - filename = sys.argv[1] - if not isSpiderImage(filename): - print("input image must be in Spider format") - sys.exit() - - with Image.open(filename) as im: - print("image: " + str(im)) - print("format: " + str(im.format)) - print("size: " + str(im.size)) - print("mode: " + str(im.mode)) - print("max, min: ", end=" ") - print(im.getextrema()) - - if len(sys.argv) > 2: - outfile = sys.argv[2] - - # perform some image operation - im = im.transpose(Image.Transpose.FLIP_LEFT_RIGHT) - print( - f"saving a flipped version of {os.path.basename(filename)} " - f"as {outfile} " - ) - im.save(outfile, SpiderImageFile.format) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/otlLib/optimize/__main__.py 
b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/otlLib/optimize/__main__.py deleted file mode 100644 index b0ae9081ca8dac338bcf085c71adad87805e3bad..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/otlLib/optimize/__main__.py +++ /dev/null @@ -1,6 +0,0 @@ -import sys -from fontTools.otlLib.optimize import main - - -if __name__ == "__main__": - sys.exit(main()) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/removeOverlaps.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/removeOverlaps.py deleted file mode 100644 index 624cd47b4076a95cbc7c2124550371f6ffa5ea37..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/removeOverlaps.py +++ /dev/null @@ -1,248 +0,0 @@ -""" Simplify TrueType glyphs by merging overlapping contours/components. - -Requires https://github.com/fonttools/skia-pathops -""" - -import itertools -import logging -from typing import Callable, Iterable, Optional, Mapping - -from fontTools.misc.roundTools import otRound -from fontTools.ttLib import ttFont -from fontTools.ttLib.tables import _g_l_y_f -from fontTools.ttLib.tables import _h_m_t_x -from fontTools.pens.ttGlyphPen import TTGlyphPen - -import pathops - - -__all__ = ["removeOverlaps"] - - -class RemoveOverlapsError(Exception): - pass - - -log = logging.getLogger("fontTools.ttLib.removeOverlaps") - -_TTGlyphMapping = Mapping[str, ttFont._TTGlyph] - - -def skPathFromGlyph(glyphName: str, glyphSet: _TTGlyphMapping) -> pathops.Path: - path = pathops.Path() - pathPen = path.getPen(glyphSet=glyphSet) - glyphSet[glyphName].draw(pathPen) - return path - - -def skPathFromGlyphComponent( - component: _g_l_y_f.GlyphComponent, glyphSet: _TTGlyphMapping -): - baseGlyphName, transformation = component.getComponentInfo() - path = skPathFromGlyph(baseGlyphName, glyphSet) - return path.transform(*transformation) - - -def componentsOverlap(glyph: _g_l_y_f.Glyph, glyphSet: _TTGlyphMapping) -> bool: - if not glyph.isComposite(): - raise ValueError("This method only works with TrueType composite glyphs") - if len(glyph.components) < 2: - return False # single component, no overlaps - - component_paths = {} - - def _get_nth_component_path(index: int) -> pathops.Path: - if index not in component_paths: - component_paths[index] = skPathFromGlyphComponent( - glyph.components[index], glyphSet - ) - return component_paths[index] - - return any( - pathops.op( - _get_nth_component_path(i), - _get_nth_component_path(j), - pathops.PathOp.INTERSECTION, - fix_winding=False, - keep_starting_points=False, - ) - for i, j in itertools.combinations(range(len(glyph.components)), 2) - ) - - -def ttfGlyphFromSkPath(path: pathops.Path) -> _g_l_y_f.Glyph: - # Skia paths have no 'components', no need for glyphSet - ttPen = TTGlyphPen(glyphSet=None) - path.draw(ttPen) - glyph = ttPen.glyph() - assert not glyph.isComposite() - # compute glyph.xMin (glyfTable parameter unused for non composites) - glyph.recalcBounds(glyfTable=None) - return glyph - - -def _round_path( - path: pathops.Path, round: Callable[[float], float] = otRound -) -> pathops.Path: - rounded_path = pathops.Path() - for verb, points in path: - rounded_path.add(verb, *((round(p[0]), round(p[1])) for p in points)) - return rounded_path - - -def _simplify(path: pathops.Path, debugGlyphName: str) -> pathops.Path: - # 
skia-pathops has a bug where it sometimes fails to simplify paths when there - # are float coordinates and control points are very close to one another. - # Rounding coordinates to integers works around the bug. - # Since we are going to round glyf coordinates later on anyway, here it is - # ok(-ish) to also round before simplify. Better than failing the whole process - # for the entire font. - # https://bugs.chromium.org/p/skia/issues/detail?id=11958 - # https://github.com/google/fonts/issues/3365 - # TODO(anthrotype): remove once this Skia bug is fixed - try: - return pathops.simplify(path, clockwise=path.clockwise) - except pathops.PathOpsError: - pass - - path = _round_path(path) - try: - path = pathops.simplify(path, clockwise=path.clockwise) - log.debug( - "skia-pathops failed to simplify '%s' with float coordinates, " - "but succeded using rounded integer coordinates", - debugGlyphName, - ) - return path - except pathops.PathOpsError as e: - if log.isEnabledFor(logging.DEBUG): - path.dump() - raise RemoveOverlapsError( - f"Failed to remove overlaps from glyph {debugGlyphName!r}" - ) from e - - raise AssertionError("Unreachable") - - -def removeTTGlyphOverlaps( - glyphName: str, - glyphSet: _TTGlyphMapping, - glyfTable: _g_l_y_f.table__g_l_y_f, - hmtxTable: _h_m_t_x.table__h_m_t_x, - removeHinting: bool = True, -) -> bool: - glyph = glyfTable[glyphName] - # decompose composite glyphs only if components overlap each other - if ( - glyph.numberOfContours > 0 - or glyph.isComposite() - and componentsOverlap(glyph, glyphSet) - ): - path = skPathFromGlyph(glyphName, glyphSet) - - # remove overlaps - path2 = _simplify(path, glyphName) - - # replace TTGlyph if simplified path is different (ignoring contour order) - if {tuple(c) for c in path.contours} != {tuple(c) for c in path2.contours}: - glyfTable[glyphName] = glyph = ttfGlyphFromSkPath(path2) - # simplified glyph is always unhinted - assert not glyph.program - # also ensure hmtx LSB == glyph.xMin so glyph origin is at x=0 - width, lsb = hmtxTable[glyphName] - if lsb != glyph.xMin: - hmtxTable[glyphName] = (width, glyph.xMin) - return True - - if removeHinting: - glyph.removeHinting() - return False - - -def removeOverlaps( - font: ttFont.TTFont, - glyphNames: Optional[Iterable[str]] = None, - removeHinting: bool = True, - ignoreErrors=False, -) -> None: - """Simplify glyphs in TTFont by merging overlapping contours. - - Overlapping components are first decomposed to simple contours, then merged. - - Currently this only works with TrueType fonts with 'glyf' table. - Raises NotImplementedError if 'glyf' table is absent. - - Note that removing overlaps invalidates the hinting. By default we drop hinting - from all glyphs whether or not overlaps are removed from a given one, as it would - look weird if only some glyphs are left (un)hinted. - - Args: - font: input TTFont object, modified in place. - glyphNames: optional iterable of glyph names (str) to remove overlaps from. - By default, all glyphs in the font are processed. - removeHinting (bool): set to False to keep hinting for unmodified glyphs. - ignoreErrors (bool): set to True to ignore errors while removing overlaps, - thus keeping the tricky glyphs unchanged (fonttools/fonttools#2363). 
- """ - try: - glyfTable = font["glyf"] - except KeyError: - raise NotImplementedError("removeOverlaps currently only works with TTFs") - - hmtxTable = font["hmtx"] - # wraps the underlying glyf Glyphs, takes care of interfacing with drawing pens - glyphSet = font.getGlyphSet() - - if glyphNames is None: - glyphNames = font.getGlyphOrder() - - # process all simple glyphs first, then composites with increasing component depth, - # so that by the time we test for component intersections the respective base glyphs - # have already been simplified - glyphNames = sorted( - glyphNames, - key=lambda name: ( - glyfTable[name].getCompositeMaxpValues(glyfTable).maxComponentDepth - if glyfTable[name].isComposite() - else 0, - name, - ), - ) - modified = set() - for glyphName in glyphNames: - try: - if removeTTGlyphOverlaps( - glyphName, glyphSet, glyfTable, hmtxTable, removeHinting - ): - modified.add(glyphName) - except RemoveOverlapsError: - if not ignoreErrors: - raise - log.error("Failed to remove overlaps for '%s'", glyphName) - - log.debug("Removed overlaps for %s glyphs:\n%s", len(modified), " ".join(modified)) - - -def main(args=None): - import sys - - if args is None: - args = sys.argv[1:] - - if len(args) < 2: - print( - f"usage: fonttools ttLib.removeOverlaps INPUT.ttf OUTPUT.ttf [GLYPHS ...]" - ) - sys.exit(1) - - src = args[0] - dst = args[1] - glyphNames = args[2:] or None - - with ttFont.TTFont(src) as f: - removeOverlaps(f, glyphNames) - f.save(dst) - - -if __name__ == "__main__": - main() diff --git a/spaces/cm107/agv-demo/README.md b/spaces/cm107/agv-demo/README.md deleted file mode 100644 index 07735e702d914badba25a209c1ae1459cf8c6d4d..0000000000000000000000000000000000000000 --- a/spaces/cm107/agv-demo/README.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: Agv Demo -emoji: 📉 -colorFrom: gray -colorTo: indigo -sdk: static -pinned: false -license: mit ---- diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dca_exss.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dca_exss.c deleted file mode 100644 index e873088f82dd7fdfd5b1cfeadec4878362caf691..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/dca_exss.c +++ /dev/null @@ -1,514 +0,0 @@ -/* - * Copyright (C) 2016 foo86 - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "dcadec.h" - -static void parse_xll_parameters(DCAExssParser *s, DCAExssAsset *asset) -{ - // Size of XLL data in extension substream - asset->xll_size = get_bits(&s->gb, s->exss_size_nbits) + 1; - - // XLL sync word present flag - if (asset->xll_sync_present = get_bits1(&s->gb)) { - int xll_delay_nbits; - - // Peak bit rate smoothing buffer size - skip_bits(&s->gb, 4); - - // Number of bits for XLL decoding delay - xll_delay_nbits = get_bits(&s->gb, 5) + 1; - - // Initial XLL decoding delay in frames - asset->xll_delay_nframes = get_bits_long(&s->gb, xll_delay_nbits); - - // Number of bytes offset to XLL sync - asset->xll_sync_offset = get_bits(&s->gb, s->exss_size_nbits); - } else { - asset->xll_delay_nframes = 0; - asset->xll_sync_offset = 0; - } -} - -static void parse_lbr_parameters(DCAExssParser *s, DCAExssAsset *asset) -{ - // Size of LBR component in extension substream - asset->lbr_size = get_bits(&s->gb, 14) + 1; - - // LBR sync word present flag - if (get_bits1(&s->gb)) - // LBR sync distance - skip_bits(&s->gb, 2); -} - -static int parse_descriptor(DCAExssParser *s, DCAExssAsset *asset) -{ - int i, j, drc_present, descr_size, descr_pos = get_bits_count(&s->gb); - - // Size of audio asset descriptor in bytes - descr_size = get_bits(&s->gb, 9) + 1; - - // Audio asset identifier - asset->asset_index = get_bits(&s->gb, 3); - - // - // Per stream static metadata - // - - if (s->static_fields_present) { - // Asset type descriptor presence - if (get_bits1(&s->gb)) - // Asset type descriptor - skip_bits(&s->gb, 4); - - // Language descriptor presence - if (get_bits1(&s->gb)) - // Language descriptor - skip_bits(&s->gb, 24); - - // Additional textual information presence - if (get_bits1(&s->gb)) { - // Byte size of additional text info - int text_size = get_bits(&s->gb, 10) + 1; - - // Sanity check available size - if (get_bits_left(&s->gb) < text_size * 8) - return AVERROR_INVALIDDATA; - - // Additional textual information string - skip_bits_long(&s->gb, text_size * 8); - } - - // PCM bit resolution - asset->pcm_bit_res = get_bits(&s->gb, 5) + 1; - - // Maximum sample rate - asset->max_sample_rate = ff_dca_sampling_freqs[get_bits(&s->gb, 4)]; - - // Total number of channels - asset->nchannels_total = get_bits(&s->gb, 8) + 1; - - // One to one map channel to speakers - if (asset->one_to_one_map_ch_to_spkr = get_bits1(&s->gb)) { - int spkr_mask_nbits = 0; - int spkr_remap_nsets; - int nspeakers[8]; - - // Embedded stereo flag - asset->embedded_stereo = asset->nchannels_total > 2 && get_bits1(&s->gb); - - // Embedded 6 channels flag - asset->embedded_6ch = asset->nchannels_total > 6 && get_bits1(&s->gb); - - // Speaker mask enabled flag - if (asset->spkr_mask_enabled = get_bits1(&s->gb)) { - // Number of bits for speaker activity mask - spkr_mask_nbits = (get_bits(&s->gb, 2) + 1) << 2; - - // Loudspeaker activity mask - asset->spkr_mask = get_bits(&s->gb, spkr_mask_nbits); - } - - // Number of speaker remapping sets - if ((spkr_remap_nsets = get_bits(&s->gb, 3)) && !spkr_mask_nbits) { - if (s->avctx) - av_log(s->avctx, AV_LOG_ERROR, "Speaker mask disabled yet there are remapping sets\n"); - return AVERROR_INVALIDDATA; - } - - // Standard loudspeaker layout mask - for (i = 0; i < spkr_remap_nsets; i++) - nspeakers[i] = 
ff_dca_count_chs_for_mask(get_bits(&s->gb, spkr_mask_nbits)); - - for (i = 0; i < spkr_remap_nsets; i++) { - // Number of channels to be decoded for speaker remapping - int nch_for_remaps = get_bits(&s->gb, 5) + 1; - - for (j = 0; j < nspeakers[i]; j++) { - // Decoded channels to output speaker mapping mask - int remap_ch_mask = get_bits_long(&s->gb, nch_for_remaps); - - // Loudspeaker remapping codes - skip_bits_long(&s->gb, av_popcount(remap_ch_mask) * 5); - } - } - } else { - asset->embedded_stereo = 0; - asset->embedded_6ch = 0; - asset->spkr_mask_enabled = 0; - asset->spkr_mask = 0; - - // Representation type - asset->representation_type = get_bits(&s->gb, 3); - } - } - - // - // DRC, DNC and mixing metadata - // - - // Dynamic range coefficient presence flag - drc_present = get_bits1(&s->gb); - - // Code for dynamic range coefficient - if (drc_present) - skip_bits(&s->gb, 8); - - // Dialog normalization presence flag - if (get_bits1(&s->gb)) - // Dialog normalization code - skip_bits(&s->gb, 5); - - // DRC for stereo downmix - if (drc_present && asset->embedded_stereo) - skip_bits(&s->gb, 8); - - // Mixing metadata presence flag - if (s->mix_metadata_enabled && get_bits1(&s->gb)) { - int nchannels_dmix; - - // External mixing flag - skip_bits1(&s->gb); - - // Post mixing / replacement gain adjustment - skip_bits(&s->gb, 6); - - // DRC prior to mixing - if (get_bits(&s->gb, 2) == 3) - // Custom code for mixing DRC - skip_bits(&s->gb, 8); - else - // Limit for mixing DRC - skip_bits(&s->gb, 3); - - // Scaling type for channels of main audio - // Scaling parameters of main audio - if (get_bits1(&s->gb)) - for (i = 0; i < s->nmixoutconfigs; i++) - skip_bits_long(&s->gb, 6 * s->nmixoutchs[i]); - else - skip_bits_long(&s->gb, 6 * s->nmixoutconfigs); - - nchannels_dmix = asset->nchannels_total; - if (asset->embedded_6ch) - nchannels_dmix += 6; - if (asset->embedded_stereo) - nchannels_dmix += 2; - - for (i = 0; i < s->nmixoutconfigs; i++) { - if (!s->nmixoutchs[i]) { - if (s->avctx) - av_log(s->avctx, AV_LOG_ERROR, "Invalid speaker layout mask for mixing configuration\n"); - return AVERROR_INVALIDDATA; - } - for (j = 0; j < nchannels_dmix; j++) { - // Mix output mask - int mix_map_mask = get_bits(&s->gb, s->nmixoutchs[i]); - - // Mixing coefficients - skip_bits_long(&s->gb, av_popcount(mix_map_mask) * 6); - } - } - } - - // - // Decoder navigation data - // - - // Coding mode for the asset - asset->coding_mode = get_bits(&s->gb, 2); - - // Coding components used in asset - switch (asset->coding_mode) { - case 0: // Coding mode that may contain multiple coding components - asset->extension_mask = get_bits(&s->gb, 12); - - if (asset->extension_mask & DCA_EXSS_CORE) { - // Size of core component in extension substream - asset->core_size = get_bits(&s->gb, 14) + 1; - // Core sync word present flag - if (get_bits1(&s->gb)) - // Core sync distance - skip_bits(&s->gb, 2); - } - - if (asset->extension_mask & DCA_EXSS_XBR) - // Size of XBR extension in extension substream - asset->xbr_size = get_bits(&s->gb, 14) + 1; - - if (asset->extension_mask & DCA_EXSS_XXCH) - // Size of XXCH extension in extension substream - asset->xxch_size = get_bits(&s->gb, 14) + 1; - - if (asset->extension_mask & DCA_EXSS_X96) - // Size of X96 extension in extension substream - asset->x96_size = get_bits(&s->gb, 12) + 1; - - if (asset->extension_mask & DCA_EXSS_LBR) - parse_lbr_parameters(s, asset); - - if (asset->extension_mask & DCA_EXSS_XLL) - parse_xll_parameters(s, asset); - - if (asset->extension_mask & 
DCA_EXSS_RSV1) - skip_bits(&s->gb, 16); - - if (asset->extension_mask & DCA_EXSS_RSV2) - skip_bits(&s->gb, 16); - break; - - case 1: // Loss-less coding mode without CBR component - asset->extension_mask = DCA_EXSS_XLL; - parse_xll_parameters(s, asset); - break; - - case 2: // Low bit rate mode - asset->extension_mask = DCA_EXSS_LBR; - parse_lbr_parameters(s, asset); - break; - - case 3: // Auxiliary coding mode - asset->extension_mask = 0; - - // Size of auxiliary coded data - skip_bits(&s->gb, 14); - - // Auxiliary codec identification - skip_bits(&s->gb, 8); - - // Aux sync word present flag - if (get_bits1(&s->gb)) - // Aux sync distance - skip_bits(&s->gb, 3); - break; - } - - if (asset->extension_mask & DCA_EXSS_XLL) - // DTS-HD stream ID - asset->hd_stream_id = get_bits(&s->gb, 3); - - // One to one mixing flag - // Per channel main audio scaling flag - // Main audio scaling codes - // Decode asset in secondary decoder flag - // Revision 2 DRC metadata - // Reserved - // Zero pad - if (ff_dca_seek_bits(&s->gb, descr_pos + descr_size * 8)) { - if (s->avctx) - av_log(s->avctx, AV_LOG_ERROR, "Read past end of EXSS asset descriptor\n"); - return AVERROR_INVALIDDATA; - } - - return 0; -} - -static int set_exss_offsets(DCAExssAsset *asset) -{ - int offs = asset->asset_offset; - int size = asset->asset_size; - - if (asset->extension_mask & DCA_EXSS_CORE) { - asset->core_offset = offs; - if (asset->core_size > size) - return AVERROR_INVALIDDATA; - offs += asset->core_size; - size -= asset->core_size; - } - - if (asset->extension_mask & DCA_EXSS_XBR) { - asset->xbr_offset = offs; - if (asset->xbr_size > size) - return AVERROR_INVALIDDATA; - offs += asset->xbr_size; - size -= asset->xbr_size; - } - - if (asset->extension_mask & DCA_EXSS_XXCH) { - asset->xxch_offset = offs; - if (asset->xxch_size > size) - return AVERROR_INVALIDDATA; - offs += asset->xxch_size; - size -= asset->xxch_size; - } - - if (asset->extension_mask & DCA_EXSS_X96) { - asset->x96_offset = offs; - if (asset->x96_size > size) - return AVERROR_INVALIDDATA; - offs += asset->x96_size; - size -= asset->x96_size; - } - - if (asset->extension_mask & DCA_EXSS_LBR) { - asset->lbr_offset = offs; - if (asset->lbr_size > size) - return AVERROR_INVALIDDATA; - offs += asset->lbr_size; - size -= asset->lbr_size; - } - - if (asset->extension_mask & DCA_EXSS_XLL) { - asset->xll_offset = offs; - if (asset->xll_size > size) - return AVERROR_INVALIDDATA; - offs += asset->xll_size; - size -= asset->xll_size; - } - - return 0; -} - -int ff_dca_exss_parse(DCAExssParser *s, const uint8_t *data, int size) -{ - int i, ret, offset, wide_hdr, header_size; - - if ((ret = init_get_bits8(&s->gb, data, size)) < 0) - return ret; - - // Extension substream sync word - skip_bits_long(&s->gb, 32); - - // User defined bits - skip_bits(&s->gb, 8); - - // Extension substream index - s->exss_index = get_bits(&s->gb, 2); - - // Flag indicating short or long header size - wide_hdr = get_bits1(&s->gb); - - // Extension substream header length - header_size = get_bits(&s->gb, 8 + 4 * wide_hdr) + 1; - - // Check CRC - if (s->avctx && ff_dca_check_crc(s->avctx, &s->gb, 32 + 8, header_size * 8)) { - av_log(s->avctx, AV_LOG_ERROR, "Invalid EXSS header checksum\n"); - return AVERROR_INVALIDDATA; - } - - s->exss_size_nbits = 16 + 4 * wide_hdr; - - // Number of bytes of extension substream - s->exss_size = get_bits(&s->gb, s->exss_size_nbits) + 1; - if (s->exss_size > size) { - if (s->avctx) - av_log(s->avctx, AV_LOG_ERROR, "Packet too short for EXSS frame\n"); - return 
AVERROR_INVALIDDATA; - } - - // Per stream static fields presence flag - if (s->static_fields_present = get_bits1(&s->gb)) { - int active_exss_mask[8]; - - // Reference clock code - skip_bits(&s->gb, 2); - - // Extension substream frame duration - skip_bits(&s->gb, 3); - - // Timecode presence flag - if (get_bits1(&s->gb)) - // Timecode data - skip_bits_long(&s->gb, 36); - - // Number of defined audio presentations - s->npresents = get_bits(&s->gb, 3) + 1; - if (s->npresents > 1) { - if (s->avctx) - avpriv_request_sample(s->avctx, "%d audio presentations", s->npresents); - return AVERROR_PATCHWELCOME; - } - - // Number of audio assets in extension substream - s->nassets = get_bits(&s->gb, 3) + 1; - if (s->nassets > 1) { - if (s->avctx) - avpriv_request_sample(s->avctx, "%d audio assets", s->nassets); - return AVERROR_PATCHWELCOME; - } - - // Active extension substream mask for audio presentation - for (i = 0; i < s->npresents; i++) - active_exss_mask[i] = get_bits(&s->gb, s->exss_index + 1); - - // Active audio asset mask - for (i = 0; i < s->npresents; i++) - skip_bits_long(&s->gb, av_popcount(active_exss_mask[i]) * 8); - - // Mixing metadata enable flag - if (s->mix_metadata_enabled = get_bits1(&s->gb)) { - int spkr_mask_nbits; - - // Mixing metadata adjustment level - skip_bits(&s->gb, 2); - - // Number of bits for mixer output speaker activity mask - spkr_mask_nbits = (get_bits(&s->gb, 2) + 1) << 2; - - // Number of mixing configurations - s->nmixoutconfigs = get_bits(&s->gb, 2) + 1; - - // Speaker layout mask for mixer output channels - for (i = 0; i < s->nmixoutconfigs; i++) - s->nmixoutchs[i] = ff_dca_count_chs_for_mask(get_bits(&s->gb, spkr_mask_nbits)); - } - } else { - s->npresents = 1; - s->nassets = 1; - } - - // Size of encoded asset data in bytes - offset = header_size; - for (i = 0; i < s->nassets; i++) { - s->assets[i].asset_offset = offset; - s->assets[i].asset_size = get_bits(&s->gb, s->exss_size_nbits) + 1; - offset += s->assets[i].asset_size; - if (offset > s->exss_size) { - if (s->avctx) - av_log(s->avctx, AV_LOG_ERROR, "EXSS asset out of bounds\n"); - return AVERROR_INVALIDDATA; - } - } - - // Audio asset descriptor - for (i = 0; i < s->nassets; i++) { - if ((ret = parse_descriptor(s, &s->assets[i])) < 0) - return ret; - if ((ret = set_exss_offsets(&s->assets[i])) < 0) { - if (s->avctx) - av_log(s->avctx, AV_LOG_ERROR, "Invalid extension size in EXSS asset descriptor\n"); - return ret; - } - } - - // Backward compatible core present - // Backward compatible core substream index - // Backward compatible core asset index - // Reserved - // Byte align - // CRC16 of extension substream header - if (ff_dca_seek_bits(&s->gb, header_size * 8)) { - if (s->avctx) - av_log(s->avctx, AV_LOG_ERROR, "Read past end of EXSS header\n"); - return AVERROR_INVALIDDATA; - } - - return 0; -} diff --git a/spaces/congsaPfin/Manga-OCR/logs/Archive 3D Free 3D models textures HDRI.md b/spaces/congsaPfin/Manga-OCR/logs/Archive 3D Free 3D models textures HDRI.md deleted file mode 100644 index fff7e948db3686fb0423bea88b6f3657371702cf..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Archive 3D Free 3D models textures HDRI.md +++ /dev/null @@ -1,133 +0,0 @@ -
-

3D Objects Free: How to Find and Download Them for Your Projects

-

If you are working on a project that involves 3D graphics, animation, or visualization, you might be looking for some free 3D objects to use. 3D objects are digital representations of physical objects that can be displayed and manipulated in three dimensions. They can be used for various purposes, such as creating realistic scenes, enhancing user interfaces, simulating physical phenomena, or telling stories.

-

3d objects free


Download File ✪✪✪ https://urlca.com/2uO6sK



-

What are 3D Objects and Why Use Them?

-

3D objects are composed of vertices, edges, and faces that define their shape and appearance. They can also have textures, materials, colors, lighting, and animations that add more details and effects. 3D objects can be created using specialized software, such as Blender, Maya, or SketchUp, or scanned from real-world objects using cameras or sensors.
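To make the vertex/edge/face idea concrete, here is a minimal hand-rolled sketch in plain Python (no particular 3D library is assumed): a unit cube stored as a list of vertices plus a list of triangular faces that index into it, with the edges derived from the faces.

```python
# A tiny illustration of how a 3D object is stored: vertices (points in space)
# plus faces (triangles defined by indices into the vertex list).

# Eight corners of a unit cube.
vertices = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),   # bottom square
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),   # top square
]

# Twelve triangles (two per cube side), each a triple of vertex indices.
faces = [
    (0, 1, 2), (0, 2, 3),  # bottom
    (4, 6, 5), (4, 7, 6),  # top
    (0, 4, 5), (0, 5, 1),  # front
    (1, 5, 6), (1, 6, 2),  # right
    (2, 6, 7), (2, 7, 3),  # back
    (3, 7, 4), (3, 4, 0),  # left
]

# Edges can be derived from the faces: each triangle contributes three edges.
edges = {tuple(sorted((f[i], f[(i + 1) % 3]))) for f in faces for i in range(3)}

print(f"{len(vertices)} vertices, {len(faces)} faces, {len(edges)} edges")
```

Textures, materials, and animations are extra data layered on top of this basic geometry.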

-

The Benefits of Using 3D Objects in Your Projects

-

Using 3D objects in your projects can have many benefits, such as:

-
    -
  • Improving the visual quality and realism of your graphics and animations
  • -
  • Increasing the interactivity and engagement of your users and audiences
  • -
  • Reducing the time and cost of creating your own 3D objects from scratch
  • -
  • Expanding your creative possibilities and options
  • -
-

The Challenges of Creating 3D Objects from Scratch

-

However, creating your own 3D objects from scratch can also have some challenges, such as:

-
    -
  • Requiring a lot of skill, knowledge, and experience in 3D modeling and design
  • -
  • Taking a lot of time and effort to produce high-quality and realistic results
  • -
  • Needing a lot of resources and equipment to scan or capture real-world objects
  • -
  • Facing legal or ethical issues when using or modifying existing 3D objects without permission or attribution
  • -
-

Where to Find Free 3D Objects Online

-

Fortunately, there are many online sources where you can find free 3D objects that you can use for your projects. These sources can be divided into two main categories: websites and libraries.

-

Free 3D Models Websites

-

These are websites that offer a large collection of free 3D models that you can browse, download, and use. Some examples are:

-

Free3D.com

-

This is a website that hosts over 15,000 free 3D models in various formats, such as Blender, OBJ, 3DS, C4D, MAX, MAYA. You can find models for different categories, such as architecture, vehicles, characters, furniture, aircraft, etc. You can also share your own models with the community.

-

free 3d models download
-free 3d assets for games
-free 3d characters and creatures
-free 3d cars and vehicles
-free 3d weapons and military
-free 3d scans and photogrammetry
-free 3d handpainted textures
-free 3d pbr materials
-free 3d medieval and fantasy
-free 3d electronics and gadgets
-free 3d robots and mechs
-free 3d furniture and decorations
-free 3d architecture and buildings
-free 3d city and environment
-free 3d low poly and stylized
-free 3d sci-fi and space
-free 3d anime and manga
-free 3d cartoon and comic
-free 3d horror and zombie
-free 3d plants and trees
-free 3d animals and pets
-free 3d dinosaurs and prehistoric
-free 3d humanoids and aliens
-free 3d rigged and animated
-free 3d printable and sculptable
-free 3d blender models
-free 3d obj models
-free 3d fbx models
-free 3d max models
-free 3d maya models
-free 3d c4d models
-free 3d unity models
-free 3d unreal models
-free 3d sketchfab models
-free 3d cgtrader models
-free 3d turbosquid models
-free 3d artstation models
-free 3d mixamo models
-free 3d quixel models
-free 3d substance models
-royalty-free 3d models
-creative commons license 3d models
-high quality 3d models for free
-realistic 3d models for free
-best sites for free 3d models
-how to get free 3d models
-where to find free 3d models
-tips for creating free 3d models
-tutorials for making free 3d models

-

Sketchfab

-

This is a website that allows you to view, share, and download over half a million free 3D models under Creative Commons licenses. You can also buy royalty-free models from the Sketchfab Store. You can explore models for different categories, such as characters, cars, weapons, scans, handpainted, medieval, fantasy, etc.

-

CGTrader

-

This is a website that offers over one million free and paid 3D models for various categories, such as animals, architecture, electronics, furniture, vehicles, etc. You can also sell your own models and earn money. You can filter models by price, format, license, polycount, etc.

-

Free 3D Models Libraries and Repositories

-

These are online platforms that store and organize free 3D models that you can access and download. Some examples are:

-

Google Poly

-

This is a platform that allows you to browse, discover, and download thousands of free 3D models created by Google and other users. You can also upload your own models and share them with the world. You can find models for different categories, such as animals, art, food, nature, objects, scenes, etc.

-

BlenderKit

-

This is a platform that offers over 10,000 free 3D models for Blender users. You can also upload your own models and earn credits or money. You can find models for different categories, such as animals, architecture, characters, furniture, vehicles, etc. You can also access free materials, brushes, and textures.

-

NASA 3D Resources

-

This is a platform that provides free 3D models of NASA's missions, spacecraft, planets, asteroids, comets, etc. You can also access free images, videos, podcasts, and e-books. You can find models in various formats, such as STL, OBJ, 3DS, FBX, etc.

-

How to Download and Use Free 3D Objects

-

Once you have found the free 3D objects that you want to use for your projects, you need to download and use them properly. Here are some tips to help you:

-

The Different File Formats for 3D Objects

-

Free 3D objects can come in different file formats that determine how they are stored and displayed. Some of the most common formats are:

-
    -
  • OBJ: This is a simple and widely used format that stores the geometry and texture coordinates of a 3D object.
  • -
  • STL: This is a format that stores the surface geometry of a 3D object using triangular facets. It is often used for 3D printing.
  • -
  • FBX: This is a format that stores the geometry, materials, animations, and other attributes of a 3D object. It is often used for game development and film production.
  • -
  • GLTF: This is a format that stores the geometry, materials, textures, animations, and other attributes of a 3D object using JSON and binary data. It is often used for web-based applications and virtual reality.
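As a rough illustration of how the formats above look from code, the sketch below uses the open-source `trimesh` library (an assumption on my part — any mesh library would do, and glTF support in trimesh may need extra dependencies). The file names are placeholders for models you have downloaded.

```python
# Sketch: load downloaded models and print basic statistics.
# Assumes `pip install trimesh`; the file paths are placeholders.
import trimesh

for path in ["model.obj", "model.stl", "model.glb"]:
    loaded = trimesh.load(path)

    # Some formats (e.g. glTF/GLB) load as a Scene containing several meshes;
    # others load directly as a single Trimesh.
    if isinstance(loaded, trimesh.Scene):
        meshes = list(loaded.geometry.values())
    else:
        meshes = [loaded]

    for mesh in meshes:
        print(path, "->", len(mesh.vertices), "vertices,", len(mesh.faces), "faces")
```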
  • -
-

The Software and Tools You Need to Open and Edit 3D Objects

-

To open and edit free 3D objects, you need to have the appropriate software and tools installed on your computer or device. Some of the most popular software and tools are:

-
    -
  • Blender: This is a free and open-source software that allows you to create, edit, animate, render, and export 3D objects in various formats. It also has many features and add-ons for modeling, sculpting, texturing, lighting, rigging, animating, rendering, compositing, and more.
  • -
  • SketchUp: This is a software that allows you to create, edit, and share 3D objects in various formats. It also has many features and tools for drawing, measuring, coloring, applying materials, adding components, creating scenes, and more.
  • -
  • Unity: This is a software that allows you to create, edit, and export 3D objects in various formats. It also has many features and tools for game development, such as scripting, physics, audio, networking, animation, UI, etc.
  • -
  • Paint 3D: This is a software that allows you to create, edit, and export 3D objects in various formats. It also has many features and tools for painting, coloring, texturing, adding stickers, effects, lighting, etc.
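Several of the tools above can also be driven from scripts. As one hedged example, Blender ships a Python API (`bpy`), and a short conversion script like the sketch below can be run headless with `blender --background --python convert.py`. The operator names change between Blender versions, so treat these calls as version-dependent, and the file names are placeholders.

```python
# Sketch: batch-convert a downloaded OBJ to glTF with Blender's Python API.
# Must be run inside Blender; operator names vary by Blender version.
import bpy

# Start from an empty scene so only the imported model is exported.
bpy.ops.wm.read_factory_settings(use_empty=True)

# Blender 3.3+ ships the new OBJ importer; older releases used
# bpy.ops.import_scene.obj(filepath=...).
bpy.ops.wm.obj_import(filepath="downloaded_model.obj")

# Export everything as glTF/GLB for use in web or game engines.
bpy.ops.export_scene.gltf(filepath="converted_model.glb")
```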
  • -
-

The Best Practices for Using Free 3D Objects in Your Projects

-

To use free 3D objects in your projects effectively and ethically, you need to follow some best practices, such as:

-
    -
  • Checking the license and terms of use of the free 3D objects before downloading and using them. Some free 3D objects may have restrictions or requirements for attribution, modification, distribution, commercial use, etc.
  • -
  • Optimizing the size and quality of the free 3D objects to suit your project's needs and specifications. Some free 3D objects may have high polygon counts or large file sizes that can affect the performance or loading time of your project.
  • -
  • Customizing the appearance and behavior of the free 3D objects to match your project's theme and style. Some free 3D objects may have generic or inconsistent textures, materials, colors, lighting, animations, etc. that can affect the realism or aesthetics of your project.
  • -
  • Crediting the original creators or sources of the free 3D objects when using them in your project. This is not only a sign of respect and gratitude but also a way of avoiding plagiarism or infringement issues.
  • -
-

Conclusion

-

In conclusion, free 3D objects are a great resource for anyone who wants to create or enhance their projects with 3D graphics, animation, or visualization. They can save you time, money, and effort, as well as provide you with a variety of options and possibilities. However, you also need to be aware of the challenges and best practices of finding, downloading, and using free 3D objects online. By following the tips and resources in this article, you can make the most out of free 3D objects for your projects.

-

FAQs

-

Here are some frequently asked questions about free 3D objects:

-
    -
  1. What are the best websites to find free 3D objects?
  2. -

    There is no definitive answer to this question, as different websites may have different advantages and disadvantages depending on your needs and preferences. However, some of the most popular and reputable websites are Free3D.com, Sketchfab, CGTrader, Google Poly, BlenderKit, and NASA 3D Resources.

    -
  3. What are the best software and tools to open and edit free 3D objects?
  4. -

    Again, this depends on your personal choice and project requirements. However, some of the most widely used and versatile software and tools are Blender, SketchUp, Unity, and Paint 3D.

    -
  5. What are the best file formats for free 3D objects?
  6. -

    This depends on the type and purpose of your project, as well as the compatibility and functionality of your software and tools. However, some of the most common and flexible file formats are OBJ, STL, FBX, and GLTF.

    -
  7. How can I optimize the size and quality of free 3D objects?
  8. -

    You can use various methods and techniques to optimize the size and quality of free 3D objects, such as reducing the polygon count, simplifying the geometry, compressing the file size, adjusting the resolution, applying level of detail (LOD), etc.
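For a quick, dependency-free way to judge whether a model needs optimizing at all, a few lines of standard-library Python can report an OBJ file's size and rough polygon count. This is only a sketch: the file name is a placeholder and the parser only handles the plain-text Wavefront OBJ format.

```python
# Rough health check for a Wavefront OBJ file before dropping it into a project.
# Standard library only; "asset.obj" is a placeholder path.
import os

path = "asset.obj"
vertex_count = 0
face_count = 0

with open(path, "r", errors="ignore") as f:
    for line in f:
        if line.startswith("v "):       # geometric vertex
            vertex_count += 1
        elif line.startswith("f "):     # polygonal face
            face_count += 1

size_mb = os.path.getsize(path) / (1024 * 1024)
print(f"{path}: {size_mb:.2f} MB, {vertex_count} vertices, {face_count} faces")
if face_count > 100_000:
    print("High polygon count - consider decimating before real-time use.")
```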

    -
  9. How can I credit the original creators or sources of free 3D objects?
  10. -

    You can credit the original creators or sources of free 3D objects by following their license and terms of use, which may include providing their name, website, link, or other information. You can also use tools or platforms that automatically generate citations or attributions for free 3D objects.

    -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Asphalt 7 Heat - The Ultimate Racing Game for Android with Mod APK and OBB Data (Unlimited Money and Cars).md b/spaces/congsaPfin/Manga-OCR/logs/Asphalt 7 Heat - The Ultimate Racing Game for Android with Mod APK and OBB Data (Unlimited Money and Cars).md deleted file mode 100644 index e81d18db32e2a4883ace5b4218c88af5d817ba07..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Asphalt 7 Heat - The Ultimate Racing Game for Android with Mod APK and OBB Data (Unlimited Money and Cars).md +++ /dev/null @@ -1,100 +0,0 @@ -
-

Download Asphalt 7 Mod APK: The Ultimate Racing Experience

-

If you are a fan of racing games, you must have heard of Asphalt 7, one of the most popular and thrilling games in the genre. Asphalt 7 is a game that lets you drive over 60 different cars from the world's most prestigious manufacturers, such as Ferrari, Lamborghini, Aston Martin, and more. You can race across the globe in various locations, such as Paris, London, Miami, Rio, etc. You can also compete with other players online or offline in various modes, such as career, quick play, multiplayer, etc.

-

However, if you want to enjoy the game to the fullest, you might need to spend some real money to unlock all the cars, tracks, and features. That's why many people look for a modded version of the game that gives them unlimited money and access to everything. In this article, we will show you how to download and install Asphalt 7 Mod APK, a modified version of the game that gives you unlimited money and unlock all cars. We will also tell you why you should choose this mod over the original game, and give you some tips and tricks to improve your racing skills.

-

download asphalt 7 mod apk


DOWNLOAD 🌟 https://urlca.com/2uO65o



-

What is Asphalt 7?

-

Asphalt 7 is a racing game developed by Gameloft, a famous company that produces many high-quality games for mobile devices. Asphalt 7 is the seventh entry in the Asphalt series, which started in 2004. The game was released in 2012 for iOS and Android devices, and later for Windows Phone and BlackBerry devices.

-

Features of Asphalt 7

-

Asphalt 7 has many features that make it one of the best racing games on the market. Some of these features are:

-
    -
  • A first-class lineup of over 60 cars from the world's most prestigious manufacturers, such as Ferrari, Lamborghini, Aston Martin, etc. You can also drive the legendary DeLorean from Back to the Future.
  • -
  • A global tour of over 15 tracks in various locations, such as Paris, London, Miami, Rio, etc. You can also race on new tracks that are added regularly.
  • -
  • A variety of game modes to suit your preferences and skills. You can play career mode, where you progress through different events and challenges. You can also play quick play mode, where you can choose any track and car you want. You can also play multiplayer mode, where you can race with up to 5 other players online or locally.
  • -
  • A lot of customization options for your car. You can upgrade your car's performance, such as speed, acceleration, handling, etc. You can also change your car's appearance, such as color, decals, rims, etc.
  • -
  • A lot of achievements and rewards to collect. You can earn stars by completing races and challenges. You can also earn cash by winning races and performing stunts. You can use these resources to buy new cars and upgrades.
  • -
  • A lot of fun and excitement. You can use nitro to boost your speed and perform stunts such as drifts, jumps, barrel rolls, etc. You can also use items such as rockets, magnets, shields, etc. to gain an advantage over your opponents.
  • -
-

How to download and install Asphalt 7 Mod APK?

-

If you want to download and install Asphalt 7 Mod APK on your Android device, you need to follow these steps:

-
    -
  1. First of all, you need to enable unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
  2. -
  3. Next, you need to download the modded APK file from a reliable source. You can find the link to the latest version of Asphalt 7 Mod APK here. Make sure you download the file from a trusted source, as some files may contain viruses or malware.
  4. -
  5. After you download the file, locate it in your device's file manager and tap on it to install it. You may need to grant some permissions to the app during the installation process.
  6. -
  7. Once the installation is complete, you can launch the game and enjoy the modded features. You don't need to create an account or sign in to play the game.
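Step 3 above tells you to download the APK only from a trusted source. One concrete way to check a download, sketched below with Python's standard library, is to compare its SHA-256 hash against a checksum published by the site you got it from. The file name and the expected hash here are placeholders, not real values for this game.

```python
# Sketch: verify a downloaded APK against a published SHA-256 checksum.
# "asphalt7_mod.apk" and EXPECTED_SHA256 are placeholders for illustration only.
import hashlib

EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of(path, chunk_size=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of("asphalt7_mod.apk")
print("OK" if actual == EXPECTED_SHA256 else f"Checksum mismatch: {actual}")
```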
  8. -
-

Why choose Asphalt 7 Mod APK?

-

Asphalt 7 Mod APK is a modified version of the original game that gives you unlimited money and unlocks all cars. This means that you can buy any car you want, upgrade it to the max, and race on any track you like. You don't have to worry about running out of money or being stuck with a slow car. You can also enjoy the game without any ads or interruptions. Moreover, you don't need to root your device to use this mod, as it works on any Android device.

-

Unlimited money and unlock all cars

-

The main feature of Asphalt 7 Mod APK is that it gives you unlimited money and unlocks all cars. This means that you can buy any car you want, upgrade it to the max, and race on any track you like. You don't have to worry about running out of money or being stuck with a slow car. You can also enjoy the game without any ads or interruptions. Moreover, you don't need to root your device to use this mod, as it works on any Android device.

-

No ads and no root required

-

Another feature of Asphalt 7 Mod APK is that it removes all the ads and pop-ups from the game. This means that you can enjoy the game without any distractions or annoyances. You don't have to watch any videos or click on any banners to earn extra cash or items. You can also play the game offline without any internet connection. Moreover, you don't need to root your device to use this mod, as it works on any Android device.

-

High-quality graphics and sound

-

The last feature of Asphalt 7 Mod APK is that it enhances the graphics and sound quality of the game. This means that you can enjoy the game with more realistic and stunning visuals and effects. You can also hear the roar of the engines and the screech of the tires more clearly and loudly. You can also adjust the graphics and sound settings according to your preference and device performance.

-

Tips and tricks for playing Asphalt 7 Mod APK

-

Now that you know how to download and install Asphalt 7 Mod APK, and why you should choose it over the original game, let's give you some tips and tricks to improve your racing skills and have more fun with the game.

-

download asphalt 7 heat mod apk unlimited money
-download asphalt 7 mod apk latest version
-download asphalt 7 mod apk for android
-download asphalt 7 mod apk offline
-download asphalt 7 mod apk data
-download asphalt 7 mod apk obb
-download asphalt 7 mod apk revdl
-download asphalt 7 mod apk andropalace
-download asphalt 7 mod apk highly compressed
-download asphalt 7 mod apk no root
-download asphalt 7 mod apk free shopping
-download asphalt 7 mod apk rexdl
-download asphalt 7 mod apk full unlocked
-download asphalt 7 mod apk android 1
-download asphalt 7 mod apk all cars unlocked
-download asphalt 7 mod apk unlimited stars
-download asphalt 7 mod apk pure
-download asphalt 7 mod apk hack
-download asphalt 7 mod apk mega
-download asphalt 7 mod apk vip
-download asphalt 7 mod apk mirror
-download asphalt 7 mod apk uptodown
-download asphalt 7 mod apk apkpure
-download asphalt 7 mod apk mali
-download asphalt 7 mod apk adreno
-download asphalt 7 heat mod apk + data for android
-download asphalt 7 heat mod apk + obb file
-download asphalt 7 heat mod apk + data offline
-download asphalt 7 heat mod apk + data highly compressed
-download asphalt 7 heat mod apk + data latest version
-download asphalt 7 heat mod apk + data revdl
-download asphalt 7 heat mod apk + data rexdl
-download asphalt 7 heat mod apk + data andropalace
-download asphalt 7 heat mod apk + data mega
-download asphalt 7 heat mod apk + data mali
-download asphalt 7 heat mod apk + data adreno
-how to download and install asphalt 7 mod apk on android device
-how to download and play asphalt 7 mod apk offline on android device
-how to download and update asphalt 7 mod apk to latest version on android device
-how to fix lag and crash issues in asphalt 7 mod apk on android device

-

Choose the right car for each track

-

One of the most important things to consider when playing Asphalt 7 Mod APK is choosing the right car for each track. Different cars have different strengths and weaknesses, such as speed, acceleration, handling, etc. You should choose a car that suits the track's layout, weather, and difficulty level. For example, if you are racing on a snowy track, you should choose a car with good traction and stability. If you are racing on a city track, you should choose a car with good maneuverability and braking.

-

Use nitro wisely and perform stunts

-

Another thing to consider when playing Asphalt 7 Mod APK is using nitro wisely and performing stunts. Nitro is a boost that increases your speed and helps you overtake your opponents. However, nitro is limited and needs to be refilled by performing stunts such as drifts, jumps, barrel rolls, etc. You should use nitro when you need an extra burst of speed, such as when you are behind or when you are approaching a finish line. You should also perform stunts whenever possible, as they not only refill your nitro but also increase your score and cash.

-

Upgrade your car and customize it

-

The last thing to consider when playing Asphalt 7 Mod APK is upgrading your car and customizing it. Upgrading your car improves its performance, such as speed, acceleration, handling, etc. You can upgrade your car by spending cash that you earn by winning races and performing stunts. You can also customize your car by changing its appearance, such as color, decals, rims, etc. You can customize your car by spending stars that you earn by completing races and challenges.

-

Conclusion

-

Summary of the article

-

In this article, we have shown you how to download and install Asphalt 7 Mod APK, a modified version of the game that gives you unlimited money and unlock all cars. We have also told you why you should choose this mod over the original game, and given you some tips and tricks to improve your racing skills and have more fun with the game. Asphalt 7 Mod APK is a game that lets you drive over 60 different cars from the world's most prestigious manufacturers, such as Ferrari, Lamborghini, Aston Martin, and more. You can race across the globe in various locations, such as Paris, London, Miami, Rio, etc. You can also compete with other players online or offline in various modes, such as career, quick play, multiplayer, etc. Asphalt 7 Mod APK is a game that gives you unlimited money and unlock all cars. This means that you can buy any car you want, upgrade it to the max, and race on any track you like. You don't have to worry about running out of money or being stuck with a slow car. You can also enjoy the game without any ads or interruptions. Moreover, you don't need to root your device to use this mod, as it works on any Android device.

-

FAQs

-

Here are some frequently asked questions about Asphalt 7 Mod APK:

-
    -
  • Is Asphalt 7 Mod APK safe to use?
    -Yes, Asphalt 7 Mod APK is safe to use, as long as you download it from a reliable source. However, you should always be careful when downloading and installing any modded app, as some files may contain viruses or malware. You should also scan your device regularly with an antivirus app.
  • -
  • Is Asphalt 7 Mod APK compatible with my device?
    -Asphalt 7 Mod APK is compatible with any Android device that runs on Android 4.0 or higher. However, some devices may not support the high-quality graphics and sound of the game. You can adjust the graphics and sound settings according to your device performance.
  • -
  • Can I play Asphalt 7 Mod APK online with other players?
    -Yes, you can play Asphalt 7 Mod APK online with other players who have the same modded version of the game. However, you may not be able to play online with players who have the original version of the game. You can also play offline with other players who are nearby using Wi-Fi or Bluetooth.
  • -
  • Can I update Asphalt 7 Mod APK to the latest version?
    -Yes, you can update Asphalt 7 Mod APK to the latest version by downloading and installing the new modded APK file from the same source. However, you may lose your progress and data if you uninstall the previous version of the game. You can backup your data using a cloud service or a file manager app.
  • -
  • Can I request a new feature or report a bug for Asphalt 7 Mod APK?
    -Yes, you can request a new feature or report a bug for Asphalt 7 Mod APK by contacting the developer of the mod. You can find their contact information on their website or social media accounts. However, they may not respond to your request or fix the bug immediately.
  • -

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Crafting and Building A Free Sandbox Game with Endless Possibilities.md b/spaces/congsaPfin/Manga-OCR/logs/Crafting and Building A Free Sandbox Game with Endless Possibilities.md deleted file mode 100644 index 442579f47f022a154baa43af6b277891d9fb230b..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Crafting and Building A Free Sandbox Game with Endless Possibilities.md +++ /dev/null @@ -1,160 +0,0 @@ - -

Free Download Crafting and Building: A Guide for Beginners

-

Do you like building games? Do you want to unleash your creativity and imagination? Do you want to have fun with your friends in a virtual world? If you answered yes to any of these questions, then you might want to try Crafting and Building, a new free building game that lets you do all that and more. In this article, we will tell you everything you need to know about this game, from what it is, how to download it, how to play it, and some tips and tricks to make your experience even better.

-

What is Crafting and Building?

-

A free game for creative minds

-

Crafting and Building is a game that allows you to create your own world using different types of blocks. You can build anything you can imagine, from houses, castles, farms, cities, to monuments, sculptures, and even pixel art. You can also explore the world, interact with animals and villagers, and play with your friends online. The game is inspired by Minecraft, but it has its own style and features.

-

free download crafting and building


Download »»» https://urlca.com/2uO5yU



-

Features of Crafting and Building

-

Some of the features that make Crafting and Building an enjoyable game are:

-
    -
  • Perfect game for the whole family: boys and girls of all ages will love it.
  • -
  • Cool game: search for hidden caves, dungeons, and secrets with your friends. Multiplayer mode is cool!
  • -
  • Build anything: house with a room and a kitchen? A castle? A spaceship? You decide!
  • -
  • One of the best simulation games: start building your house and meet your neighbors.
  • -
  • Choose your character: boy or girl? Custom skin? You can change your appearance anytime.
  • -
  • Multiplayer games: you can play online and help your friend to build their house or compete with them.
  • -
  • Fun game: play with villagers and animals. It is so fun!
  • -
  • Cool graphics: enjoy the best pixel graphics with high fps.
  • -
  • Free game: play the game for free! No hidden fees or subscriptions.
  • -
  • Building game: build your own constructions. Who will have the best building?
  • -
-

How to Download Crafting and Building?

-

Choose your platform

-

Crafting and Building is available for various platforms, including Windows PC, Android, iOS, Nintendo Switch, PlayStation 4, PlayStation 5, Xbox One, Xbox Series X/S. You can choose the one that suits you best.

-

Follow the instructions

-

To download Crafting and Building, you need to follow these steps:

-
    -
  1. Go to the official website of Crafting and Building or search for it on your platform's store.
  2. -
  3. Select the version that matches your platform.
  4. -
  5. Click on the download button or install button.
  6. -
  7. Wait for the download or installation to finish.
  8. -
  9. Launch the game and enjoy!
  10. -
-

How to Play Crafting and Building?

-

Learn the basics

-

The first thing you need to do when you start playing Crafting and Building is to learn the basics. You will need to gather some wood blocks by punching trees.

Then, you will need to craft a crafting table by placing four wood blocks in a square in your inventory. You can use the crafting table to make more advanced items, such as tools, weapons, armor, and furniture. You will also need to find some food, such as apples, carrots, or meat, to replenish your hunger and health. You can cook food in a furnace, which you can craft with cobblestone.
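The 2x2 "four blocks in a square" recipe is easy to picture as data. The toy sketch below is not the game's actual code, just a hand-rolled illustration of how such a crafting check could work.

```python
# Toy illustration (not the game's real logic): check a 2x2 crafting grid
# against a small recipe book and return what it produces.

RECIPES = {
    # A 2x2 square of wood blocks yields a crafting table.
    (("wood", "wood"),
     ("wood", "wood")): "crafting_table",
}

def craft(grid):
    """grid is a 2x2 tuple of tuples of item names (or None for empty slots)."""
    return RECIPES.get(grid, None)

player_grid = (("wood", "wood"),
               ("wood", "wood"))
print(craft(player_grid))           # -> crafting_table
print(craft((("wood", None),
             (None, "wood"))))      # -> None, recipe not matched
```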

-

Explore the world

-

Once you have the basic items, you can start exploring the world of Crafting and Building. You will find different biomes, such as forests, deserts, mountains, oceans, and snowlands. Each biome has its own features, such as plants, animals, resources, and structures. You can also discover caves, dungeons, temples, villages, and other secrets. Be careful of hostile mobs, such as zombies, skeletons, spiders, and creepers. They will attack you at night or in dark places. You can fight them with your weapons or avoid them by running away or hiding.

-

Build your dream house

-

The main attraction of Crafting and Building is to build your own house or anything else you can imagine. You can use different types of blocks and items to create your own design. You can also decorate your house with paintings, carpets, flowers, and other items. You can make your house more functional by adding doors, windows, beds, chests, bookshelves, and other items. You can also make some redstone contraptions, such as lamps, switches, pistons, and dispensers. Here is an example of a simple house you can build:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Play with friends

-

Crafting and Building is more fun when you play with your friends online. You can join an existing server or create your own one. You can chat with other players, share your creations, help each other out, or compete in mini-games. You can also customize your character with different skins and outfits. You can download skins from the internet or create your own ones. To play with friends online, you need to follow these steps:

-

free game crafting and building 2020
-crafting and building app free download
-crafting and building for kids free download
-crafting and building online multiplayer free download
-crafting and building sandbox game free download
-crafting and building apk free download
-crafting and building on pc free download
-crafting and building mod apk free download
-crafting and building adventure game free download
-crafting and building offline game free download
-free download crafting and building for android
-free download crafting and building for windows 10
-free download crafting and building for mac
-free download crafting and building for ios
-free download crafting and building for chromebook
-how to download crafting and building for free
-where to download crafting and building for free
-best site to download crafting and building for free
-safe way to download crafting and building for free
-easy way to download crafting and building for free
-free download of crafting and building latest version
-free download of crafting and building old version
-free download of crafting and building update
-free download of crafting and building hack
-free download of crafting and building cheats
-learn how to craft and build for free download
-tips and tricks for crafting and building free download
-guide for crafting and building free download
-tutorial for crafting and building free download
-walkthrough for crafting and building free download
-play crafting and building online for free no download
-play crafting and building offline for free no download
-play crafting and building with friends for free no download
-play crafting and building with pets for free no download
-play crafting and building with villagers for free no download
-create your own world in crafting and building free download
-explore different worlds in crafting and building free download
-customize your character in crafting and building free download
-design your house in crafting and building free download
-build your castle in crafting and building free download
-craft your tools in crafting and building free download
-craft your weapons in crafting and building free download
-craft your armor in crafting and building free download
-craft your furniture in crafting and building free download
-craft your clothes in crafting and building free download
-enjoy the best graphics in crafting and building free download
-enjoy the best music in crafting and building free download
-enjoy the best gameplay in crafting and building free download
-enjoy the best simulation in crafting and building free download

1. Select multiplayer mode from the main menu.
2. Choose whether you want to join or create a server.
3. If you want to join a server, enter the server name or IP address and click join.
4. If you want to create a server, enter a server name and password and click create.
5. Invite your friends to join your server or find other players online.
6. Enjoy playing together!

FAQs

Here are some frequently asked questions about Crafting and Building:

• Q: Is Crafting and Building safe to download and play?
  A: Yes, Crafting and Building is safe to download and play. The game does not contain any viruses, malware, or inappropriate content. However, you should always be careful when downloading anything from the internet and make sure you have reliable antivirus software installed on your device.
• Q: How can I update Crafting and Building to the latest version?
  A: To update Crafting and Building to the latest version, go to the official website of the game or your platform's store and check whether a new update is available. If there is, download and install it by following the instructions. Keeping the game updated lets you enjoy the new features and bug fixes.
• Q: How can I contact the developers of Crafting and Building?
  A: If you have any questions, suggestions, feedback, or issues with Crafting and Building, you can contact the developers by sending an email to craftingandbuilding@gmail.com. You can also follow them on their social media accounts, such as Facebook, Twitter, Instagram, or YouTube.
• Q: Can I play Crafting and Building offline?
  A: Yes, you can play Crafting and Building offline. You can create your own world and play it without an internet connection. However, if you want to play online with your friends or access online features such as skins or servers, you will need an internet connection.
• Q: Can I share my creations with other players?
  A: Yes, you can share your creations with other players. You can upload your world to the online gallery and let other players see it and rate it. You can also download other players' worlds and play them on your device, and share screenshots or videos of your creations on social media or other platforms.
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Facebook and Messenger APKs for Xiaomi Phones How to Install and Use.md b/spaces/congsaPfin/Manga-OCR/logs/Facebook and Messenger APKs for Xiaomi Phones How to Install and Use.md deleted file mode 100644 index caff45a266586200f90035999cc2128ca3f365c8..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Facebook and Messenger APKs for Xiaomi Phones How to Install and Use.md +++ /dev/null @@ -1,135 +0,0 @@ -
-

Facebook APK Redmi 7A: A Guide for Users

-

If you are looking for a budget-friendly smartphone that can run Facebook smoothly, you might want to consider the redmi 7a. This device comes with a pre-installed facebook apk that allows you to access the social network without any hassle. But what is facebook apk redmi 7a and why is it popular among users? In this article, we will answer these questions and more. We will also explore the features, benefits, specifications, and reviews of facebook apk redmi 7a. Read on to find out more.

-

facebook apk redmi 7a


Download ===== https://urlca.com/2uOfQb



-

What is facebook apk redmi 7a?

-

Facebook apk redmi 7a is a combination of two terms: facebook apk and redmi 7a. Let's break them down one by one.

-

Facebook apk

-

Facebook apk is a file format that contains the application software for Facebook. It is used to install or update Facebook on Android devices. You can download facebook apk from various sources, such as the official website, Google Play Store, or third-party websites. However, you need to be careful when downloading from untrusted sources, as they may contain malware or viruses.
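To make the install step more concrete, here is a minimal sketch of sideloading a downloaded APK from a computer using adb (Android Debug Bridge). It assumes adb is installed and on your PATH, USB debugging is enabled on the phone, and that "facebook.apk" is just a placeholder name for the file you downloaded from a trusted source; this is not an official install method from Facebook or Xiaomi.

```python
# Sketch only: assumes adb is on PATH and USB debugging is enabled on the device.
# "facebook.apk" is a placeholder file name, not an official download.
import subprocess

APK_PATH = "facebook.apk"

def sideload_apk(apk_path: str) -> None:
    # Show connected devices so you can confirm the phone is visible to adb.
    subprocess.run(["adb", "devices"], check=True)
    # The -r flag reinstalls (updates) the app while keeping its existing data.
    subprocess.run(["adb", "install", "-r", apk_path], check=True)

if __name__ == "__main__":
    sideload_apk(APK_PATH)
```

On the phone itself, you can instead simply open the downloaded APK file and allow installs from that source when prompted.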

-

Redmi 7a

-

Redmi 7a is a smartphone model from Xiaomi, a Chinese electronics company. It was released in July 2019 as a successor to the redmi 6a. It has a 5.45-inch display, a Snapdragon 439 processor, a 13-megapixel rear camera, a 5-megapixel front camera, a 4000 mAh battery, and a dual-SIM slot. It runs on Android 9 Pie with MIUI 9 software. It comes in two variants: one with 16 GB of internal storage and 2 GB of RAM, and another with 32 GB of internal storage and either 2 GB or 3 GB of RAM.

-

Why is facebook apk redmi 7a popular among users?

-

Facebook apk redmi 7a is popular among users because it offers a fast and convenient way to access Facebook on a low-cost device. Users can enjoy the following features of facebook apk redmi 7a:

-

facebook lite apk for redmi 7a
-how to download facebook on xiaomi phones
-facebook app for android redmi 7a
-facebook messenger apk for redmi 7a
-facebook apk download for xiaomi redmi 7a
-facebook lite for xiaomi redmi 7a
-how to install facebook on redmi 7a
-facebook app update for redmi 7a
-facebook apk latest version for redmi 7a
-facebook lite vs facebook app for redmi 7a
-how to fix facebook not working on redmi 7a
-facebook video downloader apk for redmi 7a
-facebook mod apk for xiaomi redmi 7a
-facebook dark mode apk for redmi 7a
-facebook apk old version for redmi 7a
-how to uninstall facebook on xiaomi phones
-facebook app settings for redmi 7a
-facebook notifications not working on redmi 7a
-facebook stories apk for xiaomi redmi 7a
-facebook beta apk for redmi 7a
-how to clear facebook cache on redmi 7a
-facebook gameroom apk for xiaomi redmi 7a
-facebook business suite apk for redmi 7a
-facebook dating apk for xiaomi redmi 7a
-how to change facebook language on redmi 7a
-facebook creator studio apk for redmi 7a
-facebook watch apk for xiaomi redmi 7a
-how to disable facebook on redmi 7a
-facebook pages manager apk for xiaomi redmi 7a
-how to enable facebook live on redmi 7a
-facebook groups apk for xiaomi redmi 7a
-how to block facebook ads on redmi 7a
-facebook marketplace apk for xiaomi redmi 7a
-how to sync facebook contacts on redmi 7a
-facebook analytics apk for xiaomi redmi 7a
-how to hide online status on facebook on redmi 7a
-facebook avatar maker apk for xiaomi redmi 7a
-how to change profile picture on facebook on redmi 7a
-facebook ads manager apk for xiaomi redmi 7a
-how to delete facebook account on redmi 7a
-facebook events apk for xiaomi redmi 7a
-how to recover deleted messages on facebook on redmi 7a
-facebook local apk for xiaomi redmi 7a
-how to use stickers on facebook on redmi 7a
-facebook community network apk for xiaomi redmi 7a
-how to tag someone on facebook on redmi 7a

-

Facebook Lite

-

Facebook Lite is a version of Facebook that uses less data and works in all network conditions. It is smaller in size, faster to load, and easier to use. It also has most of the features of the regular Facebook app, such as posting status updates, sharing photos and videos, liking and commenting on posts, chatting with friends, joining groups, and watching stories.

-

Facebook Messenger

-

Facebook Messenger is an app that lets you send messages, photos, videos, voice notes, stickers, emojis, GIFs, and more to your Facebook contacts. You can also make free voice and video calls, create group chats, play games, send money, and use bots. You can also access Messenger from within the Facebook Lite app.

-

Facebook News Feed

-

Facebook News Feed is where you can see the latest updates from your friends, pages you follow, groups you join, and topics you care about. You can also discover new content based on your interests and preferences. You can customize your News Feed by hiding or snoozing posts you don't want to see, prioritizing posts from certain people or pages, saving posts for later, or exploring different tabs such as Watch, Marketplace, Gaming, or Dating.

-

Facebook Live

Facebook Live lets you broadcast live video to your friends or the public straight from the app, and watch live streams from the people and pages you follow.

Facebook Portal

-

Facebook Portal is a device that lets you make video calls with your Facebook friends and contacts. It has a smart camera that follows your movements and adjusts the frame automatically. It also has a smart sound feature that enhances your voice and reduces background noise. You can use Facebook Portal to watch videos, listen to music, play games, and use Alexa skills. You can also connect it to your TV with Portal TV and enjoy a larger screen experience.

-

Benefits of facebook apk redmi 7a

-

By using facebook apk redmi 7a, you can enjoy the following benefits:

-

Saves data and storage space

-

Facebook apk redmi 7a uses less data than the regular Facebook app, which means you can save money on your mobile plan. It also takes up less storage space on your device, which means you can have more room for other apps and files.

-

Works on old and low-end devices

-

Facebook apk redmi 7a is designed to work on devices that have low specifications or are outdated. It can run smoothly on 2G networks and areas with poor internet connectivity. It can also support older versions of Android, such as Android 4.1 or higher.

-

Connects with friends and family easily

-

Facebook apk redmi 7a lets you stay in touch with your friends and family on Facebook, no matter where they are. You can send messages, make calls, share photos and videos, and join groups. You can also see what they are up to on their timelines and stories.

-

Provides entertainment and information

-

Facebook apk redmi 7a gives you access to a variety of content that can entertain and inform you. You can watch videos from Facebook Watch, browse news from Facebook News Feed, play games from Facebook Gaming, shop from Facebook Marketplace, and date from Facebook Dating.

-

Supports interactive online chat and video calls

-

Facebook apk redmi 7a enables you to have fun and engaging conversations with your contacts. You can use Facebook Messenger to chat with text, voice, video, stickers, emojis, GIFs, and more. You can also use Facebook Live to broadcast live video to your friends or the public. You can also use Facebook Portal to make video calls with a smart camera and sound.

-

Specifications and reviews of redmi 7a

-

If you are interested in buying the redmi 7a device, you might want to know more about its specifications and reviews. Here are some of the key features and user opinions of the redmi 7a:

-

Display, design, and battery

-

The redmi 7a has a 5.45-inch HD+ display with a resolution of 720 x 1440 pixels and an aspect ratio of 18:9. It has a plastic body with a matte finish and a splash-resistant coating. It has a single rear camera and a front-facing camera on the top bezel. It has a 4000 mAh battery that supports 10W charging and can last up to two days on normal usage.

-

Performance, memory, and camera

-

The redmi 7a is powered by a Snapdragon 439 octa-core processor that can handle basic tasks and games smoothly. It has either 2 GB or 3 GB of RAM and either 16 GB or 32 GB of internal storage that can be expanded up to 256 GB with a microSD card. It has a 13-megapixel rear camera with an LED flash and a Sony IMX486 sensor that supports AI scene detection and phase detection autofocus. It has a 5-megapixel front camera that supports AI beautify and face unlock.

-

Pros and cons of redmi 7a

-

The redmi 7a is a budget-friendly smartphone that offers some advantages and disadvantages. Here are some of the pros and cons of the redmi 7a:

-
Pros | Cons
Affordable price | No fingerprint scanner
Long-lasting battery | No fast charging
Splash-resistant design | No notch or full-screen display
Decent performance | No dual camera or portrait mode
Good rear camera quality | Average front camera quality
-

User opinions and ratings of redmi 7a

-

To get a better idea of how the redmi 7a performs in real life, you might want to check out some of the user reviews and ratings from various sources. Here are some of the excerpts from the web search results:

-
• "The best phone I ever had 😍 I loved it so much. And satisfied with it."
• "With a relatively petite frame, great battery life, a sturdy design and an attractive price point, the Redmi 7A is quite a steal. But with limited RAM, divisive software and an iffy camera, it isn't an instant recommendation."
• "The Redmi 7A is cheap but nonetheless cheerful in a couple of key areas – particularly the battery, but also the screen and build quality. However, its underwhelming internal specs make it a poor choice for anyone who wants to do more than the basics with their phone."

Conclusion

-

The facebook apk redmi 7a is a good option for users who want to enjoy Facebook on a budget-friendly device. It has a pre-installed facebook apk that offers various features, such as Facebook Lite, Facebook Messenger, Facebook News Feed, Facebook Live, and Facebook Portal. It also has some benefits, such as saving data and storage space, working on old and low-end devices, connecting with friends and family easily, providing entertainment and information, and supporting interactive online chat and video calls. However, the redmi 7a device itself has some limitations, such as not enough RAM, bloated software, no fingerprint scanner, no fast charging, no notch or full-screen display, no dual camera or portrait mode, and average front camera quality. The user opinions and ratings of the redmi 7a are mixed, with some praising its battery life, performance, and rear camera quality, and others criticizing its software, camera, and display. Therefore, we recommend the facebook apk redmi 7a for users who are looking for a simple and reliable smartphone that can run Facebook smoothly, but not for users who are looking for a more advanced and versatile smartphone that can handle more demanding tasks and games.

-

FAQs

-

Here are some of the frequently asked questions about facebook apk redmi 7a:

-

Q: How can I download facebook apk redmi 7a?

-

A: You can download facebook apk redmi 7a from various sources, such as the official website, Google Play Store, or third-party websites. However, you need to be careful when downloading from untrusted sources, as they may contain malware or viruses.

-

Q: How can I update facebook apk redmi 7a?

-

A: You can update facebook apk redmi 7a by checking for updates in the app settings or by downloading the latest version from the official website or Google Play Store.

-

Q: How can I uninstall facebook apk redmi 7a?

-

A: You can uninstall facebook apk redmi 7a by going to the app settings and tapping on uninstall or by going to the device settings and tapping on apps and notifications.

-

Q: How can I contact facebook apk redmi 7a support?

-

A: You can contact facebook apk redmi 7a support by visiting the help center or by sending feedback in the app settings.

-

Q: How can I report a problem with facebook apk redmi 7a?

-

A: You can report a problem with facebook apk redmi 7a by going to the app settings and tapping on report a problem or by going to the help center and tapping on report a problem.

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Ra Ra Reddy (Hindi) Video Song - Macharla Chunaav Kshetra (M.C.K) - Watch the Remake of Ranu Ranu Song.md b/spaces/congsaPfin/Manga-OCR/logs/Ra Ra Reddy (Hindi) Video Song - Macharla Chunaav Kshetra (M.C.K) - Watch the Remake of Ranu Ranu Song.md deleted file mode 100644 index efd3a663d9e55f726b168f6f91b25ee712da8dd7..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Ra Ra Reddy (Hindi) Video Song - Macharla Chunaav Kshetra (M.C.K) - Watch the Remake of Ranu Ranu Song.md +++ /dev/null @@ -1,130 +0,0 @@ - -

Ra Ra Reddy Song Download: A Peppy Party Number from Macherla Niyojakavargam

-

If you are looking for a catchy and energetic song to groove to, then you should check out Ra Ra Reddy I'm Ready, a peppy party number from the Telugu movie Macherla Niyojakavargam. The song features Nithiin and Anjali in a festive mood, dancing to the upbeat tunes composed by Mahathi Swara Sagar. The song is sung by Lipsika and Aditya Iyengar, with lyrics written by Kasarla Shyam and Kulasekar.

-

Macherla Niyojakavargam is an action-drama film directed by M.S Raja Shekhar Reddy and produced by Sreshth Movies. The film stars Nithiin as Siddharth Reddy, an IAS officer who takes charge as the collector of Guntur district. He faces challenges from a local goon who has not let elections happen in Macherla. Krithi Shetty and Catherine Tresa play the female leads in the film. The film was released on 12 August 2022 and received mixed reviews from critics.

-

ra ra reddy song download


DOWNLOAD: https://urlca.com/2uOgaN



-

Song Review

-

Ra Ra Reddy I'm Ready is a remake of the popular song Ranu Ranu Antune Chinnado from Nithiin's debut film Jayam (2002). The song has been given a fresh twist by Mahathi Swara Sagar, who has added some catchy beats and electronic sounds to make it more appealing to the current generation. The song has a festive vibe, with drums, flutes, saxophones, and keyboards creating a lively atmosphere.

-

The lyrics of the song are catchy and witty, with some double entendres and spicy dialogues. The song also has some references to Nithiin's previous films like Bheeshma and Maestro. The singers Lipsika and Aditya Iyengar have done a commendable job in delivering the vocals with energy and enthusiasm. Lipsika especially stands out with her husky voice and attitude.

-

The video of the song is equally vibrant and colorful, with Nithiin and Anjali showing off their dancing skills and chemistry. The choreography by Jani master is impressive, with some innovative moves and formations. Nithiin looks dashing in a black shirt and denim, while Anjali looks stunning in a desi avatar. The video also has some cameo appearances by Samuthirakani, Rajendra Prasad, Vennela Kishore, Murali Sharma, Jayaprakash, Indraja, Subhalekha Sudhakar, Brahmaji.

-

The song has received a positive response from the audience and critics alike, who have praised its music, lyrics, vocals, video, choreography, and chemistry. The song has crossed over 50 million views on YouTube and has become one of the most popular songs of 2022.

-

Song Lyrics

-

Here are the Telugu lyrics of Ra Ra Reddy I'm Ready along with their English translation:

- - - - - - - - - - - - - - - - - - -
Telugu (Verse 1):
Aa macherla center lo mapatella nenosthe
Sandamama sanduloki vachenantare
Masaka masaka winter lo paiyta nenu jaaristhe
Summer lo ice cream lanti nee navvu chusthe
Ra ra reddy I'm ready
Ra ra reddy I'm ready

English (Verse 1):
If I become the leader of all the parties in Macherla center
The moon will come down to the sand
If I wear a sweater in winter and walk around
Your smile is like ice cream in summer
Ra ra reddy I'm ready
Ra ra reddy I'm ready

Telugu (Verse 2):
Nee kallu chupisthe nenu kallu moosukunta
Nee muddu pettisthe nenu muddu vippukunta
Nee chethi pattisthe nenu chethi vadhulukunta
Nee peru pilichisthe nenu peru marchukunta
Ra ra reddy I'm ready
Ra ra reddy I'm ready

English (Verse 2):
If you show me your eyes, I close my eyes
If you give me a kiss, I return a kiss
If you hold my hand, I let go of my hand
If you call my name, I change my name
Ra ra reddy I'm ready
Ra ra reddy I'm ready

Telugu (Verse 3):
Nee kosam wait chesthunna nee kosam wait chesthunna
Nee kosam wait chesthunna nee kosam wait chesthunna
Nee kosam wait chesthunna nee kosam wait chesthunna
Nee kosam wait chesthunna nee kosam wait chesthunna
Ra ra reddy I'm ready
Ra ra reddy I'm ready

English (Verse 3):
I'm waiting for you, I'm waiting for you
I'm waiting for you, I'm waiting for you
I'm waiting for you, I'm waiting for you
I'm waiting for you, I'm waiting for you
Ra ra reddy I'm ready
Ra ra reddy I'm ready
-

Song Download

-

If you want to download or stream Ra Ra Reddy I'm Ready song, you can do so legally and safely from various platforms. Here are some of the platforms where you can find the song and their features:

- - - - - - - - - - - - - - - - - - - - - - - - - - -
YouTube Music:
- Free and premium options available
- Access to millions of songs and videos
- Offline download and background play
- Personalized recommendations and playlists
- Ad-free and high-quality audio with premium subscription

Spotify:
- Free and premium options available
- Access to millions of songs and podcasts
- Offline download and ad-free listening with premium subscription
- Personalized recommendations and playlists
- High-quality audio and cross-device compatibility

Apple Music:
- Subscription-based service with a free trial
- Access to millions of songs and exclusive content
- Offline download and ad-free listening
- Personalized recommendations and playlists
- High-quality audio and integration with Apple devices

JioSaavn:
- Free and premium options available
- Access to millions of songs in various languages
- Offline download and ad-free listening with premium subscription
- Personalized recommendations and playlists
- High-quality audio and cross-device compatibility

Gaana:
- Free and premium options available
- Access to millions of songs in various languages
- Offline download and ad-free listening with premium subscription
- Personalized recommendations and playlists
- High-quality audio and cross-device compatibility
-

Conclusion

-

Ra Ra Reddy I'm Ready is a fun and catchy song that will make you want to dance along. The song has a festive vibe, with lively music, witty lyrics, and energetic vocals. The video of the song is also colorful and vibrant, with Nithiin and Anjali showing off their chemistry and dancing skills. The song is a remake of a classic hit from Nithiin's debut film Jayam, but it has been given a fresh twist by Mahathi Swara Sagar. The song has received a lot of love from the audience and critics, who have praised its music, lyrics, vocals, video, choreography, and chemistry. If you are looking for a peppy party number to groove to, then you should definitely check out Ra Ra Reddy I'm Ready song from Macherla Niyojakavargam. You can download or stream the song from various platforms legally and safely. We hope you enjoyed this article and found it helpful. If you have any questions or feedback, please feel free to leave a comment below.

-

ra ra reddy i am ready song download from macherla niyojakavargam
-lipsika and aditya iyengar ra ra reddy song mp3 download
-ra ra reddy full video song youtube
-ranu ranu song remake by r.p.patnaik as ra ra reddy
-ra ra reddy hindi version video song macharla chunaav kshetra
-nithiin and krithi shetty in ra ra reddy song
-how to download ra ra reddy song from jiosaavn
-ra ra reddy song lyrics in telugu and english
-mahathi swara sagar music for ra ra reddy song
-kulasekhar and kasarla shyam lyrics for ra ra reddy song
-watch online ra ra reddy song from macherla niyojakavargam movie
-best deals on macherla niyojakavargam movie tickets to watch ra ra reddy song
-reviews and ratings of ra ra reddy song by critics and audience
-behind the scenes of ra ra reddy song shooting
-making of ra ra reddy song video with nithiin and krithi shetty
-dance steps and choreography of ra ra reddy song
-remix and mashup versions of ra ra reddy song by dj's and artists
-reaction videos of fans and celebrities to ra ra reddy song
-memes and jokes on ra ra reddy song and its lyrics
-trivia and facts about ra ra reddy song and its singers
-comparison of ra ra reddy song with ranu ranu song from jayam movie
-history and origin of the phrase "ra ra reddy i am ready"
-meaning and significance of the word "reddy" in telugu culture and politics
-popularity and trends of the keyword "ra ra reddy song download" on google and bing
-analysis and insights of the keyword "ra ra reddy song download" on semrush and ahrefs

-

FAQs

-

Here are some of the frequently asked questions and their answers about Ra Ra Reddy I'm Ready song:

-

Q: Who are the singers of Ra Ra Reddy I'm Ready song?

-

A: The singers of Ra Ra Reddy I'm Ready song are Lipsika and Aditya Iyengar.

-

Q: Who are the composers and lyricists of Ra Ra Reddy I'm Ready song?

-

A: The composer of the Ra Ra Reddy I'm Ready song is Mahathi Swara Sagar, and the lyricists are Kasarla Shyam and Kulasekar.

-

Q: Which movie is Ra Ra Reddy I'm Ready song from?

-

A: Ra Ra Reddy I'm Ready song is from the Telugu movie Macherla Niyojakavargam, starring Nithiin, Krithi Shetty, Catherine Tresa, and Anjali.

-

Q: Is Ra Ra Reddy I'm Ready song a remake of another song?

-

A: Yes, Ra Ra Reddy I'm Ready song is a remake of the popular song Ranu Ranu Antune Chinnado from Nithiin's debut film Jayam (2002).

-

Q: How many views does Ra Ra Reddy I'm Ready song have on YouTube?

-

A: As of 21 June 2023, Ra Ra Reddy I'm Ready song has over 50 million views on YouTube.

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Zoo 2 Animal Park MOD APK Latest Version - Fun Easy and Addictive.md b/spaces/congsaPfin/Manga-OCR/logs/Zoo 2 Animal Park MOD APK Latest Version - Fun Easy and Addictive.md deleted file mode 100644 index e2cebbce5de6c078f841cc5eebc03fdfef29d531..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Zoo 2 Animal Park MOD APK Latest Version - Fun Easy and Addictive.md +++ /dev/null @@ -1,99 +0,0 @@ - -

Zoo 2 Animal Park Mod APK Latest Version: A Review

-

If you love animals and tycoon games, you might want to check out Zoo 2 Animal Park, a popular simulation game by upjers GmbH. In this game, you can build your own zoo, take care of various animals, breed cute babies, and expand your park. You can also enjoy a gripping story, stunning graphics, and regular updates. But what if you want to have more resources, more freedom, and more fun? You might be interested in the mod APK version of Zoo 2 Animal Park, which offers unlimited gold coins and other benefits. In this article, we will review the mod APK version of Zoo 2 Animal Park, explain how to download and install it, and share some tips and tricks for playing the game.

-

What is Zoo 2 Animal Park?

-

A zoo simulation game with a captivating story and cute animals

-

Zoo 2 Animal Park is a game that lets you slip into the role of a zoo director. You inherit a small zoo from your late great aunt Josephine, who loved animals dearly. However, the mayor wants to shut down the zoo and build a shopping mall instead. You have to save the zoo by attracting more visitors and proving its worth. Along the way, you will meet various characters, complete tasks and quests, and uncover secrets.

-

zoo 2 animal park mod apk latest version


DOWNLOAD: https://urlca.com/2uOa70



-

But Zoo 2 Animal Park is not just about saving the zoo. It's also about creating your own animal paradise. You can choose from hundreds of animals, from domestic to exotic, from rabbits to lions. You can feed them, play with them, pet them, and watch them interact with each other. You can also breed adorable baby animals and collect rare coat colors.

-

A tycoon game with a wide range of features and customization options

-

Zoo 2 Animal Park is also a tycoon game that challenges you to manage your zoo efficiently and effectively. You have to earn money by attracting visitors, selling tickets, running shops, and completing quests. You can use your money to buy new animals, enclosures, decorations, buildings, plants, paths, and more. You can also unlock new items by leveling up.

-

Zoo 2 Animal Park gives you a lot of freedom to design your zoo according to your preferences. You can rotate the view 360 degrees and zoom in and out continuously. You can place items anywhere you want as long as there is enough space. You can customize each enclosure with different types of fences, floors, plants, toys, feeders, shelters, etc. You can also decorate your zoo with various themes such as Halloween or Christmas.

-

Take care of your animals and decorate your zoo to attract more visitors

-

Another important aspect of playing Zoo 2 Animal Park is to take care of your animals and decorate your zoo. You have to feed your animals regularly, clean their enclosures, and treat them when they are sick. You also have to provide them with enough space, shelter, toys, and plants. By taking care of your animals, you can increase their happiness and loyalty, which will attract more visitors to your zoo.

-

You can also decorate your zoo with various items that will make it more appealing and unique. You can choose from different themes, such as jungle, savanna, or arctic. You can also place paths, fences, signs, benches, lamps, fountains, statues, and more. By decorating your zoo, you can increase its beauty and popularity, which will also attract more visitors to your zoo.

-

Breed adorable baby animals and collect rare coat colors

-

One of the most fun features of Zoo 2 Animal Park is breeding adorable baby animals. You can breed any two animals of the same species as long as they are not related. You can also use gold coins to speed up the breeding process or to increase the chances of getting twins or triplets. By breeding animals, you can expand your zoo collection and earn more money.

-

You can also collect rare coat colors for your animals by breeding them. Each animal has a chance of having a different coat color than its parents. Some coat colors are more rare and valuable than others. You can see the possible coat colors for each animal in the animal book. By collecting rare coat colors, you can make your zoo more diverse and attractive.
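The breeding rules described above (same species, unrelated parents, a small chance of a rare coat color) can be pictured with a short toy sketch. This is not the game's actual code, and the 10% rare-coat probability is an invented number used purely for illustration.

```python
# Toy model of the breeding rules described above; not Zoo 2 Animal Park's real code.
import random

RARE_COAT_CHANCE = 0.10  # hypothetical odds, chosen only for this example

def can_breed(a: dict, b: dict) -> bool:
    # Two animals can breed if they share a species and are not related.
    return a["species"] == b["species"] and a["family"] != b["family"]

def baby_coat(a: dict, b: dict) -> str:
    # Babies usually inherit a parent's coat color; occasionally a rare one appears.
    if random.random() < RARE_COAT_CHANCE:
        return "rare"
    return random.choice([a["coat"], b["coat"]])

lion_a = {"species": "lion", "family": 1, "coat": "golden"}
lion_b = {"species": "lion", "family": 2, "coat": "white"}
if can_breed(lion_a, lion_b):
    print("Baby coat color:", baby_coat(lion_a, lion_b))
```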

-

zoo 2 animal park unlimited gold coins mod apk
-zoo 2 animal park hack mod apk download
-zoo 2 animal park mod apk latest version free
-zoo 2 animal park mod apk android 1
-zoo 2 animal park mod apk offline
-zoo 2 animal park mod apk unlimited money and diamonds
-zoo 2 animal park mod apk revdl
-zoo 2 animal park mod apk no root
-zoo 2 animal park mod apk unlimited everything
-zoo 2 animal park mod apk for pc
-zoo 2 animal park mod apk online
-zoo 2 animal park mod apk obb
-zoo 2 animal park mod apk rexdl
-zoo 2 animal park mod apk unlimited gems
-zoo 2 animal park mod apk happymod
-zoo 2 animal park mod apk latest update
-zoo 2 animal park mod apk new version
-zoo 2 animal park mod apk vip
-zoo 2 animal park mod apk premium
-zoo 2 animal park mod apk pro
-zoo 2 animal park mod apk full version
-zoo 2 animal park mod apk unlocked all
-zoo 2 animal park mod apk mega
-zoo 2 animal park mod apk cheat
-zoo 2 animal park mod apk original
-zoo 2 animal park mod apk pure
-zoo 2 animal park mod apk mirror
-zoo 2 animal park mod apk apkpure
-zoo 2 animal park mod apk android republic
-zoo 2 animal park mod apk platinmods
-zoo 2 animal park mod apk an1
-zoo 2 animal park mod apk blackmod
-zoo 2 animal park mod apk mob.org
-zoo 2 animal park mod apk andropalace
-zoo 2 animal park mod apk ihackedit
-zoo 2 animal park mod apk lenov.ru
-zoo 2 animal park mod apk apkmody
-zoo 2 animal park mod apk apkmirror.com
-zoo 2 animal park mod apk androidoyun.club
-zoo 2 animal park mod apk apknite.com

-

Participate in exciting events and earn exclusive rewards

-

Zoo 2 Animal Park also offers exciting events that change every week or month. These events usually have a specific theme, such as Halloween or Christmas. They also have special quests, challenges, and rewards. You can participate in these events by completing tasks, collecting items, or competing with other players. By participating in these events, you can earn exclusive rewards such as special animals, decorations, or buildings.

-

Hire helpers and use tickets to exchange unwanted cards

-

Another useful tip for playing Zoo 2 Animal Park is to hire helpers and use tickets to exchange unwanted cards. Helpers are characters that can help you with various tasks in your zoo, such as feeding animals, cleaning enclosures, or collecting money. You can hire helpers with gold coins or real money. By hiring helpers, you can save time and energy for other activities.

-

Tickets are currency that you can use to exchange unwanted cards for new ones. Cards are items that you can collect by opening chests or completing quests. Cards can contain animals, decorations, buildings, or other items. You can exchange tickets for cards at the card shop or the ticket machine. By using tickets, you can get rid of cards that you don't need or want and get new ones that you do.

-

Conclusion

-

Zoo 2 Animal Park is a fun and engaging game for animal lovers and tycoon fans

-

Zoo 2 Animal Park is a game that combines zoo simulation and tycoon elements. It offers a captivating story, cute animals, stunning graphics, and regular updates. It also offers a wide range of features and customization options that allow you to create your own animal paradise.

-

The mod APK version of Zoo 2 Animal Park can enhance your gaming experience but also has some drawbacks

-

The mod APK version of Zoo 2 Animal Park is a modified version of the game that offers unlimited gold coins and other benefits. It can enhance your gaming experience by giving you more resources, faster progress, and more fun. However, it also has some drawbacks such as potential security risks, possible game errors, and unfair advantage over other players.

-

Follow our tips and tricks to make the most out of your zoo adventure

-

If you decide to play Zoo 2 Animal Park or the mod APK version of Zoo 2 Animal Park, you can follow our tips and tricks to make the most out of your zoo adventure. You can follow the quests to level up and unlock new items; take care of your animals and decorate your zoo to attract more visitors; breed adorable baby animals and collect rare coat colors; participate in exciting events and earn exclusive rewards; hire helpers and use tickets to exchange unwanted cards.

-

We hope that this article has helped you learn more about Zoo 2 Animal Park and the mod APK version of Zoo 2 Animal Park. We hope that you enjoy playing this game as much as we do.

-

FAQs

-

Here are some frequently asked questions about Zoo 2 Animal Park and the mod APK version of Zoo 2 Animal Park:

- - - - - - - - - - - - - - - - - - - - - - - - -
Q: Is Zoo 2 Animal Park free to play?
A: Yes, Zoo 2 Animal Park is free to play. You can download it from the Google Play Store or the App Store. However, the game also offers in-app purchases that can enhance your gaming experience.

Q: Is the mod APK version of Zoo 2 Animal Park safe to use?
A: The mod APK version of Zoo 2 Animal Park is not an official product of upjers GmbH, the developer of the game. Therefore, it might not be safe to use. It might contain viruses, malware, spyware, or other harmful elements that could damage your device or steal your data. You should always scan the mod APK file before installing it and use a trusted antivirus app on your device.

Q: Can I play Zoo 2 Animal Park offline?
A: No, Zoo 2 Animal Park requires an internet connection to play. You need to be online to access the game features, such as quests, events, cards, helpers, etc. You also need to be online to save your game progress and sync it with other devices.

Q: Can I play Zoo 2 Animal Park with my friends?
A: Yes, Zoo 2 Animal Park has a social feature that allows you to play with your friends. You can add your friends as neighbors and visit their zoos. You can also chat with them, send them gifts, and help them with their tasks. You can also join a club and cooperate with other players to complete club quests and earn club rewards.

Q: How can I contact the support team of Zoo 2 Animal Park?
A: If you have any questions, problems, or feedback about Zoo 2 Animal Park, you can contact the support team of upjers GmbH by using the in-game support button or by sending an email to support@upjers.com. You can also visit the official website or the Facebook page of Zoo 2 Animal Park for more information and updates.

-
-
\ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Emicsoft Trp Converter Registration Code.rar.md b/spaces/contluForse/HuggingGPT/assets/Emicsoft Trp Converter Registration Code.rar.md deleted file mode 100644 index 0d70f067463fa1e66ea45521ac3fa6ff54699a82..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Emicsoft Trp Converter Registration Code.rar.md +++ /dev/null @@ -1,6 +0,0 @@ -

emicsoft trp converter registration code.rar


Download File: https://ssurll.com/2uzxxG



-
-Aiseesoft Total Video Converter. Aiseesoft MKV Converter v6. Aiseesoft TRP Converter 6. Aiseesoft Video Converter Ultimate v 7. Aiseesoft AMV Converter 5. 1fdad05405
-
-
-

diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/leres/pix2pix/models/base_model_hg.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/leres/pix2pix/models/base_model_hg.py deleted file mode 100644 index 1709accdf0b048b3793dfd1f58d1b06c35f7b907..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/leres/pix2pix/models/base_model_hg.py +++ /dev/null @@ -1,58 +0,0 @@ -import os -import torch - -class BaseModelHG(): - def name(self): - return 'BaseModel' - - def initialize(self, opt): - self.opt = opt - self.gpu_ids = opt.gpu_ids - self.isTrain = opt.isTrain - self.Tensor = torch.cuda.FloatTensor if self.gpu_ids else torch.Tensor - self.save_dir = os.path.join(opt.checkpoints_dir, opt.name) - - def set_input(self, input): - self.input = input - - def forward(self): - pass - - # used in test time, no backprop - def test(self): - pass - - def get_image_paths(self): - pass - - def optimize_parameters(self): - pass - - def get_current_visuals(self): - return self.input - - def get_current_errors(self): - return {} - - def save(self, label): - pass - - # helper saving function that can be used by subclasses - def save_network(self, network, network_label, epoch_label, gpu_ids): - save_filename = '_%s_net_%s.pth' % (epoch_label, network_label) - save_path = os.path.join(self.save_dir, save_filename) - torch.save(network.cpu().state_dict(), save_path) - if len(gpu_ids) and torch.cuda.is_available(): - network.cuda(device_id=gpu_ids[0]) - - # helper loading function that can be used by subclasses - def load_network(self, network, network_label, epoch_label): - save_filename = '%s_net_%s.pth' % (epoch_label, network_label) - save_path = os.path.join(self.save_dir, save_filename) - print(save_path) - model = torch.load(save_path) - return model - # network.load_state_dict(torch.load(save_path)) - - def update_learning_rate(): - pass diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/midas/midas/vit.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/midas/midas/vit.py deleted file mode 100644 index ea46b1be88b261b0dec04f3da0256f5f66f88a74..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/midas/midas/vit.py +++ /dev/null @@ -1,491 +0,0 @@ -import torch -import torch.nn as nn -import timm -import types -import math -import torch.nn.functional as F - - -class Slice(nn.Module): - def __init__(self, start_index=1): - super(Slice, self).__init__() - self.start_index = start_index - - def forward(self, x): - return x[:, self.start_index :] - - -class AddReadout(nn.Module): - def __init__(self, start_index=1): - super(AddReadout, self).__init__() - self.start_index = start_index - - def forward(self, x): - if self.start_index == 2: - readout = (x[:, 0] + x[:, 1]) / 2 - else: - readout = x[:, 0] - return x[:, self.start_index :] + readout.unsqueeze(1) - - -class ProjectReadout(nn.Module): - def __init__(self, in_features, start_index=1): - super(ProjectReadout, self).__init__() - self.start_index = start_index - - self.project = nn.Sequential(nn.Linear(2 * in_features, in_features), nn.GELU()) - - def forward(self, x): - readout = x[:, 0].unsqueeze(1).expand_as(x[:, self.start_index :]) - features = torch.cat((x[:, self.start_index :], readout), -1) - - return self.project(features) - - -class Transpose(nn.Module): - def __init__(self, dim0, dim1): - super(Transpose, self).__init__() - self.dim0 = 
dim0 - self.dim1 = dim1 - - def forward(self, x): - x = x.transpose(self.dim0, self.dim1) - return x - - -def forward_vit(pretrained, x): - b, c, h, w = x.shape - - glob = pretrained.model.forward_flex(x) - - layer_1 = pretrained.activations["1"] - layer_2 = pretrained.activations["2"] - layer_3 = pretrained.activations["3"] - layer_4 = pretrained.activations["4"] - - layer_1 = pretrained.act_postprocess1[0:2](layer_1) - layer_2 = pretrained.act_postprocess2[0:2](layer_2) - layer_3 = pretrained.act_postprocess3[0:2](layer_3) - layer_4 = pretrained.act_postprocess4[0:2](layer_4) - - unflatten = nn.Sequential( - nn.Unflatten( - 2, - torch.Size( - [ - h // pretrained.model.patch_size[1], - w // pretrained.model.patch_size[0], - ] - ), - ) - ) - - if layer_1.ndim == 3: - layer_1 = unflatten(layer_1) - if layer_2.ndim == 3: - layer_2 = unflatten(layer_2) - if layer_3.ndim == 3: - layer_3 = unflatten(layer_3) - if layer_4.ndim == 3: - layer_4 = unflatten(layer_4) - - layer_1 = pretrained.act_postprocess1[3 : len(pretrained.act_postprocess1)](layer_1) - layer_2 = pretrained.act_postprocess2[3 : len(pretrained.act_postprocess2)](layer_2) - layer_3 = pretrained.act_postprocess3[3 : len(pretrained.act_postprocess3)](layer_3) - layer_4 = pretrained.act_postprocess4[3 : len(pretrained.act_postprocess4)](layer_4) - - return layer_1, layer_2, layer_3, layer_4 - - -def _resize_pos_embed(self, posemb, gs_h, gs_w): - posemb_tok, posemb_grid = ( - posemb[:, : self.start_index], - posemb[0, self.start_index :], - ) - - gs_old = int(math.sqrt(len(posemb_grid))) - - posemb_grid = posemb_grid.reshape(1, gs_old, gs_old, -1).permute(0, 3, 1, 2) - posemb_grid = F.interpolate(posemb_grid, size=(gs_h, gs_w), mode="bilinear") - posemb_grid = posemb_grid.permute(0, 2, 3, 1).reshape(1, gs_h * gs_w, -1) - - posemb = torch.cat([posemb_tok, posemb_grid], dim=1) - - return posemb - - -def forward_flex(self, x): - b, c, h, w = x.shape - - pos_embed = self._resize_pos_embed( - self.pos_embed, h // self.patch_size[1], w // self.patch_size[0] - ) - - B = x.shape[0] - - if hasattr(self.patch_embed, "backbone"): - x = self.patch_embed.backbone(x) - if isinstance(x, (list, tuple)): - x = x[-1] # last feature if backbone outputs list/tuple of features - - x = self.patch_embed.proj(x).flatten(2).transpose(1, 2) - - if getattr(self, "dist_token", None) is not None: - cls_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole cls_tokens impl from Phil Wang, thanks - dist_token = self.dist_token.expand(B, -1, -1) - x = torch.cat((cls_tokens, dist_token, x), dim=1) - else: - cls_tokens = self.cls_token.expand( - B, -1, -1 - ) # stole cls_tokens impl from Phil Wang, thanks - x = torch.cat((cls_tokens, x), dim=1) - - x = x + pos_embed - x = self.pos_drop(x) - - for blk in self.blocks: - x = blk(x) - - x = self.norm(x) - - return x - - -activations = {} - - -def get_activation(name): - def hook(model, input, output): - activations[name] = output - - return hook - - -def get_readout_oper(vit_features, features, use_readout, start_index=1): - if use_readout == "ignore": - readout_oper = [Slice(start_index)] * len(features) - elif use_readout == "add": - readout_oper = [AddReadout(start_index)] * len(features) - elif use_readout == "project": - readout_oper = [ - ProjectReadout(vit_features, start_index) for out_feat in features - ] - else: - assert ( - False - ), "wrong operation for readout token, use_readout can be 'ignore', 'add', or 'project'" - - return readout_oper - - -def _make_vit_b16_backbone( - model, - features=[96, 192, 384, 
768], - size=[384, 384], - hooks=[2, 5, 8, 11], - vit_features=768, - use_readout="ignore", - start_index=1, -): - pretrained = nn.Module() - - pretrained.model = model - pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1")) - pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2")) - pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3")) - pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4")) - - pretrained.activations = activations - - readout_oper = get_readout_oper(vit_features, features, use_readout, start_index) - - # 32, 48, 136, 384 - pretrained.act_postprocess1 = nn.Sequential( - readout_oper[0], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[0], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[0], - out_channels=features[0], - kernel_size=4, - stride=4, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess2 = nn.Sequential( - readout_oper[1], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[1], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[1], - out_channels=features[1], - kernel_size=2, - stride=2, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess3 = nn.Sequential( - readout_oper[2], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[2], - kernel_size=1, - stride=1, - padding=0, - ), - ) - - pretrained.act_postprocess4 = nn.Sequential( - readout_oper[3], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[3], - kernel_size=1, - stride=1, - padding=0, - ), - nn.Conv2d( - in_channels=features[3], - out_channels=features[3], - kernel_size=3, - stride=2, - padding=1, - ), - ) - - pretrained.model.start_index = start_index - pretrained.model.patch_size = [16, 16] - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. 
- pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model) - pretrained.model._resize_pos_embed = types.MethodType( - _resize_pos_embed, pretrained.model - ) - - return pretrained - - -def _make_pretrained_vitl16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_large_patch16_384", pretrained=pretrained) - - hooks = [5, 11, 17, 23] if hooks == None else hooks - return _make_vit_b16_backbone( - model, - features=[256, 512, 1024, 1024], - hooks=hooks, - vit_features=1024, - use_readout=use_readout, - ) - - -def _make_pretrained_vitb16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_base_patch16_384", pretrained=pretrained) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout - ) - - -def _make_pretrained_deitb16_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model("vit_deit_base_patch16_384", pretrained=pretrained) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, features=[96, 192, 384, 768], hooks=hooks, use_readout=use_readout - ) - - -def _make_pretrained_deitb16_distil_384(pretrained, use_readout="ignore", hooks=None): - model = timm.create_model( - "vit_deit_base_distilled_patch16_384", pretrained=pretrained - ) - - hooks = [2, 5, 8, 11] if hooks == None else hooks - return _make_vit_b16_backbone( - model, - features=[96, 192, 384, 768], - hooks=hooks, - use_readout=use_readout, - start_index=2, - ) - - -def _make_vit_b_rn50_backbone( - model, - features=[256, 512, 768, 768], - size=[384, 384], - hooks=[0, 1, 8, 11], - vit_features=768, - use_vit_only=False, - use_readout="ignore", - start_index=1, -): - pretrained = nn.Module() - - pretrained.model = model - - if use_vit_only == True: - pretrained.model.blocks[hooks[0]].register_forward_hook(get_activation("1")) - pretrained.model.blocks[hooks[1]].register_forward_hook(get_activation("2")) - else: - pretrained.model.patch_embed.backbone.stages[0].register_forward_hook( - get_activation("1") - ) - pretrained.model.patch_embed.backbone.stages[1].register_forward_hook( - get_activation("2") - ) - - pretrained.model.blocks[hooks[2]].register_forward_hook(get_activation("3")) - pretrained.model.blocks[hooks[3]].register_forward_hook(get_activation("4")) - - pretrained.activations = activations - - readout_oper = get_readout_oper(vit_features, features, use_readout, start_index) - - if use_vit_only == True: - pretrained.act_postprocess1 = nn.Sequential( - readout_oper[0], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[0], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[0], - out_channels=features[0], - kernel_size=4, - stride=4, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - - pretrained.act_postprocess2 = nn.Sequential( - readout_oper[1], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[1], - kernel_size=1, - stride=1, - padding=0, - ), - nn.ConvTranspose2d( - in_channels=features[1], - out_channels=features[1], - kernel_size=2, - stride=2, - padding=0, - bias=True, - dilation=1, - groups=1, - ), - ) - else: - pretrained.act_postprocess1 = nn.Sequential( - nn.Identity(), nn.Identity(), nn.Identity() - ) - 
pretrained.act_postprocess2 = nn.Sequential( - nn.Identity(), nn.Identity(), nn.Identity() - ) - - pretrained.act_postprocess3 = nn.Sequential( - readout_oper[2], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[2], - kernel_size=1, - stride=1, - padding=0, - ), - ) - - pretrained.act_postprocess4 = nn.Sequential( - readout_oper[3], - Transpose(1, 2), - nn.Unflatten(2, torch.Size([size[0] // 16, size[1] // 16])), - nn.Conv2d( - in_channels=vit_features, - out_channels=features[3], - kernel_size=1, - stride=1, - padding=0, - ), - nn.Conv2d( - in_channels=features[3], - out_channels=features[3], - kernel_size=3, - stride=2, - padding=1, - ), - ) - - pretrained.model.start_index = start_index - pretrained.model.patch_size = [16, 16] - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model.forward_flex = types.MethodType(forward_flex, pretrained.model) - - # We inject this function into the VisionTransformer instances so that - # we can use it with interpolated position embeddings without modifying the library source. - pretrained.model._resize_pos_embed = types.MethodType( - _resize_pos_embed, pretrained.model - ) - - return pretrained - - -def _make_pretrained_vitb_rn50_384( - pretrained, use_readout="ignore", hooks=None, use_vit_only=False -): - model = timm.create_model("vit_base_resnet50_384", pretrained=pretrained) - - hooks = [0, 1, 8, 11] if hooks == None else hooks - return _make_vit_b_rn50_backbone( - model, - features=[256, 512, 768, 768], - size=[384, 384], - hooks=hooks, - use_vit_only=use_vit_only, - use_readout=use_readout, - ) diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/datasets/pipelines/loading.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/datasets/pipelines/loading.py deleted file mode 100644 index 3ad8c2cb67cb1d2b593217fb1fb2e0ca5834c24f..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/datasets/pipelines/loading.py +++ /dev/null @@ -1,153 +0,0 @@ -import os.path as osp - -import annotator.mmpkg.mmcv as mmcv -import numpy as np - -from ..builder import PIPELINES - - -@PIPELINES.register_module() -class LoadImageFromFile(object): - """Load an image from file. - - Required keys are "img_prefix" and "img_info" (a dict that must contain the - key "filename"). Added or updated keys are "filename", "img", "img_shape", - "ori_shape" (same as `img_shape`), "pad_shape" (same as `img_shape`), - "scale_factor" (1.0) and "img_norm_cfg" (means=0 and stds=1). - - Args: - to_float32 (bool): Whether to convert the loaded image to a float32 - numpy array. If set to False, the loaded image is an uint8 array. - Defaults to False. - color_type (str): The flag argument for :func:`mmcv.imfrombytes`. - Defaults to 'color'. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - imdecode_backend (str): Backend for :func:`mmcv.imdecode`. 
Default: - 'cv2' - """ - - def __init__(self, - to_float32=False, - color_type='color', - file_client_args=dict(backend='disk'), - imdecode_backend='cv2'): - self.to_float32 = to_float32 - self.color_type = color_type - self.file_client_args = file_client_args.copy() - self.file_client = None - self.imdecode_backend = imdecode_backend - - def __call__(self, results): - """Call functions to load image and get image meta information. - - Args: - results (dict): Result dict from :obj:`mmseg.CustomDataset`. - - Returns: - dict: The dict contains loaded image and meta information. - """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results.get('img_prefix') is not None: - filename = osp.join(results['img_prefix'], - results['img_info']['filename']) - else: - filename = results['img_info']['filename'] - img_bytes = self.file_client.get(filename) - img = mmcv.imfrombytes( - img_bytes, flag=self.color_type, backend=self.imdecode_backend) - if self.to_float32: - img = img.astype(np.float32) - - results['filename'] = filename - results['ori_filename'] = results['img_info']['filename'] - results['img'] = img - results['img_shape'] = img.shape - results['ori_shape'] = img.shape - # Set initial values for default meta_keys - results['pad_shape'] = img.shape - results['scale_factor'] = 1.0 - num_channels = 1 if len(img.shape) < 3 else img.shape[2] - results['img_norm_cfg'] = dict( - mean=np.zeros(num_channels, dtype=np.float32), - std=np.ones(num_channels, dtype=np.float32), - to_rgb=False) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(to_float32={self.to_float32},' - repr_str += f"color_type='{self.color_type}'," - repr_str += f"imdecode_backend='{self.imdecode_backend}')" - return repr_str - - -@PIPELINES.register_module() -class LoadAnnotations(object): - """Load annotations for semantic segmentation. - - Args: - reduce_zero_label (bool): Whether reduce all label value by 1. - Usually used for datasets where 0 is background label. - Default: False. - file_client_args (dict): Arguments to instantiate a FileClient. - See :class:`mmcv.fileio.FileClient` for details. - Defaults to ``dict(backend='disk')``. - imdecode_backend (str): Backend for :func:`mmcv.imdecode`. Default: - 'pillow' - """ - - def __init__(self, - reduce_zero_label=False, - file_client_args=dict(backend='disk'), - imdecode_backend='pillow'): - self.reduce_zero_label = reduce_zero_label - self.file_client_args = file_client_args.copy() - self.file_client = None - self.imdecode_backend = imdecode_backend - - def __call__(self, results): - """Call function to load multiple types annotations. - - Args: - results (dict): Result dict from :obj:`mmseg.CustomDataset`. - - Returns: - dict: The dict contains loaded semantic segmentation annotations. 
- """ - - if self.file_client is None: - self.file_client = mmcv.FileClient(**self.file_client_args) - - if results.get('seg_prefix', None) is not None: - filename = osp.join(results['seg_prefix'], - results['ann_info']['seg_map']) - else: - filename = results['ann_info']['seg_map'] - img_bytes = self.file_client.get(filename) - gt_semantic_seg = mmcv.imfrombytes( - img_bytes, flag='unchanged', - backend=self.imdecode_backend).squeeze().astype(np.uint8) - # modify if custom classes - if results.get('label_map', None) is not None: - for old_id, new_id in results['label_map'].items(): - gt_semantic_seg[gt_semantic_seg == old_id] = new_id - # reduce zero_label - if self.reduce_zero_label: - # avoid using underflow conversion - gt_semantic_seg[gt_semantic_seg == 0] = 255 - gt_semantic_seg = gt_semantic_seg - 1 - gt_semantic_seg[gt_semantic_seg == 254] = 255 - results['gt_semantic_seg'] = gt_semantic_seg - results['seg_fields'].append('gt_semantic_seg') - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(reduce_zero_label={self.reduce_zero_label},' - repr_str += f"imdecode_backend='{self.imdecode_backend}')" - return repr_str diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/parallel/distributed_deprecated.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/parallel/distributed_deprecated.py deleted file mode 100644 index 676937a2085d4da20fa87923041a200fca6214eb..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/parallel/distributed_deprecated.py +++ /dev/null @@ -1,70 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.distributed as dist -import torch.nn as nn -from torch._utils import (_flatten_dense_tensors, _take_tensors, - _unflatten_dense_tensors) - -from annotator.uniformer.mmcv.utils import TORCH_VERSION, digit_version -from .registry import MODULE_WRAPPERS -from .scatter_gather import scatter_kwargs - - -@MODULE_WRAPPERS.register_module() -class MMDistributedDataParallel(nn.Module): - - def __init__(self, - module, - dim=0, - broadcast_buffers=True, - bucket_cap_mb=25): - super(MMDistributedDataParallel, self).__init__() - self.module = module - self.dim = dim - self.broadcast_buffers = broadcast_buffers - - self.broadcast_bucket_size = bucket_cap_mb * 1024 * 1024 - self._sync_params() - - def _dist_broadcast_coalesced(self, tensors, buffer_size): - for tensors in _take_tensors(tensors, buffer_size): - flat_tensors = _flatten_dense_tensors(tensors) - dist.broadcast(flat_tensors, 0) - for tensor, synced in zip( - tensors, _unflatten_dense_tensors(flat_tensors, tensors)): - tensor.copy_(synced) - - def _sync_params(self): - module_states = list(self.module.state_dict().values()) - if len(module_states) > 0: - self._dist_broadcast_coalesced(module_states, - self.broadcast_bucket_size) - if self.broadcast_buffers: - if (TORCH_VERSION != 'parrots' - and digit_version(TORCH_VERSION) < digit_version('1.0')): - buffers = [b.data for b in self.module._all_buffers()] - else: - buffers = [b.data for b in self.module.buffers()] - if len(buffers) > 0: - self._dist_broadcast_coalesced(buffers, - self.broadcast_bucket_size) - - def scatter(self, inputs, kwargs, device_ids): - return scatter_kwargs(inputs, kwargs, device_ids, dim=self.dim) - - def forward(self, *inputs, **kwargs): - inputs, kwargs = self.scatter(inputs, kwargs, - [torch.cuda.current_device()]) - return 
self.module(*inputs[0], **kwargs[0]) - - def train_step(self, *inputs, **kwargs): - inputs, kwargs = self.scatter(inputs, kwargs, - [torch.cuda.current_device()]) - output = self.module.train_step(*inputs[0], **kwargs[0]) - return output - - def val_step(self, *inputs, **kwargs): - inputs, kwargs = self.scatter(inputs, kwargs, - [torch.cuda.current_device()]) - output = self.module.val_step(*inputs[0], **kwargs[0]) - return output diff --git a/spaces/daarumadx/bot/src/processing/video.py b/spaces/daarumadx/bot/src/processing/video.py deleted file mode 100644 index 70b7da4ab7b1de55e0cac8c2399bede974f3de5f..0000000000000000000000000000000000000000 --- a/spaces/daarumadx/bot/src/processing/video.py +++ /dev/null @@ -1,86 +0,0 @@ -"""Video Transform Processing.""" -import os -import shutil -import tempfile - -import cv2 -import imageio - -from config import Config as Conf -from processing import Processing -from processing.multiple import MultipleImageProcessing -from processing.utils import select_phases -from utils import write_image - - -class VideoProcessing(Processing): - """Video Image Processing Class.""" - def _setup(self, *args): - self.__phases = select_phases(self._args) - self.__input_path = self._args['input'] - self.__output_path = self._args['output'] - self.__tmp_dir = None - self.__temp_input_paths = [] - self.__temp_output_paths = [] - self.__tmp_dir = tempfile.mkdtemp() - self.__fps = 25.0 - - Conf.log.debug("Temporay dir is {}".format(self.__tmp_dir)) - - try: - video = cv2.VideoCapture(self.__input_path) - self.__fps = video.get(cv2.CAP_PROP_FPS) - except: - Conf.log.debug("Error trying to get frame-rate from video. Default: 25") - - if self.__fps <= 0: - self.__fps = 25.0 - - imgs = imageio.get_reader(self.__input_path, format="FFMPEG") - - self.__temp_input_paths = [] - self.__temp_output_paths = [] - - for i, im in enumerate(imgs): - frame_input_path = os.path.join(self.__tmp_dir, "input_{}.png".format(i)) - frame_output_path = os.path.join(self.__tmp_dir, "output_{}.png".format(i)) - - self.__temp_input_paths = self.__temp_input_paths + [frame_input_path] - self.__temp_output_paths = self.__temp_output_paths + [frame_output_path] - - write_image(cv2.cvtColor(im, cv2.COLOR_RGB2BGR), frame_input_path) - - Conf.log.info("Video have {} frames to process @ {}fps".format(len(self.__temp_input_paths), self.__fps)) - - self._args['input'] = self.__temp_input_paths - self._args['output'] = self.__temp_output_paths - - def _execute(self, *args): - """ - Execute all phases on each frames of the gif and recreate the gif. - - :return: None - """ - MultipleImageProcessing().run(config=self._args) - - dir_out = os.path.dirname(self.__output_path) - - if dir_out != '': - os.makedirs(dir_out, exist_ok=True) - - ext = os.path.splitext(self.__input_path)[1] - - video_codec = "libx264" - - if ext == ".webm": - video_codec = "libvpx" - - try: - imageio.mimsave(self.__output_path, [imageio.imread(i) for i in self.__temp_output_paths], format="FFMPEG", codec=video_codec, fps=self.__fps) - except: - imageio.mimsave(self.__output_path, [imageio.imread(i) for i in self.__temp_output_paths], format="FFMPEG", codec=video_codec) - - Conf.log.info("Video created! 
{}".format(self.__output_path)) - - def _clean(self, *args): - shutil.rmtree(self.__tmp_dir) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/implementations/cached.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/implementations/cached.py deleted file mode 100644 index 379cf04cffeedc85618952c0dcea152c9ebc6eaa..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fsspec/implementations/cached.py +++ /dev/null @@ -1,867 +0,0 @@ -from __future__ import annotations - -import contextlib -import hashlib -import inspect -import logging -import os -import pickle -import tempfile -import time -from shutil import rmtree -from typing import ClassVar - -from fsspec import AbstractFileSystem, filesystem -from fsspec.callbacks import _DEFAULT_CALLBACK -from fsspec.compression import compr -from fsspec.core import BaseCache, MMapCache -from fsspec.exceptions import BlocksizeMismatchError -from fsspec.spec import AbstractBufferedFile -from fsspec.utils import infer_compression - -logger = logging.getLogger("fsspec.cached") - - -class CachingFileSystem(AbstractFileSystem): - """Locally caching filesystem, layer over any other FS - - This class implements chunk-wise local storage of remote files, for quick - access after the initial download. The files are stored in a given - directory with hashes of URLs for the filenames. If no directory is given, - a temporary one is used, which should be cleaned up by the OS after the - process ends. The files themselves are sparse (as implemented in - :class:`~fsspec.caching.MMapCache`), so only the data which is accessed - takes up space. - - Restrictions: - - - the block-size must be the same for each access of a given file, unless - all blocks of the file have already been read - - caching can only be applied to file-systems which produce files - derived from fsspec.spec.AbstractBufferedFile ; LocalFileSystem is also - allowed, for testing - """ - - protocol: ClassVar[str | tuple[str, ...]] = ("blockcache", "cached") - - def __init__( - self, - target_protocol=None, - cache_storage="TMP", - cache_check=10, - check_files=False, - expiry_time=604800, - target_options=None, - fs=None, - same_names=False, - compression=None, - **kwargs, - ): - """ - - Parameters - ---------- - target_protocol: str (optional) - Target filesystem protocol. Provide either this or ``fs``. - cache_storage: str or list(str) - Location to store files. If "TMP", this is a temporary directory, - and will be cleaned up by the OS when this process ends (or later). - If a list, each location will be tried in the order given, but - only the last will be considered writable. - cache_check: int - Number of seconds between reload of cache metadata - check_files: bool - Whether to explicitly see if the UID of the remote file matches - the stored one before using. Warning: some file systems such as - HTTP cannot reliably give a unique hash of the contents of some - path, so be sure to set this option to False. - expiry_time: int - The time in seconds after which a local copy is considered useless. - Set to falsy to prevent expiry. The default is equivalent to one - week. - target_options: dict or None - Passed to the instantiation of the FS, if fs is None. - fs: filesystem instance - The target filesystem to run against. Provide this or ``protocol``. 
- same_names: bool (optional) - By default, target URLs are hashed, so that files from different backends - with the same basename do not conflict. If this is true, the original - basename is used. - compression: str (optional) - To decompress on download. Can be 'infer' (guess from the URL name), - one of the entries in ``fsspec.compression.compr``, or None for no - decompression. - """ - super().__init__(**kwargs) - if fs is None and target_protocol is None: - raise ValueError( - "Please provide filesystem instance(fs) or target_protocol" - ) - if not (fs is None) ^ (target_protocol is None): - raise ValueError( - "Both filesystems (fs) and target_protocol may not be both given." - ) - if cache_storage == "TMP": - storage = [tempfile.mkdtemp()] - else: - if isinstance(cache_storage, str): - storage = [cache_storage] - else: - storage = cache_storage - os.makedirs(storage[-1], exist_ok=True) - self.storage = storage - self.kwargs = target_options or {} - self.cache_check = cache_check - self.check_files = check_files - self.expiry = expiry_time - self.compression = compression - # TODO: same_names should allow for variable prefix, not only - # to keep the basename - self.same_names = same_names - self.target_protocol = ( - target_protocol - if isinstance(target_protocol, str) - else (fs.protocol if isinstance(fs.protocol, str) else fs.protocol[0]) - ) - self.load_cache() - self.fs = fs if fs is not None else filesystem(target_protocol, **self.kwargs) - - def _strip_protocol(path): - # acts as a method, since each instance has a difference target - return self.fs._strip_protocol(type(self)._strip_protocol(path)) - - self._strip_protocol = _strip_protocol - - def _mkcache(self): - os.makedirs(self.storage[-1], exist_ok=True) - - def load_cache(self): - """Read set of stored blocks from file""" - cached_files = [] - for storage in self.storage: - fn = os.path.join(storage, "cache") - if os.path.exists(fn): - with open(fn, "rb") as f: - # TODO: consolidate blocks here - loaded_cached_files = pickle.load(f) - for c in loaded_cached_files.values(): - if isinstance(c["blocks"], list): - c["blocks"] = set(c["blocks"]) - cached_files.append(loaded_cached_files) - else: - cached_files.append({}) - self._mkcache() - self.cached_files = cached_files or [{}] - self.last_cache = time.time() - - def save_cache(self): - """Save set of stored blocks from file""" - fn = os.path.join(self.storage[-1], "cache") - # TODO: a file lock could be used to ensure file does not change - # between re-read and write; but occasional duplicated reads ok. - cache = self.cached_files[-1] - if os.path.exists(fn): - with open(fn, "rb") as f: - cached_files = pickle.load(f) - for k, c in cached_files.items(): - if k in cache: - if c["blocks"] is True or cache[k]["blocks"] is True: - c["blocks"] = True - else: - # self.cached_files[*][*]["blocks"] must continue to - # point to the same set object so that updates - # performed by MMapCache are propagated back to - # self.cached_files. 
- blocks = cache[k]["blocks"] - blocks.update(c["blocks"]) - c["blocks"] = blocks - c["time"] = max(c["time"], cache[k]["time"]) - c["uid"] = cache[k]["uid"] - - # Files can be added to cache after it was written once - for k, c in cache.items(): - if k not in cached_files: - cached_files[k] = c - else: - cached_files = cache - cache = {k: v.copy() for k, v in cached_files.items()} - for c in cache.values(): - if isinstance(c["blocks"], set): - c["blocks"] = list(c["blocks"]) - self._mkcache() - with atomic_write(fn) as f: - pickle.dump(cache, f) - self.cached_files[-1] = cached_files - self.last_cache = time.time() - - def _check_cache(self): - """Reload caches if time elapsed or any disappeared""" - self._mkcache() - if not self.cache_check: - # explicitly told not to bother checking - return - timecond = time.time() - self.last_cache > self.cache_check - existcond = all(os.path.exists(storage) for storage in self.storage) - if timecond or not existcond: - self.load_cache() - - def _check_file(self, path): - """Is path in cache and still valid""" - path = self._strip_protocol(path) - - self._check_cache() - for storage, cache in zip(self.storage, self.cached_files): - if path not in cache: - continue - detail = cache[path].copy() - if self.check_files: - if detail["uid"] != self.fs.ukey(path): - continue - if self.expiry: - if time.time() - detail["time"] > self.expiry: - continue - fn = os.path.join(storage, detail["fn"]) - if os.path.exists(fn): - return detail, fn - return False - - def clear_cache(self): - """Remove all files and metadat from the cache - - In the case of multiple cache locations, this clears only the last one, - which is assumed to be the read/write one. - """ - rmtree(self.storage[-1]) - self.load_cache() - - def clear_expired_cache(self, expiry_time=None): - """Remove all expired files and metadata from the cache - - In the case of multiple cache locations, this clears only the last one, - which is assumed to be the read/write one. - - Parameters - ---------- - expiry_time: int - The time in seconds after which a local copy is considered useless. - If not defined the default is equivalent to the attribute from the - file caching instantiation. - """ - - if not expiry_time: - expiry_time = self.expiry - - self._check_cache() - - for path, detail in self.cached_files[-1].copy().items(): - if time.time() - detail["time"] > expiry_time: - if self.same_names: - basename = os.path.basename(detail["original"]) - fn = os.path.join(self.storage[-1], basename) - else: - fn = os.path.join(self.storage[-1], detail["fn"]) - if os.path.exists(fn): - os.remove(fn) - self.cached_files[-1].pop(path) - - if self.cached_files[-1]: - cache_path = os.path.join(self.storage[-1], "cache") - with atomic_write(cache_path) as fc: - pickle.dump(self.cached_files[-1], fc) - else: - rmtree(self.storage[-1]) - self.load_cache() - - def pop_from_cache(self, path): - """Remove cached version of given file - - Deletes local copy of the given (remote) path. 
If it is found in a cache - location which is not the last, it is assumed to be read-only, and - raises PermissionError - """ - path = self._strip_protocol(path) - details = self._check_file(path) - if not details: - return - _, fn = details - if fn.startswith(self.storage[-1]): - # is in in writable cache - os.remove(fn) - self.cached_files[-1].pop(path) - self.save_cache() - else: - raise PermissionError( - "Can only delete cached file in last, writable cache location" - ) - - def _open( - self, - path, - mode="rb", - block_size=None, - autocommit=True, - cache_options=None, - **kwargs, - ): - """Wrap the target _open - - If the whole file exists in the cache, just open it locally and - return that. - - Otherwise, open the file on the target FS, and make it have a mmap - cache pointing to the location which we determine, in our cache. - The ``blocks`` instance is shared, so as the mmap cache instance - updates, so does the entry in our ``cached_files`` attribute. - We monkey-patch this file, so that when it closes, we call - ``close_and_update`` to save the state of the blocks. - """ - path = self._strip_protocol(path) - - path = self.fs._strip_protocol(path) - if "r" not in mode: - return self.fs._open( - path, - mode=mode, - block_size=block_size, - autocommit=autocommit, - cache_options=cache_options, - **kwargs, - ) - detail = self._check_file(path) - if detail: - # file is in cache - detail, fn = detail - hash, blocks = detail["fn"], detail["blocks"] - if blocks is True: - # stored file is complete - logger.debug("Opening local copy of %s" % path) - return open(fn, mode) - # TODO: action where partial file exists in read-only cache - logger.debug("Opening partially cached copy of %s" % path) - else: - hash = self.hash_name(path, self.same_names) - fn = os.path.join(self.storage[-1], hash) - blocks = set() - detail = { - "original": path, - "fn": hash, - "blocks": blocks, - "time": time.time(), - "uid": self.fs.ukey(path), - } - self.cached_files[-1][path] = detail - logger.debug("Creating local sparse file for %s" % path) - - # call target filesystems open - self._mkcache() - f = self.fs._open( - path, - mode=mode, - block_size=block_size, - autocommit=autocommit, - cache_options=cache_options, - cache_type="none", - **kwargs, - ) - if self.compression: - comp = ( - infer_compression(path) - if self.compression == "infer" - else self.compression - ) - f = compr[comp](f, mode="rb") - if "blocksize" in detail: - if detail["blocksize"] != f.blocksize: - raise BlocksizeMismatchError( - "Cached file must be reopened with same block" - "size as original (old: %i, new %i)" - "" % (detail["blocksize"], f.blocksize) - ) - else: - detail["blocksize"] = f.blocksize - f.cache = MMapCache(f.blocksize, f._fetch_range, f.size, fn, blocks) - close = f.close - f.close = lambda: self.close_and_update(f, close) - self.save_cache() - return f - - def hash_name(self, path, same_name): - return hash_name(path, same_name=same_name) - - def close_and_update(self, f, close): - """Called when a file is closing, so store the set of blocks""" - if f.closed: - return - path = self._strip_protocol(f.path) - - c = self.cached_files[-1][path] - if c["blocks"] is not True and len(c["blocks"]) * f.blocksize >= f.size: - c["blocks"] = True - try: - logger.debug("going to save") - self.save_cache() - logger.debug("saved") - except OSError: - logger.debug("Cache saving failed while closing file") - except NameError: - logger.debug("Cache save failed due to interpreter shutdown") - close() - f.closed = True - - def 
__getattribute__(self, item): - if item in [ - "load_cache", - "_open", - "save_cache", - "close_and_update", - "__init__", - "__getattribute__", - "__reduce__", - "_make_local_details", - "open", - "cat", - "cat_file", - "get", - "read_block", - "tail", - "head", - "_check_file", - "_check_cache", - "_mkcache", - "clear_cache", - "clear_expired_cache", - "pop_from_cache", - "_mkcache", - "local_file", - "_paths_from_path", - "get_mapper", - "open_many", - "commit_many", - "hash_name", - "__hash__", - "__eq__", - "to_json", - ]: - # all the methods defined in this class. Note `open` here, since - # it calls `_open`, but is actually in superclass - return lambda *args, **kw: getattr(type(self), item).__get__(self)( - *args, **kw - ) - if item in ["__reduce_ex__"]: - raise AttributeError - if item in ["_cache"]: - # class attributes - return getattr(type(self), item) - if item == "__class__": - return type(self) - d = object.__getattribute__(self, "__dict__") - fs = d.get("fs", None) # fs is not immediately defined - if item in d: - return d[item] - elif fs is not None: - if item in fs.__dict__: - # attribute of instance - return fs.__dict__[item] - # attributed belonging to the target filesystem - cls = type(fs) - m = getattr(cls, item) - if (inspect.isfunction(m) or inspect.isdatadescriptor(m)) and ( - not hasattr(m, "__self__") or m.__self__ is None - ): - # instance method - return m.__get__(fs, cls) - return m # class method or attribute - else: - # attributes of the superclass, while target is being set up - return super().__getattribute__(item) - - def __eq__(self, other): - """Test for equality.""" - if self is other: - return True - if not isinstance(other, type(self)): - return False - return ( - self.storage == other.storage - and self.kwargs == other.kwargs - and self.cache_check == other.cache_check - and self.check_files == other.check_files - and self.expiry == other.expiry - and self.compression == other.compression - and self.same_names == other.same_names - and self.target_protocol == other.target_protocol - ) - - def __hash__(self): - """Calculate hash.""" - return ( - hash(tuple(self.storage)) - ^ hash(str(self.kwargs)) - ^ hash(self.cache_check) - ^ hash(self.check_files) - ^ hash(self.expiry) - ^ hash(self.compression) - ^ hash(self.same_names) - ^ hash(self.target_protocol) - ) - - def to_json(self): - """Calculate JSON representation. - - Not implemented yet for CachingFileSystem. - """ - raise NotImplementedError( - "CachingFileSystem JSON representation not implemented" - ) - - -class WholeFileCacheFileSystem(CachingFileSystem): - """Caches whole remote files on first access - - This class is intended as a layer over any other file system, and - will make a local copy of each file accessed, so that all subsequent - reads are local. This is similar to ``CachingFileSystem``, but without - the block-wise functionality and so can work even when sparse files - are not allowed. See its docstring for definition of the init - arguments. - - The class still needs access to the remote store for listing files, - and may refresh cached files. 
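A minimal, hedged sketch of the whole-file variant described here, registered as `filecache`; paths are placeholders.

```python
# Hedged sketch: "filecache" copies each remote file in full on first access,
# so later reads are plain local opens.
import fsspec

fs = fsspec.filesystem(
    "filecache",
    target_protocol="https",            # provide this or a ready-made fs instance
    cache_storage="/tmp/fsspec-whole",  # placeholder cache directory
    same_names=True,                    # keep original basenames instead of URL hashes
)

with fs.open("https://example.com/data.csv", mode="rb") as f:
    first_line = f.readline()
```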
- """ - - protocol = "filecache" - local_file = True - - def open_many(self, open_files): - paths = [of.path for of in open_files] - if "r" in open_files.mode: - self._mkcache() - else: - return [ - LocalTempFile(self.fs, path, mode=open_files.mode) for path in paths - ] - - if self.compression: - raise NotImplementedError - details = [self._check_file(sp) for sp in paths] - downpath = [p for p, d in zip(paths, details) if not d] - downfn0 = [ - os.path.join(self.storage[-1], self.hash_name(p, self.same_names)) - for p, d in zip(paths, details) - ] # keep these path names for opening later - downfn = [fn for fn, d in zip(downfn0, details) if not d] - if downpath: - # skip if all files are already cached and up to date - self.fs.get(downpath, downfn) - - # update metadata - only happens when downloads are successful - newdetail = [ - { - "original": path, - "fn": self.hash_name(path, self.same_names), - "blocks": True, - "time": time.time(), - "uid": self.fs.ukey(path), - } - for path in downpath - ] - self.cached_files[-1].update( - {path: detail for path, detail in zip(downpath, newdetail)} - ) - self.save_cache() - - def firstpart(fn): - # helper to adapt both whole-file and simple-cache - return fn[1] if isinstance(fn, tuple) else fn - - return [ - open(firstpart(fn0) if fn0 else fn1, mode=open_files.mode) - for fn0, fn1 in zip(details, downfn0) - ] - - def commit_many(self, open_files): - self.fs.put([f.fn for f in open_files], [f.path for f in open_files]) - [f.close() for f in open_files] - for f in open_files: - # in case autocommit is off, and so close did not already delete - try: - os.remove(f.name) - except FileNotFoundError: - pass - - def _make_local_details(self, path): - hash = self.hash_name(path, self.same_names) - fn = os.path.join(self.storage[-1], hash) - detail = { - "original": path, - "fn": hash, - "blocks": True, - "time": time.time(), - "uid": self.fs.ukey(path), - } - self.cached_files[-1][path] = detail - logger.debug("Copying %s to local cache" % path) - return fn - - def cat( - self, - path, - recursive=False, - on_error="raise", - callback=_DEFAULT_CALLBACK, - **kwargs, - ): - paths = self.expand_path( - path, recursive=recursive, maxdepth=kwargs.get("maxdepth", None) - ) - getpaths = [] - storepaths = [] - fns = [] - out = {} - for p in paths.copy(): - try: - detail = self._check_file(p) - if not detail: - fn = self._make_local_details(p) - getpaths.append(p) - storepaths.append(fn) - else: - detail, fn = detail if isinstance(detail, tuple) else (None, detail) - fns.append(fn) - except Exception as e: - if on_error == "raise": - raise - if on_error == "return": - out[p] = e - paths.remove(p) - - if getpaths: - self.fs.get(getpaths, storepaths) - self.save_cache() - - callback.set_size(len(paths)) - for p, fn in zip(paths, fns): - with open(fn, "rb") as f: - out[p] = f.read() - callback.relative_update(1) - if isinstance(path, str) and len(paths) == 1 and recursive is False: - out = out[paths[0]] - return out - - def _open(self, path, mode="rb", **kwargs): - path = self._strip_protocol(path) - if "r" not in mode: - return LocalTempFile(self, path, mode=mode) - detail = self._check_file(path) - if detail: - detail, fn = detail - _, blocks = detail["fn"], detail["blocks"] - if blocks is True: - logger.debug("Opening local copy of %s" % path) - - # In order to support downstream filesystems to be able to - # infer the compression from the original filename, like - # the `TarFileSystem`, let's extend the `io.BufferedReader` - # fileobject protocol by adding a 
dedicated attribute - # `original`. - f = open(fn, mode) - f.original = detail.get("original") - return f - else: - raise ValueError( - "Attempt to open partially cached file %s" - "as a wholly cached file" % path - ) - else: - fn = self._make_local_details(path) - kwargs["mode"] = mode - - # call target filesystems open - self._mkcache() - if self.compression: - with self.fs._open(path, **kwargs) as f, open(fn, "wb") as f2: - if isinstance(f, AbstractBufferedFile): - # want no type of caching if just downloading whole thing - f.cache = BaseCache(0, f.cache.fetcher, f.size) - comp = ( - infer_compression(path) - if self.compression == "infer" - else self.compression - ) - f = compr[comp](f, mode="rb") - data = True - while data: - block = getattr(f, "blocksize", 5 * 2**20) - data = f.read(block) - f2.write(data) - else: - self.fs.get(path, fn) - self.save_cache() - return self._open(path, mode) - - -class SimpleCacheFileSystem(WholeFileCacheFileSystem): - """Caches whole remote files on first access - - This class is intended as a layer over any other file system, and - will make a local copy of each file accessed, so that all subsequent - reads are local. This implementation only copies whole files, and - does not keep any metadata about the download time or file details. - It is therefore safer to use in multi-threaded/concurrent situations. - - This is the only of the caching filesystems that supports write: you will - be given a real local open file, and upon close and commit, it will be - uploaded to the target filesystem; the writability or the target URL is - not checked until that time. - - """ - - protocol = "simplecache" - local_file = True - - def __init__(self, **kwargs): - kw = kwargs.copy() - for key in ["cache_check", "expiry_time", "check_files"]: - kw[key] = False - super().__init__(**kw) - for storage in self.storage: - if not os.path.exists(storage): - os.makedirs(storage, exist_ok=True) - self.cached_files = [{}] - - def _check_file(self, path): - self._check_cache() - sha = self.hash_name(path, self.same_names) - for storage in self.storage: - fn = os.path.join(storage, sha) - if os.path.exists(fn): - return fn - - def save_cache(self): - pass - - def load_cache(self): - pass - - def _open(self, path, mode="rb", **kwargs): - path = self._strip_protocol(path) - - if "r" not in mode: - return LocalTempFile(self, path, mode=mode) - fn = self._check_file(path) - if fn: - return open(fn, mode) - - sha = self.hash_name(path, self.same_names) - fn = os.path.join(self.storage[-1], sha) - logger.debug("Copying %s to local cache" % path) - kwargs["mode"] = mode - - self._mkcache() - if self.compression: - with self.fs._open(path, **kwargs) as f, open(fn, "wb") as f2: - if isinstance(f, AbstractBufferedFile): - # want no type of caching if just downloading whole thing - f.cache = BaseCache(0, f.cache.fetcher, f.size) - comp = ( - infer_compression(path) - if self.compression == "infer" - else self.compression - ) - f = compr[comp](f, mode="rb") - data = True - while data: - block = getattr(f, "blocksize", 5 * 2**20) - data = f.read(block) - f2.write(data) - else: - self.fs.get(path, fn) - return self._open(path, mode) - - -class LocalTempFile: - """A temporary local file, which will be uploaded on commit""" - - def __init__(self, fs, path, fn=None, mode="wb", autocommit=True, seek=0): - if fn: - self.fn = fn - self.fh = open(fn, mode) - else: - fd, self.fn = tempfile.mkstemp() - self.fh = open(fd, mode) - self.mode = mode - if seek: - self.fh.seek(seek) - self.path = path - 
self.fs = fs - self.closed = False - self.autocommit = autocommit - - def __reduce__(self): - # always open in rb+ to allow continuing writing at a location - return ( - LocalTempFile, - (self.fs, self.path, self.fn, "rb+", self.autocommit, self.tell()), - ) - - def __enter__(self): - return self.fh - - def __exit__(self, exc_type, exc_val, exc_tb): - self.close() - - def close(self): - if self.closed: - return - self.fh.close() - self.closed = True - if self.autocommit: - self.commit() - - def discard(self): - self.fh.close() - os.remove(self.fn) - - def commit(self): - self.fs.put(self.fn, self.path) - try: - os.remove(self.fn) - except (PermissionError, FileNotFoundError): - # file path may be held by new version of the file on windows - pass - - @property - def name(self): - return self.fn - - def __getattr__(self, item): - return getattr(self.fh, item) - - -def hash_name(path, same_name): - if same_name: - hash = os.path.basename(path) - else: - hash = hashlib.sha256(path.encode()).hexdigest() - return hash - - -@contextlib.contextmanager -def atomic_write(path, mode="wb"): - """ - A context manager that opens a temporary file next to `path` and, on exit, - replaces `path` with the temporary file, thereby updating `path` - atomically. - """ - fd, fn = tempfile.mkstemp( - dir=os.path.dirname(path), prefix=os.path.basename(path) + "-" - ) - try: - with open(fd, mode) as fp: - yield fp - except BaseException: - with contextlib.suppress(FileNotFoundError): - os.unlink(fn) - raise - else: - os.replace(fn, path) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/linkify_it/tlds.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/linkify_it/tlds.py deleted file mode 100644 index 7f8053ded999e6da51d64b54f6dbf2b77b26ac95..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/linkify_it/tlds.py +++ /dev/null @@ -1,1517 +0,0 @@ -"""TLDS - -Version 2020110600, Last Updated Fri Nov 6 07:07:02 2020 UTC - -References: - http://data.iana.org/TLD/tlds-alpha-by-domain.txt -""" -TLDS = [ - "AAA", - "AARP", - "ABARTH", - "ABB", - "ABBOTT", - "ABBVIE", - "ABC", - "ABLE", - "ABOGADO", - "ABUDHABI", - "AC", - "ACADEMY", - "ACCENTURE", - "ACCOUNTANT", - "ACCOUNTANTS", - "ACO", - "ACTOR", - "AD", - "ADAC", - "ADS", - "ADULT", - "AE", - "AEG", - "AERO", - "AETNA", - "AF", - "AFAMILYCOMPANY", - "AFL", - "AFRICA", - "AG", - "AGAKHAN", - "AGENCY", - "AI", - "AIG", - "AIRBUS", - "AIRFORCE", - "AIRTEL", - "AKDN", - "AL", - "ALFAROMEO", - "ALIBABA", - "ALIPAY", - "ALLFINANZ", - "ALLSTATE", - "ALLY", - "ALSACE", - "ALSTOM", - "AM", - "AMAZON", - "AMERICANEXPRESS", - "AMERICANFAMILY", - "AMEX", - "AMFAM", - "AMICA", - "AMSTERDAM", - "ANALYTICS", - "ANDROID", - "ANQUAN", - "ANZ", - "AO", - "AOL", - "APARTMENTS", - "APP", - "APPLE", - "AQ", - "AQUARELLE", - "AR", - "ARAB", - "ARAMCO", - "ARCHI", - "ARMY", - "ARPA", - "ART", - "ARTE", - "AS", - "ASDA", - "ASIA", - "ASSOCIATES", - "AT", - "ATHLETA", - "ATTORNEY", - "AU", - "AUCTION", - "AUDI", - "AUDIBLE", - "AUDIO", - "AUSPOST", - "AUTHOR", - "AUTO", - "AUTOS", - "AVIANCA", - "AW", - "AWS", - "AX", - "AXA", - "AZ", - "AZURE", - "BA", - "BABY", - "BAIDU", - "BANAMEX", - "BANANAREPUBLIC", - "BAND", - "BANK", - "BAR", - "BARCELONA", - "BARCLAYCARD", - "BARCLAYS", - "BAREFOOT", - "BARGAINS", - "BASEBALL", - "BASKETBALL", - "BAUHAUS", - "BAYERN", - "BB", - "BBC", - "BBT", - "BBVA", - "BCG", - "BCN", - "BD", - "BE", - "BEATS", - "BEAUTY", - 
"BEER", - "BENTLEY", - "BERLIN", - "BEST", - "BESTBUY", - "BET", - "BF", - "BG", - "BH", - "BHARTI", - "BI", - "BIBLE", - "BID", - "BIKE", - "BING", - "BINGO", - "BIO", - "BIZ", - "BJ", - "BLACK", - "BLACKFRIDAY", - "BLOCKBUSTER", - "BLOG", - "BLOOMBERG", - "BLUE", - "BM", - "BMS", - "BMW", - "BN", - "BNPPARIBAS", - "BO", - "BOATS", - "BOEHRINGER", - "BOFA", - "BOM", - "BOND", - "BOO", - "BOOK", - "BOOKING", - "BOSCH", - "BOSTIK", - "BOSTON", - "BOT", - "BOUTIQUE", - "BOX", - "BR", - "BRADESCO", - "BRIDGESTONE", - "BROADWAY", - "BROKER", - "BROTHER", - "BRUSSELS", - "BS", - "BT", - "BUDAPEST", - "BUGATTI", - "BUILD", - "BUILDERS", - "BUSINESS", - "BUY", - "BUZZ", - "BV", - "BW", - "BY", - "BZ", - "BZH", - "CA", - "CAB", - "CAFE", - "CAL", - "CALL", - "CALVINKLEIN", - "CAM", - "CAMERA", - "CAMP", - "CANCERRESEARCH", - "CANON", - "CAPETOWN", - "CAPITAL", - "CAPITALONE", - "CAR", - "CARAVAN", - "CARDS", - "CARE", - "CAREER", - "CAREERS", - "CARS", - "CASA", - "CASE", - "CASEIH", - "CASH", - "CASINO", - "CAT", - "CATERING", - "CATHOLIC", - "CBA", - "CBN", - "CBRE", - "CBS", - "CC", - "CD", - "CEB", - "CENTER", - "CEO", - "CERN", - "CF", - "CFA", - "CFD", - "CG", - "CH", - "CHANEL", - "CHANNEL", - "CHARITY", - "CHASE", - "CHAT", - "CHEAP", - "CHINTAI", - "CHRISTMAS", - "CHROME", - "CHURCH", - "CI", - "CIPRIANI", - "CIRCLE", - "CISCO", - "CITADEL", - "CITI", - "CITIC", - "CITY", - "CITYEATS", - "CK", - "CL", - "CLAIMS", - "CLEANING", - "CLICK", - "CLINIC", - "CLINIQUE", - "CLOTHING", - "CLOUD", - "CLUB", - "CLUBMED", - "CM", - "CN", - "CO", - "COACH", - "CODES", - "COFFEE", - "COLLEGE", - "COLOGNE", - "COM", - "COMCAST", - "COMMBANK", - "COMMUNITY", - "COMPANY", - "COMPARE", - "COMPUTER", - "COMSEC", - "CONDOS", - "CONSTRUCTION", - "CONSULTING", - "CONTACT", - "CONTRACTORS", - "COOKING", - "COOKINGCHANNEL", - "COOL", - "COOP", - "CORSICA", - "COUNTRY", - "COUPON", - "COUPONS", - "COURSES", - "CPA", - "CR", - "CREDIT", - "CREDITCARD", - "CREDITUNION", - "CRICKET", - "CROWN", - "CRS", - "CRUISE", - "CRUISES", - "CSC", - "CU", - "CUISINELLA", - "CV", - "CW", - "CX", - "CY", - "CYMRU", - "CYOU", - "CZ", - "DABUR", - "DAD", - "DANCE", - "DATA", - "DATE", - "DATING", - "DATSUN", - "DAY", - "DCLK", - "DDS", - "DE", - "DEAL", - "DEALER", - "DEALS", - "DEGREE", - "DELIVERY", - "DELL", - "DELOITTE", - "DELTA", - "DEMOCRAT", - "DENTAL", - "DENTIST", - "DESI", - "DESIGN", - "DEV", - "DHL", - "DIAMONDS", - "DIET", - "DIGITAL", - "DIRECT", - "DIRECTORY", - "DISCOUNT", - "DISCOVER", - "DISH", - "DIY", - "DJ", - "DK", - "DM", - "DNP", - "DO", - "DOCS", - "DOCTOR", - "DOG", - "DOMAINS", - "DOT", - "DOWNLOAD", - "DRIVE", - "DTV", - "DUBAI", - "DUCK", - "DUNLOP", - "DUPONT", - "DURBAN", - "DVAG", - "DVR", - "DZ", - "EARTH", - "EAT", - "EC", - "ECO", - "EDEKA", - "EDU", - "EDUCATION", - "EE", - "EG", - "EMAIL", - "EMERCK", - "ENERGY", - "ENGINEER", - "ENGINEERING", - "ENTERPRISES", - "EPSON", - "EQUIPMENT", - "ER", - "ERICSSON", - "ERNI", - "ES", - "ESQ", - "ESTATE", - "ET", - "ETISALAT", - "EU", - "EUROVISION", - "EUS", - "EVENTS", - "EXCHANGE", - "EXPERT", - "EXPOSED", - "EXPRESS", - "EXTRASPACE", - "FAGE", - "FAIL", - "FAIRWINDS", - "FAITH", - "FAMILY", - "FAN", - "FANS", - "FARM", - "FARMERS", - "FASHION", - "FAST", - "FEDEX", - "FEEDBACK", - "FERRARI", - "FERRERO", - "FI", - "FIAT", - "FIDELITY", - "FIDO", - "FILM", - "FINAL", - "FINANCE", - "FINANCIAL", - "FIRE", - "FIRESTONE", - "FIRMDALE", - "FISH", - "FISHING", - "FIT", - "FITNESS", - "FJ", - "FK", - "FLICKR", - "FLIGHTS", - "FLIR", - "FLORIST", - 
"FLOWERS", - "FLY", - "FM", - "FO", - "FOO", - "FOOD", - "FOODNETWORK", - "FOOTBALL", - "FORD", - "FOREX", - "FORSALE", - "FORUM", - "FOUNDATION", - "FOX", - "FR", - "FREE", - "FRESENIUS", - "FRL", - "FROGANS", - "FRONTDOOR", - "FRONTIER", - "FTR", - "FUJITSU", - "FUJIXEROX", - "FUN", - "FUND", - "FURNITURE", - "FUTBOL", - "FYI", - "GA", - "GAL", - "GALLERY", - "GALLO", - "GALLUP", - "GAME", - "GAMES", - "GAP", - "GARDEN", - "GAY", - "GB", - "GBIZ", - "GD", - "GDN", - "GE", - "GEA", - "GENT", - "GENTING", - "GEORGE", - "GF", - "GG", - "GGEE", - "GH", - "GI", - "GIFT", - "GIFTS", - "GIVES", - "GIVING", - "GL", - "GLADE", - "GLASS", - "GLE", - "GLOBAL", - "GLOBO", - "GM", - "GMAIL", - "GMBH", - "GMO", - "GMX", - "GN", - "GODADDY", - "GOLD", - "GOLDPOINT", - "GOLF", - "GOO", - "GOODYEAR", - "GOOG", - "GOOGLE", - "GOP", - "GOT", - "GOV", - "GP", - "GQ", - "GR", - "GRAINGER", - "GRAPHICS", - "GRATIS", - "GREEN", - "GRIPE", - "GROCERY", - "GROUP", - "GS", - "GT", - "GU", - "GUARDIAN", - "GUCCI", - "GUGE", - "GUIDE", - "GUITARS", - "GURU", - "GW", - "GY", - "HAIR", - "HAMBURG", - "HANGOUT", - "HAUS", - "HBO", - "HDFC", - "HDFCBANK", - "HEALTH", - "HEALTHCARE", - "HELP", - "HELSINKI", - "HERE", - "HERMES", - "HGTV", - "HIPHOP", - "HISAMITSU", - "HITACHI", - "HIV", - "HK", - "HKT", - "HM", - "HN", - "HOCKEY", - "HOLDINGS", - "HOLIDAY", - "HOMEDEPOT", - "HOMEGOODS", - "HOMES", - "HOMESENSE", - "HONDA", - "HORSE", - "HOSPITAL", - "HOST", - "HOSTING", - "HOT", - "HOTELES", - "HOTELS", - "HOTMAIL", - "HOUSE", - "HOW", - "HR", - "HSBC", - "HT", - "HU", - "HUGHES", - "HYATT", - "HYUNDAI", - "IBM", - "ICBC", - "ICE", - "ICU", - "ID", - "IE", - "IEEE", - "IFM", - "IKANO", - "IL", - "IM", - "IMAMAT", - "IMDB", - "IMMO", - "IMMOBILIEN", - "IN", - "INC", - "INDUSTRIES", - "INFINITI", - "INFO", - "ING", - "INK", - "INSTITUTE", - "INSURANCE", - "INSURE", - "INT", - "INTERNATIONAL", - "INTUIT", - "INVESTMENTS", - "IO", - "IPIRANGA", - "IQ", - "IR", - "IRISH", - "IS", - "ISMAILI", - "IST", - "ISTANBUL", - "IT", - "ITAU", - "ITV", - "IVECO", - "JAGUAR", - "JAVA", - "JCB", - "JCP", - "JE", - "JEEP", - "JETZT", - "JEWELRY", - "JIO", - "JLL", - "JM", - "JMP", - "JNJ", - "JO", - "JOBS", - "JOBURG", - "JOT", - "JOY", - "JP", - "JPMORGAN", - "JPRS", - "JUEGOS", - "JUNIPER", - "KAUFEN", - "KDDI", - "KE", - "KERRYHOTELS", - "KERRYLOGISTICS", - "KERRYPROPERTIES", - "KFH", - "KG", - "KH", - "KI", - "KIA", - "KIM", - "KINDER", - "KINDLE", - "KITCHEN", - "KIWI", - "KM", - "KN", - "KOELN", - "KOMATSU", - "KOSHER", - "KP", - "KPMG", - "KPN", - "KR", - "KRD", - "KRED", - "KUOKGROUP", - "KW", - "KY", - "KYOTO", - "KZ", - "LA", - "LACAIXA", - "LAMBORGHINI", - "LAMER", - "LANCASTER", - "LANCIA", - "LAND", - "LANDROVER", - "LANXESS", - "LASALLE", - "LAT", - "LATINO", - "LATROBE", - "LAW", - "LAWYER", - "LB", - "LC", - "LDS", - "LEASE", - "LECLERC", - "LEFRAK", - "LEGAL", - "LEGO", - "LEXUS", - "LGBT", - "LI", - "LIDL", - "LIFE", - "LIFEINSURANCE", - "LIFESTYLE", - "LIGHTING", - "LIKE", - "LILLY", - "LIMITED", - "LIMO", - "LINCOLN", - "LINDE", - "LINK", - "LIPSY", - "LIVE", - "LIVING", - "LIXIL", - "LK", - "LLC", - "LLP", - "LOAN", - "LOANS", - "LOCKER", - "LOCUS", - "LOFT", - "LOL", - "LONDON", - "LOTTE", - "LOTTO", - "LOVE", - "LPL", - "LPLFINANCIAL", - "LR", - "LS", - "LT", - "LTD", - "LTDA", - "LU", - "LUNDBECK", - "LUPIN", - "LUXE", - "LUXURY", - "LV", - "LY", - "MA", - "MACYS", - "MADRID", - "MAIF", - "MAISON", - "MAKEUP", - "MAN", - "MANAGEMENT", - "MANGO", - "MAP", - "MARKET", - "MARKETING", - "MARKETS", - "MARRIOTT", - 
"MARSHALLS", - "MASERATI", - "MATTEL", - "MBA", - "MC", - "MCKINSEY", - "MD", - "ME", - "MED", - "MEDIA", - "MEET", - "MELBOURNE", - "MEME", - "MEMORIAL", - "MEN", - "MENU", - "MERCKMSD", - "MG", - "MH", - "MIAMI", - "MICROSOFT", - "MIL", - "MINI", - "MINT", - "MIT", - "MITSUBISHI", - "MK", - "ML", - "MLB", - "MLS", - "MM", - "MMA", - "MN", - "MO", - "MOBI", - "MOBILE", - "MODA", - "MOE", - "MOI", - "MOM", - "MONASH", - "MONEY", - "MONSTER", - "MORMON", - "MORTGAGE", - "MOSCOW", - "MOTO", - "MOTORCYCLES", - "MOV", - "MOVIE", - "MP", - "MQ", - "MR", - "MS", - "MSD", - "MT", - "MTN", - "MTR", - "MU", - "MUSEUM", - "MUTUAL", - "MV", - "MW", - "MX", - "MY", - "MZ", - "NA", - "NAB", - "NAGOYA", - "NAME", - "NATIONWIDE", - "NATURA", - "NAVY", - "NBA", - "NC", - "NE", - "NEC", - "NET", - "NETBANK", - "NETFLIX", - "NETWORK", - "NEUSTAR", - "NEW", - "NEWHOLLAND", - "NEWS", - "NEXT", - "NEXTDIRECT", - "NEXUS", - "NF", - "NFL", - "NG", - "NGO", - "NHK", - "NI", - "NICO", - "NIKE", - "NIKON", - "NINJA", - "NISSAN", - "NISSAY", - "NL", - "NO", - "NOKIA", - "NORTHWESTERNMUTUAL", - "NORTON", - "NOW", - "NOWRUZ", - "NOWTV", - "NP", - "NR", - "NRA", - "NRW", - "NTT", - "NU", - "NYC", - "NZ", - "OBI", - "OBSERVER", - "OFF", - "OFFICE", - "OKINAWA", - "OLAYAN", - "OLAYANGROUP", - "OLDNAVY", - "OLLO", - "OM", - "OMEGA", - "ONE", - "ONG", - "ONL", - "ONLINE", - "ONYOURSIDE", - "OOO", - "OPEN", - "ORACLE", - "ORANGE", - "ORG", - "ORGANIC", - "ORIGINS", - "OSAKA", - "OTSUKA", - "OTT", - "OVH", - "PA", - "PAGE", - "PANASONIC", - "PARIS", - "PARS", - "PARTNERS", - "PARTS", - "PARTY", - "PASSAGENS", - "PAY", - "PCCW", - "PE", - "PET", - "PF", - "PFIZER", - "PG", - "PH", - "PHARMACY", - "PHD", - "PHILIPS", - "PHONE", - "PHOTO", - "PHOTOGRAPHY", - "PHOTOS", - "PHYSIO", - "PICS", - "PICTET", - "PICTURES", - "PID", - "PIN", - "PING", - "PINK", - "PIONEER", - "PIZZA", - "PK", - "PL", - "PLACE", - "PLAY", - "PLAYSTATION", - "PLUMBING", - "PLUS", - "PM", - "PN", - "PNC", - "POHL", - "POKER", - "POLITIE", - "PORN", - "POST", - "PR", - "PRAMERICA", - "PRAXI", - "PRESS", - "PRIME", - "PRO", - "PROD", - "PRODUCTIONS", - "PROF", - "PROGRESSIVE", - "PROMO", - "PROPERTIES", - "PROPERTY", - "PROTECTION", - "PRU", - "PRUDENTIAL", - "PS", - "PT", - "PUB", - "PW", - "PWC", - "PY", - "QA", - "QPON", - "QUEBEC", - "QUEST", - "QVC", - "RACING", - "RADIO", - "RAID", - "RE", - "READ", - "REALESTATE", - "REALTOR", - "REALTY", - "RECIPES", - "RED", - "REDSTONE", - "REDUMBRELLA", - "REHAB", - "REISE", - "REISEN", - "REIT", - "RELIANCE", - "REN", - "RENT", - "RENTALS", - "REPAIR", - "REPORT", - "REPUBLICAN", - "REST", - "RESTAURANT", - "REVIEW", - "REVIEWS", - "REXROTH", - "RICH", - "RICHARDLI", - "RICOH", - "RIL", - "RIO", - "RIP", - "RMIT", - "RO", - "ROCHER", - "ROCKS", - "RODEO", - "ROGERS", - "ROOM", - "RS", - "RSVP", - "RU", - "RUGBY", - "RUHR", - "RUN", - "RW", - "RWE", - "RYUKYU", - "SA", - "SAARLAND", - "SAFE", - "SAFETY", - "SAKURA", - "SALE", - "SALON", - "SAMSCLUB", - "SAMSUNG", - "SANDVIK", - "SANDVIKCOROMANT", - "SANOFI", - "SAP", - "SARL", - "SAS", - "SAVE", - "SAXO", - "SB", - "SBI", - "SBS", - "SC", - "SCA", - "SCB", - "SCHAEFFLER", - "SCHMIDT", - "SCHOLARSHIPS", - "SCHOOL", - "SCHULE", - "SCHWARZ", - "SCIENCE", - "SCJOHNSON", - "SCOT", - "SD", - "SE", - "SEARCH", - "SEAT", - "SECURE", - "SECURITY", - "SEEK", - "SELECT", - "SENER", - "SERVICES", - "SES", - "SEVEN", - "SEW", - "SEX", - "SEXY", - "SFR", - "SG", - "SH", - "SHANGRILA", - "SHARP", - "SHAW", - "SHELL", - "SHIA", - "SHIKSHA", - "SHOES", - "SHOP", - "SHOPPING", - 
"SHOUJI", - "SHOW", - "SHOWTIME", - "SHRIRAM", - "SI", - "SILK", - "SINA", - "SINGLES", - "SITE", - "SJ", - "SK", - "SKI", - "SKIN", - "SKY", - "SKYPE", - "SL", - "SLING", - "SM", - "SMART", - "SMILE", - "SN", - "SNCF", - "SO", - "SOCCER", - "SOCIAL", - "SOFTBANK", - "SOFTWARE", - "SOHU", - "SOLAR", - "SOLUTIONS", - "SONG", - "SONY", - "SOY", - "SPA", - "SPACE", - "SPORT", - "SPOT", - "SPREADBETTING", - "SR", - "SRL", - "SS", - "ST", - "STADA", - "STAPLES", - "STAR", - "STATEBANK", - "STATEFARM", - "STC", - "STCGROUP", - "STOCKHOLM", - "STORAGE", - "STORE", - "STREAM", - "STUDIO", - "STUDY", - "STYLE", - "SU", - "SUCKS", - "SUPPLIES", - "SUPPLY", - "SUPPORT", - "SURF", - "SURGERY", - "SUZUKI", - "SV", - "SWATCH", - "SWIFTCOVER", - "SWISS", - "SX", - "SY", - "SYDNEY", - "SYSTEMS", - "SZ", - "TAB", - "TAIPEI", - "TALK", - "TAOBAO", - "TARGET", - "TATAMOTORS", - "TATAR", - "TATTOO", - "TAX", - "TAXI", - "TC", - "TCI", - "TD", - "TDK", - "TEAM", - "TECH", - "TECHNOLOGY", - "TEL", - "TEMASEK", - "TENNIS", - "TEVA", - "TF", - "TG", - "TH", - "THD", - "THEATER", - "THEATRE", - "TIAA", - "TICKETS", - "TIENDA", - "TIFFANY", - "TIPS", - "TIRES", - "TIROL", - "TJ", - "TJMAXX", - "TJX", - "TK", - "TKMAXX", - "TL", - "TM", - "TMALL", - "TN", - "TO", - "TODAY", - "TOKYO", - "TOOLS", - "TOP", - "TORAY", - "TOSHIBA", - "TOTAL", - "TOURS", - "TOWN", - "TOYOTA", - "TOYS", - "TR", - "TRADE", - "TRADING", - "TRAINING", - "TRAVEL", - "TRAVELCHANNEL", - "TRAVELERS", - "TRAVELERSINSURANCE", - "TRUST", - "TRV", - "TT", - "TUBE", - "TUI", - "TUNES", - "TUSHU", - "TV", - "TVS", - "TW", - "TZ", - "UA", - "UBANK", - "UBS", - "UG", - "UK", - "UNICOM", - "UNIVERSITY", - "UNO", - "UOL", - "UPS", - "US", - "UY", - "UZ", - "VA", - "VACATIONS", - "VANA", - "VANGUARD", - "VC", - "VE", - "VEGAS", - "VENTURES", - "VERISIGN", - "VERSICHERUNG", - "VET", - "VG", - "VI", - "VIAJES", - "VIDEO", - "VIG", - "VIKING", - "VILLAS", - "VIN", - "VIP", - "VIRGIN", - "VISA", - "VISION", - "VIVA", - "VIVO", - "VLAANDEREN", - "VN", - "VODKA", - "VOLKSWAGEN", - "VOLVO", - "VOTE", - "VOTING", - "VOTO", - "VOYAGE", - "VU", - "VUELOS", - "WALES", - "WALMART", - "WALTER", - "WANG", - "WANGGOU", - "WATCH", - "WATCHES", - "WEATHER", - "WEATHERCHANNEL", - "WEBCAM", - "WEBER", - "WEBSITE", - "WED", - "WEDDING", - "WEIBO", - "WEIR", - "WF", - "WHOSWHO", - "WIEN", - "WIKI", - "WILLIAMHILL", - "WIN", - "WINDOWS", - "WINE", - "WINNERS", - "WME", - "WOLTERSKLUWER", - "WOODSIDE", - "WORK", - "WORKS", - "WORLD", - "WOW", - "WS", - "WTC", - "WTF", - "XBOX", - "XEROX", - "XFINITY", - "XIHUAN", - "XIN", - "XN--11B4C3D", - "XN--1CK2E1B", - "XN--1QQW23A", - "XN--2SCRJ9C", - "XN--30RR7Y", - "XN--3BST00M", - "XN--3DS443G", - "XN--3E0B707E", - "XN--3HCRJ9C", - "XN--3OQ18VL8PN36A", - "XN--3PXU8K", - "XN--42C2D9A", - "XN--45BR5CYL", - "XN--45BRJ9C", - "XN--45Q11C", - "XN--4GBRIM", - "XN--54B7FTA0CC", - "XN--55QW42G", - "XN--55QX5D", - "XN--5SU34J936BGSG", - "XN--5TZM5G", - "XN--6FRZ82G", - "XN--6QQ986B3XL", - "XN--80ADXHKS", - "XN--80AO21A", - "XN--80AQECDR1A", - "XN--80ASEHDB", - "XN--80ASWG", - "XN--8Y0A063A", - "XN--90A3AC", - "XN--90AE", - "XN--90AIS", - "XN--9DBQ2A", - "XN--9ET52U", - "XN--9KRT00A", - "XN--B4W605FERD", - "XN--BCK1B9A5DRE4C", - "XN--C1AVG", - "XN--C2BR7G", - "XN--CCK2B3B", - "XN--CCKWCXETD", - "XN--CG4BKI", - "XN--CLCHC0EA0B2G2A9GCD", - "XN--CZR694B", - "XN--CZRS0T", - "XN--CZRU2D", - "XN--D1ACJ3B", - "XN--D1ALF", - "XN--E1A4C", - "XN--ECKVDTC9D", - "XN--EFVY88H", - "XN--FCT429K", - "XN--FHBEI", - "XN--FIQ228C5HS", - "XN--FIQ64B", - "XN--FIQS8S", 
- "XN--FIQZ9S", - "XN--FJQ720A", - "XN--FLW351E", - "XN--FPCRJ9C3D", - "XN--FZC2C9E2C", - "XN--FZYS8D69UVGM", - "XN--G2XX48C", - "XN--GCKR3F0F", - "XN--GECRJ9C", - "XN--GK3AT1E", - "XN--H2BREG3EVE", - "XN--H2BRJ9C", - "XN--H2BRJ9C8C", - "XN--HXT814E", - "XN--I1B6B1A6A2E", - "XN--IMR513N", - "XN--IO0A7I", - "XN--J1AEF", - "XN--J1AMH", - "XN--J6W193G", - "XN--JLQ480N2RG", - "XN--JLQ61U9W7B", - "XN--JVR189M", - "XN--KCRX77D1X4A", - "XN--KPRW13D", - "XN--KPRY57D", - "XN--KPUT3I", - "XN--L1ACC", - "XN--LGBBAT1AD8J", - "XN--MGB9AWBF", - "XN--MGBA3A3EJT", - "XN--MGBA3A4F16A", - "XN--MGBA7C0BBN0A", - "XN--MGBAAKC7DVF", - "XN--MGBAAM7A8H", - "XN--MGBAB2BD", - "XN--MGBAH1A3HJKRD", - "XN--MGBAI9AZGQP6J", - "XN--MGBAYH7GPA", - "XN--MGBBH1A", - "XN--MGBBH1A71E", - "XN--MGBC0A9AZCG", - "XN--MGBCA7DZDO", - "XN--MGBCPQ6GPA1A", - "XN--MGBERP4A5D4AR", - "XN--MGBGU82A", - "XN--MGBI4ECEXP", - "XN--MGBPL2FH", - "XN--MGBT3DHD", - "XN--MGBTX2B", - "XN--MGBX4CD0AB", - "XN--MIX891F", - "XN--MK1BU44C", - "XN--MXTQ1M", - "XN--NGBC5AZD", - "XN--NGBE9E0A", - "XN--NGBRX", - "XN--NODE", - "XN--NQV7F", - "XN--NQV7FS00EMA", - "XN--NYQY26A", - "XN--O3CW4H", - "XN--OGBPF8FL", - "XN--OTU796D", - "XN--P1ACF", - "XN--P1AI", - "XN--PGBS0DH", - "XN--PSSY2U", - "XN--Q7CE6A", - "XN--Q9JYB4C", - "XN--QCKA1PMC", - "XN--QXA6A", - "XN--QXAM", - "XN--RHQV96G", - "XN--ROVU88B", - "XN--RVC1E0AM3E", - "XN--S9BRJ9C", - "XN--SES554G", - "XN--T60B56A", - "XN--TCKWE", - "XN--TIQ49XQYJ", - "XN--UNUP4Y", - "XN--VERMGENSBERATER-CTB", - "XN--VERMGENSBERATUNG-PWB", - "XN--VHQUV", - "XN--VUQ861B", - "XN--W4R85EL8FHU5DNRA", - "XN--W4RS40L", - "XN--WGBH1C", - "XN--WGBL6A", - "XN--XHQ521B", - "XN--XKC2AL3HYE2A", - "XN--XKC2DL3A5EE0H", - "XN--Y9A3AQ", - "XN--YFRO4I67O", - "XN--YGBI2AMMX", - "XN--ZFR164B", - "XXX", - "XYZ", - "YACHTS", - "YAHOO", - "YAMAXUN", - "YANDEX", - "YE", - "YODOBASHI", - "YOGA", - "YOKOHAMA", - "YOU", - "YOUTUBE", - "YT", - "YUN", - "ZA", - "ZAPPOS", - "ZARA", - "ZERO", - "ZIP", - "ZM", - "ZONE", - "ZUERICH", - "ZW", -] diff --git a/spaces/declare-lab/tango/diffusers/CONTRIBUTING.md b/spaces/declare-lab/tango/diffusers/CONTRIBUTING.md deleted file mode 100644 index e9aa10a871d3afff3dbb9426db05baf6a0be3817..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/CONTRIBUTING.md +++ /dev/null @@ -1,498 +0,0 @@ - - -# How to contribute to Diffusers 🧨 - -We ❤️ contributions from the open-source community! Everyone is welcome, and all types of participation –not just code– are valued and appreciated. Answering questions, helping others, reaching out, and improving the documentation are all immensely valuable to the community, so don't be afraid and get involved if you're up for it! - -Everyone is encouraged to start by saying 👋 in our public Discord channel. We discuss the latest trends in diffusion models, ask questions, show off personal projects, help each other with contributions, or just hang out ☕. Join us on Discord - -Whichever way you choose to contribute, we strive to be part of an open, welcoming, and kind community. Please, read our [code of conduct](https://github.com/huggingface/diffusers/blob/main/CODE_OF_CONDUCT.md) and be mindful to respect it during your interactions. We also recommend you become familiar with the [ethical guidelines](https://huggingface.co/docs/diffusers/conceptual/ethical_guidelines) that guide our project and ask you to adhere to the same principles of transparency and responsibility. 
- -We enormously value feedback from the community, so please do not be afraid to speak up if you believe you have valuable feedback that can help improve the library - every message, comment, issue, and pull request (PR) is read and considered. - -## Overview - -You can contribute in many ways ranging from answering questions on issues to adding new diffusion models to -the core library. - -In the following, we give an overview of different ways to contribute, ranked by difficulty in ascending order. All of them are valuable to the community. - -* 1. Asking and answering questions on [the Diffusers discussion forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers) or on [Discord](https://discord.gg/G7tWnz98XR). -* 2. Opening new issues on [the GitHub Issues tab](https://github.com/huggingface/diffusers/issues/new/choose) -* 3. Answering issues on [the GitHub Issues tab](https://github.com/huggingface/diffusers/issues) -* 4. Fix a simple issue, marked by the "Good first issue" label, see [here](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22). -* 5. Contribute to the [documentation](https://github.com/huggingface/diffusers/tree/main/docs/source). -* 6. Contribute a [Community Pipeline](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3Acommunity-examples) -* 7. Contribute to the [examples](https://github.com/huggingface/diffusers/tree/main/examples). -* 8. Fix a more difficult issue, marked by the "Good second issue" label, see [here](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22Good+second+issue%22). -* 9. Add a new pipeline, model, or scheduler, see ["New Pipeline/Model"](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22) and ["New scheduler"](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22) issues. For this contribution, please have a look at [Design Philosophy](https://github.com/huggingface/diffusers/blob/main/PHILOSOPHY.md). - -As said before, **all contributions are valuable to the community**. -In the following, we will explain each contribution a bit more in detail. - -For all contributions 4.-9. you will need to open a PR. It is explained in detail how to do so in [Opening a pull requst](#how-to-open-a-pr) - -### 1. Asking and answering questions on the Diffusers discussion forum or on the Diffusers Discord - -Any question or comment related to the Diffusers library can be asked on the [discussion forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/) or on [Discord](https://discord.gg/G7tWnz98XR). Such questions and comments include (but are not limited to): -- Reports of training or inference experiments in an attempt to share knowledge -- Presentation of personal projects -- Questions to non-official training examples -- Project proposals -- General feedback -- Paper summaries -- Asking for help on personal projects that build on top of the Diffusers library -- General questions -- Ethical questions regarding diffusion models -- ... - -Every question that is asked on the forum or on Discord actively encourages the community to publicly -share knowledge and might very well help a beginner in the future that has the same question you're -having. Please do pose any questions you might have. 
-In the same spirit, you are of immense help to the community by answering such questions because this way you are publicly documenting knowledge for everybody to learn from. - -**Please** keep in mind that the more effort you put into asking or answering a question, the higher -the quality of the publicly documented knowledge. In the same way, well-posed and well-answered questions create a high-quality knowledge database accessible to everybody, while badly posed questions or answers reduce the overall quality of the public knowledge database. -In short, a high quality question or answer is *precise*, *concise*, *relevant*, *easy-to-understand*, *accesible*, and *well-formated/well-posed*. For more information, please have a look through the [How to write a good issue](#how-to-write-a-good-issue) section. - -**NOTE about channels**: -[*The forum*](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63) is much better indexed by search engines, such as Google. Posts are ranked by popularity rather than chronologically. Hence, it's easier to look up questions and answers that we posted some time ago. -In addition, questions and answers posted in the forum can easily be linked to. -In contrast, *Discord* has a chat-like format that invites fast back-and-forth communication. -While it will most likely take less time for you to get an answer to your question on Discord, your -question won't be visible anymore over time. Also, it's much harder to find information that was posted a while back on Discord. We therefore strongly recommend using the forum for high-quality questions and answers in an attempt to create long-lasting knowledge for the community. If discussions on Discord lead to very interesting answers and conclusions, we recommend posting the results on the forum to make the information more available for future readers. - -### 2. Opening new issues on the GitHub issues tab - -The 🧨 Diffusers library is robust and reliable thanks to the users who notify us of -the problems they encounter. So thank you for reporting an issue. - -Remember, GitHub issues are reserved for technical questions directly related to the Diffusers library, bug reports, feature requests, or feedback on the library design. - -In a nutshell, this means that everything that is **not** related to the **code of the Diffusers library** (including the documentation) should **not** be asked on GitHub, but rather on either the [forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63) or [Discord](https://discord.gg/G7tWnz98XR). - -**Please consider the following guidelines when opening a new issue**: -- Make sure you have searched whether your issue has already been asked before (use the search bar on GitHub under Issues). -- Please never report a new issue on another (related) issue. If another issue is highly related, please -open a new issue nevertheless and link to the related issue. -- Make sure your issue is written in English. Please use one of the great, free online translation services, such as [DeepL](https://www.deepl.com/translator) to translate from your native language to English if you are not comfortable in English. -- Check whether your issue might be solved by updating to the newest Diffusers version. Before posting your issue, please make sure that `python -c "import diffusers; print(diffusers.__version__)"` is higher or matches the latest Diffusers version. 
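For example, a minimal sketch of that version check (comparing the result against the latest PyPI release is left to the reader):

```python
# Hedged sketch: print the locally installed Diffusers version before filing an issue.
import diffusers

print(diffusers.__version__)  # compare against the latest release on PyPI
```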
-- Remember that the more effort you put into opening a new issue, the higher the quality of your answer will be and the better the overall quality of the Diffusers issues. - -New issues usually include the following. - -#### 2.1. Reproducible, minimal bug reports. - -A bug report should always have a reproducible code snippet and be as minimal and concise as possible. -This means in more detail: -- Narrow the bug down as much as you can, **do not just dump your whole code file** -- Format your code -- Do not include any external libraries except for Diffusers depending on them. -- **Always** provide all necessary information about your environment; for this, you can run: `diffusers-cli env` in your shell and copy-paste the displayed information to the issue. -- Explain the issue. If the reader doesn't know what the issue is and why it is an issue, she cannot solve it. -- **Always** make sure the reader can reproduce your issue with as little effort as possible. If your code snippet cannot be run because of missing libraries or undefined variables, the reader cannot help you. Make sure your reproducible code snippet is as minimal as possible and can be copy-pasted into a simple Python shell. -- If in order to reproduce your issue a model and/or dataset is required, make sure the reader has access to that model or dataset. You can always upload your model or dataset to the [Hub](https://huggingface.co) to make it easily downloadable. Try to keep your model and dataset as small as possible, to make the reproduction of your issue as effortless as possible. - -For more information, please have a look through the [How to write a good issue](#how-to-write-a-good-issue) section. - -You can open a bug report [here](https://github.com/huggingface/diffusers/issues/new/choose). - -#### 2.2. Feature requests. - -A world-class feature request addresses the following points: - -1. Motivation first: -* Is it related to a problem/frustration with the library? If so, please explain -why. Providing a code snippet that demonstrates the problem is best. -* Is it related to something you would need for a project? We'd love to hear -about it! -* Is it something you worked on and think could benefit the community? -Awesome! Tell us what problem it solved for you. -2. Write a *full paragraph* describing the feature; -3. Provide a **code snippet** that demonstrates its future use; -4. In case this is related to a paper, please attach a link; -5. Attach any additional information (drawings, screenshots, etc.) you think may help. - -You can open a feature request [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feature_request.md&title=). - -#### 2.3 Feedback. - -Feedback about the library design and why it is good or not good helps the core maintainers immensely to build a user-friendly library. To understand the philosophy behind the current design philosophy, please have a look [here](https://huggingface.co/docs/diffusers/conceptual/philosophy). If you feel like a certain design choice does not fit with the current design philosophy, please explain why and how it should be changed. If a certain design choice follows the design philosophy too much, hence restricting use cases, explain why and how it should be changed. -If a certain design choice is very useful for you, please also leave a note as this is great feedback for future design decisions. 
- -You can open an issue about feedback [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feedback.md&title=). - -#### 2.4 Technical questions. - -Technical questions are mainly about why certain code of the library was written in a certain way, or what a certain part of the code does. Please make sure to link to the code in question and please provide detail on -why this part of the code is difficult to understand. - -You can open an issue about a technical question [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=bug&template=bug-report.yml). - -#### 2.5 Proposal to add a new model, scheduler, or pipeline. - -If the diffusion model community released a new model, pipeline, or scheduler that you would like to see in the Diffusers library, please provide the following information: - -* Short description of the diffusion pipeline, model, or scheduler and link to the paper or public release. -* Link to any of its open-source implementation. -* Link to the model weights if they are available. - -If you are willing to contribute to the model yourself, let us know so we can best guide you. Also, don't forget -to tag the original author of the component (model, scheduler, pipeline, etc.) by GitHub handle if you can find it. - -You can open a request for a model/pipeline/scheduler [here](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=New+model%2Fpipeline%2Fscheduler&template=new-model-addition.yml). - -### 3. Answering issues on the GitHub issues tab - -Answering issues on GitHub might require some technical knowledge of Diffusers, but we encourage everybody to give it a try even if you are not 100% certain that your answer is correct. -Some tips to give a high-quality answer to an issue: -- Be as concise and minimal as possible -- Stay on topic. An answer to the issue should concern the issue and only the issue. -- Provide links to code, papers, or other sources that prove or encourage your point. -- Answer in code. If a simple code snippet is the answer to the issue or shows how the issue can be solved, please provide a fully reproducible code snippet. - -Also, many issues tend to be simply off-topic, duplicates of other issues, or irrelevant. It is of great -help to the maintainers if you can answer such issues, encouraging the author of the issue to be -more precise, provide the link to a duplicated issue or redirect them to [the forum](https://discuss.huggingface.co/c/discussion-related-to-httpsgithubcomhuggingfacediffusers/63) or [Discord](https://discord.gg/G7tWnz98XR) - -If you have verified that the issued bug report is correct and requires a correction in the source code, -please have a look at the next sections. - -For all of the following contributions, you will need to open a PR. It is explained in detail how to do so in the [Opening a pull requst](#how-to-open-a-pr) section. - -### 4. Fixing a "Good first issue" - -*Good first issues* are marked by the [Good first issue](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22) label. Usually, the issue already -explains how a potential solution should look so that it is easier to fix. -If the issue hasn't been closed and you would like to try to fix this issue, you can just leave a message "I would like to try this issue.". There are usually three scenarios: -- a.) The issue description already proposes a fix. 
In this case and if the solution makes sense to you, you can open a PR or draft PR to fix it. -- b.) The issue description does not propose a fix. In this case, you can ask what a proposed fix could look like and someone from the Diffusers team should answer shortly. If you have a good idea of how to fix it, feel free to directly open a PR. -- c.) There is already an open PR to fix the issue, but the issue hasn't been closed yet. If the PR has gone stale, you can simply open a new PR and link to the stale PR. PRs often go stale if the original contributor who wanted to fix the issue suddenly cannot find the time anymore to proceed. This often happens in open-source and is very normal. In this case, the community will be very happy if you give it a new try and leverage the knowledge of the existing PR. If there is already a PR and it is active, you can help the author by giving suggestions, reviewing the PR or even asking whether you can contribute to the PR. - - -### 5. Contribute to the documentation - -A good library **always** has good documentation! The official documentation is often one of the first points of contact for new users of the library, and therefore contributing to the documentation is a **highly -valuable contribution**. - -Contributing to the library can have many forms: - -- Correcting spelling or grammatical errors. -- Correct incorrect formatting of the docstring. If you see that the official documentation is weirdly displayed or a link is broken, we are very happy if you take some time to correct it. -- Correct the shape or dimensions of a docstring input or output tensor. -- Clarify documentation that is hard to understand or incorrect. -- Update outdated code examples. -- Translating the documentation to another language. - -Anything displayed on [the official Diffusers doc page](https://huggingface.co/docs/diffusers/index) is part of the official documentation and can be corrected, adjusted in the respective [documentation source](https://github.com/huggingface/diffusers/tree/main/docs/source). - -Please have a look at [this page](https://github.com/huggingface/diffusers/tree/main/docs) on how to verify changes made to the documentation locally. - - -### 6. Contribute a community pipeline - -[Pipelines](https://huggingface.co/docs/diffusers/api/pipelines/overview) are usually the first point of contact between the Diffusers library and the user. -Pipelines are examples of how to use Diffusers [models](https://huggingface.co/docs/diffusers/api/models) and [schedulers](https://huggingface.co/docs/diffusers/api/schedulers/overview). -We support two types of pipelines: - -- Official Pipelines -- Community Pipelines - -Both official and community pipelines follow the same design and consist of the same type of components. - -Official pipelines are tested and maintained by the core maintainers of Diffusers. Their code -resides in [src/diffusers/pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines). -In contrast, community pipelines are contributed and maintained purely by the **community** and are **not** tested. -They reside in [examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community) and while they can be accessed via the [PyPI diffusers package](https://pypi.org/project/diffusers/), their code is not part of the PyPI distribution. 
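For instance, a community pipeline is usually consumed by passing its file name to the `custom_pipeline` argument of `DiffusionPipeline.from_pretrained`. The sketch below is only illustrative: the `lpw_stable_diffusion` pipeline name and the Stable Diffusion checkpoint are example choices, not requirements.

```python
import torch
from diffusers import DiffusionPipeline

# Load a community pipeline by name: the pipeline *code* comes from the community
# folder, while the *weights* come from a regular checkpoint on the Hub.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",        # illustrative base checkpoint
    custom_pipeline="lpw_stable_diffusion",  # illustrative community pipeline name
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# The community pipeline is then called like any other pipeline.
image = pipe("an astronaut riding a horse on the moon").images[0]
```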
- -The reason for the distinction is that the core maintainers of the Diffusers library cannot maintain and test all -possible ways diffusion models can be used for inference, but some of them may be of interest to the community. -Officially released diffusion pipelines, -such as Stable Diffusion are added to the core src/diffusers/pipelines package which ensures -high quality of maintenance, no backward-breaking code changes, and testing. -More bleeding edge pipelines should be added as community pipelines. If usage for a community pipeline is high, the pipeline can be moved to the official pipelines upon request from the community. This is one of the ways we strive to be a community-driven library. - -To add a community pipeline, one should add a .py file to [examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community) and adapt the [examples/community/README.md](https://github.com/huggingface/diffusers/tree/main/examples/community/README.md) to include an example of the new pipeline. - -An example can be seen [here](https://github.com/huggingface/diffusers/pull/2400). - -Community pipeline PRs are only checked at a superficial level and ideally they should be maintained by their original authors. - -Contributing a community pipeline is a great way to understand how Diffusers models and schedulers work. Having contributed a community pipeline is usually the first stepping stone to contributing an official pipeline to the -core package. - -### 7. Contribute to training examples - -Diffusers examples are a collection of training scripts that reside in [examples](https://github.com/huggingface/diffusers/tree/main/examples). - -We support two types of training examples: - -- Official training examples -- Research training examples - -Research training examples are located in [examples/research_projects](https://github.com/huggingface/diffusers/tree/main/examples/research_projects) whereas official training examples include all folders under [examples](https://github.com/huggingface/diffusers/tree/main/examples) except the `research_projects` and `community` folders. -The official training examples are maintained by the Diffusers' core maintainers whereas the research training examples are maintained by the community. -This is because of the same reasons put forward in [6. Contribute a community pipeline](#contribute-a-community-pipeline) for official pipelines vs. community pipelines: It is not feasible for the core maintainers to maintain all possible training methods for diffusion models. -If the Diffusers core maintainers and the community consider a certain training paradigm to be too experimental or not popular enough, the corresponding training code should be put in the `research_projects` folder and maintained by the author. - -Both official training and research examples consist of a directory that contains one or more training scripts, a requirements.txt file, and a README.md file. In order for the user to make use of the -training examples, it is required to clone the repository: - -``` -git clone https://github.com/huggingface/diffusers -``` - -as well as to install all additional dependencies required for training: - -``` -pip install -r /examples//requirements.txt -``` - -Therefore when adding an example, the `requirements.txt` file shall define all pip dependencies required for your training example so that once all those are installed, the user can run the example's training script. 
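Concretely, once those dependencies are installed, the user should be able to launch the example's training script directly from the command line. A rough, purely illustrative skeleton of such a script is shown below; the file name, arguments, and placeholder logic are hypothetical and do not correspond to an actual Diffusers example.

```python
# Hypothetical single-file training example: parse command-line options, then run training.
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description="Minimal training-example skeleton.")
    parser.add_argument("--pretrained_model_name_or_path", type=str, required=True)
    parser.add_argument("--train_data_dir", type=str, required=True)
    parser.add_argument("--learning_rate", type=float, default=1e-5)
    parser.add_argument("--max_train_steps", type=int, default=1000)
    parser.add_argument("--output_dir", type=str, default="model-output")
    return parser.parse_args()


def main():
    args = parse_args()
    # In a real example: load the model and scheduler, build the dataloader from
    # args.train_data_dir, run the training loop for args.max_train_steps,
    # and save the result to args.output_dir.
    print(f"Would train for {args.max_train_steps} steps and save to {args.output_dir}")


if __name__ == "__main__":
    main()
```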
See, for example, the [DreamBooth `requirements.txt` file](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/requirements.txt). - -Training examples of the Diffusers library should adhere to the following philosophy: -- All the code necessary to run the examples should be found in a single Python file -- One should be able to run the example from the command line with `python .py --args` -- Examples should be kept simple and serve as **an example** on how to use Diffusers for training. The purpose of example scripts is **not** to create state-of-the-art diffusion models, but rather to reproduce known training schemes without adding too much custom logic. As a byproduct of this point, our examples also strive to serve as good educational materials. - -To contribute an example, it is highly recommended to look at already existing examples such as [dreambooth](https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/train_dreambooth.py) to get an idea of how they should look like. -We strongly advise contributors to make use of the [Accelerate library](https://github.com/huggingface/accelerate) as it's tightly integrated -with Diffusers. -Once an example script works, please make sure to add a comprehensive `README.md` that states how to use the example exactly. This README should include: -- An example command on how to run the example script as shown [here e.g.](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth#running-locally-with-pytorch). -- A link to some training results (logs, models, ...) that show what the user can expect as shown [here e.g.](https://api.wandb.ai/report/patrickvonplaten/xm6cd5q5). -- If you are adding a non-official/research training example, **please don't forget** to add a sentence that you are maintaining this training example which includes your git handle as shown [here](https://github.com/huggingface/diffusers/tree/main/examples/research_projects/intel_opts#diffusers-examples-with-intel-optimizations). - -If you are contributing to the official training examples, please also make sure to add a test to [examples/test_examples.py](https://github.com/huggingface/diffusers/blob/main/examples/test_examples.py). This is not necessary for non-official training examples. - -### 8. Fixing a "Good second issue" - -*Good second issues* are marked by the [Good second issue](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22Good+second+issue%22) label. Good second issues are -usually more complicated to solve than [Good first issues](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22). -The issue description usually gives less guidance on how to fix the issue and requires -a decent understanding of the library by the interested contributor. -If you are interested in tackling a second good issue, feel free to open a PR to fix it and link the PR to the issue. If you see that a PR has already been opened for this issue but did not get merged, have a look to understand why it wasn't merged and try to open an improved PR. -Good second issues are usually more difficult to get merged compared to good first issues, so don't hesitate to ask for help from the core maintainers. If your PR is almost finished the core maintainers can also jump into your PR and commit to it in order to get it merged. - -### 9. Adding pipelines, models, schedulers - -Pipelines, models, and schedulers are the most important pieces of the Diffusers library. 
-They provide easy access to state-of-the-art diffusion technologies and thus allow the community to -build powerful generative AI applications. - -By adding a new model, pipeline, or scheduler you might enable a new powerful use case for any of the user interfaces relying on Diffusers, which can be of immense value for the whole generative AI ecosystem. - -Diffusers has a couple of open feature requests for all three components - feel free to look through them -if you don't know yet which specific component you would like to add: -- [Model or pipeline](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+pipeline%2Fmodel%22) -- [Scheduler](https://github.com/huggingface/diffusers/issues?q=is%3Aopen+is%3Aissue+label%3A%22New+scheduler%22) - -Before adding any of the three components, it is strongly recommended that you give the [Philosophy guide](https://huggingface.co/docs/diffusers/conceptual/philosophy) a read to better understand the design of any of the three components. Please be aware that -we cannot merge model, scheduler, or pipeline additions that strongly diverge from our design philosophy -as it will lead to API inconsistencies. If you fundamentally disagree with a design choice, please -open a [Feedback issue](https://github.com/huggingface/diffusers/issues/new?assignees=&labels=&template=feedback.md&title=) instead so that it can be discussed whether a certain design -pattern/design choice shall be changed everywhere in the library and whether we shall update our design philosophy. Consistency across the library is very important for us. - -Please make sure to add links to the original codebase/paper to the PR and ideally also ping the -original author directly on the PR so that they can follow the progress and potentially help with questions. - -If you are unsure or stuck in the PR, don't hesitate to leave a message to ask for a first review or help. - -## How to write a good issue - -**The better your issue is written, the higher the chances that it will be quickly resolved.** - -1. Make sure that you've used the correct template for your issue. You can pick between *Bug Report*, *Feature Request*, *Feedback about API Design*, *New model/pipeline/scheduler addition*, *Forum*, or a blank issue. Make sure to pick the correct one when opening [a new issue](https://github.com/huggingface/diffusers/issues/new/choose). -2. **Be precise**: Give your issue a fitting title. Try to formulate your issue description as simply as possible. The more precise you are when submitting an issue, the less time it takes to understand the issue and potentially solve it. Make sure to open an issue for one issue only and not for multiple issues. If you found multiple issues, simply open multiple issues. If your issue is a bug, try to be as precise as possible about what bug it is - you should not just write "Error in diffusers". -3. **Reproducibility**: No reproducible code snippet == no solution. If you encounter a bug, maintainers **have to be able to reproduce** it. Make sure that you include a code snippet that can be copy-pasted into a Python interpreter to reproduce the issue. Make sure that your code snippet works, *i.e.* that there are no missing imports or missing links to images, ... Your issue should contain an error message **and** a code snippet that can be copy-pasted without any changes to reproduce the exact same error message.
If your issue is using local model weights or local data that cannot be accessed by the reader, the issue cannot be solved. If you cannot share your data or model, try to make a dummy model or dummy data. -4. **Minimalistic**: Try to help the reader as much as you can to understand the issue as quickly as possible by staying as concise as possible. Remove all code / all information that is irrelevant to the issue. If you have found a bug, try to create the easiest code example you can to demonstrate your issue, do not just dump your whole workflow into the issue as soon as you have found a bug. E.g., if you train a model and get an error at some point during the training, you should first try to understand what part of the training code is responsible for the error and try to reproduce it with a couple of lines. Try to use dummy data instead of full datasets. -5. Add links. If you are referring to a certain naming, method, or model make sure to provide a link so that the reader can better understand what you mean. If you are referring to a specific PR or issue, make sure to link it to your issue. Do not assume that the reader knows what you are talking about. The more links you add to your issue the better. -6. Formatting. Make sure to nicely format your issue by formatting code into Python code syntax, and error messages into normal code syntax. See the [official GitHub formatting docs](https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax) for more information. -7. Think of your issue not as a ticket to be solved, but rather as a beautiful entry to a well-written encyclopedia. Every added issue is a contribution to publicly available knowledge. By adding a nicely written issue you not only make it easier for maintainers to solve your issue, but you are helping the whole community to better understand a certain aspect of the library. - -## How to write a good PR - -1. Be a chameleon. Understand existing design patterns and syntax and make sure your code additions flow seamlessly into the existing code base. Pull requests that significantly diverge from existing design patterns or user interfaces will not be merged. -2. Be laser focused. A pull request should solve one problem and one problem only. Make sure to not fall into the trap of "also fixing another problem while we're adding it". It is much more difficult to review pull requests that solve multiple, unrelated problems at once. -3. If helpful, try to add a code snippet that displays an example of how your addition can be used. -4. The title of your pull request should be a summary of its contribution. -5. If your pull request addresses an issue, please mention the issue number in -the pull request description to make sure they are linked (and people -consulting the issue know you are working on it); -6. To indicate a work in progress please prefix the title with `[WIP]`. These -are useful to avoid duplicated work, and to differentiate it from PRs ready -to be merged; -7. Try to formulate and format your text as explained in [How to write a good issue](#how-to-write-a-good-issue). -8. Make sure existing tests pass; -9. Add high-coverage tests. No quality testing = no merge. -- If you are adding new `@slow` tests, make sure they pass using -`RUN_SLOW=1 python -m pytest tests/test_my_new_model.py`. -CircleCI does not run the slow tests, but GitHub actions does every night! -10. 
All public methods must have informative docstrings that work nicely with markdown. See `[pipeline_latent_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py)` for an example. -11. Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted `dataset` like -[`hf-internal-testing`](https://huggingface.co/hf-internal-testing) or [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images) to place these files. -If an external contribution, feel free to add the images to your PR and ask a Hugging Face member to migrate your images -to this dataset. - -## How to open a PR - -Before writing code, we strongly advise you to search through the existing PRs or -issues to make sure that nobody is already working on the same thing. If you are -unsure, it is always a good idea to open an issue to get some feedback. - -You will need basic `git` proficiency to be able to contribute to -🧨 Diffusers. `git` is not the easiest tool to use but it has the greatest -manual. Type `git --help` in a shell and enjoy. If you prefer books, [Pro -Git](https://git-scm.com/book/en/v2) is a very good reference. - -Follow these steps to start contributing ([supported Python versions](https://github.com/huggingface/diffusers/blob/main/setup.py#L244)): - -1. Fork the [repository](https://github.com/huggingface/diffusers) by -clicking on the 'Fork' button on the repository's page. This creates a copy of the code -under your GitHub user account. - -2. Clone your fork to your local disk, and add the base repository as a remote: - - ```bash - $ git clone git@github.com:/diffusers.git - $ cd diffusers - $ git remote add upstream https://github.com/huggingface/diffusers.git - ``` - -3. Create a new branch to hold your development changes: - - ```bash - $ git checkout -b a-descriptive-name-for-my-changes - ``` - -**Do not** work on the `main` branch. - -4. Set up a development environment by running the following command in a virtual environment: - - ```bash - $ pip install -e ".[dev]" - ``` - -If you have already cloned the repo, you might need to `git pull` to get the most recent changes in the -library. - -5. Develop the features on your branch. - -As you work on the features, you should make sure that the test suite -passes. You should run the tests impacted by your changes like this: - - ```bash - $ pytest tests/.py - ``` - -You can also run the full suite with the following command, but it takes -a beefy machine to produce a result in a decent amount of time now that -Diffusers has grown a lot. Here is the command for it: - - ```bash - $ make test - ``` - -🧨 Diffusers relies on `black` and `isort` to format its source code -consistently. After you make changes, apply automatic style corrections and code verifications -that can't be automated in one go with: - - ```bash - $ make style - ``` - -🧨 Diffusers also uses `ruff` and a few custom scripts to check for coding mistakes. 
Quality -control runs in CI, however, you can also run the same checks with: - - ```bash - $ make quality - ``` - -Once you're happy with your changes, add changed files using `git add` and -make a commit with `git commit` to record your changes locally: - - ```bash - $ git add modified_file.py - $ git commit - ``` - -It is a good idea to sync your copy of the code with the original -repository regularly. This way you can quickly account for changes: - - ```bash - $ git pull upstream main - ``` - -Push the changes to your account using: - - ```bash - $ git push -u origin a-descriptive-name-for-my-changes - ``` - -6. Once you are satisfied, go to the -webpage of your fork on GitHub. Click on 'Pull request' to send your changes -to the project maintainers for review. - -7. It's ok if maintainers ask you for changes. It happens to core contributors -too! So everyone can see the changes in the Pull request, work in your local -branch and push the changes to your fork. They will automatically appear in -the pull request. - -### Tests - -An extensive test suite is included to test the library behavior and several examples. Library tests can be found in -the [tests folder](https://github.com/huggingface/diffusers/tree/main/tests). - -We like `pytest` and `pytest-xdist` because it's faster. From the root of the -repository, here's how to run tests with `pytest` for the library: - -```bash -$ python -m pytest -n auto --dist=loadfile -s -v ./tests/ -``` - -In fact, that's how `make test` is implemented! - -You can specify a smaller set of tests in order to test only the feature -you're working on. - -By default, slow tests are skipped. Set the `RUN_SLOW` environment variable to -`yes` to run them. This will download many gigabytes of models — make sure you -have enough disk space and a good Internet connection, or a lot of patience! - -```bash -$ RUN_SLOW=yes python -m pytest -n auto --dist=loadfile -s -v ./tests/ -``` - -`unittest` is fully supported, here's how to run tests with it: - -```bash -$ python -m unittest discover -s tests -t . -v -$ python -m unittest discover -s examples -t examples -v -``` - -### Syncing forked main with upstream (HuggingFace) main - -To avoid pinging the upstream repository which adds reference notes to each upstream PR and sends unnecessary notifications to the developers involved in these PRs, -when syncing the main branch of a forked repository, please, follow these steps: -1. When possible, avoid syncing with the upstream using a branch and PR on the forked repository. Instead, merge directly into the forked main. -2. If a PR is absolutely necessary, use the following steps after checking out your branch: -``` -$ git checkout -b your-branch-for-syncing -$ git pull --squash --no-commit upstream main -$ git commit -m '' -$ git push --set-upstream origin your-branch-for-syncing -``` - -### Style guide - -For documentation strings, 🧨 Diffusers follows the [google style](https://google.github.io/styleguide/pyguide.html). 
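As a quick illustration of that docstring style, here is a sketch with a hypothetical helper function (the function itself is not part of the library):

```python
import torch


def rescale_sample(sample: torch.FloatTensor, scale: float) -> torch.FloatTensor:
    """Rescales a sample tensor by a constant factor.

    Args:
        sample (`torch.FloatTensor`):
            The input sample to rescale.
        scale (`float`):
            The multiplicative factor applied to `sample`.

    Returns:
        `torch.FloatTensor`: The rescaled sample, with the same shape as the input.
    """
    return sample * scale
```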
diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/text_to_video_synthesis/__init__.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/text_to_video_synthesis/__init__.py deleted file mode 100644 index c2437857a23a0bbbba168cf9457ac8b72bd51e67..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/text_to_video_synthesis/__init__.py +++ /dev/null @@ -1,31 +0,0 @@ -from dataclasses import dataclass -from typing import List, Optional, Union - -import numpy as np -import torch - -from ...utils import BaseOutput, OptionalDependencyNotAvailable, is_torch_available, is_transformers_available - - -@dataclass -class TextToVideoSDPipelineOutput(BaseOutput): - """ - Output class for text to video pipelines. - - Args: - frames (`List[np.ndarray]` or `torch.FloatTensor`) - List of denoised frames (essentially images) as NumPy arrays of shape `(height, width, num_channels)` or as - a `torch` tensor. NumPy array present the denoised images of the diffusion pipeline. The length of the list - denotes the video length i.e., the number of frames. - """ - - frames: Union[List[np.ndarray], torch.FloatTensor] - - -try: - if not (is_transformers_available() and is_torch_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ...utils.dummy_torch_and_transformers_objects import * # noqa F403 -else: - from .pipeline_text_to_video_synth import TextToVideoSDPipeline # noqa: F401 diff --git a/spaces/deepwisdom/MetaGPT/metagpt/tools/search_engine_ddg.py b/spaces/deepwisdom/MetaGPT/metagpt/tools/search_engine_ddg.py deleted file mode 100644 index 57bc61b825909a0e9821e830b3cae752d690c1f4..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/metagpt/tools/search_engine_ddg.py +++ /dev/null @@ -1,102 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import asyncio -import json -from concurrent import futures -from typing import Literal, overload - -try: - from duckduckgo_search import DDGS -except ImportError: - raise ImportError( - "To use this module, you should have the `duckduckgo_search` Python package installed. " - "You can install it by running the command: `pip install -e.[search-ddg]`" - ) - -from metagpt.config import CONFIG - - -class DDGAPIWrapper: - """Wrapper around duckduckgo_search API. - - To use this module, you should have the `duckduckgo_search` Python package installed. - """ - - def __init__( - self, - *, - loop: asyncio.AbstractEventLoop | None = None, - executor: futures.Executor | None = None, - ): - kwargs = {} - if CONFIG.global_proxy: - kwargs["proxies"] = CONFIG.global_proxy - self.loop = loop - self.executor = executor - self.ddgs = DDGS(**kwargs) - - @overload - def run( - self, - query: str, - max_results: int = 8, - as_string: Literal[True] = True, - focus: list[str] | None = None, - ) -> str: - ... - - @overload - def run( - self, - query: str, - max_results: int = 8, - as_string: Literal[False] = False, - focus: list[str] | None = None, - ) -> list[dict[str, str]]: - ... - - async def run( - self, - query: str, - max_results: int = 8, - as_string: bool = True, - ) -> str | list[dict]: - """Return the results of a Google search using the official Google API - - Args: - query: The search query. - max_results: The number of results to return. - as_string: A boolean flag to determine the return type of the results. If True, the function will - return a formatted string with the search results. 
If False, it will return a list of dictionaries - containing detailed information about each search result. - - Returns: - The results of the search. - """ - loop = self.loop or asyncio.get_event_loop() - future = loop.run_in_executor( - self.executor, - self._search_from_ddgs, - query, - max_results, - ) - search_results = await future - - # Return the list of search result URLs - if as_string: - return json.dumps(search_results, ensure_ascii=False) - return search_results - - def _search_from_ddgs(self, query: str, max_results: int): - return [ - {"link": i["href"], "snippet": i["body"], "title": i["title"]} - for (_, i) in zip(range(max_results), self.ddgs.text(query)) - ] - - -if __name__ == "__main__": - import fire - - fire.Fire(DDGAPIWrapper().run) diff --git a/spaces/deepwisdom/MetaGPT/tests/metagpt/learn/__init__.py b/spaces/deepwisdom/MetaGPT/tests/metagpt/learn/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/diacanFperku/AutoGPT/Aap Ka Suroor Songs.pk Zip File.md b/spaces/diacanFperku/AutoGPT/Aap Ka Suroor Songs.pk Zip File.md deleted file mode 100644 index cac2a5005ed5ef99486580fea0e97c4ddab5cda5..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Aap Ka Suroor Songs.pk Zip File.md +++ /dev/null @@ -1,98 +0,0 @@ -
-

Aap Ka Suroor Songs.pk Zip File: How to Download and Enjoy Himesh Reshammiya's Music

- -

Aap Ka Suroor is a popular Hindi album released in 2006 by singer and composer Himesh Reshammiya. The album features 18 songs, including hits like Tera Surroor, Naam Hai Tera, Samjho Na, and I Love You Sayyoni. The album was a huge success and sold over 5 million copies worldwide.

-

aap ka suroor songs.pk zip file


DOWNLOADhttps://gohhs.com/2uFTXS



- -

If you are a fan of Himesh Reshammiya's music and want to download Aap Ka Suroor songs in zip file format, you are in luck. There are many websites that offer Aap Ka Suroor songs.pk zip file for free download. You can easily download the zip file and extract the mp3 songs to your device and enjoy them offline.

- -

How to Download Aap Ka Suroor Songs.pk Zip File

- -

Downloading Aap Ka Suroor songs.pk zip file is very easy and simple. You just need to follow these steps:

- -
    -
  1. Go to any of the websites that offer Aap Ka Suroor songs.pk zip file for free download. Some of the websites are JioSaavn, Weebly, Peatix, and SoundCloud.
  2. -
  3. Search for Aap Ka Suroor songs.pk zip file or click on the link provided by the website.
  4. -
  5. Click on the download button or icon and wait for the zip file to be downloaded to your device.
  6. -
  7. Locate the downloaded zip file and unzip it using any app or software that can extract zip files.
  8. -
  9. You will find all the 18 songs of Aap Ka Suroor in mp3 format in a folder. You can transfer them to your music player or playlist and enjoy them offline.
  10. -
- -

Why You Should Download Aap Ka Suroor Songs.pk Zip File

- -

There are many benefits of downloading Aap Ka Suroor songs.pk zip file. Here are some of them:

- -
    -
  • You can listen to all the songs of Aap Ka Suroor without any interruption or ads.
  • -
  • You can save your internet data and storage space by downloading one zip file instead of 18 individual mp3 files.
  • -
  • You can share the zip file with your friends and family who are also fans of Himesh Reshammiya's music.
  • -
  • You can enjoy the high quality and original sound of Aap Ka Suroor songs as they were recorded by Himesh Reshammiya.
  • -
- -

Conclusion

- -

Aap Ka Suroor is a classic Hindi album that showcases Himesh Reshammiya's talent and versatility as a singer and composer. If you want to download Aap Ka Suroor songs.pk zip file and enjoy his music offline, you can easily do so by following the steps mentioned above. You will not regret downloading Aap Ka Suroor songs.pk zip file as it will give you hours of entertainment and pleasure.

-

What are the Benefits of Listening to Aap Ka Suroor Songs

- -

Aap Ka Suroor songs are not only catchy and melodious, but also have many benefits for your health and well-being. Here are some of them:

-

- -
    -
  • Aap Ka Suroor songs can boost your mood and energy levels. They can make you feel happy, relaxed, and motivated.
  • -
  • Aap Ka Suroor songs can improve your memory and concentration. They can help you recall information and focus on your tasks.
  • -
  • Aap Ka Suroor songs can reduce your stress and anxiety. They can calm your nerves and lower your blood pressure.
  • -
  • Aap Ka Suroor songs can enhance your creativity and imagination. They can inspire you to think out of the box and express yourself.
  • -
  • Aap Ka Suroor songs can strengthen your emotional and social bonds. They can help you connect with your loved ones and share your feelings.
  • -
- -

How to Enjoy Aap Ka Suroor Songs More

- -

If you want to enjoy Aap Ka Suroor songs more, you can try these tips:

- -
    -
  1. Listen to Aap Ka Suroor songs with headphones or speakers. This will give you a better sound quality and immersion.
  2. -
  3. Listen to Aap Ka Suroor songs with lyrics. This will help you understand the meaning and message of the songs better.
  4. -
  5. Listen to Aap Ka Suroor songs with friends or family. This will make you feel more connected and entertained.
  6. -
  7. Listen to Aap Ka Suroor songs with different moods and occasions. This will make you appreciate the versatility and diversity of the songs more.
  8. -
  9. Listen to Aap Ka Suroor songs with remixes or covers. This will give you a fresh and new perspective on the songs.
  10. -
- -


What are the Features of Aap Ka Suroor Songs

- -

Aap Ka Suroor songs are not only popular and successful, but also have many features that make them stand out from other Hindi songs. Here are some of them:

- -
    -
  • Aap Ka Suroor songs are composed and sung by Himesh Reshammiya himself. He is known for his unique and distinctive voice and style of singing.
  • -
  • Aap Ka Suroor songs are based on various genres and themes. They range from romantic to sad, from rock to folk, from dance to rap.
  • -
  • Aap Ka Suroor songs have catchy and meaningful lyrics written by Sameer. He is one of the most prolific and acclaimed lyricists in Bollywood.
  • -
  • Aap Ka Suroor songs have impressive and innovative music arrangements and production. They use various instruments and sounds to create a rich and diverse musical experience.
  • -
  • Aap Ka Suroor songs have remix versions by famous DJs like Akbar Sami and Aqueel. They add a new twist and flavor to the original songs.
  • -
- -

What are the Reviews and Awards of Aap Ka Suroor Songs

- -

Aap Ka Suroor songs have received positive reviews and awards from critics and audiences alike. Here are some of them:

- -
    -
  • Aap Ka Suroor songs have been praised for their originality and freshness. They have been called as "a breath of fresh air" and "a musical revolution" by many reviewers.
  • -
  • Aap Ka Suroor songs have been nominated and won several awards for their excellence and popularity. They have won awards like Filmfare, IIFA, Zee Cine, Screen, Stardust, MTV, Mirchi Music, and more.
  • -
  • Aap Ka Suroor songs have been featured in various charts and lists of best Hindi songs. They have topped charts like Billboard, BBC, Radio Mirchi, Saavn, Gaana, Hungama, and more.
  • -
  • Aap Ka Suroor songs have been covered and recreated by many artists and singers. They have inspired many aspiring musicians and singers to follow their passion and dreams.
  • -
- -

Conclusion

- -

Aap Ka Suroor is a classic Hindi album that showcases Himesh Reshammiya's talent and versatility as a singer and composer. If you want to download Aap Ka Suroor songs.pk zip file and enjoy his music offline, you can easily do so by following the steps mentioned above. You will not regret downloading Aap Ka Suroor songs.pk zip file as it will give you hours of entertainment and pleasure.

- -

You can also listen to Aap Ka Suroor songs online on various platforms like JioSaavn, Weebly, Peatix, and SoundCloud. You can also watch the videos of Aap Ka Suroor songs on YouTube or other websites. You can also buy the CD or DVD of Aap Ka Suroor album from any online or offline store.

- -

Whatever way you choose to listen to Aap Ka Suroor songs, you will surely enjoy them and benefit from them. Aap Ka Suroor songs are not only music, but also therapy for your mind, body, and soul.

-
-
\ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Championship Manager 00 01 Download Free Full Version.md b/spaces/diacanFperku/AutoGPT/Championship Manager 00 01 Download Free Full Version.md deleted file mode 100644 index c67dfb1395016943e709035c6086d3267199f211..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Championship Manager 00 01 Download Free Full Version.md +++ /dev/null @@ -1,6 +0,0 @@ -

championship manager 00 01 download full version


Download Zip >> https://gohhs.com/2uFSU6



-
- 4fefd39f24
-
-
-

diff --git a/spaces/diacanFperku/AutoGPT/Pro Landscape 12 Serial Keygenl.md b/spaces/diacanFperku/AutoGPT/Pro Landscape 12 Serial Keygenl.md deleted file mode 100644 index 628bbd16981f4145b9cbfbe904ac903af3fcba3d..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Pro Landscape 12 Serial Keygenl.md +++ /dev/null @@ -1,6 +0,0 @@ -

Pro Landscape 12 Serial Keygenl


Download Ziphttps://gohhs.com/2uFUGq



-
-by PN Vasudevan · 2018 — The Landscape of Hardness in Cryptography ... It has been shown that if the the OV conjecture is true, then many string pro- cessing problems ... 1fdad05405
-
-
-

diff --git a/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/data_utils.py b/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/data_utils.py deleted file mode 100644 index 2c98d3dc8b9572bd05859033a74d155425a2a2ab..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/data_utils.py +++ /dev/null @@ -1,332 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data -import torchaudio -import commons -from mel_processing import spectrogram_torch, mel_spectrogram_torch, spec_to_mel_torch -from utils import load_wav_to_torch, load_filepaths_and_text -from text import cleaned_text_to_sequence, get_bert - -"""Multi speaker version""" - - -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - - def __init__(self, audiopaths_sid_text, hparams): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - self.spk_map = hparams.spk2id - self.hparams = hparams - - self.use_mel_spec_posterior = getattr(hparams, "use_mel_posterior_encoder", False) - if self.use_mel_spec_posterior: - self.n_mel_channels = getattr(hparams, "n_mel_channels", 80) - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 300) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - skipped = 0 - for _id, spk, language, text, phones, tone, word2ph in self.audiopaths_sid_text: - audiopath = f'{_id}' - if self.min_text_len <= len(phones) and len(phones) <= self.max_text_len: - phones = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - audiopaths_sid_text_new.append([audiopath, spk, language, text, phones, tone, word2ph]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - else: - skipped += 1 - print("skipped: ", skipped, ", total: ", len(self.audiopaths_sid_text)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, language, text, phones, tone, word2ph = audiopath_sid_text - - bert, phones, tone, language = self.get_text(text, word2ph, phones, tone, language, audiopath) - - spec, wav = self.get_audio(audiopath) - sid = torch.LongTensor([int(self.spk_map[sid])]) - return (phones, spec, wav, sid, tone, language, bert) - - def get_audio(self, filename): - audio_norm, sampling_rate = torchaudio.load(filename, frame_offset=0, num_frames=-1, normalize=True, channels_first=True) - ''' - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} 
SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - ''' - spec_filename = filename.replace(".wav", ".spec.pt") - if self.use_mel_spec_posterior: - spec_filename = spec_filename.replace(".spec.pt", ".mel.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - if self.use_mel_spec_posterior: - # if os.path.exists(filename.replace(".wav", ".spec.pt")): - # # spec, n_fft, num_mels, sampling_rate, fmin, fmax - # spec = spec_to_mel_torch( - # torch.load(filename.replace(".wav", ".spec.pt")), - # self.filter_length, self.n_mel_channels, self.sampling_rate, - # self.hparams.mel_fmin, self.hparams.mel_fmax) - spec = mel_spectrogram_torch(audio_norm, self.filter_length, - self.n_mel_channels, self.sampling_rate, self.hop_length, - self.win_length, self.hparams.mel_fmin, self.hparams.mel_fmax, center=False) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text, word2ph, phone, tone, language_str, wav_path): - # print(text, word2ph,phone, tone, language_str) - pold = phone - w2pho = [i for i in word2ph] - word2ph = [i for i in word2ph] - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - pold2 = phone - - if self.add_blank: - p1 = len(phone) - phone = commons.intersperse(phone, 0) - p2 = len(phone) - t1 = len(tone) - tone = commons.intersperse(tone, 0) - t2 = len(tone) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert_path = wav_path.replace(".wav", ".bert.pt") - try: - bert = torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except: - bert = get_bert(text, word2ph, language_str) - torch.save(bert, bert_path) - #print(bert.shape[-1], bert_path, text, pold) - assert bert.shape[-1] == len(phone) - - assert bert.shape[-1] == len(phone), ( - bert.shape, len(phone), sum(word2ph), p1, p2, t1, t2, pold, pold2, word2ph, text, w2pho) - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - return bert, phone, tone, language - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets - """ - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - tone_padded = torch.LongTensor(len(batch), 
max_text_len) - language_padded = torch.LongTensor(len(batch), max_text_len) - bert_padded = torch.FloatTensor(len(batch), 1024, max_text_len) - - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - tone_padded.zero_() - language_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - bert_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - tone = row[4] - tone_padded[i, :tone.size(0)] = tone - - language = row[5] - language_padded[i, :language.size(0)] = language - - bert = row[6] - bert_padded[i, :, :bert.size(1)] = bert - - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, tone_padded, language_padded, bert_padded - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. - """ - - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i + 1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - if (len_bucket == 0): - continue - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = 
ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j * self.batch_size:(j + 1) * self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/digitalxingtong/Kino-Bert-VITS2/short_audio_transcribe.py b/spaces/digitalxingtong/Kino-Bert-VITS2/short_audio_transcribe.py deleted file mode 100644 index f1e8b30671f2c2f2fa3c93feb1f4edd3fbe2f545..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Kino-Bert-VITS2/short_audio_transcribe.py +++ /dev/null @@ -1,122 +0,0 @@ -import whisper -import os -import json -import torchaudio -import argparse -import torch - -lang2token = { - 'zh': "[ZH]", - 'ja': "[JA]", - "en": "[EN]", - } -def transcribe_one(audio_path): - # load audio and pad/trim it to fit 30 seconds - audio = whisper.load_audio(audio_path) - audio = whisper.pad_or_trim(audio) - - # make log-Mel spectrogram and move to the same device as the model - mel = whisper.log_mel_spectrogram(audio).to(model.device) - - # detect the spoken language - _, probs = model.detect_language(mel) - print(f"Detected language: {max(probs, key=probs.get)}") - lang = max(probs, key=probs.get) - # decode the audio - options = whisper.DecodingOptions(beam_size=5) - result = whisper.decode(model, mel, options) - - # print the recognized text - print(result.text) - return lang, result.text -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--languages", default="CJE") - parser.add_argument("--whisper_size", default="medium") - args = parser.parse_args() - if args.languages == "CJE": - lang2token = { - 'zh': "[ZH]", - 'ja': "[JA]", - "en": "[EN]", - } - elif args.languages == "CJ": - lang2token = { - 'zh': "[ZH]", - 'ja': "[JA]", - } - elif args.languages == "C": - lang2token = { - 'zh': "[ZH]", - } - assert (torch.cuda.is_available()), "Please enable GPU in order to run Whisper!" 
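-    # The block below loads the selected Whisper model, walks every speaker folder under
-    # ./custom_character_voice/, resamples each clip to the sampling rate read from
-    # ./configs/config.json, saves it as processed_{i}.wav, transcribes it, and collects
-    # "path|speaker|ZH|text" annotation lines for ./filelists/short_character_anno.list.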
- model = whisper.load_model(args.whisper_size) - parent_dir = "./custom_character_voice/" - speaker_names = list(os.walk(parent_dir))[0][1] - speaker_annos = [] - total_files = sum([len(files) for r, d, files in os.walk(parent_dir)]) - # resample audios - # 2023/4/21: Get the target sampling rate - with open("./configs/config.json", 'r', encoding='utf-8') as f: - hps = json.load(f) - target_sr = hps['data']['sampling_rate'] - processed_files = 0 - for speaker in speaker_names: - for i, wavfile in enumerate(list(os.walk(parent_dir + speaker))[0][2]): - # try to load file as audio - if wavfile.startswith("processed_"): - continue - try: - wav, sr = torchaudio.load(parent_dir + speaker + "/" + wavfile, frame_offset=0, num_frames=-1, normalize=True, - channels_first=True) - wav = wav.mean(dim=0).unsqueeze(0) - if sr != target_sr: - wav = torchaudio.transforms.Resample(orig_freq=sr, new_freq=target_sr)(wav) - if wav.shape[1] / sr > 20: - print(f"{wavfile} too long, ignoring\n") - save_path = parent_dir + speaker + "/" + f"processed_{i}.wav" - torchaudio.save(save_path, wav, target_sr, channels_first=True) - # transcribe text - lang, text = transcribe_one(save_path) - if lang not in list(lang2token.keys()): - print(f"{lang} not supported, ignoring\n") - continue - text = "ZH|" + text + "\n"# - #text = lang2token[lang] + text + lang2token[lang] + "\n" - speaker_annos.append(save_path + "|" + speaker + "|" + text) - - processed_files += 1 - print(f"Processed: {processed_files}/{total_files}") - except: - continue - - # # clean annotation - # import argparse - # import text - # from utils import load_filepaths_and_text - # for i, line in enumerate(speaker_annos): - # path, sid, txt = line.split("|") - # cleaned_text = text._clean_text(txt, ["cjke_cleaners2"]) - # cleaned_text += "\n" if not cleaned_text.endswith("\n") else "" - # speaker_annos[i] = path + "|" + sid + "|" + cleaned_text - # write into annotation - if len(speaker_annos) == 0: - print("Warning: no short audios found, this IS expected if you have only uploaded long audios, videos or video links.") - print("this IS NOT expected if you have uploaded a zip file of short audios. 
Please check your file structure or make sure your audio language is supported.") - with open("./filelists/short_character_anno.list", 'w', encoding='utf-8') as f: - for line in speaker_annos: - f.write(line) - - # import json - # # generate new config - # with open("./configs/finetune_speaker.json", 'r', encoding='utf-8') as f: - # hps = json.load(f) - # # modify n_speakers - # hps['data']["n_speakers"] = 1000 + len(speaker2id) - # # add speaker names - # for speaker in speaker_names: - # hps['speakers'][speaker] = speaker2id[speaker] - # # save modified config - # with open("./configs/modified_finetune_speaker.json", 'w', encoding='utf-8') as f: - # json.dump(hps, f, indent=2) - # print("finished") diff --git a/spaces/digitalxingtong/Miiu-Bert-Vits2/losses.py b/spaces/digitalxingtong/Miiu-Bert-Vits2/losses.py deleted file mode 100644 index fb22a0e834dd87edaa37bb8190eee2c3c7abe0d5..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Miiu-Bert-Vits2/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/digitalxingtong/Nailv-Bert-Vits2/data_utils.py b/spaces/digitalxingtong/Nailv-Bert-Vits2/data_utils.py deleted file mode 100644 index 2c98d3dc8b9572bd05859033a74d155425a2a2ab..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Nailv-Bert-Vits2/data_utils.py +++ /dev/null @@ -1,332 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data -import torchaudio -import commons -from mel_processing import spectrogram_torch, mel_spectrogram_torch, spec_to_mel_torch -from utils import load_wav_to_torch, load_filepaths_and_text -from text import cleaned_text_to_sequence, get_bert - -"""Multi speaker version""" - - -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. 
- """ - - def __init__(self, audiopaths_sid_text, hparams): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - self.spk_map = hparams.spk2id - self.hparams = hparams - - self.use_mel_spec_posterior = getattr(hparams, "use_mel_posterior_encoder", False) - if self.use_mel_spec_posterior: - self.n_mel_channels = getattr(hparams, "n_mel_channels", 80) - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 300) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - skipped = 0 - for _id, spk, language, text, phones, tone, word2ph in self.audiopaths_sid_text: - audiopath = f'{_id}' - if self.min_text_len <= len(phones) and len(phones) <= self.max_text_len: - phones = phones.split(" ") - tone = [int(i) for i in tone.split(" ")] - word2ph = [int(i) for i in word2ph.split(" ")] - audiopaths_sid_text_new.append([audiopath, spk, language, text, phones, tone, word2ph]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - else: - skipped += 1 - print("skipped: ", skipped, ", total: ", len(self.audiopaths_sid_text)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, language, text, phones, tone, word2ph = audiopath_sid_text - - bert, phones, tone, language = self.get_text(text, word2ph, phones, tone, language, audiopath) - - spec, wav = self.get_audio(audiopath) - sid = torch.LongTensor([int(self.spk_map[sid])]) - return (phones, spec, wav, sid, tone, language, bert) - - def get_audio(self, filename): - audio_norm, sampling_rate = torchaudio.load(filename, frame_offset=0, num_frames=-1, normalize=True, channels_first=True) - ''' - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - ''' - spec_filename = filename.replace(".wav", ".spec.pt") - if self.use_mel_spec_posterior: - spec_filename = spec_filename.replace(".spec.pt", ".mel.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - if self.use_mel_spec_posterior: - # if os.path.exists(filename.replace(".wav", ".spec.pt")): - # # spec, n_fft, num_mels, sampling_rate, fmin, fmax - # spec = spec_to_mel_torch( - # torch.load(filename.replace(".wav", ".spec.pt")), - # self.filter_length, self.n_mel_channels, self.sampling_rate, - # self.hparams.mel_fmin, self.hparams.mel_fmax) - spec = mel_spectrogram_torch(audio_norm, self.filter_length, - self.n_mel_channels, self.sampling_rate, self.hop_length, - self.win_length, self.hparams.mel_fmin, self.hparams.mel_fmax, center=False) - else: - spec 
= spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text, word2ph, phone, tone, language_str, wav_path): - # print(text, word2ph,phone, tone, language_str) - pold = phone - w2pho = [i for i in word2ph] - word2ph = [i for i in word2ph] - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - pold2 = phone - - if self.add_blank: - p1 = len(phone) - phone = commons.intersperse(phone, 0) - p2 = len(phone) - t1 = len(tone) - tone = commons.intersperse(tone, 0) - t2 = len(tone) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert_path = wav_path.replace(".wav", ".bert.pt") - try: - bert = torch.load(bert_path) - assert bert.shape[-1] == len(phone) - except: - bert = get_bert(text, word2ph, language_str) - torch.save(bert, bert_path) - #print(bert.shape[-1], bert_path, text, pold) - assert bert.shape[-1] == len(phone) - - assert bert.shape[-1] == len(phone), ( - bert.shape, len(phone), sum(word2ph), p1, p2, t1, t2, pold, pold2, word2ph, text, w2pho) - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - return bert, phone, tone, language - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets - """ - - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - tone_padded = torch.LongTensor(len(batch), max_text_len) - language_padded = torch.LongTensor(len(batch), max_text_len) - bert_padded = torch.FloatTensor(len(batch), 1024, max_text_len) - - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - tone_padded.zero_() - language_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - bert_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - tone = row[4] - tone_padded[i, :tone.size(0)] = tone - - language = row[5] - language_padded[i, :language.size(0)] = language - - bert 
= row[6] - bert_padded[i, :, :bert.size(1)] = bert - - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, tone_padded, language_padded, bert_padded - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. - """ - - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i + 1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - if (len_bucket == 0): - continue - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j * self.batch_size:(j + 1) * self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid + 1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/dirge/voicevox/voicevox_engine/preset/__init__.py 
b/spaces/dirge/voicevox/voicevox_engine/preset/__init__.py deleted file mode 100644 index 8c485e2fbfbcdd660d869ccc36483d6ace6272ec..0000000000000000000000000000000000000000 --- a/spaces/dirge/voicevox/voicevox_engine/preset/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -from .Preset import Preset -from .PresetError import PresetError -from .PresetManager import PresetManager - -__all__ = [ - "Preset", - "PresetManager", - "PresetError", -] diff --git a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/extensions/send_pictures/script.py b/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/extensions/send_pictures/script.py deleted file mode 100644 index b21423e443f524fee120581ff95ed388ecf0de08..0000000000000000000000000000000000000000 --- a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/extensions/send_pictures/script.py +++ /dev/null @@ -1,47 +0,0 @@ -import base64 -from io import BytesIO - -import gradio as gr -import torch -from transformers import BlipForConditionalGeneration, BlipProcessor - -from modules import chat, shared -from modules.ui import gather_interface_values - -# If 'state' is True, will hijack the next chat generation with -# custom input text given by 'value' in the format [text, visible_text] -input_hijack = { - 'state': False, - 'value': ["", ""] -} - -processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") -model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float32).to("cpu") - - -def caption_image(raw_image): - inputs = processor(raw_image.convert('RGB'), return_tensors="pt").to("cpu", torch.float32) - out = model.generate(**inputs, max_new_tokens=100) - return processor.decode(out[0], skip_special_tokens=True) - - -def generate_chat_picture(picture, name1, name2): - text = f'*{name1} sends {name2} a picture that contains the following: “{caption_image(picture)}”*' - # lower the resolution of sent images for the chat, otherwise the log size gets out of control quickly with all the base64 values in visible history - picture.thumbnail((300, 300)) - buffer = BytesIO() - picture.save(buffer, format="JPEG") - img_str = base64.b64encode(buffer.getvalue()).decode('utf-8') - visible_text = f'{text}' - return text, visible_text - - -def ui(): - picture_select = gr.Image(label='Send a picture', type='pil') - - # Prepare the input hijack, update the interface values, call the generation function, and clear the picture - picture_select.upload( - lambda picture, name1, name2: input_hijack.update({"state": True, "value": generate_chat_picture(picture, name1, name2)}), [picture_select, shared.gradio['name1'], shared.gradio['name2']], None).then( - gather_interface_values, [shared.gradio[k] for k in shared.input_elements], shared.gradio['interface_state']).then( - chat.cai_chatbot_wrapper, shared.input_params, shared.gradio['display'], show_progress=shared.args.no_stream).then( - lambda: None, None, picture_select, show_progress=False) diff --git a/spaces/drift-ai/recruiter-assistant-jbfxrs/prompts/preprocess.py b/spaces/drift-ai/recruiter-assistant-jbfxrs/prompts/preprocess.py deleted file mode 100644 index 886d60c606173f95c3727f5409c8d1197e871403..0000000000000000000000000000000000000000 --- a/spaces/drift-ai/recruiter-assistant-jbfxrs/prompts/preprocess.py +++ /dev/null @@ -1,45 +0,0 @@ -import os - -from langchain import LLMChain -from langchain.chains import SequentialChain -from langchain.chat_models import ChatOpenAI -from 
langchain.prompts import ChatPromptTemplate - - -def preprocess_resume(llm, resume) -> SequentialChain: - """We mold the resume to the format of the vacancy to increase the chances of good search results.""" - - template_get_skills_intersection = """ - - ``` - {resume} - ``` - - Can you summarize the above resume in the template below and fill in eveything that is delimited by '<' '>'? - Return only the filled in template nothing else. - - position: < the job name or role > - location: < where does the person live > - description: < the description or the person's job or role and description of it's experience. > - profile: < can you describe the sector the person is working in > - competences: < can you describe some hard skills of the person> - """ - - prompt_get_skills_intersection = ChatPromptTemplate.from_template( - template=template_get_skills_intersection - ) - skills_match_chain = LLMChain( - llm=llm, - prompt=prompt_get_skills_intersection, - output_key="resume_preprocess", - ) - - chain = SequentialChain( - chains=[skills_match_chain], - input_variables=["resume"], - output_variables=[ - skills_match_chain.output_key, - ], - verbose=False, - ) - return chain({"resume": resume}) diff --git a/spaces/ekenkel/dog-identifier/modelCreation.py b/spaces/ekenkel/dog-identifier/modelCreation.py deleted file mode 100644 index dcda643dba86110f2af887a7c021b2053175dacc..0000000000000000000000000000000000000000 --- a/spaces/ekenkel/dog-identifier/modelCreation.py +++ /dev/null @@ -1,79 +0,0 @@ -from fastcore.all import * -from fastdownload import download_url -from fastai.vision.widgets import * -from fastai.vision.all import * -import os -import requests - -def search_and_download_images(api_key, breeds, download_path, num_images=10): - headers = { - 'Ocp-Apim-Subscription-Key': api_key - } - - for breed in breeds: - # Create a directory for the breed if it doesn't exist - breed_dir = os.path.join(download_path, breed) - os.makedirs(breed_dir, exist_ok=True) - - # Initialize a counter for the number of successfully downloaded images - downloaded_count = 0 - - # Make the API request - params = { - 'q': f'{breed} dog', - 'count': num_images - } - response = requests.get('https://api.bing.microsoft.com/v7.0/images/search', headers=headers, params=params) - - # L() is from fastai - results = L(response.json()['value']) - download_images(breed_dir, urls=results.attrgot('contentUrl')) - - -URL = 'https://dog.ceo/api/breeds/list/all' - -# Get the breeds of all the dogs (some of them require reformatting) -result = requests.get(url = URL).json() -dog_breeds = [] -for val in result['message'].items(): - if len(val[1]) > 0: - for type in val[1]: - dog_breeds.append(f'{type} {val[0]}') - else: - dog_breeds.append(val[0]) - -path = Path('Dog_Types') - -api_key = os.environ.get('AZURE_SEARCH_KEY', 'INSERT KEY HERE') - -search_and_download_images(api_key, dog_breeds, path, num_images=200) - -# Ensure that all images are able to be opened. 
-# If they cannot be opened, remove them -for breed in dog_breeds: - failed = verify_images(get_image_files(f'{path}/{breed}')) - failed.map(Path.unlink) - - -# Load the data into fastai datablock -# In this we randomly split the data into train, validation, and test -# Resize the data to be 396x396 px -# Also perform tansforms (in this way, we are able to get imperfect data to train on) -dataloaders = DataBlock( - blocks=(ImageBlock, CategoryBlock), - get_items=get_image_files, - splitter=RandomSplitter(valid_pct=0.2, seed=42), - get_y=parent_label, - item_tfms=Resize(396), - batch_tfms=aug_transforms(size=396, min_scale=0.75) -).dataloaders(path) - -# Load the data into the Convnext-22k -# NOTE: When this happens, fastai will adjust the start of the model according to the input size of your data -learn = vision_learner(dataloaders, 'convnext_tiny_in22k', metrics=error_rate).to_fp16() -# Because of this, for 3 epochs, I decided to freeze the weights of the pretrained model (as this has already been optimized) -# Only the additional layers that were added will adjust in the first 3 epochs -# After the 3 epochs, the model will update all weights -learn.fine_tune(8, freeze_epochs=3) - -learn.export('dogIdentifierModel.pkl') \ No newline at end of file diff --git a/spaces/elozano/tweet_eval/README.md b/spaces/elozano/tweet_eval/README.md deleted file mode 100644 index f877fe17398e4df8823cef9b9d299f3b2570333b..0000000000000000000000000000000000000000 --- a/spaces/elozano/tweet_eval/README.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: Tweet Evaluator -emoji: 🐦 -colorFrom: blue -colorTo: purple -sdk: streamlit -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/erbanku/gpt-academic/colorful.py b/spaces/erbanku/gpt-academic/colorful.py deleted file mode 100644 index d90972bb30a8f8fb932abbc34232e474df4d5205..0000000000000000000000000000000000000000 --- a/spaces/erbanku/gpt-academic/colorful.py +++ /dev/null @@ -1,91 +0,0 @@ -import platform -from sys import stdout - -if platform.system()=="Linux": - pass -else: - from colorama import init - init() - -# Do you like the elegance of Chinese characters? 
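The colored-print helpers that follow wrap `print` in ANSI SGR escape sequences (`\033[0;31m` opens red, `\033[1;…m` the bright variants, `\033[0m` resets) and fall back to plain `print` when stdout is not a TTY. A minimal generic sketch of that pattern; the function and color names here are illustrative, not part of the original module:

```python
# Minimal sketch of the ANSI-color pattern used by the helpers below (names are illustrative).
import sys

RESET = "\033[0m"
COLORS = {"red": "\033[0;31m", "green": "\033[0;32m", "bright_red": "\033[1;31m"}

def print_color(color, *args, **kwargs):
    # When output is redirected (not a TTY), skip escape codes to keep log files readable.
    if not sys.stdout.isatty() or color not in COLORS:
        print(*args, **kwargs)
    else:
        print(COLORS[color], *args, RESET, **kwargs)

print_color("green", "model loaded")
```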
-def print红(*kw,**kargs): - print("\033[0;31m",*kw,"\033[0m",**kargs) -def print绿(*kw,**kargs): - print("\033[0;32m",*kw,"\033[0m",**kargs) -def print黄(*kw,**kargs): - print("\033[0;33m",*kw,"\033[0m",**kargs) -def print蓝(*kw,**kargs): - print("\033[0;34m",*kw,"\033[0m",**kargs) -def print紫(*kw,**kargs): - print("\033[0;35m",*kw,"\033[0m",**kargs) -def print靛(*kw,**kargs): - print("\033[0;36m",*kw,"\033[0m",**kargs) - -def print亮红(*kw,**kargs): - print("\033[1;31m",*kw,"\033[0m",**kargs) -def print亮绿(*kw,**kargs): - print("\033[1;32m",*kw,"\033[0m",**kargs) -def print亮黄(*kw,**kargs): - print("\033[1;33m",*kw,"\033[0m",**kargs) -def print亮蓝(*kw,**kargs): - print("\033[1;34m",*kw,"\033[0m",**kargs) -def print亮紫(*kw,**kargs): - print("\033[1;35m",*kw,"\033[0m",**kargs) -def print亮靛(*kw,**kargs): - print("\033[1;36m",*kw,"\033[0m",**kargs) - - - -def print亮红(*kw,**kargs): - print("\033[1;31m",*kw,"\033[0m",**kargs) -def print亮绿(*kw,**kargs): - print("\033[1;32m",*kw,"\033[0m",**kargs) -def print亮黄(*kw,**kargs): - print("\033[1;33m",*kw,"\033[0m",**kargs) -def print亮蓝(*kw,**kargs): - print("\033[1;34m",*kw,"\033[0m",**kargs) -def print亮紫(*kw,**kargs): - print("\033[1;35m",*kw,"\033[0m",**kargs) -def print亮靛(*kw,**kargs): - print("\033[1;36m",*kw,"\033[0m",**kargs) - -print_red = print红 -print_green = print绿 -print_yellow = print黄 -print_blue = print蓝 -print_purple = print紫 -print_indigo = print靛 - -print_bold_red = print亮红 -print_bold_green = print亮绿 -print_bold_yellow = print亮黄 -print_bold_blue = print亮蓝 -print_bold_purple = print亮紫 -print_bold_indigo = print亮靛 - -if not stdout.isatty(): - # redirection, avoid a fucked up log file - print红 = print - print绿 = print - print黄 = print - print蓝 = print - print紫 = print - print靛 = print - print亮红 = print - print亮绿 = print - print亮黄 = print - print亮蓝 = print - print亮紫 = print - print亮靛 = print - print_red = print - print_green = print - print_yellow = print - print_blue = print - print_purple = print - print_indigo = print - print_bold_red = print - print_bold_green = print - print_bold_yellow = print - print_bold_blue = print - print_bold_purple = print - print_bold_indigo = print \ No newline at end of file diff --git a/spaces/f2api/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpgd.cpp b/spaces/f2api/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpgd.cpp deleted file mode 100644 index 36d06c8e9068570c3e7624895d474f33dbfe3d29..0000000000000000000000000000000000000000 --- a/spaces/f2api/gpt-academic/crazy_functions/test_project/cpp/libJPG/jpgd.cpp +++ /dev/null @@ -1,3276 +0,0 @@ -// jpgd.cpp - C++ class for JPEG decompression. -// Public domain, Rich Geldreich -// Last updated Apr. 16, 2011 -// Alex Evans: Linear memory allocator (taken from jpge.h). -// -// Supports progressive and baseline sequential JPEG image files, and the most common chroma subsampling factors: Y, H1V1, H2V1, H1V2, and H2V2. -// -// Chroma upsampling quality: H2V2 is upsampled in the frequency domain, H2V1 and H1V2 are upsampled using point sampling. -// Chroma upsampling reference: "Fast Scheme for Image Size Change in the Compressed Domain" -// http://vision.ai.uiuc.edu/~dugad/research/dct/index.html - -#include "jpgd.h" -#include - -#include -// BEGIN EPIC MOD -#define JPGD_ASSERT(x) { assert(x); CA_ASSUME(x); } (void)0 -// END EPIC MOD - -#ifdef _MSC_VER -#pragma warning (disable : 4611) // warning C4611: interaction between '_setjmp' and C++ object destruction is non-portable -#endif - -// Set to 1 to enable freq. 
domain chroma upsampling on images using H2V2 subsampling (0=faster nearest neighbor sampling). -// This is slower, but results in higher quality on images with highly saturated colors. -#define JPGD_SUPPORT_FREQ_DOMAIN_UPSAMPLING 1 - -#define JPGD_TRUE (1) -#define JPGD_FALSE (0) - -#define JPGD_MAX(a,b) (((a)>(b)) ? (a) : (b)) -#define JPGD_MIN(a,b) (((a)<(b)) ? (a) : (b)) - -namespace jpgd { - - static inline void *jpgd_malloc(size_t nSize) { return FMemory::Malloc(nSize); } - static inline void jpgd_free(void *p) { FMemory::Free(p); } - -// BEGIN EPIC MOD -//@UE3 - use UE3 BGRA encoding instead of assuming RGBA - // stolen from IImageWrapper.h - enum ERGBFormatJPG - { - Invalid = -1, - RGBA = 0, - BGRA = 1, - Gray = 2, - }; - static ERGBFormatJPG jpg_format; -// END EPIC MOD - - // DCT coefficients are stored in this sequence. - static int g_ZAG[64] = { 0,1,8,16,9,2,3,10,17,24,32,25,18,11,4,5,12,19,26,33,40,48,41,34,27,20,13,6,7,14,21,28,35,42,49,56,57,50,43,36,29,22,15,23,30,37,44,51,58,59,52,45,38,31,39,46,53,60,61,54,47,55,62,63 }; - - enum JPEG_MARKER - { - M_SOF0 = 0xC0, M_SOF1 = 0xC1, M_SOF2 = 0xC2, M_SOF3 = 0xC3, M_SOF5 = 0xC5, M_SOF6 = 0xC6, M_SOF7 = 0xC7, M_JPG = 0xC8, - M_SOF9 = 0xC9, M_SOF10 = 0xCA, M_SOF11 = 0xCB, M_SOF13 = 0xCD, M_SOF14 = 0xCE, M_SOF15 = 0xCF, M_DHT = 0xC4, M_DAC = 0xCC, - M_RST0 = 0xD0, M_RST1 = 0xD1, M_RST2 = 0xD2, M_RST3 = 0xD3, M_RST4 = 0xD4, M_RST5 = 0xD5, M_RST6 = 0xD6, M_RST7 = 0xD7, - M_SOI = 0xD8, M_EOI = 0xD9, M_SOS = 0xDA, M_DQT = 0xDB, M_DNL = 0xDC, M_DRI = 0xDD, M_DHP = 0xDE, M_EXP = 0xDF, - M_APP0 = 0xE0, M_APP15 = 0xEF, M_JPG0 = 0xF0, M_JPG13 = 0xFD, M_COM = 0xFE, M_TEM = 0x01, M_ERROR = 0x100, RST0 = 0xD0 - }; - - enum JPEG_SUBSAMPLING { JPGD_GRAYSCALE = 0, JPGD_YH1V1, JPGD_YH2V1, JPGD_YH1V2, JPGD_YH2V2 }; - -#define CONST_BITS 13 -#define PASS1_BITS 2 -#define SCALEDONE ((int32)1) - -#define FIX_0_298631336 ((int32)2446) /* FIX(0.298631336) */ -#define FIX_0_390180644 ((int32)3196) /* FIX(0.390180644) */ -#define FIX_0_541196100 ((int32)4433) /* FIX(0.541196100) */ -#define FIX_0_765366865 ((int32)6270) /* FIX(0.765366865) */ -#define FIX_0_899976223 ((int32)7373) /* FIX(0.899976223) */ -#define FIX_1_175875602 ((int32)9633) /* FIX(1.175875602) */ -#define FIX_1_501321110 ((int32)12299) /* FIX(1.501321110) */ -#define FIX_1_847759065 ((int32)15137) /* FIX(1.847759065) */ -#define FIX_1_961570560 ((int32)16069) /* FIX(1.961570560) */ -#define FIX_2_053119869 ((int32)16819) /* FIX(2.053119869) */ -#define FIX_2_562915447 ((int32)20995) /* FIX(2.562915447) */ -#define FIX_3_072711026 ((int32)25172) /* FIX(3.072711026) */ - -#define DESCALE(x,n) (((x) + (SCALEDONE << ((n)-1))) >> (n)) -#define DESCALE_ZEROSHIFT(x,n) (((x) + (128 << (n)) + (SCALEDONE << ((n)-1))) >> (n)) - -#define MULTIPLY(var, cnst) ((var) * (cnst)) - -#define CLAMP(i) ((static_cast(i) > 255) ? (((~i) >> 31) & 0xFF) : (i)) - - // Compiler creates a fast path 1D IDCT for X non-zero columns - template - struct Row - { - static void idct(int* pTemp, const jpgd_block_t* pSrc) - { - // ACCESS_COL() will be optimized at compile time to either an array access, or 0. -#define ACCESS_COL(x) (((x) < NONZERO_COLS) ? 
(int)pSrc[x] : 0) - - const int z2 = ACCESS_COL(2), z3 = ACCESS_COL(6); - - const int z1 = MULTIPLY(z2 + z3, FIX_0_541196100); - const int tmp2 = z1 + MULTIPLY(z3, - FIX_1_847759065); - const int tmp3 = z1 + MULTIPLY(z2, FIX_0_765366865); - - const int tmp0 = (ACCESS_COL(0) + ACCESS_COL(4)) << CONST_BITS; - const int tmp1 = (ACCESS_COL(0) - ACCESS_COL(4)) << CONST_BITS; - - const int tmp10 = tmp0 + tmp3, tmp13 = tmp0 - tmp3, tmp11 = tmp1 + tmp2, tmp12 = tmp1 - tmp2; - - const int atmp0 = ACCESS_COL(7), atmp1 = ACCESS_COL(5), atmp2 = ACCESS_COL(3), atmp3 = ACCESS_COL(1); - - const int bz1 = atmp0 + atmp3, bz2 = atmp1 + atmp2, bz3 = atmp0 + atmp2, bz4 = atmp1 + atmp3; - const int bz5 = MULTIPLY(bz3 + bz4, FIX_1_175875602); - - const int az1 = MULTIPLY(bz1, - FIX_0_899976223); - const int az2 = MULTIPLY(bz2, - FIX_2_562915447); - const int az3 = MULTIPLY(bz3, - FIX_1_961570560) + bz5; - const int az4 = MULTIPLY(bz4, - FIX_0_390180644) + bz5; - - const int btmp0 = MULTIPLY(atmp0, FIX_0_298631336) + az1 + az3; - const int btmp1 = MULTIPLY(atmp1, FIX_2_053119869) + az2 + az4; - const int btmp2 = MULTIPLY(atmp2, FIX_3_072711026) + az2 + az3; - const int btmp3 = MULTIPLY(atmp3, FIX_1_501321110) + az1 + az4; - - pTemp[0] = DESCALE(tmp10 + btmp3, CONST_BITS-PASS1_BITS); - pTemp[7] = DESCALE(tmp10 - btmp3, CONST_BITS-PASS1_BITS); - pTemp[1] = DESCALE(tmp11 + btmp2, CONST_BITS-PASS1_BITS); - pTemp[6] = DESCALE(tmp11 - btmp2, CONST_BITS-PASS1_BITS); - pTemp[2] = DESCALE(tmp12 + btmp1, CONST_BITS-PASS1_BITS); - pTemp[5] = DESCALE(tmp12 - btmp1, CONST_BITS-PASS1_BITS); - pTemp[3] = DESCALE(tmp13 + btmp0, CONST_BITS-PASS1_BITS); - pTemp[4] = DESCALE(tmp13 - btmp0, CONST_BITS-PASS1_BITS); - } - }; - - template <> - struct Row<0> - { - static void idct(int* pTemp, const jpgd_block_t* pSrc) - { -#ifdef _MSC_VER - pTemp; pSrc; -#endif - } - }; - - template <> - struct Row<1> - { - static void idct(int* pTemp, const jpgd_block_t* pSrc) - { - const int dcval = (pSrc[0] << PASS1_BITS); - - pTemp[0] = dcval; - pTemp[1] = dcval; - pTemp[2] = dcval; - pTemp[3] = dcval; - pTemp[4] = dcval; - pTemp[5] = dcval; - pTemp[6] = dcval; - pTemp[7] = dcval; - } - }; - - // Compiler creates a fast path 1D IDCT for X non-zero rows - template - struct Col - { - static void idct(uint8* pDst_ptr, const int* pTemp) - { - // ACCESS_ROW() will be optimized at compile time to either an array access, or 0. -#define ACCESS_ROW(x) (((x) < NONZERO_ROWS) ? 
pTemp[x * 8] : 0) - - const int z2 = ACCESS_ROW(2); - const int z3 = ACCESS_ROW(6); - - const int z1 = MULTIPLY(z2 + z3, FIX_0_541196100); - const int tmp2 = z1 + MULTIPLY(z3, - FIX_1_847759065); - const int tmp3 = z1 + MULTIPLY(z2, FIX_0_765366865); - - const int tmp0 = (ACCESS_ROW(0) + ACCESS_ROW(4)) << CONST_BITS; - const int tmp1 = (ACCESS_ROW(0) - ACCESS_ROW(4)) << CONST_BITS; - - const int tmp10 = tmp0 + tmp3, tmp13 = tmp0 - tmp3, tmp11 = tmp1 + tmp2, tmp12 = tmp1 - tmp2; - - const int atmp0 = ACCESS_ROW(7), atmp1 = ACCESS_ROW(5), atmp2 = ACCESS_ROW(3), atmp3 = ACCESS_ROW(1); - - const int bz1 = atmp0 + atmp3, bz2 = atmp1 + atmp2, bz3 = atmp0 + atmp2, bz4 = atmp1 + atmp3; - const int bz5 = MULTIPLY(bz3 + bz4, FIX_1_175875602); - - const int az1 = MULTIPLY(bz1, - FIX_0_899976223); - const int az2 = MULTIPLY(bz2, - FIX_2_562915447); - const int az3 = MULTIPLY(bz3, - FIX_1_961570560) + bz5; - const int az4 = MULTIPLY(bz4, - FIX_0_390180644) + bz5; - - const int btmp0 = MULTIPLY(atmp0, FIX_0_298631336) + az1 + az3; - const int btmp1 = MULTIPLY(atmp1, FIX_2_053119869) + az2 + az4; - const int btmp2 = MULTIPLY(atmp2, FIX_3_072711026) + az2 + az3; - const int btmp3 = MULTIPLY(atmp3, FIX_1_501321110) + az1 + az4; - - int i = DESCALE_ZEROSHIFT(tmp10 + btmp3, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*0] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp10 - btmp3, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*7] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp11 + btmp2, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*1] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp11 - btmp2, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*6] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp12 + btmp1, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*2] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp12 - btmp1, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*5] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp13 + btmp0, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*3] = (uint8)CLAMP(i); - - i = DESCALE_ZEROSHIFT(tmp13 - btmp0, CONST_BITS+PASS1_BITS+3); - pDst_ptr[8*4] = (uint8)CLAMP(i); - } - }; - - template <> - struct Col<1> - { - static void idct(uint8* pDst_ptr, const int* pTemp) - { - int dcval = DESCALE_ZEROSHIFT(pTemp[0], PASS1_BITS+3); - const uint8 dcval_clamped = (uint8)CLAMP(dcval); - pDst_ptr[0*8] = dcval_clamped; - pDst_ptr[1*8] = dcval_clamped; - pDst_ptr[2*8] = dcval_clamped; - pDst_ptr[3*8] = dcval_clamped; - pDst_ptr[4*8] = dcval_clamped; - pDst_ptr[5*8] = dcval_clamped; - pDst_ptr[6*8] = dcval_clamped; - pDst_ptr[7*8] = dcval_clamped; - } - }; - - static const uint8 s_idct_row_table[] = - { - 1,0,0,0,0,0,0,0, 2,0,0,0,0,0,0,0, 2,1,0,0,0,0,0,0, 2,1,1,0,0,0,0,0, 2,2,1,0,0,0,0,0, 3,2,1,0,0,0,0,0, 4,2,1,0,0,0,0,0, 4,3,1,0,0,0,0,0, - 4,3,2,0,0,0,0,0, 4,3,2,1,0,0,0,0, 4,3,2,1,1,0,0,0, 4,3,2,2,1,0,0,0, 4,3,3,2,1,0,0,0, 4,4,3,2,1,0,0,0, 5,4,3,2,1,0,0,0, 6,4,3,2,1,0,0,0, - 6,5,3,2,1,0,0,0, 6,5,4,2,1,0,0,0, 6,5,4,3,1,0,0,0, 6,5,4,3,2,0,0,0, 6,5,4,3,2,1,0,0, 6,5,4,3,2,1,1,0, 6,5,4,3,2,2,1,0, 6,5,4,3,3,2,1,0, - 6,5,4,4,3,2,1,0, 6,5,5,4,3,2,1,0, 6,6,5,4,3,2,1,0, 7,6,5,4,3,2,1,0, 8,6,5,4,3,2,1,0, 8,7,5,4,3,2,1,0, 8,7,6,4,3,2,1,0, 8,7,6,5,3,2,1,0, - 8,7,6,5,4,2,1,0, 8,7,6,5,4,3,1,0, 8,7,6,5,4,3,2,0, 8,7,6,5,4,3,2,1, 8,7,6,5,4,3,2,2, 8,7,6,5,4,3,3,2, 8,7,6,5,4,4,3,2, 8,7,6,5,5,4,3,2, - 8,7,6,6,5,4,3,2, 8,7,7,6,5,4,3,2, 8,8,7,6,5,4,3,2, 8,8,8,6,5,4,3,2, 8,8,8,7,5,4,3,2, 8,8,8,7,6,4,3,2, 8,8,8,7,6,5,3,2, 8,8,8,7,6,5,4,2, - 8,8,8,7,6,5,4,3, 8,8,8,7,6,5,4,4, 8,8,8,7,6,5,5,4, 8,8,8,7,6,6,5,4, 8,8,8,7,7,6,5,4, 8,8,8,8,7,6,5,4, 8,8,8,8,8,6,5,4, 8,8,8,8,8,7,5,4, - 
8,8,8,8,8,7,6,4, 8,8,8,8,8,7,6,5, 8,8,8,8,8,7,6,6, 8,8,8,8,8,7,7,6, 8,8,8,8,8,8,7,6, 8,8,8,8,8,8,8,6, 8,8,8,8,8,8,8,7, 8,8,8,8,8,8,8,8, - }; - - static const uint8 s_idct_col_table[] = { 1, 1, 2, 3, 3, 3, 3, 3, 3, 4, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8 }; - - void idct(const jpgd_block_t* pSrc_ptr, uint8* pDst_ptr, int block_max_zag) - { - JPGD_ASSERT(block_max_zag >= 1); - JPGD_ASSERT(block_max_zag <= 64); - - if (block_max_zag == 1) - { - int k = ((pSrc_ptr[0] + 4) >> 3) + 128; - k = CLAMP(k); - k = k | (k<<8); - k = k | (k<<16); - - for (int i = 8; i > 0; i--) - { - *(int*)&pDst_ptr[0] = k; - *(int*)&pDst_ptr[4] = k; - pDst_ptr += 8; - } - return; - } - - int temp[64]; - - const jpgd_block_t* pSrc = pSrc_ptr; - int* pTemp = temp; - - const uint8* pRow_tab = &s_idct_row_table[(block_max_zag - 1) * 8]; - int i; - for (i = 8; i > 0; i--, pRow_tab++) - { - switch (*pRow_tab) - { - case 0: Row<0>::idct(pTemp, pSrc); break; - case 1: Row<1>::idct(pTemp, pSrc); break; - case 2: Row<2>::idct(pTemp, pSrc); break; - case 3: Row<3>::idct(pTemp, pSrc); break; - case 4: Row<4>::idct(pTemp, pSrc); break; - case 5: Row<5>::idct(pTemp, pSrc); break; - case 6: Row<6>::idct(pTemp, pSrc); break; - case 7: Row<7>::idct(pTemp, pSrc); break; - case 8: Row<8>::idct(pTemp, pSrc); break; - } - - pSrc += 8; - pTemp += 8; - } - - pTemp = temp; - - const int nonzero_rows = s_idct_col_table[block_max_zag - 1]; - for (i = 8; i > 0; i--) - { - switch (nonzero_rows) - { - case 1: Col<1>::idct(pDst_ptr, pTemp); break; - case 2: Col<2>::idct(pDst_ptr, pTemp); break; - case 3: Col<3>::idct(pDst_ptr, pTemp); break; - case 4: Col<4>::idct(pDst_ptr, pTemp); break; - case 5: Col<5>::idct(pDst_ptr, pTemp); break; - case 6: Col<6>::idct(pDst_ptr, pTemp); break; - case 7: Col<7>::idct(pDst_ptr, pTemp); break; - case 8: Col<8>::idct(pDst_ptr, pTemp); break; - } - - pTemp++; - pDst_ptr++; - } - } - - void idct_4x4(const jpgd_block_t* pSrc_ptr, uint8* pDst_ptr) - { - int temp[64]; - int* pTemp = temp; - const jpgd_block_t* pSrc = pSrc_ptr; - - for (int i = 4; i > 0; i--) - { - Row<4>::idct(pTemp, pSrc); - pSrc += 8; - pTemp += 8; - } - - pTemp = temp; - for (int i = 8; i > 0; i--) - { - Col<4>::idct(pDst_ptr, pTemp); - pTemp++; - pDst_ptr++; - } - } - - // Retrieve one character from the input stream. - inline uint jpeg_decoder::get_char() - { - // Any bytes remaining in buffer? - if (!m_in_buf_left) - { - // Try to get more bytes. - prep_in_buffer(); - // Still nothing to get? - if (!m_in_buf_left) - { - // Pad the end of the stream with 0xFF 0xD9 (EOI marker) - int t = m_tem_flag; - m_tem_flag ^= 1; - if (t) - return 0xD9; - else - return 0xFF; - } - } - - uint c = *m_pIn_buf_ofs++; - m_in_buf_left--; - - return c; - } - - // Same as previous method, except can indicate if the character is a pad character or not. - inline uint jpeg_decoder::get_char(bool *pPadding_flag) - { - if (!m_in_buf_left) - { - prep_in_buffer(); - if (!m_in_buf_left) - { - *pPadding_flag = true; - int t = m_tem_flag; - m_tem_flag ^= 1; - if (t) - return 0xD9; - else - return 0xFF; - } - } - - *pPadding_flag = false; - - uint c = *m_pIn_buf_ofs++; - m_in_buf_left--; - - return c; - } - - // Inserts a previously retrieved character back into the input buffer. 
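The `stuff_char`/`get_octet` pair below handles JPEG byte stuffing: inside entropy-coded data a literal 0xFF byte is stored as 0xFF 0x00, while 0xFF followed by any other nonzero byte introduces a marker, which `get_octet` refuses to consume (it keeps returning 0xFF instead). A rough Python sketch of that rule, independent of the decoder's buffering details:

```python
# Conceptual sketch of JPEG byte un-stuffing (not the decoder's actual API).
def unstuff(entropy_bytes: bytes):
    """Yield data bytes, stopping at the first marker (0xFF followed by a nonzero byte)."""
    i = 0
    while i < len(entropy_bytes):
        b = entropy_bytes[i]
        if b != 0xFF:
            yield b
            i += 1
        elif i + 1 < len(entropy_bytes) and entropy_bytes[i + 1] == 0x00:
            yield 0xFF      # 0xFF 0x00 encodes a literal 0xFF data byte
            i += 2
        else:
            return          # 0xFF followed by a nonzero byte: a marker, stop here

assert list(unstuff(bytes([0x12, 0xFF, 0x00, 0x34, 0xFF, 0xD9]))) == [0x12, 0xFF, 0x34]
```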
- inline void jpeg_decoder::stuff_char(uint8 q) - { - *(--m_pIn_buf_ofs) = q; - m_in_buf_left++; - } - - // Retrieves one character from the input stream, but does not read past markers. Will continue to return 0xFF when a marker is encountered. - inline uint8 jpeg_decoder::get_octet() - { - bool padding_flag; - int c = get_char(&padding_flag); - - if (c == 0xFF) - { - if (padding_flag) - return 0xFF; - - c = get_char(&padding_flag); - if (padding_flag) - { - stuff_char(0xFF); - return 0xFF; - } - - if (c == 0x00) - return 0xFF; - else - { - stuff_char(static_cast(c)); - stuff_char(0xFF); - return 0xFF; - } - } - - return static_cast(c); - } - - // Retrieves a variable number of bits from the input stream. Does not recognize markers. - inline uint jpeg_decoder::get_bits(int num_bits) - { - if (!num_bits) - return 0; - - uint i = m_bit_buf >> (32 - num_bits); - - if ((m_bits_left -= num_bits) <= 0) - { - m_bit_buf <<= (num_bits += m_bits_left); - - uint c1 = get_char(); - uint c2 = get_char(); - m_bit_buf = (m_bit_buf & 0xFFFF0000) | (c1 << 8) | c2; - - m_bit_buf <<= -m_bits_left; - - m_bits_left += 16; - - JPGD_ASSERT(m_bits_left >= 0); - } - else - m_bit_buf <<= num_bits; - - return i; - } - - // Retrieves a variable number of bits from the input stream. Markers will not be read into the input bit buffer. Instead, an infinite number of all 1's will be returned when a marker is encountered. - inline uint jpeg_decoder::get_bits_no_markers(int num_bits) - { - if (!num_bits) - return 0; - - uint i = m_bit_buf >> (32 - num_bits); - - if ((m_bits_left -= num_bits) <= 0) - { - m_bit_buf <<= (num_bits += m_bits_left); - - if ((m_in_buf_left < 2) || (m_pIn_buf_ofs[0] == 0xFF) || (m_pIn_buf_ofs[1] == 0xFF)) - { - uint c1 = get_octet(); - uint c2 = get_octet(); - m_bit_buf |= (c1 << 8) | c2; - } - else - { - m_bit_buf |= ((uint)m_pIn_buf_ofs[0] << 8) | m_pIn_buf_ofs[1]; - m_in_buf_left -= 2; - m_pIn_buf_ofs += 2; - } - - m_bit_buf <<= -m_bits_left; - - m_bits_left += 16; - - JPGD_ASSERT(m_bits_left >= 0); - } - else - m_bit_buf <<= num_bits; - - return i; - } - - // Decodes a Huffman encoded symbol. - inline int jpeg_decoder::huff_decode(huff_tables *pH) - { - int symbol; - - // Check first 8-bits: do we have a complete symbol? - if ((symbol = pH->look_up[m_bit_buf >> 24]) < 0) - { - // Decode more bits, use a tree traversal to find symbol. - int ofs = 23; - do - { - symbol = pH->tree[-(int)(symbol + ((m_bit_buf >> ofs) & 1))]; - ofs--; - } while (symbol < 0); - - get_bits_no_markers(8 + (23 - ofs)); - } - else - get_bits_no_markers(pH->code_size[symbol]); - - return symbol; - } - - // Decodes a Huffman encoded symbol. - inline int jpeg_decoder::huff_decode(huff_tables *pH, int& extra_bits) - { - int symbol; - - // Check first 8-bits: do we have a complete symbol? - if ((symbol = pH->look_up2[m_bit_buf >> 24]) < 0) - { - // Use a tree traversal to find symbol. - int ofs = 23; - do - { - symbol = pH->tree[-(int)(symbol + ((m_bit_buf >> ofs) & 1))]; - ofs--; - } while (symbol < 0); - - get_bits_no_markers(8 + (23 - ofs)); - - extra_bits = get_bits_no_markers(symbol & 0xF); - } - else - { - JPGD_ASSERT(((symbol >> 8) & 31) == pH->code_size[symbol & 255] + ((symbol & 0x8000) ? 
(symbol & 15) : 0)); - - if (symbol & 0x8000) - { - get_bits_no_markers((symbol >> 8) & 31); - extra_bits = symbol >> 16; - } - else - { - int code_size = (symbol >> 8) & 31; - int num_extra_bits = symbol & 0xF; - int bits = code_size + num_extra_bits; - if (bits <= (m_bits_left + 16)) - extra_bits = get_bits_no_markers(bits) & ((1 << num_extra_bits) - 1); - else - { - get_bits_no_markers(code_size); - extra_bits = get_bits_no_markers(num_extra_bits); - } - } - - symbol &= 0xFF; - } - - return symbol; - } - - // Tables and macro used to fully decode the DPCM differences. - static const int s_extend_test[16] = { 0, 0x0001, 0x0002, 0x0004, 0x0008, 0x0010, 0x0020, 0x0040, 0x0080, 0x0100, 0x0200, 0x0400, 0x0800, 0x1000, 0x2000, 0x4000 }; - static const int s_extend_offset[16] = { 0, -1, -3, -7, -15, -31, -63, -127, -255, -511, -1023, -2047, -4095, -8191, -16383, -32767 }; - static const int s_extend_mask[] = { 0, (1<<0), (1<<1), (1<<2), (1<<3), (1<<4), (1<<5), (1<<6), (1<<7), (1<<8), (1<<9), (1<<10), (1<<11), (1<<12), (1<<13), (1<<14), (1<<15), (1<<16) }; -#define HUFF_EXTEND(x,s) ((x) < s_extend_test[s] ? (x) + s_extend_offset[s] : (x)) - - // Clamps a value between 0-255. - inline uint8 jpeg_decoder::clamp(int i) - { - if (static_cast(i) > 255) - i = (((~i) >> 31) & 0xFF); - - return static_cast(i); - } - - namespace DCT_Upsample - { - struct Matrix44 - { - typedef int Element_Type; - enum { NUM_ROWS = 4, NUM_COLS = 4 }; - - Element_Type v[NUM_ROWS][NUM_COLS]; - - inline int rows() const { return NUM_ROWS; } - inline int cols() const { return NUM_COLS; } - - inline const Element_Type & at(int r, int c) const { return v[r][c]; } - inline Element_Type & at(int r, int c) { return v[r][c]; } - - inline Matrix44() { } - - inline Matrix44& operator += (const Matrix44& a) - { - for (int r = 0; r < NUM_ROWS; r++) - { - at(r, 0) += a.at(r, 0); - at(r, 1) += a.at(r, 1); - at(r, 2) += a.at(r, 2); - at(r, 3) += a.at(r, 3); - } - return *this; - } - - inline Matrix44& operator -= (const Matrix44& a) - { - for (int r = 0; r < NUM_ROWS; r++) - { - at(r, 0) -= a.at(r, 0); - at(r, 1) -= a.at(r, 1); - at(r, 2) -= a.at(r, 2); - at(r, 3) -= a.at(r, 3); - } - return *this; - } - - friend inline Matrix44 operator + (const Matrix44& a, const Matrix44& b) - { - Matrix44 ret; - for (int r = 0; r < NUM_ROWS; r++) - { - ret.at(r, 0) = a.at(r, 0) + b.at(r, 0); - ret.at(r, 1) = a.at(r, 1) + b.at(r, 1); - ret.at(r, 2) = a.at(r, 2) + b.at(r, 2); - ret.at(r, 3) = a.at(r, 3) + b.at(r, 3); - } - return ret; - } - - friend inline Matrix44 operator - (const Matrix44& a, const Matrix44& b) - { - Matrix44 ret; - for (int r = 0; r < NUM_ROWS; r++) - { - ret.at(r, 0) = a.at(r, 0) - b.at(r, 0); - ret.at(r, 1) = a.at(r, 1) - b.at(r, 1); - ret.at(r, 2) = a.at(r, 2) - b.at(r, 2); - ret.at(r, 3) = a.at(r, 3) - b.at(r, 3); - } - return ret; - } - - static inline void add_and_store(jpgd_block_t* pDst, const Matrix44& a, const Matrix44& b) - { - for (int r = 0; r < 4; r++) - { - pDst[0*8 + r] = static_cast(a.at(r, 0) + b.at(r, 0)); - pDst[1*8 + r] = static_cast(a.at(r, 1) + b.at(r, 1)); - pDst[2*8 + r] = static_cast(a.at(r, 2) + b.at(r, 2)); - pDst[3*8 + r] = static_cast(a.at(r, 3) + b.at(r, 3)); - } - } - - static inline void sub_and_store(jpgd_block_t* pDst, const Matrix44& a, const Matrix44& b) - { - for (int r = 0; r < 4; r++) - { - pDst[0*8 + r] = static_cast(a.at(r, 0) - b.at(r, 0)); - pDst[1*8 + r] = static_cast(a.at(r, 1) - b.at(r, 1)); - pDst[2*8 + r] = static_cast(a.at(r, 2) - b.at(r, 2)); - pDst[3*8 + r] = static_cast(a.at(r, 
3) - b.at(r, 3)); - } - } - }; - - const int FRACT_BITS = 10; - const int SCALE = 1 << FRACT_BITS; - - typedef int Temp_Type; -#define D(i) (((i) + (SCALE >> 1)) >> FRACT_BITS) -#define F(i) ((int)((i) * SCALE + .5f)) - - // Any decent C++ compiler will optimize this at compile time to a 0, or an array access. -#define AT(c, r) ((((c)>=NUM_COLS)||((r)>=NUM_ROWS)) ? 0 : pSrc[(c)+(r)*8]) - - // NUM_ROWS/NUM_COLS = # of non-zero rows/cols in input matrix - template - struct P_Q - { - static void calc(Matrix44& P, Matrix44& Q, const jpgd_block_t* pSrc) - { - // 4x8 = 4x8 times 8x8, matrix 0 is constant - const Temp_Type X000 = AT(0, 0); - const Temp_Type X001 = AT(0, 1); - const Temp_Type X002 = AT(0, 2); - const Temp_Type X003 = AT(0, 3); - const Temp_Type X004 = AT(0, 4); - const Temp_Type X005 = AT(0, 5); - const Temp_Type X006 = AT(0, 6); - const Temp_Type X007 = AT(0, 7); - const Temp_Type X010 = D(F(0.415735f) * AT(1, 0) + F(0.791065f) * AT(3, 0) + F(-0.352443f) * AT(5, 0) + F(0.277785f) * AT(7, 0)); - const Temp_Type X011 = D(F(0.415735f) * AT(1, 1) + F(0.791065f) * AT(3, 1) + F(-0.352443f) * AT(5, 1) + F(0.277785f) * AT(7, 1)); - const Temp_Type X012 = D(F(0.415735f) * AT(1, 2) + F(0.791065f) * AT(3, 2) + F(-0.352443f) * AT(5, 2) + F(0.277785f) * AT(7, 2)); - const Temp_Type X013 = D(F(0.415735f) * AT(1, 3) + F(0.791065f) * AT(3, 3) + F(-0.352443f) * AT(5, 3) + F(0.277785f) * AT(7, 3)); - const Temp_Type X014 = D(F(0.415735f) * AT(1, 4) + F(0.791065f) * AT(3, 4) + F(-0.352443f) * AT(5, 4) + F(0.277785f) * AT(7, 4)); - const Temp_Type X015 = D(F(0.415735f) * AT(1, 5) + F(0.791065f) * AT(3, 5) + F(-0.352443f) * AT(5, 5) + F(0.277785f) * AT(7, 5)); - const Temp_Type X016 = D(F(0.415735f) * AT(1, 6) + F(0.791065f) * AT(3, 6) + F(-0.352443f) * AT(5, 6) + F(0.277785f) * AT(7, 6)); - const Temp_Type X017 = D(F(0.415735f) * AT(1, 7) + F(0.791065f) * AT(3, 7) + F(-0.352443f) * AT(5, 7) + F(0.277785f) * AT(7, 7)); - const Temp_Type X020 = AT(4, 0); - const Temp_Type X021 = AT(4, 1); - const Temp_Type X022 = AT(4, 2); - const Temp_Type X023 = AT(4, 3); - const Temp_Type X024 = AT(4, 4); - const Temp_Type X025 = AT(4, 5); - const Temp_Type X026 = AT(4, 6); - const Temp_Type X027 = AT(4, 7); - const Temp_Type X030 = D(F(0.022887f) * AT(1, 0) + F(-0.097545f) * AT(3, 0) + F(0.490393f) * AT(5, 0) + F(0.865723f) * AT(7, 0)); - const Temp_Type X031 = D(F(0.022887f) * AT(1, 1) + F(-0.097545f) * AT(3, 1) + F(0.490393f) * AT(5, 1) + F(0.865723f) * AT(7, 1)); - const Temp_Type X032 = D(F(0.022887f) * AT(1, 2) + F(-0.097545f) * AT(3, 2) + F(0.490393f) * AT(5, 2) + F(0.865723f) * AT(7, 2)); - const Temp_Type X033 = D(F(0.022887f) * AT(1, 3) + F(-0.097545f) * AT(3, 3) + F(0.490393f) * AT(5, 3) + F(0.865723f) * AT(7, 3)); - const Temp_Type X034 = D(F(0.022887f) * AT(1, 4) + F(-0.097545f) * AT(3, 4) + F(0.490393f) * AT(5, 4) + F(0.865723f) * AT(7, 4)); - const Temp_Type X035 = D(F(0.022887f) * AT(1, 5) + F(-0.097545f) * AT(3, 5) + F(0.490393f) * AT(5, 5) + F(0.865723f) * AT(7, 5)); - const Temp_Type X036 = D(F(0.022887f) * AT(1, 6) + F(-0.097545f) * AT(3, 6) + F(0.490393f) * AT(5, 6) + F(0.865723f) * AT(7, 6)); - const Temp_Type X037 = D(F(0.022887f) * AT(1, 7) + F(-0.097545f) * AT(3, 7) + F(0.490393f) * AT(5, 7) + F(0.865723f) * AT(7, 7)); - - // 4x4 = 4x8 times 8x4, matrix 1 is constant - P.at(0, 0) = X000; - P.at(0, 1) = D(X001 * F(0.415735f) + X003 * F(0.791065f) + X005 * F(-0.352443f) + X007 * F(0.277785f)); - P.at(0, 2) = X004; - P.at(0, 3) = D(X001 * F(0.022887f) + X003 * F(-0.097545f) + X005 * 
F(0.490393f) + X007 * F(0.865723f)); - P.at(1, 0) = X010; - P.at(1, 1) = D(X011 * F(0.415735f) + X013 * F(0.791065f) + X015 * F(-0.352443f) + X017 * F(0.277785f)); - P.at(1, 2) = X014; - P.at(1, 3) = D(X011 * F(0.022887f) + X013 * F(-0.097545f) + X015 * F(0.490393f) + X017 * F(0.865723f)); - P.at(2, 0) = X020; - P.at(2, 1) = D(X021 * F(0.415735f) + X023 * F(0.791065f) + X025 * F(-0.352443f) + X027 * F(0.277785f)); - P.at(2, 2) = X024; - P.at(2, 3) = D(X021 * F(0.022887f) + X023 * F(-0.097545f) + X025 * F(0.490393f) + X027 * F(0.865723f)); - P.at(3, 0) = X030; - P.at(3, 1) = D(X031 * F(0.415735f) + X033 * F(0.791065f) + X035 * F(-0.352443f) + X037 * F(0.277785f)); - P.at(3, 2) = X034; - P.at(3, 3) = D(X031 * F(0.022887f) + X033 * F(-0.097545f) + X035 * F(0.490393f) + X037 * F(0.865723f)); - // 40 muls 24 adds - - // 4x4 = 4x8 times 8x4, matrix 1 is constant - Q.at(0, 0) = D(X001 * F(0.906127f) + X003 * F(-0.318190f) + X005 * F(0.212608f) + X007 * F(-0.180240f)); - Q.at(0, 1) = X002; - Q.at(0, 2) = D(X001 * F(-0.074658f) + X003 * F(0.513280f) + X005 * F(0.768178f) + X007 * F(-0.375330f)); - Q.at(0, 3) = X006; - Q.at(1, 0) = D(X011 * F(0.906127f) + X013 * F(-0.318190f) + X015 * F(0.212608f) + X017 * F(-0.180240f)); - Q.at(1, 1) = X012; - Q.at(1, 2) = D(X011 * F(-0.074658f) + X013 * F(0.513280f) + X015 * F(0.768178f) + X017 * F(-0.375330f)); - Q.at(1, 3) = X016; - Q.at(2, 0) = D(X021 * F(0.906127f) + X023 * F(-0.318190f) + X025 * F(0.212608f) + X027 * F(-0.180240f)); - Q.at(2, 1) = X022; - Q.at(2, 2) = D(X021 * F(-0.074658f) + X023 * F(0.513280f) + X025 * F(0.768178f) + X027 * F(-0.375330f)); - Q.at(2, 3) = X026; - Q.at(3, 0) = D(X031 * F(0.906127f) + X033 * F(-0.318190f) + X035 * F(0.212608f) + X037 * F(-0.180240f)); - Q.at(3, 1) = X032; - Q.at(3, 2) = D(X031 * F(-0.074658f) + X033 * F(0.513280f) + X035 * F(0.768178f) + X037 * F(-0.375330f)); - Q.at(3, 3) = X036; - // 40 muls 24 adds - } - }; - - template - struct R_S - { - static void calc(Matrix44& R, Matrix44& S, const jpgd_block_t* pSrc) - { - // 4x8 = 4x8 times 8x8, matrix 0 is constant - const Temp_Type X100 = D(F(0.906127f) * AT(1, 0) + F(-0.318190f) * AT(3, 0) + F(0.212608f) * AT(5, 0) + F(-0.180240f) * AT(7, 0)); - const Temp_Type X101 = D(F(0.906127f) * AT(1, 1) + F(-0.318190f) * AT(3, 1) + F(0.212608f) * AT(5, 1) + F(-0.180240f) * AT(7, 1)); - const Temp_Type X102 = D(F(0.906127f) * AT(1, 2) + F(-0.318190f) * AT(3, 2) + F(0.212608f) * AT(5, 2) + F(-0.180240f) * AT(7, 2)); - const Temp_Type X103 = D(F(0.906127f) * AT(1, 3) + F(-0.318190f) * AT(3, 3) + F(0.212608f) * AT(5, 3) + F(-0.180240f) * AT(7, 3)); - const Temp_Type X104 = D(F(0.906127f) * AT(1, 4) + F(-0.318190f) * AT(3, 4) + F(0.212608f) * AT(5, 4) + F(-0.180240f) * AT(7, 4)); - const Temp_Type X105 = D(F(0.906127f) * AT(1, 5) + F(-0.318190f) * AT(3, 5) + F(0.212608f) * AT(5, 5) + F(-0.180240f) * AT(7, 5)); - const Temp_Type X106 = D(F(0.906127f) * AT(1, 6) + F(-0.318190f) * AT(3, 6) + F(0.212608f) * AT(5, 6) + F(-0.180240f) * AT(7, 6)); - const Temp_Type X107 = D(F(0.906127f) * AT(1, 7) + F(-0.318190f) * AT(3, 7) + F(0.212608f) * AT(5, 7) + F(-0.180240f) * AT(7, 7)); - const Temp_Type X110 = AT(2, 0); - const Temp_Type X111 = AT(2, 1); - const Temp_Type X112 = AT(2, 2); - const Temp_Type X113 = AT(2, 3); - const Temp_Type X114 = AT(2, 4); - const Temp_Type X115 = AT(2, 5); - const Temp_Type X116 = AT(2, 6); - const Temp_Type X117 = AT(2, 7); - const Temp_Type X120 = D(F(-0.074658f) * AT(1, 0) + F(0.513280f) * AT(3, 0) + F(0.768178f) * AT(5, 0) + F(-0.375330f) * AT(7, 0)); - 
const Temp_Type X121 = D(F(-0.074658f) * AT(1, 1) + F(0.513280f) * AT(3, 1) + F(0.768178f) * AT(5, 1) + F(-0.375330f) * AT(7, 1)); - const Temp_Type X122 = D(F(-0.074658f) * AT(1, 2) + F(0.513280f) * AT(3, 2) + F(0.768178f) * AT(5, 2) + F(-0.375330f) * AT(7, 2)); - const Temp_Type X123 = D(F(-0.074658f) * AT(1, 3) + F(0.513280f) * AT(3, 3) + F(0.768178f) * AT(5, 3) + F(-0.375330f) * AT(7, 3)); - const Temp_Type X124 = D(F(-0.074658f) * AT(1, 4) + F(0.513280f) * AT(3, 4) + F(0.768178f) * AT(5, 4) + F(-0.375330f) * AT(7, 4)); - const Temp_Type X125 = D(F(-0.074658f) * AT(1, 5) + F(0.513280f) * AT(3, 5) + F(0.768178f) * AT(5, 5) + F(-0.375330f) * AT(7, 5)); - const Temp_Type X126 = D(F(-0.074658f) * AT(1, 6) + F(0.513280f) * AT(3, 6) + F(0.768178f) * AT(5, 6) + F(-0.375330f) * AT(7, 6)); - const Temp_Type X127 = D(F(-0.074658f) * AT(1, 7) + F(0.513280f) * AT(3, 7) + F(0.768178f) * AT(5, 7) + F(-0.375330f) * AT(7, 7)); - const Temp_Type X130 = AT(6, 0); - const Temp_Type X131 = AT(6, 1); - const Temp_Type X132 = AT(6, 2); - const Temp_Type X133 = AT(6, 3); - const Temp_Type X134 = AT(6, 4); - const Temp_Type X135 = AT(6, 5); - const Temp_Type X136 = AT(6, 6); - const Temp_Type X137 = AT(6, 7); - // 80 muls 48 adds - - // 4x4 = 4x8 times 8x4, matrix 1 is constant - R.at(0, 0) = X100; - R.at(0, 1) = D(X101 * F(0.415735f) + X103 * F(0.791065f) + X105 * F(-0.352443f) + X107 * F(0.277785f)); - R.at(0, 2) = X104; - R.at(0, 3) = D(X101 * F(0.022887f) + X103 * F(-0.097545f) + X105 * F(0.490393f) + X107 * F(0.865723f)); - R.at(1, 0) = X110; - R.at(1, 1) = D(X111 * F(0.415735f) + X113 * F(0.791065f) + X115 * F(-0.352443f) + X117 * F(0.277785f)); - R.at(1, 2) = X114; - R.at(1, 3) = D(X111 * F(0.022887f) + X113 * F(-0.097545f) + X115 * F(0.490393f) + X117 * F(0.865723f)); - R.at(2, 0) = X120; - R.at(2, 1) = D(X121 * F(0.415735f) + X123 * F(0.791065f) + X125 * F(-0.352443f) + X127 * F(0.277785f)); - R.at(2, 2) = X124; - R.at(2, 3) = D(X121 * F(0.022887f) + X123 * F(-0.097545f) + X125 * F(0.490393f) + X127 * F(0.865723f)); - R.at(3, 0) = X130; - R.at(3, 1) = D(X131 * F(0.415735f) + X133 * F(0.791065f) + X135 * F(-0.352443f) + X137 * F(0.277785f)); - R.at(3, 2) = X134; - R.at(3, 3) = D(X131 * F(0.022887f) + X133 * F(-0.097545f) + X135 * F(0.490393f) + X137 * F(0.865723f)); - // 40 muls 24 adds - // 4x4 = 4x8 times 8x4, matrix 1 is constant - S.at(0, 0) = D(X101 * F(0.906127f) + X103 * F(-0.318190f) + X105 * F(0.212608f) + X107 * F(-0.180240f)); - S.at(0, 1) = X102; - S.at(0, 2) = D(X101 * F(-0.074658f) + X103 * F(0.513280f) + X105 * F(0.768178f) + X107 * F(-0.375330f)); - S.at(0, 3) = X106; - S.at(1, 0) = D(X111 * F(0.906127f) + X113 * F(-0.318190f) + X115 * F(0.212608f) + X117 * F(-0.180240f)); - S.at(1, 1) = X112; - S.at(1, 2) = D(X111 * F(-0.074658f) + X113 * F(0.513280f) + X115 * F(0.768178f) + X117 * F(-0.375330f)); - S.at(1, 3) = X116; - S.at(2, 0) = D(X121 * F(0.906127f) + X123 * F(-0.318190f) + X125 * F(0.212608f) + X127 * F(-0.180240f)); - S.at(2, 1) = X122; - S.at(2, 2) = D(X121 * F(-0.074658f) + X123 * F(0.513280f) + X125 * F(0.768178f) + X127 * F(-0.375330f)); - S.at(2, 3) = X126; - S.at(3, 0) = D(X131 * F(0.906127f) + X133 * F(-0.318190f) + X135 * F(0.212608f) + X137 * F(-0.180240f)); - S.at(3, 1) = X132; - S.at(3, 2) = D(X131 * F(-0.074658f) + X133 * F(0.513280f) + X135 * F(0.768178f) + X137 * F(-0.375330f)); - S.at(3, 3) = X136; - // 40 muls 24 adds - } - }; - } // end namespace DCT_Upsample - - // Unconditionally frees all allocated m_blocks. 
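`free_all_blocks` and `alloc` below implement a simple grow-only arena: requests are bumped out of large malloc'd blocks kept on a linked list, and everything is released in one pass on teardown or error. A hedged Python sketch of the same idea; the class and method names are mine, not the decoder's:

```python
# Illustrative bump-pointer arena mirroring the block-list allocator below (names are mine).
class Arena:
    def __init__(self, block_size: int = 32768):
        self.block_size = block_size
        self.blocks = []                       # list of [bytearray, used_count]

    def alloc(self, n: int) -> memoryview:
        n = (n + 3) & ~3                       # round up to a 4-byte multiple, like the C++ version
        for block in self.blocks:
            buf, used = block
            if used + n <= len(buf):
                block[1] = used + n
                return memoryview(buf)[used:used + n]
        buf = bytearray(max(self.block_size, n))
        self.blocks.append([buf, n])
        return memoryview(buf)[:n]

    def free_all(self) -> None:
        self.blocks.clear()                    # drop every block at once, like free_all_blocks()
```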
- void jpeg_decoder::free_all_blocks() - { - m_pStream = NULL; - for (mem_block *b = m_pMem_blocks; b; ) - { - mem_block *n = b->m_pNext; - jpgd_free(b); - b = n; - } - m_pMem_blocks = NULL; - } - - // This method handles all errors. - // It could easily be changed to use C++ exceptions. - void jpeg_decoder::stop_decoding(jpgd_status status) - { - m_error_code = status; - free_all_blocks(); - longjmp(m_jmp_state, status); - - // we shouldn't get here as longjmp shouldn't return, but we put it here to make it explicit - // that this function doesn't return, otherwise we get this error: - // - // error : function declared 'noreturn' should not return - exit(1); - } - - void *jpeg_decoder::alloc(size_t nSize, bool zero) - { - nSize = (JPGD_MAX(nSize, 1) + 3) & ~3; - char *rv = NULL; - for (mem_block *b = m_pMem_blocks; b; b = b->m_pNext) - { - if ((b->m_used_count + nSize) <= b->m_size) - { - rv = b->m_data + b->m_used_count; - b->m_used_count += nSize; - break; - } - } - if (!rv) - { - int capacity = JPGD_MAX(32768 - 256, (nSize + 2047) & ~2047); - mem_block *b = (mem_block*)jpgd_malloc(sizeof(mem_block) + capacity); - if (!b) stop_decoding(JPGD_NOTENOUGHMEM); - b->m_pNext = m_pMem_blocks; m_pMem_blocks = b; - b->m_used_count = nSize; - b->m_size = capacity; - rv = b->m_data; - } - if (zero) memset(rv, 0, nSize); - return rv; - } - - void jpeg_decoder::word_clear(void *p, uint16 c, uint n) - { - uint8 *pD = (uint8*)p; - const uint8 l = c & 0xFF, h = (c >> 8) & 0xFF; - while (n) - { - pD[0] = l; pD[1] = h; pD += 2; - n--; - } - } - - // Refill the input buffer. - // This method will sit in a loop until (A) the buffer is full or (B) - // the stream's read() method reports and end of file condition. - void jpeg_decoder::prep_in_buffer() - { - m_in_buf_left = 0; - m_pIn_buf_ofs = m_in_buf; - - if (m_eof_flag) - return; - - do - { - int bytes_read = m_pStream->read(m_in_buf + m_in_buf_left, JPGD_IN_BUF_SIZE - m_in_buf_left, &m_eof_flag); - if (bytes_read == -1) - stop_decoding(JPGD_STREAM_READ); - - m_in_buf_left += bytes_read; - } while ((m_in_buf_left < JPGD_IN_BUF_SIZE) && (!m_eof_flag)); - - m_total_bytes_read += m_in_buf_left; - - // Pad the end of the block with M_EOI (prevents the decompressor from going off the rails if the stream is invalid). - // (This dates way back to when this decompressor was written in C/asm, and the all-asm Huffman decoder did some fancy things to increase perf.) - word_clear(m_pIn_buf_ofs + m_in_buf_left, 0xD9FF, 64); - } - - // Read a Huffman code table. 
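`read_dht_marker` below walks a DHT segment: a 2-byte length, then for each table a class/ID byte, 16 code-length counts, and the corresponding symbol values. A compact Python sketch of that layout (parsing only, no Huffman code assignment):

```python
# Sketch of DHT segment parsing; `segment` holds the bytes after the 0xFF 0xC4 marker.
def parse_dht(segment: bytes):
    length = int.from_bytes(segment[:2], "big")           # includes the two length bytes themselves
    pos, tables = 2, []
    while pos < length:
        tc_th = segment[pos]; pos += 1                    # high nibble: 0=DC, 1=AC; low nibble: table id
        counts = list(segment[pos:pos + 16]); pos += 16   # number of codes of each length 1..16
        total = sum(counts)
        values = list(segment[pos:pos + total]); pos += total
        tables.append({"class": tc_th >> 4, "id": tc_th & 0x0F,
                       "counts": counts, "values": values})
    return tables
```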
- void jpeg_decoder::read_dht_marker() - { - int i, index, count; - uint8 huff_num[17]; - uint8 huff_val[256]; - - uint num_left = get_bits(16); - - if (num_left < 2) - stop_decoding(JPGD_BAD_DHT_MARKER); - - num_left -= 2; - - while (num_left) - { - index = get_bits(8); - - huff_num[0] = 0; - - count = 0; - - for (i = 1; i <= 16; i++) - { - huff_num[i] = static_cast(get_bits(8)); - count += huff_num[i]; - } - - if (count > 255) - stop_decoding(JPGD_BAD_DHT_COUNTS); - - for (i = 0; i < count; i++) - huff_val[i] = static_cast(get_bits(8)); - - i = 1 + 16 + count; - - if (num_left < (uint)i) - stop_decoding(JPGD_BAD_DHT_MARKER); - - num_left -= i; - - if ((index & 0x10) > 0x10) - stop_decoding(JPGD_BAD_DHT_INDEX); - - index = (index & 0x0F) + ((index & 0x10) >> 4) * (JPGD_MAX_HUFF_TABLES >> 1); - - if (index >= JPGD_MAX_HUFF_TABLES) - stop_decoding(JPGD_BAD_DHT_INDEX); - - if (!m_huff_num[index]) - m_huff_num[index] = (uint8 *)alloc(17); - - if (!m_huff_val[index]) - m_huff_val[index] = (uint8 *)alloc(256); - - m_huff_ac[index] = (index & 0x10) != 0; - memcpy(m_huff_num[index], huff_num, 17); - memcpy(m_huff_val[index], huff_val, 256); - } - } - - // Read a quantization table. - void jpeg_decoder::read_dqt_marker() - { - int n, i, prec; - uint num_left; - uint temp; - - num_left = get_bits(16); - - if (num_left < 2) - stop_decoding(JPGD_BAD_DQT_MARKER); - - num_left -= 2; - - while (num_left) - { - n = get_bits(8); - prec = n >> 4; - n &= 0x0F; - - if (n >= JPGD_MAX_QUANT_TABLES) - stop_decoding(JPGD_BAD_DQT_TABLE); - - if (!m_quant[n]) - m_quant[n] = (jpgd_quant_t *)alloc(64 * sizeof(jpgd_quant_t)); - - // read quantization entries, in zag order - for (i = 0; i < 64; i++) - { - temp = get_bits(8); - - if (prec) - temp = (temp << 8) + get_bits(8); - - m_quant[n][i] = static_cast(temp); - } - - i = 64 + 1; - - if (prec) - i += 64; - - if (num_left < (uint)i) - stop_decoding(JPGD_BAD_DQT_LENGTH); - - num_left -= i; - } - } - - // Read the start of frame (SOF) marker. - void jpeg_decoder::read_sof_marker() - { - int i; - uint num_left; - - num_left = get_bits(16); - - if (get_bits(8) != 8) /* precision: sorry, only 8-bit precision is supported right now */ - stop_decoding(JPGD_BAD_PRECISION); - - m_image_y_size = get_bits(16); - - if ((m_image_y_size < 1) || (m_image_y_size > JPGD_MAX_HEIGHT)) - stop_decoding(JPGD_BAD_HEIGHT); - - m_image_x_size = get_bits(16); - - if ((m_image_x_size < 1) || (m_image_x_size > JPGD_MAX_WIDTH)) - stop_decoding(JPGD_BAD_WIDTH); - - m_comps_in_frame = get_bits(8); - - if (m_comps_in_frame > JPGD_MAX_COMPONENTS) - stop_decoding(JPGD_TOO_MANY_COMPONENTS); - - if (num_left != (uint)(m_comps_in_frame * 3 + 8)) - stop_decoding(JPGD_BAD_SOF_LENGTH); - - for (i = 0; i < m_comps_in_frame; i++) - { - m_comp_ident[i] = get_bits(8); - m_comp_h_samp[i] = get_bits(4); - m_comp_v_samp[i] = get_bits(4); - m_comp_quant[i] = get_bits(8); - } - } - - // Used to skip unrecognized markers. - void jpeg_decoder::skip_variable_marker() - { - uint num_left; - - num_left = get_bits(16); - - if (num_left < 2) - stop_decoding(JPGD_BAD_VARIABLE_MARKER); - - num_left -= 2; - - while (num_left) - { - get_bits(8); - num_left--; - } - } - - // Read a define restart interval (DRI) marker. - void jpeg_decoder::read_dri_marker() - { - if (get_bits(16) != 4) - stop_decoding(JPGD_BAD_DRI_LENGTH); - - m_restart_interval = get_bits(16); - } - - // Read a start of scan (SOS) marker. 
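`read_sos_marker` below parses the scan header: a component count, one (component id, DC/AC table selector) pair per component, then the spectral start/end and successive-approximation nibbles that progressive scans use. A small Python sketch of that layout:

```python
# Sketch of SOS scan-header parsing; `segment` holds the bytes after the 0xFF 0xDA marker.
def parse_sos(segment: bytes):
    n_components = segment[2]                        # bytes 0-1 are the segment length
    pos, components = 3, []
    for _ in range(n_components):
        comp_id, tables = segment[pos], segment[pos + 1]
        components.append((comp_id, tables >> 4, tables & 0x0F))   # (id, DC table, AC table)
        pos += 2
    ss, se, approx = segment[pos], segment[pos + 1], segment[pos + 2]
    return components, ss, se, approx >> 4, approx & 0x0F          # Ss, Se, Ah, Al
```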
- void jpeg_decoder::read_sos_marker() - { - uint num_left; - int i, ci, n, c, cc; - - num_left = get_bits(16); - - n = get_bits(8); - - m_comps_in_scan = n; - - num_left -= 3; - - if ( (num_left != (uint)(n * 2 + 3)) || (n < 1) || (n > JPGD_MAX_COMPS_IN_SCAN) ) - stop_decoding(JPGD_BAD_SOS_LENGTH); - - for (i = 0; i < n; i++) - { - cc = get_bits(8); - c = get_bits(8); - num_left -= 2; - - for (ci = 0; ci < m_comps_in_frame; ci++) - if (cc == m_comp_ident[ci]) - break; - - if (ci >= m_comps_in_frame) - stop_decoding(JPGD_BAD_SOS_COMP_ID); - - m_comp_list[i] = ci; - m_comp_dc_tab[ci] = (c >> 4) & 15; - m_comp_ac_tab[ci] = (c & 15) + (JPGD_MAX_HUFF_TABLES >> 1); - } - - m_spectral_start = get_bits(8); - m_spectral_end = get_bits(8); - m_successive_high = get_bits(4); - m_successive_low = get_bits(4); - - if (!m_progressive_flag) - { - m_spectral_start = 0; - m_spectral_end = 63; - } - - num_left -= 3; - - while (num_left) /* read past whatever is num_left */ - { - get_bits(8); - num_left--; - } - } - - // Finds the next marker. - int jpeg_decoder::next_marker() - { - uint c, bytes; - - bytes = 0; - - do - { - do - { - bytes++; - c = get_bits(8); - } while (c != 0xFF); - - do - { - c = get_bits(8); - } while (c == 0xFF); - - } while (c == 0); - - // If bytes > 0 here, there where extra bytes before the marker (not good). - - return c; - } - - // Process markers. Returns when an SOFx, SOI, EOI, or SOS marker is - // encountered. - int jpeg_decoder::process_markers() - { - int c; - - for ( ; ; ) - { - c = next_marker(); - - switch (c) - { - case M_SOF0: - case M_SOF1: - case M_SOF2: - case M_SOF3: - case M_SOF5: - case M_SOF6: - case M_SOF7: - // case M_JPG: - case M_SOF9: - case M_SOF10: - case M_SOF11: - case M_SOF13: - case M_SOF14: - case M_SOF15: - case M_SOI: - case M_EOI: - case M_SOS: - { - return c; - } - case M_DHT: - { - read_dht_marker(); - break; - } - // No arithmitic support - dumb patents! - case M_DAC: - { - stop_decoding(JPGD_NO_ARITHMITIC_SUPPORT); - break; - } - case M_DQT: - { - read_dqt_marker(); - break; - } - case M_DRI: - { - read_dri_marker(); - break; - } - //case M_APP0: /* no need to read the JFIF marker */ - - case M_JPG: - case M_RST0: /* no parameters */ - case M_RST1: - case M_RST2: - case M_RST3: - case M_RST4: - case M_RST5: - case M_RST6: - case M_RST7: - case M_TEM: - { - stop_decoding(JPGD_UNEXPECTED_MARKER); - break; - } - default: /* must be DNL, DHP, EXP, APPn, JPGn, COM, or RESn or APP0 */ - { - skip_variable_marker(); - break; - } - } - } - } - - // Finds the start of image (SOI) marker. - // This code is rather defensive: it only checks the first 512 bytes to avoid - // false positives. - void jpeg_decoder::locate_soi_marker() - { - uint lastchar, thischar; - uint bytesleft; - - lastchar = get_bits(8); - - thischar = get_bits(8); - - /* ok if it's a normal JPEG file without a special header */ - - if ((lastchar == 0xFF) && (thischar == M_SOI)) - return; - - bytesleft = 4096; //512; - - for ( ; ; ) - { - if (--bytesleft == 0) - stop_decoding(JPGD_NOT_JPEG); - - lastchar = thischar; - - thischar = get_bits(8); - - if (lastchar == 0xFF) - { - if (thischar == M_SOI) - break; - else if (thischar == M_EOI) // get_bits will keep returning M_EOI if we read past the end - stop_decoding(JPGD_NOT_JPEG); - } - } - - // Check the next character after marker: if it's not 0xFF, it can't be the start of the next marker, so the file is bad. 
- thischar = (m_bit_buf >> 24) & 0xFF; - - if (thischar != 0xFF) - stop_decoding(JPGD_NOT_JPEG); - } - - // Find a start of frame (SOF) marker. - void jpeg_decoder::locate_sof_marker() - { - locate_soi_marker(); - - int c = process_markers(); - - switch (c) - { - case M_SOF2: - m_progressive_flag = JPGD_TRUE; - case M_SOF0: /* baseline DCT */ - case M_SOF1: /* extended sequential DCT */ - { - read_sof_marker(); - break; - } - case M_SOF9: /* Arithmitic coding */ - { - stop_decoding(JPGD_NO_ARITHMITIC_SUPPORT); - break; - } - default: - { - stop_decoding(JPGD_UNSUPPORTED_MARKER); - break; - } - } - } - - // Find a start of scan (SOS) marker. - int jpeg_decoder::locate_sos_marker() - { - int c; - - c = process_markers(); - - if (c == M_EOI) - return JPGD_FALSE; - else if (c != M_SOS) - stop_decoding(JPGD_UNEXPECTED_MARKER); - - read_sos_marker(); - - return JPGD_TRUE; - } - - // Reset everything to default/uninitialized state. - void jpeg_decoder::init(jpeg_decoder_stream *pStream) - { - m_pMem_blocks = NULL; - m_error_code = JPGD_SUCCESS; - m_ready_flag = false; - m_image_x_size = m_image_y_size = 0; - m_pStream = pStream; - m_progressive_flag = JPGD_FALSE; - - memset(m_huff_ac, 0, sizeof(m_huff_ac)); - memset(m_huff_num, 0, sizeof(m_huff_num)); - memset(m_huff_val, 0, sizeof(m_huff_val)); - memset(m_quant, 0, sizeof(m_quant)); - - m_scan_type = 0; - m_comps_in_frame = 0; - - memset(m_comp_h_samp, 0, sizeof(m_comp_h_samp)); - memset(m_comp_v_samp, 0, sizeof(m_comp_v_samp)); - memset(m_comp_quant, 0, sizeof(m_comp_quant)); - memset(m_comp_ident, 0, sizeof(m_comp_ident)); - memset(m_comp_h_blocks, 0, sizeof(m_comp_h_blocks)); - memset(m_comp_v_blocks, 0, sizeof(m_comp_v_blocks)); - - m_comps_in_scan = 0; - memset(m_comp_list, 0, sizeof(m_comp_list)); - memset(m_comp_dc_tab, 0, sizeof(m_comp_dc_tab)); - memset(m_comp_ac_tab, 0, sizeof(m_comp_ac_tab)); - - m_spectral_start = 0; - m_spectral_end = 0; - m_successive_low = 0; - m_successive_high = 0; - m_max_mcu_x_size = 0; - m_max_mcu_y_size = 0; - m_blocks_per_mcu = 0; - m_max_blocks_per_row = 0; - m_mcus_per_row = 0; - m_mcus_per_col = 0; - m_expanded_blocks_per_component = 0; - m_expanded_blocks_per_mcu = 0; - m_expanded_blocks_per_row = 0; - m_freq_domain_chroma_upsample = false; - - memset(m_mcu_org, 0, sizeof(m_mcu_org)); - - m_total_lines_left = 0; - m_mcu_lines_left = 0; - m_real_dest_bytes_per_scan_line = 0; - m_dest_bytes_per_scan_line = 0; - m_dest_bytes_per_pixel = 0; - - memset(m_pHuff_tabs, 0, sizeof(m_pHuff_tabs)); - - memset(m_dc_coeffs, 0, sizeof(m_dc_coeffs)); - memset(m_ac_coeffs, 0, sizeof(m_ac_coeffs)); - memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu)); - - m_eob_run = 0; - - memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu)); - - m_pIn_buf_ofs = m_in_buf; - m_in_buf_left = 0; - m_eof_flag = false; - m_tem_flag = 0; - - memset(m_in_buf_pad_start, 0, sizeof(m_in_buf_pad_start)); - memset(m_in_buf, 0, sizeof(m_in_buf)); - memset(m_in_buf_pad_end, 0, sizeof(m_in_buf_pad_end)); - - m_restart_interval = 0; - m_restarts_left = 0; - m_next_restart_num = 0; - - m_max_mcus_per_row = 0; - m_max_blocks_per_mcu = 0; - m_max_mcus_per_col = 0; - - memset(m_last_dc_val, 0, sizeof(m_last_dc_val)); - m_pMCU_coefficients = NULL; - m_pSample_buf = NULL; - - m_total_bytes_read = 0; - - m_pScan_line_0 = NULL; - m_pScan_line_1 = NULL; - - // Ready the input buffer. - prep_in_buffer(); - - // Prime the bit buffer. 
-		m_bits_left = 16;
-		m_bit_buf = 0;
-
-		get_bits(16);
-		get_bits(16);
-
-		for (int i = 0; i < JPGD_MAX_BLOCKS_PER_MCU; i++)
-			m_mcu_block_max_zag[i] = 64;
-	}
-
-#define SCALEBITS 16
-#define ONE_HALF ((int) 1 << (SCALEBITS-1))
-#define FIX(x) ((int) ((x) * (1L<<SCALEBITS) + 0.5f))
-
-	// Create a few tables that allow us to quickly convert YCbCr to RGB.
-	void jpeg_decoder::create_look_ups()
-	{
-		for (int i = 0; i <= 255; i++)
-		{
-			int k = i - 128;
-			m_crr[i] = ( FIX(1.40200f) * k + ONE_HALF) >> SCALEBITS;
-			m_cbb[i] = ( FIX(1.77200f) * k + ONE_HALF) >> SCALEBITS;
-			m_crg[i] = (-FIX(0.71414f)) * k;
-			m_cbg[i] = (-FIX(0.34414f)) * k + ONE_HALF;
-		}
-	}
-
-	// This method throws back into the stream any bytes that were read
-	// into the bit buffer during initial marker scanning.
-	void jpeg_decoder::fix_in_buffer()
-	{
-		// In case any 0xFF's were pulled into the buffer during marker scanning.
-		JPGD_ASSERT((m_bits_left & 7) == 0);
-
-		if (m_bits_left == 16)
-			stuff_char( (uint8)(m_bit_buf & 0xFF));
-
-		if (m_bits_left >= 8)
-			stuff_char( (uint8)((m_bit_buf >> 8) & 0xFF));
-
-		stuff_char((uint8)((m_bit_buf >> 16) & 0xFF));
-		stuff_char((uint8)((m_bit_buf >> 24) & 0xFF));
-
-		m_bits_left = 16;
-		get_bits_no_markers(16);
-		get_bits_no_markers(16);
-	}
-
-	void jpeg_decoder::transform_mcu(int mcu_row)
-	{
-		jpgd_block_t* pSrc_ptr = m_pMCU_coefficients;
-		uint8* pDst_ptr = m_pSample_buf + mcu_row * m_blocks_per_mcu * 64;
-
-		for (int mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++)
-		{
-			idct(pSrc_ptr, pDst_ptr, m_mcu_block_max_zag[mcu_block]);
-			pSrc_ptr += 64;
-			pDst_ptr += 64;
-		}
-	}
-
-	static const uint8 s_max_rc[64] =
-	{
-		17, 18, 34, 50, 50, 51, 52, 52, 52, 68, 84, 84, 84, 84, 85, 86, 86, 86, 86, 86,
-		102, 118, 118, 118, 118, 118, 118, 119, 120, 120, 120, 120, 120, 120, 120, 136,
-		136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136,
-		136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136, 136
-	};
-
-	void jpeg_decoder::transform_mcu_expand(int mcu_row)
-	{
-		jpgd_block_t* pSrc_ptr = m_pMCU_coefficients;
-		uint8* pDst_ptr = m_pSample_buf + mcu_row * m_expanded_blocks_per_mcu * 64;
-
-		// Y IDCT
-		int mcu_block;
-		for (mcu_block = 0; mcu_block < m_expanded_blocks_per_component; mcu_block++)
-		{
-			idct(pSrc_ptr, pDst_ptr, m_mcu_block_max_zag[mcu_block]);
-			pSrc_ptr += 64;
-			pDst_ptr += 64;
-		}
-
-		// Chroma IDCT, with upsampling
-		jpgd_block_t temp_block[64];
-
-		for (int i = 0; i < 2; i++)
-		{
-			DCT_Upsample::Matrix44 P, Q, R, S;
-
-			JPGD_ASSERT(m_mcu_block_max_zag[mcu_block] >= 1);
-			JPGD_ASSERT(m_mcu_block_max_zag[mcu_block] <= 64);
-
-			switch (s_max_rc[m_mcu_block_max_zag[mcu_block++] - 1])
-			{
-			case 1*16+1:
-				DCT_Upsample::P_Q<1, 1>::calc(P, Q, pSrc_ptr);
-				DCT_Upsample::R_S<1, 1>::calc(R, S, pSrc_ptr);
-				break;
-			case 1*16+2:
-				DCT_Upsample::P_Q<1, 2>::calc(P, Q, pSrc_ptr);
-				DCT_Upsample::R_S<1, 2>::calc(R, S, pSrc_ptr);
-				break;
-			case 2*16+2:
-				DCT_Upsample::P_Q<2, 2>::calc(P, Q, pSrc_ptr);
-				DCT_Upsample::R_S<2, 2>::calc(R, S, pSrc_ptr);
-				break;
-			case 3*16+2:
-				DCT_Upsample::P_Q<3, 2>::calc(P, Q, pSrc_ptr);
-				DCT_Upsample::R_S<3, 2>::calc(R, S, pSrc_ptr);
-				break;
-			case 3*16+3:
-				DCT_Upsample::P_Q<3, 3>::calc(P, Q, pSrc_ptr);
-				DCT_Upsample::R_S<3, 3>::calc(R, S, pSrc_ptr);
-				break;
-			case 3*16+4:
-				DCT_Upsample::P_Q<3, 4>::calc(P, Q, pSrc_ptr);
-				DCT_Upsample::R_S<3, 4>::calc(R, S, pSrc_ptr);
-				break;
-			case 4*16+4:
-				DCT_Upsample::P_Q<4, 4>::calc(P, Q, pSrc_ptr);
-				DCT_Upsample::R_S<4, 4>::calc(R, S, pSrc_ptr);
-				break;
-			case 5*16+4:
-				DCT_Upsample::P_Q<5, 4>::calc(P, Q, pSrc_ptr);
-				DCT_Upsample::R_S<5, 4>::calc(R, S, pSrc_ptr);
-				break;
-			case 5*16+5:
-				DCT_Upsample::P_Q<5, 5>::calc(P, Q, pSrc_ptr);
-				DCT_Upsample::R_S<5, 5>::calc(R, S, pSrc_ptr);
-				break;
-
case 5*16+6: - DCT_Upsample::P_Q<5, 6>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<5, 6>::calc(R, S, pSrc_ptr); - break; - case 6*16+6: - DCT_Upsample::P_Q<6, 6>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<6, 6>::calc(R, S, pSrc_ptr); - break; - case 7*16+6: - DCT_Upsample::P_Q<7, 6>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<7, 6>::calc(R, S, pSrc_ptr); - break; - case 7*16+7: - DCT_Upsample::P_Q<7, 7>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<7, 7>::calc(R, S, pSrc_ptr); - break; - case 7*16+8: - DCT_Upsample::P_Q<7, 8>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<7, 8>::calc(R, S, pSrc_ptr); - break; - case 8*16+8: - DCT_Upsample::P_Q<8, 8>::calc(P, Q, pSrc_ptr); - DCT_Upsample::R_S<8, 8>::calc(R, S, pSrc_ptr); - break; - default: - JPGD_ASSERT(false); - } - - DCT_Upsample::Matrix44 a(P + Q); P -= Q; - DCT_Upsample::Matrix44& b = P; - DCT_Upsample::Matrix44 c(R + S); R -= S; - DCT_Upsample::Matrix44& d = R; - - DCT_Upsample::Matrix44::add_and_store(temp_block, a, c); - idct_4x4(temp_block, pDst_ptr); - pDst_ptr += 64; - - DCT_Upsample::Matrix44::sub_and_store(temp_block, a, c); - idct_4x4(temp_block, pDst_ptr); - pDst_ptr += 64; - - DCT_Upsample::Matrix44::add_and_store(temp_block, b, d); - idct_4x4(temp_block, pDst_ptr); - pDst_ptr += 64; - - DCT_Upsample::Matrix44::sub_and_store(temp_block, b, d); - idct_4x4(temp_block, pDst_ptr); - pDst_ptr += 64; - - pSrc_ptr += 64; - } - } - - // Loads and dequantizes the next row of (already decoded) coefficients. - // Progressive images only. - void jpeg_decoder::load_next_row() - { - int i; - jpgd_block_t *p; - jpgd_quant_t *q; - int mcu_row, mcu_block, row_block = 0; - int component_num, component_id; - int block_x_mcu[JPGD_MAX_COMPONENTS]; - - memset(block_x_mcu, 0, JPGD_MAX_COMPONENTS * sizeof(int)); - - for (mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++) - { - int block_x_mcu_ofs = 0, block_y_mcu_ofs = 0; - - for (mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++) - { - component_id = m_mcu_org[mcu_block]; - q = m_quant[m_comp_quant[component_id]]; - - p = m_pMCU_coefficients + 64 * mcu_block; - - jpgd_block_t* pAC = coeff_buf_getp(m_ac_coeffs[component_id], block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs); - jpgd_block_t* pDC = coeff_buf_getp(m_dc_coeffs[component_id], block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs); - p[0] = pDC[0]; - memcpy(&p[1], &pAC[1], 63 * sizeof(jpgd_block_t)); - - for (i = 63; i > 0; i--) - if (p[g_ZAG[i]]) - break; - - m_mcu_block_max_zag[mcu_block] = i + 1; - - for ( ; i >= 0; i--) - if (p[g_ZAG[i]]) - p[g_ZAG[i]] = static_cast(p[g_ZAG[i]] * q[i]); - - row_block++; - - if (m_comps_in_scan == 1) - block_x_mcu[component_id]++; - else - { - if (++block_x_mcu_ofs == m_comp_h_samp[component_id]) - { - block_x_mcu_ofs = 0; - - if (++block_y_mcu_ofs == m_comp_v_samp[component_id]) - { - block_y_mcu_ofs = 0; - - block_x_mcu[component_id] += m_comp_h_samp[component_id]; - } - } - } - } - - if (m_freq_domain_chroma_upsample) - transform_mcu_expand(mcu_row); - else - transform_mcu(mcu_row); - } - - if (m_comps_in_scan == 1) - m_block_y_mcu[m_comp_list[0]]++; - else - { - for (component_num = 0; component_num < m_comps_in_scan; component_num++) - { - component_id = m_comp_list[component_num]; - - m_block_y_mcu[component_id] += m_comp_v_samp[component_id]; - } - } - } - - // Restart interval processing. 
- void jpeg_decoder::process_restart() - { - int i; - int c = 0; - - // Align to a byte boundry - // FIXME: Is this really necessary? get_bits_no_markers() never reads in markers! - //get_bits_no_markers(m_bits_left & 7); - - // Let's scan a little bit to find the marker, but not _too_ far. - // 1536 is a "fudge factor" that determines how much to scan. - for (i = 1536; i > 0; i--) - if (get_char() == 0xFF) - break; - - if (i == 0) - stop_decoding(JPGD_BAD_RESTART_MARKER); - - for ( ; i > 0; i--) - if ((c = get_char()) != 0xFF) - break; - - if (i == 0) - stop_decoding(JPGD_BAD_RESTART_MARKER); - - // Is it the expected marker? If not, something bad happened. - if (c != (m_next_restart_num + M_RST0)) - stop_decoding(JPGD_BAD_RESTART_MARKER); - - // Reset each component's DC prediction values. - memset(&m_last_dc_val, 0, m_comps_in_frame * sizeof(uint)); - - m_eob_run = 0; - - m_restarts_left = m_restart_interval; - - m_next_restart_num = (m_next_restart_num + 1) & 7; - - // Get the bit buffer going again... - - m_bits_left = 16; - get_bits_no_markers(16); - get_bits_no_markers(16); - } - - static inline int dequantize_ac(int c, int q) { c *= q; return c; } - - // Decodes and dequantizes the next row of coefficients. - void jpeg_decoder::decode_next_row() - { - int row_block = 0; - - for (int mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++) - { - if ((m_restart_interval) && (m_restarts_left == 0)) - process_restart(); - - jpgd_block_t* p = m_pMCU_coefficients; - for (int mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++, p += 64) - { - int component_id = m_mcu_org[mcu_block]; - jpgd_quant_t* q = m_quant[m_comp_quant[component_id]]; - - int r, s; - s = huff_decode(m_pHuff_tabs[m_comp_dc_tab[component_id]], r); - s = HUFF_EXTEND(r, s); - - m_last_dc_val[component_id] = (s += m_last_dc_val[component_id]); - - p[0] = static_cast(s * q[0]); - - int prev_num_set = m_mcu_block_max_zag[mcu_block]; - - huff_tables *pH = m_pHuff_tabs[m_comp_ac_tab[component_id]]; - - int k; - for (k = 1; k < 64; k++) - { - int extra_bits; - s = huff_decode(pH, extra_bits); - - r = s >> 4; - s &= 15; - - if (s) - { - if (r) - { - if ((k + r) > 63) - stop_decoding(JPGD_DECODE_ERROR); - - if (k < prev_num_set) - { - int n = JPGD_MIN(r, prev_num_set - k); - int kt = k; - while (n--) - p[g_ZAG[kt++]] = 0; - } - - k += r; - } - - s = HUFF_EXTEND(extra_bits, s); - - JPGD_ASSERT(k < 64); - - p[g_ZAG[k]] = static_cast(dequantize_ac(s, q[k])); //s * q[k]; - } - else - { - if (r == 15) - { - if ((k + 16) > 64) - stop_decoding(JPGD_DECODE_ERROR); - - if (k < prev_num_set) - { - int n = JPGD_MIN(16, prev_num_set - k); - int kt = k; - while (n--) - { - JPGD_ASSERT(kt <= 63); - p[g_ZAG[kt++]] = 0; - } - } - - k += 16 - 1; // - 1 because the loop counter is k - // BEGIN EPIC MOD - JPGD_ASSERT(k < 64 && p[g_ZAG[k]] == 0); - // END EPIC MOD - } - else - break; - } - } - - if (k < prev_num_set) - { - int kt = k; - while (kt < prev_num_set) - p[g_ZAG[kt++]] = 0; - } - - m_mcu_block_max_zag[mcu_block] = k; - - row_block++; - } - - if (m_freq_domain_chroma_upsample) - transform_mcu_expand(mcu_row); - else - transform_mcu(mcu_row); - - m_restarts_left--; - } - } - - // YCbCr H1V1 (1x1:1:1, 3 m_blocks per MCU) to RGB - void jpeg_decoder::H1V1Convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - uint8 *d = m_pScan_line_0; - uint8 *s = m_pSample_buf + row * 8; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - for (int j = 0; j < 8; j++) - { - int y = s[j]; - int cb = s[64+j]; - int cr = s[128+j]; - - if (jpg_format == 
ERGBFormatJPG::BGRA) - { - d[0] = clamp(y + m_cbb[cb]); - d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16)); - d[2] = clamp(y + m_crr[cr]); - d[3] = 255; - } - else - { - d[0] = clamp(y + m_crr[cr]); - d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16)); - d[2] = clamp(y + m_cbb[cb]); - d[3] = 255; - } - d += 4; - } - - s += 64*3; - } - } - - // YCbCr H2V1 (2x1:1:1, 4 m_blocks per MCU) to RGB - void jpeg_decoder::H2V1Convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - uint8 *d0 = m_pScan_line_0; - uint8 *y = m_pSample_buf + row * 8; - uint8 *c = m_pSample_buf + 2*64 + row * 8; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - for (int l = 0; l < 2; l++) - { - for (int j = 0; j < 4; j++) - { - int cb = c[0]; - int cr = c[64]; - - int rc = m_crr[cr]; - int gc = ((m_crg[cr] + m_cbg[cb]) >> 16); - int bc = m_cbb[cb]; - - int yy = y[j<<1]; - if (jpg_format == ERGBFormatJPG::BGRA) - { - d0[0] = clamp(yy+bc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+rc); - d0[3] = 255; - yy = y[(j<<1)+1]; - d0[4] = clamp(yy+bc); - d0[5] = clamp(yy+gc); - d0[6] = clamp(yy+rc); - d0[7] = 255; - } - else - { - d0[0] = clamp(yy+rc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+bc); - d0[3] = 255; - yy = y[(j<<1)+1]; - d0[4] = clamp(yy+rc); - d0[5] = clamp(yy+gc); - d0[6] = clamp(yy+bc); - d0[7] = 255; - } - - d0 += 8; - - c++; - } - y += 64; - } - - y += 64*4 - 64*2; - c += 64*4 - 8; - } - } - - // YCbCr H2V1 (1x2:1:1, 4 m_blocks per MCU) to RGB - void jpeg_decoder::H1V2Convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - uint8 *d0 = m_pScan_line_0; - uint8 *d1 = m_pScan_line_1; - uint8 *y; - uint8 *c; - - if (row < 8) - y = m_pSample_buf + row * 8; - else - y = m_pSample_buf + 64*1 + (row & 7) * 8; - - c = m_pSample_buf + 64*2 + (row >> 1) * 8; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - for (int j = 0; j < 8; j++) - { - int cb = c[0+j]; - int cr = c[64+j]; - - int rc = m_crr[cr]; - int gc = ((m_crg[cr] + m_cbg[cb]) >> 16); - int bc = m_cbb[cb]; - - int yy = y[j]; - if (jpg_format == ERGBFormatJPG::BGRA) - { - d0[0] = clamp(yy+bc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+rc); - d0[3] = 255; - yy = y[8+j]; - d1[0] = clamp(yy+bc); - d1[1] = clamp(yy+gc); - d1[2] = clamp(yy+rc); - d1[3] = 255; - } - else - { - d0[0] = clamp(yy+rc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+bc); - d0[3] = 255; - yy = y[8+j]; - d1[0] = clamp(yy+rc); - d1[1] = clamp(yy+gc); - d1[2] = clamp(yy+bc); - d1[3] = 255; - } - - d0 += 4; - d1 += 4; - } - - y += 64*4; - c += 64*4; - } - } - - // YCbCr H2V2 (2x2:1:1, 6 m_blocks per MCU) to RGB - void jpeg_decoder::H2V2Convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - uint8 *d0 = m_pScan_line_0; - uint8 *d1 = m_pScan_line_1; - uint8 *y; - uint8 *c; - - if (row < 8) - y = m_pSample_buf + row * 8; - else - y = m_pSample_buf + 64*2 + (row & 7) * 8; - - c = m_pSample_buf + 64*4 + (row >> 1) * 8; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - for (int l = 0; l < 2; l++) - { - for (int j = 0; j < 8; j += 2) - { - int cb = c[0]; - int cr = c[64]; - - int rc = m_crr[cr]; - int gc = ((m_crg[cr] + m_cbg[cb]) >> 16); - int bc = m_cbb[cb]; - - int yy = y[j]; - if (jpg_format == ERGBFormatJPG::BGRA) - { - d0[0] = clamp(yy+bc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+rc); - d0[3] = 255; - yy = y[j+1]; - d0[4] = clamp(yy+bc); - d0[5] = clamp(yy+gc); - d0[6] = clamp(yy+rc); - d0[7] = 255; - yy = y[j+8]; - d1[0] = clamp(yy+bc); - d1[1] = clamp(yy+gc); - d1[2] = clamp(yy+rc); - d1[3] = 255; - yy = y[j+8+1]; - d1[4] = clamp(yy+bc); - d1[5] = 
clamp(yy+gc); - d1[6] = clamp(yy+rc); - d1[7] = 255; - } - else - { - d0[0] = clamp(yy+rc); - d0[1] = clamp(yy+gc); - d0[2] = clamp(yy+bc); - d0[3] = 255; - yy = y[j+1]; - d0[4] = clamp(yy+rc); - d0[5] = clamp(yy+gc); - d0[6] = clamp(yy+bc); - d0[7] = 255; - yy = y[j+8]; - d1[0] = clamp(yy+rc); - d1[1] = clamp(yy+gc); - d1[2] = clamp(yy+bc); - d1[3] = 255; - yy = y[j+8+1]; - d1[4] = clamp(yy+rc); - d1[5] = clamp(yy+gc); - d1[6] = clamp(yy+bc); - d1[7] = 255; - } - - d0 += 8; - d1 += 8; - - c++; - } - y += 64; - } - - y += 64*6 - 64*2; - c += 64*6 - 8; - } - } - - // Y (1 block per MCU) to 8-bit grayscale - void jpeg_decoder::gray_convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - uint8 *d = m_pScan_line_0; - uint8 *s = m_pSample_buf + row * 8; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - *(uint *)d = *(uint *)s; - *(uint *)(&d[4]) = *(uint *)(&s[4]); - - s += 64; - d += 8; - } - } - - void jpeg_decoder::expanded_convert() - { - int row = m_max_mcu_y_size - m_mcu_lines_left; - - uint8* Py = m_pSample_buf + (row / 8) * 64 * m_comp_h_samp[0] + (row & 7) * 8; - - uint8* d = m_pScan_line_0; - - for (int i = m_max_mcus_per_row; i > 0; i--) - { - for (int k = 0; k < m_max_mcu_x_size; k += 8) - { - const int Y_ofs = k * 8; - const int Cb_ofs = Y_ofs + 64 * m_expanded_blocks_per_component; - const int Cr_ofs = Y_ofs + 64 * m_expanded_blocks_per_component * 2; - for (int j = 0; j < 8; j++) - { - int y = Py[Y_ofs + j]; - int cb = Py[Cb_ofs + j]; - int cr = Py[Cr_ofs + j]; - - if (jpg_format == ERGBFormatJPG::BGRA) - { - d[0] = clamp(y + m_cbb[cb]); - d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16)); - d[2] = clamp(y + m_crr[cr]); - d[3] = 255; - } - else - { - d[0] = clamp(y + m_crr[cr]); - d[1] = clamp(y + ((m_crg[cr] + m_cbg[cb]) >> 16)); - d[2] = clamp(y + m_cbb[cb]); - d[3] = 255; - } - - d += 4; - } - } - - Py += 64 * m_expanded_blocks_per_mcu; - } - } - - // Find end of image (EOI) marker, so we can return to the user the exact size of the input stream. - void jpeg_decoder::find_eoi() - { - if (!m_progressive_flag) - { - // Attempt to read the EOI marker. - //get_bits_no_markers(m_bits_left & 7); - - // Prime the bit buffer - m_bits_left = 16; - get_bits(16); - get_bits(16); - - // The next marker _should_ be EOI - process_markers(); - } - - m_total_bytes_read -= m_in_buf_left; - } - - int jpeg_decoder::decode(const void** pScan_line, uint* pScan_line_len) - { - if ((m_error_code) || (!m_ready_flag)) - return JPGD_FAILED; - - if (m_total_lines_left == 0) - return JPGD_DONE; - - if (m_mcu_lines_left == 0) - { - if (setjmp(m_jmp_state)) - return JPGD_FAILED; - - if (m_progressive_flag) - load_next_row(); - else - decode_next_row(); - - // Find the EOI marker if that was the last row. 
- if (m_total_lines_left <= m_max_mcu_y_size) - find_eoi(); - - m_mcu_lines_left = m_max_mcu_y_size; - } - - if (m_freq_domain_chroma_upsample) - { - expanded_convert(); - *pScan_line = m_pScan_line_0; - } - else - { - switch (m_scan_type) - { - case JPGD_YH2V2: - { - if ((m_mcu_lines_left & 1) == 0) - { - H2V2Convert(); - *pScan_line = m_pScan_line_0; - } - else - *pScan_line = m_pScan_line_1; - - break; - } - case JPGD_YH2V1: - { - H2V1Convert(); - *pScan_line = m_pScan_line_0; - break; - } - case JPGD_YH1V2: - { - if ((m_mcu_lines_left & 1) == 0) - { - H1V2Convert(); - *pScan_line = m_pScan_line_0; - } - else - *pScan_line = m_pScan_line_1; - - break; - } - case JPGD_YH1V1: - { - H1V1Convert(); - *pScan_line = m_pScan_line_0; - break; - } - case JPGD_GRAYSCALE: - { - gray_convert(); - *pScan_line = m_pScan_line_0; - - break; - } - } - } - - *pScan_line_len = m_real_dest_bytes_per_scan_line; - - m_mcu_lines_left--; - m_total_lines_left--; - - return JPGD_SUCCESS; - } - - // Creates the tables needed for efficient Huffman decoding. - void jpeg_decoder::make_huff_table(int index, huff_tables *pH) - { - int p, i, l, si; - uint8 huffsize[257]; - uint huffcode[257]; - uint code; - uint subtree; - int code_size; - int lastp; - int nextfreeentry; - int currententry; - - pH->ac_table = m_huff_ac[index] != 0; - - p = 0; - - for (l = 1; l <= 16; l++) - { - for (i = 1; i <= m_huff_num[index][l]; i++) - huffsize[p++] = static_cast(l); - } - - huffsize[p] = 0; - - lastp = p; - - code = 0; - si = huffsize[0]; - p = 0; - - while (huffsize[p]) - { - while (huffsize[p] == si) - { - huffcode[p++] = code; - code++; - } - - code <<= 1; - si++; - } - - memset(pH->look_up, 0, sizeof(pH->look_up)); - memset(pH->look_up2, 0, sizeof(pH->look_up2)); - memset(pH->tree, 0, sizeof(pH->tree)); - memset(pH->code_size, 0, sizeof(pH->code_size)); - - nextfreeentry = -1; - - p = 0; - - while (p < lastp) - { - i = m_huff_val[index][p]; - code = huffcode[p]; - code_size = huffsize[p]; - - pH->code_size[i] = static_cast(code_size); - - if (code_size <= 8) - { - code <<= (8 - code_size); - - for (l = 1 << (8 - code_size); l > 0; l--) - { - JPGD_ASSERT(i < 256); - - pH->look_up[code] = i; - - bool has_extrabits = false; - int extra_bits = 0; - int num_extra_bits = i & 15; - - int bits_to_fetch = code_size; - if (num_extra_bits) - { - int total_codesize = code_size + num_extra_bits; - if (total_codesize <= 8) - { - has_extrabits = true; - extra_bits = ((1 << num_extra_bits) - 1) & (code >> (8 - total_codesize)); - JPGD_ASSERT(extra_bits <= 0x7FFF); - bits_to_fetch += num_extra_bits; - } - } - - if (!has_extrabits) - pH->look_up2[code] = i | (bits_to_fetch << 8); - else - pH->look_up2[code] = i | 0x8000 | (extra_bits << 16) | (bits_to_fetch << 8); - - code++; - } - } - else - { - subtree = (code >> (code_size - 8)) & 0xFF; - - currententry = pH->look_up[subtree]; - - if (currententry == 0) - { - pH->look_up[subtree] = currententry = nextfreeentry; - pH->look_up2[subtree] = currententry = nextfreeentry; - - nextfreeentry -= 2; - } - - code <<= (16 - (code_size - 8)); - - for (l = code_size; l > 9; l--) - { - if ((code & 0x8000) == 0) - currententry--; - - if (pH->tree[-currententry - 1] == 0) - { - pH->tree[-currententry - 1] = nextfreeentry; - - currententry = nextfreeentry; - - nextfreeentry -= 2; - } - else - currententry = pH->tree[-currententry - 1]; - - code <<= 1; - } - - if ((code & 0x8000) == 0) - currententry--; - - pH->tree[-currententry - 1] = i; - } - - p++; - } - } - - // Verifies the quantization tables needed for 
this scan are available. - void jpeg_decoder::check_quant_tables() - { - for (int i = 0; i < m_comps_in_scan; i++) - if (m_quant[m_comp_quant[m_comp_list[i]]] == NULL) - stop_decoding(JPGD_UNDEFINED_QUANT_TABLE); - } - - // Verifies that all the Huffman tables needed for this scan are available. - void jpeg_decoder::check_huff_tables() - { - for (int i = 0; i < m_comps_in_scan; i++) - { - if ((m_spectral_start == 0) && (m_huff_num[m_comp_dc_tab[m_comp_list[i]]] == NULL)) - stop_decoding(JPGD_UNDEFINED_HUFF_TABLE); - - if ((m_spectral_end > 0) && (m_huff_num[m_comp_ac_tab[m_comp_list[i]]] == NULL)) - stop_decoding(JPGD_UNDEFINED_HUFF_TABLE); - } - - for (int i = 0; i < JPGD_MAX_HUFF_TABLES; i++) - if (m_huff_num[i]) - { - if (!m_pHuff_tabs[i]) - m_pHuff_tabs[i] = (huff_tables *)alloc(sizeof(huff_tables)); - - make_huff_table(i, m_pHuff_tabs[i]); - } - } - - // Determines the component order inside each MCU. - // Also calcs how many MCU's are on each row, etc. - void jpeg_decoder::calc_mcu_block_order() - { - int component_num, component_id; - int max_h_samp = 0, max_v_samp = 0; - - for (component_id = 0; component_id < m_comps_in_frame; component_id++) - { - if (m_comp_h_samp[component_id] > max_h_samp) - max_h_samp = m_comp_h_samp[component_id]; - - if (m_comp_v_samp[component_id] > max_v_samp) - max_v_samp = m_comp_v_samp[component_id]; - } - - for (component_id = 0; component_id < m_comps_in_frame; component_id++) - { - m_comp_h_blocks[component_id] = ((((m_image_x_size * m_comp_h_samp[component_id]) + (max_h_samp - 1)) / max_h_samp) + 7) / 8; - m_comp_v_blocks[component_id] = ((((m_image_y_size * m_comp_v_samp[component_id]) + (max_v_samp - 1)) / max_v_samp) + 7) / 8; - } - - if (m_comps_in_scan == 1) - { - m_mcus_per_row = m_comp_h_blocks[m_comp_list[0]]; - m_mcus_per_col = m_comp_v_blocks[m_comp_list[0]]; - } - else - { - m_mcus_per_row = (((m_image_x_size + 7) / 8) + (max_h_samp - 1)) / max_h_samp; - m_mcus_per_col = (((m_image_y_size + 7) / 8) + (max_v_samp - 1)) / max_v_samp; - } - - if (m_comps_in_scan == 1) - { - m_mcu_org[0] = m_comp_list[0]; - - m_blocks_per_mcu = 1; - } - else - { - m_blocks_per_mcu = 0; - - for (component_num = 0; component_num < m_comps_in_scan; component_num++) - { - int num_blocks; - - component_id = m_comp_list[component_num]; - - num_blocks = m_comp_h_samp[component_id] * m_comp_v_samp[component_id]; - - while (num_blocks--) - m_mcu_org[m_blocks_per_mcu++] = component_id; - } - } - } - - // Starts a new scan. - int jpeg_decoder::init_scan() - { - if (!locate_sos_marker()) - return JPGD_FALSE; - - calc_mcu_block_order(); - - check_huff_tables(); - - check_quant_tables(); - - memset(m_last_dc_val, 0, m_comps_in_frame * sizeof(uint)); - - m_eob_run = 0; - - if (m_restart_interval) - { - m_restarts_left = m_restart_interval; - m_next_restart_num = 0; - } - - fix_in_buffer(); - - return JPGD_TRUE; - } - - // Starts a frame. Determines if the number of components or sampling factors - // are supported. 
- void jpeg_decoder::init_frame() - { - int i; - - if (m_comps_in_frame == 1) - { - if ((m_comp_h_samp[0] != 1) || (m_comp_v_samp[0] != 1)) - stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS); - - m_scan_type = JPGD_GRAYSCALE; - m_max_blocks_per_mcu = 1; - m_max_mcu_x_size = 8; - m_max_mcu_y_size = 8; - } - else if (m_comps_in_frame == 3) - { - if ( ((m_comp_h_samp[1] != 1) || (m_comp_v_samp[1] != 1)) || - ((m_comp_h_samp[2] != 1) || (m_comp_v_samp[2] != 1)) ) - stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS); - - if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 1)) - { - m_scan_type = JPGD_YH1V1; - - m_max_blocks_per_mcu = 3; - m_max_mcu_x_size = 8; - m_max_mcu_y_size = 8; - } - else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 1)) - { - m_scan_type = JPGD_YH2V1; - m_max_blocks_per_mcu = 4; - m_max_mcu_x_size = 16; - m_max_mcu_y_size = 8; - } - else if ((m_comp_h_samp[0] == 1) && (m_comp_v_samp[0] == 2)) - { - m_scan_type = JPGD_YH1V2; - m_max_blocks_per_mcu = 4; - m_max_mcu_x_size = 8; - m_max_mcu_y_size = 16; - } - else if ((m_comp_h_samp[0] == 2) && (m_comp_v_samp[0] == 2)) - { - m_scan_type = JPGD_YH2V2; - m_max_blocks_per_mcu = 6; - m_max_mcu_x_size = 16; - m_max_mcu_y_size = 16; - } - else - stop_decoding(JPGD_UNSUPPORTED_SAMP_FACTORS); - } - else - stop_decoding(JPGD_UNSUPPORTED_COLORSPACE); - - m_max_mcus_per_row = (m_image_x_size + (m_max_mcu_x_size - 1)) / m_max_mcu_x_size; - m_max_mcus_per_col = (m_image_y_size + (m_max_mcu_y_size - 1)) / m_max_mcu_y_size; - - // These values are for the *destination* pixels: after conversion. - if (m_scan_type == JPGD_GRAYSCALE) - m_dest_bytes_per_pixel = 1; - else - m_dest_bytes_per_pixel = 4; - - m_dest_bytes_per_scan_line = ((m_image_x_size + 15) & 0xFFF0) * m_dest_bytes_per_pixel; - - m_real_dest_bytes_per_scan_line = (m_image_x_size * m_dest_bytes_per_pixel); - - // Initialize two scan line buffers. - m_pScan_line_0 = (uint8 *)alloc(m_dest_bytes_per_scan_line, true); - if ((m_scan_type == JPGD_YH1V2) || (m_scan_type == JPGD_YH2V2)) - m_pScan_line_1 = (uint8 *)alloc(m_dest_bytes_per_scan_line, true); - - m_max_blocks_per_row = m_max_mcus_per_row * m_max_blocks_per_mcu; - - // Should never happen - if (m_max_blocks_per_row > JPGD_MAX_BLOCKS_PER_ROW) - stop_decoding(JPGD_ASSERTION_ERROR); - - // Allocate the coefficient buffer, enough for one MCU - m_pMCU_coefficients = (jpgd_block_t*)alloc(m_max_blocks_per_mcu * 64 * sizeof(jpgd_block_t)); - - for (i = 0; i < m_max_blocks_per_mcu; i++) - m_mcu_block_max_zag[i] = 64; - - m_expanded_blocks_per_component = m_comp_h_samp[0] * m_comp_v_samp[0]; - m_expanded_blocks_per_mcu = m_expanded_blocks_per_component * m_comps_in_frame; - m_expanded_blocks_per_row = m_max_mcus_per_row * m_expanded_blocks_per_mcu; - // Freq. domain chroma upsampling is only supported for H2V2 subsampling factor. -// BEGIN EPIC MOD -#if JPGD_SUPPORT_FREQ_DOMAIN_UPSAMPLING - m_freq_domain_chroma_upsample = (m_expanded_blocks_per_mcu == 4*3); -#else - m_freq_domain_chroma_upsample = 0; -#endif -// END EPIC MOD - - if (m_freq_domain_chroma_upsample) - m_pSample_buf = (uint8 *)alloc(m_expanded_blocks_per_row * 64); - else - m_pSample_buf = (uint8 *)alloc(m_max_blocks_per_row * 64); - - m_total_lines_left = m_image_y_size; - - m_mcu_lines_left = 0; - - create_look_ups(); - } - - // The coeff_buf series of methods originally stored the coefficients - // into a "virtual" file which was located in EMS, XMS, or a disk file. A cache - // was used to make this process more efficient. Now, we can store the entire - // thing in RAM. 
- jpeg_decoder::coeff_buf* jpeg_decoder::coeff_buf_open(int block_num_x, int block_num_y, int block_len_x, int block_len_y) - { - coeff_buf* cb = (coeff_buf*)alloc(sizeof(coeff_buf)); - - cb->block_num_x = block_num_x; - cb->block_num_y = block_num_y; - cb->block_len_x = block_len_x; - cb->block_len_y = block_len_y; - cb->block_size = (block_len_x * block_len_y) * sizeof(jpgd_block_t); - cb->pData = (uint8 *)alloc(cb->block_size * block_num_x * block_num_y, true); - return cb; - } - - inline jpgd_block_t *jpeg_decoder::coeff_buf_getp(coeff_buf *cb, int block_x, int block_y) - { - JPGD_ASSERT((block_x < cb->block_num_x) && (block_y < cb->block_num_y)); - return (jpgd_block_t *)(cb->pData + block_x * cb->block_size + block_y * (cb->block_size * cb->block_num_x)); - } - - // The following methods decode the various types of m_blocks encountered - // in progressively encoded images. - void jpeg_decoder::decode_block_dc_first(jpeg_decoder *pD, int component_id, int block_x, int block_y) - { - int s, r; - jpgd_block_t *p = pD->coeff_buf_getp(pD->m_dc_coeffs[component_id], block_x, block_y); - - if ((s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_dc_tab[component_id]])) != 0) - { - r = pD->get_bits_no_markers(s); - s = HUFF_EXTEND(r, s); - } - - pD->m_last_dc_val[component_id] = (s += pD->m_last_dc_val[component_id]); - - p[0] = static_cast(s << pD->m_successive_low); - } - - void jpeg_decoder::decode_block_dc_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y) - { - if (pD->get_bits_no_markers(1)) - { - jpgd_block_t *p = pD->coeff_buf_getp(pD->m_dc_coeffs[component_id], block_x, block_y); - - p[0] |= (1 << pD->m_successive_low); - } - } - - void jpeg_decoder::decode_block_ac_first(jpeg_decoder *pD, int component_id, int block_x, int block_y) - { - int k, s, r; - - if (pD->m_eob_run) - { - pD->m_eob_run--; - return; - } - - jpgd_block_t *p = pD->coeff_buf_getp(pD->m_ac_coeffs[component_id], block_x, block_y); - - for (k = pD->m_spectral_start; k <= pD->m_spectral_end; k++) - { - s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_ac_tab[component_id]]); - - r = s >> 4; - s &= 15; - - if (s) - { - if ((k += r) > 63) - pD->stop_decoding(JPGD_DECODE_ERROR); - - r = pD->get_bits_no_markers(s); - s = HUFF_EXTEND(r, s); - - p[g_ZAG[k]] = static_cast(s << pD->m_successive_low); - } - else - { - if (r == 15) - { - if ((k += 15) > 63) - pD->stop_decoding(JPGD_DECODE_ERROR); - } - else - { - pD->m_eob_run = 1 << r; - - if (r) - pD->m_eob_run += pD->get_bits_no_markers(r); - - pD->m_eob_run--; - - break; - } - } - } - } - - void jpeg_decoder::decode_block_ac_refine(jpeg_decoder *pD, int component_id, int block_x, int block_y) - { - int s, k, r; - int p1 = 1 << pD->m_successive_low; - int m1 = (-1) << pD->m_successive_low; - jpgd_block_t *p = pD->coeff_buf_getp(pD->m_ac_coeffs[component_id], block_x, block_y); - - k = pD->m_spectral_start; - - if (pD->m_eob_run == 0) - { - for ( ; k <= pD->m_spectral_end; k++) - { - s = pD->huff_decode(pD->m_pHuff_tabs[pD->m_comp_ac_tab[component_id]]); - - r = s >> 4; - s &= 15; - - if (s) - { - if (s != 1) - pD->stop_decoding(JPGD_DECODE_ERROR); - - if (pD->get_bits_no_markers(1)) - s = p1; - else - s = m1; - } - else - { - if (r != 15) - { - pD->m_eob_run = 1 << r; - - if (r) - pD->m_eob_run += pD->get_bits_no_markers(r); - - break; - } - } - - do - { - // BEGIN EPIC MOD - JPGD_ASSERT(k < 64); - // END EPIC MOD - - jpgd_block_t *this_coef = p + g_ZAG[k]; - - if (*this_coef != 0) - { - if (pD->get_bits_no_markers(1)) - { - if ((*this_coef & p1) == 0) - { - if 
(*this_coef >= 0) - *this_coef = static_cast(*this_coef + p1); - else - *this_coef = static_cast(*this_coef + m1); - } - } - } - else - { - if (--r < 0) - break; - } - - k++; - - } while (k <= pD->m_spectral_end); - - if ((s) && (k < 64)) - { - p[g_ZAG[k]] = static_cast(s); - } - } - } - - if (pD->m_eob_run > 0) - { - for ( ; k <= pD->m_spectral_end; k++) - { - // BEGIN EPIC MOD - JPGD_ASSERT(k < 64); - // END EPIC MOD - - jpgd_block_t *this_coef = p + g_ZAG[k]; - - if (*this_coef != 0) - { - if (pD->get_bits_no_markers(1)) - { - if ((*this_coef & p1) == 0) - { - if (*this_coef >= 0) - *this_coef = static_cast(*this_coef + p1); - else - *this_coef = static_cast(*this_coef + m1); - } - } - } - } - - pD->m_eob_run--; - } - } - - // Decode a scan in a progressively encoded image. - void jpeg_decoder::decode_scan(pDecode_block_func decode_block_func) - { - int mcu_row, mcu_col, mcu_block; - int block_x_mcu[JPGD_MAX_COMPONENTS], m_block_y_mcu[JPGD_MAX_COMPONENTS]; - - memset(m_block_y_mcu, 0, sizeof(m_block_y_mcu)); - - for (mcu_col = 0; mcu_col < m_mcus_per_col; mcu_col++) - { - int component_num, component_id; - - memset(block_x_mcu, 0, sizeof(block_x_mcu)); - - for (mcu_row = 0; mcu_row < m_mcus_per_row; mcu_row++) - { - int block_x_mcu_ofs = 0, block_y_mcu_ofs = 0; - - if ((m_restart_interval) && (m_restarts_left == 0)) - process_restart(); - - for (mcu_block = 0; mcu_block < m_blocks_per_mcu; mcu_block++) - { - component_id = m_mcu_org[mcu_block]; - - decode_block_func(this, component_id, block_x_mcu[component_id] + block_x_mcu_ofs, m_block_y_mcu[component_id] + block_y_mcu_ofs); - - if (m_comps_in_scan == 1) - block_x_mcu[component_id]++; - else - { - if (++block_x_mcu_ofs == m_comp_h_samp[component_id]) - { - block_x_mcu_ofs = 0; - - if (++block_y_mcu_ofs == m_comp_v_samp[component_id]) - { - block_y_mcu_ofs = 0; - block_x_mcu[component_id] += m_comp_h_samp[component_id]; - } - } - } - } - - m_restarts_left--; - } - - if (m_comps_in_scan == 1) - m_block_y_mcu[m_comp_list[0]]++; - else - { - for (component_num = 0; component_num < m_comps_in_scan; component_num++) - { - component_id = m_comp_list[component_num]; - m_block_y_mcu[component_id] += m_comp_v_samp[component_id]; - } - } - } - } - - // Decode a progressively encoded image. - void jpeg_decoder::init_progressive() - { - int i; - - if (m_comps_in_frame == 4) - stop_decoding(JPGD_UNSUPPORTED_COLORSPACE); - - // Allocate the coefficient buffers. 
- for (i = 0; i < m_comps_in_frame; i++) - { - m_dc_coeffs[i] = coeff_buf_open(m_max_mcus_per_row * m_comp_h_samp[i], m_max_mcus_per_col * m_comp_v_samp[i], 1, 1); - m_ac_coeffs[i] = coeff_buf_open(m_max_mcus_per_row * m_comp_h_samp[i], m_max_mcus_per_col * m_comp_v_samp[i], 8, 8); - } - - for ( ; ; ) - { - int dc_only_scan, refinement_scan; - pDecode_block_func decode_block_func; - - if (!init_scan()) - break; - - dc_only_scan = (m_spectral_start == 0); - refinement_scan = (m_successive_high != 0); - - if ((m_spectral_start > m_spectral_end) || (m_spectral_end > 63)) - stop_decoding(JPGD_BAD_SOS_SPECTRAL); - - if (dc_only_scan) - { - if (m_spectral_end) - stop_decoding(JPGD_BAD_SOS_SPECTRAL); - } - else if (m_comps_in_scan != 1) /* AC scans can only contain one component */ - stop_decoding(JPGD_BAD_SOS_SPECTRAL); - - if ((refinement_scan) && (m_successive_low != m_successive_high - 1)) - stop_decoding(JPGD_BAD_SOS_SUCCESSIVE); - - if (dc_only_scan) - { - if (refinement_scan) - decode_block_func = decode_block_dc_refine; - else - decode_block_func = decode_block_dc_first; - } - else - { - if (refinement_scan) - decode_block_func = decode_block_ac_refine; - else - decode_block_func = decode_block_ac_first; - } - - decode_scan(decode_block_func); - - m_bits_left = 16; - get_bits(16); - get_bits(16); - } - - m_comps_in_scan = m_comps_in_frame; - - for (i = 0; i < m_comps_in_frame; i++) - m_comp_list[i] = i; - - calc_mcu_block_order(); - } - - void jpeg_decoder::init_sequential() - { - if (!init_scan()) - stop_decoding(JPGD_UNEXPECTED_MARKER); - } - - void jpeg_decoder::decode_start() - { - init_frame(); - - if (m_progressive_flag) - init_progressive(); - else - init_sequential(); - } - - void jpeg_decoder::decode_init(jpeg_decoder_stream *pStream) - { - init(pStream); - locate_sof_marker(); - } - - jpeg_decoder::jpeg_decoder(jpeg_decoder_stream *pStream) - { - if (setjmp(m_jmp_state)) - return; - decode_init(pStream); - } - - int jpeg_decoder::begin_decoding() - { - if (m_ready_flag) - return JPGD_SUCCESS; - - if (m_error_code) - return JPGD_FAILED; - - if (setjmp(m_jmp_state)) - return JPGD_FAILED; - - decode_start(); - - m_ready_flag = true; - - return JPGD_SUCCESS; - } - - jpeg_decoder::~jpeg_decoder() - { - free_all_blocks(); - } - - jpeg_decoder_file_stream::jpeg_decoder_file_stream() - { - m_pFile = NULL; - m_eof_flag = false; - m_error_flag = false; - } - - void jpeg_decoder_file_stream::close() - { - if (m_pFile) - { - fclose(m_pFile); - m_pFile = NULL; - } - - m_eof_flag = false; - m_error_flag = false; - } - - jpeg_decoder_file_stream::~jpeg_decoder_file_stream() - { - close(); - } - - bool jpeg_decoder_file_stream::open(const char *Pfilename) - { - close(); - - m_eof_flag = false; - m_error_flag = false; - -#if defined(_MSC_VER) - m_pFile = NULL; - fopen_s(&m_pFile, Pfilename, "rb"); -#else - m_pFile = fopen(Pfilename, "rb"); -#endif - return m_pFile != NULL; - } - - int jpeg_decoder_file_stream::read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag) - { - if (!m_pFile) - return -1; - - if (m_eof_flag) - { - *pEOF_flag = true; - return 0; - } - - if (m_error_flag) - return -1; - - int bytes_read = static_cast(fread(pBuf, 1, max_bytes_to_read, m_pFile)); - if (bytes_read < max_bytes_to_read) - { - if (ferror(m_pFile)) - { - m_error_flag = true; - return -1; - } - - m_eof_flag = true; - *pEOF_flag = true; - } - - return bytes_read; - } - - bool jpeg_decoder_mem_stream::open(const uint8 *pSrc_data, uint size) - { - close(); - m_pSrc_data = pSrc_data; - m_ofs = 0; - m_size = size; - 
return true; - } - - int jpeg_decoder_mem_stream::read(uint8 *pBuf, int max_bytes_to_read, bool *pEOF_flag) - { - *pEOF_flag = false; - - if (!m_pSrc_data) - return -1; - - uint bytes_remaining = m_size - m_ofs; - if ((uint)max_bytes_to_read > bytes_remaining) - { - max_bytes_to_read = bytes_remaining; - *pEOF_flag = true; - } - - memcpy(pBuf, m_pSrc_data + m_ofs, max_bytes_to_read); - m_ofs += max_bytes_to_read; - - return max_bytes_to_read; - } - - unsigned char *decompress_jpeg_image_from_stream(jpeg_decoder_stream *pStream, int *width, int *height, int *actual_comps, int req_comps) - { - if (!actual_comps) - return NULL; - *actual_comps = 0; - - if ((!pStream) || (!width) || (!height) || (!req_comps)) - return NULL; - - if ((req_comps != 1) && (req_comps != 3) && (req_comps != 4)) - return NULL; - - jpeg_decoder decoder(pStream); - if (decoder.get_error_code() != JPGD_SUCCESS) - return NULL; - - const int image_width = decoder.get_width(), image_height = decoder.get_height(); - *width = image_width; - *height = image_height; - *actual_comps = decoder.get_num_components(); - - if (decoder.begin_decoding() != JPGD_SUCCESS) - return NULL; - - const int dst_bpl = image_width * req_comps; - - uint8 *pImage_data = (uint8*)jpgd_malloc(dst_bpl * image_height); - if (!pImage_data) - return NULL; - - for (int y = 0; y < image_height; y++) - { - const uint8* pScan_line = 0; - uint scan_line_len; - if (decoder.decode((const void**)&pScan_line, &scan_line_len) != JPGD_SUCCESS) - { - jpgd_free(pImage_data); - return NULL; - } - - uint8 *pDst = pImage_data + y * dst_bpl; - - if (((req_comps == 4) && (decoder.get_num_components() == 3)) || - ((req_comps == 1) && (decoder.get_num_components() == 1))) - { - memcpy(pDst, pScan_line, dst_bpl); - } - else if (decoder.get_num_components() == 1) - { - if (req_comps == 3) - { - for (int x = 0; x < image_width; x++) - { - uint8 luma = pScan_line[x]; - pDst[0] = luma; - pDst[1] = luma; - pDst[2] = luma; - pDst += 3; - } - } - else - { - for (int x = 0; x < image_width; x++) - { - uint8 luma = pScan_line[x]; - pDst[0] = luma; - pDst[1] = luma; - pDst[2] = luma; - pDst[3] = 255; - pDst += 4; - } - } - } - else if (decoder.get_num_components() == 3) - { - if (req_comps == 1) - { - const int YR = 19595, YG = 38470, YB = 7471; - for (int x = 0; x < image_width; x++) - { - int r = pScan_line[x*4+0]; - int g = pScan_line[x*4+1]; - int b = pScan_line[x*4+2]; - *pDst++ = static_cast((r * YR + g * YG + b * YB + 32768) >> 16); - } - } - else - { - for (int x = 0; x < image_width; x++) - { - pDst[0] = pScan_line[x*4+0]; - pDst[1] = pScan_line[x*4+1]; - pDst[2] = pScan_line[x*4+2]; - pDst += 3; - } - } - } - } - - return pImage_data; - } - -// BEGIN EPIC MOD - unsigned char *decompress_jpeg_image_from_memory(const unsigned char *pSrc_data, int src_data_size, int *width, int *height, int *actual_comps, int req_comps, int format) - { - jpg_format = (ERGBFormatJPG)format; -// EMD EPIC MOD - jpgd::jpeg_decoder_mem_stream mem_stream(pSrc_data, src_data_size); - return decompress_jpeg_image_from_stream(&mem_stream, width, height, actual_comps, req_comps); - } - - unsigned char *decompress_jpeg_image_from_file(const char *pSrc_filename, int *width, int *height, int *actual_comps, int req_comps) - { - jpgd::jpeg_decoder_file_stream file_stream; - if (!file_stream.open(pSrc_filename)) - return NULL; - return decompress_jpeg_image_from_stream(&file_stream, width, height, actual_comps, req_comps); - } - -} // namespace jpgd diff --git 
a/spaces/facebook/MusicGen/audiocraft/solvers/__init__.py b/spaces/facebook/MusicGen/audiocraft/solvers/__init__.py deleted file mode 100644 index ae19f3a8c51abf469697d6affa91449d668716ba..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/audiocraft/solvers/__init__.py +++ /dev/null @@ -1,17 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -""" -Solvers. A Solver is a training recipe, combining the dataloaders, models, -optimizer, losses etc into a single convenient object. -""" - -# flake8: noqa -from .audiogen import AudioGenSolver -from .builders import get_solver -from .base import StandardSolver -from .compression import CompressionSolver -from .musicgen import MusicGenSolver -from .diffusion import DiffusionSolver diff --git a/spaces/facebook/MusicGen/scripts/mos.py b/spaces/facebook/MusicGen/scripts/mos.py deleted file mode 100644 index a711c9ece23e72ed3a07032c7834ef7c56ab4f11..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/scripts/mos.py +++ /dev/null @@ -1,286 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - - -""" -To run this script, from the root of the repo. Make sure to have Flask installed - - FLASK_DEBUG=1 FLASK_APP=scripts.mos flask run -p 4567 - # or if you have gunicorn - gunicorn -w 4 -b 127.0.0.1:8895 -t 120 'scripts.mos:app' --access-logfile - - -""" -from collections import defaultdict -from functools import wraps -from hashlib import sha1 -import json -import math -from pathlib import Path -import random -import typing as tp - -from flask import Flask, redirect, render_template, request, session, url_for - -from audiocraft import train -from audiocraft.utils.samples.manager import get_samples_for_xps - - -SAMPLES_PER_PAGE = 8 -MAX_RATING = 5 -storage = Path(train.main.dora.dir / 'mos_storage') -storage.mkdir(exist_ok=True) -surveys = storage / 'surveys' -surveys.mkdir(exist_ok=True) -magma_root = Path(train.__file__).parent.parent -app = Flask('mos', static_folder=str(magma_root / 'scripts/static'), - template_folder=str(magma_root / 'scripts/templates')) -app.secret_key = b'audiocraft makes the best songs' - - -def normalize_path(path: Path): - """Just to make path a bit nicer, make them relative to the Dora root dir. - """ - path = path.resolve() - dora_dir = train.main.dora.dir.resolve() / 'xps' - return path.relative_to(dora_dir) - - -def get_full_path(normalized_path: Path): - """Revert `normalize_path`. - """ - return train.main.dora.dir.resolve() / 'xps' / normalized_path - - -def get_signature(xps: tp.List[str]): - """Return a signature for a list of XP signatures. - """ - return sha1(json.dumps(xps).encode()).hexdigest()[:10] - - -def ensure_logged(func): - """Ensure user is logged in. - """ - @wraps(func) - def _wrapped(*args, **kwargs): - user = session.get('user') - if user is None: - return redirect(url_for('login', redirect_to=request.url)) - return func(*args, **kwargs) - return _wrapped - - -@app.route('/login', methods=['GET', 'POST']) -def login(): - """Login user if not already, then redirect. 
- """ - user = session.get('user') - if user is None: - error = None - if request.method == 'POST': - user = request.form['user'] - if not user: - error = 'User cannot be empty' - if user is None or error: - return render_template('login.html', error=error) - assert user - session['user'] = user - redirect_to = request.args.get('redirect_to') - if redirect_to is None: - redirect_to = url_for('index') - return redirect(redirect_to) - - -@app.route('/', methods=['GET', 'POST']) -@ensure_logged -def index(): - """Offer to create a new study. - """ - errors = [] - if request.method == 'POST': - xps_or_grids = [part.strip() for part in request.form['xps'].split()] - xps = set() - for xp_or_grid in xps_or_grids: - xp_path = train.main.dora.dir / 'xps' / xp_or_grid - if xp_path.exists(): - xps.add(xp_or_grid) - continue - grid_path = train.main.dora.dir / 'grids' / xp_or_grid - if grid_path.exists(): - for child in grid_path.iterdir(): - if child.is_symlink(): - xps.add(child.name) - continue - errors.append(f'{xp_or_grid} is neither an XP nor a grid!') - assert xps or errors - blind = 'true' if request.form.get('blind') == 'on' else 'false' - xps = list(xps) - if not errors: - signature = get_signature(xps) - manifest = { - 'xps': xps, - } - survey_path = surveys / signature - survey_path.mkdir(exist_ok=True) - with open(survey_path / 'manifest.json', 'w') as f: - json.dump(manifest, f, indent=2) - return redirect(url_for('survey', blind=blind, signature=signature)) - return render_template('index.html', errors=errors) - - -@app.route('/survey/', methods=['GET', 'POST']) -@ensure_logged -def survey(signature): - success = request.args.get('success', False) - seed = int(request.args.get('seed', 4321)) - blind = request.args.get('blind', 'false') in ['true', 'on', 'True'] - exclude_prompted = request.args.get('exclude_prompted', 'false') in ['true', 'on', 'True'] - exclude_unprompted = request.args.get('exclude_unprompted', 'false') in ['true', 'on', 'True'] - max_epoch = int(request.args.get('max_epoch', '-1')) - survey_path = surveys / signature - assert survey_path.exists(), survey_path - - user = session['user'] - result_folder = survey_path / 'results' - result_folder.mkdir(exist_ok=True) - result_file = result_folder / f'{user}_{seed}.json' - - with open(survey_path / 'manifest.json') as f: - manifest = json.load(f) - - xps = [train.main.get_xp_from_sig(xp) for xp in manifest['xps']] - names, ref_name = train.main.get_names(xps) - - samples_kwargs = { - 'exclude_prompted': exclude_prompted, - 'exclude_unprompted': exclude_unprompted, - 'max_epoch': max_epoch, - } - matched_samples = get_samples_for_xps(xps, epoch=-1, **samples_kwargs) # fetch latest epoch - models_by_id = { - id: [{ - 'xp': xps[idx], - 'xp_name': names[idx], - 'model_id': f'{xps[idx].sig}-{sample.id}', - 'sample': sample, - 'is_prompted': sample.prompt is not None, - 'errors': [], - } for idx, sample in enumerate(samples)] - for id, samples in matched_samples.items() - } - experiments = [ - {'xp': xp, 'name': names[idx], 'epoch': list(matched_samples.values())[0][idx].epoch} - for idx, xp in enumerate(xps) - ] - - keys = list(matched_samples.keys()) - keys.sort() - rng = random.Random(seed) - rng.shuffle(keys) - model_ids = keys[:SAMPLES_PER_PAGE] - - if blind: - for key in model_ids: - rng.shuffle(models_by_id[key]) - - ok = True - if request.method == 'POST': - all_samples_results = [] - for id in model_ids: - models = models_by_id[id] - result = { - 'id': id, - 'is_prompted': models[0]['is_prompted'], - 'models': {} - } - 
all_samples_results.append(result) - for model in models: - rating = request.form[model['model_id']] - if rating: - rating = int(rating) - assert rating <= MAX_RATING and rating >= 1 - result['models'][model['xp'].sig] = rating - model['rating'] = rating - else: - ok = False - model['errors'].append('Please rate this model.') - if ok: - result = { - 'results': all_samples_results, - 'seed': seed, - 'user': user, - 'blind': blind, - 'exclude_prompted': exclude_prompted, - 'exclude_unprompted': exclude_unprompted, - } - print(result) - with open(result_file, 'w') as f: - json.dump(result, f) - seed = seed + 1 - return redirect(url_for( - 'survey', signature=signature, blind=blind, seed=seed, - exclude_prompted=exclude_prompted, exclude_unprompted=exclude_unprompted, - max_epoch=max_epoch, success=True)) - - ratings = list(range(1, MAX_RATING + 1)) - return render_template( - 'survey.html', ratings=ratings, blind=blind, seed=seed, signature=signature, success=success, - exclude_prompted=exclude_prompted, exclude_unprompted=exclude_unprompted, max_epoch=max_epoch, - experiments=experiments, models_by_id=models_by_id, model_ids=model_ids, errors=[], - ref_name=ref_name, already_filled=result_file.exists()) - - -@app.route('/audio/') -def audio(path: str): - full_path = Path('/') / path - assert full_path.suffix in [".mp3", ".wav"] - return full_path.read_bytes(), {'Content-Type': 'audio/mpeg'} - - -def mean(x): - return sum(x) / len(x) - - -def std(x): - m = mean(x) - return math.sqrt(sum((i - m)**2 for i in x) / len(x)) - - -@app.route('/results/') -@ensure_logged -def results(signature): - - survey_path = surveys / signature - assert survey_path.exists(), survey_path - result_folder = survey_path / 'results' - result_folder.mkdir(exist_ok=True) - - # ratings per model, then per user. - ratings_per_model = defaultdict(list) - users = [] - for result_file in result_folder.iterdir(): - if result_file.suffix != '.json': - continue - with open(result_file) as f: - results = json.load(f) - users.append(results['user']) - for result in results['results']: - for sig, rating in result['models'].items(): - ratings_per_model[sig].append(rating) - - fmt = '{:.2f}' - models = [] - for model in sorted(ratings_per_model.keys()): - ratings = ratings_per_model[model] - - models.append({ - 'sig': model, - 'samples': len(ratings), - 'mean_rating': fmt.format(mean(ratings)), - # the value 1.96 was probably chosen to achieve some - # confidence interval assuming gaussianity. - 'std_rating': fmt.format(1.96 * std(ratings) / len(ratings)**0.5), - }) - return render_template('results.html', signature=signature, models=models, users=users) diff --git a/spaces/facebook/ov-seg/open_vocab_seg/modeling/clip_adapter/adapter.py b/spaces/facebook/ov-seg/open_vocab_seg/modeling/clip_adapter/adapter.py deleted file mode 100644 index 864d20b160714865b4130fab8714f323aaae2572..0000000000000000000000000000000000000000 --- a/spaces/facebook/ov-seg/open_vocab_seg/modeling/clip_adapter/adapter.py +++ /dev/null @@ -1,206 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Copyright (c) Meta Platforms, Inc. 
All Rights Reserved -# Modified by Feng Liang from -# https://github.com/MendelXu/zsseg.baseline/blob/master/mask_former/modeling/clip_adapter/adapter.py - -from typing import List -import torch -from torch import nn -from torch.nn import functional as F -from detectron2.structures import BitMasks -from .utils import build_clip_model, crop_with_mask -from .text_template import PromptExtractor - - -PIXEL_MEAN = (0.48145466, 0.4578275, 0.40821073) -PIXEL_STD = (0.26862954, 0.26130258, 0.27577711) - - -class ClipAdapter(nn.Module): - def __init__(self, clip_model_name: str, mask_prompt_depth: int, text_templates: PromptExtractor): - super().__init__() - self.clip_model = build_clip_model(clip_model_name, mask_prompt_depth) - self.text_templates = text_templates - self.text_templates.init_buffer(self.clip_model) - self.text_feature_buffer = {} - - def forward(self, image: torch.Tensor, text: List[str], **kwargs): - image = self._preprocess_image(image, **kwargs) - text_feature = self.get_text_features(text) # k,feat_dim - image_features = self.get_image_features(image) - return self.get_sim_logits(text_feature, image_features) - - def _preprocess_image(self, image: torch.Tensor): - return image - - def _get_text_features(self, noun_list: List[str]): - left_noun_list = [ - noun for noun in noun_list if noun not in self.text_feature_buffer - ] - if len(left_noun_list) > 0: - left_text_features = self.text_templates( - left_noun_list, self.clip_model - ) - self.text_feature_buffer.update( - { - noun: text_feature - for noun, text_feature in zip( - left_noun_list, left_text_features - ) - } - ) - return torch.stack([self.text_feature_buffer[noun] for noun in noun_list]) - - - def get_text_features(self, noun_list: List[str]): - return self._get_text_features(noun_list) - - def get_image_features(self, image: torch.Tensor): - image_features = self.clip_model.visual(image) - image_features = image_features / image_features.norm(dim=-1, keepdim=True) - return image_features - - def get_sim_logits( - self, - text_features: torch.Tensor, - image_features: torch.Tensor, - temperature: float = 100, - ): - return temperature * image_features @ text_features.T - - def normalize_feature(self, feat: torch.Tensor): - return feat / feat.norm(dim=-1, keepdim=True) - - -class MaskFormerClipAdapter(ClipAdapter): - def __init__( - self, - clip_model_name: str, - text_templates: PromptExtractor, - mask_fill: str = "mean", - mask_expand_ratio: float = 1.0, - mask_thr: float = 0.5, - mask_matting: bool = False, - region_resized: bool = True, - mask_prompt_depth: int = 0, - mask_prompt_fwd: bool = False, - ): - super().__init__(clip_model_name, mask_prompt_depth, text_templates) - self.non_object_embedding = nn.Parameter( - torch.empty(1, self.clip_model.text_projection.shape[-1]) - ) - nn.init.normal_( - self.non_object_embedding.data, - std=self.clip_model.transformer.width ** -0.5, - ) - # for test - self.mask_fill = mask_fill - if self.mask_fill == "zero": - self.mask_fill = (0.0, 0.0, 0.0) - elif self.mask_fill == "mean": - self.mask_fill = [255.0 * c for c in PIXEL_MEAN] - else: - raise NotImplementedError( - "Unknown mask_fill method: {}".format(self.mask_fill) - ) - self.mask_expand_ratio = mask_expand_ratio - self.mask_thr = mask_thr - self.mask_matting = mask_matting - self.region_resized = region_resized - self.mask_prompt_fwd = mask_prompt_fwd - self.register_buffer( - "pixel_mean", torch.Tensor(PIXEL_MEAN).reshape(1, 3, 1, 1) * 255.0 - ) - self.register_buffer( - "pixel_std", 
torch.Tensor(PIXEL_STD).reshape(1, 3, 1, 1) * 255.0 - ) - - def forward( - self, - image: torch.Tensor, - text: List[str], - mask: torch.Tensor, - normalize: bool = True, - fwd_w_region_mask: bool = False, - ): - (regions, unnorm_regions), region_masks, valid_flag = self._preprocess_image(image, mask, normalize=normalize) - if regions is None: - return None, valid_flag - if isinstance(regions, list): - assert NotImplementedError - image_features = torch.cat( - [self.get_image_features(image_i) for image_i in regions], dim=0 - ) - else: - if self.mask_prompt_fwd: - image_features = self.get_image_features(regions, region_masks) - else: - image_features = self.get_image_features(regions) - text_feature = self.get_text_features(text) # k,feat_dim - return self.get_sim_logits(text_feature, image_features), unnorm_regions, valid_flag - - def get_image_features(self, image, region_masks=None): - image_features = self.clip_model.visual(image, region_masks) - image_features = image_features / image_features.norm(dim=-1, keepdim=True) - return image_features - - def _preprocess_image( - self, image: torch.Tensor, mask: torch.Tensor, normalize: bool = True - ): - """crop, mask and normalize the image - - Args: - image ([type]): [C,H,W] - mask ([type]): [K,H,W - normalize (bool, optional): [description]. Defaults to True. - """ - dtype = mask.dtype - bin_mask = mask > self.mask_thr - valid = bin_mask.sum(dim=(-1, -2)) > 0 - bin_mask = bin_mask[valid] - mask = mask[valid] - if not self.mask_matting: - mask = bin_mask - bin_mask = BitMasks(bin_mask) - bboxes = bin_mask.get_bounding_boxes() - # crop,mask - regions = [] - region_masks = [] - for bbox, single_mask in zip(bboxes, mask): - region, region_mask = crop_with_mask( - image.type(dtype), - single_mask.type(dtype), - bbox, - fill=self.mask_fill, - expand_ratio=self.mask_expand_ratio, - ) - regions.append(region.unsqueeze(0)) - region_masks.append(region_mask.unsqueeze(0)) - if len(regions) == 0: - return None, valid - unnorm_regions = regions - if normalize: - regions = [(r - self.pixel_mean) / self.pixel_std for r in regions] - # resize - if self.region_resized: - regions = [ - F.interpolate(r, size=(224, 224), mode="bicubic") for r in regions - ] - regions = torch.cat(regions) - region_masks = [ - F.interpolate(r, size=(224, 224), mode="nearest") for r in region_masks - ] - region_masks = torch.cat(region_masks) - unnorm_regions = [ - F.interpolate(r, size=(224, 224), mode="bicubic") for r in unnorm_regions - ] - unnorm_regions = torch.cat(unnorm_regions) - return (regions, unnorm_regions), region_masks, valid - - def get_text_features(self, noun_list: List[str]): - object_text_features = self._get_text_features(noun_list) - non_object_text_features = ( - self.non_object_embedding - / self.non_object_embedding.norm(dim=-1, keepdim=True) - ) - return torch.cat([object_text_features, non_object_text_features], dim=0) diff --git a/spaces/failfast/2D-GameCreator/src/pages/api/url/types.d.ts b/spaces/failfast/2D-GameCreator/src/pages/api/url/types.d.ts deleted file mode 100644 index 086a702fc67d460dfc800528f38c95f8e0afa99b..0000000000000000000000000000000000000000 --- a/spaces/failfast/2D-GameCreator/src/pages/api/url/types.d.ts +++ /dev/null @@ -1,8 +0,0 @@ -declare module "codesandbox-import-utils/lib/api/define" { - interface IFiles { - [key: string]: { - content: string | Record; - isBinary: boolean; - }; - } -} diff --git a/spaces/falterWliame/Face_Mask_Detection/EhLib 9.4 Build 9.4.017 Professional Edition With Full Source.md 
b/spaces/falterWliame/Face_Mask_Detection/EhLib 9.4 Build 9.4.017 Professional Edition With Full Source.md deleted file mode 100644 index 1153c8f5d6622b7d9fcd105ec52de71bc98b681b..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/EhLib 9.4 Build 9.4.017 Professional Edition With Full Source.md +++ /dev/null @@ -1,30 +0,0 @@ - -

EhLib 9.4: A Powerful Pack of Components for Delphi and C++ Builder

-

EhLib 9.4 is a pack of components for Delphi and C++ Builder that can enhance the functionality and appearance of your applications. EhLib 9.4 contains components and classes for Borland Delphi 7 - 2006, CodeGear Delphi 2007, RAD Studio 2009, Embarcadero RAD Studio 2010, XE - XE10.3, and Lazarus (Win32), intended to extend the client side of database applications, particularly in how they interact with the user[^2^] [^3^].

-

Some of the features of EhLib 9.4 are:

-

EhLib 9.4 Build 9.4.017 Professional Edition with Full Source


Download Zip ::: https://urlca.com/2uDcJh



-
    -
  • Set of visual components to edit data in TDataSet: TDBGridEh, TDBVertGridEh, TDBEditEh, TDBLookupComboboxEh, TDBDateTimeEditEh, TDBComboBoxEh, TDBNumberEditEh.
  • -
  • Components for totaling sums and amounts of records: TDBSumList.
  • -
  • Set of non visual components to print/preview DBGridEh or other printable stuff: TPrintDBGridEh, TPreviewBox, TPrinterPreview.
  • -
  • Components to store component properties to/from settings storage such as ini files, registry etc: TPropStorageEh, TIniPropStorageManEh, TRegPropStorageManEh.
  • -
  • Maximum of built-in functionality for working with tabular data.
  • -
  • Time saving for developers – display your data in the right format without writing extra code.
  • -
  • Fast and intuitive exploration of the library.
  • -
  • Great number of examples, instructions and help-files.
  • -
  • High application speed – each component of the library goes through a dedicated speed-optimization stage during development.
  • -
  • Easy debugging of the final product.
  • -
  • EhLib is 100% native VCL library written in Delphi language.
  • -
-

If you want to download EhLib 9.4 Build 9.4.017 Professional Edition with Full Source[^1^], you can visit this link: https://developer.team/delphi/26715-ehlib-94-build-94017-professional-edition-with-full-source.html. You will get access to all the source code and documentation of the library. You can also check out other versions and editions of EhLib from the same website.

-

EhLib 9.4 is a powerful and versatile pack of components for Delphi and C++ Builder that can help you create amazing applications with ease. Try it out today and see the difference!

Some of the new features of EhLib 9.4 are:

-
    -
  • New type of data display in TPlannerEh - annual period by day. This allows you to show events and tasks for a whole year in a compact and convenient way[^1^]. You can customize the appearance and behavior of the planner using various properties and events.
  • -
  • In DBGridEh, the ability to save filters through the SettingsKeeper specified in the STFilter section. This makes it easier to persist and restore the filtering conditions of the grid across sessions[^1^]. You can also use the built-in dialog to edit the filters or create your own custom filter editor.
  • -
  • Xlsx file generator. EhLib 9.4 includes a new unit that can generate Excel files in xlsx format without using any external libraries or components[^1^]. You can export data from any TDataSet descendant or TDBGridEh to an xlsx file with formatting, formulas, images and charts.
  • -
-

EhLib 9.4 also includes many bug fixes and improvements for existing components and classes. You can find the full list of changes in the readme file that comes with the library.

-

EhLib 9.4 is a must-have pack of components for Delphi and C++ Builder developers who want to create powerful and user-friendly applications with database support. Download it today and enjoy the benefits of EhLib!

-

-
-
\ No newline at end of file diff --git a/spaces/falterWliame/Face_Mask_Detection/Mp3 Editor For Free 7.0.1 Crack ((FREE)).md b/spaces/falterWliame/Face_Mask_Detection/Mp3 Editor For Free 7.0.1 Crack ((FREE)).md deleted file mode 100644 index beb667427ec3aa17132fd46af184eaa757e9e41b..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Mp3 Editor For Free 7.0.1 Crack ((FREE)).md +++ /dev/null @@ -1,8 +0,0 @@ -

mp3 editor for free 7.0.1 crack


DOWNLOAD ::: https://urlca.com/2uDdJb



- -January 17, 2014 - MP3 Music Editor is a sophisticated software tool that you can use to convert audio files from one format to another, edit them, and record your own music files. -With MP3 Music Editor, you can edit, trim, merge, cut, copy, paste and delete fragments from audio files. -To do this, you have the ability to change the frequency, size and tone of your audio using tools such as speed editing.
-
-
-

diff --git a/spaces/fatiXbelha/sd/Bloons TD 6 APK A Must-Have Game for iPhone Users.md b/spaces/fatiXbelha/sd/Bloons TD 6 APK A Must-Have Game for iPhone Users.md deleted file mode 100644 index 0e4fe0670aee72525436fcd3e65a84c0318dbd5d..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Bloons TD 6 APK A Must-Have Game for iPhone Users.md +++ /dev/null @@ -1,130 +0,0 @@ - -

Bloons TD 6: A Tower Defense Game That Will Pop Your Mind

-

If you are looking for a fun and challenging strategy game that will keep you entertained for hours, you might want to check out Bloons TD 6. This is the latest installment in the Bloons series, a franchise that has been around for over a decade and has millions of fans worldwide. Bloons TD 6 is a 3D tower defense game that features colorful graphics, engaging gameplay, and tons of content to explore. In this article, we will tell you everything you need to know about Bloons TD 6, including its features, tips, review, and FAQs.

-

What is Bloons TD 6?

-

Bloons TD 6 is a game developed by Ninja Kiwi, a New Zealand-based company that specializes in creating casual and mobile games. Bloons TD 6 was released in June 2018 for iOS, Android, and Steam platforms. The game is a sequel to Bloons TD 5, which was released in 2011.

-

bloons td 6 apk iphone


Downloadhttps://urllie.com/2uNxxc



-

The premise of Bloons TD 6 is simple: you have to stop the invading balloons (called bloons) from reaching the end of the path by placing monkey towers and heroes along the way. Each monkey tower has its own unique abilities and upgrades that can help you pop the bloons more effectively. There are also different types of bloons that have different properties and resistances, such as camo bloons, lead bloons, MOABs, and more.

-

Bloons TD 6 offers a variety of game modes, maps, difficulties, and challenges to suit your preferences and skill level. You can play solo or with up to three other players in co-op mode. You can also create your own custom challenges and odysseys and share them with other players online. Additionally, you can earn trophies, monkey money, powers, insta monkeys, and monkey knowledge by completing quests, events, achievements, and more.

-

What are the main features of Bloons TD 6?

-

Bloons TD 6 has many features that make it one of the best tower defense games on the market. Here are some of them:

-
    -
  • 3D graphics: Bloons TD 6 has stunning 3D graphics that bring the monkeys and bloons to life. The game also has dynamic lighting and shadow effects that enhance the visual experience.
  • -
  • 23 monkey towers: Bloons TD 6 has 23 different monkey towers to choose from, each with three upgrade paths and unique activated abilities. You can mix and match different towers to create your own strategies and combos.
  • -
  • 14 heroes: Bloons TD 6 also has 14 heroes that can join your monkey army. Heroes are powerful units that level up automatically and have two special abilities. You can also customize their skins and voiceovers.
  • -
  • Paragons: Paragons are the ultimate upgrades for monkey towers. They are extremely expensive and powerful, but they can only be obtained by sacrificing other towers of the same type.
  • -
  • Boss events: Boss events are special challenges that pit you against fearsome boss bloons that have massive health and abilities. You need to use your best tactics and teamwork to defeat them.
  • -
  • Odysseys: Odysseys are series of maps connected by their theme, rules, and rewards. You need to complete all the maps in an odyssey with a limited number of towers and lives.
  • -
  • Contested territory: Contested territory is a competitive mode where you join forces with other players and battle for territory against five other teams. You need to capture tiles on a shared map and compete on the leaderboards.
  • -
  • Trophy store: Trophy store is where you can spend your trophies (earned by playing the game or completing achievements) to buy various cosmetic items, such as skins, decals, music, and more.
  • -
-

What are some useful tips and tricks for Bloons TD 6?

-

Bloons TD 6 is a game that requires strategy, planning, and adaptation. Here are some tips and tricks that can help you improve your performance and enjoy the game more:

-
    -
  • Know your enemies: Different bloons have different properties and resistances. For example, camo bloons can only be detected by certain towers or upgrades, lead bloons can only be popped by sharp or explosive projectiles, and MOABs can withstand a lot of damage. You need to know what kind of bloons you are facing and what towers are effective against them.
  • -
  • Know your allies: Similarly, different monkey towers have different strengths and weaknesses. For example, dart monkeys are cheap and versatile, but they have low range and pierce, sniper monkeys have high range and damage, but they are slow and expensive, and super monkeys are powerful and fast, but they consume a lot of space and money. You need to know what kind of towers you have and how to use them efficiently.
  • -
  • Know your maps: Each map has its own layout, terrain, obstacles, and difficulty. Some maps have multiple paths, some have water or lava, some have moving parts, and some have special rules. You need to know the characteristics of each map and how to adapt your strategy accordingly.
  • -
  • Know your modes: Each game mode has its own challenges and rewards. For example, in hard mode, you have less money and lives, in impoppable mode, you have no continues or powers, in chimps mode, you have no monkey knowledge or insta monkeys, and in sandbox mode, you have unlimited money and lives. You need to know the requirements and objectives of each mode and how to prepare for them.
  • -
  • Know your upgrades: Each monkey tower has three upgrade paths that can drastically change its performance and role. For example, a dart monkey can become a crossbow master that shoots fast and powerful arrows, a plasma monkey fan club that summons a group of super monkeys temporarily, or an ultra-juggernaut that launches giant spiked balls that bounce around. You need to know the effects and costs of each upgrade and how to combine them with other towers.
  • -
-

What are the pros and cons of Bloons TD 6?

-

Bloons TD 6 is a game that has received mostly positive reviews from critics and players alike. The game has an average rating of 4.7 out of 5 stars on the App Store, 4.8 out of 5 stars on Google Play, and 9 out of 10 on Steam. However, like any game, Bloons TD 6 also has its flaws and drawbacks. Here are some of the pros and cons of Bloons TD 6 based on user feedback:

- - - - - - - - - - - - - - - - - - - - - - - - - -
| Pros | Cons |
| --- | --- |
| Fun and addictive gameplay | High battery consumption |
| Beautiful and colorful graphics | Occasional bugs and glitches |
| Tons of content and replay value | Some maps and modes are too hard or easy |
| Frequent updates and events | Some items are too expensive or rare |
| Co-op and online features | Some co-op issues and disconnects |
-

Conclusion

-

Bloons TD 6 is a game that will appeal to anyone who likes tower defense games, strategy games, or just popping balloons. The game has a lot of features, modes, maps, towers, heroes, upgrades, challenges, events, rewards, and more to keep you hooked for hours. Whether you play solo or with friends, Bloons TD 6 will provide you with a fun and satisfying gaming experience.

-

bloons td 6+ app store download
-bloons td 6 offline mode iphone
-bloons td 6 best strategy ios
-bloons td 6 free download for iphone
-bloons td 6 update ios
-bloons td 6 co op mode iphone
-bloons td 6 cheats and hacks ios
-bloons td 6 paragon upgrades iphone
-bloons td 6 odyssey mode ios
-bloons td 6 quests and trophies iphone
-bloons td 6+ vs bloons td 6 iphone
-bloons td 6 custom challenges ios
-bloons td 6 modded apk for iphone
-bloons td 6 heroes and skins ios
-bloons td 6 monkey knowledge guide iphone
-bloons td 6 powers and insta monkeys iphone
-bloons td 6 boss events iphone
-bloons td 6 contested territory ios
-bloons td 6 apk iphone no jailbreak
-bloons td 6 apk iphone reddit
-bloons td 6 apk iphone latest version
-bloons td 6 apk iphone review
-bloons td 6 apk iphone gameplay
-bloons td 6 apk iphone tips and tricks
-bloons td 6 apk iphone support and feedback
-bloons td 6 apk iphone compatible devices
-bloons td 6 apk iphone system requirements
-bloons td 6 apk iphone how to install
-bloons td 6 apk iphone how to update
-bloons td 6 apk iphone how to play with friends
-bloons td 6 apk iphone how to unlock maps and modes
-bloons td 6 apk iphone how to earn money and xp
-bloons td 6 apk iphone how to use monkey towers and heroes
-bloons td 6 apk iphone how to pop different types of bloons
-bloons td 6 apk iphone how to beat hard levels and modes
-bloons td 6 apk iphone how to customize your game settings and appearance
-bloons td 6 apk iphone how to access content browser and trophy store
-bloons td 6 apk iphone how to backup and restore your game progress
-bloons td 6 apk iphone how to contact ninja kiwi developers
-bloons td 6 apk iphone how to report bugs and issues

-

If you want to download Bloons TD 6 for your iPhone or iPad device, you can do so by following this link: [Bloons TD 6 on the App Store]. The game costs $4.99 USD (or equivalent in your local currency) to purchase. However, there are also some in-app purchases available for extra content or currency.

-

If you have any questions or comments about Bloons TD 6, feel free to leave them below. We hope you enjoyed this article and learned something new about this amazing game.

FAQs -

Here are some of the most frequently asked questions about Bloons TD 6:

-
    -
  1. Is Bloons TD 6 free?
  2. -

    No, Bloons TD 6 is not a free game. You need to pay a one-time fee of $4.99 USD (or equivalent in your local currency) to download and play the game. However, the game does not have any ads or forced subscriptions. You can also play the game offline without any internet connection.

    -
  3. Can I play Bloons TD 6 on PC?
  4. -

    Yes, you can play Bloons TD 6 on PC. The game is available on Steam for Windows and Mac devices. You can also use an emulator to play the mobile version of the game on your PC.

    -
  5. How do I get more monkey money and trophies?
  6. -

    Monkey money and trophies are the main currencies in Bloons TD 6. You can earn them by playing the game, completing quests, achievements, events, challenges, and more. You can also buy them with real money if you want to support the developers or get some extra content.

    -
  7. What are the best towers and heroes in Bloons TD 6?
  8. -

    There is no definitive answer to this question, as different towers and heroes have different advantages and disadvantages depending on the situation. However, some of the most popular and effective towers and heroes in Bloons TD 6 are:

    -
      -
    • Ninja monkey: A stealthy and fast tower that can pop camo bloons and deal extra damage to MOABs.
    • -
    • Alchemist: A support tower that can buff other towers with increased attack speed, damage, range, and pierce.
    • -
    • Sun avatar: A super monkey upgrade that shoots powerful beams of plasma that can pop almost any bloon.
    • -
    • Gwendolin: A hero that uses fire to pop bloons and boost nearby towers with extra damage and pierce.
    • -
    • Obyn Greenfoot: A hero that uses nature to pop bloons and buff magic towers with increased range and pierce.
    • -
    -
  9. How do I beat boss bloons?
  10. -

    Boss bloons are very tough enemies that require a lot of strategy and teamwork to defeat. Here are some general tips for beating boss bloons:

    -
      -
    • Use towers and heroes that can deal high damage, such as snipers, cannons, super monkeys, Quincy, Striker Jones, etc.
    • -
    • Use towers and heroes that can slow down or stun boss bloons, such as ice monkeys, glue gunners, heli pilots, Benjamin, Ezili, etc.
    • -
    • Use powers and insta monkeys to boost your defense or offense when needed.
    • -
    • Use co-op mode to coordinate with other players and share resources.
    • -
    • Learn the patterns and abilities of each boss bloon and adapt your strategy accordingly.
    • -
    -

-
-
\ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Count Money Slot Machine Review A Fun and Rewarding Game with a Spooky Twist.md b/spaces/fatiXbelha/sd/Count Money Slot Machine Review A Fun and Rewarding Game with a Spooky Twist.md deleted file mode 100644 index 22ada8bf3639a21bf4eb6f0400fd5a9446d33ce3..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Count Money Slot Machine Review A Fun and Rewarding Game with a Spooky Twist.md +++ /dev/null @@ -1,138 +0,0 @@ - -

Count Money Slot Machine Download: A Spooky and Thrilling Game

-

If you are a fan of horror movies and novels, you might enjoy playing Count Money slot machine, a game that revolves around vampires, werewolves, bats, and blood. This game is designed and developed by WMS, a leading provider of slot machines and casino games. In this article, we will tell you everything you need to know about Count Money slot machine, how to download it, and how to play it online.

-

Introduction: What is Count Money Slot Machine?

-

Count Money slot machine is a five-reel, 25-payline video slot that features a spooky and gripping theme. The game is based on the story of Count Dracula, a legendary vampire who lives in a castle and preys on innocent humans. The game has colorful and detailed graphics, as well as eerie and thrilling sound effects that create an immersive gaming experience. The game also has some exciting features and symbols that can help you win big prizes.

-

count money slot machine download


Download ✸✸✸ https://urllie.com/2uNFwl



-

The theme and design of the game

-

The theme of Count Money slot machine is inspired by the classic horror genre, especially the stories of Bram Stoker and Mary Shelley. The game takes place in a dark and gloomy castle, where Count Dracula resides with his minions. The symbols on the reels include the vampire Count, his castle, bats, werewolves, golden chalices of blood, and high-value playing cards such as Ten, Jack, Queen, King, and Ace. The game also has some animations and special effects that enhance the theme, such as bats flying across the screen, lightning flashes, and blood splatters.

-

The features and symbols of the game

-

Count Money slot machine has some features and symbols that can make your gameplay more fun and rewarding. Here are some of them:

-
    -
  • The Count symbol is the wild symbol of the game. It can substitute for any other symbol except for the scatter symbol. It can also appear stacked on the reels, increasing your chances of forming winning combinations.
  • -
  • The Castle symbol is the scatter symbol of the game. It can trigger the free spins feature if you land three or more of them anywhere on the reels. You can win up to 15 free spins with a 3x multiplier on all wins.
  • -
  • The Wilds feature is a random feature that can occur during any spin. It can turn any symbol on the reels into a wild symbol, creating more opportunities for winning.
  • -
  • The Bonus feature is triggered by landing three or more Bonus symbols on an active payline. It takes you to a second screen where you have to pick from 12 coffins. Each coffin contains either a cash prize or a vampire. If you pick a cash prize, you can keep it or pick another coffin. If you pick a vampire, the feature ends and you collect your winnings.
  • -
-

The gameplay and rules of the game

-

The gameplay and rules of Count Money slot machine are simple and straightforward. You can start playing by selecting your betting denomination from the options at your disposal, choosing the number of paylines you wish to activate, and hitting “Spin.” You can also use the “Auto Play” option to spin the reels automatically for a preset number of times.

-

The game pays from left to right on adjacent reels, starting from the leftmost reel. You need to land at least three matching symbols on an active payline to win a prize. The paytable shows the value of each symbol and the payout for each combination.

How to Download Count Money Slot Machine?

-

If you want to play Count Money slot machine on your computer or mobile device, you will need to download it first. Downloading the game has some benefits, such as faster loading times, better graphics, and more security. Here is how you can download Count Money slot machine:

-

The benefits of downloading the game

-

Downloading Count Money slot machine can offer you some advantages over playing it online or at a land-based casino. Some of these benefits are:

-
    -
  • You can play the game anytime and anywhere, without depending on an internet connection or a casino location.
  • -
  • You can enjoy the game in full-screen mode, with high-quality graphics and sound effects.
  • -
  • You can save your progress and resume the game later, without losing your winnings or bonuses.
  • -
  • You can access the game's settings and customize them according to your preferences, such as adjusting the volume, the speed, and the autoplay options.
  • -
  • You can protect your privacy and security, as you don't have to share your personal or financial information with any third-party website or casino.
  • -
-

The steps to download the game

-

The steps to download Count Money slot machine are easy and quick. You just need to follow these instructions:

-

Count Money slot machine bonus features
-How to play Count Money slot machine online
-Count Money slot machine free download for PC
-Count Money slot machine tips and tricks
-Count Money slot machine review and ratings
-Count Money slot machine jackpot winners
-Count Money slot machine wilds and scatters
-Count Money slot machine by WMS Gaming
-Count Money slot machine at Pechanga Casino
-Count Money slot machine vs Lucky Count slot machine
-Count Money slot machine demo play
-Count Money slot machine RTP and volatility
-Count Money slot machine cheats and hacks
-Count Money slot machine for Android and iOS devices
-Count Money slot machine symbols and payouts
-Count Money slot machine theme and graphics
-Count Money slot machine sound effects and music
-Count Money slot machine history and origin
-Count Money slot machine alternatives and similar games
-Count Money slot machine FAQs and customer support
-How to win big on Count Money slot machine
-Where to find Count Money slot machine online
-Count Money slot machine no deposit bonus codes
-Count Money slot machine free spins and multipliers
-Count Money slot machine best casinos and sites

-
    -
  1. Find a reliable and reputable website that offers Count Money slot machine download. You can use a search engine or a review site to find one.
  2. -
  3. Click on the download link or button and choose the destination folder where you want to save the game file.
  4. -
  5. Wait for the download to complete and then open the game file. You may need to unzip it first if it is compressed.
  6. -
  7. Follow the installation wizard and agree to the terms and conditions. You may need to create an account or log in with an existing one if the website requires it.
  8. -
  9. Launch the game and enjoy playing Count Money slot machine on your device.
  10. -
-

The requirements and compatibility of the game

-

Before you download Count Money slot machine, you should make sure that your device meets the minimum requirements and is compatible with the game. Here are some of the requirements and compatibility factors you should consider:

-
    -
  • The operating system: The game is compatible with Windows, Mac OS, Android, and iOS devices. You should check the version of your operating system and update it if necessary.
  • -
  • The memory: The game requires at least 1 GB of RAM and 500 MB of free disk space. You should check the available memory on your device and free up some space if needed.
  • -
  • The processor: The game requires at least a 1 GHz processor or higher. You should check the speed of your processor and upgrade it if possible.
  • -
  • The graphics: The game requires at least a 256 MB graphics card or higher. You should check the quality of your graphics card and update its drivers if needed.
  • -
  • The internet connection: The game requires a stable and fast internet connection for downloading and updating. You should check the speed and reliability of your internet connection and avoid using public or shared networks.
  • -

How to Play Count Money Slot Machine Online?

-

If you don't want to download Count Money slot machine, you can also play it online on your browser. Playing online has some advantages, such as convenience, variety, and bonuses. Here is how you can play Count Money slot machine online:

-

The advantages of playing online

-

Playing Count Money slot machine online can offer you some benefits over playing it offline or at a land-based casino. Some of these benefits are:

-
    -
  • You can play the game anytime and anywhere, as long as you have an internet connection and a compatible device.
  • -
  • You can choose from a wide range of online casinos that offer Count Money slot machine, as well as other games from WMS and other providers.
  • -
  • You can claim various bonuses and promotions that can boost your bankroll and your chances of winning. These include welcome bonuses, free spins, cashback, loyalty rewards, and more.
  • -
  • You can play the game for free or for real money, depending on your preference and budget. You can also switch between the two modes easily and quickly.
  • -
  • You can enjoy the game in a safe and secure environment, as long as you choose a licensed and reputable online casino that uses encryption and fair gaming software.
  • -
-

The best online casinos to play at

-

If you want to play Count Money slot machine online, you will need to find a good online casino that offers the game. There are many online casinos that claim to be the best, but not all of them are trustworthy and reliable. Here are some of the factors you should consider when choosing an online casino to play at:

-
    -
  • The reputation and reviews of the casino. You should check the history and background of the casino, as well as the feedback and ratings from other players and experts.
  • -
  • The license and regulation of the casino. You should check if the casino is licensed and regulated by a reputable authority, such as the UK Gambling Commission, the Malta Gaming Authority, or the Curacao eGaming.
  • -
  • The game selection and quality of the casino. You should check if the casino offers Count Money slot machine, as well as other games from WMS and other providers. You should also check if the games are tested and certified by independent auditors, such as eCOGRA, iTech Labs, or GLI.
  • -
  • The bonus offers and terms of the casino. You should check if the casino offers generous and fair bonuses and promotions for new and existing players. You should also check if the bonus terms and conditions are clear and reasonable.
  • -
  • The payment methods and security of the casino. You should check if the casino offers convenient and secure payment methods for deposits and withdrawals, such as credit cards, e-wallets, bank transfers, or cryptocurrencies. You should also check if the casino uses encryption and firewall technology to protect your personal and financial information.
  • -
  • The customer support and service of the casino. You should check if the casino offers friendly and professional customer support that is available 24/7 via live chat, phone, email, or social media. You should also check if the casino has a FAQ section or a help center that can answer your questions and issues.
  • -
-

Based on these factors, some of the best online casinos that offer Count Money slot machine are:

- - - - - - - -
| Casino Name | License | Bonus | Payment Methods | Customer Support |
| --- | --- | --- | --- | --- |
| Betway Casino | UKGC, MGA | Up to $1000 welcome bonus | Visa, Mastercard, PayPal, Neteller, Skrill, Paysafecard | Live chat, phone, email |
| 888 Casino | UKGC, MGA | $88 no deposit bonus + up to $1500 welcome bonus | Visa, Mastercard, PayPal, Neteller, Skrill, Trustly | Live chat, phone, email |
| Casumo Casino | UKGC, MGA | 20 free spins no deposit bonus + up to $1200 welcome bonus | Visa, Mastercard, PayPal, Neteller, Skrill, Paysafecard | Live chat, email |
| LeoVegas Casino | UKGC, MGA | 10 free spins no deposit bonus + up to $1000 welcome bonus + 200 free spins | Visa, Mastercard, PayPal, Neteller, Skrill, Trustly | Live chat, phone, email |
| Jackpot City Casino | MGA | Up to $1600 welcome bonus | Visa, Mastercard, Neteller, Skrill, Paysafecard, ecoPayz | Live chat, email |
-

The tips and strategies to win

-

If you want to increase your chances of winning at Count Money slot machine online, you should follow some tips and strategies that can help you improve your gameplay and your results. Here are some of them:

-
    -
  • Play for fun and entertainment, not for money. Gambling is a form of entertainment, not a way to make money. You should only play with money that you can afford to lose and never chase your losses.
  • -
  • Learn the rules and features of the game. Before you start playing, you should familiarize yourself with the rules and features of Count Money slot machine. You should know the value of each symbol, the paylines, the bonus rounds, and the payouts.
  • -
  • Manage your bankroll and bet wisely. You should set a budget for your gaming session and stick to it. You should also choose a bet size that suits your bankroll and your risk appetite. You can use the bet max option to activate all paylines and increase your chances of winning, but you should also be aware of the higher cost and risk involved.
  • -
  • Take advantage of the bonuses and promotions. You should look for online casinos that offer generous and fair bonuses and promotions for Count Money slot machine. You should also read the terms and conditions carefully and make sure you meet the wagering requirements before you withdraw your winnings.
  • -
  • Have fun and enjoy the game. The most important tip is to have fun and enjoy the game. Count Money slot machine is a spooky and thrilling game that can keep you entertained for hours. You should appreciate the theme, the graphics, the sound effects, and the features of the game.
  • -
-

Conclusion: Why You Should Try Count Money Slot Machine

-

In conclusion, Count Money slot machine is a game that you should try if you are looking for a spooky and thrilling gaming experience. The game has a horror-themed design, exciting features, and generous payouts. You can download the game or play it online at some of the best online casinos. You can also use some tips and strategies to improve your gameplay and your results. Count Money slot machine is a game that can make you scream with fear and joy at the same time.

-

A summary of the main points

-

To summarize, here are the main points of this article:

-
    -
  • Count Money slot machine is a five-reel, 25-payline video slot that features a spooky and gripping theme based on Count Dracula.
  • -
  • The game has colorful and detailed graphics, eerie and thrilling sound effects, and some exciting features and symbols that can help you win big prizes.
  • -
  • The game has a wild symbol, a scatter symbol, a free spins feature, a wilds feature, and a bonus feature that can reward you with cash prizes or vampires.
  • -
  • You can download the game or play it online on your browser. Downloading the game has some benefits, such as faster loading times, better graphics, and more security. Playing online has some benefits, such as convenience, variety, and bonuses.
  • -
  • You can play the game for free or for real money at some of the best online casinos that offer Count Money slot machine. You can also use some tips and strategies to increase your chances of winning.
  • -
-

A call to action for the readers

-

If you are ready to try Count Money slot machine, you can download it or play it online at one of the online casinos we recommended in this article. You can also claim some bonuses and promotions that can boost your bankroll and your chances of winning. Don't miss this opportunity to have a spooky and thrilling gaming experience with Count Money slot machine. Download or play it today and see if you can win big prizes or avoid vampires!

-

Frequently Asked Questions

-

Here are some frequently asked questions about Count Money slot machine:

-
    -
  1. What is the RTP of Count Money slot machine?
  2. -

    The RTP (return to player) of Count Money slot machine is 96%, which means that for every $100 wagered on the game, you can expect to get back $96 on average over the long run (a short illustrative calculation follows this FAQ list).

    -
  3. What is the jackpot of Count Money slot machine?
  4. -

    The jackpot of Count Money slot machine is 10,000 coins, which you can win by landing five wild symbols on an active payline.

    What are the best online casinos to play Count Money slot machine? -

    Some of the best online casinos to play Count Money slot machine are Betway Casino, 888 Casino, Casumo Casino, LeoVegas Casino, and Jackpot City Casino. These casinos are licensed and regulated, offer a wide range of games, have generous and fair bonuses and promotions, and provide excellent customer support and service.

    -
  5. How can I play Count Money slot machine for free?
  6. -

    You can play Count Money slot machine for free by choosing the demo or fun mode at the online casino of your choice. You can also play the game for free on some websites that offer free slot games. Playing for free can help you learn the rules and features of the game, practice your skills and strategies, and have fun without risking any money.

    -
  7. How can I win at Count Money slot machine?
  8. -

    There is no surefire way to win at Count Money slot machine, as the game is based on luck and chance. However, you can increase your chances of winning by following some tips and strategies, such as managing your bankroll and bet wisely, taking advantage of the bonuses and promotions, learning the rules and features of the game, and having fun and enjoying the game.

    -
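To make the RTP arithmetic quoted in the first FAQ above concrete, here is a small illustrative Python sketch. The 96% figure is simply the number quoted in this article; the stake amount, function names, and printout are invented for the example, and any real session can differ widely from this long-run average.

```python
# Illustrative only: what a slot's RTP implies over the long run.
# RTP (return to player) is the average fraction of total wagers paid back
# across a very large number of spins; it says nothing about a single session.

RTP = 0.96  # 96%, the figure quoted in this article for Count Money


def expected_return(total_wagered: float, rtp: float = RTP) -> float:
    """Average amount paid back over the long run for a given total wagered."""
    return total_wagered * rtp


def house_edge(rtp: float = RTP) -> float:
    """Average fraction of wagers kept by the operator."""
    return 1.0 - rtp


if __name__ == "__main__":
    wagered = 100.0  # the $100 example used in the FAQ
    print(f"Expected long-run return on ${wagered:.2f}: ${expected_return(wagered):.2f}")
    print(f"House edge: {house_edge():.0%}")  # prints 4%
```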

-
-
\ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Awara Full Movie and Experience the Magic of Music and Drama.md b/spaces/fatiXbelha/sd/Download Awara Full Movie and Experience the Magic of Music and Drama.md deleted file mode 100644 index f1698e7771dbba60fd0acffb65cce976dc27fce1..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Awara Full Movie and Experience the Magic of Music and Drama.md +++ /dev/null @@ -1,128 +0,0 @@ - -

Awara Full Movie Download: A Classic Romantic Drama

-

If you are a fan of old Hindi movies, you might have heard of Awara, a 1951 film directed by and starring Raj Kapoor, along with Nargis, Prithviraj Kapoor, and K.N. Singh. Awara, which means The Vagabond in English, is a crime drama that explores the themes of fate, justice, love, and social inequality. It is widely regarded as one of the greatest films of Indian cinema and has influenced many filmmakers across the world.

-

In this article, we will tell you more about the plot, cast, music, and legacy of Awara, as well as how you can watch it online legally. We will also warn you about the dangers of downloading Awara from illegal sites and suggest some alternatives to this classic movie.

-

awara full movie download


Download File 🆓 https://urllie.com/2uNxYH



-

What is Awara about?

-

Awara tells the story of Raj (Raj Kapoor), a young man who grows up in the slums because his father, Judge Raghunath (Prithviraj Kapoor), disowns his mother, Leela (Leela Chitnis), after she is kidnapped by a criminal named Jagga (K.N. Singh). Raj becomes a petty thief under Jagga's guidance and falls in love with Rita (Nargis), a lawyer who happens to be his childhood friend. However, fate brings him face to face with his father in a courtroom, where he is accused of attempting to murder him. Rita defends Raj and tries to prove his innocence, while also questioning the judge's role in creating his son's destiny.

-

The plot of Awara

-

The plot of Awara is divided into three parts: the past, the present, and the dream sequence. The past shows how Judge Raghunath rejects his wife and son after suspecting her of infidelity. He also believes that criminals are born to criminals and good people are born to good people. The present shows how Raj lives as a vagabond and meets Rita again after many years. He also learns that Jagga was the man behind his mother's kidnapping and that Jagga deliberately turned him to crime to take revenge on Judge Raghunath. The dream sequence shows Raj's subconscious struggle between good and evil, heaven and hell, as he imagines himself being judged by his father and Rita.

-

The cast of Awara

-

The cast of Awara features some of the most prominent actors of Hindi cinema at that time. Raj Kapoor plays the titular role of Raj, the vagabond who is torn between his love for Rita and his loyalty to Jagga. Nargis plays Rita, the lawyer who loves Raj and believes in his goodness. Prithviraj Kapoor plays Judge Raghunath, Raj's estranged father who is rigid in his views on justice and morality. K.N. Singh plays Jagga, the criminal who kidnaps Leela and later claims to be Raj's father. Leela Chitnis plays Leela, Raj's mother who suffers from poverty and illness after being abandoned by her husband. Cuckoo plays a bar dancer who entertains Raj and Jagga. B.M. Vyas plays Dubey, Rita's father who disapproves of her relationship with Raj. Leela Mishra plays Raghunath's sister-in-law who sympathizes with Leela. Shashi Kapoor plays young Raj and Baby Zubeida plays young Rita in their childhood scenes.

-

The music of Awara

-

The music of Awara is composed by Shankar-Jaikishan, one of the most successful music director duos in Bollywood history. The lyrics are written by Shailendra and Hasrat Jaipuri, two renowned poets who collaborated with Shankar -Jaikishan for many hit songs. The soundtrack of Awara consists of 10 songs, each of which has a different mood and style. Some of the most popular songs are Awara Hoon, Ghar Aaya Mera Pardesi, Dum Bhar Jo Udhar Munh Phere, Ek Do Teen, and Ab Raat Guzarne Wali Hai. The songs are sung by legendary singers like Mukesh, Lata Mangeshkar, Manna Dey, and Mohammed Rafi. The music of Awara is widely praised for its melody, lyrics, and orchestration. It is also considered to be one of the best examples of the use of the harmonium, an Indian keyboard instrument, in film music.

-

Why is Awara a classic movie?

-

Awara is not just a movie, it is a phenomenon. It has been hailed as a masterpiece of Indian cinema and a landmark in world cinema. It has been admired by critics and audiences alike for its artistic and technical excellence, its social and political relevance, and its cultural and historical impact. Here are some of the reasons why Awara is a classic movie:

-

The popularity of Awara

-

Awara was a huge commercial success when it was released in 1951. It broke all box office records and became the highest-grossing Indian film of all time until then. It was also screened at the Cannes Film Festival in 1953, where it received a standing ovation and was nominated for the Grand Prize of the Festival, the precursor to the Palme d'Or. Awara was also a hit in many foreign countries, especially in the Soviet Union, China, Turkey, and the Middle East. It was dubbed or subtitled in several languages and attracted millions of viewers across the world. It is estimated that Awara has been seen by over a billion people worldwide.

-

awara full movie download hotstar
-awara full movie download youtube
-awara full movie download hd 1080p
-awara full movie download bengali
-awara full movie download telugu
-awara full movie download 1951
-awara full movie download filmywap
-awara full movie download 720p
-awara full movie download in hindi
-awara full movie download mp4
-awara full movie download pagalworld
-awara full movie download tamilrockers
-awara full movie download raj kapoor
-awara full movie download free online
-awara full movie download with english subtitles
-awara full movie download utorrent
-awara full movie download jeet
-awara full movie download mkv
-awara full movie download worldfree4u
-awara full movie download khatrimaza
-awara full movie download moviescounter
-awara full movie download bluray
-awara full movie download disney plus hotstar
-awara full movie download 480p
-awara full movie download 300mb
-awara full movie download dvdrip
-awara full movie download bolly4u
-awara full movie download filmyzilla
-awara full movie download in tamil dubbed
-awara full movie download in telugu dubbed
-awara full movie download nargis dutt
-awara full movie download google drive link
-awara full movie download watch online free
-awara full movie download song mp3
-awara full movie download sayantika banerjee
-awara full movie download web series
-awara full movie download netflix amazon prime video
-awara full movie download torrentz2
-awara full movie download extramovies
-awara full movie download openload

-

The influence of Awara

-

Awara has influenced many filmmakers and artists in India and abroad. It has inspired several remakes and adaptations in different languages and genres. It has also influenced the style and themes of many directors, such as Guru Dutt, Satyajit Ray, Raj Khosla, Mehboob Khan, Bimal Roy, Yash Chopra, Mani Ratnam, Karan Johar, Anurag Kashyap, and others. Awara has also influenced the music and fashion of India and other countries. The song Awara Hoon, which became an anthem for the youth and the poor, was covered by many singers and musicians in different languages and styles. The cap that Raj Kapoor wore in the film became a symbol of his persona and a fashion trend for many people.

-

The awards and accolades of Awara

-

Awara has received many awards and accolades from various organizations and institutions. It won three Filmfare Awards in 1954 for Best Film, Best Director, and Best Music Director. It also won two National Film Awards in 1954 for Best Feature Film in Hindi and Best Feature Film on National Integration. It was also selected as one of the best films of all time by various polls and surveys conducted by Time Magazine, BBC, Sight & Sound, The British Film Institute, The Film Foundation, The Academy of Motion Picture Arts and Sciences, and others.

-

How to watch Awara online?

-

If you are interested in watching Awara online, you have several options to choose from. However, you should be careful about which sites you use to stream or download this movie, as some of them may be illegal or unsafe. Here are some of the legal ways to watch Awara online:

-

The legal ways to stream Awara

-

One of the easiest ways to watch Awara online is to use streaming platforms that have the rights to show this movie. Some of these platforms are:

- - - - - - - - - - - - - - - - - - - - - - - - - -
| Platform | Price | Availability |
| --- | --- | --- |
| YouTube | $2.99 (rent) / $9.99 (buy) | Worldwide |
| Amazon Prime Video | $2.99 (rent) / $9.99 (buy) | Worldwide |
| Eros Now | $4.99/month or $49.99/year (subscription) | Worldwide |
| Netflix | $8.99/month or more (subscription) | India, USA, UK, Canada, and some other countries |
-

These platforms offer high-quality video and audio, as well as subtitles and other features. They are also legal and safe to use, as they have the permission from the producers and distributors of Awara. However, they may have different prices and availability depending on your location and device. You should check the terms and conditions of each platform before using them.

-

The risks of downloading Awara from illegal sites

-

Another way to watch Awara online is to download it from torrent or other file-sharing sites that offer free downloads. However, this method is illegal and risky, as it violates the copyright laws and may expose you to malware, viruses, phishing, identity theft, and other cybercrimes. Some of the risks of downloading Awara from illegal sites are:

-
    -
  • You may face legal action from the authorities or the owners of Awara for piracy. You may have to pay a fine or face imprisonment for downloading or distributing copyrighted content without permission.
  • -
  • You may damage your device or compromise your data by downloading files that contain harmful software or links. You may lose your personal information, financial details, passwords, or other sensitive data to hackers or scammers.
  • -
  • You may get a poor-quality video or audio, or a different movie altogether. You may also miss out on the subtitles, bonus features, or other extras that are available on the legal platforms.
  • -
-

Therefore, we strongly advise you to avoid downloading Awara from illegal sites and use the legal platforms instead.

-

The alternatives to Awara

-

If you are looking for some other movies that are similar to Awara in terms of genre, theme, style, or quality, you have plenty of options to choose from. Some of the alternatives to Awara are:

-
    -
  1. Shree 420: Another classic movie by Raj Kapoor and Nargis, this 1955 film is a comedy-drama that follows the adventures of a poor but honest man who comes to Mumbai and gets corrupted by city life. It revisits the tramp persona of Awara and is also a commentary on the social issues of India.
  2. -
  3. Pyaasa: A masterpiece by Guru Dutt, this 1957 film is a romantic drama that depicts the life of a struggling poet who is rejected by his family, friends, and lover. It is a poignant and poetic film that explores the themes of love, art, and alienation.
  4. -
  5. Mother India: A landmark film by Mehboob Khan, this 1957 film is an epic drama that portrays the life of a poor peasant woman who faces many hardships and challenges in her quest to raise her sons. It is a powerful and emotional film that showcases the strength and resilience of Indian women.
  6. -
  7. Mughal-e-Azam: A magnum opus by K. Asif, this 1960 film is a historical drama that narrates the love story of Prince Salim and Anarkali, a court dancer. It is a lavish and spectacular film that features stunning sets, costumes, music, and performances.
  8. -
  9. Anand: A gem by Hrishikesh Mukherjee, this 1971 film is a comedy-drama that revolves around the friendship between a terminally ill man and a cynical doctor. It is a heartwarming and humorous film that celebrates life and happiness.
  10. -
-

These are some of the movies that you can watch online if you like Awara. You can find them on various streaming platforms or download them legally from authorized sites.

-

Conclusion

-

Awara is a classic movie that deserves to be watched by everyone who loves cinema. It is a movie that has everything: romance, drama, comedy, music, action, and message. It is a movie that has influenced generations of filmmakers and audiences. It is a movie that has transcended borders and cultures. It is a movie that you can watch online legally and safely from various platforms.

-

We hope this article has given you some useful information about Awara full movie download. If you have any questions or comments about this topic, feel free to leave them below. We would love to hear from you!

-

FAQs

-

Here are some of the frequently asked questions about Awara full movie download:

-

Q: When was Awara released?

-

A: Awara was released on December 14, 1951 in India.

-

Q: Who directed Awara?

A: Awara was directed by Raj Kapoor, who also played the lead role of Raj.

-

Q: What is the meaning of Awara?

-

A: Awara means The Vagabond or The Tramp in English. It refers to the character of Raj, who lives as a wanderer and a thief.

-

Q: How long is Awara?

-

A: Awara is 193 minutes long, or 3 hours and 13 minutes.

-

Q: Is Awara based on a true story?

-

A: No, Awara is not based on a true story. It is a fictional story written by Khwaja Ahmad Abbas, who also co-wrote the screenplay with V.P. Sathe.

-

Q: Where can I find the lyrics of the songs of Awara?

-

A: You can find the lyrics of the songs of Awara on various websites, such as LyricsIndia, HindiGeetMala, or HindiLyrics.

-
-
\ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Crowd Evolution Mod APK and Enjoy the Fun of Crowd Building and Fighting.md b/spaces/fatiXbelha/sd/Download Crowd Evolution Mod APK and Enjoy the Fun of Crowd Building and Fighting.md deleted file mode 100644 index af9ce7d122b142a4e120b770ac57cea85a69ecb3..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Crowd Evolution Mod APK and Enjoy the Fun of Crowd Building and Fighting.md +++ /dev/null @@ -1,107 +0,0 @@ -
-

How to Download Mod Crowd Evolution Game and Why You Should Try It

-

Are you looking for a fun and unique game that will take you on a journey through human evolution? Do you want to experience the thrill of building your own army, fighting enemies, and traveling through time portals? If you answered yes, then you should definitely try Mod Crowd Evolution Game.

-

Mod Crowd Evolution Game is a modified version of the original Crowd Evolution Game, which adds more features and benefits for the players. In this article, we will tell you what Mod Crowd Evolution Game is, how to download and install it, and why you should try it. We will also share some tips and tricks to help you play the game better. So, let's get started!

-

download mod crowd evolution


Download File >>>>> https://urllie.com/2uNy96



-

What is Mod Crowd Evolution Game?

-

Mod Crowd Evolution Game is a game that combines elements of action, strategy, and simulation. It is based on the theme of human evolution, where you start as an ancient prehistoric person and go through different periods of history by passing through time portals. Along the way, you will encounter various enemies, such as dinosaurs, zombies, robots, aliens, and more. You will also recruit more warriors to join your army, upgrade your weapons and skills, and grow your crowd.

-

The concept and gameplay of Mod Crowd Evolution Game

-

The concept of Mod Crowd Evolution Game is simple but addictive. You control your character with arrow keys or tap controls on your screen. You move around the map, collecting coins, gems, weapons, and other items. You also try to avoid or kill the enemies that are trying to stop you. If you touch an enemy, you will lose some warriors from your crowd. If you lose all your warriors, you will die and have to start over.

-

The goal of the game is to reach the end of each level, where you will find a time portal that will take you to the next period of human evolution. Each time portal will give you a boost in time or warriors, depending on which one you choose. However, some time portals will also take away some time or warriors from you, so be careful. The game has many levels to play, each with different challenges and environments.

-

The features and benefits of Mod Crowd Evolution Game

-

Mod Crowd Evolution Game has many features and benefits that make it more enjoyable and rewarding than the original game. Some of them are:

-
    -
  • Unlimited money and gems: You can get unlimited money and gems in Mod Crowd Evolution Game, which you can use to buy more weapons, upgrades, skins, and other items. You can also use them to revive yourself if you die or skip levels if you get stuck.
  • -
  • New weapons and skills: Mod Crowd Evolution Game has more weapons and skills than the original game. You can use swords, axes, spears, guns, lasers, rockets, grenades, bombs, shields, helmets, armor, and more. You can also upgrade your weapons and skills to make them more powerful and effective.
  • -
  • New enemies and bosses: Mod Crowd Evolution Game has more enemies and bosses than the original game. You will face different types of enemies in each period of human evolution, such as cavemen, knights, pirates, ninjas, soldiers, robots, aliens, etc. You will also encounter giant bosses that will test your skills and strategy.
  • -
  • New maps and graphics: Mod Crowd Evolution Game has more maps and graphics than the original game. You will explore different locations in each period of human evolution, such as forests, deserts, islands, mountains, cities, factories, spaceships, etc. You will also enjoy the improved graphics and animations of the game.
  • -
  • New modes and challenges: Mod Crowd Evolution Game has more modes and challenges than the original game. You can play in different modes, such as survival, time trial, endless, and multiplayer. You can also complete various challenges, such as collecting a certain number of coins or gems, killing a certain number of enemies or bosses, reaching a certain level or time portal, etc.
  • -
-

As you can see, Mod Crowd Evolution Game has a lot to offer for the players who love action, strategy, and simulation games. It is a game that will keep you entertained and engaged for hours.

-

How to Download and Install Mod Crowd Evolution Game?

-

Now that you know what Mod Crowd Evolution Game is and why you should try it, you might be wondering how to download and install it on your device. Well, don't worry, because we have got you covered. Here are the steps to download and install Mod Crowd Evolution Game:

-

The steps to download and install Mod Crowd Evolution Game

-
    -
  1. First, make sure that your device meets the minimum requirements for Mod Crowd Evolution Game. You need an Android device running Android 4.4 or higher, or an iOS device running iOS 10.0 or higher, with at least 100 MB of free storage.
  2. -
  3. Next, you need to find a reliable source to download Mod Crowd Evolution Game. You can search for it on Google or any other search engine, or you can use the link that we have provided below. Make sure that you download the latest version of Mod Crowd Evolution Game.
  4. -
  5. After you have downloaded the Mod Crowd Evolution Game file, you need to install it on your device. If you are using an Android device, you need to enable the unknown sources option in your settings. This will allow you to install apps from sources other than the Google Play Store. If you are using an iOS device, you need to trust the developer profile in your settings. This will allow you to install apps from sources other than the App Store.
  6. -
  7. Once you have installed Mod Crowd Evolution Game on your device, you can launch it and start playing. You will see a tutorial that will guide you through the basics of the game. You can also customize your settings and preferences according to your liking.
  8. -
-

That's it! You have successfully downloaded and installed Mod Crowd Evolution Game on your device. Now you can enjoy the game and have fun!
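If you prefer to sideload the file from a computer instead of installing it directly on the phone, you can push it over USB with adb (Android Debug Bridge). The short Python sketch below is only an illustration and not part of the game: it assumes adb is installed and on your PATH, USB debugging is enabled on the phone, and the file name crowd_evolution_mod.apk is a placeholder for whatever file you actually downloaded.

```python
# Illustrative sketch only: sideload a downloaded APK with adb from a computer.
# Assumes adb is on your PATH and USB debugging is enabled on the device;
# "crowd_evolution_mod.apk" is a placeholder file name.
import subprocess

def sideload_apk(apk_path: str) -> None:
    # "adb install -r" installs the package, replacing it if it is already installed.
    result = subprocess.run(
        ["adb", "install", "-r", apk_path],
        capture_output=True,
        text=True,
    )
    print(result.stdout or result.stderr)

if __name__ == "__main__":
    sideload_apk("crowd_evolution_mod.apk")
```

This step is optional; installing the APK directly on the phone as described above works just as well.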

-

download mod crowd evolution apk
-download mod crowd evolution unlimited money
-download mod crowd evolution latest version
-download mod crowd evolution android
-download mod crowd evolution hack
-download mod crowd evolution game
-download mod crowd evolution free
-download mod crowd evolution offline
-download mod crowd evolution online
-download mod crowd evolution for pc
-download mod crowd evolution cheats
-download mod crowd evolution gems
-download mod crowd evolution weapons
-download mod crowd evolution characters
-download mod crowd evolution tips
-download mod crowd evolution guide
-download mod crowd evolution review
-download mod crowd evolution gameplay
-download mod crowd evolution strategy
-download mod crowd evolution fun
-download mod crowd evolution simulation
-download mod crowd evolution action
-download mod crowd evolution adventure
-download mod crowd evolution casual
-download mod crowd evolution arcade
-download mod crowd evolution 3d
-download mod crowd evolution graphics
-download mod crowd evolution sound
-download mod crowd evolution music
-download mod crowd evolution controls
-download mod crowd evolution features
-download mod crowd evolution updates
-download mod crowd evolution bugs
-download mod crowd evolution fixes
-download mod crowd evolution improvements
-download mod crowd evolution challenges
-download mod crowd evolution levels
-download mod crowd evolution stages
-download mod crowd evolution modes
-download mod crowd evolution scenarios
-download mod crowd evolution enemies
-download mod crowd evolution bosses
-download mod crowd evolution rewards
-download mod crowd evolution items
-download mod crowd evolution skins
-download mod crowd evolution costumes
-download mod crowd evolution customizations
-download mod crowd evolution ratings
-download mod crowd evolution comments

-

The tips and tricks to play Mod Crowd Evolution Game

-

To help you play Mod Crowd Evolution Game better, we have also prepared some tips and tricks for you. Here are some of them:

-
    -
  • Use different weapons and skills according to the situation. Some weapons and skills are more effective against certain enemies or in certain environments. For example, swords and axes are good for close combat, guns and lasers are good for long-range combat, grenades and bombs are good for crowd control, shields and helmets are good for defense, etc.
  • -
  • Collect as many coins and gems as possible. Coins and gems are the main currency in Mod Crowd Evolution Game. You can use them to buy more weapons, upgrades, skins, and other items. You can also use them to revive yourself if you die or skip levels if you get stuck.
  • -
  • Avoid or kill the enemies as fast as possible. Enemies are the main obstacle in Mod Crowd Evolution Game. They will try to stop you from reaching the end of each level or passing through the time portal. If you touch an enemy, you will lose some warriors from your crowd. If you lose all your warriors, you will die and have to start over.
  • -
  • Grow your crowd as large as possible. Your crowd is your main strength in Mod Crowd Evolution Game. The larger your crowd is, the more damage you can deal and the more enemies you can defeat. You can grow your crowd by recruiting more warriors along the way or by choosing time portals that give you a boost in warriors.
  • -
  • Choose wisely between time portals. Time portals are the key to progress in Mod Crowd Evolution Game. They will take you to the next period of human evolution or give you a boost in time or warriors. However, some time portals will also take away some time or warriors from you, so be careful. You can see the effect of each time portal before choosing it.
  • -
-

These are some of the tips and tricks that we have for you. Of course, there are more things that you can discover and learn by playing Mod Crowd Evolution Game yourself.

- Conclusion -

Mod Crowd Evolution Game offers plenty of fun and excitement. It lets you experience the journey of human evolution from prehistoric times to the future, challenges your skills and strategy against different enemies and bosses in every era, and rewards you with unlimited money and gems, new weapons and skills, new maps and graphics, new modes and challenges, and more.

-

If you are looking for a game that will keep you entertained and engaged for hours, then you should definitely download and install Mod Crowd Evolution Game on your device. You will not regret it. You will love it.

-

So, what are you waiting for? Download Mod Crowd Evolution Game now and start your adventure!

-

FAQs

-

Here are some of the frequently asked questions about Mod Crowd Evolution Game:

-

What are the requirements to play Mod Crowd Evolution Game?

-

To play Mod Crowd Evolution Game, you need an Android device running Android 4.4 or higher, or an iOS device running iOS 10.0 or higher. You also need at least 100 MB of free space on your device.

-

Is Mod Crowd Evolution Game safe and legal to download?

-

Yes, Mod Crowd Evolution Game is safe and legal to download. It is a modified version of the original Crowd Evolution Game, which is available on the Google Play Store and the App Store. It does not contain any viruses or malware that can harm your device or data. It also does not violate any laws or regulations that can get you in trouble.

-

How can I get unlimited money and gems in Mod Crowd Evolution Game?

-

You can get unlimited money and gems in Mod Crowd Evolution Game by downloading and installing the modded version of the game. The modded version of the game will give you unlimited money and gems, which you can use to buy more weapons, upgrades, skins, and other items. You can also use them to revive yourself if you die or skip levels if you get stuck.

-

What are the best weapons and strategies in Mod Crowd Evolution Game?

-

The best weapons and strategies in Mod Crowd Evolution Game depend on the situation and the enemies that you are facing. Some weapons and strategies are more effective against certain enemies or in certain environments. For example, swords and axes are good for close combat, guns and lasers are good for long-range combat, grenades and bombs are good for crowd control, shields and helmets are good for defense, etc. You can also upgrade your weapons and skills to make them more powerful and effective.

-

How can I contact the developers of Mod Crowd Evolution Game?

-

If you have any questions, feedback, suggestions, or issues about Mod Crowd Evolution Game, you can contact the developers of the game by sending them an email at modcrowdevolution@gmail.com. They will try to respond to you as soon as possible.

-
-
\ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Save The Doge Pro APK for Free and Rescue the Cute Doge.md b/spaces/fatiXbelha/sd/Download Save The Doge Pro APK for Free and Rescue the Cute Doge.md deleted file mode 100644 index c4d55ee0b6146587004d1184b4cf3e7135396822..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Save The Doge Pro APK for Free and Rescue the Cute Doge.md +++ /dev/null @@ -1,92 +0,0 @@ -
-

Save the Doge Pro APK: A Fun and Challenging Puzzle Game

-

Do you love dogs? Do you love puzzles? Do you love drawing? If you answered yes to any of these questions, then you will love Save the Doge Pro APK, a fun and challenging puzzle game that lets you draw lines to protect a cute doge from bees. In this article, we will tell you what Save the Doge Pro APK is, how to download and install it, and how to play it.

-

What is Save the Doge Pro APK?

-

A puzzle game that lets you draw lines to protect a cute doge from bees

-

Save the Doge Pro APK is a puzzle game that allows you to draw a line to save the poor doge. Oops! These bees are dangerous! Doge is in danger! Could you protect Doge from the bad bees?

-

save the doge pro apk


Download Ziphttps://urllie.com/2uNxpi



-

This game is based on the popular internet meme of doge, a Shiba Inu dog with funny expressions. The game challenges your creativity, logic, and reflexes as you try to draw lines to block the bees from reaching the doge. You have to be quick and smart, as the bees will try to find a way around your lines.

-

The features of Save the Doge Pro APK

-

Simple and intuitive gameplay

-

The game is easy to play, but hard to master. You just need to touch the screen to draw a line. The line will act as a barrier between the doge and the bees. You can draw as many lines as you want, but be careful not to run out of ink. The ink meter will show you how much ink you have left.

-

Various levels and scenarios

-

The game has different levels and scenarios, each with its own difficulty and challenge. You will encounter different types of bees, such as normal bees, angry bees, boss bees, and more. You will also see different backgrounds, such as grassland, forest, desert, and more. Each level will test your skills and strategy.

-

Cute graphics and sound effects

-

The game has cute graphics and sound effects that will make you smile. The doge is adorable and expressive, and the bees are funny and annoying. The game also has cheerful music and sound effects that will enhance your gaming experience.

-

No ads or in-app purchases

-

The best thing about Save the Doge Pro APK is that it is completely free and has no ads or in-app purchases. You can enjoy the game without any interruptions or distractions. You can also play it offline without an internet connection.

-

How to download and install Save the Doge Pro APK?

-

Download the APK file from a trusted source

-

To download Save the Doge Pro APK, you need to find a trusted source that offers the latest version of the game. You can use this link to download the APK file directly from APKPure.com, a reliable website that provides safe and secure downloads of Android apps.

-

Enable unknown sources on your device settings

-

Before you can install Save the Doge Pro APK, you need to enable unknown sources in your device settings. This will allow you to install apps that are not from the Google Play Store. To do this, follow these steps:
- Go to your device settings and tap on Security or Privacy.
- Find the option that says Unknown Sources or Install Unknown Apps and toggle it on.
- Confirm your choice by tapping OK or Allow.

Install the APK file and enjoy the game

-

Once you have enabled unknown sources, you can install Save the Doge Pro APK by following these steps:
- Locate the APK file that you have downloaded and tap on it.
- Follow the instructions on the screen and tap on Install.
- Wait for the installation to finish and tap on Open.
- Enjoy playing Save the Doge Pro APK and save the doge from the bees.

How to play Save the Doge Pro APK?

-

Touch the screen to draw a line

-

The main objective of Save the Doge Pro APK is to draw a line to protect the doge from the bees. To do this, you need to touch the screen and drag your finger to draw a line. The line will act as a barrier between the doge and the bees. You can draw as many lines as you want, but be careful not to run out of ink. The ink meter will show you how much ink you have left.

-

save the doge mod apk unlocked
-save the doge game download for android
-save the doge pro apk latest version
-save the doge mod apk free download
-save the doge casual game mod apk
-save the doge apk download for pc
-save the doge pro apk moddroid
-save the doge game online play
-save the doge pro apk no ads
-save the doge mod apk unlimited coins
-save the doge game review
-save the doge pro apk 2023
-save the doge mod apk android 1
-save the doge game hack
-save the doge pro apk full version
-save the doge mod apk revdl
-save the doge game tips and tricks
-save the doge pro apk premium
-save the doge mod apk happymod
-save the doge game cheats
-save the doge pro apk cracked
-save the doge mod apk rexdl
-save the doge game walkthrough
-save the doge pro apk paid
-save the doge mod apk an1
-save the doge game guide
-save the doge pro apk original
-save the doge mod apk apkpure
-save the doge game levels
-save the doge pro apk update
-save the doge mod apk 1.0.11
-save the doge game features
-save the doge pro apk old version
-save the doge mod apk offline
-save the doge game screenshots
-save the doge pro apk mirror
-save the doge mod apk online
-save the doge game rating
-save the doge pro apk obb
-save the doge mod apk 2022
-save the doge game developer
-save the doge pro apk reddit
-save the doge mod apk ios
-save the doge game genre
-save the doge pro apk telegram
-save the doge mod apk unlimited money
-save the doge game size
-save the doge pro apk uptodown

-

Hold on for 10 seconds to prevent the bees from hurting the doge

-

The game will start with a countdown of 10 seconds. During this time, you need to hold on and prevent the bees from hurting the doge. The bees will try to find a way around your lines and reach the doge. If they touch the doge, they will sting him and reduce his health. The health bar will show you how much health the doge has left. If his health reaches zero, you will lose the game.

-

Use less ink to get a higher score

-

The game will reward you with a score based on how well you protect the doge. The score will depend on how much ink you use and how many bees you block. The less ink you use, the higher your score will be. The more bees you block, the higher your score will be. You can also get bonus points for completing achievements, such as saving the doge without using any ink, blocking all the bees, or saving the doge with full health.
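The exact scoring formula is not published, so the tiny Python sketch below is purely hypothetical: it only illustrates the idea described above that using less ink and blocking more bees both raise your score, with an optional achievement bonus. The weights are invented for the example.

```python
# Hypothetical scoring sketch for illustration only; the real game's formula
# and weights are not published, so these numbers are invented.
def estimate_score(ink_used: float, ink_total: float, bees_blocked: int, bonus: int = 0) -> int:
    ink_saved = max(ink_total - ink_used, 0.0)
    # Less ink used and more bees blocked -> higher score, plus any achievement bonus.
    return int(ink_saved * 10 + bees_blocked * 50 + bonus)

# Example: finishing a level with 3.5 of 10 ink used and 4 bees blocked.
print(estimate_score(ink_used=3.5, ink_total=10.0, bees_blocked=4))  # 265
```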

-

Conclusion

-

Save the Doge Pro APK is a fun and challenging puzzle game that lets you draw lines to protect a cute doge from bees. It has simple and intuitive gameplay, various levels and scenarios, cute graphics and sound effects, and no ads or in-app purchases. You can download and install it easily from a trusted source and enjoy playing it offline without an internet connection. If you love dogs, puzzles, and drawing, you should definitely try Save the Doge Pro APK.

-

FAQs

-

Here are some frequently asked questions about Save the Doge Pro APK:

- - - - - - - -
| Question | Answer |
| --- | --- |
| Is Save the Doge Pro APK safe to download and install? | Yes, Save the Doge Pro APK is safe to download and install from a trusted source like APKPure.com. It does not contain any viruses or malware that can harm your device. |
| Is Save the Doge Pro APK compatible with my device? | Save the Doge Pro APK is compatible with most Android devices that have Android 4.4 or higher. You can check your device's compatibility by visiting this link and clicking on Check Compatibility. |
| How can I update Save the Doge Pro APK? | You can update Save the Doge Pro APK by visiting this link and downloading the latest version of the game. You can also enable auto-update by tapping on More Options > Auto Update Apps > On. |
| How can I contact the developer of Save the Doge Pro APK? | You can contact the developer of Save the Doge Pro APK by visiting this link and clicking on Contact Developer. You can also send an email to savethedogepro@gmail.com. |
| How can I share my feedback or suggestions for Save the Doge Pro APK? | You can share your feedback or suggestions for Save the Doge Pro APK by visiting this link and clicking on Rate This App. You can also leave a comment or a review on the same page. |
-

I hope this article has helped you learn more about Save the Doge Pro APK and how to download, install, and play it. If you have any questions or comments, feel free to contact me or leave a comment below. Thank you for reading and have a great day!

-
-
\ No newline at end of file diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/tools/match_histogram.py b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/tools/match_histogram.py deleted file mode 100644 index be55a7788363bc3b212b82864547592faa936b87..0000000000000000000000000000000000000000 --- a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/tools/match_histogram.py +++ /dev/null @@ -1,167 +0,0 @@ -from argparse import ( - ArgumentParser, - Namespace, -) -import os -from os.path import join as pjoin -from typing import Optional -import sys - -import numpy as np -import cv2 -from skimage import exposure - - -# sys.path.append('Face_Detection') -# from align_warp_back_multiple_dlib import match_histograms - - -def calculate_cdf(histogram): - """ - This method calculates the cumulative distribution function - :param array histogram: The values of the histogram - :return: normalized_cdf: The normalized cumulative distribution function - :rtype: array - """ - # Get the cumulative sum of the elements - cdf = histogram.cumsum() - - # Normalize the cdf - normalized_cdf = cdf / float(cdf.max()) - - return normalized_cdf - - -def calculate_lookup(src_cdf, ref_cdf): - """ - This method creates the lookup table - :param array src_cdf: The cdf for the source image - :param array ref_cdf: The cdf for the reference image - :return: lookup_table: The lookup table - :rtype: array - """ - lookup_table = np.zeros(256) - lookup_val = 0 - for src_pixel_val in range(len(src_cdf)): - lookup_val - for ref_pixel_val in range(len(ref_cdf)): - if ref_cdf[ref_pixel_val] >= src_cdf[src_pixel_val]: - lookup_val = ref_pixel_val - break - lookup_table[src_pixel_val] = lookup_val - return lookup_table - - -def match_histograms(src_image, ref_image, src_mask=None, ref_mask=None): - """ - This method matches the source image histogram to the - reference signal - :param image src_image: The original source image - :param image ref_image: The reference image - :return: image_after_matching - :rtype: image (array) - """ - # Split the images into the different color channels - # b means blue, g means green and r means red - src_b, src_g, src_r = cv2.split(src_image) - ref_b, ref_g, ref_r = cv2.split(ref_image) - - def rv(im): - if ref_mask is None: - return im.flatten() - return im[ref_mask] - - def sv(im): - if src_mask is None: - return im.flatten() - return im[src_mask] - - # Compute the b, g, and r histograms separately - # The flatten() Numpy method returns a copy of the array c - # collapsed into one dimension. 
- src_hist_blue, bin_0 = np.histogram(sv(src_b), 256, [0, 256]) - src_hist_green, bin_1 = np.histogram(sv(src_g), 256, [0, 256]) - src_hist_red, bin_2 = np.histogram(sv(src_r), 256, [0, 256]) - ref_hist_blue, bin_3 = np.histogram(rv(ref_b), 256, [0, 256]) - ref_hist_green, bin_4 = np.histogram(rv(ref_g), 256, [0, 256]) - ref_hist_red, bin_5 = np.histogram(rv(ref_r), 256, [0, 256]) - - # Compute the normalized cdf for the source and reference image - src_cdf_blue = calculate_cdf(src_hist_blue) - src_cdf_green = calculate_cdf(src_hist_green) - src_cdf_red = calculate_cdf(src_hist_red) - ref_cdf_blue = calculate_cdf(ref_hist_blue) - ref_cdf_green = calculate_cdf(ref_hist_green) - ref_cdf_red = calculate_cdf(ref_hist_red) - - # Make a separate lookup table for each color - blue_lookup_table = calculate_lookup(src_cdf_blue, ref_cdf_blue) - green_lookup_table = calculate_lookup(src_cdf_green, ref_cdf_green) - red_lookup_table = calculate_lookup(src_cdf_red, ref_cdf_red) - - # Use the lookup function to transform the colors of the original - # source image - blue_after_transform = cv2.LUT(src_b, blue_lookup_table) - green_after_transform = cv2.LUT(src_g, green_lookup_table) - red_after_transform = cv2.LUT(src_r, red_lookup_table) - - # Put the image back together - image_after_matching = cv2.merge([blue_after_transform, green_after_transform, red_after_transform]) - image_after_matching = cv2.convertScaleAbs(image_after_matching) - - return image_after_matching - - -def convert_to_BW(im, mode): - if mode == "b": - gray = im[..., 0] - elif mode == "gb": - gray = (im[..., 0].astype(float) + im[..., 1]) / 2.0 - else: - gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) - gray = gray.astype(np.uint8) - - return np.stack([gray] * 3, axis=-1) - - -def parse_args(args=None, namespace: Optional[Namespace] = None): - parser = ArgumentParser('match histogram of src to ref') - parser.add_argument('src') - parser.add_argument('ref') - parser.add_argument('--out', default=None, help="converted src that matches ref") - parser.add_argument('--src_mask', default=None, help="mask on which to match the histogram") - parser.add_argument('--ref_mask', default=None, help="mask on which to match the histogram") - parser.add_argument('--spectral_sensitivity', choices=['b', 'gb', 'g'], help="match the histogram of corresponding sensitive channel(s)") - parser.add_argument('--crop', type=int, default=0, help="crop the boundary to match") - return parser.parse_args(args=args, namespace=namespace) - - -def main(args): - A = cv2.imread(args.ref) - A = convert_to_BW(A, args.spectral_sensitivity) - B = cv2.imread(args.src, 0) - B = np.stack((B,) * 3, axis=-1) - - mask_A = cv2.resize(cv2.imread(args.ref_mask, 0), A.shape[:2][::-1], - interpolation=cv2.INTER_NEAREST) > 0 if args.ref_mask else None - mask_B = cv2.resize(cv2.imread(args.src_mask, 0), B.shape[:2][::-1], - interpolation=cv2.INTER_NEAREST) > 0 if args.src_mask else None - - if args.crop > 0: - c = args.crop - bc = int(c / A.shape[0] * B.shape[0] + 0.5) - A = A[c:-c, c:-c] - B = B[bc:-bc, bc:-bc] - - B = match_histograms(B, A, src_mask=mask_B, ref_mask=mask_A) - # B = exposure.match_histograms(B, A, multichannel=True) - - if args.out: - os.makedirs(os.path.dirname(args.out), exist_ok=True) - cv2.imwrite(args.out, B) - - return B - - -if __name__ == "__main__": - main(parse_args()) diff --git a/spaces/feng2022/styleganhuman_copy/torch_utils/training_stats.py b/spaces/feng2022/styleganhuman_copy/torch_utils/training_stats.py deleted file mode 100644 index 
3eb94d95286d8aeffe40ad32ca667e53b4622c4f..0000000000000000000000000000000000000000 --- a/spaces/feng2022/styleganhuman_copy/torch_utils/training_stats.py +++ /dev/null @@ -1,270 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. - -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Facilities for reporting and collecting training statistics across -multiple processes and devices. The interface is designed to minimize -synchronization overhead as well as the amount of boilerplate in user -code.""" - -import re -import numpy as np -import torch -import dnnlib - -from . import misc - -#---------------------------------------------------------------------------- - -_num_moments = 3 # [num_scalars, sum_of_scalars, sum_of_squares] -_reduce_dtype = torch.float32 # Data type to use for initial per-tensor reduction. -_counter_dtype = torch.float64 # Data type to use for the internal counters. -_rank = 0 # Rank of the current process. -_sync_device = None # Device to use for multiprocess communication. None = single-process. -_sync_called = False # Has _sync() been called yet? -_counters = dict() # Running counters on each device, updated by report(): name => device => torch.Tensor -_cumulative = dict() # Cumulative counters on the CPU, updated by _sync(): name => torch.Tensor - -#---------------------------------------------------------------------------- - -def init_multiprocessing(rank, sync_device): - r"""Initializes `torch_utils.training_stats` for collecting statistics - across multiple processes. - - This function must be called after - `torch.distributed.init_process_group()` and before `Collector.update()`. - The call is not necessary if multi-process collection is not needed. - - Args: - rank: Rank of the current process. - sync_device: PyTorch device to use for inter-process - communication, or None to disable multi-process - collection. Typically `torch.device('cuda', rank)`. - """ - global _rank, _sync_device - assert not _sync_called - _rank = rank - _sync_device = sync_device - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def report(name, value): - r"""Broadcasts the given set of scalars to all interested instances of - `Collector`, across device and process boundaries. - - This function is expected to be extremely cheap and can be safely - called from anywhere in the training loop, loss function, or inside a - `torch.nn.Module`. - - Warning: The current implementation expects the set of unique names to - be consistent across processes. Please make sure that `report()` is - called at least once for each unique name by each process, and in the - same order. If a given process has no scalars to broadcast, it can do - `report(name, [])` (empty list). - - Args: - name: Arbitrary string specifying the name of the statistic. - Averages are accumulated separately for each unique name. - value: Arbitrary set of scalars. Can be a list, tuple, - NumPy array, PyTorch tensor, or Python scalar. - - Returns: - The same `value` that was passed in. 
- """ - if name not in _counters: - _counters[name] = dict() - - elems = torch.as_tensor(value) - if elems.numel() == 0: - return value - - elems = elems.detach().flatten().to(_reduce_dtype) - moments = torch.stack([ - torch.ones_like(elems).sum(), - elems.sum(), - elems.square().sum(), - ]) - assert moments.ndim == 1 and moments.shape[0] == _num_moments - moments = moments.to(_counter_dtype) - - device = moments.device - if device not in _counters[name]: - _counters[name][device] = torch.zeros_like(moments) - _counters[name][device].add_(moments) - return value - -#---------------------------------------------------------------------------- - -def report0(name, value): - r"""Broadcasts the given set of scalars by the first process (`rank = 0`), - but ignores any scalars provided by the other processes. - See `report()` for further details. - """ - report(name, value if _rank == 0 else []) - return value - -#---------------------------------------------------------------------------- - -class Collector: - r"""Collects the scalars broadcasted by `report()` and `report0()` and - computes their long-term averages (mean and standard deviation) over - user-defined periods of time. - - The averages are first collected into internal counters that are not - directly visible to the user. They are then copied to the user-visible - state as a result of calling `update()` and can then be queried using - `mean()`, `std()`, `as_dict()`, etc. Calling `update()` also resets the - internal counters for the next round, so that the user-visible state - effectively reflects averages collected between the last two calls to - `update()`. - - Args: - regex: Regular expression defining which statistics to - collect. The default is to collect everything. - keep_previous: Whether to retain the previous averages if no - scalars were collected on a given round - (default: True). - """ - def __init__(self, regex='.*', keep_previous=True): - self._regex = re.compile(regex) - self._keep_previous = keep_previous - self._cumulative = dict() - self._moments = dict() - self.update() - self._moments.clear() - - def names(self): - r"""Returns the names of all statistics broadcasted so far that - match the regular expression specified at construction time. - """ - return [name for name in _counters if self._regex.fullmatch(name)] - - def update(self): - r"""Copies current values of the internal counters to the - user-visible state and resets them for the next round. - - If `keep_previous=True` was specified at construction time, the - operation is skipped for statistics that have received no scalars - since the last update, retaining their previous averages. - - This method performs a number of GPU-to-CPU transfers and one - `torch.distributed.all_reduce()`. It is intended to be called - periodically in the main training loop, typically once every - N training steps. - """ - if not self._keep_previous: - self._moments.clear() - for name, cumulative in _sync(self.names()): - if name not in self._cumulative: - self._cumulative[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - delta = cumulative - self._cumulative[name] - self._cumulative[name].copy_(cumulative) - if float(delta[0]) != 0: - self._moments[name] = delta - - def _get_delta(self, name): - r"""Returns the raw moments that were accumulated for the given - statistic between the last two calls to `update()`, or zero if - no scalars were collected. 
- """ - assert self._regex.fullmatch(name) - if name not in self._moments: - self._moments[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - return self._moments[name] - - def num(self, name): - r"""Returns the number of scalars that were accumulated for the given - statistic between the last two calls to `update()`, or zero if - no scalars were collected. - """ - delta = self._get_delta(name) - return int(delta[0]) - - def mean(self, name): - r"""Returns the mean of the scalars that were accumulated for the - given statistic between the last two calls to `update()`, or NaN if - no scalars were collected. - """ - delta = self._get_delta(name) - if int(delta[0]) == 0: - return float('nan') - return float(delta[1] / delta[0]) - - def std(self, name): - r"""Returns the standard deviation of the scalars that were - accumulated for the given statistic between the last two calls to - `update()`, or NaN if no scalars were collected. - """ - delta = self._get_delta(name) - if int(delta[0]) == 0 or not np.isfinite(float(delta[1])): - return float('nan') - if int(delta[0]) == 1: - return float(0) - mean = float(delta[1] / delta[0]) - raw_var = float(delta[2] / delta[0]) - return np.sqrt(max(raw_var - np.square(mean), 0)) - - def as_dict(self): - r"""Returns the averages accumulated between the last two calls to - `update()` as an `dnnlib.EasyDict`. The contents are as follows: - - dnnlib.EasyDict( - NAME = dnnlib.EasyDict(num=FLOAT, mean=FLOAT, std=FLOAT), - ... - ) - """ - stats = dnnlib.EasyDict() - for name in self.names(): - stats[name] = dnnlib.EasyDict(num=self.num(name), mean=self.mean(name), std=self.std(name)) - return stats - - def __getitem__(self, name): - r"""Convenience getter. - `collector[name]` is a synonym for `collector.mean(name)`. - """ - return self.mean(name) - -#---------------------------------------------------------------------------- - -def _sync(names): - r"""Synchronize the global cumulative counters across devices and - processes. Called internally by `Collector.update()`. - """ - if len(names) == 0: - return [] - global _sync_called - _sync_called = True - - # Collect deltas within current rank. - deltas = [] - device = _sync_device if _sync_device is not None else torch.device('cpu') - for name in names: - delta = torch.zeros([_num_moments], dtype=_counter_dtype, device=device) - for counter in _counters[name].values(): - delta.add_(counter.to(device)) - counter.copy_(torch.zeros_like(counter)) - deltas.append(delta) - deltas = torch.stack(deltas) - - # Sum deltas across ranks. - if _sync_device is not None: - torch.distributed.all_reduce(deltas) - - # Update cumulative values. - deltas = deltas.cpu() - for idx, name in enumerate(names): - if name not in _cumulative: - _cumulative[name] = torch.zeros([_num_moments], dtype=_counter_dtype) - _cumulative[name].add_(deltas[idx]) - - # Return name-value pairs. - return [(name, _cumulative[name]) for name in names] - -#---------------------------------------------------------------------------- diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download RakGhana Street Quiz The Funniest African Videos Ever.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download RakGhana Street Quiz The Funniest African Videos Ever.md deleted file mode 100644 index ac67d38a485ef7d4a4eb523075ff626776d7ce4d..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download RakGhana Street Quiz The Funniest African Videos Ever.md +++ /dev/null @@ -1,88 +0,0 @@ - -

Download Rakghana Street Quiz: A Fun and Educational App for Everyone

-

Do you love watching funny videos that make you laugh and learn at the same time? Do you enjoy testing your general knowledge and challenging your friends? Do you want to support a talented and passionate team of African comedians and content creators? If you answered yes to any of these questions, then you need to download Rakghana Street Quiz app right now!

-

download rakghana street quiz


Download File ✺✺✺ https://gohhs.com/2uPq0W



-

What is Rakghana Street Quiz?

-

Rakghana Street Quiz is a popular YouTube show that tests people's general knowledge on various topics, such as history, geography, science, culture, and more. The show is hosted by Iman, aka Mr. Laugh & Learn, who asks random people on the streets of different African countries some simple but tricky questions. The answers are often hilarious, surprising, and sometimes even shocking.

-

Rakghana Street Quiz is not only a hilarious and entertaining way to learn new things, but also a source of inspiration for many African comedians and content creators. The show has been featured on several media platforms, such as New Scientist, The Sun, Yahoo News, and more. The show has also spawned many spin-offs and parodies, such as Street Quiz Mzansi in South Africa, Street Quiz Kenya in Kenya, Street Quiz Uganda in Uganda, and more.

-

Rakghana Street Quiz has over 600k subscribers on YouTube and millions of views on each episode. The show has a loyal fan base that loves to watch and laugh with Iman and his guests. Some of the most memorable questions from the show include:

-
    -
  • If two are twins, three will be...?
  • -
  • Name five fruits
  • -
  • What does LOL stand for?
  • -
  • What is the difference between a film and a movie?
  • -
  • What is your favorite movie?
  • -
-

Why should you download Rakghana Street Quiz app?

-

If you are a fan of Rakghana Street Quiz, or if you want to become one, then you should definitely download their app. Here are some of the benefits of downloading Rakghana Street Quiz app:

-
    -
  • You can watch all the episodes offline anytime, anywhere. You don't need an internet connection or a YouTube account to enjoy the show. You can save your favorite episodes on your device and watch them whenever you want.
  • -
  • You can test your own knowledge and challenge your friends. You can play along with Iman and his guests and see how well you do. You can also invite your friends to join you and see who knows more. You can compare your scores and share your results on social media.
  • -
  • You can support the Rakghana team and their mission to educate and entertain. By downloading the app, you are showing your appreciation and support for the hard work and creativity of the Rakghana team. You are also helping them to reach more people and spread more laughter and learning across Africa and beyond.
  • -
-


How to download Rakghana Street Quiz app?

-

Downloading Rakghana Street Quiz app is very easy and fast. You just need to follow these simple steps:

-

download rakghana street quiz videos
-download rakghana street quiz mp4
-download rakghana street quiz latest episodes
-download rakghana street quiz funny questions
-download rakghana street quiz 2023
-download rakghana street quiz app
-download rakghana street quiz youtube
-download rakghana street quiz season 2
-download rakghana street quiz ghana
-download rakghana street quiz apk
-download rakghana street quiz comedy
-download rakghana street quiz online
-download rakghana street quiz free
-download rakghana street quiz full episodes
-download rakghana street quiz best moments
-download rakghana street quiz hd
-download rakghana street quiz for android
-download rakghana street quiz for pc
-download rakghana street quiz for iphone
-download rakghana street quiz for mac
-download rakghana street quiz with mr. clement
-download rakghana street quiz with celebrities
-download rakghana street quiz with answers
-download rakghana street quiz with subtitles
-download rakghana street quiz with music
-download rakghana street quiz 2022
-download rakghana street quiz 2021
-download rakghana street quiz 2020
-download rakghana street quiz 2019
-download rakghana street quiz 2018
-how to download rakghana street quiz
-where to download rakghana street quiz
-why you should download rakghana street quiz
-what is rakghana street quiz and how to download it
-benefits of downloading rakghana street quiz
-tips for downloading rakghana street quiz
-reviews of downloading rakghana street quiz
-alternatives to downloading rakghana street quiz
-challenges of downloading rakghana street quiz
-solutions for downloading rakghana street quiz

-

Step 1: Go to the Google Play Store or the App Store and search for "Rakghana Street Quiz"

-

You can use your Android or iOS device to download the app. Just open the Google Play Store or the App Store and type "Rakghana Street Quiz" in the search bar. You will see the app icon with a yellow background and a red question mark.

-

Step 2: Tap on the app icon and click on "Install" or "Get"

-

Once you find the app, tap on it and you will see a page with more information about the app, such as the description, the ratings, the screenshots, and more. To download the app, click on the "Install" button if you are using an Android device, or the "Get" button if you are using an iOS device.

-

Step 3: Wait for the app to download and open it

-

The app will start downloading automatically after you click on the button. Depending on your internet speed and your device storage, the download may take a few seconds or minutes. You can check the progress of the download on your notification bar or your home screen. Once the download is complete, you can open the app by tapping on it.

-

Step 4: Enjoy the fun and laughter of Rakghana Street Quiz

-

Congratulations! You have successfully downloaded Rakghana Street Quiz app. Now you can enjoy watching all the episodes offline, playing along with Iman and his guests, challenging your friends, and supporting the Rakghana team. Have fun and learn something new every day with Rakghana Street Quiz!

-

Conclusion

-

Rakghana Street Quiz is a must-have app for anyone who loves comedy, trivia, and learning new things. It is a fun and educational way to spend your free time, whether you are alone or with your friends. It is also a great way to support a talented and passionate team of African comedians and content creators who are making a positive impact on their continent and beyond.

-

Download Rakghana Street Quiz app today and join the millions of fans who watch and laugh with Rakghana Street Quiz. You will not regret it!

-

Frequently Asked Questions

-

Q: How much does Rakghana Street Quiz app cost?

-

A: Rakghana Street Quiz app is completely free to download and use. You don't need to pay anything to enjoy the show.

-

Q: How often are new episodes of Rakghana Street Quiz released?

-

A: The Rakghana team usually releases new episodes of Rakghana Street Quiz every week on their YouTube channel. You can also find them on their app as soon as they are uploaded.

-

Q: How can I contact or follow the Rakghana team?

-

A: You can contact or follow the Rakghana team on their social media platforms, such as Facebook, Instagram, Twitter, and TikTok. You can also visit their website at www.rakghana.com for more information.

-

Q: Can I suggest questions or topics for Rakghana Street Quiz?

-

A: Yes, you can suggest questions or topics for Rakghana Street Quiz by leaving a comment on their YouTube videos or their social media posts. The Rakghana team may use your suggestions in their future episodes.

-

Q: Can I be a guest on Rakghana Street Quiz?

-

A: Yes, you can be a guest on Rakghana Street Quiz if you happen to meet Iman on the streets of any African country where he is filming. You can also apply to be a guest by sending an email to rakghanastreetquiz@gmail.com with your name, location, and why you want to be on the show.

-
-
\ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Treasure Songs and Enjoy Their Amazing Music.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Treasure Songs and Enjoy Their Amazing Music.md deleted file mode 100644 index 1f0fa137d7cd1c89faf2448fa718f3013c8d0640..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download Treasure Songs and Enjoy Their Amazing Music.md +++ /dev/null @@ -1,85 +0,0 @@ - -

Download Treasure Songs: How to Enjoy the Music of the Popular K-pop Group

-

If you are a fan of K-pop, you have probably heard of Treasure, one of the most promising and talented groups in the industry. Treasure is a 10-member boy group under YG Entertainment, known for their catchy songs, powerful performances, and charming personalities. Whether you are a longtime supporter or a new listener, you might want to download Treasure songs and listen to them offline. But how can you do that legally and ethically? And what are the best sources to find Treasure songs online? In this article, we will answer these questions and more, so you can enjoy the music of Treasure anytime, anywhere.

-

Who are Treasure?

-

A brief introduction to the group and their members

-

Treasure debuted in August 2020, after being formed through the survival show YG Treasure Box. The original lineup had 12 members: Choi Hyunsuk, Jihoon, Yoshi, Junkyu, Mashiho, Yoon Jaehyuk, Asahi, Bang Yedam, Doyoung, Haruto, Park Jeongwoo, and So Junghwan. They have diverse backgrounds and skills, as some of them are from Japan, some of them can rap or produce music, and some of them have trained for years before debuting.

-

download treasure songs


Download > https://gohhs.com/2uPvla



-

Their musical style and achievements

-

Treasure's musical style is versatile and dynamic, as they can switch from upbeat pop songs to emotional ballads. Some of their most popular songs include "Boy", "I Love You", "MMM", "My Treasure", and "Jikjin". They have also released a full-length album called The First Step: Treasure Effect, which showcases their vocal and rap abilities. Treasure has achieved many accolades since their debut, such as winning Rookie of the Year awards, breaking sales records, and gaining millions of fans worldwide.

-

Why download Treasure songs?

-

The benefits of downloading music offline

-

Downloading music offline has many advantages over streaming music online. For one thing, you can save data and battery life by not relying on an internet connection. You can also avoid interruptions from ads or buffering issues. Moreover, you can have more control over your music library, as you can create playlists, edit tags, and delete songs as you wish. Downloading music offline also allows you to support your favorite artists directly, as they can earn more revenue from digital sales than from streaming services.

-

The legal and ethical issues of downloading music online

-

However, downloading music online is not always legal or ethical. Music is a form of intellectual property that belongs to the artists and their labels. If you download music from unauthorized sources or share it with others without permission, you are violating their rights and harming their income. Therefore, you should always respect the artists and their work by downloading music from legal and ethical sources. You should also avoid using music for commercial or public purposes without obtaining a license or paying royalties.

-

How to download Treasure songs?

-

The best free music download sites for Treasure songs

-

Bandcamp

-

Bandcamp is a platform that allows artists to upload their music and set their own prices. You can find some Treasure songs on Bandcamp, such as "My Treasure" and "Jikjin". To download them for free, you just need to enter $0 as the price and provide your email address. You can also choose to pay more if you want to support the artists. Bandcamp supports multiple formats, such as MP3, FLAC, and WAV. Bandcamp is a legal and ethical source of music, as it gives artists full control over their music and pays them fairly.

-

DatPiff

-

DatPiff is a website that specializes in hip-hop and rap music. You can find some Treasure songs on DatPiff, such as "Going Crazy" and "Orange". To download them for free, you just need to create an account and click on the download button. You can also stream the songs online or share them with your friends. DatPiff is a legal and ethical source of music, as it works with the artists and their labels to distribute their music and promote their careers.

-

Free Music Archive

-

Free Music Archive is a library of high-quality and royalty-free music. You can find some Treasure songs on Free Music Archive, such as "Beautiful" and "Slow Motion". To download them for free, you just need to click on the download icon and choose the format you want. You can also browse by genre, mood, or license. Free Music Archive is a legal and ethical source of music, as it offers music that is either in the public domain or licensed under Creative Commons.

-

download treasure songs mp3
-download treasure songs free
-download treasure songs by bruno mars
-download treasure songs from spotify
-download treasure songs kpop
-download treasure songs 2023
-download treasure songs offline
-download treasure songs album
-download treasure songs video
-download treasure songs lyrics
-download treasure boy song
-download treasure i love you song
-download treasure mmm song
-download treasure jikjin song
-download treasure my treasure song
-download treasure beautiful song
-download treasure slowmotion song
-download treasure be with me song
-download treasure orange song
-download treasure going crazy song
-download treasure blt song
-download treasure come to me song
-download treasure boy remix song
-download treasure i love you remix song
-download treasure mmm remix song
-download treasure boy instrumental song
-download treasure i love you instrumental song
-download treasure mmm instrumental song
-download treasure boy acoustic song
-download treasure i love you acoustic song
-download treasure mmm acoustic song
-how to download treasure songs on iphone
-how to download treasure songs on android
-how to download treasure songs on pc
-how to download treasure songs on mac
-how to download treasure songs on itunes
-how to download treasure songs on youtube
-how to download treasure songs on soundcloud
-how to download treasure songs on amazon music
-how to download treasure songs on apple music

-

The best music streaming services for Treasure songs

-

Spotify

-

Spotify is one of the most popular and widely used music streaming services in the world. You can find all of Treasure's songs on Spotify, as well as their albums, playlists, and podcasts. To listen to them online, you just need to sign up for a free account and search for Treasure. You can also download them offline if you upgrade to a premium account for a monthly fee. Spotify is a legal and ethical source of music, as it pays royalties to the artists and their labels based on the number of streams.

-

YouTube Music

-

YouTube Music is a music streaming service that is integrated with YouTube. You can find all of Treasure's songs on YouTube Music, as well as their music videos, live performances, and interviews. To listen to them online, you just need to sign up for a free account and search for Treasure. You can also download them offline if you subscribe to YouTube Premium for a monthly fee. YouTube Music is a legal and ethical source of music, as it pays royalties to the artists and their labels based on the number of views.

-

Apple Music

-

Apple Music is a music streaming service that is exclusive to Apple devices. You can find all of Treasure's songs on Apple Music, as well as their albums, playlists, and radio stations. To listen to them online, you just need to sign up for a free trial and search for Treasure. You can also download them offline if you continue with a paid subscription for a monthly fee. Apple Music is a legal and ethical source of music, as it pays royalties to the artists and their labels based on the number of plays.

-

Conclusion

-

A summary of the main points and a call to action

-

Treasure is an amazing K-pop group that deserves your attention and support. Their songs are catchy, versatile, and powerful, and their performances are impressive and charismatic. If you want to enjoy their music offline, you have many options to choose from. You can download their songs for free from legal and ethical sources like Bandcamp, DatPiff, or Free Music Archive. Or you can stream their songs online from popular services like Spotify, YouTube Music, or Apple Music. Whichever way you choose, you will not regret listening to Treasure's songs. So what are you waiting for? Download Treasure songs today and join the Treasure Makers fandom!

-

Frequently Asked Questions

-

Q: How can I contact Treasure or send them fan mail?

-

A: You can follow Treasure's official social media accounts on Twitter, Instagram, Facebook, TikTok, Weibo, or V Live. You can also send them fan mail through their fan cafe or their agency's address.

-

Q: How can I buy Treasure's merchandise or albums?

-

A: You can buy Treasure's merchandise or albums from their official online store or from authorized retailers like YG Select or Ktown4u.

-

Q: How can I watch Treasure's reality show or variety show appearances?

-

A: You can watch Treasure's reality show "Treasure Map" on their YouTube channel or V Live channel. You can also watch their variety show appearances on various platforms like Netflix, Viu, or Kocowa.

-

Q: How can I vote for Treasure in music shows or awards?

-

A: You can vote for Treasure in music shows like M Countdown, Show Champion, Music Bank, Show Music Core, or Inkigayo by using various apps or websites like Mwave, Whosfan, Idol Champ, Starpass, or The Music. You can also vote for Treasure in awards like MAMA, MMA, or AAA by using apps or websites like Mwave, Melon, or Choeaedol.

-

Q: How can I support Treasure's social causes or charity projects?

-

A: You can support Treasure's social causes or charity projects by donating to their official campaigns or fan-initiated fundraisers. For example, you can donate to the UNICEF campaign that Treasure participated in for their first anniversary. You can also donate to the fan projects that aim to plant trees, provide clean water, or help animals in Treasure's name.

-
-
\ No newline at end of file diff --git a/spaces/floriankrempl/mtg_rules_bot/mtg/data_handler/process_reddit_data.py b/spaces/floriankrempl/mtg_rules_bot/mtg/data_handler/process_reddit_data.py deleted file mode 100644 index 9ab35958b7a5b3a578146c32702bdb58d0f8ceae..0000000000000000000000000000000000000000 --- a/spaces/floriankrempl/mtg_rules_bot/mtg/data_handler/process_reddit_data.py +++ /dev/null @@ -1,72 +0,0 @@ -import re -import json -from pathlib import Path -import pandas as pd - - -CARD_NAME_PATTERN = r"\[\[.*?\]\]" -BLOCKLIST = ["cardname", "SET"] - - -def process_reddit_cards_data( - csv_file: Path = "data/raw/comments/reddit_2019.csv", -) -> list[dict]: - """Get text and entities from reddit comments. - - Returns: - [ - { - "entity": reddit_comment - "start": start character index of the card name - "end": end character index of the card name - } - ] - """ - data = pd.read_csv(csv_file) - - documents = [] - for _, row in data.iterrows(): - document = extract_entities_from_text(row["body"]) - if document is not None: - documents.append(document) - - with open("data/processed/reddit/reddit_2019.json", "w") as outfile: - json.dump(documents, outfile) - - return documents - - -def extract_entities_from_text(text): - """Text with [[card names]] in brackets. - - Subreddit use this patterns for a smart bot that provides links to the cards. - """ - extracted_cards = [] - for match in re.finditer(CARD_NAME_PATTERN, text): - card = match.group() - card = card.replace("[[", "") - card = card.replace("]]", "") - extracted_cards.append(card) - - processed_text = text.replace("[[", "") - processed_text = processed_text.replace("]]", "") - - document = {"text": processed_text, "entities": []} - try: - for card in extracted_cards: - if card not in BLOCKLIST: - for match in re.finditer(card, processed_text): - # print(match.group(), match.start(), match.end()) - document["entities"].append( - { - "entity": match.group(), - "start": match.start(), - "end": match.end(), - } - ) - except: - # print("not found", card, processed_text) - return - if document["entities"]: - return document - return diff --git a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/social_ai_envs/marblepassenv.py b/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/social_ai_envs/marblepassenv.py deleted file mode 100644 index b21f95b39a220dde6f71f077d5dc067db7400cf3..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/gym-minigrid/gym_minigrid/social_ai_envs/marblepassenv.py +++ /dev/null @@ -1,585 +0,0 @@ -import random -import time - -import numpy as np -from gym_minigrid.minigrid import * -from gym_minigrid.register import register -from gym_minigrid.social_ai_envs.socialaigrammar import SocialAIGrammar, SocialAIActions, SocialAIActionSpace -import time -from collections import deque - - -class Partner(NPC): - """ - A simple NPC that knows who is telling the truth - """ - def __init__(self, color, name, env): - super().__init__(color) - self.name = name - self.env = env - self.npc_dir = 1 # NPC initially looks downward - self.npc_type = 0 # this will be put into the encoding - - # opposite role of the agent - self.npc_side = "L" if self.env.agent_side == "R" else "R" - - # how many random action at the beginning -> removes trivial solutions - self.random_to_go = 0 # not needed as no lever is used, solution is trivial anyway - - assert set([self.npc_side, self.env.agent_side]) == {"L", "R"} - - self.was_introduced_to = False - self.pushed_the_marble = False - 
self.ate_an_apple = False - - # target obj - assert self.env.problem == self.env.parameters["Problem"] if self.env.parameters else "MarblePass" - - self.target_obj = self.env.generator - - assert self.env.grammar.contains_utterance(self.introduction_statement) - - def step(self, utterance): - - reply, info = super().step() - - if self.env.hidden_npc: - return reply, info - - reply, action = self.handle_introduction(utterance) - - if self.was_introduced_to: - - if self.random_to_go > 0: - action = random.choice([self.go_forward, self.rotate_left, self.rotate_right]) - self.random_to_go -= 1 - - elif not self.pushed_the_marble: - - if self.npc_side == "L": - # find angle - push_pos = self.env.marble.cur_pos - np.array([1, 0]) - - if all(self.cur_pos == push_pos): - next_target_position = self.env.marble.cur_pos - else: - next_target_position = push_pos - - # go to loc in front of marble and push - action = self.path_to_pos(next_target_position) - - elif self.npc_side == "R": - if self.env.marble.cur_pos[0] == self.env.generator.cur_pos[0]: - - distance = (self.env.marble.cur_pos - self.env.generator.cur_pos) - - # keep only the direction and move for 1 step - diff = np.sign(distance) - # assert all(diff == np.nan_to_num(distance / np.abs(distance))) - if abs(diff.sum()) == 1: - push_pos = self.env.marble.cur_pos + diff - - if all(self.cur_pos == push_pos): - next_target_position = self.env.marble.cur_pos - - else: - next_target_position = push_pos - - # go to loc in front of marble or push - action = self.path_to_pos(next_target_position) - - else: - raise ValueError("Undefined role") - - eaten_before = self.env.agent_apple.eaten - pushed_before = self.env.marble.is_moving - - if action is not None: - action() - - if not self.pushed_the_marble: - self.pushed_the_marble = not pushed_before and self.env.marble.is_moving - - # check if the NPC ate the apple - eaten_after = self.env.agent_apple.eaten - self.ate_an_apple = not eaten_before and eaten_after - - info = { - "prim_action": action.__name__ if action is not None else "no_op", - "utterance": reply or "no_op", - "was_introduced_to": self.was_introduced_to - } - - assert (reply or "no_op") in self.list_of_possible_utterances - - return reply, info - - -class MarblePassEnv(MultiModalMiniGridEnv): - """ - Environment in which the agent is instructed to go to a given object - named using an English text string - """ - - def __init__( - self, - size=10, - diminished_reward=True, - step_penalty=False, - knowledgeable=False, - max_steps=80, - hidden_npc=False, - reward_diminish_factor=0.1, - egocentric_observation=True, - ): - assert size >= 5 - self.empty_symbol = "NA \n" - self.diminished_reward = diminished_reward - self.step_penalty = step_penalty - self.knowledgeable = knowledgeable - self.hidden_npc = hidden_npc - self.hear_yourself = False - - self.grammar = SocialAIGrammar() - - self.init_done = False - # parameters - to be set in reset - self.parameters = None - - # encoding size should be 5 - self.add_npc_direction = True - self.add_npc_point_direction = True - self.add_npc_last_prim_action = True - - self.reward_diminish_factor = reward_diminish_factor - self.egocentric_observation = egocentric_observation - self.encoding_size = 3 + 2*bool(not self.egocentric_observation) + bool(self.add_npc_direction) + bool(self.add_npc_point_direction) + bool(self.add_npc_last_prim_action) - - super().__init__( - grid_size=size, - max_steps=max_steps, - # Set this to True for maximum speed - see_through_walls=False, - actions=SocialAIActions, # 
primitive actions - action_space=SocialAIActionSpace, - add_npc_direction=self.add_npc_direction, - add_npc_point_direction=self.add_npc_point_direction, - add_npc_last_prim_action=self.add_npc_last_prim_action, - reward_diminish_factor=self.reward_diminish_factor, - ) - self.all_npc_utterance_actions = Partner.get_list_of_possible_utterances() - self.prim_actions_dict = SocialAINPCActionsDict - - def is_in_marble_way(self, pos): - - if pos[0] == self.generator_current_pos[0]: # same column as generator - return True - - if pos[1] == self.marble_current_pos[1]: # same row as marble - return True - - # all good - return False - - def _gen_grid(self, width_, height_): - # Create the grid - self.grid = Grid(width_, height_, nb_obj_dims=self.encoding_size) - - # new - min_w = min(9, width_) - min_h = min(9, height_) - self.current_width = self._rand_int(min_w, width_+1) - self.current_height = self._rand_int(min_h, height_+1) - - self.wall_x = self.current_width-1 - self.wall_y = self.current_height-1 - - # Generate the surrounding walls - self.grid.wall_rect(0, 0, self.current_width, self.current_height) - - self.problem = self.parameters["Problem"] if self.parameters else "MarblePass" - self.version = self.parameters["Version"] if self.parameters else "Asocial" - self.role = self.parameters["Role"] if self.parameters else "A" - assert self.role in ["A", "B", "Meta"] - - if self.role in ["B", "Meta"]: - self.agent_side = "R" # starts on the right side - else: - self.agent_side = "L" # starts on the right side - - num_of_colors = self.parameters.get("Num_of_colors", None) if self.parameters else None - - self.add_obstacles() - - # apple - if num_of_colors is None: - POSSIBLE_COLORS = COLOR_NAMES - - else: - POSSIBLE_COLORS = COLOR_NAMES[:int(num_of_colors)] - - self.left_half_size = (self.current_width//2, self.current_height) - self.left_half_top = (0, 0) - - self.right_half_size = (self.current_width//2, self.current_height) - self.right_half_top = (self.current_width - self.current_width // 2, 0) - - # generator - self.generator_pos = (self.current_width//2, self.current_height) - self.generator_color = self._rand_elem(POSSIBLE_COLORS) - self.generator_current_pos = self.find_loc( - # on the right most column - top=(self.current_width-2, 0), - size=(1, self.current_height), - reject_agent_pos=True, - ) - - # marble - self.marble_color = self._rand_elem(POSSIBLE_COLORS) - if self.version == "Social": - self.marble_current_pos = self.find_loc( - top=(self.current_width//2 - 1, 1), # fence or column left of fence, not next to wall - size=(2, self.current_height - 2), - reject_agent_pos=True, - reject_fn=lambda _, pos: ( - # tuple(pos) in map(tuple, [self.left_apple_current_pos, self.right_apple_current_pos, self.generator_current_pos]) - # or - pos[1] == self.generator_current_pos[1] # reject if in row as the generator - # or - # pos[1] == self.right_apple_current_pos[1] # reject if in row as the partner's platform - # or - # any(pos == self.left_apple_current_pos) # reject if in row or column as the agent's platform - or - any(pos == 1) # next to a wall - or - pos[1] == self.current_height - 2 - ), - ) - else: - self.marble_current_pos = self.find_loc( - top=(self.generator_current_pos[0], 1), # fence or column left of fence, not next to wall - size=(1, self.current_height - 2), - reject_agent_pos=True, - reject_fn=lambda _, pos: ( - # tuple(pos) in map(tuple, [self.left_apple_current_pos, self.right_apple_current_pos, self.generator_current_pos]) - # or - pos[1] == 
self.generator_current_pos[1] # reject if in row as the generator - # or - # pos[1] == self.right_apple_current_pos[1] # reject if in row as the partner's platform - # or - # any(pos == self.left_apple_current_pos) # reject if in row or column as the agent's platform - or - any(pos == 1) # next to a wall - or - pos[1] == self.current_height - 2 - ), - ) - - # add fence to grid - self.grid.vert_wall( - x=self.current_width//2, - y=1, - length=self.current_height - 2, - obj_type=Fence - ) - - if self.version == "Social": - # create hole in fence wall to make room for the marble - self.grid.set(self.current_width//2, self.marble_current_pos[1], None) - - # generator platform - - # find the position for generator_platforms - self.left_apple_current_pos = self.find_loc( - top=self.left_half_top, size=self.left_half_size, reject_agent_pos=True, - # reject if in row or column as the generator - reject_fn=lambda _, pos: pos[1] == self.generator_current_pos[1] or pos[1] == self.marble_current_pos[1], - ) - - self.right_apple_current_pos = self.find_loc( - top=self.right_half_top, size=self.right_half_size, reject_agent_pos=True, - # reject if in row or column as the generator - reject_fn=lambda _, pos: any(pos == self.generator_current_pos) or pos[1] == self.marble_current_pos[1], - ) - - assert all(self.left_apple_current_pos < np.array([self.current_width - 1, self.current_height - 1])) - assert all(self.right_apple_current_pos < np.array([self.current_width - 1, self.current_height - 1])) - - self.left_generator_platform_color = self._rand_elem(POSSIBLE_COLORS) - self.right_generator_platform_color = self._rand_elem(POSSIBLE_COLORS) - - self.put_objects_in_env() - - # place agent - if self.agent_side == "L": - self.place_agent(size=self.left_half_size, top=self.left_half_top) - else: - self.place_agent(size=self.right_half_size, top=self.right_half_top) - - # NPC - if self.version == "Social": - self.npc_color = self._rand_elem(COLOR_NAMES) - self.caretaker = Partner(self.npc_color, "Partner", self) - - if self.agent_side == "L": - self.place_obj(self.caretaker, size=self.right_half_size, top=self.right_half_top, reject_fn=MarblePassEnv.is_in_marble_way) - else: - self.place_obj(self.caretaker, size=self.left_half_size, top=self.left_half_top, reject_fn=MarblePassEnv.is_in_marble_way) - - # Generate the mission string - self.mission = 'lets collaborate' - - # Dummy beginning string - # self.beginning_string = "This is what you hear. 
\n" - self.beginning_string = "Conversation: \n" - self.utterance = self.beginning_string - - # utterance appended at the end of each step - self.utterance_history = "" - - # used for rendering - self.full_conversation = self.utterance - self.outcome_info = None - - def put_objects_in_env(self, remove_objects=False): - - assert self.left_apple_current_pos is not None - assert self.right_apple_current_pos is not None - assert self.generator_current_pos is not None - assert self.left_generator_platform_color is not None - assert self.right_generator_platform_color is not None - - assert self.problem == self.parameters["Problem"] if self.parameters else "MarblePass" - - if remove_objects: - self.grid.set(*self.left_generator_platform.cur_pos, None) # remove apple - self.grid.set(*self.right_generator_platform.cur_pos, None) # remove apple - self.grid.set(*self.generator.cur_pos, None) # remove generator - self.grid.set(*self.marble.cur_pos, None) # remove marble - self.grid.set(*self.marble_current_pos, None) # remove tee - - - # Apple - self.agent_apple = Apple() - self.partner_apple = Apple() - - def generate_apples(): - if self.agent_side == "L": - self.grid.set(*self.left_apple_current_pos, self.agent_apple), - self.grid.set(*self.right_apple_current_pos, self.partner_apple), - else: - self.grid.set(*self.left_apple_current_pos, self.partner_apple), - self.grid.set(*self.right_apple_current_pos, self.agent_apple), - - # Generator - self.generator = AppleGenerator( - self.generator_color, - on_push=generate_apples, - marble_activation=True, - ) - - self.left_generator_platform = GeneratorPlatform(self.left_generator_platform_color) - self.right_generator_platform = GeneratorPlatform(self.right_generator_platform_color) - - self.marble = Marble(self.marble_color, env=self) - - self.put_obj_np(self.left_generator_platform, self.left_apple_current_pos) - self.put_obj_np(self.right_generator_platform, self.right_apple_current_pos) - - self.put_obj_np(self.generator, self.generator_current_pos) - - self.put_obj_np(self.marble, self.marble_current_pos) - - def reset( - self, *args, **kwargs - ): - # This env must be used inside the parametric env - if not kwargs: - # The only place when kwargs can empty is during the class construction - # reset should be called again before using the env (paramenv does it in its constructor) - assert self.parameters is None - assert not self.init_done - self.init_done = True - - obs = super().reset() - return obs - - else: - assert self.init_done - - self.parameters = dict(kwargs) - - assert self.parameters is not None - assert len(self.parameters) > 0 - - obs = super().reset() - - self.agent_ate_the_apple = False - self.agent_opened_the_box = False - self.agent_turned_on_the_switch = False - self.agent_pressed_the_generator = False - self.agent_pushed_the_marble = False - - return obs - - def step(self, action): - success = False - p_action = action[0] - utterance_action = action[1:] - - apple_had_been_eaten = self.agent_apple.eaten - generator_had_been_pressed = self.generator.is_pressed - marble_had_been_pushed = self.marble.was_pushed - - # primitive actions - _, reward, done, info = super().step(p_action) - - self.marble.step() - - # eaten just now by primitive actions of the agent - if not self.agent_ate_the_apple: - self.agent_ate_the_apple = self.agent_apple.eaten and not apple_had_been_eaten - - if not self.agent_pressed_the_generator: - self.agent_pressed_the_generator = self.generator.is_pressed and not generator_had_been_pressed - - if not 
self.agent_pushed_the_marble: - self.agent_pushed_the_marble = self.marble.was_pushed and not marble_had_been_pushed - - # utterances - agent_spoke = not all(np.isnan(utterance_action)) - if agent_spoke: - utterance = self.grammar.construct_utterance(utterance_action) - - if self.hear_yourself: - self.utterance += "YOU: {} \n".format(utterance) - self.full_conversation += "YOU: {} \n".format(utterance) - else: - utterance = None - - if self.version == "Social": - reply, npc_info = self.caretaker.step(utterance) - - if reply: - self.utterance += "{}: {} \n".format(self.caretaker.name, reply) - self.full_conversation += "{}: {} \n".format(self.caretaker.name, reply) - else: - npc_info = { - "prim_action": "no_op", - "utterance": "no_op", - "was_introduced_to": False, - } - - # aftermath - if p_action == self.actions.done: - done = True - - if self.agent_ate_the_apple: - # check that it is the agent who ate it - assert self.actions(p_action) == self.actions.toggle - assert self.get_cell(*self.front_pos) == self.agent_apple - - if self.version == "Asocial" or self.role in ["A", "B"]: - reward = self._reward() - success = True - done = True - - elif self.role == "Meta": - - if self.agent_side == "L": - reward = self._reward() / 2 - success = True - done = True - - elif self.agent_side == "R": - # revert and rotate - reward = self._reward() / 2 - self.agent_ate_the_apple = False - self.agent_side = "L" - self.put_objects_in_env(remove_objects=True) - - # teleport the agent and the NPC - self.place_agent(size=self.left_half_size, top=self.left_half_top) - - self.grid.set(*self.caretaker.cur_pos, None) - - self.caretaker = Partner(self.npc_color, "Partner", self) - self.place_obj(self.caretaker, size=self.right_half_size, top=self.right_half_top, reject_fn=MarblePassEnv.is_in_marble_way) - else: - raise ValueError(f"Side unknown - {self.agent_side}.") - else: - raise ValueError(f"Role unknown - {self.role}.") - - # discount - if self.step_penalty: - reward = reward - 0.01 - - # update obs with NPC movement - obs = self.gen_obs(full_obs=self.full_obs) - - # fill observation with text - self.append_existing_utterance_to_history() - obs = self.add_utterance_to_observation(obs) - self.reset_utterance() - - if done: - if reward > 0: - self.outcome_info = "SUCCESS: agent got {} reward \n".format(np.round(reward, 1)) - else: - self.outcome_info = "FAILURE: agent got {} reward \n".format(reward) - - if self.version == "Social": - # is the npc seen by the agent - ag_view_npc = self.relative_coords(*self.caretaker.cur_pos) - - if ag_view_npc is not None: - # in the agent's field of view - ag_view_npc_x, ag_view_npc_y = ag_view_npc - - n_dims = obs['image'].shape[-1] - npc_encoding = self.caretaker.encode(n_dims) - - # is it occluded - npc_observed = all(obs['image'][ag_view_npc_x, ag_view_npc_y] == npc_encoding) - else: - npc_observed = False - else: - npc_observed = False - - info = {**info, **{"NPC_"+k: v for k, v in npc_info.items()}} - - info["NPC_observed"] = npc_observed - info["success"] = success - - return obs, reward, done, info - - def _reward(self): - if self.diminished_reward: - return super()._reward() - else: - return 1.0 - - # def render(self, *args, **kwargs): - # obs = super().render(*args, **kwargs) - # - # self.window.clear_text() # erase previous text - # self.window.set_caption(self.full_conversation) - # - # # self.window.ax.set_title("correct color: {}".format(self.box.target_color), loc="left", fontsize=10) - # - # if self.outcome_info: - # color = None - # if "SUCCESS" in 
self.outcome_info: color = "lime" - # elif "FAILURE" in self.outcome_info: - # color = "red" - # self.window.add_text(*(0.01, 0.85, self.outcome_info), - # **{'fontsize':15, 'color':color, 'weight':"bold"}) - # - # self.window.show_img(obs) # re-draw image to add changes to window - # return obs - - -register( - id='SocialAI-MarblePassEnv-v1', - entry_point='gym_minigrid.social_ai_envs:MarblePassEnv' -) \ No newline at end of file diff --git a/spaces/fuxin123zz/ChuanhuChatGPT/chatgpt - macOS.command b/spaces/fuxin123zz/ChuanhuChatGPT/chatgpt - macOS.command deleted file mode 100644 index fa015edca9e6916f24394813ce8ba77d2072e296..0000000000000000000000000000000000000000 --- a/spaces/fuxin123zz/ChuanhuChatGPT/chatgpt - macOS.command +++ /dev/null @@ -1,7 +0,0 @@ -#!/bin/bash -echo Opening ChuanhuChatGPT... -cd "$(dirname "${BASH_SOURCE[0]}")" -nohup python3 ChuanhuChatbot.py >/dev/null 2>&1 & -sleep 5 -open http://127.0.0.1:7860 -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). If you kill ChuanhuChatbot, Use "pkill -f 'ChuanhuChatbot'" command in terminal. \ No newline at end of file diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/midas/midas/midas_net_custom.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/midas/midas/midas_net_custom.py deleted file mode 100644 index 50e4acb5e53d5fabefe3dde16ab49c33c2b7797c..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/midas/midas/midas_net_custom.py +++ /dev/null @@ -1,128 +0,0 @@ -"""MidashNet: Network for monocular depth estimation trained by mixing several datasets. -This file contains code that is adapted from -https://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py -""" -import torch -import torch.nn as nn - -from .base_model import BaseModel -from .blocks import FeatureFusionBlock, FeatureFusionBlock_custom, Interpolate, _make_encoder - - -class MidasNet_small(BaseModel): - """Network for monocular depth estimation. - """ - - def __init__(self, path=None, features=64, backbone="efficientnet_lite3", non_negative=True, exportable=True, channels_last=False, align_corners=True, - blocks={'expand': True}): - """Init. - - Args: - path (str, optional): Path to saved model. Defaults to None. - features (int, optional): Number of features. Defaults to 256. - backbone (str, optional): Backbone network for encoder. 
Defaults to resnet50 - """ - print("Loading weights: ", path) - - super(MidasNet_small, self).__init__() - - use_pretrained = False if path else True - - self.channels_last = channels_last - self.blocks = blocks - self.backbone = backbone - - self.groups = 1 - - features1=features - features2=features - features3=features - features4=features - self.expand = False - if "expand" in self.blocks and self.blocks['expand'] == True: - self.expand = True - features1=features - features2=features*2 - features3=features*4 - features4=features*8 - - self.pretrained, self.scratch = _make_encoder(self.backbone, features, use_pretrained, groups=self.groups, expand=self.expand, exportable=exportable) - - self.scratch.activation = nn.ReLU(False) - - self.scratch.refinenet4 = FeatureFusionBlock_custom(features4, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet3 = FeatureFusionBlock_custom(features3, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet2 = FeatureFusionBlock_custom(features2, self.scratch.activation, deconv=False, bn=False, expand=self.expand, align_corners=align_corners) - self.scratch.refinenet1 = FeatureFusionBlock_custom(features1, self.scratch.activation, deconv=False, bn=False, align_corners=align_corners) - - - self.scratch.output_conv = nn.Sequential( - nn.Conv2d(features, features//2, kernel_size=3, stride=1, padding=1, groups=self.groups), - Interpolate(scale_factor=2, mode="bilinear"), - nn.Conv2d(features//2, 32, kernel_size=3, stride=1, padding=1), - self.scratch.activation, - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - nn.Identity(), - ) - - if path: - self.load(path) - - - def forward(self, x): - """Forward pass. 
- - Args: - x (tensor): input data (image) - - Returns: - tensor: depth - """ - if self.channels_last==True: - print("self.channels_last = ", self.channels_last) - x.contiguous(memory_format=torch.channels_last) - - - layer_1 = self.pretrained.layer1(x) - layer_2 = self.pretrained.layer2(layer_1) - layer_3 = self.pretrained.layer3(layer_2) - layer_4 = self.pretrained.layer4(layer_3) - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - layer_4_rn = self.scratch.layer4_rn(layer_4) - - - path_4 = self.scratch.refinenet4(layer_4_rn) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - out = self.scratch.output_conv(path_1) - - return torch.squeeze(out, dim=1) - - - -def fuse_model(m): - prev_previous_type = nn.Identity() - prev_previous_name = '' - previous_type = nn.Identity() - previous_name = '' - for name, module in m.named_modules(): - if prev_previous_type == nn.Conv2d and previous_type == nn.BatchNorm2d and type(module) == nn.ReLU: - # print("FUSED ", prev_previous_name, previous_name, name) - torch.quantization.fuse_modules(m, [prev_previous_name, previous_name, name], inplace=True) - elif prev_previous_type == nn.Conv2d and previous_type == nn.BatchNorm2d: - # print("FUSED ", prev_previous_name, previous_name) - torch.quantization.fuse_modules(m, [prev_previous_name, previous_name], inplace=True) - # elif previous_type == nn.Conv2d and type(module) == nn.ReLU: - # print("FUSED ", previous_name, name) - # torch.quantization.fuse_modules(m, [previous_name, name], inplace=True) - - prev_previous_type = previous_type - prev_previous_name = previous_name - previous_type = type(module) - previous_name = name \ No newline at end of file diff --git a/spaces/gheng/belanjawan-2024-chatbot/README.md b/spaces/gheng/belanjawan-2024-chatbot/README.md deleted file mode 100644 index 964b5406cbecc7dd59e03d6fa60aec23e19f2f8d..0000000000000000000000000000000000000000 --- a/spaces/gheng/belanjawan-2024-chatbot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Belanjawan 2024 Chatbot -emoji: 👁 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.48.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/gotgitgood/33.GZUZ.33/share_btn.py b/spaces/gotgitgood/33.GZUZ.33/share_btn.py deleted file mode 100644 index 07e04ae242ea01e6bd57de9186e2b00dfee28061..0000000000000000000000000000000000000000 --- a/spaces/gotgitgood/33.GZUZ.33/share_btn.py +++ /dev/null @@ -1,74 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - async function getInputVideoFile(videoEl){ - const res = await fetch(videoEl.src); - const blob = await res.blob(); - const videoId = Date.now() % 200; - const fileName = `sd-perception-${{videoId}}.mp4`; - return new File([blob], fileName, { type: 'video/mp4' }); - } - - async function audioToBase64(audioFile) { - return new Promise((resolve, reject) => { - let reader = new FileReader(); - 
reader.readAsDataURL(audioFile); - reader.onload = () => resolve(reader.result); - reader.onerror = error => reject(error); - - }); - } - const gradioEl = document.querySelector("gradio-app").shadowRoot || document.querySelector('body > gradio-app'); - const inputPromptEl = gradioEl.querySelector('#prompt-in input').value; - const outputVideoEl = gradioEl.querySelector('#output-video video'); - - let titleTxt = `Text-to-Audio: ${inputPromptEl}`; - - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!outputVideoEl){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - const outputVideo = await getInputVideoFile(outputVideoEl); - const urlOutputVideo = await uploadFile(outputVideo); - - const descriptionMd = ` -##### ${inputPromptEl} - -${urlOutputVideo} -`; - const params = new URLSearchParams({ - title: titleTxt, - description: descriptionMd, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/haoheliu/audioldm2-text2audio-text2music/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Emotiv Research Edition Sdk Crack.md b/spaces/gotiQspiryo/whisper-ui/examples/Emotiv Research Edition Sdk Crack.md deleted file mode 100644 index 19e0e40d0e74da723cbb331b38d0c34ba1f12176..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Emotiv Research Edition Sdk Crack.md +++ /dev/null @@ -1,12 +0,0 @@ -

emotiv research edition sdk crack


Download ››› https://urlgoal.com/2uyMAz



- -September 13, 2553 B.C. The Emotiv EPOC is a relatively cheap ($300) EEG headset designed for gaming. ... However, you will need to purchase the SDK starting from ... for game development. -In addition to the standard accessory box, you will receive an SDK. -You can download it from the Emotiv-UK website. -You will also get access to online support. -The package does not include programming instructions or a developer SDK, but you will find several tutorials, some of which are free. -This way you can get the base version for your system. -In addition to the standard accessory box, you will receive an SDK. 8a78ff9644
-
-
-

diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Fluturi Volumul 2 Pdf Download.md b/spaces/gotiQspiryo/whisper-ui/examples/Fluturi Volumul 2 Pdf Download.md deleted file mode 100644 index 077a5b23db8c0dc072a1855fed938b24f6ae20f2..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Fluturi Volumul 2 Pdf Download.md +++ /dev/null @@ -1,33 +0,0 @@ - -

Fluturi Volumul 2 Pdf Download: A Review of Irina Binder's Bestselling Romance Novel

-

If you are looking for a captivating and emotional read, you might want to check out Fluturi Volumul 2 by Irina Binder. This is the second book in the Fluturi series, which follows the love story of Irina and Andrei, two young people who meet and fall in love in college. The first book, Fluturi Volumul 1, ended with a cliffhanger that left readers wondering what will happen next. In this book, we find out how Irina and Andrei cope with the challenges and obstacles that life throws at them, such as distance, jealousy, betrayal, family issues, and personal growth.

-

Fluturi Volumul 2 is a book that will make you feel a range of emotions, from joy to sadness, from anger to forgiveness. It is a book that explores the themes of love, friendship, loyalty, trust, and self-discovery. It is a book that will make you reflect on your own relationships and choices. It is a book that will inspire you to follow your dreams and listen to your heart.

-

Fluturi Volumul 2 Pdf Download


DOWNLOAD –––––>>> https://urlgoal.com/2uyNwL



-

The author, Irina Binder, is a Romanian writer who has gained popularity and acclaim for her Fluturi series. She has a distinctive writing style that combines poetry, prose, and diary entries. She writes with honesty and sensitivity, capturing the inner thoughts and feelings of her characters. She also uses metaphors and symbols to convey deeper meanings and messages. One of the most prominent symbols in her books is the butterfly (fluturi, the Romanian word for butterflies), which represents the beauty, fragility, and transformation of life and love.

-

If you are interested in reading Fluturi Volumul 2 by Irina Binder, you may find it offered as a PDF file from various online sources. However, we recommend that you buy the book from a reputable seller or publisher to support the author and enjoy the best reading experience. You can also read the first book, Fluturi Volumul 1, if you haven't already done so. You will not regret diving into this wonderful and captivating story that will touch your soul.

In this article, we will give you some more details about Fluturi Volumul 2 by Irina Binder, such as the main characters, the plot summary, and the critical reception. We will also give you some tips on how to enjoy this book to the fullest and some suggestions for similar books that you might like.

-

The Main Characters

-

The main characters of Fluturi Volumul 2 are Irina and Andrei, the protagonists of the Fluturi series. They are both students at the University of Bucharest, where they study different majors. Irina is a shy and introverted girl who loves reading and writing. She has a passion for poetry and literature, and she dreams of becoming a famous writer someday. Andrei is a confident and charismatic boy who loves sports and music. He is a talented soccer player and a guitar player, and he has a lot of admirers and friends. They are both from different social backgrounds and have different personalities, but they share a strong connection and attraction that draws them together.

-

Other important characters in Fluturi Volumul 2 are:

-
    -
  • Andreea: Irina's best friend and roommate. She is a cheerful and supportive girl who always tries to help Irina with her problems. She is also in love with Andrei's best friend, Alex.
  • -
  • Alex: Andrei's best friend and roommate. He is a funny and loyal guy who always stands by Andrei's side. He is also in love with Andreea, but he is afraid to confess his feelings.
  • -
  • Cristian: Irina's ex-boyfriend. He is a selfish and manipulative guy who cheated on Irina with another girl. He still tries to interfere in Irina's life and cause trouble for her and Andrei.
  • -
  • Laura: Andrei's ex-girlfriend. She is a beautiful and popular girl who dated Andrei for a long time. She still has feelings for Andrei and tries to win him back.
  • -
  • Maria: Irina's mother. She is a strict and conservative woman who disapproves of Irina's relationship with Andrei. She wants Irina to marry Cristian, who comes from a wealthy family.
  • -
  • Victor: Andrei's father. He is a kind and supportive man who loves Andrei unconditionally. He encourages Andrei to follow his dreams and supports his relationship with Irina.
  • -
-

The Plot Summary

-

Fluturi Volumul 2 picks up where Fluturi Volumul 1 left off. Irina and Andrei have decided to give their relationship another chance after breaking up for a while due to misunderstandings and lies. They are happy and in love, but they still have to face many challenges and difficulties that threaten their happiness. Some of these challenges are:

-

-
    -
  • The distance: Irina has to move to another city for her studies, while Andrei stays in Bucharest. They try to keep in touch through phone calls and messages, but they miss each other terribly.
  • -
  • The jealousy: Irina and Andrei have to deal with the presence of their exes, Cristian and Laura, who try to sabotage their relationship. They also have to cope with the rumors and gossip that circulate around them.
  • -
  • The betrayal: Irina and Andrei have to face some shocking revelations that shake their trust and faith in each other. They discover some secrets and lies that hurt them deeply.
  • -
  • The family issues: Irina and Andrei have to deal with the opposition of their families, especially Irina's mother, who does not accept their relationship. They have to fight for their love against the prejudices and expectations of their parents.
  • -
  • The personal growth: Irina and Andrei have to grow up and mature as individuals and as a couple. They have to learn from their mistakes and overcome their fears and insecurities. They have to find their own paths and goals in life.
  • -
-

Will Irina and Andrei be able to overcome all these obstacles and stay together? Will they be able to fulfill their dreams and aspirations? Will they be able to find their true happiness? You will have to read Fluturi Volumul 2 by Irina Binder to find out!

-

The Critical Reception

-

Fluturi Volumul 2 by Irina Binder has received mixed reviews from critics and readers alike. Some people have praised the book for its emotional intensity, its poetic language, its realistic portrayal of young love

d5da3c52bf
-
-
\ No newline at end of file diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Gujjubhai The Great Download __FULL__ 300 Mb Movie.md b/spaces/gotiQspiryo/whisper-ui/examples/Gujjubhai The Great Download __FULL__ 300 Mb Movie.md deleted file mode 100644 index fefa0faf8011e595855ee0559d661c8091351ecd..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Gujjubhai The Great Download __FULL__ 300 Mb Movie.md +++ /dev/null @@ -1,28 +0,0 @@ -
-

How to Download Gujjubhai The Great, a Hilarious Gujarati Comedy Film, in 300 Mb

-

Gujjubhai The Great is a 2015 Gujarati comedy film directed by Ishaan Randeria and starring Siddharth Randeria, Jimit Trivedi, Swati Shah and Dipna Patel. It is based on a popular stage play of the same name by Siddharth Randeria, who also plays the lead role of Hasmukh Gandhi, a middle-aged businessman who tries to stop his daughter from marrying the wrong person with the help of his loyal employee Bakul. The film was a huge success at the box office and received positive reviews from critics and audiences alike.

-

If you are looking for a fun and entertaining movie to watch with your family or friends, Gujjubhai The Great is a great choice. But how can you download it in 300 Mb without compromising the quality? Here are some tips to help you out:

-

Gujjubhai The Great Download 300 Mb Movie


Downloadhttps://urlgoal.com/2uyLr9



-
    -
  • First, you need to find a reliable and safe website that offers Gujjubhai The Great in 300 Mb format. You can use a search engine like Bing to look for such websites. For example, you can type "Gujjubhai The Great Download 300 Mb Movie" in the search box and see what results come up.
  • -
  • Second, you need to check the reviews and ratings of the website before downloading anything. You can also look for comments from other users who have downloaded the movie from the same website. This will help you avoid any malware or viruses that might harm your device.
  • -
  • Third, you need to make sure that the website has a fast and stable download speed. You don't want to waste your time and data on a slow or interrupted download. You can use a tool like Speedtest to check the download speed of the website.
  • -
  • Fourth, you need to have enough storage space on your device to save the movie. You can check the available space on your device by going to Settings > Storage. You can also delete some unwanted files or apps to free up some space.
  • -
  • Fifth, you need to have a good video player that can play the movie in 300 Mb format. You can use VLC Media Player, which is a free and versatile media player that supports various formats and codecs. You can download it from its official website or from Google Play Store or App Store.
  • -
-

By following these steps, you can enjoy watching Gujjubhai The Great, a hilarious Gujarati comedy film, in 300 Mb on your device. You can also share it with your friends and family and have a good laugh together.

Gujjubhai The Great is not only a hilarious comedy film, but also a showcase of the talented cast and crew who worked hard to make it a success. Here are some of the people behind this film and their roles:

-
    -
  • Siddharth Randeria: He is the writer, producer and actor of Gujjubhai The Great. He plays the role of Hasmukh Gandhi, the protagonist of the film. He is also a well-known theatre artist who has written and acted in many popular Gujarati plays, including the Gujjubhai series.
  • -
  • Jimit Trivedi: He is the actor who plays the role of Bakul Buch, the loyal employee of Hasmukh Gandhi who helps him in his plan to stop his daughter's marriage. He is also a theatre and television actor who has appeared in shows like Taarak Mehta Ka Ooltah Chashmah and films like Bhool Bhulaiyaa and 102 Not Out.
  • -
  • Swati Shah: She is the actress who plays the role of Pramila Gandhi, the wife of Hasmukh Gandhi and the mother of Tanisha Gandhi. She is also a television actress who has appeared in shows like Saath Nibhaana Saathiya and Baa Bahoo Aur Baby.
  • -
  • Dipna Patel: She is the actress who plays the role of Tanisha Gandhi, the daughter of Hasmukh Gandhi and the love interest of Montu. She is also a model and dancer who has participated in various beauty pageants and dance shows.
  • -
  • Alekh Sangal: He is the actor who plays the role of Montu, the boyfriend of Tanisha Gandhi who pretends to know Sonia Kapoor. He is also a television actor who has appeared in shows like Pyaar Ka Dard Hai Meetha Meetha Pyaara Pyaara and Kuch Toh Log Kahenge.
  • -
  • Sunil Vishrani: He is the actor who plays the role of Inspector Aakash Jhala, a police officer who gets involved in the chaos caused by Hasmukh Gandhi's plan. He is also a theatre and television actor who has appeared in shows like Khichdi and Sarabhai vs Sarabhai.
  • -
  • Khatera Hakimi: She is the actress who plays the role of Sonia Kapoor, a superstar actress who unknowingly becomes a part of Hasmukh Gandhi's plan. She is also a model and dancer who has appeared in various music videos and commercials.
  • -
  • Dharmesh Vyas: He is the actor who plays the role of Bade Bhai, a don who gets involved in the chaos caused by Hasmukh Gandhi's plan. He is also a theatre and film actor who has appeared in films like Tuu To Gayo and Chhello Divas.
  • -
-

Gujjubhai The Great is directed by Ishaan Randeria, who is also the son of Siddharth Randeria. Parth Bharat Thakkar and Advait Nemlekar composed the music, Himanshu Dubey handled the cinematography, Tushar Parekh edited the film, and Abhishek Kharsani and Anvit Randeria led the art direction.

-

Gujjubhai The Great is a film that will make you laugh out loud with its witty dialogues, hilarious situations and amazing performances. It celebrates Gujarati culture and humour with a touch of romance and drama, and it is one you should not miss if you are looking for some entertainment and fun.

-

d5da3c52bf
-
-
\ No newline at end of file diff --git a/spaces/gradio-client-demos/comparing-captioning-models/README.md b/spaces/gradio-client-demos/comparing-captioning-models/README.md deleted file mode 100644 index 2c7b6de73fa3a62afe0d0895177cbfe7e1ac0091..0000000000000000000000000000000000000000 --- a/spaces/gradio-client-demos/comparing-captioning-models/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Comparing Captioning Models -emoji: 🔥 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false -duplicated_from: nielsr/comparing-captioning-models ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/gradio/HuBERT/fairseq/modules/dynamicconv_layer/cuda_function_gen.py b/spaces/gradio/HuBERT/fairseq/modules/dynamicconv_layer/cuda_function_gen.py deleted file mode 100644 index 9304f99eb8169a614f39babc830c84cac80e080b..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/modules/dynamicconv_layer/cuda_function_gen.py +++ /dev/null @@ -1,223 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - - -def gen_forward(): - - kernels = [3, 5, 7, 15, 31, 63, 127, 255] - blocks = [32, 64, 128, 256] - - head = """ -/** - * Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */ - -#include "dynamicconv_cuda.cuh" - -std::vector dynamicconv_cuda_forward(at::Tensor input, at::Tensor weight, int padding_l) { - - at::DeviceGuard g(input.device()); - const auto minibatch = input.size(0); - const auto numFeatures = input.size(1); - const auto sequenceLength = input.size(2); - - const auto numHeads = weight.size(1); - const auto filterSize = weight.size(2); - - const auto numFiltersInBlock = numFeatures / numHeads; - const dim3 blocks(minibatch, numFeatures); - - auto output = at::zeros_like(input); - auto stream = at::cuda::getCurrentCUDAStream(); -""" - - switch = """ - switch(filterSize) { -""" - - case_k = """ - case {k}: -""" - - main_block = """ - if (padding_l == {pad}) {{ - AT_DISPATCH_FLOATING_TYPES_AND_HALF(input.scalar_type(), "dynamicconv_forward", ([&] {{ - dynamicconv_forward_kernel<{k}, {b_size}, {pad}, scalar_t> - <<>>( - input.data(), - weight.data(), - minibatch, - sequenceLength, - numFeatures, - numFiltersInBlock, - numHeads, - output.data()); - }})); - }} else -""" - - bad_padding = """ - { - std::cout << "WARNING: Unsupported padding size - skipping forward pass" << std::endl; - } - break;\n -""" - - end = """ - default: - std::cout << "WARNING: Unsupported filter length passed - skipping forward pass" << std::endl; - } - - return {output}; -} -""" - - with open("dynamicconv_cuda_forward.cu", "w") as forward: - forward.write(head) - forward.write(switch) - for k in kernels: - b_size = 32 - for b in blocks: - if b > k: - b_size = b - break - forward.write(case_k.format(k=k)) - for pad in [k // 2, k - 1]: - forward.write(main_block.format(k=k, b_size=b_size, pad=pad)) - forward.write(bad_padding) - forward.write(end) - - -def gen_backward(): - - kernels = [3, 5, 7, 15, 31, 63, 127, 255] - thresh = [512, 512, 512, 512, 512, 380, 256, 256] - min_block = [64, 64, 64, 64, 64, 64, 128, 256] - seqs = [32 * x for x in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16]] - - head = """ -/** - * 
Copyright (c) Facebook, Inc. and its affiliates. - * - * This source code is licensed under the MIT license found in the - * LICENSE file in the root directory of this source tree. - */ - -#include "dynamicconv_cuda.cuh" - -std::vector dynamicconv_cuda_backward(at::Tensor gradOutput, int padding_l, at::Tensor input, at::Tensor weight) { - - at::DeviceGuard g(input.device()); - const auto minibatch = input.size(0); - const auto numFeatures = input.size(1); - const auto sequenceLength = input.size(2); - - const auto numHeads = weight.size(1); - const auto filterSize = weight.size(2); - - const auto numFiltersInBlock = numFeatures / numHeads; - auto numChunks = 1; - - auto gradInput = at::zeros_like(input); - auto gradWeight = at::zeros_like(weight); - auto stream = at::cuda::getCurrentCUDAStream(); - - dim3 blocks(minibatch, numHeads, numChunks); -""" - - sequence_if = """ - if (sequenceLength < {seq}) {{ - switch(filterSize) {{ -""" - - case_k = """ - case {k}: -""" - - chunks_reset = """ - numChunks = int(ceilf(sequenceLength/float({b_size}))); - blocks = dim3(minibatch, numHeads, numChunks); -""" - - main_block = """ - if (padding_l == {p}) {{ - AT_DISPATCH_FLOATING_TYPES_AND_HALF(gradOutput.scalar_type(), "dynamicconv_backward", ([&] {{ - dynamicconv_backward_kernel<{k}, {b_size}, {p}, scalar_t> - <<>>( - gradOutput.data(), - input.data(), - weight.data(), - minibatch, - sequenceLength, - numFeatures, - numFiltersInBlock, - numHeads, - gradWeight.data(), - gradInput.data()); - }})); - }} else -""" - - bad_padding = """ - { - std::cout << "WARNING: Unsupported padding size - skipping backward pass" << std::endl; - } - break;\n -""" - - bad_filter = """ - default: - std::cout << "WARNING: Unsupported filter length passed - skipping backward pass" << std::endl; - } -""" - - con_else = """ - } else -""" - - final_else = """ - { - switch(filterSize) { -""" - - last_return = """ - } - return {gradInput, gradWeight}; -} -""" - - with open("dynamicconv_cuda_backward.cu", "w") as backward: - backward.write(head) - for seq in seqs: - backward.write(sequence_if.format(seq=seq)) - for k, t, m in zip(kernels, thresh, min_block): - backward.write(case_k.format(k=k)) - if seq <= t: - b_size = seq - else: - b_size = m - backward.write(chunks_reset.format(b_size=b_size)) - for p in [k // 2, k - 1]: - backward.write(main_block.format(k=k, b_size=b_size, p=p)) - backward.write(bad_padding) - backward.write(bad_filter) - backward.write(con_else) - backward.write(final_else) - for k, m in zip(kernels, min_block): - backward.write(case_k.format(k=k)) - backward.write(chunks_reset.format(b_size=m)) - for p in [k // 2, k - 1]: - backward.write(main_block.format(k=k, b_size=m, p=p)) - backward.write(bad_padding) - backward.write(bad_filter) - backward.write(last_return) - - -if __name__ == "__main__": - gen_forward() - gen_backward() diff --git a/spaces/gradio/HuBERT/tests/test_file_io.py b/spaces/gradio/HuBERT/tests/test_file_io.py deleted file mode 100644 index 425812bf1672489093941e5fa09f9da3171559ee..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/tests/test_file_io.py +++ /dev/null @@ -1,58 +0,0 @@ -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import os -import shutil -import sys -import tempfile -import unittest -from typing import Optional -from unittest.mock import MagicMock - - -class TestFileIO(unittest.TestCase): - - _tmpdir: Optional[str] = None - _tmpfile: Optional[str] = None - _tmpfile_contents = "Hello, World" - - @classmethod - def setUpClass(cls) -> None: - cls._tmpdir = tempfile.mkdtemp() - with open(os.path.join(cls._tmpdir, "test.txt"), "w") as f: - cls._tmpfile = f.name - f.write(cls._tmpfile_contents) - f.flush() - - @classmethod - def tearDownClass(cls) -> None: - # Cleanup temp working dir. - if cls._tmpdir is not None: - shutil.rmtree(cls._tmpdir) # type: ignore - - def test_file_io(self): - from fairseq.file_io import PathManager - - with PathManager.open(os.path.join(self._tmpdir, "test.txt"), "r") as f: - s = f.read() - self.assertEqual(s, self._tmpfile_contents) - - def test_file_io_oss(self): - # Mock iopath to simulate oss environment. - sys.modules["iopath"] = MagicMock() - from fairseq.file_io import PathManager - - with PathManager.open(os.path.join(self._tmpdir, "test.txt"), "r") as f: - s = f.read() - self.assertEqual(s, self._tmpfile_contents) - - def test_file_io_async(self): - # ioPath `PathManager` is initialized after the first `opena` call. - try: - from fairseq.file_io import IOPathManager, PathManager - _asyncfile = os.path.join(self._tmpdir, "async.txt") - f = PathManager.opena(_asyncfile, "wb") - f.close() - - finally: - self.assertTrue(PathManager.async_close()) diff --git a/spaces/guetLzy/Real-ESRGAN-Demo/docs/anime_video_model.md b/spaces/guetLzy/Real-ESRGAN-Demo/docs/anime_video_model.md deleted file mode 100644 index 0ad5c85804c1f8636c3720a652b40bbd9df0fe2e..0000000000000000000000000000000000000000 --- a/spaces/guetLzy/Real-ESRGAN-Demo/docs/anime_video_model.md +++ /dev/null @@ -1,136 +0,0 @@ -# Anime Video Models - -:white_check_mark: We add small models that are optimized for anime videos :-)
-More comparisons can be found in [anime_comparisons.md](anime_comparisons.md) - -- [How to Use](#how-to-use) -- [PyTorch Inference](#pytorch-inference) -- [ncnn Executable File](#ncnn-executable-file) - - [Step 1: Use ffmpeg to extract frames from video](#step-1-use-ffmpeg-to-extract-frames-from-video) - - [Step 2: Inference with Real-ESRGAN executable file](#step-2-inference-with-real-esrgan-executable-file) - - [Step 3: Merge the enhanced frames back into a video](#step-3-merge-the-enhanced-frames-back-into-a-video) -- [More Demos](#more-demos) - -| Models | Scale | Description | -| ---------------------------------------------------------------------------------------------------------------------------------- | :---- | :----------------------------- | -| [realesr-animevideov3](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth) | X4 1 | Anime video model with XS size | - -Note:
-1 This model can also be used for X1, X2, X3. - ---- - -The following are some demos (best view in the full screen mode). - - - - - - - -## How to Use - -### PyTorch Inference - -```bash -# download model -wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-animevideov3.pth -P weights -# single gpu and single process inference -CUDA_VISIBLE_DEVICES=0 python inference_realesrgan_video.py -i inputs/video/onepiece_demo.mp4 -n realesr-animevideov3 -s 2 --suffix outx2 -# single gpu and multi process inference (you can use multi-processing to improve GPU utilization) -CUDA_VISIBLE_DEVICES=0 python inference_realesrgan_video.py -i inputs/video/onepiece_demo.mp4 -n realesr-animevideov3 -s 2 --suffix outx2 --num_process_per_gpu 2 -# multi gpu and multi process inference -CUDA_VISIBLE_DEVICES=0,1,2,3 python inference_realesrgan_video.py -i inputs/video/onepiece_demo.mp4 -n realesr-animevideov3 -s 2 --suffix outx2 --num_process_per_gpu 2 -``` - -```console -Usage: ---num_process_per_gpu The total number of process is num_gpu * num_process_per_gpu. The bottleneck of - the program lies on the IO, so the GPUs are usually not fully utilized. To alleviate - this issue, you can use multi-processing by setting this parameter. As long as it - does not exceed the CUDA memory ---extract_frame_first If you encounter ffmpeg error when using multi-processing, you can turn this option on. -``` - -### NCNN Executable File - -#### Step 1: Use ffmpeg to extract frames from video - -```bash -ffmpeg -i onepiece_demo.mp4 -qscale:v 1 -qmin 1 -qmax 1 -vsync 0 tmp_frames/frame%08d.png -``` - -- Remember to create the folder `tmp_frames` ahead - -#### Step 2: Inference with Real-ESRGAN executable file - -1. Download the latest portable [Windows](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-windows.zip) / [Linux](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-ubuntu.zip) / [MacOS](https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesrgan-ncnn-vulkan-20220424-macos.zip) **executable files for Intel/AMD/Nvidia GPU** - -1. Taking the Windows as example, run: - - ```bash - ./realesrgan-ncnn-vulkan.exe -i tmp_frames -o out_frames -n realesr-animevideov3 -s 2 -f jpg - ``` - - - Remember to create the folder `out_frames` ahead - -#### Step 3: Merge the enhanced frames back into a video - -1. First obtain fps from input videos by - - ```bash - ffmpeg -i onepiece_demo.mp4 - ``` - - ```console - Usage: - -i input video path - ``` - - You will get the output similar to the following screenshot. - -
-    (screenshot of the ffmpeg console output, showing the input video's frame rate, omitted)
- -2. Merge frames - - ```bash - ffmpeg -r 23.98 -i out_frames/frame%08d.jpg -c:v libx264 -r 23.98 -pix_fmt yuv420p output.mp4 - ``` - - ```console - Usage: - -i input video path - -c:v video encoder (usually we use libx264) - -r fps, remember to modify it to meet your needs - -pix_fmt pixel format in video - ``` - - If you also want to copy audio from the input videos, run: - - ```bash - ffmpeg -r 23.98 -i out_frames/frame%08d.jpg -i onepiece_demo.mp4 -map 0:v:0 -map 1:a:0 -c:a copy -c:v libx264 -r 23.98 -pix_fmt yuv420p output_w_audio.mp4 - ``` - - ```console - Usage: - -i input video path, here we use two input streams - -c:v video encoder (usually we use libx264) - -r fps, remember to modify it to meet your needs - -pix_fmt pixel format in video - ``` - -## More Demos - -- Input video for One Piece: - - - -- Out video for One Piece - - - -**More comparisons** - - diff --git a/spaces/h2oai/wave-tour/examples/choice_group_inline.py b/spaces/h2oai/wave-tour/examples/choice_group_inline.py deleted file mode 100644 index a5d87e842910521da132e4034fe7c4e26b3ce637..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/choice_group_inline.py +++ /dev/null @@ -1,32 +0,0 @@ -# Form / Choice Group / Inline -# Use #choice groups to let users select one option from two or more choices and inline to present the choices horizontally. -# #form #choice_group #inline -# --- -from h2o_wave import main, app, Q, ui - -choices = [ - ui.choice('A', 'Option A'), - ui.choice('B', 'Option B'), - ui.choice('C', 'Option C', disabled=True), - ui.choice('D', 'Option D'), - ui.choice('E', 'Option E'), - ui.choice('F', 'Option F'), - ui.choice('G', 'Option H'), - ui.choice('I', 'Option I'), -] - - -@app('/demo') -async def serve(q: Q): - if q.args.show_inputs: - q.page['example'].items = [ - ui.text(f'selected={q.args.choice_group}'), - ui.button(name='show_form', label='Back', primary=True), - ] - else: - q.page['example'] = ui.form_card(box='1 1 4 10', items=[ - ui.choice_group(name='choice_group', inline=True, label='Pick one', - value='B', required=True, choices=choices), - ui.button(name='show_inputs', label='Submit', primary=True), - ]) - await q.page.save() diff --git a/spaces/haoqi7/research/lrt/clustering/models/keyBartPlus.py b/spaces/haoqi7/research/lrt/clustering/models/keyBartPlus.py deleted file mode 100644 index 7b74a5726fcbde6f292a8e4e33829a35b0ce1e90..0000000000000000000000000000000000000000 --- a/spaces/haoqi7/research/lrt/clustering/models/keyBartPlus.py +++ /dev/null @@ -1,411 +0,0 @@ -from typing import Optional, List, Union, Tuple -import torch -import torch.nn as nn -import random -from torch.nn import CrossEntropyLoss - -from transformers.utils import ( -add_start_docstrings_to_model_forward, -add_end_docstrings, -replace_return_docstrings -) - -from transformers import AutoModelForSeq2SeqLM -from transformers.models.bart.modeling_bart import ( - BartForConditionalGeneration, - _expand_mask, logger, - shift_tokens_right, - BartPretrainedModel, - BART_INPUTS_DOCSTRING, - _CONFIG_FOR_DOC, - BART_GENERATION_EXAMPLE, - BartModel, - BartDecoder - -) -from .adapter import Adapter -from transformers.modeling_outputs import ( - BaseModelOutputWithPastAndCrossAttentions, - Seq2SeqModelOutput, - BaseModelOutput, - Seq2SeqLMOutput -) - - -class KeyBartAdapter(BartForConditionalGeneration): - def __init__(self,adapter_hid_dim:int) -> None: - keyBart = AutoModelForSeq2SeqLM.from_pretrained("bloomberg/KeyBART") - self.__fix_weights__(keyBart) - - 
super().__init__(keyBart.model.config) - self.lm_head = keyBart.lm_head - self.model = BartPlus(keyBart, adapter_hid_dim) - self.register_buffer("final_logits_bias", torch.zeros((1, self.model.shared.num_embeddings))) - - - def __fix_weights__(self,keyBart:BartForConditionalGeneration): - for i in keyBart.model.parameters(): - i.requires_grad = False - for i in keyBart.lm_head.parameters(): - i.requires_grad = False - - @add_start_docstrings_to_model_forward(BART_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=Seq2SeqLMOutput, config_class=_CONFIG_FOR_DOC) - @add_end_docstrings(BART_GENERATION_EXAMPLE) - def forward( - self, - input_ids: torch.LongTensor = None, - attention_mask: Optional[torch.Tensor] = None, - decoder_input_ids: Optional[torch.LongTensor] = None, - decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, - encoder_outputs: Optional[List[torch.FloatTensor]] = None, - past_key_values: Optional[List[torch.FloatTensor]] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - decoder_inputs_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, Seq2SeqLMOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., - config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored - (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`. 
- Returns: - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if labels is not None: - if use_cache: - logger.warning("The `use_cache` argument is changed to `False` since `labels` is provided.") - use_cache = False - if decoder_input_ids is None and decoder_inputs_embeds is None: - decoder_input_ids = shift_tokens_right( - labels, self.config.pad_token_id, self.config.decoder_start_token_id - ) - - outputs = self.model( - input_ids, - attention_mask=attention_mask, - decoder_input_ids=decoder_input_ids, - encoder_outputs=encoder_outputs, - decoder_attention_mask=decoder_attention_mask, - head_mask=head_mask, - decoder_head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, - past_key_values=past_key_values, - inputs_embeds=inputs_embeds, - decoder_inputs_embeds=decoder_inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - lm_logits = self.lm_head(outputs[0]) + self.final_logits_bias - - masked_lm_loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1)) - - if not return_dict: - output = (lm_logits,) + outputs[1:] - return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output - - return Seq2SeqLMOutput( - loss=masked_lm_loss, - logits=lm_logits, - past_key_values=outputs.past_key_values, - decoder_hidden_states=outputs.decoder_hidden_states, - decoder_attentions=outputs.decoder_attentions, - cross_attentions=outputs.cross_attentions, - encoder_last_hidden_state=outputs.encoder_last_hidden_state, - encoder_hidden_states=outputs.encoder_hidden_states, - encoder_attentions=outputs.encoder_attentions, - ) - - - -class BartDecoderPlus(BartDecoder): - def __init__(self,keyBart:BartForConditionalGeneration,adapter_hid_dim: int) -> None: - super().__init__(keyBart.get_decoder().config) - self.decoder = keyBart.model.decoder - self.adapters = nn.ModuleList([Adapter(self.decoder.config.d_model,adapter_hid_dim) for _ in range(len(self.decoder.layers))]) - self.config = self.decoder.config - self.dropout = self.decoder.dropout - self.layerdrop = self.decoder.layerdrop - self.padding_idx = self.decoder.padding_idx - self.max_target_positions = self.decoder.max_target_positions - self.embed_scale = self.decoder.embed_scale - self.embed_tokens = self.decoder.embed_tokens - self.embed_positions = self.decoder.embed_positions - self.layers = self.decoder.layers - self.layernorm_embedding = self.decoder.layernorm_embedding - self.gradient_checkpointing = self.decoder.gradient_checkpointing - - - def forward( - self, - input_ids: torch.LongTensor = None, - attention_mask: Optional[torch.Tensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, - past_key_values: Optional[List[torch.FloatTensor]] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPastAndCrossAttentions]: - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if 
output_hidden_states is not None else self.config.output_hidden_states - ) - use_cache = use_cache if use_cache is not None else self.config.use_cache - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - # retrieve input_ids and inputs_embeds - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time") - elif input_ids is not None: - input = input_ids - input_shape = input.shape - input_ids = input_ids.view(-1, input_shape[-1]) - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - input = inputs_embeds[:, :, -1] - else: - raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds") - - # past_key_values_length - past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0 - - if inputs_embeds is None: - inputs_embeds = self.decoder.embed_tokens(input) * self.decoder.embed_scale - - attention_mask = self.decoder._prepare_decoder_attention_mask( - attention_mask, input_shape, inputs_embeds, past_key_values_length - ) - - # expand encoder attention mask - if encoder_hidden_states is not None and encoder_attention_mask is not None: - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] - encoder_attention_mask = _expand_mask(encoder_attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]) - - # embed positions - positions = self.decoder.embed_positions(input, past_key_values_length) - - hidden_states = inputs_embeds + positions - hidden_states = self.decoder.layernorm_embedding(hidden_states) - - hidden_states = nn.functional.dropout(hidden_states, p=self.decoder.dropout, training=self.decoder.training) - - # decoder layers - all_hidden_states = () if output_hidden_states else None - all_self_attns = () if output_attentions else None - all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None - next_decoder_cache = () if use_cache else None - - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - if attn_mask.size()[0] != (len(self.decoder.layers)): - raise ValueError( - f"The `{mask_name}` should be specified for {len(self.decoder.layers)} layers, but it is for" - f" {head_mask.size()[0]}." - ) - - for idx, decoder_layer in enumerate(self.decoder.layers): - # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description) - if output_hidden_states: - all_hidden_states += (hidden_states,) - dropout_probability = random.uniform(0, 1) - if self.decoder.training and (dropout_probability < self.decoder.layerdrop): - continue - - past_key_value = past_key_values[idx] if past_key_values is not None else None - - if self.decoder.gradient_checkpointing and self.decoder.training: - - if use_cache: - logger.warning( - "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." 
- ) - use_cache = False - - def create_custom_forward(module): - def custom_forward(*inputs): - # None for past_key_value - return module(*inputs, output_attentions, use_cache) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(decoder_layer), - hidden_states, - attention_mask, - encoder_hidden_states, - encoder_attention_mask, - head_mask[idx] if head_mask is not None else None, - cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None, - None, - ) - else: - - layer_outputs = decoder_layer( - hidden_states, - attention_mask=attention_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=( - cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None - ), - past_key_value=past_key_value, - output_attentions=output_attentions, - use_cache=use_cache, - ) - hidden_states = layer_outputs[0] - - ######################### new ################################# - hidden_states = self.adapters[idx](hidden_states) - ######################### new ################################# - - if use_cache: - next_decoder_cache += (layer_outputs[3 if output_attentions else 1],) - - if output_attentions: - all_self_attns += (layer_outputs[1],) - - if encoder_hidden_states is not None: - all_cross_attentions += (layer_outputs[2],) - - # add hidden states from the last decoder layer - if output_hidden_states: - all_hidden_states += (hidden_states,) - - next_cache = next_decoder_cache if use_cache else None - if not return_dict: - return tuple( - v - for v in [hidden_states, next_cache, all_hidden_states, all_self_attns, all_cross_attentions] - if v is not None - ) - return BaseModelOutputWithPastAndCrossAttentions( - last_hidden_state=hidden_states, - past_key_values=next_cache, - hidden_states=all_hidden_states, - attentions=all_self_attns, - cross_attentions=all_cross_attentions, - ) - -class BartPlus(BartModel): - def __init__(self,keyBart: BartForConditionalGeneration, adapter_hid_dim: int ) -> None: - super().__init__(keyBart.model.config) - self.config = keyBart.model.config - - # self.shared = nn.Embedding(vocab_size, config.d_model, padding_idx) - self.shared = keyBart.model.shared - - #self.encoder = BartEncoder(config, self.shared) - self.encoder = keyBart.model.encoder - - #self.decoder = BartDecoder(config, self.shared) - #self.decoder = keyBart.model.decoder - self.decoder = BartDecoderPlus(keyBart,adapter_hid_dim=adapter_hid_dim) - - def forward( - self, - input_ids: torch.LongTensor = None, - attention_mask: Optional[torch.Tensor] = None, - decoder_input_ids: Optional[torch.LongTensor] = None, - decoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - decoder_head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, - encoder_outputs: Optional[List[torch.FloatTensor]] = None, - past_key_values: Optional[List[torch.FloatTensor]] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - decoder_inputs_embeds: Optional[torch.FloatTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, Seq2SeqModelOutput]: - - # different to other models, Bart automatically creates decoder_input_ids from - # input_ids if no decoder_input_ids are provided - if 
decoder_input_ids is None and decoder_inputs_embeds is None: - if input_ids is None: - raise ValueError( - "If no `decoder_input_ids` or `decoder_inputs_embeds` are " - "passed, `input_ids` cannot be `None`. Please pass either " - "`input_ids` or `decoder_input_ids` or `decoder_inputs_embeds`." - ) - - decoder_input_ids = shift_tokens_right( - input_ids, self.config.pad_token_id, self.config.decoder_start_token_id - ) - - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - use_cache = use_cache if use_cache is not None else self.config.use_cache - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if encoder_outputs is None: - encoder_outputs = self.encoder( - input_ids=input_ids, - attention_mask=attention_mask, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - # If the user passed a tuple for encoder_outputs, we wrap it in a BaseModelOutput when return_dict=True - elif return_dict and not isinstance(encoder_outputs, BaseModelOutput): - encoder_outputs = BaseModelOutput( - last_hidden_state=encoder_outputs[0], - hidden_states=encoder_outputs[1] if len(encoder_outputs) > 1 else None, - attentions=encoder_outputs[2] if len(encoder_outputs) > 2 else None, - ) - - # decoder outputs consists of (dec_features, past_key_value, dec_hidden, dec_attn) - decoder_outputs = self.decoder( - input_ids=decoder_input_ids, - attention_mask=decoder_attention_mask, - encoder_hidden_states=encoder_outputs[0], - encoder_attention_mask=attention_mask, - head_mask=decoder_head_mask, - cross_attn_head_mask=cross_attn_head_mask, - past_key_values=past_key_values, - inputs_embeds=decoder_inputs_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - if not return_dict: - return decoder_outputs + encoder_outputs - - return Seq2SeqModelOutput( - last_hidden_state=decoder_outputs.last_hidden_state, - past_key_values=decoder_outputs.past_key_values, - decoder_hidden_states=decoder_outputs.hidden_states, - decoder_attentions=decoder_outputs.attentions, - cross_attentions=decoder_outputs.cross_attentions, - encoder_last_hidden_state=encoder_outputs.last_hidden_state, - encoder_hidden_states=encoder_outputs.hidden_states, - encoder_attentions=encoder_outputs.attentions, - ) - diff --git a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/layers/deform_pool.py b/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/layers/deform_pool.py deleted file mode 100644 index 7fb3f2e341a34f5747e7bfa9b2d858d74492697d..0000000000000000000000000000000000000000 --- a/spaces/haotiz/glip-zeroshot-demo/maskrcnn_benchmark/layers/deform_pool.py +++ /dev/null @@ -1,423 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .deform_conv import DeformConv2d - -def add_conv(in_ch, out_ch, ksize, stride, leaky=True): - """ - Add a conv2d / batchnorm / leaky ReLU block. - Args: - in_ch (int): number of input channels of the convolution layer. - out_ch (int): number of output channels of the convolution layer. - ksize (int): kernel size of the convolution layer. - stride (int): stride of the convolution layer. 
- Returns: - stage (Sequential) : Sequential layers composing a convolution block. - """ - stage = nn.Sequential() - pad = (ksize - 1) // 2 - stage.add_module('conv', nn.Conv2d(in_channels=in_ch, - out_channels=out_ch, kernel_size=ksize, stride=stride, - padding=pad, bias=False)) - stage.add_module('batch_norm', nn.BatchNorm2d(out_ch)) - if leaky: - stage.add_module('leaky', nn.LeakyReLU(0.1)) - else: - stage.add_module('relu6', nn.ReLU6(inplace=True)) - return stage - - -class upsample(nn.Module): - __constants__ = ['size', 'scale_factor', 'mode', 'align_corners', 'name'] - - def __init__(self, size=None, scale_factor=None, mode='nearest', align_corners=None): - super(upsample, self).__init__() - self.name = type(self).__name__ - self.size = size - self.scale_factor = scale_factor - self.mode = mode - self.align_corners = align_corners - - def forward(self, input): - return F.interpolate(input, self.size, self.scale_factor, self.mode, self.align_corners) - - def extra_repr(self): - if self.scale_factor is not None: - info = 'scale_factor=' + str(self.scale_factor) - else: - info = 'size=' + str(self.size) - info += ', mode=' + self.mode - return info - -class SPPLayer(nn.Module): - def __init__(self): - super(SPPLayer, self).__init__() - - def forward(self, x): - x_1 = x - x_2 = F.max_pool2d(x, 5, stride=1, padding=2) - x_3 = F.max_pool2d(x, 9, stride=1, padding=4) - x_4 = F.max_pool2d(x, 13, stride=1, padding=6) - out = torch.cat((x_1, x_2, x_3, x_4),dim=1) - return out - -class DropBlock(nn.Module): - def __init__(self, block_size=7, keep_prob=0.9): - super(DropBlock, self).__init__() - self.block_size = block_size - self.keep_prob = keep_prob - self.gamma = None - self.kernel_size = (block_size, block_size) - self.stride = (1, 1) - self.padding = (block_size//2, block_size//2) - - def reset(self, block_size, keep_prob): - self.block_size = block_size - self.keep_prob = keep_prob - self.gamma = None - self.kernel_size = (block_size, block_size) - self.stride = (1, 1) - self.padding = (block_size//2, block_size//2) - - def calculate_gamma(self, x): - return (1-self.keep_prob) * x.shape[-1]**2/ \ - (self.block_size**2 * (x.shape[-1] - self.block_size + 1)**2) - - def forward(self, x): - if (not self.training or self.keep_prob==1): #set keep_prob=1 to turn off dropblock - return x - if self.gamma is None: - self.gamma = self.calculate_gamma(x) - if x.type() == 'torch.cuda.HalfTensor': #TODO: not fully support for FP16 now - FP16 = True - x = x.float() - else: - FP16 = False - p = torch.ones_like(x) * (self.gamma) - mask = 1 - torch.nn.functional.max_pool2d(torch.bernoulli(p), - self.kernel_size, - self.stride, - self.padding) - - out = mask * x * (mask.numel()/mask.sum()) - - if FP16: - out = out.half() - return out - -class resblock(nn.Module): - """ - Sequential residual blocks each of which consists of \ - two convolution layers. - Args: - ch (int): number of input and output channels. - nblocks (int): number of residual blocks. - shortcut (bool): if True, residual tensor addition is enabled. 
- """ - def __init__(self, ch, nblocks=1, shortcut=True): - - super().__init__() - self.shortcut = shortcut - self.module_list = nn.ModuleList() - for i in range(nblocks): - resblock_one = nn.ModuleList() - resblock_one.append(add_conv(ch, ch//2, 1, 1)) - resblock_one.append(add_conv(ch//2, ch, 3, 1)) - self.module_list.append(resblock_one) - - def forward(self, x): - for module in self.module_list: - h = x - for res in module: - h = res(h) - x = x + h if self.shortcut else h - return x - - -class RFBblock(nn.Module): - def __init__(self,in_ch,residual=False): - super(RFBblock, self).__init__() - inter_c = in_ch // 4 - self.branch_0 = nn.Sequential( - nn.Conv2d(in_channels=in_ch, out_channels=inter_c, kernel_size=1, stride=1, padding=0), - ) - self.branch_1 = nn.Sequential( - nn.Conv2d(in_channels=in_ch, out_channels=inter_c, kernel_size=1, stride=1, padding=0), - nn.Conv2d(in_channels=inter_c, out_channels=inter_c, kernel_size=3, stride=1, padding=1) - ) - self.branch_2 = nn.Sequential( - nn.Conv2d(in_channels=in_ch, out_channels=inter_c, kernel_size=1, stride=1, padding=0), - nn.Conv2d(in_channels=inter_c, out_channels=inter_c, kernel_size=3, stride=1, padding=1), - nn.Conv2d(in_channels=inter_c, out_channels=inter_c, kernel_size=3, stride=1, dilation=2, padding=2) - ) - self.branch_3 = nn.Sequential( - nn.Conv2d(in_channels=in_ch, out_channels=inter_c, kernel_size=1, stride=1, padding=0), - nn.Conv2d(in_channels=inter_c, out_channels=inter_c, kernel_size=5, stride=1, padding=2), - nn.Conv2d(in_channels=inter_c, out_channels=inter_c, kernel_size=3, stride=1, dilation=3, padding=3) - ) - self.residual= residual - - def forward(self,x): - x_0 = self.branch_0(x) - x_1 = self.branch_1(x) - x_2 = self.branch_2(x) - x_3 = self.branch_3(x) - out = torch.cat((x_0,x_1,x_2,x_3),1) - if self.residual: - out +=x - return out - - -class FeatureAdaption(nn.Module): - def __init__(self, in_ch, out_ch, n_anchors, rfb=False, sep=False): - super(FeatureAdaption, self).__init__() - if sep: - self.sep=True - else: - self.sep=False - self.conv_offset = nn.Conv2d(in_channels=2*n_anchors, - out_channels=2*9*n_anchors, groups = n_anchors, kernel_size=1,stride=1,padding=0) - self.dconv = DeformConv2d(in_channels=in_ch, out_channels=out_ch, kernel_size=3, stride=1, - padding=1, deformable_groups=n_anchors) - self.rfb=None - if rfb: - self.rfb = RFBblock(out_ch) - - def forward(self, input, wh_pred): - #The RFB block is added behind FeatureAdaption - #For mobilenet, we currently don't support rfb and FeatureAdaption - if self.sep: - return input - if self.rfb is not None: - input = self.rfb(input) - wh_pred_new = wh_pred.detach() - offset = self.conv_offset(wh_pred_new) - out = self.dconv(input, offset) - return out - - -class ASFFmobile(nn.Module): - def __init__(self, level, rfb=False, vis=False): - super(ASFFmobile, self).__init__() - self.level = level - self.dim = [512, 256, 128] - self.inter_dim = self.dim[self.level] - if level==0: - self.stride_level_1 = add_conv(256, self.inter_dim, 3, 2, leaky=False) - self.stride_level_2 = add_conv(128, self.inter_dim, 3, 2, leaky=False) - self.expand = add_conv(self.inter_dim, 1024, 3, 1, leaky=False) - elif level==1: - self.compress_level_0 = add_conv(512, self.inter_dim, 1, 1, leaky=False) - self.stride_level_2 = add_conv(128, self.inter_dim, 3, 2, leaky=False) - self.expand = add_conv(self.inter_dim, 512, 3, 1, leaky=False) - elif level==2: - self.compress_level_0 = add_conv(512, self.inter_dim, 1, 1, leaky=False) - self.compress_level_1 = add_conv(256, 
self.inter_dim, 1, 1, leaky=False) - self.expand = add_conv(self.inter_dim, 256, 3, 1,leaky=False) - - compress_c = 8 if rfb else 16 #when adding rfb, we use half number of channels to save memory - - self.weight_level_0 = add_conv(self.inter_dim, compress_c, 1, 1, leaky=False) - self.weight_level_1 = add_conv(self.inter_dim, compress_c, 1, 1, leaky=False) - self.weight_level_2 = add_conv(self.inter_dim, compress_c, 1, 1, leaky=False) - - self.weight_levels = nn.Conv2d(compress_c*3, 3, kernel_size=1, stride=1, padding=0) - self.vis= vis - - - def forward(self, x_level_0, x_level_1, x_level_2): - if self.level==0: - level_0_resized = x_level_0 - level_1_resized = self.stride_level_1(x_level_1) - - level_2_downsampled_inter =F.max_pool2d(x_level_2, 3, stride=2, padding=1) - level_2_resized = self.stride_level_2(level_2_downsampled_inter) - - elif self.level==1: - level_0_compressed = self.compress_level_0(x_level_0) - level_0_resized =F.interpolate(level_0_compressed, scale_factor=2, mode='nearest') - level_1_resized =x_level_1 - level_2_resized =self.stride_level_2(x_level_2) - elif self.level==2: - level_0_compressed = self.compress_level_0(x_level_0) - level_0_resized =F.interpolate(level_0_compressed, scale_factor=4, mode='nearest') - level_1_compressed = self.compress_level_1(x_level_1) - level_1_resized =F.interpolate(level_1_compressed, scale_factor=2, mode='nearest') - level_2_resized =x_level_2 - - level_0_weight_v = self.weight_level_0(level_0_resized) - level_1_weight_v = self.weight_level_1(level_1_resized) - level_2_weight_v = self.weight_level_2(level_2_resized) - levels_weight_v = torch.cat((level_0_weight_v, level_1_weight_v, level_2_weight_v),1) - levels_weight = self.weight_levels(levels_weight_v) - levels_weight = F.softmax(levels_weight, dim=1) - - fused_out_reduced = level_0_resized * levels_weight[:,0:1,:,:]+ \ - level_1_resized * levels_weight[:,1:2,:,:]+ \ - level_2_resized * levels_weight[:,2:,:,:] - - out = self.expand(fused_out_reduced) - - if self.vis: - return out, levels_weight, fused_out_reduced.sum(dim=1) - else: - return out - - -class ASFF(nn.Module): - def __init__(self, level, rfb=False, vis=False): - super(ASFF, self).__init__() - self.level = level - self.dim = [512, 256, 256] - self.inter_dim = self.dim[self.level] - if level==0: - self.stride_level_1 = add_conv(256, self.inter_dim, 3, 2) - self.stride_level_2 = add_conv(256, self.inter_dim, 3, 2) - self.expand = add_conv(self.inter_dim, 1024, 3, 1) - elif level==1: - self.compress_level_0 = add_conv(512, self.inter_dim, 1, 1) - self.stride_level_2 = add_conv(256, self.inter_dim, 3, 2) - self.expand = add_conv(self.inter_dim, 512, 3, 1) - elif level==2: - self.compress_level_0 = add_conv(512, self.inter_dim, 1, 1) - self.expand = add_conv(self.inter_dim, 256, 3, 1) - - compress_c = 8 if rfb else 16 #when adding rfb, we use half number of channels to save memory - - self.weight_level_0 = add_conv(self.inter_dim, compress_c, 1, 1) - self.weight_level_1 = add_conv(self.inter_dim, compress_c, 1, 1) - self.weight_level_2 = add_conv(self.inter_dim, compress_c, 1, 1) - - self.weight_levels = nn.Conv2d(compress_c*3, 3, kernel_size=1, stride=1, padding=0) - self.vis= vis - - - def forward(self, x_level_0, x_level_1, x_level_2): - if self.level==0: - level_0_resized = x_level_0 - level_1_resized = self.stride_level_1(x_level_1) - - level_2_downsampled_inter =F.max_pool2d(x_level_2, 3, stride=2, padding=1) - level_2_resized = self.stride_level_2(level_2_downsampled_inter) - - elif self.level==1: - 
level_0_compressed = self.compress_level_0(x_level_0) - level_0_resized =F.interpolate(level_0_compressed, scale_factor=2, mode='nearest') - level_1_resized =x_level_1 - level_2_resized =self.stride_level_2(x_level_2) - elif self.level==2: - level_0_compressed = self.compress_level_0(x_level_0) - level_0_resized =F.interpolate(level_0_compressed, scale_factor=4, mode='nearest') - level_1_resized =F.interpolate(x_level_1, scale_factor=2, mode='nearest') - level_2_resized =x_level_2 - - level_0_weight_v = self.weight_level_0(level_0_resized) - level_1_weight_v = self.weight_level_1(level_1_resized) - level_2_weight_v = self.weight_level_2(level_2_resized) - levels_weight_v = torch.cat((level_0_weight_v, level_1_weight_v, level_2_weight_v),1) - levels_weight = self.weight_levels(levels_weight_v) - levels_weight = F.softmax(levels_weight, dim=1) - - fused_out_reduced = level_0_resized * levels_weight[:,0:1,:,:]+ \ - level_1_resized * levels_weight[:,1:2,:,:]+ \ - level_2_resized * levels_weight[:,2:,:,:] - - out = self.expand(fused_out_reduced) - - if self.vis: - return out, levels_weight, fused_out_reduced.sum(dim=1) - else: - return out - -def make_divisible(v, divisor, min_value=None): - """ - This function is taken from the original tf repo. - It ensures that all layers have a channel number that is divisible by 8 - It can be seen here: - https://github.com/tensorflow/models/blob/master/research/slim/nets/mobilenet/mobilenet.py - :param v: - :param divisor: - :param min_value: - :return: - """ - if min_value is None: - min_value = divisor - new_v = max(min_value, int(v + divisor / 2) // divisor * divisor) - # Make sure that round down does not go down by more than 10%. - if new_v < 0.9 * v: - new_v += divisor - return new_v - - -class ConvBNReLU(nn.Sequential): - def __init__(self, in_planes, out_planes, kernel_size=3, stride=1, groups=1): - padding = (kernel_size - 1) // 2 - super(ConvBNReLU, self).__init__( - nn.Conv2d(in_planes, out_planes, kernel_size, stride, padding, groups=groups, bias=False), - nn.BatchNorm2d(out_planes), - nn.ReLU6(inplace=True) - ) - -def add_sepconv(in_ch, out_ch, ksize, stride): - - stage = nn.Sequential() - pad = (ksize - 1) // 2 - stage.add_module('sepconv', nn.Conv2d(in_channels=in_ch, - out_channels=in_ch, kernel_size=ksize, stride=stride, - padding=pad, groups=in_ch, bias=False)) - stage.add_module('sepbn', nn.BatchNorm2d(in_ch)) - stage.add_module('seprelu6', nn.ReLU6(inplace=True)) - stage.add_module('ptconv', nn.Conv2d(in_ch, out_ch, 1, 1, 0, bias=False)) - stage.add_module('ptbn', nn.BatchNorm2d(out_ch)) - stage.add_module('ptrelu6', nn.ReLU6(inplace=True)) - return stage - -class InvertedResidual(nn.Module): - def __init__(self, inp, oup, stride, expand_ratio): - super(InvertedResidual, self).__init__() - self.stride = stride - assert stride in [1, 2] - - hidden_dim = int(round(inp * expand_ratio)) - self.use_res_connect = self.stride == 1 and inp == oup - - layers = [] - if expand_ratio != 1: - # pw - layers.append(ConvBNReLU(inp, hidden_dim, kernel_size=1)) - layers.extend([ - # dw - ConvBNReLU(hidden_dim, hidden_dim, stride=stride, groups=hidden_dim), - # pw-linear - nn.Conv2d(hidden_dim, oup, 1, 1, 0, bias=False), - nn.BatchNorm2d(oup), - ]) - self.conv = nn.Sequential(*layers) - - def forward(self, x): - if self.use_res_connect: - return x + self.conv(x) - else: - return self.conv(x) - -class ressepblock(nn.Module): - def __init__(self, ch, out_ch, in_ch=None, shortcut=True): - - super().__init__() - self.shortcut = shortcut - self.module_list = 
nn.ModuleList() - in_ch = ch//2 if in_ch==None else in_ch - resblock_one = nn.ModuleList() - resblock_one.append(add_conv(ch, in_ch, 1, 1, leaky=False)) - resblock_one.append(add_conv(in_ch, out_ch, 3, 1,leaky=False)) - self.module_list.append(resblock_one) - - def forward(self, x): - for module in self.module_list: - h = x - for res in module: - h = res(h) - x = x + h if self.shortcut else h - return x - diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/solver/build.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/solver/build.py deleted file mode 100644 index 6d9d0ee5df1a6135c1a3df0151dfe0e36aa9971a..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/solver/build.py +++ /dev/null @@ -1,165 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -from enum import Enum -from typing import Any, Callable, Dict, Iterable, List, Set, Type, Union -import torch - -from detectron2.config import CfgNode - -from .lr_scheduler import WarmupCosineLR, WarmupMultiStepLR - -_GradientClipperInput = Union[torch.Tensor, Iterable[torch.Tensor]] -_GradientClipper = Callable[[_GradientClipperInput], None] - - -class GradientClipType(Enum): - VALUE = "value" - NORM = "norm" - - -def _create_gradient_clipper(cfg: CfgNode) -> _GradientClipper: - """ - Creates gradient clipping closure to clip by value or by norm, - according to the provided config. - """ - cfg = cfg.clone() - - def clip_grad_norm(p: _GradientClipperInput): - torch.nn.utils.clip_grad_norm_(p, cfg.CLIP_VALUE, cfg.NORM_TYPE) - - def clip_grad_value(p: _GradientClipperInput): - torch.nn.utils.clip_grad_value_(p, cfg.CLIP_VALUE) - - _GRADIENT_CLIP_TYPE_TO_CLIPPER = { - GradientClipType.VALUE: clip_grad_value, - GradientClipType.NORM: clip_grad_norm, - } - return _GRADIENT_CLIP_TYPE_TO_CLIPPER[GradientClipType(cfg.CLIP_TYPE)] - - -def _generate_optimizer_class_with_gradient_clipping( - optimizer_type: Type[torch.optim.Optimizer], gradient_clipper: _GradientClipper -) -> Type[torch.optim.Optimizer]: - """ - Dynamically creates a new type that inherits the type of a given instance - and overrides the `step` method to add gradient clipping - """ - - def optimizer_wgc_step(self, closure=None): - for group in self.param_groups: - for p in group["params"]: - gradient_clipper(p) - super(type(self), self).step(closure) - - OptimizerWithGradientClip = type( - optimizer_type.__name__ + "WithGradientClip", - (optimizer_type,), - {"step": optimizer_wgc_step}, - ) - return OptimizerWithGradientClip - - -def maybe_add_gradient_clipping( - cfg: CfgNode, optimizer: torch.optim.Optimizer -) -> torch.optim.Optimizer: - """ - If gradient clipping is enabled through config options, wraps the existing - optimizer instance of some type OptimizerType to become an instance - of the new dynamically created class OptimizerTypeWithGradientClip - that inherits OptimizerType and overrides the `step` method to - include gradient clipping. 
- - Args: - cfg: CfgNode - configuration options - optimizer: torch.optim.Optimizer - existing optimizer instance - - Return: - optimizer: torch.optim.Optimizer - either the unmodified optimizer instance (if gradient clipping is - disabled), or the same instance with adjusted __class__ to override - the `step` method and include gradient clipping - """ - if not cfg.SOLVER.CLIP_GRADIENTS.ENABLED: - return optimizer - grad_clipper = _create_gradient_clipper(cfg.SOLVER.CLIP_GRADIENTS) - OptimizerWithGradientClip = _generate_optimizer_class_with_gradient_clipping( - type(optimizer), grad_clipper - ) - optimizer.__class__ = OptimizerWithGradientClip - return optimizer - - -def build_optimizer(cfg: CfgNode, model: torch.nn.Module) -> torch.optim.Optimizer: - """ - Build an optimizer from config. - """ - norm_module_types = ( - torch.nn.BatchNorm1d, - torch.nn.BatchNorm2d, - torch.nn.BatchNorm3d, - torch.nn.SyncBatchNorm, - # NaiveSyncBatchNorm inherits from BatchNorm2d - torch.nn.GroupNorm, - torch.nn.InstanceNorm1d, - torch.nn.InstanceNorm2d, - torch.nn.InstanceNorm3d, - torch.nn.LayerNorm, - torch.nn.LocalResponseNorm, - ) - params: List[Dict[str, Any]] = [] - memo: Set[torch.nn.parameter.Parameter] = set() - for module in model.modules(): - for key, value in module.named_parameters(recurse=False): - if not value.requires_grad: - continue - # Avoid duplicating parameters - if value in memo: - continue - memo.add(value) - lr = cfg.SOLVER.BASE_LR - weight_decay = cfg.SOLVER.WEIGHT_DECAY - if isinstance(module, norm_module_types): - weight_decay = cfg.SOLVER.WEIGHT_DECAY_NORM - elif key == "bias": - # NOTE: unlike Detectron v1, we now default BIAS_LR_FACTOR to 1.0 - # and WEIGHT_DECAY_BIAS to WEIGHT_DECAY so that bias optimizer - # hyperparameters are by default exactly the same as for regular - # weights. - lr = cfg.SOLVER.BASE_LR * cfg.SOLVER.BIAS_LR_FACTOR - weight_decay = cfg.SOLVER.WEIGHT_DECAY_BIAS - params += [{"params": [value], "lr": lr, "weight_decay": weight_decay}] - - optimizer = torch.optim.SGD( - params, cfg.SOLVER.BASE_LR, momentum=cfg.SOLVER.MOMENTUM, nesterov=cfg.SOLVER.NESTEROV - ) - optimizer = maybe_add_gradient_clipping(cfg, optimizer) - return optimizer - - -def build_lr_scheduler( - cfg: CfgNode, optimizer: torch.optim.Optimizer -) -> torch.optim.lr_scheduler._LRScheduler: - """ - Build a LR scheduler from config. 
- """ - name = cfg.SOLVER.LR_SCHEDULER_NAME - if name == "WarmupMultiStepLR": - return WarmupMultiStepLR( - optimizer, - cfg.SOLVER.STEPS, - cfg.SOLVER.GAMMA, - warmup_factor=cfg.SOLVER.WARMUP_FACTOR, - warmup_iters=cfg.SOLVER.WARMUP_ITERS, - warmup_method=cfg.SOLVER.WARMUP_METHOD, - ) - elif name == "WarmupCosineLR": - return WarmupCosineLR( - optimizer, - cfg.SOLVER.MAX_ITER, - warmup_factor=cfg.SOLVER.WARMUP_FACTOR, - warmup_iters=cfg.SOLVER.WARMUP_ITERS, - warmup_method=cfg.SOLVER.WARMUP_METHOD, - ) - else: - raise ValueError("Unknown LR scheduler: {}".format(name)) diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/PointRend/point_rend/dataset_mapper.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/PointRend/point_rend/dataset_mapper.py deleted file mode 100644 index 76b64ee79b679741d547c5d1ffca55ac756051ae..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/PointRend/point_rend/dataset_mapper.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import copy -import logging -import numpy as np -import torch -from fvcore.common.file_io import PathManager -from fvcore.transforms.transform import CropTransform -from PIL import Image - -from detectron2.data import detection_utils as utils -from detectron2.data import transforms as T - -from .color_augmentation import ColorAugSSDTransform - -""" -This file contains the mapping that's applied to "dataset dicts" for semantic segmentation models. -Unlike the default DatasetMapper this mapper uses cropping as the last transformation. -""" - -__all__ = ["SemSegDatasetMapper"] - - -class SemSegDatasetMapper: - """ - A callable which takes a dataset dict in Detectron2 Dataset format, - and map it into a format used by semantic segmentation models. - - The callable currently does the following: - - 1. Read the image from "file_name" - 2. Applies geometric transforms to the image and annotation - 3. Find and applies suitable cropping to the image and annotation - 4. Prepare image and annotation to Tensors - """ - - def __init__(self, cfg, is_train=True): - if cfg.INPUT.CROP.ENABLED and is_train: - self.crop_gen = T.RandomCrop(cfg.INPUT.CROP.TYPE, cfg.INPUT.CROP.SIZE) - logging.getLogger(__name__).info("CropGen used in training: " + str(self.crop_gen)) - else: - self.crop_gen = None - - self.tfm_gens = utils.build_transform_gen(cfg, is_train) - - if cfg.INPUT.COLOR_AUG_SSD: - self.tfm_gens.append(ColorAugSSDTransform(img_format=cfg.INPUT.FORMAT)) - logging.getLogger(__name__).info( - "Color augmnetation used in training: " + str(self.tfm_gens[-1]) - ) - - # fmt: off - self.img_format = cfg.INPUT.FORMAT - self.single_category_max_area = cfg.INPUT.CROP.SINGLE_CATEGORY_MAX_AREA - self.ignore_value = cfg.MODEL.SEM_SEG_HEAD.IGNORE_VALUE - # fmt: on - - self.is_train = is_train - - def __call__(self, dataset_dict): - """ - Args: - dataset_dict (dict): Metadata of one image, in Detectron2 Dataset format. 
- - Returns: - dict: a format that builtin models in detectron2 accept - """ - dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below - image = utils.read_image(dataset_dict["file_name"], format=self.img_format) - utils.check_image_size(dataset_dict, image) - assert "sem_seg_file_name" in dataset_dict - - image, transforms = T.apply_transform_gens(self.tfm_gens, image) - if self.is_train: - with PathManager.open(dataset_dict.pop("sem_seg_file_name"), "rb") as f: - sem_seg_gt = Image.open(f) - sem_seg_gt = np.asarray(sem_seg_gt, dtype="uint8") - sem_seg_gt = transforms.apply_segmentation(sem_seg_gt) - if self.crop_gen: - image, sem_seg_gt = crop_transform( - image, - sem_seg_gt, - self.crop_gen, - self.single_category_max_area, - self.ignore_value, - ) - dataset_dict["sem_seg"] = torch.as_tensor(sem_seg_gt.astype("long")) - - # Pytorch's dataloader is efficient on torch.Tensor due to shared-memory, - # but not efficient on large generic data structures due to the use of pickle & mp.Queue. - # Therefore it's important to use torch.Tensor. - dataset_dict["image"] = torch.as_tensor(np.ascontiguousarray(image.transpose(2, 0, 1))) - - if not self.is_train: - dataset_dict.pop("sem_seg_file_name", None) - return dataset_dict - - return dataset_dict - - -def crop_transform(image, sem_seg, crop_gen, single_category_max_area, ignore_value): - """ - Find a cropping window such that no single category occupies more than - `single_category_max_area` in `sem_seg`. The function retries random cropping 10 times max. - """ - if single_category_max_area >= 1.0: - crop_tfm = crop_gen.get_transform(image) - sem_seg_temp = crop_tfm.apply_segmentation(sem_seg) - else: - h, w = sem_seg.shape - crop_size = crop_gen.get_crop_size((h, w)) - for _ in range(10): - y0 = np.random.randint(h - crop_size[0] + 1) - x0 = np.random.randint(w - crop_size[1] + 1) - sem_seg_temp = sem_seg[y0 : y0 + crop_size[0], x0 : x0 + crop_size[1]] - labels, cnt = np.unique(sem_seg_temp, return_counts=True) - cnt = cnt[labels != ignore_value] - if len(cnt) > 1 and np.max(cnt) / np.sum(cnt) < single_category_max_area: - break - crop_tfm = CropTransform(x0, y0, crop_size[1], crop_size[0]) - image = crop_tfm.apply_image(image) - return image, sem_seg_temp diff --git a/spaces/hkayabilisim/LIME/README.md b/spaces/hkayabilisim/LIME/README.md deleted file mode 100644 index 354c5d03aa5241024d6793b0e64274d9573cae5e..0000000000000000000000000000000000000000 --- a/spaces/hkayabilisim/LIME/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: LIME -emoji: 🐠 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/huang4414/GTest/public/GTest/main.html b/spaces/huang4414/GTest/public/GTest/main.html deleted file mode 100644 index d2a97b82451b3bc72e7728243477f19dbf980b9a..0000000000000000000000000000000000000000 --- a/spaces/huang4414/GTest/public/GTest/main.html +++ /dev/null @@ -1,27 +0,0 @@ - - - - - GTest - - - - - - -


-
- -
-
-
-
-
- -
- - - - - - \ No newline at end of file diff --git a/spaces/hugging-fellows/img-to-music/style.css b/spaces/hugging-fellows/img-to-music/style.css deleted file mode 100644 index 8f7397fe7f0971636015170df075cd2d070344ec..0000000000000000000000000000000000000000 --- a/spaces/hugging-fellows/img-to-music/style.css +++ /dev/null @@ -1,51 +0,0 @@ -#col-container {max-width: 510px; margin-left: auto; margin-right: auto;} -a {text-decoration-line: underline; font-weight: 600;} -div#music-output .h-full { - min-height: 5rem; -} -.footer { - margin-bottom: 45px; - margin-top: 10px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } -.animate-spin { - animation: spin 1s linear infinite; -} -@keyframes spin { - from { - transform: rotate(0deg); - } - to { - transform: rotate(360deg); - } -} -#share-btn-container { - display: flex; padding-left: 0.5rem !important; padding-right: 0.5rem !important; background-color: #000000; justify-content: center; align-items: center; border-radius: 9999px !important; width: 13rem; -} -#share-btn { - all: initial; color: #ffffff;font-weight: 600; cursor:pointer; font-family: 'IBM Plex Sans', sans-serif; margin-left: 0.5rem !important; padding-top: 0.25rem !important; padding-bottom: 0.25rem !important;right:0; -} -#share-btn * { - all: unset; -} -#share-btn-container div:nth-child(-n+2){ - width: auto !important; - min-height: 0px !important; -} -#share-btn-container .wrap { - display: none !important; -} \ No newline at end of file diff --git a/spaces/huggingface/Model_Cards_Writing_Tool/persist.py b/spaces/huggingface/Model_Cards_Writing_Tool/persist.py deleted file mode 100644 index 0fd58c1544523ae3a0e800adbf92851ed9c1c854..0000000000000000000000000000000000000000 --- a/spaces/huggingface/Model_Cards_Writing_Tool/persist.py +++ /dev/null @@ -1,26 +0,0 @@ -# Thank god this existed. 
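
For context, a minimal usage sketch of the `persist` / `load_widget_state` helpers defined in this `persist.py` (shown below); the page names and widget keys are illustrative assumptions, not taken from this repository.

```python
# Hypothetical multipage Streamlit app wired to persist()/load_widget_state().
# The page names and widget keys are assumptions for illustration only.
import streamlit as st

from persist import persist, load_widget_state


def main():
    if "page" not in st.session_state:
        # First run of the session: seed any state that should survive reruns.
        st.session_state.page = "home"
    # Re-apply previously persisted widget values before widgets are created.
    load_widget_state()

    page = st.sidebar.radio("Page", ["home", "settings"], key=persist("page"))
    if page == "settings":
        # Because the key is wrapped in persist(), the entered text survives
        # navigating away from this page and back again.
        st.text_input("Model name", key=persist("model_name"))
    else:
        st.write("Current model name:", st.session_state.get("model_name", ""))


if __name__ == "__main__":
    main()
```
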
-# https://gist.github.com/okld/0aba4869ba6fdc8d49132e6974e2e662 - -from streamlit import session_state as _state - -_PERSIST_STATE_KEY = f"{__name__}_PERSIST" - - -def persist(key: str) -> str: - """Mark widget state as persistent.""" - if _PERSIST_STATE_KEY not in _state: - _state[_PERSIST_STATE_KEY] = set() - - _state[_PERSIST_STATE_KEY].add(key) - - return key - - -def load_widget_state(): - """Load persistent widget state.""" - if _PERSIST_STATE_KEY in _state: - _state.update({ - key: value - for key, value in _state.items() - if key in _state[_PERSIST_STATE_KEY] - }) \ No newline at end of file diff --git a/spaces/huspacy/example-applications/examples/keyphrases.py b/spaces/huspacy/example-applications/examples/keyphrases.py deleted file mode 100644 index a1bc3bb8fb89413a53a4d995af2ee4f613b8753b..0000000000000000000000000000000000000000 --- a/spaces/huspacy/example-applications/examples/keyphrases.py +++ /dev/null @@ -1,53 +0,0 @@ -from typing import List, Tuple - -import gradio as gr -import pandas as pd -from textacy.extract.keyterms.sgrank import sgrank as keywords - -from examples.common import NLP, IDF - - -def process(text: str) -> pd.DataFrame: - doc = NLP(text) - terms: List[Tuple[str, float]] = keywords(doc, topn=10, include_pos=("NOUN", "PROPN"), idf=IDF, ngrams=(1, 2, 3)) - term_set = [t for t, _ in terms] - return pd.DataFrame([{"Keyphrase": term, "Score": prob} - for term, prob in terms - if all(other == term or term not in other for other in term_set)]) - - -EXAMPLES = [ - """A magyar futballválogatott negyedik Nemzetek Ligája mérkőzésén másodszor nyert, a hazai 1-0-s siker után idegenben 4-0-val megsemmisítette Angliát. Nagy győzelem volt, az enervált angolokat a saját közönségük fütyülte ki, miután a második félidőben el sem találták a kaput. 96 éve nem kaptak ekkora verést az angolok hazai pályán. -Az angol kapitány, Gareth Southgate kilenc helyen változtatott azon a csapaton, amelyik az olaszok ellen gól nélküli döntetlent játszott szombaton. Marco Rossi a kapuban cserélt, Dibusz Dénes állt a gólvonalon, Schäfer András visszatért a középpályára, miután a németek ellen letöltötte eltiltását. A 3-4-2-1-es felállás ezúttal sem változott. Ha végig akarja nézni helyszíni közvetítésünket, mit műveltek az angolok a Himnusz alatt, itt megteheti. -Az első helyzetre a hatodik percig kellett várni, akkor Kane passzából James húzott el a bal oldalon, középre adta a labdát, Bowen fejelt, Nagy Zsolt a helyén volt, és mentett. -Az első magyar lövés rögtön a kapuban landolt. -Szoboszlai ívelt be szabadrúgást a tizenhatoson belülre, Lang és Stone ugrott fel fejelni, a magyar védő megelőzte ellenfelét. A labda Kane talpa alatt elcsúszott, Sallai combbal átvette, azonnal lőtt 7 méterről, mielőtt még Phillips odaért volna, Ramsdale ugyan beleért, de kiütni már nem tudta. Szoboszlainak volt egy másik, sokkal közelebbről elvégzett szabadrúgása, amit igen veszélyesen lőtt be a kapu elé, James a gólvonalról mentett, Szalai Attila elől. A kipattanóból az angolok egy gyors kontrát vezettek, de a magyar tízes visszafutott, és a tizenhatoson belül szerelni tudott. -A magyar válogatott nem volt nagy nyomás alatt, az angolok akkor jártak legközelebb a gólhoz, amikor Orbán a 36. percben a saját kapuja felé fejelt, de Dibusz akkor is a helyén volt. A hazai szögletek veszélyesek voltak, de egyik sem annyira, hogy a szívünkhöz kellett volna kapnunk. -A második félidőben az angolok felgyorsították a játékukat, de igazán komoly helyzetet nem tudtak kialakítani, Rossi pedig már az 55. 
percben érezte, hogy frissíteni kell, és Szoboszlai helyére Gazdag, Styles helyére pedig Nagy Ádám állt. -Mivel a válogatott visszaállt, és fegyelmezetten zárta le a területeket, az angolok ötlettelen, olykor lassú adogatása veszélytelen volt. A csapat ezúttal is igazolta, mennyire képes megnehezíteni, megkeseríteni a riválisai dolgát. -A hajrában jött a varázslat Most azonban azt is igazolta, hogy egy pillanat alatt a kapu elé tud kerülni. A Szalai helyére beálló Ádám Martin megharcolt egy labdáért, megtartotta a térfél közepén, majd tökéletesen szöktette Sallait, aki a tizenhatoson belül állítgatás nélkül jobb külsővel elrúgta a labdát Ramsdale lába mellett. -A 77. percben Kane használható labdát kapott a szélről, kilenc méterről fejelt, a kapufáról pattant vissza a labda a mezőnybe, a center megpróbálta átvenni, de másodszorra már nem tudta, így odalett a helyzet. A 81. percben Magyarország berúgta a harmadik gólt. -Nego fejesét még hárították a védők, a kipattanót készítette le Ádám Martin a támadást kísérő Nagy Zsolt elé, aki 17 méterről külsővel, állítgatás nélkül, pazarul lőtte ki a jobb alsó sarkot. A 29 éves felcsúti védő ennek a 11 napnak a legnagyobb felfedezettje, mert a németek ellen is eredményes volt szombaton. -Stonest a hajrában még kiállították, a mieink nem törekedtek még jobban a gólkülönbség javítására, de magabiztosan passzolgattak, így Angliának esélye sem volt a szépítésre, miután alig volt náluk a labda. Hogy teljes legyen az angol KO, arról Gazdag Dániel gondoskodott, amikor egy nagy sprint után lazán átpörgette a labdát a kimozduló kapus felett. -Ha Anglia nem is veszi komolyan ezt a sorozatot, mert a novemberi vb-re készül, és egy fárasztó szezon végén már a legszívesebben pihennének a klasszisai, négy gólt biztosan nem akart kapni, mert így könnyen ki is eshetnek az A divízióból. Tavaly szeptemberben Eb-selejtezőn Anglia ugyanilyen arányban verte a mieinket a Puskás Arénában, ez most egy méltó visszavágás volt. -A csoport másik meccsén a németek 5-2-vel küldték haza az olaszokat, és ezzel feljöttek a második helyre.""", - """A megszokott menetrenden kívül és váratlanul nagy mértékben emelt a jegybank az irányadó rátán. A forint a hírre 395 alá erősödött az euróval szemben. -7,25 százalékos kamattal hirdette meg a Magyar Nemzeti Bank (MNB) az egyhetes betéti tenderét – derül ki a jegybank által közzétett adatokból. Az egyhetes betét azt jelenti, hogy a bankok ezzel a rátával parkoltathatják a pénzüket egy hétig az MNB-ben. Mivel az egyhetes betéti eszköz kamata jelenleg magasabb, mint az alapkamat, valójában ez az irányadó ráta. -A kamatemelés váratlan, mind időzítését, mind mértékét tekintve. A jegybank ugyan kommunikációja szerint nyitva tartja a lehetőségét, hogy bármikor emeljen az egyhetes betét kamatán, azon a Monetáris Tanács havi kamatdöntő ülései után szokott emelni. A cél az, hogy idővel az alapkamat és az egyhetes betét kamata újra összezárjon. A kamatemelés így a bevett menetrenden kívüli. A mértéke is nagyobb a megszokottnál, az emelés 0,5 százalékpontos, miközben a jegybank egy ideje 0,3 százalékpontos lépésekkel haladt felfelé. Legutóbb márciusban volt 0,5 százalékpontos emelés, ekkor a ráta 5,35 százalékról 5,85 százalékra nőtt. -A Monetáris Tanács legutóbb május végén emelt az alapkamat szintjén, 5,4-ről 5,9 százalékra. Addig kell a kamatokat emelni, amíg az szükséges, hogy az inflációs célt fenntartható módon el tudjuk érni – mondta Virág Barnabás alelnök az alapkamat-emelés után. 
Az alelnök szerint a következő hónapokban várhatóan még tovább nő az infláció az áprilisi 9,5 százalékról, Virág elhúzódó infláció elleni harcot, a szigorúbb monetáris kondíciók tartós fenntartását ígérte. -A forint árfolyama mindenesetre jól reagált az egyhetesbetétkamat-emelésre, az euróval szemben a napi nyitó árfolyam 397,5 környékéről 395 alá erősödött. A megelőző napokban a forint sorra döntögette a negatív árfolyamrekordokat, az euróval szemben többször 400 fölött is járt. Jelenleg a történelmi mélypont 402,96. -""", - """Jövőre nem lesz "ledolgozós" hétvége -2023-ban egyetlen szombati munkanap sem lesz. -Jövőre kétszer (húsvétkor és karácsonykor) lesz négynapos a hétvége. Emellett négyszer lesz háromnapos hosszú hétvége, mivel május 1-én, pünkösdkor, október 23-án, és az újévkor is egy-egy hétfővel bővülnek majd a hétvégi szabadnapok - ezt Koncz Zsófia, a Technológiai és Ipari Minisztérium új parlamenti államtitkára közölte egy Facebook-bejegyzésben, hangsúlyozva, hogy a munkarendet meghatározó minisztériumuk úgy döntött, hogy 2023-ban egyetlen egynapos, "ledolgozós” hétvége sem lesz. -Hogy mennyit ér egy munkanap, arról csak becsléseket lehet készíteni, és a statisztikusok legszívesebben erről is lebeszélnék a kísérletező kedvűeket. Nagyon leegyszerűsítve mondhatjuk azt: a GDP-t leosztva a munkanapok számával 160 milliárd forintot ér egy munkanap, de akkor még azt sem vettük figyelembe, hogy van munka azért hétvégéken is. -""" -] - -demo = gr.Interface( - fn=process, - inputs=gr.Textbox(value=EXAMPLES[0], lines=10, label="Input text", show_label=True), - outputs=gr.DataFrame(label="Keywords", show_label=False, max_cols=2, max_rows=10), - examples=EXAMPLES, - # cache_examples=True, -) diff --git a/spaces/hussain-shk/IndiSent/subword-nmt/subword_nmt/apply_bpe.py b/spaces/hussain-shk/IndiSent/subword-nmt/subword_nmt/apply_bpe.py deleted file mode 100644 index 25996c808d02643c45d0ee0a837b5b291f8aa4f8..0000000000000000000000000000000000000000 --- a/spaces/hussain-shk/IndiSent/subword-nmt/subword_nmt/apply_bpe.py +++ /dev/null @@ -1,448 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -# Author: Rico Sennrich - -"""Use operations learned with learn_bpe.py to encode a new text. -The text will not be smaller, but use only a fixed vocabulary, with rare words -encoded as variable-length sequences of subword units. - -Reference: -Rico Sennrich, Barry Haddow and Alexandra Birch (2015). Neural Machine Translation of Rare Words with Subword Units. -Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016). Berlin, Germany. 
-""" - -from __future__ import unicode_literals, division - -import sys -import os -import inspect -import codecs -import io -import argparse -import re -import warnings -import random -import tempfile -from multiprocessing import Pool, cpu_count - -# hack for python2/3 compatibility -from io import open -argparse.open = open - -class BPE(object): - - def __init__(self, codes, merges=-1, separator='@@', vocab=None, glossaries=None): - - codes.seek(0) - offset=1 - - # check version information - firstline = codes.readline() - if firstline.startswith('#version:'): - self.version = tuple([int(x) for x in re.sub(r'(\.0+)*$','', firstline.split()[-1]).split(".")]) - offset += 1 - else: - self.version = (0, 1) - codes.seek(0) - - self.bpe_codes = [tuple(item.strip('\r\n ').split(' ')) for (n, item) in enumerate(codes.read().rstrip('\n').split('\n')) if (n < merges or merges == -1)] - - for i, item in enumerate(self.bpe_codes): - if len(item) != 2: - sys.stderr.write('Error: invalid line {0} in BPE codes file: {1}\n'.format(i+offset, ' '.join(item))) - sys.stderr.write('The line should exist of exactly two subword units, separated by whitespace\n') - sys.exit(1) - - # some hacking to deal with duplicates (only consider first instance) - self.bpe_codes = dict([(code,i) for (i,code) in reversed(list(enumerate(self.bpe_codes)))]) - - self.bpe_codes_reverse = dict([(pair[0] + pair[1], pair) for pair,i in self.bpe_codes.items()]) - - self.separator = separator - - self.vocab = vocab - - self.glossaries = glossaries if glossaries else [] - - self.glossaries_regex = re.compile('^({})$'.format('|'.join(glossaries))) if glossaries else None - - self.cache = {} - - def process_lines(self, filename, outfile, dropout=0, num_workers=1): - - if sys.version_info < (3, 0): - print("Parallel mode is only supported in Python3.") - sys.exit(1) - - if num_workers == 1: - _process_lines(self, filename, outfile, dropout, 0, 0) - elif num_workers > 1: - with open(filename, encoding="utf-8") as f: - size = os.fstat(f.fileno()).st_size - chunk_size = int(size / num_workers) - offsets = [0 for _ in range(num_workers + 1)] - for i in range(1, num_workers): - f.seek(chunk_size * i) - pos = f.tell() - while True: - try: - line = f.readline() - break - except UnicodeDecodeError: - pos -= 1 - f.seek(pos) - offsets[i] = f.tell() - assert 0 <= offsets[i] < 1e20, "Bad new line separator, e.g. 
'\\r'" - res_files = [] - pool = Pool(processes=num_workers) - for i in range(num_workers): - tmp = tempfile.NamedTemporaryFile(delete=False) - tmp.close() - res_files.append(tmp) - pool.apply_async(_process_lines, (self, filename, tmp.name, dropout, offsets[i], offsets[i + 1])) - pool.close() - pool.join() - for i in range(num_workers): - with open(res_files[i].name, encoding="utf-8") as fi: - for line in fi: - outfile.write(line) - os.remove(res_files[i].name) - else: - raise ValueError('`num_workers` is expected to be a positive number, but got {}.'.format(num_workers)) - - def process_line(self, line, dropout=0): - """segment line, dealing with leading and trailing whitespace""" - - out = "" - - leading_whitespace = len(line)-len(line.lstrip('\r\n ')) - if leading_whitespace: - out += line[:leading_whitespace] - - out += self.segment(line, dropout) - - trailing_whitespace = len(line)-len(line.rstrip('\r\n ')) - if trailing_whitespace and trailing_whitespace != len(line): - out += line[-trailing_whitespace:] - - return out - - def segment(self, sentence, dropout=0): - """segment single sentence (whitespace-tokenized string) with BPE encoding""" - segments = self.segment_tokens(sentence.strip('\r\n ').split(' '), dropout) - return ' '.join(segments) - - def segment_tokens(self, tokens, dropout=0): - """segment a sequence of tokens with BPE encoding""" - output = [] - for word in tokens: - # eliminate double spaces - if not word: - continue - new_word = [out for segment in self._isolate_glossaries(word) - for out in encode(segment, - self.bpe_codes, - self.bpe_codes_reverse, - self.vocab, - self.separator, - self.version, - self.cache, - self.glossaries_regex, - dropout)] - - for item in new_word[:-1]: - output.append(item + self.separator) - output.append(new_word[-1]) - - return output - - def _isolate_glossaries(self, word): - word_segments = [word] - for gloss in self.glossaries: - word_segments = [out_segments for segment in word_segments - for out_segments in isolate_glossary(segment, gloss)] - return word_segments - -def _process_lines(bpe, filename, outfile, dropout, begin, end): - if isinstance(outfile, str): - fo = open(outfile, "w", encoding="utf-8") - else: - fo = outfile - with open(filename, encoding="utf-8") as f: - f.seek(begin) - line = f.readline() - while line: - pos = f.tell() - assert 0 <= pos < 1e20, "Bad new line separator, e.g. 
'\\r'" - if end > 0 and pos > end: - break - fo.write(bpe.process_line(line, dropout)) - line = f.readline() - if isinstance(outfile, str): - fo.close() - -def create_parser(subparsers=None): - - if subparsers: - parser = subparsers.add_parser('apply-bpe', - formatter_class=argparse.RawDescriptionHelpFormatter, - description="learn BPE-based word segmentation") - else: - parser = argparse.ArgumentParser( - formatter_class=argparse.RawDescriptionHelpFormatter, - description="learn BPE-based word segmentation") - - parser.add_argument( - '--input', '-i', type=argparse.FileType('r'), default=sys.stdin, - metavar='PATH', - help="Input file (default: standard input).") - parser.add_argument( - '--codes', '-c', type=argparse.FileType('r'), metavar='PATH', - required=True, - help="File with BPE codes (created by learn_bpe.py).") - parser.add_argument( - '--merges', '-m', type=int, default=-1, - metavar='INT', - help="Use this many BPE operations (<= number of learned symbols)"+ - "default: Apply all the learned merge operations") - parser.add_argument( - '--output', '-o', type=argparse.FileType('w'), default=sys.stdout, - metavar='PATH', - help="Output file (default: standard output)") - parser.add_argument( - '--separator', '-s', type=str, default='@@', metavar='STR', - help="Separator between non-final subword units (default: '%(default)s'))") - parser.add_argument( - '--vocabulary', type=argparse.FileType('r'), default=None, - metavar="PATH", - help="Vocabulary file (built with get_vocab.py). If provided, this script reverts any merge operations that produce an OOV.") - parser.add_argument( - '--vocabulary-threshold', type=int, default=None, - metavar="INT", - help="Vocabulary threshold. If vocabulary is provided, any word with frequency < threshold will be treated as OOV") - parser.add_argument( - '--dropout', type=float, default=0, - metavar="P", - help="Dropout BPE merge operations with probability P (Provilkov et al., 2019). Use this on training data only.") - parser.add_argument( - '--glossaries', type=str, nargs='+', default=None, - metavar="STR", - help="Glossaries. Words matching any of the words/regex provided in glossaries will not be affected "+ - "by the BPE (i.e. they will neither be broken into subwords, nor concatenated with other subwords. "+ - "Can be provided as a list of words/regex after the --glossaries argument. Enclose each regex in quotes.") - parser.add_argument( - '--seed', type=int, default=None, - metavar="S", - help="Random seed for the random number generators (e.g. for BPE dropout with --dropout).") - parser.add_argument( - '--num-workers', type=int, default=1, - help="Number of processors to process texts, only supported in Python3. If -1, set `multiprocessing.cpu_count()`. 
(default: %(default)s)") - - return parser - -def encode(orig, bpe_codes, bpe_codes_reverse, vocab, separator, version, cache, glossaries_regex=None, dropout=0): - """Encode word based on list of BPE merge operations, which are applied consecutively - """ - - if not dropout and orig in cache: - return cache[orig] - - if glossaries_regex and glossaries_regex.match(orig): - cache[orig] = (orig,) - return (orig,) - - if len(orig) == 1: - return orig - - if version == (0, 1): - word = list(orig) + [''] - elif version == (0, 2): # more consistent handling of word-final segments - word = list(orig[:-1]) + [orig[-1] + ''] - else: - raise NotImplementedError - - while len(word) > 1: - - # get list of symbol pairs; optionally apply dropout - pairs = [(bpe_codes[pair],i,pair) for (i,pair) in enumerate(zip(word, word[1:])) if (not dropout or random.random() > dropout) and pair in bpe_codes] - - if not pairs: - break - - #get first merge operation in list of BPE codes - bigram = min(pairs)[2] - - # find start position of all pairs that we want to merge - positions = [i for (rank,i,pair) in pairs if pair == bigram] - - i = 0 - new_word = [] - bigram = ''.join(bigram) - for j in positions: - # merges are invalid if they start before current position. This can happen if there are overlapping pairs: (x x x -> xx x) - if j < i: - continue - new_word.extend(word[i:j]) # all symbols before merged pair - new_word.append(bigram) # merged pair - i = j+2 # continue after merged pair - new_word.extend(word[i:]) # add all symbols until end of word - word = new_word - - # don't print end-of-word symbols - if word[-1] == '': - word = word[:-1] - elif word[-1].endswith(''): - word[-1] = word[-1][:-4] - - word = tuple(word) - if vocab: - word = check_vocab_and_split(word, bpe_codes_reverse, vocab, separator) - - cache[orig] = word - return word - -def recursive_split(segment, bpe_codes, vocab, separator, final=False): - """Recursively split segment into smaller units (by reversing BPE merges) - until all units are either in-vocabulary, or cannot be split futher.""" - - try: - if final: - left, right = bpe_codes[segment + ''] - right = right[:-4] - else: - left, right = bpe_codes[segment] - except: - #sys.stderr.write('cannot split {0} further.\n'.format(segment)) - yield segment - return - - if left + separator in vocab: - yield left - else: - for item in recursive_split(left, bpe_codes, vocab, separator, False): - yield item - - if (final and right in vocab) or (not final and right + separator in vocab): - yield right - else: - for item in recursive_split(right, bpe_codes, vocab, separator, final): - yield item - -def check_vocab_and_split(orig, bpe_codes, vocab, separator): - """Check for each segment in word if it is in-vocabulary, - and segment OOV segments into smaller units by reversing the BPE merge operations""" - - out = [] - - for segment in orig[:-1]: - if segment + separator in vocab: - out.append(segment) - else: - #sys.stderr.write('OOV: {0}\n'.format(segment)) - for item in recursive_split(segment, bpe_codes, vocab, separator, False): - out.append(item) - - segment = orig[-1] - if segment in vocab: - out.append(segment) - else: - #sys.stderr.write('OOV: {0}\n'.format(segment)) - for item in recursive_split(segment, bpe_codes, vocab, separator, True): - out.append(item) - - return out - - -def read_vocabulary(vocab_file, threshold): - """read vocabulary file produced by get_vocab.py, and filter according to frequency threshold. 
- """ - - vocabulary = set() - - for line in vocab_file: - word, freq = line.strip('\r\n ').split(' ') - freq = int(freq) - if threshold == None or freq >= threshold: - vocabulary.add(word) - - return vocabulary - -def isolate_glossary(word, glossary): - """ - Isolate a glossary present inside a word. - - Returns a list of subwords. In which all 'glossary' glossaries are isolated - - For example, if 'USA' is the glossary and '1934USABUSA' the word, the return value is: - ['1934', 'USA', 'B', 'USA'] - """ - # regex equivalent of (if word == glossary or glossary not in word) - if re.match('^'+glossary+'$', word) or not re.search(glossary, word): - return [word] - else: - segments = re.split(r'({})'.format(glossary), word) - segments, ending = segments[:-1], segments[-1] - segments = list(filter(None, segments)) # Remove empty strings in regex group. - return segments + [ending.strip('\r\n ')] if ending != '' else segments - -if __name__ == '__main__': - - currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) - newdir = os.path.join(currentdir, 'subword_nmt') - if os.path.isdir(newdir): - warnings.simplefilter('default') - warnings.warn( - "this script's location has moved to {0}. This symbolic link will be removed in a future version. Please point to the new location, or install the package and use the command 'subword-nmt'".format(newdir), - DeprecationWarning - ) - - # python 2/3 compatibility - if sys.version_info < (3, 0): - sys.stderr = codecs.getwriter('UTF-8')(sys.stderr) - sys.stdout = codecs.getwriter('UTF-8')(sys.stdout) - sys.stdin = codecs.getreader('UTF-8')(sys.stdin) - else: - sys.stdin = io.TextIOWrapper(sys.stdin.buffer, encoding='utf-8') - sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding='utf-8') - sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8', write_through=True, line_buffering=True) - - parser = create_parser() - args = parser.parse_args() - - if args.num_workers <= 0: - args.num_workers = cpu_count() - - # read/write files as UTF-8 - args.codes = codecs.open(args.codes.name, encoding='utf-8') - if args.input.name != '': - args.input = codecs.open(args.input.name, encoding='utf-8') - if args.output.name != '': - args.output = codecs.open(args.output.name, 'w', encoding='utf-8') - if args.vocabulary: - args.vocabulary = codecs.open(args.vocabulary.name, encoding='utf-8') - - if args.vocabulary: - vocabulary = read_vocabulary(args.vocabulary, args.vocabulary_threshold) - else: - vocabulary = None - - if sys.version_info < (3, 0): - args.separator = args.separator.decode('UTF-8') - if args.glossaries: - args.glossaries = [g.decode('UTF-8') for g in args.glossaries] - if args.num_workers > 1: - args.num_workers = 1 - warnings.warn("Parallel mode is only supported in Python3. Using 1 processor instead.") - - if args.seed is not None: - random.seed(args.seed) - - bpe = BPE(args.codes, args.merges, args.separator, vocabulary, args.glossaries) - - if args.input.name == '' or args.num_workers == 1: - if args.num_workers > 1: - warnings.warn("In parallel mode, the input cannot be STDIN. 
Using 1 processor instead.") - for line in args.input: - args.output.write(bpe.process_line(line, args.dropout)) - else: - bpe.process_lines(args.input.name, args.output, args.dropout, args.num_workers) diff --git a/spaces/hysts/multiresolution-textual-inversion/README.md b/spaces/hysts/multiresolution-textual-inversion/README.md deleted file mode 100644 index 54c7c62c0bf4bc56ba7a2272bdffbc63d67ee5e5..0000000000000000000000000000000000000000 --- a/spaces/hysts/multiresolution-textual-inversion/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Multiresolution Textual Inversion -emoji: 🏢 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false -suggested_hardware: t4-small ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/iamironman4279/SadTalker/src/face3d/models/arcface_torch/configs/3millions_pfc.py b/spaces/iamironman4279/SadTalker/src/face3d/models/arcface_torch/configs/3millions_pfc.py deleted file mode 100644 index 77caafdbb300d8109d5bfdb844f131710ef81f20..0000000000000000000000000000000000000000 --- a/spaces/iamironman4279/SadTalker/src/face3d/models/arcface_torch/configs/3millions_pfc.py +++ /dev/null @@ -1,23 +0,0 @@ -from easydict import EasyDict as edict - -# configs for test speed - -config = edict() -config.loss = "arcface" -config.network = "r50" -config.resume = False -config.output = None -config.embedding_size = 512 -config.sample_rate = 0.1 -config.fp16 = True -config.momentum = 0.9 -config.weight_decay = 5e-4 -config.batch_size = 128 -config.lr = 0.1 # batch size is 512 - -config.rec = "synthetic" -config.num_classes = 300 * 10000 -config.num_epoch = 30 -config.warmup_epoch = -1 -config.decay_epoch = [10, 16, 22] -config.val_targets = [] diff --git a/spaces/igtsolutions/igtsolutions/index.html b/spaces/igtsolutions/igtsolutions/index.html deleted file mode 100644 index 58275de3b1c343a98420342baa076b9baaafa157..0000000000000000000000000000000000000000 --- a/spaces/igtsolutions/igtsolutions/index.html +++ /dev/null @@ -1,19 +0,0 @@ - - - - - - My static Space - - - -
-    Welcome to your static Space!
-    You can modify this app directly by editing index.html in the Files and versions tab.
-    Also don't forget to check the Spaces documentation.
- - diff --git a/spaces/inamXcontru/PoeticTTS/Calf Image To Text Converter Serial Key OCR .md b/spaces/inamXcontru/PoeticTTS/Calf Image To Text Converter Serial Key OCR .md deleted file mode 100644 index a0b6f93d53e82900b007344c4043a593afa5c64a..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Calf Image To Text Converter Serial Key OCR .md +++ /dev/null @@ -1,6 +0,0 @@ -
-naanum rowdy thaan movie download 720p youtube
-
-Download File ->>> https://gohhs.com/2uz3Sd
-
- aaccfb2cb3
-
diff --git a/spaces/indonesian-nlp/luganda-asr/app.py b/spaces/indonesian-nlp/luganda-asr/app.py deleted file mode 100644 index cd0247b67e2dfb3d41b690f71c4f1ab8e6b728e6..0000000000000000000000000000000000000000 --- a/spaces/indonesian-nlp/luganda-asr/app.py +++ /dev/null @@ -1,71 +0,0 @@ -import soundfile as sf -import torch -from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor -from pyctcdecode import build_ctcdecoder -import gradio as gr -import librosa -import os -from multiprocessing import Pool - - -class KenLM: - def __init__(self, tokenizer, model_name, num_workers=8, beam_width=128): - self.num_workers = num_workers - self.beam_width = beam_width - vocab_dict = tokenizer.get_vocab() - self.vocabulary = [x[0] for x in sorted(vocab_dict.items(), key=lambda x: x[1], reverse=False)] - # Workaround for wrong number of vocabularies: - self.vocabulary = self.vocabulary[:-2] - self.decoder = build_ctcdecoder(self.vocabulary, model_name) - - @staticmethod - def lm_postprocess(text): - return ' '.join([x if len(x) > 1 else "" for x in text.split()]).strip() - - def decode(self, logits): - probs = logits.cpu().numpy() - # probs = logits.numpy() - with Pool(self.num_workers) as pool: - text = self.decoder.decode_batch(pool, probs) - text = [KenLM.lm_postprocess(x) for x in text] - return text - - -def convert(inputfile, outfile): - target_sr = 16000 - data, sample_rate = librosa.load(inputfile) - data = librosa.resample(data, orig_sr=sample_rate, target_sr=target_sr) - sf.write(outfile, data, target_sr) - - -api_token = os.getenv("API_TOKEN") -model_name = "indonesian-nlp/wav2vec2-luganda" -processor = Wav2Vec2Processor.from_pretrained(model_name, use_auth_token=api_token) -model = Wav2Vec2ForCTC.from_pretrained(model_name, use_auth_token=api_token) -kenlm = KenLM(processor.tokenizer, "5gram.bin") - - -def parse_transcription(wav_file): - filename = wav_file.name.split('.')[0] - convert(wav_file.name, filename + "16k.wav") - speech, _ = sf.read(filename + "16k.wav") - input_values = processor(speech, sampling_rate=16_000, return_tensors="pt").input_values - with torch.no_grad(): - logits = model(input_values).logits - transcription = kenlm.decode(logits)[0] - return transcription - - -output = gr.outputs.Textbox(label="The transcript") - -input_ = gr.inputs.Audio(source="microphone", type="file") - -gr.Interface(parse_transcription, inputs=input_, outputs=[output], - analytics_enabled=False, - title="Automatic Speech Recognition for Luganda", - description="Speech Recognition Live Demo for Luganda", - article="This demo was built for the " - "Mozilla Luganda Automatic Speech Recognition Competition. " - "It uses the indonesian-nlp/wav2vec2-luganda model " - "which was fine-tuned on Luganda Common Voice speech datasets.", - enable_queue=True).launch(inline=False, server_name="0.0.0.0", show_tips=False, enable_queue=True) \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Buddha.dll Sleeping Dogs Crack D.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Buddha.dll Sleeping Dogs Crack D.md deleted file mode 100644 index 67b2f93a07501cab0c6302079b81b8f4547bade4..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Buddha.dll Sleeping Dogs Crack D.md +++ /dev/null @@ -1,6 +0,0 @@ -
-Buddha.dll Sleeping Dogs Crack D
-
-Download Zip 🗸 https://urlin.us/2uEw2N
-
- 3cee63e6c2
-
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Dolby Digital AC 3 Pro Plugin 1.0 Build 3060.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Dolby Digital AC 3 Pro Plugin 1.0 Build 3060.md deleted file mode 100644 index 9d78cebe44f9fac0b59cf236c25a94e32b1799af..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Dolby Digital AC 3 Pro Plugin 1.0 Build 3060.md +++ /dev/null @@ -1,6 +0,0 @@ -
-Dolby Digital AC 3 Pro Plugin 1.0 Build 3060
-
-Download Zip > https://urlin.us/2uEvM7
-
-Photo Go 1.0b (build 123). Noise reduction plugin 2.0h (build 451). Dolby Digital AC-3 Pro 1.0 plugin (build 3060). MainConcept MPEG-1&2 Pro Plugin 2.0 ... Dolby Digital AC-3 Pro Plugin 1.0 (build 3060). IFF Plugin 1.6. Neat Video Pro 2.5 plugin (build 1490). Compressor plugin (build 487). E-Light 2.0 plugin (build 946). E-Motion 2.0 plugin (build 96). E-Motion 2.0 plugin (build 946). Mosaic plugin 1.2. Overture 1.8 plugin (build 489). Renderforest Plugin 8a78ff9644
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/FantaMorph.Deluxe.v5.2.5.Incl.Keymaker-CORE Free Download [HOT].md b/spaces/inplisQlawa/anything-midjourney-v4-1/FantaMorph.Deluxe.v5.2.5.Incl.Keymaker-CORE Free Download [HOT].md deleted file mode 100644 index 2ed3d396282defadf29a6fcf5f746ccaccee8d82..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/FantaMorph.Deluxe.v5.2.5.Incl.Keymaker-CORE Free Download [HOT].md +++ /dev/null @@ -1,38 +0,0 @@ -
-FantaMorph.Deluxe.v5.2.5.Incl.Keymaker-CORE Free Download
-
-Download Zip ☆☆☆☆☆ https://urlin.us/2uEvqe
-
- -Some of the features is of course also available in iMovie. - -13. Adobe Spark - -Adobe Spark is the most widely used video editing tool out there. It’s built on Adobe Premiere Pro editor. Like iMovie, you can create timeline and add soundtrack with Adobe Spark. - -Adobe Spark is used by both professional and amateur. You can create stunning videos for presentation with the help of Adobe Spark. - -14. iMovie - -iMovie is the Apple’s video editing software. It’s been around for a long time. Unlike Adobe Spark, it is not available as a standalone app. You can also not use iMovie to do videos. You’ll need to download iMovie from the Mac App Store. - -iMovie is an easy to use app to create videos. You can create a playlist of photos, videos, audios, and even text. - -You can add frames to videos, add backgrounds, use transitions to create mesmerizing videos. - -iMovie has a powerful effect called slideshow. You can use some of the effects on your videos to make it look like a slideshow. You can also use something called video filter. The video filter makes your video look like cartoon, like a black and white, or sepia. - -15. VideoHive - -VideoHive is video editing app like iMovie and Spark. It’s video editor is available both on App Store and Google Play Store. It’s a little bit more expensive than Spark and iMovie. - -VideoHive is also popular for its video editing tool. The app is pretty easy to use. You can import and use videos, photos, and audios. You can add effects to videos, trim, split, and upload them. - -VideoHive gives you a lot of options. You can also add effects to videos. The app can be used by both amateur and professional. - -16. Final Cut Pro - -Final Cut Pro is Apple’s premier video editing software. It is very powerful video editing tool. Final Cut Pro X is the latest version of Final Cut Pro. - -Final Cut Pro gives you a lot of options to use to create amazing videos. You can use the timeline, the keyframe to get things done. The app allows you to add text, layers, and effects. - -You can add photos, footage, stills to your video. You can also use some of the 4fefd39f24
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Fsx Tastaturbelegung Deutsch Pdf Download Extra Qualityl.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Fsx Tastaturbelegung Deutsch Pdf Download Extra Qualityl.md deleted file mode 100644 index 001f62a3ad51a682fa2222a9f4102665864aadf7..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Fsx Tastaturbelegung Deutsch Pdf Download Extra Qualityl.md +++ /dev/null @@ -1,11 +0,0 @@ -
-Fsx Tastaturbelegung Deutsch Pdf Downloadl
-
-Download Zip ✦✦✦ https://urlin.us/2uEyv1
-
-Video on the airfield DLC LFHU L' Alp D' Hughes Airport (Xbox + PC) & Airbus H135...Flightplan (LFHU) -Description -Flightplan DLC LFHU for Xbox & Airbus H135 & H135P1 and Airbus H135P2 as aircraft model and Airbus H135P1 as real aircraft for the Flightplan DLC LFHU game. -Airbus H135P1 and Airbus H135P2 can be used on Xbox & Flightplan DLC. -Flightplan DLC for Xbox and Flightplan DLC for PC provide players with the Flightplan DLC as an add-on. -Flightplan DLC LFHU for Xbox & Airbus 8a78ff9644
diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/JetBrains PhpStorm 2018.2.5 Crack [CracksMind] Serial Key Keygen !!INSTALL!!.md b/spaces/inplisQlawa/anything-midjourney-v4-1/JetBrains PhpStorm 2018.2.5 Crack [CracksMind] Serial Key Keygen !!INSTALL!!.md deleted file mode 100644 index b9862c88f9700115d6b2953650cfc223a3ce6a84..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/JetBrains PhpStorm 2018.2.5 Crack [CracksMind] Serial Key Keygen !!INSTALL!!.md +++ /dev/null @@ -1,6 +0,0 @@ -
-JetBrains PhpStorm 2018.2.5 Crack [CracksMind] Serial Key keygen
-
-DOWNLOAD ——— https://urlin.us/2uEwnT
-
-PhpStorm 2019.1.2 Crack Key × Activation Code X64 File Crack [Latest] (self. ... PhpStorm 2018.2.5 Crack + Licese Key Free & Keygen [Latest] ... 4d29de3e1b
diff --git a/spaces/ispast/Genshin_MB_VITS_TTS/text/symbols.py b/spaces/ispast/Genshin_MB_VITS_TTS/text/symbols.py deleted file mode 100644 index a88333a3aafd078f2441b81f8394b96fb27dc77d..0000000000000000000000000000000000000000 --- a/spaces/ispast/Genshin_MB_VITS_TTS/text/symbols.py +++ /dev/null @@ -1,22 +0,0 @@ -''' -Defines the set of symbols used in text input to the model. -''' - -''' -_pad = '_' -_punctuation = ';:,.!?¡¿—…"«»“” ' -_letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz' -_letters_ipa = "ɑɐɒæɓʙβɔɕçɗɖðʤəɘɚɛɜɝɞɟʄɡɠɢʛɦɧħɥʜɨɪʝɭɬɫɮʟɱɯɰŋɳɲɴøɵɸθœɶʘɹɺɾɻʀʁɽʂʃʈʧʉʊʋⱱʌɣɤʍχʎʏʑʐʒʔʡʕʢǀǁǂǃˈˌːˑʼʴʰʱʲʷˠˤ˞↓↑→↗↘'̩'ᵻ" -''' - -# cjke_cleaners2 -_pad = '_' -_punctuation = ',.!?-~…' -_letters = 'NQabdefghijklmnopstuvwxyzɑæʃʑçɯɪɔɛɹðəɫɥɸʊɾʒθβŋɦ⁼ʰ`^#*=ˈˌ→↓↑ ' - - -# Export all symbols: -#symbols = [_pad] + list(_punctuation) + list(_letters) + list(_letters_ipa) -symbols = [_pad] + list(_punctuation) + list(_letters) -# Special symbol ids -SPACE_ID = symbols.index(" ") diff --git a/spaces/ivntl/MMS/uroman/bin/de-accent.pl b/spaces/ivntl/MMS/uroman/bin/de-accent.pl deleted file mode 100644 index d73ed8361f2a65377e605504b67d74d8fb1a755b..0000000000000000000000000000000000000000 --- a/spaces/ivntl/MMS/uroman/bin/de-accent.pl +++ /dev/null @@ -1,201 +0,0 @@ -#!/usr/bin/perl -w - -sub print_version { - print STDERR "$0 version 1.1\n"; - print STDERR " Author: Ulf Hermjakob\n"; - print STDERR " Last changed: March 14, 2011\n"; -} - -sub print_usage { - print STDERR "$0 [options] < with_accents.txt > without_accents.txt\n"; - print STDERR " -h or -help\n"; - print STDERR " -v or -version\n"; -} - -sub de_accent_string { - local($s) = @_; - - # $s =~ tr/A-Z/a-z/; - unless (0) { - # Latin-1 - if ($s =~ /\xC3[\x80-\xBF]/) { - $s =~ s/(À|Á|Â|Ã|Ä|Å)/A/g; - $s =~ s/Æ/Ae/g; - $s =~ s/Ç/C/g; - $s =~ s/Ð/D/g; - $s =~ s/(È|É|Ê|Ë)/E/g; - $s =~ s/(Ì|Í|Î|Ï)/I/g; - $s =~ s/Ñ/N/g; - $s =~ s/(Ò|Ó|Ô|Õ|Ö|Ø)/O/g; - $s =~ s/(Ù|Ú|Û|Ü)/U/g; - $s =~ s/Þ/Th/g; - $s =~ s/Ý/Y/g; - $s =~ s/(à|á|â|ã|ä|å)/a/g; - $s =~ s/æ/ae/g; - $s =~ s/ç/c/g; - $s =~ s/(è|é|ê|ë)/e/g; - $s =~ s/(ì|í|î|ï)/i/g; - $s =~ s/ð/d/g; - $s =~ s/ñ/n/g; - $s =~ s/(ò|ó|ô|õ|ö)/o/g; - $s =~ s/ß/ss/g; - $s =~ s/þ/th/g; - $s =~ s/(ù|ú|û|ü)/u/g; - $s =~ s/(ý|ÿ)/y/g; - } - # Latin Extended-A - if ($s =~ /[\xC4-\xC5][\x80-\xBF]/) { - $s =~ s/(Ā|Ă|Ą)/A/g; - $s =~ s/(ā|ă|ą)/a/g; - $s =~ s/(Ć|Ĉ|Ċ|Č)/C/g; - $s =~ s/(ć|ĉ|ċ|č)/c/g; - $s =~ s/(Ď|Đ)/D/g; - $s =~ s/(ď|đ)/d/g; - $s =~ s/(Ē|Ĕ|Ė|Ę|Ě)/E/g; - $s =~ s/(ē|ĕ|ė|ę|ě)/e/g; - $s =~ s/(Ĝ|Ğ|Ġ|Ģ)/G/g; - $s =~ s/(ĝ|ğ|ġ|ģ)/g/g; - $s =~ s/(Ĥ|Ħ)/H/g; - $s =~ s/(ĥ|ħ)/h/g; - $s =~ s/(Ĩ|Ī|Ĭ|Į|İ)/I/g; - $s =~ s/(ĩ|ī|ĭ|į|ı)/i/g; - $s =~ s/IJ/Ij/g; - $s =~ s/ij/ij/g; - $s =~ s/Ĵ/J/g; - $s =~ s/ĵ/j/g; - $s =~ s/Ķ/K/g; - $s =~ s/(ķ|ĸ)/k/g; - $s =~ s/(Ĺ|Ļ|Ľ|Ŀ|Ł)/L/g; - $s =~ s/(ļ|ľ|ŀ|ł)/l/g; - $s =~ s/(Ń|Ņ|Ň|Ŋ)/N/g; - $s =~ s/(ń|ņ|ň|ʼn|ŋ)/n/g; - $s =~ s/(Ō|Ŏ|Ő)/O/g; - $s =~ s/(ō|ŏ|ő)/o/g; - $s =~ s/Œ/Oe/g; - $s =~ s/œ/oe/g; - $s =~ s/(Ŕ|Ŗ|Ř)/R/g; - $s =~ s/(ŕ|ŗ|ř)/r/g; - $s =~ s/(Ś|Ŝ|Ş|Š)/S/g; - $s =~ s/(ś|ŝ|ş|š|ſ)/s/g; - $s =~ s/(Ţ|Ť|Ŧ)/T/g; - $s =~ s/(ţ|ť|ŧ)/t/g; - $s =~ s/(Ũ|Ū|Ŭ|Ů|Ű|Ų)/U/g; - $s =~ s/(ũ|ū|ŭ|ů|ű|ų)/u/g; - $s =~ s/Ŵ/W/g; - $s =~ s/ŵ/w/g; - $s =~ s/(Ŷ|Ÿ)/Y/g; - $s =~ s/ŷ/y/g; - $s =~ s/(Ź|Ż|Ž)/Z/g; - $s =~ s/(ź|ż|ž)/z/g; - } - # Latin Extended Additional - if ($s =~ /\xE1[\xB8-\xBF][\x80-\xBF]/) { - $s =~ s/(ḁ|ạ|ả|ấ|ầ|ẩ|ẫ|ậ|ắ|ằ|ẳ|ẵ|ặ|ẚ)/a/g; - $s =~ s/(ḃ|ḅ|ḇ)/b/g; - $s =~ s/(ḉ)/c/g; - $s =~ s/(ḋ|ḍ|ḏ|ḑ|ḓ)/d/g; - $s =~ s/(ḕ|ḗ|ḙ|ḛ|ḝ|ẹ|ẻ|ẽ|ế|ề|ể|ễ|ệ)/e/g; - $s =~ s/(ḟ)/f/g; - $s =~ s/(ḡ)/g/g; - $s =~ 
s/(ḣ|ḥ|ḧ|ḩ|ḫ)/h/g; - $s =~ s/(ḭ|ḯ|ỉ|ị)/i/g; - $s =~ s/(ḱ|ḳ|ḵ)/k/g; - $s =~ s/(ḷ|ḹ|ḻ|ḽ)/l/g; - $s =~ s/(ḿ|ṁ|ṃ)/m/g; - $s =~ s/(ṅ|ṇ|ṉ|ṋ)/m/g; - $s =~ s/(ọ|ỏ|ố|ồ|ổ|ỗ|ộ|ớ|ờ|ở|ỡ|ợ|ṍ|ṏ|ṑ|ṓ)/o/g; - $s =~ s/(ṕ|ṗ)/p/g; - $s =~ s/(ṙ|ṛ|ṝ|ṟ)/r/g; - $s =~ s/(ṡ|ṣ|ṥ|ṧ|ṩ|ẛ)/s/g; - $s =~ s/(ṫ|ṭ|ṯ|ṱ)/t/g; - $s =~ s/(ṳ|ṵ|ṷ|ṹ|ṻ|ụ|ủ|ứ|ừ|ử|ữ|ự)/u/g; - $s =~ s/(ṽ|ṿ)/v/g; - $s =~ s/(ẁ|ẃ|ẅ|ẇ|ẉ|ẘ)/w/g; - $s =~ s/(ẋ|ẍ)/x/g; - $s =~ s/(ẏ|ỳ|ỵ|ỷ|ỹ|ẙ)/y/g; - $s =~ s/(ẑ|ẓ|ẕ)/z/g; - $s =~ s/(Ḁ|Ạ|Ả|Ấ|Ầ|Ẩ|Ẫ|Ậ|Ắ|Ằ|Ẳ|Ẵ|Ặ)/A/g; - $s =~ s/(Ḃ|Ḅ|Ḇ)/B/g; - $s =~ s/(Ḉ)/C/g; - $s =~ s/(Ḋ|Ḍ|Ḏ|Ḑ|Ḓ)/D/g; - $s =~ s/(Ḕ|Ḗ|Ḙ|Ḛ|Ḝ|Ẹ|Ẻ|Ẽ|Ế|Ề|Ể|Ễ|Ệ)/E/g; - $s =~ s/(Ḟ)/F/g; - $s =~ s/(Ḡ)/G/g; - $s =~ s/(Ḣ|Ḥ|Ḧ|Ḩ|Ḫ)/H/g; - $s =~ s/(Ḭ|Ḯ|Ỉ|Ị)/I/g; - $s =~ s/(Ḱ|Ḳ|Ḵ)/K/g; - $s =~ s/(Ḷ|Ḹ|Ḻ|Ḽ)/L/g; - $s =~ s/(Ḿ|Ṁ|Ṃ)/M/g; - $s =~ s/(Ṅ|Ṇ|Ṉ|Ṋ)/N/g; - $s =~ s/(Ṍ|Ṏ|Ṑ|Ṓ|Ọ|Ỏ|Ố|Ồ|Ổ|Ỗ|Ộ|Ớ|Ờ|Ở|Ỡ|Ợ)/O/g; - $s =~ s/(Ṕ|Ṗ)/P/g; - $s =~ s/(Ṙ|Ṛ|Ṝ|Ṟ)/R/g; - $s =~ s/(Ṡ|Ṣ|Ṥ|Ṧ|Ṩ)/S/g; - $s =~ s/(Ṫ|Ṭ|Ṯ|Ṱ)/T/g; - $s =~ s/(Ṳ|Ṵ|Ṷ|Ṹ|Ṻ|Ụ|Ủ|Ứ|Ừ|Ử|Ữ|Ự)/U/g; - $s =~ s/(Ṽ|Ṿ)/V/g; - $s =~ s/(Ẁ|Ẃ|Ẅ|Ẇ|Ẉ)/W/g; - $s =~ s/(Ẍ)/X/g; - $s =~ s/(Ẏ|Ỳ|Ỵ|Ỷ|Ỹ)/Y/g; - $s =~ s/(Ẑ|Ẓ|Ẕ)/Z/g; - } - # Greek letters - if ($s =~ /\xCE[\x86-\xAB]/) { - $s =~ s/ά/α/g; - $s =~ s/έ/ε/g; - $s =~ s/ί/ι/g; - $s =~ s/ϊ/ι/g; - $s =~ s/ΐ/ι/g; - $s =~ s/ό/ο/g; - $s =~ s/ύ/υ/g; - $s =~ s/ϋ/υ/g; - $s =~ s/ΰ/υ/g; - $s =~ s/ώ/ω/g; - $s =~ s/Ά/Α/g; - $s =~ s/Έ/Ε/g; - $s =~ s/Ή/Η/g; - $s =~ s/Ί/Ι/g; - $s =~ s/Ϊ/Ι/g; - $s =~ s/Ύ/Υ/g; - $s =~ s/Ϋ/Υ/g; - $s =~ s/Ώ/Ω/g; - } - # Cyrillic letters - if ($s =~ /\xD0[\x80-\xAF]/) { - $s =~ s/Ѐ/Е/g; - $s =~ s/Ё/Е/g; - $s =~ s/Ѓ/Г/g; - $s =~ s/Ќ/К/g; - $s =~ s/Ѝ/И/g; - $s =~ s/Й/И/g; - $s =~ s/ѐ/е/g; - $s =~ s/ё/е/g; - $s =~ s/ѓ/г/g; - $s =~ s/ќ/к/g; - $s =~ s/ѝ/и/g; - $s =~ s/й/и/g; - } - } - return $s; -} - -while (@ARGV) { - $arg = shift @ARGV; - if ($arg =~ /^-*(h|help)$/i) { - &print_usage; - exit 1; - } elsif ($arg =~ /^-*(v|version)$/i) { - &print_version; - exit 1; - } else { - print STDERR "Ignoring unrecognized argument $arg\n"; - } -} - -$line_number = 0; -while (<>) { - $line_number++; - print &de_accent_string($_); -} -exit 0; - diff --git a/spaces/james-oldfield/PandA/networks/genforce/scripts/slurm_train.sh b/spaces/james-oldfield/PandA/networks/genforce/scripts/slurm_train.sh deleted file mode 100644 index 2027c4c341c09fbe76abeea6fb8702e8e4e748db..0000000000000000000000000000000000000000 --- a/spaces/james-oldfield/PandA/networks/genforce/scripts/slurm_train.sh +++ /dev/null @@ -1,28 +0,0 @@ -#!/bin/bash - -set -x - -PARTITION=$1 -JOB_NAME=$2 -CONFIG=$3 -WORK_DIR=$4 -GPUS=${GPUS:-8} -GPUS_PER_NODE=${GPUS_PER_NODE:-8} -CPUS_PER_NODE=${CPUS_PER_NODE:-8} - -SRUN_ARGS=${SRUN_ARGS:-""} -PY_ARGS=${@:5} - -PYTHONPATH="$(dirname $0)/..":$PYTHONPATH \ -srun -p ${PARTITION} \ - --job-name=${JOB_NAME} \ - --gres=gpu:${GPUS_PER_NODE} \ - --ntasks=${GPUS} \ - --ntasks-per-node=${GPUS_PER_NODE} \ - --cpus-per-task=${CPUS_PER_NODE} \ - --kill-on-bad-exit=1 \ - ${SRUN_ARGS} \ - python -u ./train.py ${CONFIG} \ - --work_dir=${WORK_DIR} \ - --launcher="slurm" \ - ${PY_ARGS} diff --git a/spaces/james-oldfield/PandA/networks/stylegan3/training/augment.py b/spaces/james-oldfield/PandA/networks/stylegan3/training/augment.py deleted file mode 100644 index d68e35c96ef9fa9c18bbb6668f03b9463098710e..0000000000000000000000000000000000000000 --- a/spaces/james-oldfield/PandA/networks/stylegan3/training/augment.py +++ /dev/null @@ -1,436 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. 
-# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Augmentation pipeline from the paper -"Training Generative Adversarial Networks with Limited Data". -Matches the original implementation by Karras et al. at -https://github.com/NVlabs/stylegan2-ada/blob/main/training/augment.py""" - -import numpy as np -import scipy.signal -import torch -from torch_utils import persistence -from torch_utils import misc -from torch_utils.ops import upfirdn2d -from torch_utils.ops import grid_sample_gradfix -from torch_utils.ops import conv2d_gradfix - -#---------------------------------------------------------------------------- -# Coefficients of various wavelet decomposition low-pass filters. - -wavelets = { - 'haar': [0.7071067811865476, 0.7071067811865476], - 'db1': [0.7071067811865476, 0.7071067811865476], - 'db2': [-0.12940952255092145, 0.22414386804185735, 0.836516303737469, 0.48296291314469025], - 'db3': [0.035226291882100656, -0.08544127388224149, -0.13501102001039084, 0.4598775021193313, 0.8068915093133388, 0.3326705529509569], - 'db4': [-0.010597401784997278, 0.032883011666982945, 0.030841381835986965, -0.18703481171888114, -0.02798376941698385, 0.6308807679295904, 0.7148465705525415, 0.23037781330885523], - 'db5': [0.003335725285001549, -0.012580751999015526, -0.006241490213011705, 0.07757149384006515, -0.03224486958502952, -0.24229488706619015, 0.13842814590110342, 0.7243085284385744, 0.6038292697974729, 0.160102397974125], - 'db6': [-0.00107730108499558, 0.004777257511010651, 0.0005538422009938016, -0.031582039318031156, 0.02752286553001629, 0.09750160558707936, -0.12976686756709563, -0.22626469396516913, 0.3152503517092432, 0.7511339080215775, 0.4946238903983854, 0.11154074335008017], - 'db7': [0.0003537138000010399, -0.0018016407039998328, 0.00042957797300470274, 0.012550998556013784, -0.01657454163101562, -0.03802993693503463, 0.0806126091510659, 0.07130921926705004, -0.22403618499416572, -0.14390600392910627, 0.4697822874053586, 0.7291320908465551, 0.39653931948230575, 0.07785205408506236], - 'db8': [-0.00011747678400228192, 0.0006754494059985568, -0.0003917403729959771, -0.00487035299301066, 0.008746094047015655, 0.013981027917015516, -0.04408825393106472, -0.01736930100202211, 0.128747426620186, 0.00047248457399797254, -0.2840155429624281, -0.015829105256023893, 0.5853546836548691, 0.6756307362980128, 0.3128715909144659, 0.05441584224308161], - 'sym2': [-0.12940952255092145, 0.22414386804185735, 0.836516303737469, 0.48296291314469025], - 'sym3': [0.035226291882100656, -0.08544127388224149, -0.13501102001039084, 0.4598775021193313, 0.8068915093133388, 0.3326705529509569], - 'sym4': [-0.07576571478927333, -0.02963552764599851, 0.49761866763201545, 0.8037387518059161, 0.29785779560527736, -0.09921954357684722, -0.012603967262037833, 0.0322231006040427], - 'sym5': [0.027333068345077982, 0.029519490925774643, -0.039134249302383094, 0.1993975339773936, 0.7234076904024206, 0.6339789634582119, 0.01660210576452232, -0.17532808990845047, -0.021101834024758855, 0.019538882735286728], - 'sym6': [0.015404109327027373, 0.0034907120842174702, -0.11799011114819057, -0.048311742585633, 0.4910559419267466, 0.787641141030194, 0.3379294217276218, -0.07263752278646252, -0.021060292512300564, 
0.04472490177066578, 0.0017677118642428036, -0.007800708325034148], - 'sym7': [0.002681814568257878, -0.0010473848886829163, -0.01263630340325193, 0.03051551316596357, 0.0678926935013727, -0.049552834937127255, 0.017441255086855827, 0.5361019170917628, 0.767764317003164, 0.2886296317515146, -0.14004724044296152, -0.10780823770381774, 0.004010244871533663, 0.010268176708511255], - 'sym8': [-0.0033824159510061256, -0.0005421323317911481, 0.03169508781149298, 0.007607487324917605, -0.1432942383508097, -0.061273359067658524, 0.4813596512583722, 0.7771857517005235, 0.3644418948353314, -0.05194583810770904, -0.027219029917056003, 0.049137179673607506, 0.003808752013890615, -0.01495225833704823, -0.0003029205147213668, 0.0018899503327594609], -} - -#---------------------------------------------------------------------------- -# Helpers for constructing transformation matrices. - -def matrix(*rows, device=None): - assert all(len(row) == len(rows[0]) for row in rows) - elems = [x for row in rows for x in row] - ref = [x for x in elems if isinstance(x, torch.Tensor)] - if len(ref) == 0: - return misc.constant(np.asarray(rows), device=device) - assert device is None or device == ref[0].device - elems = [x if isinstance(x, torch.Tensor) else misc.constant(x, shape=ref[0].shape, device=ref[0].device) for x in elems] - return torch.stack(elems, dim=-1).reshape(ref[0].shape + (len(rows), -1)) - -def translate2d(tx, ty, **kwargs): - return matrix( - [1, 0, tx], - [0, 1, ty], - [0, 0, 1], - **kwargs) - -def translate3d(tx, ty, tz, **kwargs): - return matrix( - [1, 0, 0, tx], - [0, 1, 0, ty], - [0, 0, 1, tz], - [0, 0, 0, 1], - **kwargs) - -def scale2d(sx, sy, **kwargs): - return matrix( - [sx, 0, 0], - [0, sy, 0], - [0, 0, 1], - **kwargs) - -def scale3d(sx, sy, sz, **kwargs): - return matrix( - [sx, 0, 0, 0], - [0, sy, 0, 0], - [0, 0, sz, 0], - [0, 0, 0, 1], - **kwargs) - -def rotate2d(theta, **kwargs): - return matrix( - [torch.cos(theta), torch.sin(-theta), 0], - [torch.sin(theta), torch.cos(theta), 0], - [0, 0, 1], - **kwargs) - -def rotate3d(v, theta, **kwargs): - vx = v[..., 0]; vy = v[..., 1]; vz = v[..., 2] - s = torch.sin(theta); c = torch.cos(theta); cc = 1 - c - return matrix( - [vx*vx*cc+c, vx*vy*cc-vz*s, vx*vz*cc+vy*s, 0], - [vy*vx*cc+vz*s, vy*vy*cc+c, vy*vz*cc-vx*s, 0], - [vz*vx*cc-vy*s, vz*vy*cc+vx*s, vz*vz*cc+c, 0], - [0, 0, 0, 1], - **kwargs) - -def translate2d_inv(tx, ty, **kwargs): - return translate2d(-tx, -ty, **kwargs) - -def scale2d_inv(sx, sy, **kwargs): - return scale2d(1 / sx, 1 / sy, **kwargs) - -def rotate2d_inv(theta, **kwargs): - return rotate2d(-theta, **kwargs) - -#---------------------------------------------------------------------------- -# Versatile image augmentation pipeline from the paper -# "Training Generative Adversarial Networks with Limited Data". -# -# All augmentations are disabled by default; individual augmentations can -# be enabled by setting their probability multipliers to 1. 
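# --- Illustrative usage sketch (editorial addition, not part of the original augment.py in this diff). ---
# Assuming this module's dependencies (torch plus the bundled torch_utils ops) are importable,
# the sketch below enables every augmentation group by setting its probability multiplier to 1
# and then sets the overall strength buffer `p`; in ADA-style training `p` is usually adjusted
# adaptively rather than fixed. Batch shape and values here are arbitrary examples.
import torch
pipe = AugmentPipe(xflip=1, rotate90=1, xint=1, scale=1, rotate=1, aniso=1, xfrac=1,
                   brightness=1, contrast=1, lumaflip=1, hue=1, saturation=1)
pipe.p.copy_(torch.as_tensor(0.6))   # overall probability multiplier applied to all groups
images = torch.randn(4, 3, 64, 64)   # NCHW batch, values roughly in [-1, 1]
augmented = pipe(images)             # returns a tensor with the same shape as the input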
- -@persistence.persistent_class -class AugmentPipe(torch.nn.Module): - def __init__(self, - xflip=0, rotate90=0, xint=0, xint_max=0.125, - scale=0, rotate=0, aniso=0, xfrac=0, scale_std=0.2, rotate_max=1, aniso_std=0.2, xfrac_std=0.125, - brightness=0, contrast=0, lumaflip=0, hue=0, saturation=0, brightness_std=0.2, contrast_std=0.5, hue_max=1, saturation_std=1, - imgfilter=0, imgfilter_bands=[1,1,1,1], imgfilter_std=1, - noise=0, cutout=0, noise_std=0.1, cutout_size=0.5, - ): - super().__init__() - self.register_buffer('p', torch.ones([])) # Overall multiplier for augmentation probability. - - # Pixel blitting. - self.xflip = float(xflip) # Probability multiplier for x-flip. - self.rotate90 = float(rotate90) # Probability multiplier for 90 degree rotations. - self.xint = float(xint) # Probability multiplier for integer translation. - self.xint_max = float(xint_max) # Range of integer translation, relative to image dimensions. - - # General geometric transformations. - self.scale = float(scale) # Probability multiplier for isotropic scaling. - self.rotate = float(rotate) # Probability multiplier for arbitrary rotation. - self.aniso = float(aniso) # Probability multiplier for anisotropic scaling. - self.xfrac = float(xfrac) # Probability multiplier for fractional translation. - self.scale_std = float(scale_std) # Log2 standard deviation of isotropic scaling. - self.rotate_max = float(rotate_max) # Range of arbitrary rotation, 1 = full circle. - self.aniso_std = float(aniso_std) # Log2 standard deviation of anisotropic scaling. - self.xfrac_std = float(xfrac_std) # Standard deviation of frational translation, relative to image dimensions. - - # Color transformations. - self.brightness = float(brightness) # Probability multiplier for brightness. - self.contrast = float(contrast) # Probability multiplier for contrast. - self.lumaflip = float(lumaflip) # Probability multiplier for luma flip. - self.hue = float(hue) # Probability multiplier for hue rotation. - self.saturation = float(saturation) # Probability multiplier for saturation. - self.brightness_std = float(brightness_std) # Standard deviation of brightness. - self.contrast_std = float(contrast_std) # Log2 standard deviation of contrast. - self.hue_max = float(hue_max) # Range of hue rotation, 1 = full circle. - self.saturation_std = float(saturation_std) # Log2 standard deviation of saturation. - - # Image-space filtering. - self.imgfilter = float(imgfilter) # Probability multiplier for image-space filtering. - self.imgfilter_bands = list(imgfilter_bands) # Probability multipliers for individual frequency bands. - self.imgfilter_std = float(imgfilter_std) # Log2 standard deviation of image-space filter amplification. - - # Image-space corruptions. - self.noise = float(noise) # Probability multiplier for additive RGB noise. - self.cutout = float(cutout) # Probability multiplier for cutout. - self.noise_std = float(noise_std) # Standard deviation of additive RGB noise. - self.cutout_size = float(cutout_size) # Size of the cutout rectangle, relative to image dimensions. - - # Setup orthogonal lowpass filter for geometric augmentations. - self.register_buffer('Hz_geom', upfirdn2d.setup_filter(wavelets['sym6'])) - - # Construct filter bank for image-space filtering. 
- Hz_lo = np.asarray(wavelets['sym2']) # H(z) - Hz_hi = Hz_lo * ((-1) ** np.arange(Hz_lo.size)) # H(-z) - Hz_lo2 = np.convolve(Hz_lo, Hz_lo[::-1]) / 2 # H(z) * H(z^-1) / 2 - Hz_hi2 = np.convolve(Hz_hi, Hz_hi[::-1]) / 2 # H(-z) * H(-z^-1) / 2 - Hz_fbank = np.eye(4, 1) # Bandpass(H(z), b_i) - for i in range(1, Hz_fbank.shape[0]): - Hz_fbank = np.dstack([Hz_fbank, np.zeros_like(Hz_fbank)]).reshape(Hz_fbank.shape[0], -1)[:, :-1] - Hz_fbank = scipy.signal.convolve(Hz_fbank, [Hz_lo2]) - Hz_fbank[i, (Hz_fbank.shape[1] - Hz_hi2.size) // 2 : (Hz_fbank.shape[1] + Hz_hi2.size) // 2] += Hz_hi2 - self.register_buffer('Hz_fbank', torch.as_tensor(Hz_fbank, dtype=torch.float32)) - - def forward(self, images, debug_percentile=None): - assert isinstance(images, torch.Tensor) and images.ndim == 4 - batch_size, num_channels, height, width = images.shape - device = images.device - if debug_percentile is not None: - debug_percentile = torch.as_tensor(debug_percentile, dtype=torch.float32, device=device) - - # ------------------------------------- - # Select parameters for pixel blitting. - # ------------------------------------- - - # Initialize inverse homogeneous 2D transform: G_inv @ pixel_out ==> pixel_in - I_3 = torch.eye(3, device=device) - G_inv = I_3 - - # Apply x-flip with probability (xflip * strength). - if self.xflip > 0: - i = torch.floor(torch.rand([batch_size], device=device) * 2) - i = torch.where(torch.rand([batch_size], device=device) < self.xflip * self.p, i, torch.zeros_like(i)) - if debug_percentile is not None: - i = torch.full_like(i, torch.floor(debug_percentile * 2)) - G_inv = G_inv @ scale2d_inv(1 - 2 * i, 1) - - # Apply 90 degree rotations with probability (rotate90 * strength). - if self.rotate90 > 0: - i = torch.floor(torch.rand([batch_size], device=device) * 4) - i = torch.where(torch.rand([batch_size], device=device) < self.rotate90 * self.p, i, torch.zeros_like(i)) - if debug_percentile is not None: - i = torch.full_like(i, torch.floor(debug_percentile * 4)) - G_inv = G_inv @ rotate2d_inv(-np.pi / 2 * i) - - # Apply integer translation with probability (xint * strength). - if self.xint > 0: - t = (torch.rand([batch_size, 2], device=device) * 2 - 1) * self.xint_max - t = torch.where(torch.rand([batch_size, 1], device=device) < self.xint * self.p, t, torch.zeros_like(t)) - if debug_percentile is not None: - t = torch.full_like(t, (debug_percentile * 2 - 1) * self.xint_max) - G_inv = G_inv @ translate2d_inv(torch.round(t[:,0] * width), torch.round(t[:,1] * height)) - - # -------------------------------------------------------- - # Select parameters for general geometric transformations. - # -------------------------------------------------------- - - # Apply isotropic scaling with probability (scale * strength). - if self.scale > 0: - s = torch.exp2(torch.randn([batch_size], device=device) * self.scale_std) - s = torch.where(torch.rand([batch_size], device=device) < self.scale * self.p, s, torch.ones_like(s)) - if debug_percentile is not None: - s = torch.full_like(s, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.scale_std)) - G_inv = G_inv @ scale2d_inv(s, s) - - # Apply pre-rotation with probability p_rot. 
- p_rot = 1 - torch.sqrt((1 - self.rotate * self.p).clamp(0, 1)) # P(pre OR post) = p - if self.rotate > 0: - theta = (torch.rand([batch_size], device=device) * 2 - 1) * np.pi * self.rotate_max - theta = torch.where(torch.rand([batch_size], device=device) < p_rot, theta, torch.zeros_like(theta)) - if debug_percentile is not None: - theta = torch.full_like(theta, (debug_percentile * 2 - 1) * np.pi * self.rotate_max) - G_inv = G_inv @ rotate2d_inv(-theta) # Before anisotropic scaling. - - # Apply anisotropic scaling with probability (aniso * strength). - if self.aniso > 0: - s = torch.exp2(torch.randn([batch_size], device=device) * self.aniso_std) - s = torch.where(torch.rand([batch_size], device=device) < self.aniso * self.p, s, torch.ones_like(s)) - if debug_percentile is not None: - s = torch.full_like(s, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.aniso_std)) - G_inv = G_inv @ scale2d_inv(s, 1 / s) - - # Apply post-rotation with probability p_rot. - if self.rotate > 0: - theta = (torch.rand([batch_size], device=device) * 2 - 1) * np.pi * self.rotate_max - theta = torch.where(torch.rand([batch_size], device=device) < p_rot, theta, torch.zeros_like(theta)) - if debug_percentile is not None: - theta = torch.zeros_like(theta) - G_inv = G_inv @ rotate2d_inv(-theta) # After anisotropic scaling. - - # Apply fractional translation with probability (xfrac * strength). - if self.xfrac > 0: - t = torch.randn([batch_size, 2], device=device) * self.xfrac_std - t = torch.where(torch.rand([batch_size, 1], device=device) < self.xfrac * self.p, t, torch.zeros_like(t)) - if debug_percentile is not None: - t = torch.full_like(t, torch.erfinv(debug_percentile * 2 - 1) * self.xfrac_std) - G_inv = G_inv @ translate2d_inv(t[:,0] * width, t[:,1] * height) - - # ---------------------------------- - # Execute geometric transformations. - # ---------------------------------- - - # Execute if the transform is not identity. - if G_inv is not I_3: - - # Calculate padding. - cx = (width - 1) / 2 - cy = (height - 1) / 2 - cp = matrix([-cx, -cy, 1], [cx, -cy, 1], [cx, cy, 1], [-cx, cy, 1], device=device) # [idx, xyz] - cp = G_inv @ cp.t() # [batch, xyz, idx] - Hz_pad = self.Hz_geom.shape[0] // 4 - margin = cp[:, :2, :].permute(1, 0, 2).flatten(1) # [xy, batch * idx] - margin = torch.cat([-margin, margin]).max(dim=1).values # [x0, y0, x1, y1] - margin = margin + misc.constant([Hz_pad * 2 - cx, Hz_pad * 2 - cy] * 2, device=device) - margin = margin.max(misc.constant([0, 0] * 2, device=device)) - margin = margin.min(misc.constant([width-1, height-1] * 2, device=device)) - mx0, my0, mx1, my1 = margin.ceil().to(torch.int32) - - # Pad image and adjust origin. - images = torch.nn.functional.pad(input=images, pad=[mx0,mx1,my0,my1], mode='reflect') - G_inv = translate2d((mx0 - mx1) / 2, (my0 - my1) / 2) @ G_inv - - # Upsample. - images = upfirdn2d.upsample2d(x=images, f=self.Hz_geom, up=2) - G_inv = scale2d(2, 2, device=device) @ G_inv @ scale2d_inv(2, 2, device=device) - G_inv = translate2d(-0.5, -0.5, device=device) @ G_inv @ translate2d_inv(-0.5, -0.5, device=device) - - # Execute transformation. - shape = [batch_size, num_channels, (height + Hz_pad * 2) * 2, (width + Hz_pad * 2) * 2] - G_inv = scale2d(2 / images.shape[3], 2 / images.shape[2], device=device) @ G_inv @ scale2d_inv(2 / shape[3], 2 / shape[2], device=device) - grid = torch.nn.functional.affine_grid(theta=G_inv[:,:2,:], size=shape, align_corners=False) - images = grid_sample_gradfix.grid_sample(images, grid) - - # Downsample and crop. 
- images = upfirdn2d.downsample2d(x=images, f=self.Hz_geom, down=2, padding=-Hz_pad*2, flip_filter=True) - - # -------------------------------------------- - # Select parameters for color transformations. - # -------------------------------------------- - - # Initialize homogeneous 3D transformation matrix: C @ color_in ==> color_out - I_4 = torch.eye(4, device=device) - C = I_4 - - # Apply brightness with probability (brightness * strength). - if self.brightness > 0: - b = torch.randn([batch_size], device=device) * self.brightness_std - b = torch.where(torch.rand([batch_size], device=device) < self.brightness * self.p, b, torch.zeros_like(b)) - if debug_percentile is not None: - b = torch.full_like(b, torch.erfinv(debug_percentile * 2 - 1) * self.brightness_std) - C = translate3d(b, b, b) @ C - - # Apply contrast with probability (contrast * strength). - if self.contrast > 0: - c = torch.exp2(torch.randn([batch_size], device=device) * self.contrast_std) - c = torch.where(torch.rand([batch_size], device=device) < self.contrast * self.p, c, torch.ones_like(c)) - if debug_percentile is not None: - c = torch.full_like(c, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.contrast_std)) - C = scale3d(c, c, c) @ C - - # Apply luma flip with probability (lumaflip * strength). - v = misc.constant(np.asarray([1, 1, 1, 0]) / np.sqrt(3), device=device) # Luma axis. - if self.lumaflip > 0: - i = torch.floor(torch.rand([batch_size, 1, 1], device=device) * 2) - i = torch.where(torch.rand([batch_size, 1, 1], device=device) < self.lumaflip * self.p, i, torch.zeros_like(i)) - if debug_percentile is not None: - i = torch.full_like(i, torch.floor(debug_percentile * 2)) - C = (I_4 - 2 * v.ger(v) * i) @ C # Householder reflection. - - # Apply hue rotation with probability (hue * strength). - if self.hue > 0 and num_channels > 1: - theta = (torch.rand([batch_size], device=device) * 2 - 1) * np.pi * self.hue_max - theta = torch.where(torch.rand([batch_size], device=device) < self.hue * self.p, theta, torch.zeros_like(theta)) - if debug_percentile is not None: - theta = torch.full_like(theta, (debug_percentile * 2 - 1) * np.pi * self.hue_max) - C = rotate3d(v, theta) @ C # Rotate around v. - - # Apply saturation with probability (saturation * strength). - if self.saturation > 0 and num_channels > 1: - s = torch.exp2(torch.randn([batch_size, 1, 1], device=device) * self.saturation_std) - s = torch.where(torch.rand([batch_size, 1, 1], device=device) < self.saturation * self.p, s, torch.ones_like(s)) - if debug_percentile is not None: - s = torch.full_like(s, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.saturation_std)) - C = (v.ger(v) + (I_4 - v.ger(v)) * s) @ C - - # ------------------------------ - # Execute color transformations. - # ------------------------------ - - # Execute if the transform is not identity. - if C is not I_4: - images = images.reshape([batch_size, num_channels, height * width]) - if num_channels == 3: - images = C[:, :3, :3] @ images + C[:, :3, 3:] - elif num_channels == 1: - C = C[:, :3, :].mean(dim=1, keepdims=True) - images = images * C[:, :, :3].sum(dim=2, keepdims=True) + C[:, :, 3:] - else: - raise ValueError('Image must be RGB (3 channels) or L (1 channel)') - images = images.reshape([batch_size, num_channels, height, width]) - - # ---------------------- - # Image-space filtering. 
- # ---------------------- - - if self.imgfilter > 0: - num_bands = self.Hz_fbank.shape[0] - assert len(self.imgfilter_bands) == num_bands - expected_power = misc.constant(np.array([10, 1, 1, 1]) / 13, device=device) # Expected power spectrum (1/f). - - # Apply amplification for each band with probability (imgfilter * strength * band_strength). - g = torch.ones([batch_size, num_bands], device=device) # Global gain vector (identity). - for i, band_strength in enumerate(self.imgfilter_bands): - t_i = torch.exp2(torch.randn([batch_size], device=device) * self.imgfilter_std) - t_i = torch.where(torch.rand([batch_size], device=device) < self.imgfilter * self.p * band_strength, t_i, torch.ones_like(t_i)) - if debug_percentile is not None: - t_i = torch.full_like(t_i, torch.exp2(torch.erfinv(debug_percentile * 2 - 1) * self.imgfilter_std)) if band_strength > 0 else torch.ones_like(t_i) - t = torch.ones([batch_size, num_bands], device=device) # Temporary gain vector. - t[:, i] = t_i # Replace i'th element. - t = t / (expected_power * t.square()).sum(dim=-1, keepdims=True).sqrt() # Normalize power. - g = g * t # Accumulate into global gain. - - # Construct combined amplification filter. - Hz_prime = g @ self.Hz_fbank # [batch, tap] - Hz_prime = Hz_prime.unsqueeze(1).repeat([1, num_channels, 1]) # [batch, channels, tap] - Hz_prime = Hz_prime.reshape([batch_size * num_channels, 1, -1]) # [batch * channels, 1, tap] - - # Apply filter. - p = self.Hz_fbank.shape[1] // 2 - images = images.reshape([1, batch_size * num_channels, height, width]) - images = torch.nn.functional.pad(input=images, pad=[p,p,p,p], mode='reflect') - images = conv2d_gradfix.conv2d(input=images, weight=Hz_prime.unsqueeze(2), groups=batch_size*num_channels) - images = conv2d_gradfix.conv2d(input=images, weight=Hz_prime.unsqueeze(3), groups=batch_size*num_channels) - images = images.reshape([batch_size, num_channels, height, width]) - - # ------------------------ - # Image-space corruptions. - # ------------------------ - - # Apply additive RGB noise with probability (noise * strength). - if self.noise > 0: - sigma = torch.randn([batch_size, 1, 1, 1], device=device).abs() * self.noise_std - sigma = torch.where(torch.rand([batch_size, 1, 1, 1], device=device) < self.noise * self.p, sigma, torch.zeros_like(sigma)) - if debug_percentile is not None: - sigma = torch.full_like(sigma, torch.erfinv(debug_percentile) * self.noise_std) - images = images + torch.randn([batch_size, num_channels, height, width], device=device) * sigma - - # Apply cutout with probability (cutout * strength). 
- if self.cutout > 0: - size = torch.full([batch_size, 2, 1, 1, 1], self.cutout_size, device=device) - size = torch.where(torch.rand([batch_size, 1, 1, 1, 1], device=device) < self.cutout * self.p, size, torch.zeros_like(size)) - center = torch.rand([batch_size, 2, 1, 1, 1], device=device) - if debug_percentile is not None: - size = torch.full_like(size, self.cutout_size) - center = torch.full_like(center, debug_percentile) - coord_x = torch.arange(width, device=device).reshape([1, 1, 1, -1]) - coord_y = torch.arange(height, device=device).reshape([1, 1, -1, 1]) - mask_x = (((coord_x + 0.5) / width - center[:, 0]).abs() >= size[:, 0] / 2) - mask_y = (((coord_y + 0.5) / height - center[:, 1]).abs() >= size[:, 1] / 2) - mask = torch.logical_or(mask_x, mask_y).to(torch.float32) - images = images * mask - - return images - -#---------------------------------------------------------------------------- diff --git a/spaces/jarvisbot/ChatImprovement/crazy_functions/test_project/latex/attention/background.tex b/spaces/jarvisbot/ChatImprovement/crazy_functions/test_project/latex/attention/background.tex deleted file mode 100644 index 785069dc0f9143bad24e640056dd1072d5c6e5b5..0000000000000000000000000000000000000000 --- a/spaces/jarvisbot/ChatImprovement/crazy_functions/test_project/latex/attention/background.tex +++ /dev/null @@ -1,58 +0,0 @@ -The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU \citep{extendedngpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows in the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions \citep{hochreiter2001gradient}. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in section~\ref{sec:attention}. - -Self-attention, sometimes called intra-attention is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations \citep{cheng2016long, decomposableAttnModel, paulus2017deep, lin2017structured}. - -End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks \citep{sukhbaatar2015}. - -To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution. -In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as \citep{neural_gpu, NalBytenet2017} and \citep{JonasFaceNet2017}. 
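As a concrete illustration of the constant number of sequential operations discussed above, here is a minimal, self-contained NumPy sketch of single-head scaled dot-product self-attention. It is not taken from the paper or its code release; the function name, weight matrices and shapes are chosen purely for illustration. One matrix product of the (n, n) attention weights with the values connects every pair of positions at once, so the number of sequential operations does not grow with the sequence length n.

```python
import numpy as np

def scaled_dot_product_self_attention(x, w_q, w_k, w_v):
    """Single-head self-attention over a sequence x of shape (n, d_model).

    Every output position attends to every input position through a constant
    number of matrix products, unlike the O(n) sequential steps of an RNN.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # (n, d_k), (n, d_k), (n, d_v)
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (n, n) pairwise compatibilities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v                               # (n, d_v)

# Toy example: sequence length 5, model width 8.
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 8))
w_q, w_k, w_v = (rng.standard_normal((8, 8)) for _ in range(3))
print(scaled_dot_product_self_attention(x, w_q, w_k, w_v).shape)  # (5, 8)
```

The price is O(n^2 * d) total work for the pairwise scores, which is the trade-off the complexity comparison in this section refers to.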
- - -%\citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), Enlish-to-French (EnFr) and English-to-Romanian language pairs. - -%For example,! in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at low computation cost, making it an essential ingredient in competitive recurrent models for machine translation. - -%A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. In this work, show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture. - -%After the seminal models introduced in \citep{sutskever14, bahdanau2014neural, cho2014learning}, recurrent models have become the dominant solution for both sequence modeling and sequence-to-sequence transduction. Many efforts such as \citep{wu2016google,luong2015effective,jozefowicz2016exploring} have pushed the boundaries of machine translation (MT) and language modeling with recurrent endoder-decoder and recurrent language models. Recent effort \citep{shazeer2017outrageously} has successfully combined the power of conditional computation with sequence models to train very large models for MT, pushing SOTA at lower computational cost. - -%Recurrent models compute a vector of hidden states $h_t$, for each time step $t$ of computation. $h_t$ is a function of both the input at time $t$ and the previous hidden state $h_t$. This dependence on the previous hidden state precludes processing all timesteps at once, instead requiring long sequences of sequential operations. In practice, this results in greatly reduced computational efficiency, as on modern computing hardware, a single operation on a large batch is much faster than a large number of operations on small batches. The problem gets worse at longer sequence lengths. Although sequential computation is not a severe bottleneck at inference time, as autoregressively generating each output requires all previous outputs, the inability to compute scores at all output positions at once hinders us from rapidly training our models over large datasets. Although impressive work such as \citep{Kuchaiev2017Factorization} is able to significantly accelerate the training of LSTMs with factorization tricks, we are still bound by the linear dependence on sequence length. - -%If the model could compute hidden states at each time step using only the inputs and outputs, it would be liberated from the dependence on results from previous time steps during training. This line of thought is the foundation of recent efforts such as the Markovian neural GPU \citep{neural_gpu}, ByteNet \citep{NalBytenet2017} and ConvS2S \citep{JonasFaceNet2017}, all of which use convolutional neural networks as a building block to compute hidden representations simultaneously for all timesteps, resulting in $O(1)$ sequential time complexity. \citep{JonasFaceNet2017} report new SOTA on machine translation for English-to-German (EnDe), Enlish-to-French (EnFr) and English-to-Romanian language pairs. 
- -%A crucial component for accurate sequence prediction is modeling cross-positional communication. For example, in MT, we must draw information from both input and previous output words to translate an output word accurately. An attention layer \citep{bahdanau2014neural} can connect a very large number of positions at a low computation cost, also $O(1)$ sequential time complexity, making it an essential ingredient in recurrent encoder-decoder architectures for MT. A natural question to ask then is, "Could we replace recurrence with attention?". \marginpar{Don't know if it's the most natural question to ask given the previous statements. Also, need to say that the complexity table summarizes these statements} Such a model would be blessed with the computational efficiency of attention and the power of cross-positional communication. In this work, show that pure attention models work remarkably well for MT, achieving new SOTA results on EnDe and EnFr, and can be trained in under $2$ days on xyz architecture. - - - -%Note: Facebook model is no better than RNNs in this regard, since it requires a number of layers proportional to the distance you want to communicate. Bytenet is more promising, since it requires a logarithmnic number of layers (does bytenet have SOTA results)? - -%Note: An attention layer can connect a very large number of positions at a low computation cost in O(1) sequential operations. This is why encoder-decoder attention has been so successful in seq-to-seq models so far. It is only natural, then, to also use attention to connect the timesteps of the same sequence. - -%Note: I wouldn't say that long sequences are not a problem during inference. It would be great if we could infer with no long sequences. We could just say later on that, while our training graph is constant-depth, our model still requires sequential operations in the decoder part during inference due to the autoregressive nature of the model. - -%\begin{table}[h!] -%\caption{Attention models are quite efficient for cross-positional communications when sequence length is smaller than channel depth. 
$n$ represents the sequence length and $d$ represents the channel depth.} -%\label{tab:op_complexities} -%\begin{center} -%\vspace{-5pt} -%\scalebox{0.75}{ - -%\begin{tabular}{l|c|c|c} -%\hline \hline -%Layer Type & Receptive & Complexity & Sequential \\ -% & Field & & Operations \\ -%\hline -%Pointwise Feed-Forward & $1$ & $O(n \cdot d^2)$ & $O(1)$ \\ -%\hline -%Recurrent & $n$ & $O(n \cdot d^2)$ & $O(n)$ \\ -%\hline -%Convolutional & $r$ & $O(r \cdot n \cdot d^2)$ & $O(1)$ \\ -%\hline -%Convolutional (separable) & $r$ & $O(r \cdot n \cdot d + n %\cdot d^2)$ & $O(1)$ \\ -%\hline -%Attention & $r$ & $O(r \cdot n \cdot d)$ & $O(1)$ \\ -%\hline \hline -%\end{tabular} -%} -%\end{center} -%\end{table} \ No newline at end of file diff --git a/spaces/jayparmr/CyberRealistic/README.md b/spaces/jayparmr/CyberRealistic/README.md deleted file mode 100644 index 828a3f120131763f9cf2b4252061bac615539c1c..0000000000000000000000000000000000000000 --- a/spaces/jayparmr/CyberRealistic/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: CyberRealistic -emoji: 🌍 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 3.28.3 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/jbilcke-hf/LifeSim/src/components/ui/textarea.tsx b/spaces/jbilcke-hf/LifeSim/src/components/ui/textarea.tsx deleted file mode 100644 index af10d34eeae448c2614c67141f83a8748754332c..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/LifeSim/src/components/ui/textarea.tsx +++ /dev/null @@ -1,24 +0,0 @@ -import * as React from "react" - -import { cn } from "@/lib/utils" - -export interface TextareaProps - extends React.TextareaHTMLAttributes {} - -const Textarea = React.forwardRef( - ({ className, ...props }, ref) => { - return ( -