diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download and Install QuickBooks Desktop Pro 2021 with These Easy Steps.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download and Install QuickBooks Desktop Pro 2021 with These Easy Steps.md deleted file mode 100644 index 27644960c7fb56576af91eb57ee3f40f30597641..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download and Install QuickBooks Desktop Pro 2021 with These Easy Steps.md +++ /dev/null @@ -1,59 +0,0 @@ - -

How to Download and Install QuickBooks Desktop Pro 2021

-

QuickBooks Desktop Pro 2021 is the latest version of the popular accounting software for small and medium-sized businesses. It offers new features and improvements that can help you manage your finances more efficiently and effectively. In this article, we will show you how to download and install QuickBooks Desktop Pro 2021 on your computer.

-

2021 quickbooks desktop pro download


Download Zip » https://byltly.com/2uKvfV



-

Step 1: Download QuickBooks Desktop Pro 2021

-

To download QuickBooks Desktop Pro 2021, you need to have a valid license or subscription from Intuit. You can purchase one from their official website or from a trusted reseller. Once you have your license or subscription, you can follow these steps to download the software:

- -

Step 2: Install QuickBooks Desktop Pro 2021

-

After downloading the file, you can install QuickBooks Desktop Pro 2021 by following these steps:

- -

Congratulations! You have successfully downloaded and installed QuickBooks Desktop Pro 2021 on your computer.

-

You can now start using the software to manage your business finances. If you need any help or support, you can visit the official website of Intuit or contact their customer service team. You can also check out their online community forums and tutorials for more tips and tricks on how to use QuickBooks Desktop Pro 2021.

- -

What's New in QuickBooks Desktop Pro 2021?

-

QuickBooks Desktop Pro 2021 comes with several new features and enhancements that can make your accounting tasks easier and faster. Some of the highlights include:

- -

How to Upgrade to QuickBooks Desktop Pro 2021?

-

If you are already using an older version of QuickBooks Desktop Pro, you can easily upgrade to QuickBooks Desktop Pro 2021 without losing any of your data or settings. You just need to follow these steps:

-

- -

How to Get Started with QuickBooks Desktop Pro 2021?

-

If you are new to QuickBooks Desktop Pro, you can get started with QuickBooks Desktop Pro 2021 by following these steps:

-

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Forza Horizon 5 Download Ios.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Forza Horizon 5 Download Ios.md deleted file mode 100644 index 35b44a619d9df4bc81a0aa4e14e62a7f72380876..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Forza Horizon 5 Download Ios.md +++ /dev/null @@ -1,31 +0,0 @@ - -

How to Download and Play Forza Horizon 5 on iOS Devices

-

Forza Horizon 5 is the latest installment of the popular racing game series developed by Playground Games and published by Microsoft. The game is set in Mexico, where you can explore a diverse and stunning open world with hundreds of cars to choose from. You can race, drift, stunt, and customize your vehicles as you compete in various events and challenges.

-

forza horizon 5 download ios


Download Zip ✒ ✒ ✒ https://byltly.com/2uKA7b



-

If you are an iOS user, you might be wondering if you can play Forza Horizon 5 on your iPhone or iPad. The good news is that you can, thanks to a mobile version of the game that is available on the App Store. The mobile version of Forza Horizon 5 offers the same gameplay and graphics as the console and PC versions, but with some optimizations and adjustments for touch controls and smaller screens.

-

In this article, we will show you how to download and play Forza Horizon 5 on your iOS devices in a few simple steps.

-

Step 1: Go to the App Store

-

The first step is to go to the App Store on your iOS device and search for Forza Horizon 5. You can also use this link to access the game page directly. You will see a screen with some information and screenshots of the game, as well as a download button.

-

Step 2: Download the game

-

The next step is to tap on the download button and wait for the game to be installed on your device. The game size is about 345 MB, so make sure you have enough space and a stable internet connection. You might also need to enter your Apple ID and password to confirm the download.

-

Step 3: Launch the game

-

Once the download is complete, you can launch the game from your home screen or app library. You will see a splash screen with the Forza Horizon 5 logo and some loading animations. The game might take some time to load depending on your device performance and network speed.

-

-

Step 4: Enjoy the game

-

After the game loads, you will see a main menu with some options to start playing. You can choose between solo or online modes, customize your profile and settings, view your achievements and leaderboards, and more. You can also access a tutorial that will teach you the basics of the game controls and mechanics.

-

To play the game, you will need to use touch gestures on your screen to steer, accelerate, brake, drift, and activate special features. You can also tilt your device to use motion controls if you prefer. The game will adapt to your skill level and preferences as you progress through the game.

-

Conclusion

-

In this article, we have shown you how to download and play Forza Horizon 5 on your iOS devices. We hope this guide was helpful and easy to follow. Now you can enjoy one of the best racing games ever made on your iPhone or iPad anytime and anywhere.

- -

Some Tips and Tricks for Forza Horizon 5 on iOS

-

If you want to get the most out of Forza Horizon 5 on your iOS devices, here are some tips and tricks that might help you:

- -

Forza Horizon 5 is a game that offers endless possibilities and fun for racing fans. Whether you want to race, drift, stunt, or explore, you will find something to enjoy in this game. Download Forza Horizon 5 on your iOS devices today and experience the thrill of driving in Mexico.

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/5.1 Surround Sound Tamil Mp3 Songs UPD Free Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/5.1 Surround Sound Tamil Mp3 Songs UPD Free Download.md deleted file mode 100644 index 703e81c1413ec7d4c249ec86881c145d660c509b..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/5.1 Surround Sound Tamil Mp3 Songs UPD Free Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

5.1 surround sound tamil mp3 songs free download


Download Zip > https://imgfil.com/2uy09I



-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android Oyun Clubs Dark Riddle APK Hile The Most Challenging and Scary Game Ever.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android Oyun Clubs Dark Riddle APK Hile The Most Challenging and Scary Game Ever.md deleted file mode 100644 index bda77b1e43486dd1e0dbacf854c7818191efe78d..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Android Oyun Clubs Dark Riddle APK Hile The Most Challenging and Scary Game Ever.md +++ /dev/null @@ -1,82 +0,0 @@ - -

Dark Riddle APK Hile Android Oyun Club: A Review

-

If you are looking for a game that combines escape, adventure and puzzle elements with stealth, humor and mystery, you might want to check out Dark Riddle. This is a popular game on the Android platform that lets you explore your neighbor's house and discover his secrets. But what if you want to enjoy the game without any limitations or interruptions? That's where Dark Riddle APK Hile comes in. This is a modded version of the game that gives you unlimited money and removes ads and in-app purchases. In this article, we will review Dark Riddle APK Hile and tell you how to download and install it from Android Oyun Club. We will also discuss the features, benefits, drawbacks and risks of using this modded version.

-

What is Dark Riddle?

-

A game of escape, adventure and puzzle

-

Dark Riddle is a game developed by Nika Entertainment that was released in 2019. It is inspired by other games like Hello Neighbor and Granny, where you have to sneak into your neighbor's house and find out what he is hiding. You can use various items and tools to distract, trick or fight your neighbor, who will chase you if he sees you. You can also interact with other characters and objects in the game world, such as animals, cars, plants and more. The game has different levels and modes, each with its own challenges and surprises.

-

dark riddle apk hile android oyun club


Download Zip ☆☆☆ https://urlin.us/2uSZ6K



-

A game of stealth, humor and mystery

-

Dark Riddle is not just a game of escape, adventure and puzzle. It is also a game of stealth, humor and mystery. You have to use your skills and creativity to avoid being detected by your neighbor, who has a lot of traps and cameras in his house. You can also use your sense of humor to prank your neighbor or make him laugh. The game has a lot of funny moments and dialogues that will make you smile. Moreover, the game has a lot of mystery and suspense that will keep you hooked. You will want to know more about your neighbor's secrets and motives, as well as the story behind the game.

-

What is Dark Riddle APK Hile?

-

A modded version of the game with unlimited money

-

Dark Riddle APK Hile is a modded version of the game that gives you unlimited money. This means that you can buy anything you want in the game without worrying about the cost. You can get all the items, skins and weapons that are available in the game store. You can also upgrade your skills and abilities to make yourself stronger and faster. With unlimited money, you can enjoy the game without any restrictions or limitations.

-

A way to enjoy the game without ads or in-app purchases

-

Dark Riddle APK Hile is also a way to enjoy the game without ads or in-app purchases. This means that you can play the game without any interruptions or annoyances. You don't have to watch any ads or spend any real money to get extra features or resources in the game. You can play the game smoothly and comfortably without any hassle or pressure.

-


How to download and install Dark Riddle APK Hile?

-

The steps to download the file from Android Oyun Club

-

Dark Riddle APK Hile is available for download from Android Oyun Club, a website that offers modded versions of various Android games. To download the file from Android Oyun Club, you need to follow these steps:

-
    -
  1. Go to the official website of Android Oyun Club at https://androidoyun.club/
  2. Search for Dark Riddle in the search bar or browse the categories to find the game.
  3. Click on the game title and scroll down to the download section.
  4. Choose the version of Dark Riddle APK Hile that you want to download and click on the download button.
  5. Wait for the download to complete and save the file on your device.
-

The steps to install the file on your device

-

After downloading the file from Android Oyun Club, you need to install it on your device. To install the file on your device, you need to follow these steps:

-
    -
  1. Go to the settings of your device and enable the option to install apps from unknown sources.
  2. Locate the downloaded file on your device and tap on it.
  3. Follow the instructions on the screen and allow the necessary permissions.
  4. Wait for the installation to finish and launch the game.
-

What are the features and benefits of Dark Riddle APK Hile?

-

The features of the modded version, such as unlocked items, skins and weapons

-

Dark Riddle APK Hile has many features that make it different from the original version of the game. Some of these features are:

-

- This is a first-person adventure thriller with an interactive environment and interesting quests. Solve puzzles and uncover the secrets of a suspicious neighbor who lives across from you.
- Your adventure begins in an unusual city where you can find many useful and unique items. You will meet a police officer and a seller of alien devices, and during the game you will get acquainted with unusual creatures. Each item and character has a huge story behind it.
- The game has a lot of humor, various levels of difficulty and multiple endings - the outcome of the story depends entirely on your actions and decisions. You can use headphones to explore the city in detail and better understand the plot.

- -

The benefits of the modded version, such as more fun, freedom and challenge

-

Dark Riddle APK Hile has many benefits that give you more fun, freedom and challenge than the original version of the game. Some of these benefits are:

-

What are the drawbacks and risks of Dark Riddle APK Hile?

-

The drawbacks of the modded version, such as possible bugs, glitches and crashes

-

Dark Riddle APK Hile is not a perfect version of the game. It has some drawbacks that may affect your gaming experience. Some of these drawbacks are:

- -

The risks of the modded version, such as malware, viruses and bans

-

Dark Riddle APK Hile is not a safe version of the game. It has some risks that may harm your device or account. Some of these risks are:

- -

Conclusion

-

Dark Riddle APK Hile is a modded version of the game that gives you unlimited money and removes ads and in-app purchases. It also unlocks all the items, skins and weapons in the game. It is a way to enjoy the game without any limitations or interruptions. However, it also has some drawbacks and risks that may affect your gaming experience or harm your device or account. Therefore, you should be careful and responsible when using this modded version. You should also respect the original developers and creators of the game and support them if you like their work.

-

FAQs

-
    -
  1. Q: Is Dark Riddle APK Hile legal?
     A: Dark Riddle APK Hile is not legal. It is a modded version of the game that violates the terms and conditions of the original game. It also infringes the intellectual property rights of the original developers and creators of the game.
  2. Q: Is Dark Riddle APK Hile safe?
     A: Dark Riddle APK Hile is not safe. It is a modded version of the game that may contain malware or viruses that can harm your device, and it may get your account banned. It may also have bugs, glitches or crashes that affect your gaming experience.
  3. Q: How to update Dark Riddle APK Hile?
     A: Dark Riddle APK Hile is not easy to update. It is a modded version of the game that may not be compatible with the latest version of the original game. You may need to download and install a new version of Dark Riddle APK Hile from Android Oyun Club whenever there is an update available.
  4. Q: How to uninstall Dark Riddle APK Hile?
     A: Dark Riddle APK Hile is easy to uninstall. You can simply delete the file from your device or go to the settings of your device and uninstall the app like any other app.
  5. Q: Where to get more information about Dark Riddle APK Hile?
     A: You can get more information about Dark Riddle APK Hile from Android Oyun Club, the website that offers this modded version of the game. You can also visit the official website or social media pages of Dark Riddle, the original game, to get more information about it.

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/8 Ball Pool bitAIM APK A Complete Guide to the Ultimate Pool Game Experience.md b/spaces/1phancelerku/anime-remove-background/8 Ball Pool bitAIM APK A Complete Guide to the Ultimate Pool Game Experience.md deleted file mode 100644 index fa85a2b0fd999ed5916b113f5bc70d1ca9af0213..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/8 Ball Pool bitAIM APK A Complete Guide to the Ultimate Pool Game Experience.md +++ /dev/null @@ -1,146 +0,0 @@ -
-

What is 8 ball pool bitaim apk?

-

If you are a fan of pool games, you might have heard of or played 8 ball pool, one of the most popular and addictive online multiplayer games on Android. 8 ball pool is a game where you can compete with players from all over the world in various modes and tournaments, using your skills and strategies to pocket balls and win coins and rewards. But what if you want to have an edge over your opponents and improve your game performance? That's where 8 ball pool bitaim apk comes in.

-

8 ball pool bitaim apk


Download File ····· https://jinyurl.com/2uNSOG



-

8 ball pool bitaim apk is a modded version of 8 ball pool that allows you to hack the aim of your striker and hit the pieces with perfect accuracy. With 8 ball pool bitaim apk, you can win every match and earn more coins and gems. But is 8 ball pool bitaim apk safe and legal to use? How can you download and install it on your device? And what are its features and benefits? In this article, we will answer all these questions and more, so keep reading.

-

How to play 8 ball pool?

-

Before we dive into the details of 8 ball pool bitaim apk, let's first review the basics of how to play 8 ball pool. 8 ball pool is a game played with a cue ball and fifteen object balls, numbered 1 through 15. Balls 1–7 are solid colors and commonly referred to as “low balls”, and balls 9–15 are striped and commonly referred to as “high balls.” One player must pocket balls of solid colors, while the other player must pocket the striped balls. The player who pockets their entire group and then legally pockets the 8-ball wins the game.

-

To start the game, one player must break the rack by hitting the cue ball into the triangle of object balls. For the break shot to be legal, the breaker must either pocket a number ball or drive at least four number balls to one or more rails. No ball is called, and the cue ball is not required to hit any particular object ball first. If the breaker fails to make a legal break, the opponent can choose to break again or accept the table as it is.

-

After a legal break, if any object ball is pocketed, then that determines whether that player has solids or stripes for that game. If no object ball is pocketed on a legal break or if both a solid and a stripe are pocketed on a legal break then it is an open table until one player pockets either a solid or stripe on their turn. Once solids or stripes have been determined for each player then they must continue shooting at their designated group until they have cleared their group from the table.

-

A player's turn continues until they fail to pocket one of their group or commit a foul. A foul occurs when the player fails to hit any ball with the cue ball, hits the wrong group of balls first, pockets the cue ball, pockets the 8-ball before clearing their group, pockets the 8-ball in the wrong pocket, or drives any ball off the table. If a player commits a foul, their opponent gets ball in hand, meaning they can place the cue ball anywhere on the table for their next shot.
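As a rough illustration of how these foul conditions combine (this is only a sketch written for this article, with made-up field names, not anything from the game's actual code), you could express the rule in a few lines of Python:

```python
def is_foul(shot):
    """Return True if any of the foul conditions described above applies."""
    return (
        not shot["hit_any_ball"]              # cue ball never touched an object ball
        or shot["first_contact_wrong_group"]  # hit the opponent's group first
        or shot["pocketed_cue_ball"]          # scratched the cue ball
        or shot["pocketed_8_ball_early"]      # sank the 8-ball before clearing the group
        or shot["drove_ball_off_table"]       # knocked any ball off the table
    )

shot = {"hit_any_ball": True, "first_contact_wrong_group": False,
        "pocketed_cue_ball": True, "pocketed_8_ball_early": False,
        "drove_ball_off_table": False}
print(is_foul(shot))  # True, so the opponent would get ball in hand
```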

-


-

The game ends when one player legally pockets the 8-ball in a designated pocket after clearing their group. The player must call the pocket for the 8-ball before shooting. If the player pockets the 8-ball in an uncalled pocket, or pockets the 8-ball and the cue ball on the same shot, they lose the game.

-

How to download and install bitaim apk?

-

Now that you know how to play 8 ball pool, you might be wondering how to get bitaim apk on your device. Bitaim apk is not available on the official Google Play Store, so you will need to download it from a third-party source. Here are the steps and requirements for downloading and installing bitaim apk:

-
    -
  1. Make sure your device has enough storage space and meets the minimum system requirements for running 8 ball pool. The game requires Android 4.4 or higher and at least 1 GB of RAM.
  2. Enable unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, go to Settings > Security > Unknown Sources and toggle it on.
  3. Download bitaim apk from a reliable and trusted website. You can search for bitaim apk on Google or use this link: (https://bitaimapk.com/). Be careful not to download any fake or malicious files that might harm your device.
  4. Locate the downloaded file on your device and tap on it to start the installation process. Follow the instructions on the screen and grant the necessary permissions for the app to run.
  5. Launch 8 ball pool bitaim apk and enjoy playing with unlimited aim and accuracy.
-

What are the features of bitaim apk?

-

Bitaim apk is a modded version of 8 ball pool that offers many features and benefits that can enhance your gaming experience and make you a better player. Here are some of the features of bitaim apk:

- -

How to use bitaim apk?

-

Using bitaim apk is very easy and simple. All you need to do is follow these steps:

-
    -
  1. Launch 8 ball pool bitaim apk on your device and log in with your account or create a new one.
  2. Select a game mode or tournament that you want to play and join a match.
  3. When it is your turn to shoot, you will see a green line showing you the direction and angle of your shot. You can also see a yellow circle indicating the best pocket for each ball.
  4. To adjust your aim, swipe left or right on the screen. To adjust your power, swipe up or down on the screen.
  5. To shoot, tap on the screen when you are ready.
  6. Enjoy winning every match with perfect accuracy and skill.
-

How to activate indirect or premium shots?

-

Bitaim apk also offers indirect or premium shots, which are more advanced and challenging shots that require more skill and strategy. Indirect shots involve hitting one or more rails before pocketing a ball. Premium shots involve using spin, curve, or jump to pocket a ball. To activate an indirect or premium shot, you need to pay a certain amount of coins or gems, depending on the level of difficulty and reward. Here is a table showing the cost and benefit of each type of shot:

| Type of shot | Cost | Benefit |
| --- | --- | --- |
| Indirect shot | 50 coins or 5 gems | Double the coins or gems you win |
| Premium shot | 100 coins or 10 gems | Triple the coins or gems you win |

To activate an indirect or premium shot, tap on the icon that appears on the top right corner of the screen before shooting. You can choose between coins or gems as the payment method. Once you activate the shot, you will see a blue line showing you the trajectory and angle of your shot, as well as a red circle indicating the spin, curve, or jump effect. You can adjust your shot as usual and then shoot when you are ready.

How to use bitaim apk with Lulubox?

-

Lulubox is another popular app that can enhance your gaming experience by providing you with various features and hacks for different games. Lulubox is compatible with 8 ball pool bitaim apk, and you can use them together to get more benefits and advantages. Here are some of the features that Lulubox can offer for 8 ball pool:

- -

To use bitaim apk with Lulubox, you need to follow these steps:

-
    -
  1. Download and install Lulubox from a reliable and trusted website. You can search for Lulubox on Google or use this link: (https://www.luluboxapk.com/).
  2. Launch Lulubox on your device and grant the necessary permissions for the app to run.
  3. Find 8 ball pool bitaim apk on the list of games that Lulubox supports and tap on it.
  4. Select the features that you want to activate for 8 ball pool bitaim apk and tap on the launch button.
  5. Enjoy playing 8 ball pool bitaim apk with Lulubox.
-

How to update bitaim apk?

-

Bitaim apk is constantly updated by its developers to ensure that it works smoothly and efficiently with the latest version of 8 ball pool. To update bitaim apk, you need to follow these steps:

-
    -
  1. Check if there is a new version of bitaim apk available on the website where you downloaded it from. You can also check for updates within the app itself by tapping on the menu button and then on the update option.
  2. If there is a new version available, download it from the website or from the app.
  3. Delete the old version of bitaim apk from your device.
  4. Install the new version of bitaim apk following the same steps as before.
  5. Launch 8 ball pool bitaim apk and enjoy playing with the latest features and bug fixes.
-

What are the pros and cons of bitaim apk?

-

Bitaim apk is a modded version of 8 ball pool that offers many features and benefits that can enhance your gaming experience and make you a better player. However, it also has some drawbacks and risks that you should be aware of before using it. Here are some of the pros and cons of bitaim apk:

| Pros | Cons |
| --- | --- |
| It helps you aim and shoot with perfect accuracy. | It takes away some of the challenge and fun of playing 8 ball pool. |
| It allows you to win every match and earn more coins and gems. | It may be considered cheating by some players and may ruin their gaming experience. |
| It removes all the ads that interrupt your gameplay. | It may not be compatible with some devices or versions of 8 ball pool. |
| It does not require root access to work on your device. | It may expose your device to malware or viruses from unknown sources. |
| It provides free updates for its users. | It may get detected and banned by the game developers or moderators. |

What are some alternatives to bitaim apk?

-

If you are looking for some alternatives to bitaim apk, you might want to check out these other apps that can provide similar or better features for 8 ball pool:

- -

Conclusion

-

8 ball pool bitaim apk is a modded version of 8 ball pool that allows you to hack the aim of your striker and hit the pieces with perfect accuracy. It offers many features and benefits that can enhance your gaming experience and make you a better player, such as AI assistance, shots recording, no ads, no root required, and free updates. However, it also has some drawbacks and risks that you should be aware of before using it, such as cheating, compatibility issues, malware threats, and ban risks. Therefore, you should use it at your own discretion and responsibility.

-

If you want to download and install bitaim apk on your device, you can follow the steps and requirements that we have provided in this article. You can also use bitaim apk with Lulubox to get more features and hacks for 8 ball pool. Alternatively, you can check out some other apps that can provide similar or better features for 8 ball pool, such as 8 Ball Pool Mod Menu, 8 Ball Pool Tool, and 8 Ball Pool Guideline Hack.

-

We hope that this article has helped you understand what is 8 ball pool bitaim apk and how to use it effectively and safely. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy gaming!

-

FAQs

-

Here are some of the frequently asked questions and answers about 8 ball pool bitaim apk:

-
    -
  1. Q: Is 8 ball pool bitaim apk safe and legal to use?
     A: Bitaim apk is not safe or legal to use, as it is a modded version of 8 ball pool that violates the terms and conditions of the game. It may expose your device to malware or viruses from unknown sources, and it may get detected and banned by the game developers or moderators. Therefore, you should use it at your own risk and responsibility.
  2. Q: How can I avoid getting banned by using bitaim apk?
     A: There is no guarantee that you will not get banned by using bitaim apk, as it is a modded version of 8 ball pool that violates the terms and conditions of the game. However, you can try to reduce the chances of getting banned by following these tips:
  3. Q: Can I use bitaim apk with other mods or hacks for 8 ball pool?
     A: Bitaim apk is compatible with some other mods or hacks for 8 ball pool, such as Lulubox. However, you should be careful not to use too many mods or hacks at the same time, as they may cause conflicts or errors in your game performance. You should also be aware that using more mods or hacks may increase the risk of getting banned by the game developers or moderators.
  4. Q: How can I contact the developers of bitaim apk?
     A: Bitaim apk is developed by a team of anonymous and independent developers who do not have an official website or social media account. Therefore, it is difficult to contact them directly or get support from them. However, you can try to leave a comment or feedback on the website where you downloaded bitaim apk from, and hope that they will see it and respond to it.
  5. Q: What are some tips and tricks for playing 8 ball pool?
     A: 8 ball pool is a game that requires skill, strategy, and practice to master. Here are some tips and tricks that can help you improve your game and win more matches:

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Boost Your Brain Power with Mental Arithmetic Techniques.md b/spaces/1phancelerku/anime-remove-background/Boost Your Brain Power with Mental Arithmetic Techniques.md deleted file mode 100644 index f61140c1e82e1bc24caaebfe2cd0427369688f1a..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Boost Your Brain Power with Mental Arithmetic Techniques.md +++ /dev/null @@ -1,111 +0,0 @@ - -

How to Practice and Improve Your Mental Arithmetic Skills

-

Mental arithmetic is the skill of doing calculations in your head without using any tools or devices, such as a calculator, pen and paper, or abacus. It is a valuable skill that can help you in many everyday situations, such as shopping, cooking, traveling, and more. It can also improve your number sense, logical thinking, memory, and speed of computation.

-

mental aritmetik


DOWNLOADhttps://jinyurl.com/2uNLX1



-

But how do you practice and improve your mental arithmetic skills? What are some tips and techniques that can make it easier and faster? And what are some games and resources that can challenge you and make it fun? In this article, we will answer these questions and provide you with some useful information on how to become a master of mental math.

-

Tips and Techniques for Mental Arithmetic

-

There are many tips and techniques that can help you perform mental arithmetic more efficiently and accurately. Here are some of the most common ones:

-

Break down the problems into parts

-

One of the easiest ways to simplify mental arithmetic problems is to break them down into smaller parts that are easier to handle. For example, if you need to add or subtract several numbers, you can group them by their place value (hundreds, tens, ones) and add or subtract them separately. For example:

-

712 + 281 = (700 + 200) + (10 + 80) + (2 + 1) = 900 + 90 + 3 = 993

-

815 - 521 = (800 - 500) + (10 - 20) + (5 - 1) = 300 - 10 + 4 = 294
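If you like to double-check the idea, here is a tiny Python sketch (the helper names are invented for this article) that adds two numbers by grouping hundreds, tens and ones exactly as in the examples above:

```python
def split_places(n):
    """Split a number into hundreds, tens and ones, e.g. 712 -> (700, 10, 2)."""
    return (n // 100) * 100, (n % 100 // 10) * 10, n % 10

def add_by_parts(a, b):
    """Add two numbers the mental-math way: group each place value separately."""
    ah, at, ao = split_places(a)
    bh, bt, bo = split_places(b)
    return (ah + bh) + (at + bt) + (ao + bo)

print(add_by_parts(712, 281))               # 993
print(add_by_parts(712, 281) == 712 + 281)  # True: grouping never changes the result
```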

-


-

Use round numbers and adjust later

-

Another way to make mental arithmetic easier is to use round numbers that are close to the original ones and adjust the answer later by adding or subtracting the difference. For example:

-

596 + 380 = (600 + 380) - 4 = 980 - 4 = 976

-

38 x 3 = (40 x 3) - (2 x 3) = 120 - 6 = 114
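The same round-and-adjust trick can be written out in code. The sketch below is only an illustration (the function is not from any library), but it mirrors the two examples above:

```python
def multiply_round_adjust(n, k, round_to=10):
    """Compute n * k by rounding n up to a multiple of round_to, then correcting."""
    rounded = ((n + round_to - 1) // round_to) * round_to  # 38 -> 40
    overshoot = rounded - n                                # 2
    return rounded * k - overshoot * k                     # (40 * 3) - (2 * 3)

print(multiply_round_adjust(38, 3))   # 114
print((600 + 380) - 4 == 596 + 380)   # True: round 596 up to 600, adjust by 4
```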

-

Reorder the numbers to make convenient sums

-

Sometimes, you can reorder the numbers in an addition or subtraction problem to make convenient sums that are easy to remember or work with. For example, you can look for numbers that add up to a multiple of 10 or a power of 10. For example:

-

7 + 4 + 9 + 13 + 6 + 51 = (7 + 13) + (9 + 51) + (6 + 4) = 20 + 60 + 10 = 90

-

1000 + 20 + 1000 + 30 + 1000 + 40 + 1000 + 10 = 4000 + 100 = 4100
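Reordering works because addition is commutative and associative, so regrouping is just a different route to the same total. Here is a quick Python check of the first example:

```python
numbers = [7, 4, 9, 13, 6, 51]
pairs = [(7, 13), (9, 51), (6, 4)]                  # each pair sums to a multiple of 10
print([a + b for a, b in pairs])                    # [20, 60, 10]
print(sum(a + b for a, b in pairs), sum(numbers))   # 90 90
```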

-

Multiply from left to right

To multiply in your head, work from the largest place value down instead of right to left: handle the tens (or hundreds) first, then the ones, and add the partial products as you go. For example, 47 × 6 = (40 × 6) + (7 × 6) = 240 + 42 = 282, and 3 × 254 = (3 × 200) + (3 × 50) + (3 × 4) = 600 + 150 + 12 = 762.

Use square numbers and roots

-

Square numbers are the result of multiplying a number by itself, such as 4 × 4 = 16 or 9 × 9 = 81. Knowing some common square numbers can help you with mental arithmetic, especially when you need to multiply or divide large numbers. For example:

-

48 × 52 = (50 − 2) × (50 + 2) = 50² − 2² = 2500 − 4 = 2496

-

Here, we used the identity (a − b) × (a + b) = a² − b² to simplify the problem. We also used the fact that 50² = 2500, which is easy to remember.

-

Roots are the opposite of squares. The square root of a number is the number that, when multiplied by itself, gives that number. For example, the square root of 16 is 4, because 4 × 4 = 16. Finding square roots mentally can be tricky, but there are some methods that can help you estimate them or find them exactly. For example:

-

To estimate the square root of a number, find the two nearest square numbers and use them as a guide. For example, to estimate the square root of 75, we can use the fact that 64 < 75 < 81, and that the square roots of 64 and 81 are 8 and 9, respectively. Therefore, the square root of 75 is between 8 and 9, closer to 9 than to 8.

-

To find the exact square root of a number, use the fact that the difference between two consecutive square numbers is equal to the sum of their square roots. For example, to find the square root of 169, we can use the fact that 169 − 144 = 25, and that the square roots of 169 and 144 are x and 12, respectively. Therefore, x + 12 = 25, and x = 13.
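All three facts in this section are easy to verify with a short Python sketch (math.isqrt is Python's built-in integer square root):

```python
import math

# Difference of squares: 48 * 52 = (50 - 2) * (50 + 2) = 50**2 - 2**2
print(48 * 52, 50**2 - 2**2)        # 2496 2496

# Estimating: sqrt(75) lies between the roots of the nearest squares 64 and 81
low = math.isqrt(75)                # 8, because 8*8 = 64 <= 75 < 81 = 9*9
print(low, low + 1)                 # 8 9

# Consecutive squares: 169 - 144 = 25 and x + 12 = 25, so x = 13
print(169 - 144, 13 + 12, 13 * 13)  # 25 25 169
```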

-

Estimate and approximate

-

Sometimes, you don't need to find the exact answer to a mental arithmetic problem, but only an estimate or an approximation. This can save you time and effort, and still give you a reasonable idea of the magnitude of the answer. Estimating and approximating can involve various techniques, such as rounding numbers, using benchmarks or reference points, using fractions or percentages, or using compatible numbers. For example:

-

To estimate how much money you will save by buying a shirt that is on sale for $24.99 instead of $29.99, you can round both prices to the nearest dollar and subtract them: $30 − $25 = $5. This is not the exact answer, but it is close enough for most purposes.

-

To approximate how many hours are in a year, you can use the benchmark that one year is about 365 days, and multiply it by 24: 365 × 24 = (360 + 5) × 24 = 360 × 24 + 5 × 24 = 8640 + 120 = 8760. This is not the exact answer either, because it does not account for leap years or fractional hours, but it is a good approximation.
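Both estimates are easy to sanity-check in code:

```python
# Rounding $29.99 and $24.99 to the nearest dollar before subtracting
print(30 - 25, round(29.99 - 24.99, 2))  # 5 5.0

# Benchmark of 365 days per year, split as 360 + 5 to make the mental product easier
print(365 * 24, 360 * 24 + 5 * 24)       # 8760 8760
```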

-

Games and Resources for Mental Arithmetic

-

If you want to practice and improve your mental arithmetic skills further, there are many games and resources that you can use to challenge yourself and have fun. Here are some examples:

-

Math Trainer

-

Math Trainer is a free online tool that lets you practice mental arithmetic with different types of problems and difficulty levels. You can choose from addition, subtraction, multiplication, division, mixed operations, fractions, decimals, percentages, powers and roots. You can also set a time limit and track your progress and accuracy.

-

Mental Math Cards

-

Mental Math Cards is a free app for iOS and Android devices that helps you practice mental arithmetic with flashcards. You can customize your settings to choose from different operations, number ranges, decimal places and time limits. You can also view your statistics and achievements.

-

Arithmetic Game

-

Arithmetic Game is a free online game that tests your mental arithmetic skills with four basic operations: addition, subtraction, multiplication and division. You have to fill in the blanks with the correct numbers to complete the equations as fast as you can. You can choose from three difficulty levels: easy, normal and hard.

-

Prodigy Game

-

Prodigy Game is a free online game that combines math skills with an adventure story. You have to create your own character and explore a fantasy world where you have to solve math problems to progress and unlock new features. You can choose from different topics and skills, such as mental arithmetic, fractions, geometry, algebra and more. You can also play with your friends and compete with other players. Prodigy Game is available for free on the web, or as an app for iOS and Android devices.

-

Mathnasium

-

Mathnasium is a learning center that offers personalized math tutoring and instruction for students of all ages and levels. Mathnasium uses a unique method that helps students develop their mental arithmetic skills, as well as their conceptual understanding, problem-solving abilities and confidence in math. Mathnasium has over 1,000 locations across the US and Canada, and you can find the nearest one to you on their website.

-

Conclusion

-

Mental arithmetic is a skill that can benefit you in many ways, both in school and in life. It can help you perform calculations faster and more accurately, improve your number sense and logical thinking, enhance your memory and concentration, and save you time and resources. By practicing some tips and techniques, such as breaking down problems, using round numbers, reordering numbers, multiplying from left to right, using square numbers and roots, and estimating and approximating, you can make mental arithmetic easier and more efficient. You can also use some games and resources, such as Math Trainer, Mental Math Cards, Arithmetic Game, Prodigy Game and Mathnasium, to challenge yourself and have fun while learning mental arithmetic.

-

FAQs

-

Here are some common questions and answers about mental arithmetic:

-

Q: How can I improve my mental arithmetic speed?

-

A: To improve your mental arithmetic speed, you need to practice regularly and consistently. You can use some of the games and resources mentioned above to practice different types of problems and difficulty levels. You can also set a time limit for yourself and try to beat your own records. The more you practice, the more familiar you will become with the numbers and the operations, and the faster you will be able to perform them.

-

Q: What are some benefits of mental arithmetic for children?

-

A: Mental arithmetic can help children develop their math skills from an early age. It can help them understand the meaning and relationships of numbers, operations, fractions, decimals, percentages and more. It can also help them improve their logical thinking, reasoning, creativity, memory and concentration. Mental arithmetic can also boost their confidence and motivation in math, as they can see their progress and achievements.

-

Q: What are some challenges of mental arithmetic?

-

A: Mental arithmetic can be challenging for some people because it requires a lot of attention, focus and mental effort. It can also be affected by factors such as stress, anxiety, fatigue or distraction. Some people may also have difficulties with certain types of problems or operations, such as division or fractions. To overcome these challenges, it is important to practice mental arithmetic in a relaxed and positive environment, start with simple problems and gradually increase the complexity, use some tips and techniques to simplify the problems, check your answers for accuracy, and seek help or feedback if needed.

-

Q: What are some applications of mental arithmetic in real life?

-

A: Mental arithmetic can be useful in many real-life situations, such as:

- -

Q: How can I make mental arithmetic fun?

-

A: There are many ways to make mental arithmetic fun, such as:

- -

I hope you enjoyed this article and learned something new about mental arithmetic. If you have any questions or comments, feel free to leave them below. And don't forget to practice and have fun with mental arithmetic!

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy Clash of Clans APK with Unlimited Gems Gold and Elixir.md b/spaces/1phancelerku/anime-remove-background/Enjoy Clash of Clans APK with Unlimited Gems Gold and Elixir.md deleted file mode 100644 index 4240634d29cfbbcf85d88eca46fdb4ee7e9a0d9d..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy Clash of Clans APK with Unlimited Gems Gold and Elixir.md +++ /dev/null @@ -1,94 +0,0 @@ - -

Clash of Clans Orjinal APK: How to Download and Play the Epic Strategy Game

-

Clash of Clans is one of the most popular and addictive strategy games in the world. Millions of players worldwide join forces to build their villages, train their troops, and fight in epic clan wars. If you are looking for a fun and challenging game that will keep you entertained for hours, you should definitely try Clash of Clans. But how can you download and play the game on your Android device? In this article, we will show you how to get the orjinal APK of Clash of Clans, which is the official version of the game from a trusted source. We will also give you some tips and tricks on how to play the game and become a successful clasher.

-

clash of clans orjinal apk


DOWNLOAD »»» https://jinyurl.com/2uNL75



-

What is Clash of Clans?

-

Clash of Clans is a strategy game that was released in 2012 by Supercell, a Finnish game developer. The game is set in a fantasy world where you can create your own village, customize it with various buildings and defenses, and collect resources such as gold, elixir, and dark elixir. You can also recruit different types of troops, such as barbarians, archers, wizards, dragons, and more, and use them to attack other players' villages or defend your own. The game also features a multiplayer mode where you can join or create a clan, which is a group of players who can chat, donate troops, and participate in clan wars. Clan wars are special events where two clans face each other in a series of attacks and try to earn more stars than their opponents. The game also has a single-player mode where you can fight against the goblin king and his army in a campaign mode.

-

Why Download the Orjinal APK?

-

The orjinal APK of Clash of Clans is the official version of the game that you can download from Google Play Store or from Supercell's website. There are many advantages of downloading the orjinal APK instead of using unofficial or modded versions of the game. Some of these advantages are:

- -

How to Download and Install the Orjinal APK?

-

Downloading and installing the orjinal APK of Clash of Clans is very easy and simple. Just follow these steps:

-
    -
  1. Go to Google Play Store on your Android device and search for Clash of Clans. Alternatively, you can go to Supercell's website (https://supercell.com/en/games/clashofclans/) and click on "Download Now".
  2. Tap on "Install" and wait for the download to finish.
  3. Once the download is complete, tap on "Open" and enjoy playing Clash of Clans.
-

Note: If you have an existing account or village on another device, you can link it to your new device by using Supercell ID or Google Play Games. Just go to Settings > Account > Link Device or Sign In.

-

How to Play Clash of Clans?

-

Playing Clash of Clans is fun and easy once you get the hang of it. Here are some tips and tricks on how to play the game and become a successful clasher:

-


- -

Conclusion

-

Clash of Clans is an epic strategy game that will keep you hooked for hours. You can download the orjinal APK of the game from Google Play Store or Supercell's website and enjoy the latest updates and features of the game. You can also learn how to play the game and become a successful clasher by following our tips and tricks. So what are you waiting for? Download Clash of Clans today and join the millions of players worldwide who are having fun building their villages, raising their clans, and fighting in clan wars.

-

FAQs

-

Here are some common questions and answers about Clash of Clans:

-

Q: How can I get free gems in Clash of Clans?

-

A: You can get free gems by completing achievements and events, removing obstacles from your village, opening gem boxes or gem mines, or participating in special offers or surveys.

-

Q: How can I change my name in Clash of Clans?

-

A: You can change your name once for free by going to Settings > Change Name. After that, you will need to pay 500 gems to change your name again.

-

Q: How can I transfer my village to another device?

-

A: You can transfer your village to another device by using Supercell ID or Google Play Games. Just go to Settings > Account > Link Device or Sign In on both devices and follow the instructions.

-

Q: How can I contact Supercell for support or feedback?

-

A: You can contact Supercell by going to Settings > Help and Support > Contact Us or by visiting their website (https://supercell.helpshift.com/a/clash-of-clans/).

-

Q: How can I report a bug or a player in Clash of Clans?

-

A: You can report a bug or a player by going to Settings > Help and Support > Report an Issue or Report Player.

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Enjoy the Festive Season with Daystar Choirs 12 Days of Christmas MP3 Download.md b/spaces/1phancelerku/anime-remove-background/Enjoy the Festive Season with Daystar Choirs 12 Days of Christmas MP3 Download.md deleted file mode 100644 index 57ded0ebe30299ba83c6959b6f8db552344150a6..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Enjoy the Festive Season with Daystar Choirs 12 Days of Christmas MP3 Download.md +++ /dev/null @@ -1,147 +0,0 @@ - -

How to Download 12 Days of Christmas by Daystar Choir MP3

-

Christmas is a season of joy, celebration, and music. One of the most festive and cheerful songs that you can listen to during this time is 12 Days of Christmas by Daystar Choir. This song is a live performance by a Nigerian gospel choir that sings a medley of traditional and modern Christmas carols with a twist. It is a fun and lively song that will make you dance and sing along.

-

download 12 days of christmas by daystar choir mp3


Download >>>>> https://jinyurl.com/2uNN1z



-

But how can you download this song as an MP3 file and enjoy it anytime and anywhere? In this article, we will show you where to find this song online and how to download it as an MP3 file. We will also tell you why this song is so popular and what are the benefits of downloading it as an MP3 file.

-

Where to Find 12 Days of Christmas by Daystar Choir MP3

-

There are two main ways to find this song online: online streaming platforms and free music download websites. Here are some examples of each option:

-

Online Streaming Platforms

-

Online streaming platforms are websites or apps that allow you to listen to music online without downloading it. Some of the most popular online streaming platforms that have this song are:

- -

Free Music Download Websites

Free music download websites are websites that allow you to download music for free and legally. Some of the free music download websites that have this song are:

-


- -

How to Download 12 Days of Christmas by Daystar Choir MP3

-

Now that you know where to find this song online, how can you download it as an MP3 file? The process may vary depending on the source, but here are some general steps that you can follow:

-

From Online Streaming Platforms

-

If you want to download this song from online streaming platforms like Spotify, Shazam, or YouTube, you will need to use a third-party tool or app that can convert the song to MP3 format. There are many tools and apps available online, but some of the most popular ones are:

- - - - - - - - - - - - - - - - - - - - - -
| Tool/App | Website/Download Link | Features |
| --- | --- | --- |
| 4K Video Downloader | | Supports YouTube, Spotify, SoundCloud, Vimeo, TikTok, and more; allows you to download videos, playlists, channels, subtitles, and 3D/360° videos; supports MP3, MP4, MKV, FLV, OGG, and more formats; offers high-quality and fast downloads; available for Windows, Mac, and Linux |
| AudFree Spotify Music Converter | | Supports Spotify songs, playlists, albums, podcasts, and radio; allows you to download Spotify music offline without premium; supports MP3, FLAC, WAV, AAC, M4A, and M4B formats; offers lossless quality and 5X speed; available for Windows and Mac |
| Shazam Downloader | | Supports Shazam songs and playlists; allows you to download Shazam music with one click; supports MP3 format; offers high-quality downloads; available for Android devices |
-

To download this song from online streaming platforms using these tools or apps, you need to follow these steps:

1. Open the online streaming platform and find the song that you want to download.
2. Copy the URL or link of the song.
3. Open the tool or app that you have chosen and paste the URL or link into the input box.
4. Select the MP3 format and quality that you want.
5. Click on the download or convert button and wait for the process to finish.
6. Save the MP3 file to your device or cloud storage.
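
If you would rather script this conversion than click through a desktop app, the same fetch-and-convert flow can be automated. The sketch below is only an illustration and uses the open-source yt-dlp package rather than the tools in the table above; it assumes yt-dlp and FFmpeg are installed, the URL is a placeholder, and that downloading the track is permitted by the platform's terms.

```python
# Minimal sketch (assumption: `pip install yt-dlp` and FFmpeg on the PATH).
# Fetches the best audio stream for a track and re-encodes it to MP3.
import yt_dlp

song_url = "https://www.youtube.com/watch?v=EXAMPLE"  # hypothetical link to the song

options = {
    "format": "bestaudio/best",          # take the best available audio-only stream
    "outtmpl": "%(title)s.%(ext)s",      # name the output file after the track title
    "postprocessors": [{
        "key": "FFmpegExtractAudio",     # let FFmpeg extract and convert the audio
        "preferredcodec": "mp3",
        "preferredquality": "192",       # target bitrate in kbps
    }],
}

with yt_dlp.YoutubeDL(options) as ydl:
    ydl.download([song_url])
```

Under the hood, the GUI tools listed earlier do essentially the same two things: download the audio stream, then re-encode it to MP3.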

From Free Music Download Websites

If you want to download this song from free music download websites like Chosic, Pixabay, or Free Music Archive, you will not need any third-party tool or app; you can download the song directly from the website. Just follow these steps:

1. Open the free music download website and search for the song that you want to download.
2. Click on the song title or the download button or link.
3. Select the MP3 format and quality that you want.
4. Save the MP3 file to your device or cloud storage.
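
When a site exposes a direct link to the MP3 file, the download step is just an HTTP GET. Here is a minimal sketch, assuming the URL below is a hypothetical direct link to the file and that the site's license permits downloading it:

```python
# Minimal sketch: save a directly linked MP3 to disk with the requests library.
import requests

mp3_url = "https://example.com/path/to/song.mp3"  # hypothetical direct link

response = requests.get(mp3_url, stream=True, timeout=30)
response.raise_for_status()  # stop here on a 4xx/5xx instead of saving an error page

with open("12-days-of-christmas.mp3", "wb") as outfile:
    for chunk in response.iter_content(chunk_size=8192):  # stream in 8 KB chunks
        outfile.write(chunk)
```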

Conclusion


12 Days of Christmas by Daystar Choir is a wonderful song that will brighten up your Christmas season. It is a live performance by a talented gospel choir, singing a medley of classic and modern Christmas carols with a twist, and its fun, lively energy will have you dancing and singing along.


You can download this song as an MP3 file and enjoy it anytime and anywhere. It is available on various online streaming platforms and free music download websites, and you can use the tools and apps described above to convert and save it as an MP3 file. All you need to do is follow the steps shown in this article.


So what are you waiting for? Download 12 Days of Christmas by Daystar Choir MP3 today and have a merry Christmas!


FAQs


What is Daystar Choir?


Daystar Choir is a gospel choir from Nigeria that is part of the Daystar Christian Centre. The choir is known for its annual Christmas carol concerts that feature various songs, dances, and performances. The choir has also released several albums and singles, such as "Glory Halleluyah", "Hark the Herald", and "Joy to the World".


What are some other songs by Daystar Choir?


Some other songs by Daystar Choir are:

- Ogo Ni Fun Baba
- Jesu Yi O Iwo L'Ologo Didan
- Ding-Dong (feat. Taiwo Oladoye)
- Glory Halleluyah (feat. Taiwo Oladoye)
- Gbo Ohun
- Dulci Jubilo

How can I support Daystar Choir?


You can support Daystar Choir by:


What are some other Christmas songs that I can download for free?

Some other Christmas songs that you can download for free are:


These are just some examples of the many Christmas songs that you can download for free online. You can explore more options by browsing through the websites that we have mentioned or using other sources that you trust. Just make sure that the songs are legal and royalty-free before you download them.


We hope that this article has helped you learn how to download 12 Days of Christmas by Daystar Choir MP3 and enjoy this wonderful song. We also hope that you have discovered some other Christmas songs that you can download for free and add to your holiday playlist. Have a merry Christmas and a happy new year!


197e85843d
-
-
\ No newline at end of file diff --git a/spaces/801artistry/RVC801/demucs/__main__.py b/spaces/801artistry/RVC801/demucs/__main__.py deleted file mode 100644 index 5148f20623bdaa827777558844796ded1876d7d0..0000000000000000000000000000000000000000 --- a/spaces/801artistry/RVC801/demucs/__main__.py +++ /dev/null @@ -1,317 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import json -import math -import os -import sys -import time -from dataclasses import dataclass, field - -import torch as th -from torch import distributed, nn -from torch.nn.parallel.distributed import DistributedDataParallel - -from .augment import FlipChannels, FlipSign, Remix, Scale, Shift -from .compressed import get_compressed_datasets -from .model import Demucs -from .parser import get_name, get_parser -from .raw import Rawset -from .repitch import RepitchedWrapper -from .pretrained import load_pretrained, SOURCES -from .tasnet import ConvTasNet -from .test import evaluate -from .train import train_model, validate_model -from .utils import (human_seconds, load_model, save_model, get_state, - save_state, sizeof_fmt, get_quantizer) -from .wav import get_wav_datasets, get_musdb_wav_datasets - - -@dataclass -class SavedState: - metrics: list = field(default_factory=list) - last_state: dict = None - best_state: dict = None - optimizer: dict = None - - -def main(): - parser = get_parser() - args = parser.parse_args() - name = get_name(parser, args) - print(f"Experiment {name}") - - if args.musdb is None and args.rank == 0: - print( - "You must provide the path to the MusDB dataset with the --musdb flag. " - "To download the MusDB dataset, see https://sigsep.github.io/datasets/musdb.html.", - file=sys.stderr) - sys.exit(1) - - eval_folder = args.evals / name - eval_folder.mkdir(exist_ok=True, parents=True) - args.logs.mkdir(exist_ok=True) - metrics_path = args.logs / f"{name}.json" - eval_folder.mkdir(exist_ok=True, parents=True) - args.checkpoints.mkdir(exist_ok=True, parents=True) - args.models.mkdir(exist_ok=True, parents=True) - - if args.device is None: - device = "cpu" - if th.cuda.is_available(): - device = "cuda" - else: - device = args.device - - th.manual_seed(args.seed) - # Prevents too many threads to be started when running `museval` as it can be quite - # inefficient on NUMA architectures. 
- os.environ["OMP_NUM_THREADS"] = "1" - os.environ["MKL_NUM_THREADS"] = "1" - - if args.world_size > 1: - if device != "cuda" and args.rank == 0: - print("Error: distributed training is only available with cuda device", file=sys.stderr) - sys.exit(1) - th.cuda.set_device(args.rank % th.cuda.device_count()) - distributed.init_process_group(backend="nccl", - init_method="tcp://" + args.master, - rank=args.rank, - world_size=args.world_size) - - checkpoint = args.checkpoints / f"{name}.th" - checkpoint_tmp = args.checkpoints / f"{name}.th.tmp" - if args.restart and checkpoint.exists() and args.rank == 0: - checkpoint.unlink() - - if args.test or args.test_pretrained: - args.epochs = 1 - args.repeat = 0 - if args.test: - model = load_model(args.models / args.test) - else: - model = load_pretrained(args.test_pretrained) - elif args.tasnet: - model = ConvTasNet(audio_channels=args.audio_channels, - samplerate=args.samplerate, X=args.X, - segment_length=4 * args.samples, - sources=SOURCES) - else: - model = Demucs( - audio_channels=args.audio_channels, - channels=args.channels, - context=args.context, - depth=args.depth, - glu=args.glu, - growth=args.growth, - kernel_size=args.kernel_size, - lstm_layers=args.lstm_layers, - rescale=args.rescale, - rewrite=args.rewrite, - stride=args.conv_stride, - resample=args.resample, - normalize=args.normalize, - samplerate=args.samplerate, - segment_length=4 * args.samples, - sources=SOURCES, - ) - model.to(device) - if args.init: - model.load_state_dict(load_pretrained(args.init).state_dict()) - - if args.show: - print(model) - size = sizeof_fmt(4 * sum(p.numel() for p in model.parameters())) - print(f"Model size {size}") - return - - try: - saved = th.load(checkpoint, map_location='cpu') - except IOError: - saved = SavedState() - - optimizer = th.optim.Adam(model.parameters(), lr=args.lr) - - quantizer = None - quantizer = get_quantizer(model, args, optimizer) - - if saved.last_state is not None: - model.load_state_dict(saved.last_state, strict=False) - if saved.optimizer is not None: - optimizer.load_state_dict(saved.optimizer) - - model_name = f"{name}.th" - if args.save_model: - if args.rank == 0: - model.to("cpu") - model.load_state_dict(saved.best_state) - save_model(model, quantizer, args, args.models / model_name) - return - elif args.save_state: - model_name = f"{args.save_state}.th" - if args.rank == 0: - model.to("cpu") - model.load_state_dict(saved.best_state) - state = get_state(model, quantizer) - save_state(state, args.models / model_name) - return - - if args.rank == 0: - done = args.logs / f"{name}.done" - if done.exists(): - done.unlink() - - augment = [Shift(args.data_stride)] - if args.augment: - augment += [FlipSign(), FlipChannels(), Scale(), - Remix(group_size=args.remix_group_size)] - augment = nn.Sequential(*augment).to(device) - print("Agumentation pipeline:", augment) - - if args.mse: - criterion = nn.MSELoss() - else: - criterion = nn.L1Loss() - - # Setting number of samples so that all convolution windows are full. - # Prevents hard to debug mistake with the prediction being shifted compared - # to the input mixture. - samples = model.valid_length(args.samples) - print(f"Number of training samples adjusted to {samples}") - samples = samples + args.data_stride - if args.repitch: - # We need a bit more audio samples, to account for potential - # tempo change. 
- samples = math.ceil(samples / (1 - 0.01 * args.max_tempo)) - - args.metadata.mkdir(exist_ok=True, parents=True) - if args.raw: - train_set = Rawset(args.raw / "train", - samples=samples, - channels=args.audio_channels, - streams=range(1, len(model.sources) + 1), - stride=args.data_stride) - - valid_set = Rawset(args.raw / "valid", channels=args.audio_channels) - elif args.wav: - train_set, valid_set = get_wav_datasets(args, samples, model.sources) - elif args.is_wav: - train_set, valid_set = get_musdb_wav_datasets(args, samples, model.sources) - else: - train_set, valid_set = get_compressed_datasets(args, samples) - - if args.repitch: - train_set = RepitchedWrapper( - train_set, - proba=args.repitch, - max_tempo=args.max_tempo) - - best_loss = float("inf") - for epoch, metrics in enumerate(saved.metrics): - print(f"Epoch {epoch:03d}: " - f"train={metrics['train']:.8f} " - f"valid={metrics['valid']:.8f} " - f"best={metrics['best']:.4f} " - f"ms={metrics.get('true_model_size', 0):.2f}MB " - f"cms={metrics.get('compressed_model_size', 0):.2f}MB " - f"duration={human_seconds(metrics['duration'])}") - best_loss = metrics['best'] - - if args.world_size > 1: - dmodel = DistributedDataParallel(model, - device_ids=[th.cuda.current_device()], - output_device=th.cuda.current_device()) - else: - dmodel = model - - for epoch in range(len(saved.metrics), args.epochs): - begin = time.time() - model.train() - train_loss, model_size = train_model( - epoch, train_set, dmodel, criterion, optimizer, augment, - quantizer=quantizer, - batch_size=args.batch_size, - device=device, - repeat=args.repeat, - seed=args.seed, - diffq=args.diffq, - workers=args.workers, - world_size=args.world_size) - model.eval() - valid_loss = validate_model( - epoch, valid_set, model, criterion, - device=device, - rank=args.rank, - split=args.split_valid, - overlap=args.overlap, - world_size=args.world_size) - - ms = 0 - cms = 0 - if quantizer and args.rank == 0: - ms = quantizer.true_model_size() - cms = quantizer.compressed_model_size(num_workers=min(40, args.world_size * 10)) - - duration = time.time() - begin - if valid_loss < best_loss and ms <= args.ms_target: - best_loss = valid_loss - saved.best_state = { - key: value.to("cpu").clone() - for key, value in model.state_dict().items() - } - - saved.metrics.append({ - "train": train_loss, - "valid": valid_loss, - "best": best_loss, - "duration": duration, - "model_size": model_size, - "true_model_size": ms, - "compressed_model_size": cms, - }) - if args.rank == 0: - json.dump(saved.metrics, open(metrics_path, "w")) - - saved.last_state = model.state_dict() - saved.optimizer = optimizer.state_dict() - if args.rank == 0 and not args.test: - th.save(saved, checkpoint_tmp) - checkpoint_tmp.rename(checkpoint) - - print(f"Epoch {epoch:03d}: " - f"train={train_loss:.8f} valid={valid_loss:.8f} best={best_loss:.4f} ms={ms:.2f}MB " - f"cms={cms:.2f}MB " - f"duration={human_seconds(duration)}") - - if args.world_size > 1: - distributed.barrier() - - del dmodel - model.load_state_dict(saved.best_state) - if args.eval_cpu: - device = "cpu" - model.to(device) - model.eval() - evaluate(model, args.musdb, eval_folder, - is_wav=args.is_wav, - rank=args.rank, - world_size=args.world_size, - device=device, - save=args.save, - split=args.split_valid, - shifts=args.shifts, - overlap=args.overlap, - workers=args.eval_workers) - model.to("cpu") - if args.rank == 0: - if not (args.test or args.test_pretrained): - save_model(model, quantizer, args, args.models / model_name) - print("done") - 
done.write_text("done") - - -if __name__ == "__main__": - main() diff --git a/spaces/A666sxr/Genshin_TTS/text/__init__.py b/spaces/A666sxr/Genshin_TTS/text/__init__.py deleted file mode 100644 index 48ae82f3e40ecd1bf17a7de78d87790327af3362..0000000000000000000000000000000000000000 --- a/spaces/A666sxr/Genshin_TTS/text/__init__.py +++ /dev/null @@ -1,56 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners -from text.symbols import symbols - - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - - -def text_to_sequence(text, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def cleaned_text_to_sequence(cleaned_text): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [_symbol_to_id[symbol] for symbol in cleaned_text if symbol in _symbol_to_id.keys()] - return sequence - - -def sequence_to_text(sequence): - '''Converts a sequence of IDs back to a string''' - result = '' - for symbol_id in sequence: - s = _id_to_symbol[symbol_id] - result += s - return result - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/trainset_preprocess_pipeline_print.py b/spaces/AI-Hobbyist/Hoyo-RVC/trainset_preprocess_pipeline_print.py deleted file mode 100644 index 6188c866e0611eadd38228ce9d54fc6ee80576d0..0000000000000000000000000000000000000000 --- a/spaces/AI-Hobbyist/Hoyo-RVC/trainset_preprocess_pipeline_print.py +++ /dev/null @@ -1,139 +0,0 @@ -import sys, os, multiprocessing -from scipy import signal - -now_dir = os.getcwd() -sys.path.append(now_dir) - -inp_root = sys.argv[1] -sr = int(sys.argv[2]) -n_p = int(sys.argv[3]) -exp_dir = sys.argv[4] -noparallel = sys.argv[5] == "True" -import numpy as np, os, traceback -from slicer2 import Slicer -import librosa, traceback -from scipy.io import wavfile -import multiprocessing -from my_utils import load_audio - -mutex = multiprocessing.Lock() -f = open("%s/preprocess.log" % exp_dir, "a+") - - -def println(strr): - mutex.acquire() - print(strr) - f.write("%s\n" % strr) - f.flush() - mutex.release() - - -class PreProcess: - def __init__(self, sr, exp_dir): - self.slicer = Slicer( - sr=sr, - threshold=-42, - min_length=1500, - min_interval=400, - hop_size=15, - max_sil_kept=500, - ) - self.sr = sr - self.bh, self.ah = signal.butter(N=5, Wn=48, btype="high", fs=self.sr) - self.per = 3.7 - self.overlap = 0.3 - self.tail = self.per + self.overlap - self.max = 0.9 - self.alpha = 0.75 - self.exp_dir = exp_dir - self.gt_wavs_dir = "%s/0_gt_wavs" % exp_dir - self.wavs16k_dir = "%s/1_16k_wavs" % exp_dir - os.makedirs(self.exp_dir, exist_ok=True) - os.makedirs(self.gt_wavs_dir, 
exist_ok=True) - os.makedirs(self.wavs16k_dir, exist_ok=True) - - def norm_write(self, tmp_audio, idx0, idx1): - tmp_max = np.abs(tmp_audio).max() - if tmp_max > 2.5: - print("%s-%s-%s-filtered" % (idx0, idx1, tmp_max)) - return - tmp_audio = (tmp_audio / tmp_max * (self.max * self.alpha)) + ( - 1 - self.alpha - ) * tmp_audio - wavfile.write( - "%s/%s_%s.wav" % (self.gt_wavs_dir, idx0, idx1), - self.sr, - tmp_audio.astype(np.float32), - ) - tmp_audio = librosa.resample( - tmp_audio, orig_sr=self.sr, target_sr=16000 - ) # , res_type="soxr_vhq" - wavfile.write( - "%s/%s_%s.wav" % (self.wavs16k_dir, idx0, idx1), - 16000, - tmp_audio.astype(np.float32), - ) - - def pipeline(self, path, idx0): - try: - audio = load_audio(path, self.sr) - # zero phased digital filter cause pre-ringing noise... - # audio = signal.filtfilt(self.bh, self.ah, audio) - audio = signal.lfilter(self.bh, self.ah, audio) - - idx1 = 0 - for audio in self.slicer.slice(audio): - i = 0 - while 1: - start = int(self.sr * (self.per - self.overlap) * i) - i += 1 - if len(audio[start:]) > self.tail * self.sr: - tmp_audio = audio[start : start + int(self.per * self.sr)] - self.norm_write(tmp_audio, idx0, idx1) - idx1 += 1 - else: - tmp_audio = audio[start:] - idx1 += 1 - break - self.norm_write(tmp_audio, idx0, idx1) - println("%s->Suc." % path) - except: - println("%s->%s" % (path, traceback.format_exc())) - - def pipeline_mp(self, infos): - for path, idx0 in infos: - self.pipeline(path, idx0) - - def pipeline_mp_inp_dir(self, inp_root, n_p): - try: - infos = [ - ("%s/%s" % (inp_root, name), idx) - for idx, name in enumerate(sorted(list(os.listdir(inp_root)))) - ] - if noparallel: - for i in range(n_p): - self.pipeline_mp(infos[i::n_p]) - else: - ps = [] - for i in range(n_p): - p = multiprocessing.Process( - target=self.pipeline_mp, args=(infos[i::n_p],) - ) - ps.append(p) - p.start() - for i in range(n_p): - ps[i].join() - except: - println("Fail. %s" % traceback.format_exc()) - - -def preprocess_trainset(inp_root, sr, n_p, exp_dir): - pp = PreProcess(sr, exp_dir) - println("start preprocess") - println(sys.argv) - pp.pipeline_mp_inp_dir(inp_root, n_p) - println("end preprocess") - - -if __name__ == "__main__": - preprocess_trainset(inp_root, sr, n_p, exp_dir) diff --git a/spaces/AIConsultant/MusicGen/audiocraft/grids/musicgen/musicgen_pretrained_32khz_eval.py b/spaces/AIConsultant/MusicGen/audiocraft/grids/musicgen/musicgen_pretrained_32khz_eval.py deleted file mode 100644 index 39ceaf7dab15ec3f0f669cfe57ca9e932a9ab40d..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/grids/musicgen/musicgen_pretrained_32khz_eval.py +++ /dev/null @@ -1,99 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Evaluation with objective metrics for the pretrained MusicGen models. -This grid takes signature from the training grid and runs evaluation-only stage. - -When running the grid for the first time, please use: -REGEN=1 dora grid musicgen.musicgen_pretrained_32khz_eval -and re-use the REGEN=1 option when the grid is changed to force regenerating it. - -Note that you need the proper metrics external libraries setup to use all -the objective metrics activated in this grid. Refer to the README for more information. -""" - -import os - -from ._explorers import GenerationEvalExplorer -from ...environment import AudioCraftEnvironment -from ... 
import train - - -def eval(launcher, batch_size: int = 32, eval_melody: bool = False): - opts = { - 'dset': 'audio/musiccaps_32khz', - 'solver/musicgen/evaluation': 'objective_eval', - 'execute_only': 'evaluate', - '+dataset.evaluate.batch_size': batch_size, - '+metrics.fad.tf.batch_size': 16, - } - # chroma-specific evaluation - chroma_opts = { - 'dset': 'internal/music_400k_32khz', - 'dataset.evaluate.segment_duration': 30, - 'dataset.evaluate.num_samples': 1000, - 'evaluate.metrics.chroma_cosine': True, - 'evaluate.metrics.fad': False, - 'evaluate.metrics.kld': False, - 'evaluate.metrics.text_consistency': False, - } - # binary for FAD computation: replace this path with your own path - metrics_opts = { - 'metrics.fad.tf.bin': '/data/home/jadecopet/local/usr/opt/google-research' - } - opt1 = {'generate.lm.use_sampling': True, 'generate.lm.top_k': 250, 'generate.lm.top_p': 0.} - opt2 = {'transformer_lm.two_step_cfg': True} - - sub = launcher.bind(opts) - sub.bind_(metrics_opts) - - # base objective metrics - sub(opt1, opt2) - - if eval_melody: - # chroma-specific metrics - sub(opt1, opt2, chroma_opts) - - -@GenerationEvalExplorer -def explorer(launcher): - partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global']) - launcher.slurm_(gpus=4, partition=partitions) - - if 'REGEN' not in os.environ: - folder = train.main.dora.dir / 'grids' / __name__.split('.', 2)[-1] - with launcher.job_array(): - for sig in folder.iterdir(): - if not sig.is_symlink(): - continue - xp = train.main.get_xp_from_sig(sig.name) - launcher(xp.argv) - return - - with launcher.job_array(): - musicgen_base = launcher.bind(solver="musicgen/musicgen_base_32khz") - musicgen_base.bind_({'autocast': False, 'fsdp.use': True}) - - # base musicgen models - musicgen_base_small = musicgen_base.bind({'continue_from': '//pretrained/facebook/musicgen-small'}) - eval(musicgen_base_small, batch_size=128) - - musicgen_base_medium = musicgen_base.bind({'continue_from': '//pretrained/facebook/musicgen-medium'}) - musicgen_base_medium.bind_({'model/lm/model_scale': 'medium'}) - eval(musicgen_base_medium, batch_size=128) - - musicgen_base_large = musicgen_base.bind({'continue_from': '//pretrained/facebook/musicgen-large'}) - musicgen_base_large.bind_({'model/lm/model_scale': 'large'}) - eval(musicgen_base_large, batch_size=128) - - # melody musicgen model - musicgen_melody = launcher.bind(solver="musicgen/musicgen_melody_32khz") - musicgen_melody.bind_({'autocast': False, 'fsdp.use': True}) - - musicgen_melody_medium = musicgen_melody.bind({'continue_from': '//pretrained/facebook/musicgen-melody'}) - musicgen_melody_medium.bind_({'model/lm/model_scale': 'medium'}) - eval(musicgen_melody_medium, batch_size=128, eval_melody=True) diff --git a/spaces/AIFILMS/StyleGANEX/models/bisenet/resnet.py b/spaces/AIFILMS/StyleGANEX/models/bisenet/resnet.py deleted file mode 100644 index aa2bf95130e9815ba378cb6f73207068b81a04b9..0000000000000000000000000000000000000000 --- a/spaces/AIFILMS/StyleGANEX/models/bisenet/resnet.py +++ /dev/null @@ -1,109 +0,0 @@ -#!/usr/bin/python -# -*- encoding: utf-8 -*- - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torch.utils.model_zoo as modelzoo - -# from modules.bn import InPlaceABNSync as BatchNorm2d - -resnet18_url = 'https://download.pytorch.org/models/resnet18-5c106cde.pth' - - -def conv3x3(in_planes, out_planes, stride=1): - """3x3 convolution with padding""" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=1, bias=False) - - 
-class BasicBlock(nn.Module): - def __init__(self, in_chan, out_chan, stride=1): - super(BasicBlock, self).__init__() - self.conv1 = conv3x3(in_chan, out_chan, stride) - self.bn1 = nn.BatchNorm2d(out_chan) - self.conv2 = conv3x3(out_chan, out_chan) - self.bn2 = nn.BatchNorm2d(out_chan) - self.relu = nn.ReLU(inplace=True) - self.downsample = None - if in_chan != out_chan or stride != 1: - self.downsample = nn.Sequential( - nn.Conv2d(in_chan, out_chan, - kernel_size=1, stride=stride, bias=False), - nn.BatchNorm2d(out_chan), - ) - - def forward(self, x): - residual = self.conv1(x) - residual = F.relu(self.bn1(residual)) - residual = self.conv2(residual) - residual = self.bn2(residual) - - shortcut = x - if self.downsample is not None: - shortcut = self.downsample(x) - - out = shortcut + residual - out = self.relu(out) - return out - - -def create_layer_basic(in_chan, out_chan, bnum, stride=1): - layers = [BasicBlock(in_chan, out_chan, stride=stride)] - for i in range(bnum-1): - layers.append(BasicBlock(out_chan, out_chan, stride=1)) - return nn.Sequential(*layers) - - -class Resnet18(nn.Module): - def __init__(self): - super(Resnet18, self).__init__() - self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, - bias=False) - self.bn1 = nn.BatchNorm2d(64) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.layer1 = create_layer_basic(64, 64, bnum=2, stride=1) - self.layer2 = create_layer_basic(64, 128, bnum=2, stride=2) - self.layer3 = create_layer_basic(128, 256, bnum=2, stride=2) - self.layer4 = create_layer_basic(256, 512, bnum=2, stride=2) - self.init_weight() - - def forward(self, x): - x = self.conv1(x) - x = F.relu(self.bn1(x)) - x = self.maxpool(x) - - x = self.layer1(x) - feat8 = self.layer2(x) # 1/8 - feat16 = self.layer3(feat8) # 1/16 - feat32 = self.layer4(feat16) # 1/32 - return feat8, feat16, feat32 - - def init_weight(self): - state_dict = modelzoo.load_url(resnet18_url) - self_state_dict = self.state_dict() - for k, v in state_dict.items(): - if 'fc' in k: continue - self_state_dict.update({k: v}) - self.load_state_dict(self_state_dict) - - def get_params(self): - wd_params, nowd_params = [], [] - for name, module in self.named_modules(): - if isinstance(module, (nn.Linear, nn.Conv2d)): - wd_params.append(module.weight) - if not module.bias is None: - nowd_params.append(module.bias) - elif isinstance(module, nn.BatchNorm2d): - nowd_params += list(module.parameters()) - return wd_params, nowd_params - - -if __name__ == "__main__": - net = Resnet18() - x = torch.randn(16, 3, 224, 224) - out = net(x) - print(out[0].size()) - print(out[1].size()) - print(out[2].size()) - net.get_params() diff --git a/spaces/AIGC-Audio/Make_An_Audio/ldm/models/diffusion/ddpm.py b/spaces/AIGC-Audio/Make_An_Audio/ldm/models/diffusion/ddpm.py deleted file mode 100644 index d6927503659e3aeb3a88965d8574d4435874516f..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio/ldm/models/diffusion/ddpm.py +++ /dev/null @@ -1,1444 +0,0 @@ -""" -wild mixture of -https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py -https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py -https://github.com/CompVis/taming-transformers --- merci -""" -import torch -import torch.nn as nn -import numpy as np -import pytorch_lightning as pl -from torch.optim.lr_scheduler import LambdaLR -from einops 
import rearrange, repeat -from contextlib import contextmanager -from functools import partial -from tqdm import tqdm -from torchvision.utils import make_grid -from pytorch_lightning.utilities.distributed import rank_zero_only - -from ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config -from ldm.modules.ema import LitEma -from ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution -from ldm.models.autoencoder import IdentityFirstStage, AutoencoderKL -from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like -from ldm.models.diffusion.ddim import DDIMSampler - - -__conditioning_keys__ = {'concat': 'c_concat', - 'crossattn': 'c_crossattn', - 'adm': 'y'} - - -def disabled_train(self, mode=True): - """Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - - -def uniform_on_device(r1, r2, shape, device): - return (r1 - r2) * torch.rand(*shape, device=device) + r2 - - -class DDPM(pl.LightningModule): - # classic DDPM with Gaussian diffusion, in image space - def __init__(self, - unet_config, - timesteps=1000, - beta_schedule="linear", - loss_type="l2", - ckpt_path=None, - ignore_keys=[], - load_only_unet=False, - monitor="val/loss", - use_ema=True, - first_stage_key="image", - image_size=256, - channels=3, - log_every_t=100, - clip_denoised=True, - linear_start=1e-4, - linear_end=2e-2, - cosine_s=8e-3, - given_betas=None, - original_elbo_weight=0., - v_posterior=0., # weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta - l_simple_weight=1., - conditioning_key=None, - parameterization="eps", # all config files uses "eps" - scheduler_config=None, - use_positional_encodings=False, - learn_logvar=False, - logvar_init=0., - ): - super().__init__() - assert parameterization in ["eps", "x0"], 'currently only supporting "eps" and "x0"' - self.parameterization = parameterization - print(f"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode") - self.cond_stage_model = None - self.clip_denoised = clip_denoised - self.log_every_t = log_every_t - self.first_stage_key = first_stage_key - self.image_size = image_size # try conv? 
- self.channels = channels - self.use_positional_encodings = use_positional_encodings - self.model = DiffusionWrapper(unet_config, conditioning_key) - count_params(self.model, verbose=True) - self.use_ema = use_ema - if self.use_ema: - self.model_ema = LitEma(self.model) - print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.") - - self.use_scheduler = scheduler_config is not None - if self.use_scheduler: - self.scheduler_config = scheduler_config - - self.v_posterior = v_posterior - self.original_elbo_weight = original_elbo_weight - self.l_simple_weight = l_simple_weight - - if monitor is not None: - self.monitor = monitor - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys, only_model=load_only_unet) - - self.register_schedule(given_betas=given_betas, beta_schedule=beta_schedule, timesteps=timesteps, - linear_start=linear_start, linear_end=linear_end, cosine_s=cosine_s) - - self.loss_type = loss_type - - self.learn_logvar = learn_logvar - self.logvar = torch.full(fill_value=logvar_init, size=(self.num_timesteps,)) - if self.learn_logvar: - self.logvar = nn.Parameter(self.logvar, requires_grad=True) - - def register_schedule(self, given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - if exists(given_betas): - betas = given_betas - else: - betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end, - cosine_s=cosine_s) - alphas = 1. - betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1]) - - timesteps, = betas.shape - self.num_timesteps = int(timesteps) - self.linear_start = linear_start - self.linear_end = linear_end - assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep' - - to_torch = partial(torch.tensor, dtype=torch.float32) - - self.register_buffer('betas', to_torch(betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1))) - - # calculations for posterior q(x_{t-1} | x_t, x_0) - posterior_variance = (1 - self.v_posterior) * betas * (1. - alphas_cumprod_prev) / ( - 1. - alphas_cumprod) + self.v_posterior * betas - # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t) - self.register_buffer('posterior_variance', to_torch(posterior_variance)) - # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain - self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20)))) - self.register_buffer('posterior_mean_coef1', to_torch( - betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod))) - self.register_buffer('posterior_mean_coef2', to_torch( - (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. 
- alphas_cumprod))) - - if self.parameterization == "eps": - lvlb_weights = self.betas ** 2 / ( - 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod)) - elif self.parameterization == "x0": - lvlb_weights = 0.5 * np.sqrt(torch.Tensor(alphas_cumprod)) / (2. * 1 - torch.Tensor(alphas_cumprod)) - else: - raise NotImplementedError("mu not supported") - # TODO how to choose this term - lvlb_weights[0] = lvlb_weights[1] - self.register_buffer('lvlb_weights', lvlb_weights, persistent=False) - assert not torch.isnan(self.lvlb_weights).all() - - @contextmanager - def ema_scope(self, context=None): - if self.use_ema: - self.model_ema.store(self.model.parameters()) - self.model_ema.copy_to(self.model) - if context is not None: - print(f"{context}: Switched to EMA weights") - try: - yield None - finally: - if self.use_ema: - self.model_ema.restore(self.model.parameters()) - if context is not None: - print(f"{context}: Restored training weights") - - def init_from_ckpt(self, path, ignore_keys=list(), only_model=False): - sd = torch.load(path, map_location="cpu") - if "state_dict" in list(sd.keys()): - sd = sd["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict( - sd, strict=False) - print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys: {missing}") - if len(unexpected) > 0: - print(f"Unexpected Keys: {unexpected}") - - def q_mean_variance(self, x_start, t): - """ - Get the distribution q(x_t | x_0). - :param x_start: the [N x C x ...] tensor of noiseless inputs. - :param t: the number of diffusion steps (minus 1). Here, 0 means one step. - :return: A tuple (mean, variance, log_variance), all of x_start's shape. - """ - mean = (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start) - variance = extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape) - log_variance = extract_into_tensor(self.log_one_minus_alphas_cumprod, t, x_start.shape) - return mean, variance, log_variance - - def predict_start_from_noise(self, x_t, t, noise): - return ( - extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - - extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise - ) - - def q_posterior(self, x_start, x_t, t): - posterior_mean = ( - extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start + - extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t - ) - posterior_variance = extract_into_tensor(self.posterior_variance, t, x_t.shape) - posterior_log_variance_clipped = extract_into_tensor(self.posterior_log_variance_clipped, t, x_t.shape) - return posterior_mean, posterior_variance, posterior_log_variance_clipped - - def p_mean_variance(self, x, t, clip_denoised: bool): - model_out = self.model(x, t) - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - if clip_denoised: - x_recon.clamp_(-1., 1.) 
- - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, t, clip_denoised=True, repeat_noise=False): - b, *_, device = *x.shape, x.device - model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, clip_denoised=clip_denoised) - noise = noise_like(x.shape, device, repeat_noise) - # no noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def p_sample_loop(self, shape, return_intermediates=False): - device = self.betas.device - b = shape[0] - img = torch.randn(shape, device=device) - intermediates = [img] - for i in tqdm(reversed(range(0, self.num_timesteps)), desc='Sampling t', total=self.num_timesteps): - img = self.p_sample(img, torch.full((b,), i, device=device, dtype=torch.long), - clip_denoised=self.clip_denoised) - if i % self.log_every_t == 0 or i == self.num_timesteps - 1: - intermediates.append(img) - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample(self, batch_size=16, return_intermediates=False): - image_size = self.image_size - channels = self.channels - return self.p_sample_loop((batch_size, channels, image_size, image_size), - return_intermediates=return_intermediates) - - def q_sample(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start + - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise) - - def get_loss(self, pred, target, mean=True): - if self.loss_type == 'l1': - loss = (target - pred).abs() - if mean: - loss = loss.mean() - elif self.loss_type == 'l2': - if mean: - loss = torch.nn.functional.mse_loss(target, pred) - else: - loss = torch.nn.functional.mse_loss(target, pred, reduction='none') - else: - raise NotImplementedError("unknown loss type '{loss_type}'") - - return loss - - def p_losses(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - model_out = self.model(x_noisy, t) - - loss_dict = {} - if self.parameterization == "eps": - target = noise - elif self.parameterization == "x0": - target = x_start - else: - raise NotImplementedError(f"Paramterization {self.parameterization} not yet supported") - - loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3]) - - log_prefix = 'train' if self.training else 'val' - - loss_dict.update({f'{log_prefix}/loss_simple': loss.mean()}) - loss_simple = loss.mean() * self.l_simple_weight - - loss_vlb = (self.lvlb_weights[t] * loss).mean() - loss_dict.update({f'{log_prefix}/loss_vlb': loss_vlb}) - - loss = loss_simple + self.original_elbo_weight * loss_vlb - - loss_dict.update({f'{log_prefix}/loss': loss}) - - return loss, loss_dict - - def forward(self, x, *args, **kwargs): - # b, c, h, w, device, img_size, = *x.shape, x.device, self.image_size - # assert h == img_size and w == img_size, f'height and width of image must be {img_size}' - t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long() - return self.p_losses(x, t, *args, **kwargs) - - def get_input(self, batch, k): - x = batch[k] - if len(x.shape) == 3: - x = x[..., None] - x = rearrange(x, 'b h w c -> b c h w') - x = 
x.to(memory_format=torch.contiguous_format).float() - return x - - def shared_step(self, batch): - x = self.get_input(batch, self.first_stage_key) - loss, loss_dict = self(x) - return loss, loss_dict - - def training_step(self, batch, batch_idx): - loss, loss_dict = self.shared_step(batch) - - self.log_dict(loss_dict, prog_bar=True, - logger=True, on_step=True, on_epoch=True) - - self.log("global_step", self.global_step, - prog_bar=True, logger=True, on_step=True, on_epoch=False) - - if self.use_scheduler: - lr = self.optimizers().param_groups[0]['lr'] - self.log('lr_abs', lr, prog_bar=True, logger=True, on_step=True, on_epoch=False) - - return loss - - @torch.no_grad() - def validation_step(self, batch, batch_idx): - _, loss_dict_no_ema = self.shared_step(batch) - with self.ema_scope(): - _, loss_dict_ema = self.shared_step(batch) - loss_dict_ema = {key + '_ema': loss_dict_ema[key] for key in loss_dict_ema} - self.log_dict(loss_dict_no_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True) - self.log_dict(loss_dict_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True) - - def on_train_batch_end(self, *args, **kwargs): - if self.use_ema: - self.model_ema(self.model) - - def _get_rows_from_list(self, samples): - n_imgs_per_row = len(samples) - denoise_grid = rearrange(samples, 'n b c h w -> b n c h w') - denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w') - denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row) - return denoise_grid - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=2, sample=True, return_keys=None, **kwargs): - log = dict() - x = self.get_input(batch, self.first_stage_key) - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - x = x.to(self.device)[:N] - log["inputs"] = x - - # get diffusion row - diffusion_row = list() - x_start = x[:n_row] - - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(x_start) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - diffusion_row.append(x_noisy) - - log["diffusion_row"] = self._get_rows_from_list(diffusion_row) - - if sample: - # get denoise row - with self.ema_scope("Plotting"): - samples, denoise_row = self.sample(batch_size=N, return_intermediates=True) - - log["samples"] = samples - log["denoise_row"] = self._get_rows_from_list(denoise_row) - - if return_keys: - if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0: - return log - else: - return {key: log[key] for key in return_keys} - return log - - def configure_optimizers(self): - lr = self.learning_rate - params = list(self.model.parameters()) - if self.learn_logvar: - params = params + [self.logvar] - opt = torch.optim.AdamW(params, lr=lr) - return opt - - -class LatentDiffusion(DDPM): - """main class""" - def __init__(self, - first_stage_config, - cond_stage_config, - num_timesteps_cond=None, - cond_stage_key="image",# 'caption' for txt2image, 'masked_image' for inpainting - cond_stage_trainable=False, - concat_mode=True,# true for inpainting - cond_stage_forward=None, - conditioning_key=None, # 'crossattn' for txt2image, None for inpainting - scale_factor=1.0, - scale_by_std=False, - *args, **kwargs): - self.num_timesteps_cond = default(num_timesteps_cond, 1) - self.scale_by_std = scale_by_std - assert self.num_timesteps_cond <= kwargs['timesteps'] - # for backwards compatibility after implementation of DiffusionWrapper - if conditioning_key is 
None: - conditioning_key = 'concat' if concat_mode else 'crossattn' - if cond_stage_config == '__is_unconditional__': - conditioning_key = None - ckpt_path = kwargs.pop("ckpt_path", None) - ignore_keys = kwargs.pop("ignore_keys", []) - super().__init__(conditioning_key=conditioning_key, *args, **kwargs) - self.concat_mode = concat_mode - self.cond_stage_trainable = cond_stage_trainable - self.cond_stage_key = cond_stage_key - try: - self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1 - except: - self.num_downs = 0 - if not scale_by_std: - self.scale_factor = scale_factor - else: - self.register_buffer('scale_factor', torch.tensor(scale_factor)) - self.instantiate_first_stage(first_stage_config) - self.instantiate_cond_stage(cond_stage_config) - self.cond_stage_forward = cond_stage_forward - self.clip_denoised = False - self.bbox_tokenizer = None - - self.restarted_from_ckpt = False - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys) - self.restarted_from_ckpt = True - - def make_cond_schedule(self, ): - self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long) - ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long() - self.cond_ids[:self.num_timesteps_cond] = ids - - @rank_zero_only - @torch.no_grad() - def on_train_batch_start(self, batch, batch_idx, dataloader_idx): - # only for very first batch - if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and not self.restarted_from_ckpt: - assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously' - # set rescale weight to 1./std of encodings - print("### USING STD-RESCALING ###") - x = super().get_input(batch, self.first_stage_key) - x = x.to(self.device) - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - del self.scale_factor - self.register_buffer('scale_factor', 1. 
/ z.flatten().std()) - print(f"setting self.scale_factor to {self.scale_factor}") - print("### USING STD-RESCALING ###") - - def register_schedule(self, - given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s) - - self.shorten_cond_schedule = self.num_timesteps_cond > 1 - if self.shorten_cond_schedule: - self.make_cond_schedule() - - def instantiate_first_stage(self, config): - model = instantiate_from_config(config) - self.first_stage_model = model.eval() - self.first_stage_model.train = disabled_train - for param in self.first_stage_model.parameters(): - param.requires_grad = False - - def instantiate_cond_stage(self, config): - if not self.cond_stage_trainable: - if config == "__is_first_stage__":# inpaint - print("Using first stage also as cond stage.") - self.cond_stage_model = self.first_stage_model - elif config == "__is_unconditional__": - print(f"Training {self.__class__.__name__} as an unconditional model.") - self.cond_stage_model = None - # self.be_unconditional = True - else: - model = instantiate_from_config(config) - self.cond_stage_model = model.eval() - self.cond_stage_model.train = disabled_train - for param in self.cond_stage_model.parameters(): - param.requires_grad = False - else: - assert config != '__is_first_stage__' - assert config != '__is_unconditional__' - model = instantiate_from_config(config) - self.cond_stage_model = model - - def _get_denoise_row_from_list(self, samples, desc='', force_no_decoder_quantization=False): - denoise_row = [] - for zd in tqdm(samples, desc=desc): - denoise_row.append(self.decode_first_stage(zd.to(self.device), - force_not_quantize=force_no_decoder_quantization)) - n_imgs_per_row = len(denoise_row) - denoise_row = torch.stack(denoise_row) # n_log_step, n_row, C, H, W - denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w') - denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w') - denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row) - return denoise_grid - - def get_first_stage_encoding(self, encoder_posterior): - if isinstance(encoder_posterior, DiagonalGaussianDistribution): - z = encoder_posterior.sample() - elif isinstance(encoder_posterior, torch.Tensor): - z = encoder_posterior - else: - raise NotImplementedError(f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented") - return self.scale_factor * z - - def get_learned_conditioning(self, c): - if self.cond_stage_forward is None: - if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode): - c = self.cond_stage_model.encode(c) - if isinstance(c, DiagonalGaussianDistribution): - c = c.mode() - else: - c = self.cond_stage_model(c) - else: - assert hasattr(self.cond_stage_model, self.cond_stage_forward) - c = getattr(self.cond_stage_model, self.cond_stage_forward)(c) - return c - - def meshgrid(self, h, w): - y = torch.arange(0, h).view(h, 1, 1).repeat(1, w, 1) - x = torch.arange(0, w).view(1, w, 1).repeat(h, 1, 1) - - arr = torch.cat([y, x], dim=-1) - return arr - - def delta_border(self, h, w): - """ - :param h: height - :param w: width - :return: normalized distance to image border, - wtith min distance = 0 at border and max dist = 0.5 at image center - """ - lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2) - arr = self.meshgrid(h, w) / lower_right_corner - dist_left_up = torch.min(arr, dim=-1, keepdims=True)[0] - 
dist_right_down = torch.min(1 - arr, dim=-1, keepdims=True)[0] - edge_dist = torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0] - return edge_dist - - def get_weighting(self, h, w, Ly, Lx, device): - weighting = self.delta_border(h, w) - weighting = torch.clip(weighting, self.split_input_params["clip_min_weight"], - self.split_input_params["clip_max_weight"], ) - weighting = weighting.view(1, h * w, 1).repeat(1, 1, Ly * Lx).to(device) - - if self.split_input_params["tie_braker"]: - L_weighting = self.delta_border(Ly, Lx) - L_weighting = torch.clip(L_weighting, - self.split_input_params["clip_min_tie_weight"], - self.split_input_params["clip_max_tie_weight"]) - - L_weighting = L_weighting.view(1, 1, Ly * Lx).to(device) - weighting = weighting * L_weighting - return weighting - - def get_fold_unfold(self, x, kernel_size, stride, uf=1, df=1): # todo load once not every time, shorten code - """ - :param x: img of size (bs, c, h, w) - :return: n img crops of size (n, bs, c, kernel_size[0], kernel_size[1]) - """ - bs, nc, h, w = x.shape - - # number of crops in image - Ly = (h - kernel_size[0]) // stride[0] + 1 - Lx = (w - kernel_size[1]) // stride[1] + 1 - - if uf == 1 and df == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold = torch.nn.Fold(output_size=x.shape[2:], **fold_params) - - weighting = self.get_weighting(kernel_size[0], kernel_size[1], Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h, w) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0], kernel_size[1], Ly * Lx)) - - elif uf > 1 and df == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold_params2 = dict(kernel_size=(kernel_size[0] * uf, kernel_size[0] * uf), - dilation=1, padding=0, - stride=(stride[0] * uf, stride[1] * uf)) - fold = torch.nn.Fold(output_size=(x.shape[2] * uf, x.shape[3] * uf), **fold_params2) - - weighting = self.get_weighting(kernel_size[0] * uf, kernel_size[1] * uf, Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h * uf, w * uf) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0] * uf, kernel_size[1] * uf, Ly * Lx)) - - elif df > 1 and uf == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold_params2 = dict(kernel_size=(kernel_size[0] // df, kernel_size[0] // df), - dilation=1, padding=0, - stride=(stride[0] // df, stride[1] // df)) - fold = torch.nn.Fold(output_size=(x.shape[2] // df, x.shape[3] // df), **fold_params2) - - weighting = self.get_weighting(kernel_size[0] // df, kernel_size[1] // df, Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h // df, w // df) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0] // df, kernel_size[1] // df, Ly * Lx)) - - else: - raise NotImplementedError - - return fold, unfold, normalization, weighting - - @torch.no_grad() - def get_input(self, batch, k, return_first_stage_outputs=False, force_c_encode=False, - cond_key=None, return_original_cond=False, bs=None): - x = super().get_input(batch, k) - if bs is not None: - x = x[:bs] - x = x.to(self.device) - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - - if self.model.conditioning_key is not None: - if cond_key is None: - cond_key = 
self.cond_stage_key - if cond_key != self.first_stage_key:# cond_key is not image. for inapint it's masked_img - if cond_key in ['caption', 'coordinates_bbox']: - xc = batch[cond_key] - elif cond_key == 'class_label': - xc = batch - else: - xc = super().get_input(batch, cond_key).to(self.device) - else: - xc = x - if not self.cond_stage_trainable or force_c_encode: - if isinstance(xc, dict) or isinstance(xc, list): - # import pudb; pudb.set_trace() - c = self.get_learned_conditioning(xc) - else: - c = self.get_learned_conditioning(xc.to(self.device)) - else: - c = xc - if bs is not None: - c = c[:bs] - - if self.use_positional_encodings: - pos_x, pos_y = self.compute_latent_shifts(batch) - ckey = __conditioning_keys__[self.model.conditioning_key] - c = {ckey: c, 'pos_x': pos_x, 'pos_y': pos_y} - - else: - c = None - xc = None - if self.use_positional_encodings: - pos_x, pos_y = self.compute_latent_shifts(batch) - c = {'pos_x': pos_x, 'pos_y': pos_y} - out = [z, c] - if return_first_stage_outputs: - xrec = self.decode_first_stage(z) - out.extend([x, xrec]) - if return_original_cond: - out.append(xc) - return out - - @torch.no_grad() - def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False): - if predict_cids: - if z.dim() == 4: - z = torch.argmax(z.exp(), dim=1).long() - z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None) - z = rearrange(z, 'b h w c -> b c h w').contiguous() - - z = 1. / self.scale_factor * z - - if hasattr(self, "split_input_params"): - if self.split_input_params["patch_distributed_vq"]: - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. (64, 64) - uf = self.split_input_params["vqf"] - bs, nc, h, w = z.shape - if ks[0] > h or ks[1] > w: - ks = (min(ks[0], h), min(ks[1], w)) - print("reducing Kernel") - - if stride[0] > h or stride[1] > w: - stride = (min(stride[0], h), min(stride[1], w)) - print("reducing stride") - - fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf) - - z = unfold(z) # (bn, nc * prod(**ks), L) - # 1. Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - # 2. apply model loop over last dim - if isinstance(self.first_stage_model, VQModelInterface): - output_list = [self.first_stage_model.decode(z[:, :, :, :, i], - force_not_quantize=predict_cids or force_not_quantize) - for i in range(z.shape[-1])] - else: - - output_list = [self.first_stage_model.decode(z[:, :, :, :, i]) - for i in range(z.shape[-1])] - - o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L) - o = o * weighting - # Reverse 1. 
reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - decoded = fold(o) - decoded = decoded / normalization # norm is shape (1, 1, h, w) - return decoded - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - # same as above but without decorator - def differentiable_decode_first_stage(self, z, predict_cids=False, force_not_quantize=False): - if predict_cids: - if z.dim() == 4: - z = torch.argmax(z.exp(), dim=1).long() - z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None) - z = rearrange(z, 'b h w c -> b c h w').contiguous() - - z = 1. / self.scale_factor * z - - if hasattr(self, "split_input_params"): - if self.split_input_params["patch_distributed_vq"]: - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. (64, 64) - uf = self.split_input_params["vqf"] - bs, nc, h, w = z.shape - if ks[0] > h or ks[1] > w: - ks = (min(ks[0], h), min(ks[1], w)) - print("reducing Kernel") - - if stride[0] > h or stride[1] > w: - stride = (min(stride[0], h), min(stride[1], w)) - print("reducing stride") - - fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf) - - z = unfold(z) # (bn, nc * prod(**ks), L) - # 1. Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - # 2. apply model loop over last dim - if isinstance(self.first_stage_model, VQModelInterface): - output_list = [self.first_stage_model.decode(z[:, :, :, :, i], - force_not_quantize=predict_cids or force_not_quantize) - for i in range(z.shape[-1])] - else: - - output_list = [self.first_stage_model.decode(z[:, :, :, :, i]) - for i in range(z.shape[-1])] - - o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L) - o = o * weighting - # Reverse 1. reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - decoded = fold(o) - decoded = decoded / normalization # norm is shape (1, 1, h, w) - return decoded - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - @torch.no_grad() - def encode_first_stage(self, x): - if hasattr(self, "split_input_params"): - if self.split_input_params["patch_distributed_vq"]: - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. 
(64, 64) - df = self.split_input_params["vqf"] - self.split_input_params['original_image_size'] = x.shape[-2:] - bs, nc, h, w = x.shape - if ks[0] > h or ks[1] > w: - ks = (min(ks[0], h), min(ks[1], w)) - print("reducing Kernel") - - if stride[0] > h or stride[1] > w: - stride = (min(stride[0], h), min(stride[1], w)) - print("reducing stride") - - fold, unfold, normalization, weighting = self.get_fold_unfold(x, ks, stride, df=df) - z = unfold(x) # (bn, nc * prod(**ks), L) - # Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - output_list = [self.first_stage_model.encode(z[:, :, :, :, i]) - for i in range(z.shape[-1])] - - o = torch.stack(output_list, axis=-1) - o = o * weighting - - # Reverse reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - decoded = fold(o) - decoded = decoded / normalization - return decoded - - else: - return self.first_stage_model.encode(x) - else: - return self.first_stage_model.encode(x) - - def shared_step(self, batch, **kwargs): - x, c = self.get_input(batch, self.first_stage_key) - loss = self(x, c) - return loss - - def forward(self, x, c, *args, **kwargs): - t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long() - if self.model.conditioning_key is not None: - assert c is not None - if self.cond_stage_trainable:# true when use text - c = self.get_learned_conditioning(c) # c: string list -> [B, T, Context_dim] - if self.shorten_cond_schedule: # TODO: drop this option - tc = self.cond_ids[t].to(self.device) - c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float())) - return self.p_losses(x, c, t, *args, **kwargs) - - def _rescale_annotations(self, bboxes, crop_coordinates): # TODO: move to dataset - def rescale_bbox(bbox): - x0 = clamp((bbox[0] - crop_coordinates[0]) / crop_coordinates[2]) - y0 = clamp((bbox[1] - crop_coordinates[1]) / crop_coordinates[3]) - w = min(bbox[2] / crop_coordinates[2], 1 - x0) - h = min(bbox[3] / crop_coordinates[3], 1 - y0) - return x0, y0, w, h - - return [rescale_bbox(b) for b in bboxes] - - def apply_model(self, x_noisy, t, cond, return_ids=False): - - if isinstance(cond, dict): - # hybrid case, cond is exptected to be a dict - pass - else: - if not isinstance(cond, list): - cond = [cond] - key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn' - cond = {key: cond} - - if hasattr(self, "split_input_params"): - assert len(cond) == 1 # todo can only deal with one conditioning atm - assert not return_ids - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. 
(64, 64) - - h, w = x_noisy.shape[-2:] - - fold, unfold, normalization, weighting = self.get_fold_unfold(x_noisy, ks, stride) - - z = unfold(x_noisy) # (bn, nc * prod(**ks), L) - # Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - z_list = [z[:, :, :, :, i] for i in range(z.shape[-1])] - - if self.cond_stage_key in ["image", "LR_image", "segmentation", - 'bbox_img'] and self.model.conditioning_key: # todo check for completeness - c_key = next(iter(cond.keys())) # get key - c = next(iter(cond.values())) # get value - assert (len(c) == 1) # todo extend to list with more than one elem - c = c[0] # get element - - c = unfold(c) - c = c.view((c.shape[0], -1, ks[0], ks[1], c.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - cond_list = [{c_key: [c[:, :, :, :, i]]} for i in range(c.shape[-1])] - - elif self.cond_stage_key == 'coordinates_bbox': - assert 'original_image_size' in self.split_input_params, 'BoudingBoxRescaling is missing original_image_size' - - # assuming padding of unfold is always 0 and its dilation is always 1 - n_patches_per_row = int((w - ks[0]) / stride[0] + 1) - full_img_h, full_img_w = self.split_input_params['original_image_size'] - # as we are operating on latents, we need the factor from the original image size to the - # spatial latent size to properly rescale the crops for regenerating the bbox annotations - num_downs = self.first_stage_model.encoder.num_resolutions - 1 - rescale_latent = 2 ** (num_downs) - - # get top left postions of patches as conforming for the bbbox tokenizer, therefore we - # need to rescale the tl patch coordinates to be in between (0,1) - tl_patch_coordinates = [(rescale_latent * stride[0] * (patch_nr % n_patches_per_row) / full_img_w, - rescale_latent * stride[1] * (patch_nr // n_patches_per_row) / full_img_h) - for patch_nr in range(z.shape[-1])] - - # patch_limits are tl_coord, width and height coordinates as (x_tl, y_tl, h, w) - patch_limits = [(x_tl, y_tl, - rescale_latent * ks[0] / full_img_w, - rescale_latent * ks[1] / full_img_h) for x_tl, y_tl in tl_patch_coordinates] - # patch_values = [(np.arange(x_tl,min(x_tl+ks, 1.)),np.arange(y_tl,min(y_tl+ks, 1.))) for x_tl, y_tl in tl_patch_coordinates] - - # tokenize crop coordinates for the bounding boxes of the respective patches - patch_limits_tknzd = [torch.LongTensor(self.bbox_tokenizer._crop_encoder(bbox))[None].to(self.device) - for bbox in patch_limits] # list of length l with tensors of shape (1, 2) - print(patch_limits_tknzd[0].shape) - # cut tknzd crop position from conditioning - assert isinstance(cond, dict), 'cond must be dict to be fed into model' - cut_cond = cond['c_crossattn'][0][..., :-2].to(self.device) - print(cut_cond.shape) - - adapted_cond = torch.stack([torch.cat([cut_cond, p], dim=1) for p in patch_limits_tknzd]) - adapted_cond = rearrange(adapted_cond, 'l b n -> (l b) n') - print(adapted_cond.shape) - adapted_cond = self.get_learned_conditioning(adapted_cond) - print(adapted_cond.shape) - adapted_cond = rearrange(adapted_cond, '(l b) n d -> l b n d', l=z.shape[-1]) - print(adapted_cond.shape) - - cond_list = [{'c_crossattn': [e]} for e in adapted_cond] - - else: - cond_list = [cond for i in range(z.shape[-1])] # Todo make this more efficient - - # apply model by loop over crops - output_list = [self.model(z_list[i], t, **cond_list[i]) for i in range(z.shape[-1])] - assert not isinstance(output_list[0], - tuple) # todo cant deal with multiple model outputs check this never happens - - o = torch.stack(output_list, 
axis=-1) - o = o * weighting - # Reverse reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - x_recon = fold(o) / normalization - - else: - x_recon = self.model(x_noisy, t, **cond) - - if isinstance(x_recon, tuple) and not return_ids: - return x_recon[0] - else: - return x_recon - - def _predict_eps_from_xstart(self, x_t, t, pred_xstart): - return (extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart) / \ - extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) - - def _prior_bpd(self, x_start): - """ - Get the prior KL term for the variational lower-bound, measured in - bits-per-dim. - This term can't be optimized, as it only depends on the encoder. - :param x_start: the [N x C x ...] tensor of inputs. - :return: a batch of [N] KL values (in bits), one per batch element. - """ - batch_size = x_start.shape[0] - t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device) - qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t) - kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0) - return mean_flat(kl_prior) / np.log(2.0) - - def p_losses(self, x_start, cond, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - model_output = self.apply_model(x_noisy, t, cond) - - loss_dict = {} - prefix = 'train' if self.training else 'val' - - if self.parameterization == "x0": - target = x_start - elif self.parameterization == "eps": - target = noise - else: - raise NotImplementedError() - - loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3]) - loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()}) - - logvar_t = self.logvar[t].to(self.device) - loss = loss_simple / torch.exp(logvar_t) + logvar_t - # loss = loss_simple / torch.exp(self.logvar) + self.logvar - if self.learn_logvar: - loss_dict.update({f'{prefix}/loss_gamma': loss.mean()}) - loss_dict.update({'logvar': self.logvar.data.mean()}) - - loss = self.l_simple_weight * loss.mean() - - loss_vlb = self.get_loss(model_output, target, mean=False).mean(dim=(1, 2, 3)) - loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean() - loss_dict.update({f'{prefix}/loss_vlb': loss_vlb}) - loss += (self.original_elbo_weight * loss_vlb) - loss_dict.update({f'{prefix}/loss': loss}) - - return loss, loss_dict - - def p_mean_variance(self, x, c, t, clip_denoised: bool, return_codebook_ids=False, quantize_denoised=False, - return_x0=False, score_corrector=None, corrector_kwargs=None): - t_in = t - model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids) - - if score_corrector is not None: - assert self.parameterization == "eps" - model_out = score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs) - - if return_codebook_ids: - model_out, logits = model_out - - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - else: - raise NotImplementedError() - - if clip_denoised: - x_recon.clamp_(-1., 1.) 
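        # With the "eps" parameterization the network predicts the added noise, and the
        # denoised estimate is recovered via the standard DDPM identity
        #     x0_hat = (x_t - sqrt(1 - alpha_bar_t) * eps_hat) / sqrt(alpha_bar_t),
        # which predict_start_from_noise evaluates from the precomputed schedule buffers;
        # clamping to [-1, 1] keeps the estimate inside the data range before the posterior
        # q(x_{t-1} | x_t, x0_hat) is formed below.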
- if quantize_denoised: - x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon) - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - if return_codebook_ids: - return model_mean, posterior_variance, posterior_log_variance, logits - elif return_x0: - return model_mean, posterior_variance, posterior_log_variance, x_recon - else: - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, c, t, clip_denoised=False, repeat_noise=False, - return_codebook_ids=False, quantize_denoised=False, return_x0=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None): - b, *_, device = *x.shape, x.device - outputs = self.p_mean_variance(x=x, c=c, t=t, clip_denoised=clip_denoised, - return_codebook_ids=return_codebook_ids, - quantize_denoised=quantize_denoised, - return_x0=return_x0, - score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) - if return_codebook_ids: - raise DeprecationWarning("Support dropped.") - model_mean, _, model_log_variance, logits = outputs - elif return_x0: - model_mean, _, model_log_variance, x0 = outputs - else: - model_mean, _, model_log_variance = outputs - - noise = noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - # no noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - - if return_codebook_ids: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, logits.argmax(dim=1) - if return_x0: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0 - else: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def progressive_denoising(self, cond, shape, verbose=True, callback=None, quantize_denoised=False, - img_callback=None, mask=None, x0=None, temperature=1., noise_dropout=0., - score_corrector=None, corrector_kwargs=None, batch_size=None, x_T=None, start_T=None, - log_every_t=None): - if not log_every_t: - log_every_t = self.log_every_t - timesteps = self.num_timesteps - if batch_size is not None: - b = batch_size if batch_size is not None else shape[0] - shape = [batch_size] + list(shape) - else: - b = batch_size = shape[0] - if x_T is None: - img = torch.randn(shape, device=self.device) - else: - img = x_T - intermediates = [] - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Progressive Generation', - total=timesteps) if verbose else reversed( - range(0, timesteps)) - if type(temperature) == float: - temperature = [temperature] * timesteps - - for i in iterator: - ts = torch.full((b,), i, device=self.device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img, x0_partial = self.p_sample(img, cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised, return_x0=True, - temperature=temperature[i], noise_dropout=noise_dropout, - 
score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) - if mask is not None: - assert x0 is not None - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(x0_partial) - if callback: callback(i) - if img_callback: img_callback(img, i) - return img, intermediates - - @torch.no_grad() - def p_sample_loop(self, cond, shape, return_intermediates=False, - x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, start_T=None, - log_every_t=None): - - if not log_every_t: - log_every_t = self.log_every_t - device = self.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - intermediates = [img] - if timesteps is None: - timesteps = self.num_timesteps - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed( - range(0, timesteps)) - - if mask is not None: - assert x0 is not None - assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match - - for i in iterator: - ts = torch.full((b,), i, device=device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img = self.p_sample(img, cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised) - if mask is not None: - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(img) - if callback: callback(i) - if img_callback: img_callback(img, i) - - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample(self, cond, batch_size=16, return_intermediates=False, x_T=None, - verbose=True, timesteps=None, quantize_denoised=False, - mask=None, x0=None, shape=None,**kwargs): - if shape is None: - shape = (batch_size, self.channels, self.image_size, self.image_size) - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - return self.p_sample_loop(cond, - shape, - return_intermediates=return_intermediates, x_T=x_T, - verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised, - mask=mask, x0=x0) - - @torch.no_grad() - def sample_log(self,cond,batch_size,ddim, ddim_steps,**kwargs): - - if ddim: - ddim_sampler = DDIMSampler(self) - shape = (self.channels, self.image_size, self.image_size) - samples, intermediates =ddim_sampler.sample(ddim_steps,batch_size, - shape,cond,verbose=False,**kwargs) - - else: - samples, intermediates = self.sample(cond=cond, batch_size=batch_size, - return_intermediates=True,**kwargs) - - return samples, intermediates - - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=200, ddim_eta=1., return_keys=None, - quantize_denoised=True, inpaint=True, plot_denoise_rows=False, plot_progressive_rows=True, - plot_diffusion_rows=True, **kwargs): - - use_ddim = ddim_steps is not None - - log = dict() - z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key, - 
return_first_stage_outputs=True, - force_c_encode=True, - return_original_cond=True, - bs=N) - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - log["inputs"] = x - log["reconstruction"] = xrec - if self.model.conditioning_key is not None: - if hasattr(self.cond_stage_model, "decode"): - xc = self.cond_stage_model.decode(c) - log["conditioning"] = xc - elif self.cond_stage_key in ["caption"]: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["caption"]) - log["conditioning"] = xc - elif self.cond_stage_key == 'class_label': - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"]) - log['conditioning'] = xc - elif isimage(xc): - log["conditioning"] = xc - if ismap(xc): - log["original_conditioning"] = self.to_rgb(xc) - - if plot_diffusion_rows: - # get diffusion row - diffusion_row = list() - z_start = z[:n_row] - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(z_start) - z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise) - diffusion_row.append(self.decode_first_stage(z_noisy)) - - diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W - diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w') - diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w') - diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0]) - log["diffusion_row"] = diffusion_grid - - if sample: - # get denoise row - with self.ema_scope("Plotting"): - samples, z_denoise_row = self.sample_log(cond=c,batch_size=N,ddim=use_ddim, - ddim_steps=ddim_steps,eta=ddim_eta) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True) - x_samples = self.decode_first_stage(samples) - log["samples"] = x_samples - if plot_denoise_rows: - denoise_grid = self._get_denoise_row_from_list(z_denoise_row) - log["denoise_row"] = denoise_grid - - if quantize_denoised and not isinstance(self.first_stage_model, AutoencoderKL) and not isinstance( - self.first_stage_model, IdentityFirstStage): - # also display when quantizing x0 while sampling - with self.ema_scope("Plotting Quantized Denoised"): - samples, z_denoise_row = self.sample_log(cond=c,batch_size=N,ddim=use_ddim, - ddim_steps=ddim_steps,eta=ddim_eta, - quantize_denoised=True) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True, - # quantize_denoised=True) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_x0_quantized"] = x_samples - - if inpaint: - # make a simple center square - b, h, w = z.shape[0], z.shape[2], z.shape[3] - mask = torch.ones(N, h, w).to(self.device) - # zeros will be filled in - mask[:, h // 4:3 * h // 4, w // 4:3 * w // 4] = 0. - mask = mask[:, None, ...] 
- with self.ema_scope("Plotting Inpaint"): - - samples, _ = self.sample_log(cond=c,batch_size=N,ddim=use_ddim, eta=ddim_eta, - ddim_steps=ddim_steps, x0=z[:N], mask=mask) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_inpainting"] = x_samples - log["mask"] = mask - - # outpaint - with self.ema_scope("Plotting Outpaint"): - samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,eta=ddim_eta, - ddim_steps=ddim_steps, x0=z[:N], mask=mask) - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_outpainting"] = x_samples - - if plot_progressive_rows: - with self.ema_scope("Plotting Progressives"): - img, progressives = self.progressive_denoising(c, - shape=(self.channels, self.image_size, self.image_size), - batch_size=N) - prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation") - log["progressive_row"] = prog_row - - if return_keys: - if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0: - return log - else: - return {key: log[key] for key in return_keys} - return log - - def configure_optimizers(self): - lr = self.learning_rate - params = list(self.model.parameters()) - if self.cond_stage_trainable: - print(f"{self.__class__.__name__}: Also optimizing conditioner params!") - params = params + list(self.cond_stage_model.parameters()) - if self.learn_logvar: - print('Diffusion model optimizing logvar') - params.append(self.logvar) - opt = torch.optim.AdamW(params, lr=lr) - if self.use_scheduler: - assert 'target' in self.scheduler_config - scheduler = instantiate_from_config(self.scheduler_config) - - print("Setting up LambdaLR scheduler...") - scheduler = [ - { - 'scheduler': LambdaLR(opt, lr_lambda=scheduler.schedule), - 'interval': 'step', - 'frequency': 1 - }] - return [opt], scheduler - return opt - - @torch.no_grad() - def to_rgb(self, x): - x = x.float() - if not hasattr(self, "colorize"): - self.colorize = torch.randn(3, x.shape[1], 1, 1).to(x) - x = nn.functional.conv2d(x, weight=self.colorize) - x = 2. * (x - x.min()) / (x.max() - x.min()) - 1. 
- return x - - -class DiffusionWrapper(pl.LightningModule): - def __init__(self, diff_model_config, conditioning_key): - super().__init__() - self.diffusion_model = instantiate_from_config(diff_model_config) - self.conditioning_key = conditioning_key # 'crossattn' for txt2image, concat for inpainting - assert self.conditioning_key in [None, 'concat', 'crossattn', 'hybrid', 'adm'] - - def forward(self, x, t, c_concat: list = None, c_crossattn: list = None): - """param x: tensor with shape:[B,C,mel_len,T]""" - if self.conditioning_key is None: - out = self.diffusion_model(x, t) - elif self.conditioning_key == 'concat': - xc = torch.cat([x] + c_concat, dim=1)# channel dim,x shape (b,3,64,64) c_concat shape(b,4,64,64) - out = self.diffusion_model(xc, t) - elif self.conditioning_key == 'crossattn': - cc = torch.cat(c_crossattn, 1)# [b,seq_len,dim] - out = self.diffusion_model(x, t, context=cc) - elif self.conditioning_key == 'hybrid':# not implemented in the LatentDiffusion - xc = torch.cat([x] + c_concat, dim=1) - cc = torch.cat(c_crossattn, 1) - out = self.diffusion_model(xc, t, context=cc) - elif self.conditioning_key == 'adm': - cc = c_crossattn[0] - out = self.diffusion_model(x, t, y=cc) - else: - raise NotImplementedError() - - return out - - -class Layout2ImgDiffusion(LatentDiffusion): - # TODO: move all layout-specific hacks to this class - def __init__(self, cond_stage_key, *args, **kwargs): - assert cond_stage_key == 'coordinates_bbox', 'Layout2ImgDiffusion only for cond_stage_key="coordinates_bbox"' - super().__init__(cond_stage_key=cond_stage_key, *args, **kwargs) - - def log_images(self, batch, N=8, *args, **kwargs): - logs = super().log_images(batch=batch, N=N, *args, **kwargs) - - key = 'train' if self.training else 'validation' - dset = self.trainer.datamodule.datasets[key] - mapper = dset.conditional_builders[self.cond_stage_key] - - bbox_imgs = [] - map_fn = lambda catno: dset.get_textual_label(dset.get_category_id(catno)) - for tknzd_bbox in batch[self.cond_stage_key][:N]: - bboximg = mapper.plot(tknzd_bbox.detach().cpu(), map_fn, (256, 256)) - bbox_imgs.append(bboximg) - - cond_img = torch.stack(bbox_imgs, dim=0) - logs['bbox_image'] = cond_img - return logs diff --git a/spaces/AIGC-Audio/Make_An_Audio/wav_evaluation/models/audio.py b/spaces/AIGC-Audio/Make_An_Audio/wav_evaluation/models/audio.py deleted file mode 100644 index 0980d729dd3b579fee0380d0b9d7055e6843ba12..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/Make_An_Audio/wav_evaluation/models/audio.py +++ /dev/null @@ -1,179 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from torchlibrosa.stft import Spectrogram, LogmelFilterBank - -def get_audio_encoder(name: str): - if name == "Cnn14": - return Cnn14 - else: - raise Exception('The audio encoder name {} is incorrect or not supported'.format(name)) - - -class ConvBlock(nn.Module): - def __init__(self, in_channels, out_channels): - - super(ConvBlock, self).__init__() - - self.conv1 = nn.Conv2d(in_channels=in_channels, - out_channels=out_channels, - kernel_size=(3, 3), stride=(1, 1), - padding=(1, 1), bias=False) - - self.conv2 = nn.Conv2d(in_channels=out_channels, - out_channels=out_channels, - kernel_size=(3, 3), stride=(1, 1), - padding=(1, 1), bias=False) - - self.bn1 = nn.BatchNorm2d(out_channels) - self.bn2 = nn.BatchNorm2d(out_channels) - - - def forward(self, input, pool_size=(2, 2), pool_type='avg'): - - x = input - x = F.relu_(self.bn1(self.conv1(x))) - x = F.relu_(self.bn2(self.conv2(x))) - if pool_type == 
'max': - x = F.max_pool2d(x, kernel_size=pool_size) - elif pool_type == 'avg': - x = F.avg_pool2d(x, kernel_size=pool_size) - elif pool_type == 'avg+max': - x1 = F.avg_pool2d(x, kernel_size=pool_size) - x2 = F.max_pool2d(x, kernel_size=pool_size) - x = x1 + x2 - else: - raise Exception('Incorrect argument!') - - return x - - -class ConvBlock5x5(nn.Module): - def __init__(self, in_channels, out_channels): - - super(ConvBlock5x5, self).__init__() - - self.conv1 = nn.Conv2d(in_channels=in_channels, - out_channels=out_channels, - kernel_size=(5, 5), stride=(1, 1), - padding=(2, 2), bias=False) - - self.bn1 = nn.BatchNorm2d(out_channels) - - - def forward(self, input, pool_size=(2, 2), pool_type='avg'): - - x = input - x = F.relu_(self.bn1(self.conv1(x))) - if pool_type == 'max': - x = F.max_pool2d(x, kernel_size=pool_size) - elif pool_type == 'avg': - x = F.avg_pool2d(x, kernel_size=pool_size) - elif pool_type == 'avg+max': - x1 = F.avg_pool2d(x, kernel_size=pool_size) - x2 = F.max_pool2d(x, kernel_size=pool_size) - x = x1 + x2 - else: - raise Exception('Incorrect argument!') - - return x - - -class AttBlock(nn.Module): - def __init__(self, n_in, n_out, activation='linear', temperature=1.): - super(AttBlock, self).__init__() - - self.activation = activation - self.temperature = temperature - self.att = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True) - self.cla = nn.Conv1d(in_channels=n_in, out_channels=n_out, kernel_size=1, stride=1, padding=0, bias=True) - - self.bn_att = nn.BatchNorm1d(n_out) - - def forward(self, x): - # x: (n_samples, n_in, n_time) - norm_att = torch.softmax(torch.clamp(self.att(x), -10, 10), dim=-1) - cla = self.nonlinear_transform(self.cla(x)) - x = torch.sum(norm_att * cla, dim=2) - return x, norm_att, cla - - def nonlinear_transform(self, x): - if self.activation == 'linear': - return x - elif self.activation == 'sigmoid': - return torch.sigmoid(x) - - -class Cnn14(nn.Module): - def __init__(self, sample_rate, window_size, hop_size, mel_bins, fmin, - fmax, classes_num, out_emb): - - super(Cnn14, self).__init__() - - window = 'hann' - center = True - pad_mode = 'reflect' - ref = 1.0 - amin = 1e-10 - top_db = None - - # Spectrogram extractor - self.spectrogram_extractor = Spectrogram(n_fft=window_size, hop_length=hop_size, - win_length=window_size, window=window, center=center, pad_mode=pad_mode, - freeze_parameters=True) - - # Logmel feature extractor - self.logmel_extractor = LogmelFilterBank(sr=sample_rate, n_fft=window_size, - n_mels=mel_bins, fmin=fmin, fmax=fmax, ref=ref, amin=amin, top_db=top_db, - freeze_parameters=True) - - self.bn0 = nn.BatchNorm2d(64) - - self.conv_block1 = ConvBlock(in_channels=1, out_channels=64) - self.conv_block2 = ConvBlock(in_channels=64, out_channels=128) - self.conv_block3 = ConvBlock(in_channels=128, out_channels=256) - self.conv_block4 = ConvBlock(in_channels=256, out_channels=512) - self.conv_block5 = ConvBlock(in_channels=512, out_channels=1024) - self.conv_block6 = ConvBlock(in_channels=1024, out_channels=2048) - - # out_emb is 2048 for best Cnn14 - self.fc1 = nn.Linear(2048, out_emb, bias=True) - self.fc_audioset = nn.Linear(out_emb, classes_num, bias=True) - - def forward(self, input, mixup_lambda=None): - """ - Input: (batch_size, data_length) - """ - - x = self.spectrogram_extractor(input) # (batch_size, 1, time_steps, freq_bins) - x = self.logmel_extractor(x) # (batch_size, 1, time_steps, mel_bins) - - x = x.transpose(1, 3) - x = self.bn0(x) - x = x.transpose(1, 3) - - x = 
self.conv_block1(x, pool_size=(2, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block2(x, pool_size=(2, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block3(x, pool_size=(2, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block4(x, pool_size=(2, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block5(x, pool_size=(2, 2), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = self.conv_block6(x, pool_size=(1, 1), pool_type='avg') - x = F.dropout(x, p=0.2, training=self.training) - x = torch.mean(x, dim=3) - - (x1, _) = torch.max(x, dim=2) - x2 = torch.mean(x, dim=2) - x = x1 + x2 - x = F.dropout(x, p=0.5, training=self.training) - x = F.relu_(self.fc1(x)) - embedding = F.dropout(x, p=0.5, training=self.training) - clipwise_output = torch.sigmoid(self.fc_audioset(x)) - - output_dict = {'clipwise_output': clipwise_output, 'embedding': embedding} - - return output_dict \ No newline at end of file diff --git a/spaces/AILab-CVC/SEED-LLaMA/gradio_demo/conversation.py b/spaces/AILab-CVC/SEED-LLaMA/gradio_demo/conversation.py deleted file mode 100644 index dd1b45b09e479e9f53ff6fba42568f7acaf53e20..0000000000000000000000000000000000000000 --- a/spaces/AILab-CVC/SEED-LLaMA/gradio_demo/conversation.py +++ /dev/null @@ -1,190 +0,0 @@ -import dataclasses -from enum import auto, Enum -from typing import List, Tuple - -import io -import base64 -import os -from PIL import Image -import copy - -IMG_FLAG = '' - - -class SeparatorStyle(Enum): - """Different separator style.""" - SINGLE = auto() - TWO = auto() - MPT = auto() - PLAIN = auto() - LLAMA_2 = auto() - - -def decode_image(encoded_image: str) -> Image: - decoded_bytes = base64.b64decode(encoded_image.encode('utf-8')) - buffer = io.BytesIO(decoded_bytes) - image = Image.open(buffer) - return image - - -def encode_image(image: Image.Image, format: str = 'PNG') -> str: - with io.BytesIO() as buffer: - image.save(buffer, format=format) - encoded_image = base64.b64encode(buffer.getvalue()).decode('utf-8') - return encoded_image - - -@dataclasses.dataclass -class Conversation: - """A class that keeps all conversation history.""" - system: str - roles: List[str] - messages: List[dict] # multi-turn -> user & assistant -> {'images': [PIL.Image,], 'text': str} - offset: int - sep_style: SeparatorStyle = SeparatorStyle.SINGLE - sep: str = "###" - sep2: str = None - version: str = "Unknown" - - skip_next: bool = False - - def get_prompt(self): - messages = copy.deepcopy(self.messages) - if self.sep_style == SeparatorStyle.SINGLE: - if self.system is None or self.system == '': - text = '' - else: - text = self.system + self.sep - images = [] - for message in messages: - text += message['role'] + ": " + message['message']['text'] + self.sep - for image_path, image_ids in zip(message['message']['images'], message['message']['images_ids']): - if image_ids is not None: - images.append(image_ids) - else: - image = Image.open(image_path).resize((256, 256)) - image_base64 = encode_image(image) - images.append(image_base64) - - text += self.roles[1] + ":" - elif self.sep_style == SeparatorStyle.LLAMA_2: - b_token = "[INST] " - e_token = " [/INST]" - if self.system is None or self.system == '': - text = '' - else: - text = f"<>\n{self.system}\n<>\n\n" - images = [] - for idx, message in enumerate(messages): - # text += message['role'] + ": " + message['message']['text'] + self.sep - if idx % 2 == 0: 
- text += b_token + message['message']['text'] + e_token + self.sep - else: - text += message['message']['text'] + self.sep - - for image_path, image_ids in zip(message['message']['images'], message['message']['images_ids']): - if image_ids is not None: - images.append(image_ids) - else: - image = Image.open(image_path).resize((256, 256)) - image_base64 = encode_image(image) - images.append(image_base64) - else: - raise NotImplementedError - - return {'text': text, 'images': images} - - def update_image_ids(self, images_ids): - image_count = 0 - for message in self.messages: - for idx in range(len(message['message']['images_ids'])): - if message['message']["images_ids"][idx] is None: - message['message']["images_ids"][idx] = images_ids[image_count] - image_count += 1 - - assert len(images_ids) == image_count, print(len(images_ids), image_count) - - def append_message(self, role, message): - self.messages.append([role, message]) - - def to_gradio_chatbot(self): - dialog = [] - for i, single_turn in enumerate(self.messages[self.offset:]): - single_turn = single_turn['message'] - text_list = single_turn['text'].split(IMG_FLAG) - assert len(text_list) == len(single_turn['images']) + 1, print(text_list, len(single_turn['images'])) - message = '' - for image_idx in range(len(single_turn['images'])): - # image = single_turn['images'][image_idx] - # image_base64 = encode_image(image) - # image_str = f'user upload image' - image_path = single_turn['images'][image_idx] - if image_path == '': - message += text_list[image_idx] + '' - else: - message += text_list[image_idx] + f'![](file={image_path})' - message += text_list[-1] - - if i % 2 == 0: - dialog.append([message, None]) - else: - dialog[-1][-1] = message - - return dialog - - def copy(self): - return Conversation(system=self.system, - roles=self.roles, - messages=copy.deepcopy(self.messages), - offset=self.offset, - sep_style=self.sep_style, - sep=self.sep, - sep2=self.sep2, - version=self.version) - - def dict(self): - messages = copy.deepcopy(self.messages) - for message in messages: - if 'images_ids' in message: - message.pop('images_ids') - for i in range(len(message['message']['images'])): - message['message']['images'][i] = os.path.basename(message['message']['images'][i]) - return { - "system": self.system, - "roles": self.roles, - "messages": messages, - "offset": self.offset, - "sep": self.sep, - "sep2": self.sep2, - } - - -conv_seed_vicuna = Conversation( - system="", - roles=("USER", "ASSISTANT"), - version="v2", - messages=[], - offset=0, - sep_style=SeparatorStyle.SINGLE, - sep='\n', -) - -conv_seed_vicuna_system = Conversation( - system="A chat between a curious user and an artificial intelligence assistant. 
", - roles=("USER", "ASSISTANT"), - version="v2", - messages=[], - offset=0, - sep_style=SeparatorStyle.SINGLE, - sep='\n', -) - -conv_seed_llama2 = Conversation( - system="", - roles=("[INST]", "[/INST]"), - version="v2", - messages=[], - offset=0, - sep_style=SeparatorStyle.LLAMA_2, - sep='\n', -) \ No newline at end of file diff --git a/spaces/Abhaykoul/HelpingAI-2.0/README.md b/spaces/Abhaykoul/HelpingAI-2.0/README.md deleted file mode 100644 index 17fe074947f534c723c096b8863e378f8b4433a9..0000000000000000000000000000000000000000 --- a/spaces/Abhaykoul/HelpingAI-2.0/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: HelpingAI 2.0 -emoji: 👀 -colorFrom: blue -colorTo: blue -sdk: streamlit -sdk_version: 1.28.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Aditya9790/yolo7-object-tracking/utils/torch_utils.py b/spaces/Aditya9790/yolo7-object-tracking/utils/torch_utils.py deleted file mode 100644 index bee0ad57517a334748afe7db19f6e45bd657afe6..0000000000000000000000000000000000000000 --- a/spaces/Aditya9790/yolo7-object-tracking/utils/torch_utils.py +++ /dev/null @@ -1,374 +0,0 @@ -# YOLOR PyTorch utils - -import datetime -import logging -import math -import os -import platform -import subprocess -import time -from contextlib import contextmanager -from copy import deepcopy -from pathlib import Path - -import torch -import torch.backends.cudnn as cudnn -import torch.nn as nn -import torch.nn.functional as F -import torchvision - -try: - import thop # for FLOPS computation -except ImportError: - thop = None -logger = logging.getLogger(__name__) - - -@contextmanager -def torch_distributed_zero_first(local_rank: int): - """ - Decorator to make all processes in distributed training wait for each local_master to do something. - """ - if local_rank not in [-1, 0]: - torch.distributed.barrier() - yield - if local_rank == 0: - torch.distributed.barrier() - - -def init_torch_seeds(seed=0): - # Speed-reproducibility tradeoff https://pytorch.org/docs/stable/notes/randomness.html - torch.manual_seed(seed) - if seed == 0: # slower, more reproducible - cudnn.benchmark, cudnn.deterministic = False, True - else: # faster, less reproducible - cudnn.benchmark, cudnn.deterministic = True, False - - -def date_modified(path=__file__): - # return human-readable file modification date, i.e. '2021-3-26' - t = datetime.datetime.fromtimestamp(Path(path).stat().st_mtime) - return f'{t.year}-{t.month}-{t.day}' - - -def git_describe(path=Path(__file__).parent): # path must be a directory - # return human-readable git description, i.e. 
v5.0-5-g3e25f1e https://git-scm.com/docs/git-describe - s = f'git -C {path} describe --tags --long --always' - try: - return subprocess.check_output(s, shell=True, stderr=subprocess.STDOUT).decode()[:-1] - except subprocess.CalledProcessError as e: - return '' # not a git repository - - -def select_device(device='', batch_size=None): - # device = 'cpu' or '0' or '0,1,2,3' - s = f'YOLOR 🚀 {git_describe() or date_modified()} torch {torch.__version__} ' # string - cpu = device.lower() == 'cpu' - if cpu: - os.environ['CUDA_VISIBLE_DEVICES'] = '-1' # force torch.cuda.is_available() = False - elif device: # non-cpu device requested - os.environ['CUDA_VISIBLE_DEVICES'] = device # set environment variable - assert torch.cuda.is_available(), f'CUDA unavailable, invalid device {device} requested' # check availability - - cuda = not cpu and torch.cuda.is_available() - if cuda: - n = torch.cuda.device_count() - if n > 1 and batch_size: # check that batch_size is compatible with device_count - assert batch_size % n == 0, f'batch-size {batch_size} not multiple of GPU count {n}' - space = ' ' * len(s) - for i, d in enumerate(device.split(',') if device else range(n)): - p = torch.cuda.get_device_properties(i) - s += f"{'' if i == 0 else space}CUDA:{d} ({p.name}, {p.total_memory / 1024 ** 2}MB)\n" # bytes to MB - else: - s += 'CPU\n' - - logger.info(s.encode().decode('ascii', 'ignore') if platform.system() == 'Windows' else s) # emoji-safe - return torch.device('cuda:0' if cuda else 'cpu') - - -def time_synchronized(): - # pytorch-accurate time - if torch.cuda.is_available(): - torch.cuda.synchronize() - return time.time() - - -def profile(x, ops, n=100, device=None): - # profile a pytorch module or list of modules. Example usage: - # x = torch.randn(16, 3, 640, 640) # input - # m1 = lambda x: x * torch.sigmoid(x) - # m2 = nn.SiLU() - # profile(x, [m1, m2], n=100) # profile speed over 100 iterations - - device = device or torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') - x = x.to(device) - x.requires_grad = True - print(torch.__version__, device.type, torch.cuda.get_device_properties(0) if device.type == 'cuda' else '') - print(f"\n{'Params':>12s}{'GFLOPS':>12s}{'forward (ms)':>16s}{'backward (ms)':>16s}{'input':>24s}{'output':>24s}") - for m in ops if isinstance(ops, list) else [ops]: - m = m.to(device) if hasattr(m, 'to') else m # device - m = m.half() if hasattr(m, 'half') and isinstance(x, torch.Tensor) and x.dtype is torch.float16 else m # type - dtf, dtb, t = 0., 0., [0., 0., 0.] 
# dt forward, backward - try: - flops = thop.profile(m, inputs=(x,), verbose=False)[0] / 1E9 * 2 # GFLOPS - except: - flops = 0 - - for _ in range(n): - t[0] = time_synchronized() - y = m(x) - t[1] = time_synchronized() - try: - _ = y.sum().backward() - t[2] = time_synchronized() - except: # no backward method - t[2] = float('nan') - dtf += (t[1] - t[0]) * 1000 / n # ms per op forward - dtb += (t[2] - t[1]) * 1000 / n # ms per op backward - - s_in = tuple(x.shape) if isinstance(x, torch.Tensor) else 'list' - s_out = tuple(y.shape) if isinstance(y, torch.Tensor) else 'list' - p = sum(list(x.numel() for x in m.parameters())) if isinstance(m, nn.Module) else 0 # parameters - print(f'{p:12}{flops:12.4g}{dtf:16.4g}{dtb:16.4g}{str(s_in):>24s}{str(s_out):>24s}') - - -def is_parallel(model): - return type(model) in (nn.parallel.DataParallel, nn.parallel.DistributedDataParallel) - - -def intersect_dicts(da, db, exclude=()): - # Dictionary intersection of matching keys and shapes, omitting 'exclude' keys, using da values - return {k: v for k, v in da.items() if k in db and not any(x in k for x in exclude) and v.shape == db[k].shape} - - -def initialize_weights(model): - for m in model.modules(): - t = type(m) - if t is nn.Conv2d: - pass # nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif t is nn.BatchNorm2d: - m.eps = 1e-3 - m.momentum = 0.03 - elif t in [nn.Hardswish, nn.LeakyReLU, nn.ReLU, nn.ReLU6]: - m.inplace = True - - -def find_modules(model, mclass=nn.Conv2d): - # Finds layer indices matching module class 'mclass' - return [i for i, m in enumerate(model.module_list) if isinstance(m, mclass)] - - -def sparsity(model): - # Return global model sparsity - a, b = 0., 0. - for p in model.parameters(): - a += p.numel() - b += (p == 0).sum() - return b / a - - -def prune(model, amount=0.3): - # Prune model to requested global sparsity - import torch.nn.utils.prune as prune - print('Pruning model... ', end='') - for name, m in model.named_modules(): - if isinstance(m, nn.Conv2d): - prune.l1_unstructured(m, name='weight', amount=amount) # prune - prune.remove(m, 'weight') # make permanent - print(' %.3g global sparsity' % sparsity(model)) - - -def fuse_conv_and_bn(conv, bn): - # Fuse convolution and batchnorm layers https://tehnokv.com/posts/fusing-batchnorm-and-conv/ - fusedconv = nn.Conv2d(conv.in_channels, - conv.out_channels, - kernel_size=conv.kernel_size, - stride=conv.stride, - padding=conv.padding, - groups=conv.groups, - bias=True).requires_grad_(False).to(conv.weight.device) - - # prepare filters - w_conv = conv.weight.clone().view(conv.out_channels, -1) - w_bn = torch.diag(bn.weight.div(torch.sqrt(bn.eps + bn.running_var))) - fusedconv.weight.copy_(torch.mm(w_bn, w_conv).view(fusedconv.weight.shape)) - - # prepare spatial bias - b_conv = torch.zeros(conv.weight.size(0), device=conv.weight.device) if conv.bias is None else conv.bias - b_bn = bn.bias - bn.weight.mul(bn.running_mean).div(torch.sqrt(bn.running_var + bn.eps)) - fusedconv.bias.copy_(torch.mm(w_bn, b_conv.reshape(-1, 1)).reshape(-1) + b_bn) - - return fusedconv - - -def model_info(model, verbose=False, img_size=640): - # Model information. img_size may be int or list, i.e. 
img_size=640 or img_size=[640, 320] - n_p = sum(x.numel() for x in model.parameters()) # number parameters - n_g = sum(x.numel() for x in model.parameters() if x.requires_grad) # number gradients - if verbose: - print('%5s %40s %9s %12s %20s %10s %10s' % ('layer', 'name', 'gradient', 'parameters', 'shape', 'mu', 'sigma')) - for i, (name, p) in enumerate(model.named_parameters()): - name = name.replace('module_list.', '') - print('%5g %40s %9s %12g %20s %10.3g %10.3g' % - (i, name, p.requires_grad, p.numel(), list(p.shape), p.mean(), p.std())) - - try: # FLOPS - from thop import profile - stride = max(int(model.stride.max()), 32) if hasattr(model, 'stride') else 32 - img = torch.zeros((1, model.yaml.get('ch', 3), stride, stride), device=next(model.parameters()).device) # input - flops = profile(deepcopy(model), inputs=(img,), verbose=False)[0] / 1E9 * 2 # stride GFLOPS - img_size = img_size if isinstance(img_size, list) else [img_size, img_size] # expand if int/float - fs = ', %.1f GFLOPS' % (flops * img_size[0] / stride * img_size[1] / stride) # 640x640 GFLOPS - except (ImportError, Exception): - fs = '' - - logger.info(f"Model Summary: {len(list(model.modules()))} layers, {n_p} parameters, {n_g} gradients{fs}") - - -def load_classifier(name='resnet101', n=2): - # Loads a pretrained model reshaped to n-class output - model = torchvision.models.__dict__[name](pretrained=True) - - # ResNet model properties - # input_size = [3, 224, 224] - # input_space = 'RGB' - # input_range = [0, 1] - # mean = [0.485, 0.456, 0.406] - # std = [0.229, 0.224, 0.225] - - # Reshape output to n classes - filters = model.fc.weight.shape[1] - model.fc.bias = nn.Parameter(torch.zeros(n), requires_grad=True) - model.fc.weight = nn.Parameter(torch.zeros(n, filters), requires_grad=True) - model.fc.out_features = n - return model - - -def scale_img(img, ratio=1.0, same_shape=False, gs=32): # img(16,3,256,416) - # scales img(bs,3,y,x) by ratio constrained to gs-multiple - if ratio == 1.0: - return img - else: - h, w = img.shape[2:] - s = (int(h * ratio), int(w * ratio)) # new size - img = F.interpolate(img, size=s, mode='bilinear', align_corners=False) # resize - if not same_shape: # pad/crop img - h, w = [math.ceil(x * ratio / gs) * gs for x in (h, w)] - return F.pad(img, [0, w - s[1], 0, h - s[0]], value=0.447) # value = imagenet mean - - -def copy_attr(a, b, include=(), exclude=()): - # Copy attributes from b to a, options to only include [...] and to exclude [...] - for k, v in b.__dict__.items(): - if (len(include) and k not in include) or k.startswith('_') or k in exclude: - continue - else: - setattr(a, k, v) - - -class ModelEMA: - """ Model Exponential Moving Average from https://github.com/rwightman/pytorch-image-models - Keep a moving average of everything in the model state_dict (parameters and buffers). - This is intended to allow functionality like - https://www.tensorflow.org/api_docs/python/tf/train/ExponentialMovingAverage - A smoothed version of the weights is necessary for some training schemes to perform well. - This class is sensitive where it is initialized in the sequence of model init, - GPU assignment and distributed training wrappers. 
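    The decay applied at each update ramps up as ``decay * (1 - exp(-updates / 2000))``,
    so early updates track the raw weights closely before settling at the configured decay.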
- """ - - def __init__(self, model, decay=0.9999, updates=0): - # Create EMA - self.ema = deepcopy(model.module if is_parallel(model) else model).eval() # FP32 EMA - # if next(model.parameters()).device.type != 'cpu': - # self.ema.half() # FP16 EMA - self.updates = updates # number of EMA updates - self.decay = lambda x: decay * (1 - math.exp(-x / 2000)) # decay exponential ramp (to help early epochs) - for p in self.ema.parameters(): - p.requires_grad_(False) - - def update(self, model): - # Update EMA parameters - with torch.no_grad(): - self.updates += 1 - d = self.decay(self.updates) - - msd = model.module.state_dict() if is_parallel(model) else model.state_dict() # model state_dict - for k, v in self.ema.state_dict().items(): - if v.dtype.is_floating_point: - v *= d - v += (1. - d) * msd[k].detach() - - def update_attr(self, model, include=(), exclude=('process_group', 'reducer')): - # Update EMA attributes - copy_attr(self.ema, model, include, exclude) - - -class BatchNormXd(torch.nn.modules.batchnorm._BatchNorm): - def _check_input_dim(self, input): - # The only difference between BatchNorm1d, BatchNorm2d, BatchNorm3d, etc - # is this method that is overwritten by the sub-class - # This original goal of this method was for tensor sanity checks - # If you're ok bypassing those sanity checks (eg. if you trust your inference - # to provide the right dimensional inputs), then you can just use this method - # for easy conversion from SyncBatchNorm - # (unfortunately, SyncBatchNorm does not store the original class - if it did - # we could return the one that was originally created) - return - -def revert_sync_batchnorm(module): - # this is very similar to the function that it is trying to revert: - # https://github.com/pytorch/pytorch/blob/c8b3686a3e4ba63dc59e5dcfe5db3430df256833/torch/nn/modules/batchnorm.py#L679 - module_output = module - if isinstance(module, torch.nn.modules.batchnorm.SyncBatchNorm): - new_cls = BatchNormXd - module_output = BatchNormXd(module.num_features, - module.eps, module.momentum, - module.affine, - module.track_running_stats) - if module.affine: - with torch.no_grad(): - module_output.weight = module.weight - module_output.bias = module.bias - module_output.running_mean = module.running_mean - module_output.running_var = module.running_var - module_output.num_batches_tracked = module.num_batches_tracked - if hasattr(module, "qconfig"): - module_output.qconfig = module.qconfig - for name, child in module.named_children(): - module_output.add_module(name, revert_sync_batchnorm(child)) - del module - return module_output - - -class TracedModel(nn.Module): - - def __init__(self, model=None, device=None, img_size=(640,640)): - super(TracedModel, self).__init__() - - print(" Convert model to Traced-model... ") - self.stride = model.stride - self.names = model.names - self.model = model - - self.model = revert_sync_batchnorm(self.model) - self.model.to('cpu') - self.model.eval() - - self.detect_layer = self.model.model[-1] - self.model.traced = True - - rand_example = torch.rand(1, 3, img_size, img_size) - - traced_script_module = torch.jit.trace(self.model, rand_example, strict=False) - #traced_script_module = torch.jit.script(self.model) - traced_script_module.save("traced_model.pt") - print(" traced_script_module saved! ") - self.model = traced_script_module - self.model.to(device) - self.detect_layer.to(device) - print(" model is traced! 
\n") - - def forward(self, x, augment=False, profile=False): - out = self.model(x) - out = self.detect_layer(out) - return out diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/effectlayer-plugin.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/effectlayer-plugin.js deleted file mode 100644 index eb376b6ad8496bceca6547488431db5ac89bcdeb..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/plugins/effectlayer-plugin.js +++ /dev/null @@ -1,23 +0,0 @@ -import Factory from './gameobjects/shader/effectlayer/effectlayer/Factory.js'; -import Creator from './gameobjects/shader/effectlayer/effectlayer/Creator.js'; -import EffectLayer from './gameobjects/shader/effectlayer/effectlayer/EffectLayer.js'; -import SetValue from './utils/object/SetValue.js'; - -class EffectLayerPlugin extends Phaser.Plugins.BasePlugin { - - constructor(pluginManager) { - super(pluginManager); - - // Register our new Game Object type - pluginManager.registerGameObject('rexEffectLayer', Factory, Creator); - } - - start() { - var eventEmitter = this.game.events; - eventEmitter.on('destroy', this.destroy, this); - } -} - -SetValue(window, 'RexPlugins.GameObjects.EffectLayer', EffectLayer); - -export default EffectLayerPlugin; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/utils/AddChild.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/utils/AddChild.js deleted file mode 100644 index 82d7c905be3a0318ae3a58580ebf4907cb74e172..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/utils/AddChild.js +++ /dev/null @@ -1,16 +0,0 @@ -import Container from '../../container/Container.js'; - -const ContainerAdd = Container.prototype.add; - -var AddChild = function (gameObject) { - ContainerAdd.call(this, gameObject); - - if (this.sizerEventsEnable) { - gameObject.emit('sizer.add', gameObject, this); - this.emit('add', gameObject, this); - } - - return this; -} - -export default AddChild; \ No newline at end of file diff --git a/spaces/Andy1621/uniformer_image_detection/configs/ssd/README.md b/spaces/Andy1621/uniformer_image_detection/configs/ssd/README.md deleted file mode 100644 index 51262d68efa1e8be0e91e92c2c3dc5585ab2411e..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/ssd/README.md +++ /dev/null @@ -1,21 +0,0 @@ -# SSD: Single Shot MultiBox Detector - -## Introduction - -[ALGORITHM] - -```latex -@article{Liu_2016, - title={SSD: Single Shot MultiBox Detector}, - journal={ECCV}, - author={Liu, Wei and Anguelov, Dragomir and Erhan, Dumitru and Szegedy, Christian and Reed, Scott and Fu, Cheng-Yang and Berg, Alexander C.}, - year={2016}, -} -``` - -## Results and models - -| Backbone | Size | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download | -| :------: | :---: | :---: | :-----: | :------: | :------------: | :----: | :------: | :--------: | -| VGG16 | 300 | caffe | 120e | 10.2 | 43.7 | 25.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ssd/ssd300_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/ssd/ssd300_coco/ssd300_coco_20200307-a92d2092.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/ssd/ssd300_coco/ssd300_coco_20200307_174216.log.json) | -| VGG16 | 512 | caffe | 120e | 9.3 | 30.7 | 29.4 | 
[config](https://github.com/open-mmlab/mmdetection/tree/master/configs/ssd/ssd512_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/ssd/ssd512_coco/ssd512_coco_20200308-038c5591.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/ssd/ssd512_coco/ssd512_coco_20200308_134447.log.json) | diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/utils/res_layer.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/utils/res_layer.py deleted file mode 100644 index 4a4efd3dd30b30123ed5135eac080ad9f7f7b448..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/utils/res_layer.py +++ /dev/null @@ -1,187 +0,0 @@ -from mmcv.cnn import build_conv_layer, build_norm_layer -from torch import nn as nn - - -class ResLayer(nn.Sequential): - """ResLayer to build ResNet style backbone. - - Args: - block (nn.Module): block used to build ResLayer. - inplanes (int): inplanes of block. - planes (int): planes of block. - num_blocks (int): number of blocks. - stride (int): stride of the first block. Default: 1 - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. Default: False - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None - norm_cfg (dict): dictionary to construct and config norm layer. - Default: dict(type='BN') - downsample_first (bool): Downsample at the first block or last block. - False for Hourglass, True for ResNet. Default: True - """ - - def __init__(self, - block, - inplanes, - planes, - num_blocks, - stride=1, - avg_down=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - downsample_first=True, - **kwargs): - self.block = block - - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = [] - conv_stride = stride - if avg_down: - conv_stride = 1 - downsample.append( - nn.AvgPool2d( - kernel_size=stride, - stride=stride, - ceil_mode=True, - count_include_pad=False)) - downsample.extend([ - build_conv_layer( - conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=conv_stride, - bias=False), - build_norm_layer(norm_cfg, planes * block.expansion)[1] - ]) - downsample = nn.Sequential(*downsample) - - layers = [] - if downsample_first: - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - inplanes = planes * block.expansion - for _ in range(1, num_blocks): - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - - else: # downsample_first=False is for HourglassModule - for _ in range(num_blocks - 1): - layers.append( - block( - inplanes=inplanes, - planes=inplanes, - stride=1, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - super(ResLayer, self).__init__(*layers) - - -class SimplifiedBasicBlock(nn.Module): - """Simplified version of original basic residual block. This is used in - `SCNet `_. 
- - - Norm layer is now optional - - Last ReLU in forward function is removed - """ - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None): - super(SimplifiedBasicBlock, self).__init__() - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - assert not with_cp, 'Not implemented yet.' - self.with_norm = norm_cfg is not None - with_bias = True if norm_cfg is None else False - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - 3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=with_bias) - if self.with_norm: - self.norm1_name, norm1 = build_norm_layer( - norm_cfg, planes, postfix=1) - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - conv_cfg, planes, planes, 3, padding=1, bias=with_bias) - if self.with_norm: - self.norm2_name, norm2 = build_norm_layer( - norm_cfg, planes, postfix=2) - self.add_module(self.norm2_name, norm2) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) if self.with_norm else None - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) if self.with_norm else None - - def forward(self, x): - """Forward function.""" - - identity = x - - out = self.conv1(x) - if self.with_norm: - out = self.norm1(out) - out = self.relu(out) - - out = self.conv2(out) - if self.with_norm: - out = self.norm2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out diff --git a/spaces/AndyCer/TehVenom-MPT-7b-Chat-Instruct-LongCTX-Merge/README.md b/spaces/AndyCer/TehVenom-MPT-7b-Chat-Instruct-LongCTX-Merge/README.md deleted file mode 100644 index f5484d9cc06ab7898094b249f2f77dc825e46ed8..0000000000000000000000000000000000000000 --- a/spaces/AndyCer/TehVenom-MPT-7b-Chat-Instruct-LongCTX-Merge/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: TehVenom MPT 7b Chat Instruct LongCTX Merge -emoji: 📉 -colorFrom: gray -colorTo: blue -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Anthony7906/MengHuiMXD_GPT/Dockerfile b/spaces/Anthony7906/MengHuiMXD_GPT/Dockerfile deleted file mode 100644 index 335c2dba28ba8c365de9306858462a59dea25f28..0000000000000000000000000000000000000000 --- a/spaces/Anthony7906/MengHuiMXD_GPT/Dockerfile +++ /dev/null @@ -1,15 +0,0 @@ -FROM python:3.9 as builder -RUN apt-get update && apt-get install -y build-essential -COPY requirements.txt . -COPY requirements_advanced.txt . -RUN pip install --user -r requirements.txt -# RUN pip install --user -r requirements_advanced.txt - -FROM python:3.9 -MAINTAINER iskoldt -COPY --from=builder /root/.local /root/.local -ENV PATH=/root/.local/bin:$PATH -COPY . 
/app -WORKDIR /app -ENV dockerrun yes -CMD ["python3", "-u", "ChuanhuChatbot.py", "2>&1", "|", "tee", "/var/log/application.log"] diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/core.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/core.py deleted file mode 100644 index 6ff3c766f7dd9f4111cbd9d2a5f668e4435798b5..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/pyparsing/core.py +++ /dev/null @@ -1,5814 +0,0 @@ -# -# core.py -# -import os -import typing -from typing import ( - NamedTuple, - Union, - Callable, - Any, - Generator, - Tuple, - List, - TextIO, - Set, - Sequence, -) -from abc import ABC, abstractmethod -from enum import Enum -import string -import copy -import warnings -import re -import sys -from collections.abc import Iterable -import traceback -import types -from operator import itemgetter -from functools import wraps -from threading import RLock -from pathlib import Path - -from .util import ( - _FifoCache, - _UnboundedCache, - __config_flags, - _collapse_string_to_ranges, - _escape_regex_range_chars, - _bslash, - _flatten, - LRUMemo as _LRUMemo, - UnboundedMemo as _UnboundedMemo, -) -from .exceptions import * -from .actions import * -from .results import ParseResults, _ParseResultsWithOffset -from .unicode import pyparsing_unicode - -_MAX_INT = sys.maxsize -str_type: Tuple[type, ...] = (str, bytes) - -# -# Copyright (c) 2003-2022 Paul T. McGuire -# -# Permission is hereby granted, free of charge, to any person obtaining -# a copy of this software and associated documentation files (the -# "Software"), to deal in the Software without restriction, including -# without limitation the rights to use, copy, modify, merge, publish, -# distribute, sublicense, and/or sell copies of the Software, and to -# permit persons to whom the Software is furnished to do so, subject to -# the following conditions: -# -# The above copyright notice and this permission notice shall be -# included in all copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. -# IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY -# CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, -# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE -# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. -# - - -if sys.version_info >= (3, 8): - from functools import cached_property -else: - - class cached_property: - def __init__(self, func): - self._func = func - - def __get__(self, instance, owner=None): - ret = instance.__dict__[self._func.__name__] = self._func(instance) - return ret - - -class __compat__(__config_flags): - """ - A cross-version compatibility configuration for pyparsing features that will be - released in a future version. By setting values in this configuration to True, - those features can be enabled in prior versions for compatibility development - and testing. 
- - - ``collect_all_And_tokens`` - flag to enable fix for Issue #63 that fixes erroneous grouping - of results names when an :class:`And` expression is nested within an :class:`Or` or :class:`MatchFirst`; - maintained for compatibility, but setting to ``False`` no longer restores pre-2.3.1 - behavior - """ - - _type_desc = "compatibility" - - collect_all_And_tokens = True - - _all_names = [__ for __ in locals() if not __.startswith("_")] - _fixed_names = """ - collect_all_And_tokens - """.split() - - -class __diag__(__config_flags): - _type_desc = "diagnostic" - - warn_multiple_tokens_in_named_alternation = False - warn_ungrouped_named_tokens_in_collection = False - warn_name_set_on_empty_Forward = False - warn_on_parse_using_empty_Forward = False - warn_on_assignment_to_Forward = False - warn_on_multiple_string_args_to_oneof = False - warn_on_match_first_with_lshift_operator = False - enable_debug_on_named_expressions = False - - _all_names = [__ for __ in locals() if not __.startswith("_")] - _warning_names = [name for name in _all_names if name.startswith("warn")] - _debug_names = [name for name in _all_names if name.startswith("enable_debug")] - - @classmethod - def enable_all_warnings(cls) -> None: - for name in cls._warning_names: - cls.enable(name) - - -class Diagnostics(Enum): - """ - Diagnostic configuration (all default to disabled) - - ``warn_multiple_tokens_in_named_alternation`` - flag to enable warnings when a results - name is defined on a :class:`MatchFirst` or :class:`Or` expression with one or more :class:`And` subexpressions - - ``warn_ungrouped_named_tokens_in_collection`` - flag to enable warnings when a results - name is defined on a containing expression with ungrouped subexpressions that also - have results names - - ``warn_name_set_on_empty_Forward`` - flag to enable warnings when a :class:`Forward` is defined - with a results name, but has no contents defined - - ``warn_on_parse_using_empty_Forward`` - flag to enable warnings when a :class:`Forward` is - defined in a grammar but has never had an expression attached to it - - ``warn_on_assignment_to_Forward`` - flag to enable warnings when a :class:`Forward` is defined - but is overwritten by assigning using ``'='`` instead of ``'<<='`` or ``'<<'`` - - ``warn_on_multiple_string_args_to_oneof`` - flag to enable warnings when :class:`one_of` is - incorrectly called with multiple str arguments - - ``enable_debug_on_named_expressions`` - flag to auto-enable debug on all subsequent - calls to :class:`ParserElement.set_name` - - Diagnostics are enabled/disabled by calling :class:`enable_diag` and :class:`disable_diag`. - All warnings can be enabled by calling :class:`enable_all_warnings`. - """ - - warn_multiple_tokens_in_named_alternation = 0 - warn_ungrouped_named_tokens_in_collection = 1 - warn_name_set_on_empty_Forward = 2 - warn_on_parse_using_empty_Forward = 3 - warn_on_assignment_to_Forward = 4 - warn_on_multiple_string_args_to_oneof = 5 - warn_on_match_first_with_lshift_operator = 6 - enable_debug_on_named_expressions = 7 - - -def enable_diag(diag_enum: Diagnostics) -> None: - """ - Enable a global pyparsing diagnostic flag (see :class:`Diagnostics`). - """ - __diag__.enable(diag_enum.name) - - -def disable_diag(diag_enum: Diagnostics) -> None: - """ - Disable a global pyparsing diagnostic flag (see :class:`Diagnostics`). - """ - __diag__.disable(diag_enum.name) - - -def enable_all_warnings() -> None: - """ - Enable all global pyparsing diagnostic warnings (see :class:`Diagnostics`). 
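For illustration, a minimal usage sketch added by the editor (it uses only the module-level helpers defined earlier in this file: ``enable_all_warnings``, ``disable_diag`` and ``Diagnostics``)::

    import pyparsing as pp

    # enable every diagnostic warning before building a grammar
    pp.enable_all_warnings()

    # individual diagnostics can still be toggled afterwards
    pp.disable_diag(pp.Diagnostics.warn_name_set_on_empty_Forward)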
- """ - __diag__.enable_all_warnings() - - -# hide abstract class -del __config_flags - - -def _should_enable_warnings( - cmd_line_warn_options: typing.Iterable[str], warn_env_var: typing.Optional[str] -) -> bool: - enable = bool(warn_env_var) - for warn_opt in cmd_line_warn_options: - w_action, w_message, w_category, w_module, w_line = (warn_opt + "::::").split( - ":" - )[:5] - if not w_action.lower().startswith("i") and ( - not (w_message or w_category or w_module) or w_module == "pyparsing" - ): - enable = True - elif w_action.lower().startswith("i") and w_module in ("pyparsing", ""): - enable = False - return enable - - -if _should_enable_warnings( - sys.warnoptions, os.environ.get("PYPARSINGENABLEALLWARNINGS") -): - enable_all_warnings() - - -# build list of single arg builtins, that can be used as parse actions -_single_arg_builtins = { - sum, - len, - sorted, - reversed, - list, - tuple, - set, - any, - all, - min, - max, -} - -_generatorType = types.GeneratorType -ParseAction = Union[ - Callable[[], Any], - Callable[[ParseResults], Any], - Callable[[int, ParseResults], Any], - Callable[[str, int, ParseResults], Any], -] -ParseCondition = Union[ - Callable[[], bool], - Callable[[ParseResults], bool], - Callable[[int, ParseResults], bool], - Callable[[str, int, ParseResults], bool], -] -ParseFailAction = Callable[[str, int, "ParserElement", Exception], None] -DebugStartAction = Callable[[str, int, "ParserElement", bool], None] -DebugSuccessAction = Callable[ - [str, int, int, "ParserElement", ParseResults, bool], None -] -DebugExceptionAction = Callable[[str, int, "ParserElement", Exception, bool], None] - - -alphas = string.ascii_uppercase + string.ascii_lowercase -identchars = pyparsing_unicode.Latin1.identchars -identbodychars = pyparsing_unicode.Latin1.identbodychars -nums = "0123456789" -hexnums = nums + "ABCDEFabcdef" -alphanums = alphas + nums -printables = "".join([c for c in string.printable if c not in string.whitespace]) - -_trim_arity_call_line: traceback.StackSummary = None - - -def _trim_arity(func, max_limit=3): - """decorator to trim function calls to match the arity of the target""" - global _trim_arity_call_line - - if func in _single_arg_builtins: - return lambda s, l, t: func(t) - - limit = 0 - found_arity = False - - def extract_tb(tb, limit=0): - frames = traceback.extract_tb(tb, limit=limit) - frame_summary = frames[-1] - return [frame_summary[:2]] - - # synthesize what would be returned by traceback.extract_stack at the call to - # user's parse action 'func', so that we don't incur call penalty at parse time - - # fmt: off - LINE_DIFF = 7 - # IF ANY CODE CHANGES, EVEN JUST COMMENTS OR BLANK LINES, BETWEEN THE NEXT LINE AND - # THE CALL TO FUNC INSIDE WRAPPER, LINE_DIFF MUST BE MODIFIED!!!! 
- _trim_arity_call_line = (_trim_arity_call_line or traceback.extract_stack(limit=2)[-1]) - pa_call_line_synth = (_trim_arity_call_line[0], _trim_arity_call_line[1] + LINE_DIFF) - - def wrapper(*args): - nonlocal found_arity, limit - while 1: - try: - ret = func(*args[limit:]) - found_arity = True - return ret - except TypeError as te: - # re-raise TypeErrors if they did not come from our arity testing - if found_arity: - raise - else: - tb = te.__traceback__ - trim_arity_type_error = ( - extract_tb(tb, limit=2)[-1][:2] == pa_call_line_synth - ) - del tb - - if trim_arity_type_error: - if limit < max_limit: - limit += 1 - continue - - raise - # fmt: on - - # copy func name to wrapper for sensible debug output - # (can't use functools.wraps, since that messes with function signature) - func_name = getattr(func, "__name__", getattr(func, "__class__").__name__) - wrapper.__name__ = func_name - wrapper.__doc__ = func.__doc__ - - return wrapper - - -def condition_as_parse_action( - fn: ParseCondition, message: str = None, fatal: bool = False -) -> ParseAction: - """ - Function to convert a simple predicate function that returns ``True`` or ``False`` - into a parse action. Can be used in places when a parse action is required - and :class:`ParserElement.add_condition` cannot be used (such as when adding a condition - to an operator level in :class:`infix_notation`). - - Optional keyword arguments: - - - ``message`` - define a custom message to be used in the raised exception - - ``fatal`` - if True, will raise :class:`ParseFatalException` to stop parsing immediately; - otherwise will raise :class:`ParseException` - - """ - msg = message if message is not None else "failed user-defined condition" - exc_type = ParseFatalException if fatal else ParseException - fn = _trim_arity(fn) - - @wraps(fn) - def pa(s, l, t): - if not bool(fn(s, l, t)): - raise exc_type(s, l, msg) - - return pa - - -def _default_start_debug_action( - instring: str, loc: int, expr: "ParserElement", cache_hit: bool = False -): - cache_hit_str = "*" if cache_hit else "" - print( - ( - "{}Match {} at loc {}({},{})\n {}\n {}^".format( - cache_hit_str, - expr, - loc, - lineno(loc, instring), - col(loc, instring), - line(loc, instring), - " " * (col(loc, instring) - 1), - ) - ) - ) - - -def _default_success_debug_action( - instring: str, - startloc: int, - endloc: int, - expr: "ParserElement", - toks: ParseResults, - cache_hit: bool = False, -): - cache_hit_str = "*" if cache_hit else "" - print("{}Matched {} -> {}".format(cache_hit_str, expr, toks.as_list())) - - -def _default_exception_debug_action( - instring: str, - loc: int, - expr: "ParserElement", - exc: Exception, - cache_hit: bool = False, -): - cache_hit_str = "*" if cache_hit else "" - print( - "{}Match {} failed, {} raised: {}".format( - cache_hit_str, expr, type(exc).__name__, exc - ) - ) - - -def null_debug_action(*args): - """'Do-nothing' debug action, to suppress debugging output during parsing.""" - - -class ParserElement(ABC): - """Abstract base level parser element class.""" - - DEFAULT_WHITE_CHARS: str = " \n\t\r" - verbose_stacktrace: bool = False - _literalStringClass: typing.Optional[type] = None - - @staticmethod - def set_default_whitespace_chars(chars: str) -> None: - r""" - Overrides the default whitespace chars - - Example:: - - # default whitespace chars are space, and newline - Word(alphas)[1, ...].parse_string("abc def\nghi jkl") # -> ['abc', 'def', 'ghi', 'jkl'] - - # change to just treat newline as significant - 
ParserElement.set_default_whitespace_chars(" \t") - Word(alphas)[1, ...].parse_string("abc def\nghi jkl") # -> ['abc', 'def'] - """ - ParserElement.DEFAULT_WHITE_CHARS = chars - - # update whitespace all parse expressions defined in this module - for expr in _builtin_exprs: - if expr.copyDefaultWhiteChars: - expr.whiteChars = set(chars) - - @staticmethod - def inline_literals_using(cls: type) -> None: - """ - Set class to be used for inclusion of string literals into a parser. - - Example:: - - # default literal class used is Literal - integer = Word(nums) - date_str = integer("year") + '/' + integer("month") + '/' + integer("day") - - date_str.parse_string("1999/12/31") # -> ['1999', '/', '12', '/', '31'] - - - # change to Suppress - ParserElement.inline_literals_using(Suppress) - date_str = integer("year") + '/' + integer("month") + '/' + integer("day") - - date_str.parse_string("1999/12/31") # -> ['1999', '12', '31'] - """ - ParserElement._literalStringClass = cls - - class DebugActions(NamedTuple): - debug_try: typing.Optional[DebugStartAction] - debug_match: typing.Optional[DebugSuccessAction] - debug_fail: typing.Optional[DebugExceptionAction] - - def __init__(self, savelist: bool = False): - self.parseAction: List[ParseAction] = list() - self.failAction: typing.Optional[ParseFailAction] = None - self.customName = None - self._defaultName = None - self.resultsName = None - self.saveAsList = savelist - self.skipWhitespace = True - self.whiteChars = set(ParserElement.DEFAULT_WHITE_CHARS) - self.copyDefaultWhiteChars = True - # used when checking for left-recursion - self.mayReturnEmpty = False - self.keepTabs = False - self.ignoreExprs: List["ParserElement"] = list() - self.debug = False - self.streamlined = False - # optimize exception handling for subclasses that don't advance parse index - self.mayIndexError = True - self.errmsg = "" - # mark results names as modal (report only last) or cumulative (list all) - self.modalResults = True - # custom debug actions - self.debugActions = self.DebugActions(None, None, None) - # avoid redundant calls to preParse - self.callPreparse = True - self.callDuringTry = False - self.suppress_warnings_: List[Diagnostics] = [] - - def suppress_warning(self, warning_type: Diagnostics) -> "ParserElement": - """ - Suppress warnings emitted for a particular diagnostic on this expression. - - Example:: - - base = pp.Forward() - base.suppress_warning(Diagnostics.warn_on_parse_using_empty_Forward) - - # statement would normally raise a warning, but is now suppressed - print(base.parseString("x")) - - """ - self.suppress_warnings_.append(warning_type) - return self - - def copy(self) -> "ParserElement": - """ - Make a copy of this :class:`ParserElement`. Useful for defining - different parse actions for the same parsing pattern, using copies of - the original parse element. 
- - Example:: - - integer = Word(nums).set_parse_action(lambda toks: int(toks[0])) - integerK = integer.copy().add_parse_action(lambda toks: toks[0] * 1024) + Suppress("K") - integerM = integer.copy().add_parse_action(lambda toks: toks[0] * 1024 * 1024) + Suppress("M") - - print((integerK | integerM | integer)[1, ...].parse_string("5K 100 640K 256M")) - - prints:: - - [5120, 100, 655360, 268435456] - - Equivalent form of ``expr.copy()`` is just ``expr()``:: - - integerM = integer().add_parse_action(lambda toks: toks[0] * 1024 * 1024) + Suppress("M") - """ - cpy = copy.copy(self) - cpy.parseAction = self.parseAction[:] - cpy.ignoreExprs = self.ignoreExprs[:] - if self.copyDefaultWhiteChars: - cpy.whiteChars = set(ParserElement.DEFAULT_WHITE_CHARS) - return cpy - - def set_results_name( - self, name: str, list_all_matches: bool = False, *, listAllMatches: bool = False - ) -> "ParserElement": - """ - Define name for referencing matching tokens as a nested attribute - of the returned parse results. - - Normally, results names are assigned as you would assign keys in a dict: - any existing value is overwritten by later values. If it is necessary to - keep all values captured for a particular results name, call ``set_results_name`` - with ``list_all_matches`` = True. - - NOTE: ``set_results_name`` returns a *copy* of the original :class:`ParserElement` object; - this is so that the client can define a basic element, such as an - integer, and reference it in multiple places with different names. - - You can also set results names using the abbreviated syntax, - ``expr("name")`` in place of ``expr.set_results_name("name")`` - - see :class:`__call__`. If ``list_all_matches`` is required, use - ``expr("name*")``. - - Example:: - - date_str = (integer.set_results_name("year") + '/' - + integer.set_results_name("month") + '/' - + integer.set_results_name("day")) - - # equivalent form: - date_str = integer("year") + '/' + integer("month") + '/' + integer("day") - """ - listAllMatches = listAllMatches or list_all_matches - return self._setResultsName(name, listAllMatches) - - def _setResultsName(self, name, listAllMatches=False): - if name is None: - return self - newself = self.copy() - if name.endswith("*"): - name = name[:-1] - listAllMatches = True - newself.resultsName = name - newself.modalResults = not listAllMatches - return newself - - def set_break(self, break_flag: bool = True) -> "ParserElement": - """ - Method to invoke the Python pdb debugger when this element is - about to be parsed. Set ``break_flag`` to ``True`` to enable, ``False`` to - disable. - """ - if break_flag: - _parseMethod = self._parse - - def breaker(instring, loc, doActions=True, callPreParse=True): - import pdb - - # this call to pdb.set_trace() is intentional, not a checkin error - pdb.set_trace() - return _parseMethod(instring, loc, doActions, callPreParse) - - breaker._originalParseMethod = _parseMethod - self._parse = breaker - else: - if hasattr(self._parse, "_originalParseMethod"): - self._parse = self._parse._originalParseMethod - return self - - def set_parse_action(self, *fns: ParseAction, **kwargs) -> "ParserElement": - """ - Define one or more actions to perform when successfully matching parse element definition. - - Parse actions can be called to perform data conversions, do extra validation, - update external data structures, or enhance or replace the parsed tokens. 
- Each parse action ``fn`` is a callable method with 0-3 arguments, called as - ``fn(s, loc, toks)`` , ``fn(loc, toks)`` , ``fn(toks)`` , or just ``fn()`` , where: - - - s = the original string being parsed (see note below) - - loc = the location of the matching substring - - toks = a list of the matched tokens, packaged as a :class:`ParseResults` object - - The parsed tokens are passed to the parse action as ParseResults. They can be - modified in place using list-style append, extend, and pop operations to update - the parsed list elements; and with dictionary-style item set and del operations - to add, update, or remove any named results. If the tokens are modified in place, - it is not necessary to return them with a return statement. - - Parse actions can also completely replace the given tokens, with another ``ParseResults`` - object, or with some entirely different object (common for parse actions that perform data - conversions). A convenient way to build a new parse result is to define the values - using a dict, and then create the return value using :class:`ParseResults.from_dict`. - - If None is passed as the ``fn`` parse action, all previously added parse actions for this - expression are cleared. - - Optional keyword arguments: - - - call_during_try = (default= ``False``) indicate if parse action should be run during - lookaheads and alternate testing. For parse actions that have side effects, it is - important to only call the parse action once it is determined that it is being - called as part of a successful parse. For parse actions that perform additional - validation, then call_during_try should be passed as True, so that the validation - code is included in the preliminary "try" parses. - - Note: the default parsing behavior is to expand tabs in the input string - before starting the parsing process. See :class:`parse_string` for more - information on parsing strings containing ```` s, and suggested - methods to maintain a consistent view of the parsed string, the parse - location, and line and column positions within the parsed string. - - Example:: - - # parse dates in the form YYYY/MM/DD - - # use parse action to convert toks from str to int at parse time - def convert_to_int(toks): - return int(toks[0]) - - # use a parse action to verify that the date is a valid date - def is_valid_date(instring, loc, toks): - from datetime import date - year, month, day = toks[::2] - try: - date(year, month, day) - except ValueError: - raise ParseException(instring, loc, "invalid date given") - - integer = Word(nums) - date_str = integer + '/' + integer + '/' + integer - - # add parse actions - integer.set_parse_action(convert_to_int) - date_str.set_parse_action(is_valid_date) - - # note that integer fields are now ints, not strings - date_str.run_tests(''' - # successful parse - note that integer fields were converted to ints - 1999/12/31 - - # fail - invalid date - 1999/13/31 - ''') - """ - if list(fns) == [None]: - self.parseAction = [] - else: - if not all(callable(fn) for fn in fns): - raise TypeError("parse actions must be callable") - self.parseAction = [_trim_arity(fn) for fn in fns] - self.callDuringTry = kwargs.get( - "call_during_try", kwargs.get("callDuringTry", False) - ) - return self - - def add_parse_action(self, *fns: ParseAction, **kwargs) -> "ParserElement": - """ - Add one or more parse actions to expression's list of parse actions. See :class:`set_parse_action`. - - See examples in :class:`copy`. 
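A short illustrative sketch (editor's addition, using only API defined in this module)::

    integer = Word(nums).set_parse_action(lambda toks: int(toks[0]))

    # a second action appended with add_parse_action runs after the int conversion
    integer.add_parse_action(lambda toks: toks[0] * 2)

    print(integer.parse_string("21"))  # -> [42]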
- """ - self.parseAction += [_trim_arity(fn) for fn in fns] - self.callDuringTry = self.callDuringTry or kwargs.get( - "call_during_try", kwargs.get("callDuringTry", False) - ) - return self - - def add_condition(self, *fns: ParseCondition, **kwargs) -> "ParserElement": - """Add a boolean predicate function to expression's list of parse actions. See - :class:`set_parse_action` for function call signatures. Unlike ``set_parse_action``, - functions passed to ``add_condition`` need to return boolean success/fail of the condition. - - Optional keyword arguments: - - - message = define a custom message to be used in the raised exception - - fatal = if True, will raise ParseFatalException to stop parsing immediately; otherwise will raise - ParseException - - call_during_try = boolean to indicate if this method should be called during internal tryParse calls, - default=False - - Example:: - - integer = Word(nums).set_parse_action(lambda toks: int(toks[0])) - year_int = integer.copy() - year_int.add_condition(lambda toks: toks[0] >= 2000, message="Only support years 2000 and later") - date_str = year_int + '/' + integer + '/' + integer - - result = date_str.parse_string("1999/12/31") # -> Exception: Only support years 2000 and later (at char 0), - (line:1, col:1) - """ - for fn in fns: - self.parseAction.append( - condition_as_parse_action( - fn, message=kwargs.get("message"), fatal=kwargs.get("fatal", False) - ) - ) - - self.callDuringTry = self.callDuringTry or kwargs.get( - "call_during_try", kwargs.get("callDuringTry", False) - ) - return self - - def set_fail_action(self, fn: ParseFailAction) -> "ParserElement": - """ - Define action to perform if parsing fails at this expression. - Fail acton fn is a callable function that takes the arguments - ``fn(s, loc, expr, err)`` where: - - - s = string being parsed - - loc = location where expression match was attempted and failed - - expr = the parse expression that failed - - err = the exception thrown - - The function returns no value. 
It may throw :class:`ParseFatalException` - if it is desired to stop parsing immediately.""" - self.failAction = fn - return self - - def _skipIgnorables(self, instring, loc): - exprsFound = True - while exprsFound: - exprsFound = False - for e in self.ignoreExprs: - try: - while 1: - loc, dummy = e._parse(instring, loc) - exprsFound = True - except ParseException: - pass - return loc - - def preParse(self, instring, loc): - if self.ignoreExprs: - loc = self._skipIgnorables(instring, loc) - - if self.skipWhitespace: - instrlen = len(instring) - white_chars = self.whiteChars - while loc < instrlen and instring[loc] in white_chars: - loc += 1 - - return loc - - def parseImpl(self, instring, loc, doActions=True): - return loc, [] - - def postParse(self, instring, loc, tokenlist): - return tokenlist - - # @profile - def _parseNoCache( - self, instring, loc, doActions=True, callPreParse=True - ) -> Tuple[int, ParseResults]: - TRY, MATCH, FAIL = 0, 1, 2 - debugging = self.debug # and doActions) - len_instring = len(instring) - - if debugging or self.failAction: - # print("Match {} at loc {}({}, {})".format(self, loc, lineno(loc, instring), col(loc, instring))) - try: - if callPreParse and self.callPreparse: - pre_loc = self.preParse(instring, loc) - else: - pre_loc = loc - tokens_start = pre_loc - if self.debugActions.debug_try: - self.debugActions.debug_try(instring, tokens_start, self, False) - if self.mayIndexError or pre_loc >= len_instring: - try: - loc, tokens = self.parseImpl(instring, pre_loc, doActions) - except IndexError: - raise ParseException(instring, len_instring, self.errmsg, self) - else: - loc, tokens = self.parseImpl(instring, pre_loc, doActions) - except Exception as err: - # print("Exception raised:", err) - if self.debugActions.debug_fail: - self.debugActions.debug_fail( - instring, tokens_start, self, err, False - ) - if self.failAction: - self.failAction(instring, tokens_start, self, err) - raise - else: - if callPreParse and self.callPreparse: - pre_loc = self.preParse(instring, loc) - else: - pre_loc = loc - tokens_start = pre_loc - if self.mayIndexError or pre_loc >= len_instring: - try: - loc, tokens = self.parseImpl(instring, pre_loc, doActions) - except IndexError: - raise ParseException(instring, len_instring, self.errmsg, self) - else: - loc, tokens = self.parseImpl(instring, pre_loc, doActions) - - tokens = self.postParse(instring, loc, tokens) - - ret_tokens = ParseResults( - tokens, self.resultsName, asList=self.saveAsList, modal=self.modalResults - ) - if self.parseAction and (doActions or self.callDuringTry): - if debugging: - try: - for fn in self.parseAction: - try: - tokens = fn(instring, tokens_start, ret_tokens) - except IndexError as parse_action_exc: - exc = ParseException("exception raised in parse action") - raise exc from parse_action_exc - - if tokens is not None and tokens is not ret_tokens: - ret_tokens = ParseResults( - tokens, - self.resultsName, - asList=self.saveAsList - and isinstance(tokens, (ParseResults, list)), - modal=self.modalResults, - ) - except Exception as err: - # print "Exception raised in user parse action:", err - if self.debugActions.debug_fail: - self.debugActions.debug_fail( - instring, tokens_start, self, err, False - ) - raise - else: - for fn in self.parseAction: - try: - tokens = fn(instring, tokens_start, ret_tokens) - except IndexError as parse_action_exc: - exc = ParseException("exception raised in parse action") - raise exc from parse_action_exc - - if tokens is not None and tokens is not ret_tokens: - ret_tokens = 
ParseResults( - tokens, - self.resultsName, - asList=self.saveAsList - and isinstance(tokens, (ParseResults, list)), - modal=self.modalResults, - ) - if debugging: - # print("Matched", self, "->", ret_tokens.as_list()) - if self.debugActions.debug_match: - self.debugActions.debug_match( - instring, tokens_start, loc, self, ret_tokens, False - ) - - return loc, ret_tokens - - def try_parse(self, instring: str, loc: int, raise_fatal: bool = False) -> int: - try: - return self._parse(instring, loc, doActions=False)[0] - except ParseFatalException: - if raise_fatal: - raise - raise ParseException(instring, loc, self.errmsg, self) - - def can_parse_next(self, instring: str, loc: int) -> bool: - try: - self.try_parse(instring, loc) - except (ParseException, IndexError): - return False - else: - return True - - # cache for left-recursion in Forward references - recursion_lock = RLock() - recursion_memos: typing.Dict[ - Tuple[int, "Forward", bool], Tuple[int, Union[ParseResults, Exception]] - ] = {} - - # argument cache for optimizing repeated calls when backtracking through recursive expressions - packrat_cache = ( - {} - ) # this is set later by enabled_packrat(); this is here so that reset_cache() doesn't fail - packrat_cache_lock = RLock() - packrat_cache_stats = [0, 0] - - # this method gets repeatedly called during backtracking with the same arguments - - # we can cache these arguments and save ourselves the trouble of re-parsing the contained expression - def _parseCache( - self, instring, loc, doActions=True, callPreParse=True - ) -> Tuple[int, ParseResults]: - HIT, MISS = 0, 1 - TRY, MATCH, FAIL = 0, 1, 2 - lookup = (self, instring, loc, callPreParse, doActions) - with ParserElement.packrat_cache_lock: - cache = ParserElement.packrat_cache - value = cache.get(lookup) - if value is cache.not_in_cache: - ParserElement.packrat_cache_stats[MISS] += 1 - try: - value = self._parseNoCache(instring, loc, doActions, callPreParse) - except ParseBaseException as pe: - # cache a copy of the exception, without the traceback - cache.set(lookup, pe.__class__(*pe.args)) - raise - else: - cache.set(lookup, (value[0], value[1].copy(), loc)) - return value - else: - ParserElement.packrat_cache_stats[HIT] += 1 - if self.debug and self.debugActions.debug_try: - try: - self.debugActions.debug_try(instring, loc, self, cache_hit=True) - except TypeError: - pass - if isinstance(value, Exception): - if self.debug and self.debugActions.debug_fail: - try: - self.debugActions.debug_fail( - instring, loc, self, value, cache_hit=True - ) - except TypeError: - pass - raise value - - loc_, result, endloc = value[0], value[1].copy(), value[2] - if self.debug and self.debugActions.debug_match: - try: - self.debugActions.debug_match( - instring, loc_, endloc, self, result, cache_hit=True - ) - except TypeError: - pass - - return loc_, result - - _parse = _parseNoCache - - @staticmethod - def reset_cache() -> None: - ParserElement.packrat_cache.clear() - ParserElement.packrat_cache_stats[:] = [0] * len( - ParserElement.packrat_cache_stats - ) - ParserElement.recursion_memos.clear() - - _packratEnabled = False - _left_recursion_enabled = False - - @staticmethod - def disable_memoization() -> None: - """ - Disables active Packrat or Left Recursion parsing and their memoization - - This method also works if neither Packrat nor Left Recursion are enabled. - This makes it safe to call before activating Packrat nor Left Recursion - to clear any previous settings. 
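A brief usage sketch (editor's addition; all calls are static methods defined on ``ParserElement`` in this file)::

    ParserElement.enable_packrat()          # parse some input with packrat memoization ...

    ParserElement.disable_memoization()     # clear caches and restore plain parsing
    ParserElement.enable_left_recursion()   # now safe to switch to a different strategy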
- """ - ParserElement.reset_cache() - ParserElement._left_recursion_enabled = False - ParserElement._packratEnabled = False - ParserElement._parse = ParserElement._parseNoCache - - @staticmethod - def enable_left_recursion( - cache_size_limit: typing.Optional[int] = None, *, force=False - ) -> None: - """ - Enables "bounded recursion" parsing, which allows for both direct and indirect - left-recursion. During parsing, left-recursive :class:`Forward` elements are - repeatedly matched with a fixed recursion depth that is gradually increased - until finding the longest match. - - Example:: - - from pip._vendor import pyparsing as pp - pp.ParserElement.enable_left_recursion() - - E = pp.Forward("E") - num = pp.Word(pp.nums) - # match `num`, or `num '+' num`, or `num '+' num '+' num`, ... - E <<= E + '+' - num | num - - print(E.parse_string("1+2+3")) - - Recursion search naturally memoizes matches of ``Forward`` elements and may - thus skip reevaluation of parse actions during backtracking. This may break - programs with parse actions which rely on strict ordering of side-effects. - - Parameters: - - - cache_size_limit - (default=``None``) - memoize at most this many - ``Forward`` elements during matching; if ``None`` (the default), - memoize all ``Forward`` elements. - - Bounded Recursion parsing works similar but not identical to Packrat parsing, - thus the two cannot be used together. Use ``force=True`` to disable any - previous, conflicting settings. - """ - if force: - ParserElement.disable_memoization() - elif ParserElement._packratEnabled: - raise RuntimeError("Packrat and Bounded Recursion are not compatible") - if cache_size_limit is None: - ParserElement.recursion_memos = _UnboundedMemo() - elif cache_size_limit > 0: - ParserElement.recursion_memos = _LRUMemo(capacity=cache_size_limit) - else: - raise NotImplementedError("Memo size of %s" % cache_size_limit) - ParserElement._left_recursion_enabled = True - - @staticmethod - def enable_packrat(cache_size_limit: int = 128, *, force: bool = False) -> None: - """ - Enables "packrat" parsing, which adds memoizing to the parsing logic. - Repeated parse attempts at the same string location (which happens - often in many complex grammars) can immediately return a cached value, - instead of re-executing parsing/validating code. Memoizing is done of - both valid results and parsing exceptions. - - Parameters: - - - cache_size_limit - (default= ``128``) - if an integer value is provided - will limit the size of the packrat cache; if None is passed, then - the cache size will be unbounded; if 0 is passed, the cache will - be effectively disabled. - - This speedup may break existing programs that use parse actions that - have side-effects. For this reason, packrat parsing is disabled when - you first import pyparsing. To activate the packrat feature, your - program must call the class method :class:`ParserElement.enable_packrat`. - For best results, call ``enable_packrat()`` immediately after - importing pyparsing. - - Example:: - - from pip._vendor import pyparsing - pyparsing.ParserElement.enable_packrat() - - Packrat parsing works similar but not identical to Bounded Recursion parsing, - thus the two cannot be used together. Use ``force=True`` to disable any - previous, conflicting settings. 
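An editor-added sketch of the ``cache_size_limit`` options described above::

    # default: FIFO cache limited to 128 entries
    ParserElement.enable_packrat()

    # alternatively, an unbounded cache (more memory, fewer re-parses)
    # ParserElement.enable_packrat(cache_size_limit=None)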
- """ - if force: - ParserElement.disable_memoization() - elif ParserElement._left_recursion_enabled: - raise RuntimeError("Packrat and Bounded Recursion are not compatible") - if not ParserElement._packratEnabled: - ParserElement._packratEnabled = True - if cache_size_limit is None: - ParserElement.packrat_cache = _UnboundedCache() - else: - ParserElement.packrat_cache = _FifoCache(cache_size_limit) - ParserElement._parse = ParserElement._parseCache - - def parse_string( - self, instring: str, parse_all: bool = False, *, parseAll: bool = False - ) -> ParseResults: - """ - Parse a string with respect to the parser definition. This function is intended as the primary interface to the - client code. - - :param instring: The input string to be parsed. - :param parse_all: If set, the entire input string must match the grammar. - :param parseAll: retained for pre-PEP8 compatibility, will be removed in a future release. - :raises ParseException: Raised if ``parse_all`` is set and the input string does not match the whole grammar. - :returns: the parsed data as a :class:`ParseResults` object, which may be accessed as a `list`, a `dict`, or - an object with attributes if the given parser includes results names. - - If the input string is required to match the entire grammar, ``parse_all`` flag must be set to ``True``. This - is also equivalent to ending the grammar with :class:`StringEnd`(). - - To report proper column numbers, ``parse_string`` operates on a copy of the input string where all tabs are - converted to spaces (8 spaces per tab, as per the default in ``string.expandtabs``). If the input string - contains tabs and the grammar uses parse actions that use the ``loc`` argument to index into the string - being parsed, one can ensure a consistent view of the input string by doing one of the following: - - - calling ``parse_with_tabs`` on your grammar before calling ``parse_string`` (see :class:`parse_with_tabs`), - - define your parse action using the full ``(s,loc,toks)`` signature, and reference the input string using the - parse action's ``s`` argument, or - - explicitly expand the tabs in your input string before calling ``parse_string``. - - Examples: - - By default, partial matches are OK. - - >>> res = Word('a').parse_string('aaaaabaaa') - >>> print(res) - ['aaaaa'] - - The parsing behavior varies by the inheriting class of this abstract class. Please refer to the children - directly to see more examples. - - It raises an exception if parse_all flag is set and instring does not match the whole grammar. - - >>> res = Word('a').parse_string('aaaaabaaa', parse_all=True) - Traceback (most recent call last): - ... 
- pyparsing.ParseException: Expected end of text, found 'b' (at char 5), (line:1, col:6) - """ - parseAll = parse_all or parseAll - - ParserElement.reset_cache() - if not self.streamlined: - self.streamline() - for e in self.ignoreExprs: - e.streamline() - if not self.keepTabs: - instring = instring.expandtabs() - try: - loc, tokens = self._parse(instring, 0) - if parseAll: - loc = self.preParse(instring, loc) - se = Empty() + StringEnd() - se._parse(instring, loc) - except ParseBaseException as exc: - if ParserElement.verbose_stacktrace: - raise - else: - # catch and re-raise exception from here, clearing out pyparsing internal stack trace - raise exc.with_traceback(None) - else: - return tokens - - def scan_string( - self, - instring: str, - max_matches: int = _MAX_INT, - overlap: bool = False, - *, - debug: bool = False, - maxMatches: int = _MAX_INT, - ) -> Generator[Tuple[ParseResults, int, int], None, None]: - """ - Scan the input string for expression matches. Each match will return the - matching tokens, start location, and end location. May be called with optional - ``max_matches`` argument, to clip scanning after 'n' matches are found. If - ``overlap`` is specified, then overlapping matches will be reported. - - Note that the start and end locations are reported relative to the string - being parsed. See :class:`parse_string` for more information on parsing - strings with embedded tabs. - - Example:: - - source = "sldjf123lsdjjkf345sldkjf879lkjsfd987" - print(source) - for tokens, start, end in Word(alphas).scan_string(source): - print(' '*start + '^'*(end-start)) - print(' '*start + tokens[0]) - - prints:: - - sldjf123lsdjjkf345sldkjf879lkjsfd987 - ^^^^^ - sldjf - ^^^^^^^ - lsdjjkf - ^^^^^^ - sldkjf - ^^^^^^ - lkjsfd - """ - maxMatches = min(maxMatches, max_matches) - if not self.streamlined: - self.streamline() - for e in self.ignoreExprs: - e.streamline() - - if not self.keepTabs: - instring = str(instring).expandtabs() - instrlen = len(instring) - loc = 0 - preparseFn = self.preParse - parseFn = self._parse - ParserElement.resetCache() - matches = 0 - try: - while loc <= instrlen and matches < maxMatches: - try: - preloc = preparseFn(instring, loc) - nextLoc, tokens = parseFn(instring, preloc, callPreParse=False) - except ParseException: - loc = preloc + 1 - else: - if nextLoc > loc: - matches += 1 - if debug: - print( - { - "tokens": tokens.asList(), - "start": preloc, - "end": nextLoc, - } - ) - yield tokens, preloc, nextLoc - if overlap: - nextloc = preparseFn(instring, loc) - if nextloc > loc: - loc = nextLoc - else: - loc += 1 - else: - loc = nextLoc - else: - loc = preloc + 1 - except ParseBaseException as exc: - if ParserElement.verbose_stacktrace: - raise - else: - # catch and re-raise exception from here, clears out pyparsing internal stack trace - raise exc.with_traceback(None) - - def transform_string(self, instring: str, *, debug: bool = False) -> str: - """ - Extension to :class:`scan_string`, to modify matching text with modified tokens that may - be returned from a parse action. To use ``transform_string``, define a grammar and - attach a parse action to it that modifies the returned token list. - Invoking ``transform_string()`` on a target string will then scan for matches, - and replace the matched text patterns according to the logic in the parse - action. ``transform_string()`` returns the resulting transformed string. 
- - Example:: - - wd = Word(alphas) - wd.set_parse_action(lambda toks: toks[0].title()) - - print(wd.transform_string("now is the winter of our discontent made glorious summer by this sun of york.")) - - prints:: - - Now Is The Winter Of Our Discontent Made Glorious Summer By This Sun Of York. - """ - out: List[str] = [] - lastE = 0 - # force preservation of s, to minimize unwanted transformation of string, and to - # keep string locs straight between transform_string and scan_string - self.keepTabs = True - try: - for t, s, e in self.scan_string(instring, debug=debug): - out.append(instring[lastE:s]) - if t: - if isinstance(t, ParseResults): - out += t.as_list() - elif isinstance(t, Iterable) and not isinstance(t, str_type): - out.extend(t) - else: - out.append(t) - lastE = e - out.append(instring[lastE:]) - out = [o for o in out if o] - return "".join([str(s) for s in _flatten(out)]) - except ParseBaseException as exc: - if ParserElement.verbose_stacktrace: - raise - else: - # catch and re-raise exception from here, clears out pyparsing internal stack trace - raise exc.with_traceback(None) - - def search_string( - self, - instring: str, - max_matches: int = _MAX_INT, - *, - debug: bool = False, - maxMatches: int = _MAX_INT, - ) -> ParseResults: - """ - Another extension to :class:`scan_string`, simplifying the access to the tokens found - to match the given parse expression. May be called with optional - ``max_matches`` argument, to clip searching after 'n' matches are found. - - Example:: - - # a capitalized word starts with an uppercase letter, followed by zero or more lowercase letters - cap_word = Word(alphas.upper(), alphas.lower()) - - print(cap_word.search_string("More than Iron, more than Lead, more than Gold I need Electricity")) - - # the sum() builtin can be used to merge results into a single ParseResults object - print(sum(cap_word.search_string("More than Iron, more than Lead, more than Gold I need Electricity"))) - - prints:: - - [['More'], ['Iron'], ['Lead'], ['Gold'], ['I'], ['Electricity']] - ['More', 'Iron', 'Lead', 'Gold', 'I', 'Electricity'] - """ - maxMatches = min(maxMatches, max_matches) - try: - return ParseResults( - [t for t, s, e in self.scan_string(instring, maxMatches, debug=debug)] - ) - except ParseBaseException as exc: - if ParserElement.verbose_stacktrace: - raise - else: - # catch and re-raise exception from here, clears out pyparsing internal stack trace - raise exc.with_traceback(None) - - def split( - self, - instring: str, - maxsplit: int = _MAX_INT, - include_separators: bool = False, - *, - includeSeparators=False, - ) -> Generator[str, None, None]: - """ - Generator method to split a string using the given expression as a separator. - May be called with optional ``maxsplit`` argument, to limit the number of splits; - and the optional ``include_separators`` argument (default= ``False``), if the separating - matching text should be included in the split results. - - Example:: - - punc = one_of(list(".,;:/-!?")) - print(list(punc.split("This, this?, this sentence, is badly punctuated!"))) - - prints:: - - ['This', ' this', '', ' this sentence', ' is badly punctuated', ''] - """ - includeSeparators = includeSeparators or include_separators - last = 0 - for t, s, e in self.scan_string(instring, max_matches=maxsplit): - yield instring[last:s] - if includeSeparators: - yield t[0] - last = e - yield instring[last:] - - def __add__(self, other) -> "ParserElement": - """ - Implementation of ``+`` operator - returns :class:`And`. 
Adding strings to a :class:`ParserElement` - converts them to :class:`Literal`s by default. - - Example:: - - greet = Word(alphas) + "," + Word(alphas) + "!" - hello = "Hello, World!" - print(hello, "->", greet.parse_string(hello)) - - prints:: - - Hello, World! -> ['Hello', ',', 'World', '!'] - - ``...`` may be used as a parse expression as a short form of :class:`SkipTo`. - - Literal('start') + ... + Literal('end') - - is equivalent to: - - Literal('start') + SkipTo('end')("_skipped*") + Literal('end') - - Note that the skipped text is returned with '_skipped' as a results name, - and to support having multiple skips in the same parser, the value returned is - a list of all skipped text. - """ - if other is Ellipsis: - return _PendingSkip(self) - - if isinstance(other, str_type): - other = self._literalStringClass(other) - if not isinstance(other, ParserElement): - raise TypeError( - "Cannot combine element of type {} with ParserElement".format( - type(other).__name__ - ) - ) - return And([self, other]) - - def __radd__(self, other) -> "ParserElement": - """ - Implementation of ``+`` operator when left operand is not a :class:`ParserElement` - """ - if other is Ellipsis: - return SkipTo(self)("_skipped*") + self - - if isinstance(other, str_type): - other = self._literalStringClass(other) - if not isinstance(other, ParserElement): - raise TypeError( - "Cannot combine element of type {} with ParserElement".format( - type(other).__name__ - ) - ) - return other + self - - def __sub__(self, other) -> "ParserElement": - """ - Implementation of ``-`` operator, returns :class:`And` with error stop - """ - if isinstance(other, str_type): - other = self._literalStringClass(other) - if not isinstance(other, ParserElement): - raise TypeError( - "Cannot combine element of type {} with ParserElement".format( - type(other).__name__ - ) - ) - return self + And._ErrorStop() + other - - def __rsub__(self, other) -> "ParserElement": - """ - Implementation of ``-`` operator when left operand is not a :class:`ParserElement` - """ - if isinstance(other, str_type): - other = self._literalStringClass(other) - if not isinstance(other, ParserElement): - raise TypeError( - "Cannot combine element of type {} with ParserElement".format( - type(other).__name__ - ) - ) - return other - self - - def __mul__(self, other) -> "ParserElement": - """ - Implementation of ``*`` operator, allows use of ``expr * 3`` in place of - ``expr + expr + expr``. Expressions may also be multiplied by a 2-integer - tuple, similar to ``{min, max}`` multipliers in regular expressions. Tuples - may also include ``None`` as in: - - ``expr*(n, None)`` or ``expr*(n, )`` is equivalent - to ``expr*n + ZeroOrMore(expr)`` - (read as "at least n instances of ``expr``") - - ``expr*(None, n)`` is equivalent to ``expr*(0, n)`` - (read as "0 to n instances of ``expr``") - - ``expr*(None, None)`` is equivalent to ``ZeroOrMore(expr)`` - - ``expr*(1, None)`` is equivalent to ``OneOrMore(expr)`` - - Note that ``expr*(None, n)`` does not raise an exception if - more than n exprs exist in the input stream; that is, - ``expr*(None, n)`` does not enforce a maximum number of expr - occurrences. 
If this behavior is desired, then write - ``expr*(None, n) + ~expr`` - """ - if other is Ellipsis: - other = (0, None) - elif isinstance(other, tuple) and other[:1] == (Ellipsis,): - other = ((0,) + other[1:] + (None,))[:2] - - if isinstance(other, int): - minElements, optElements = other, 0 - elif isinstance(other, tuple): - other = tuple(o if o is not Ellipsis else None for o in other) - other = (other + (None, None))[:2] - if other[0] is None: - other = (0, other[1]) - if isinstance(other[0], int) and other[1] is None: - if other[0] == 0: - return ZeroOrMore(self) - if other[0] == 1: - return OneOrMore(self) - else: - return self * other[0] + ZeroOrMore(self) - elif isinstance(other[0], int) and isinstance(other[1], int): - minElements, optElements = other - optElements -= minElements - else: - raise TypeError( - "cannot multiply ParserElement and ({}) objects".format( - ",".join(type(item).__name__ for item in other) - ) - ) - else: - raise TypeError( - "cannot multiply ParserElement and {} objects".format( - type(other).__name__ - ) - ) - - if minElements < 0: - raise ValueError("cannot multiply ParserElement by negative value") - if optElements < 0: - raise ValueError( - "second tuple value must be greater or equal to first tuple value" - ) - if minElements == optElements == 0: - return And([]) - - if optElements: - - def makeOptionalList(n): - if n > 1: - return Opt(self + makeOptionalList(n - 1)) - else: - return Opt(self) - - if minElements: - if minElements == 1: - ret = self + makeOptionalList(optElements) - else: - ret = And([self] * minElements) + makeOptionalList(optElements) - else: - ret = makeOptionalList(optElements) - else: - if minElements == 1: - ret = self - else: - ret = And([self] * minElements) - return ret - - def __rmul__(self, other) -> "ParserElement": - return self.__mul__(other) - - def __or__(self, other) -> "ParserElement": - """ - Implementation of ``|`` operator - returns :class:`MatchFirst` - """ - if other is Ellipsis: - return _PendingSkip(self, must_skip=True) - - if isinstance(other, str_type): - other = self._literalStringClass(other) - if not isinstance(other, ParserElement): - raise TypeError( - "Cannot combine element of type {} with ParserElement".format( - type(other).__name__ - ) - ) - return MatchFirst([self, other]) - - def __ror__(self, other) -> "ParserElement": - """ - Implementation of ``|`` operator when left operand is not a :class:`ParserElement` - """ - if isinstance(other, str_type): - other = self._literalStringClass(other) - if not isinstance(other, ParserElement): - raise TypeError( - "Cannot combine element of type {} with ParserElement".format( - type(other).__name__ - ) - ) - return other | self - - def __xor__(self, other) -> "ParserElement": - """ - Implementation of ``^`` operator - returns :class:`Or` - """ - if isinstance(other, str_type): - other = self._literalStringClass(other) - if not isinstance(other, ParserElement): - raise TypeError( - "Cannot combine element of type {} with ParserElement".format( - type(other).__name__ - ) - ) - return Or([self, other]) - - def __rxor__(self, other) -> "ParserElement": - """ - Implementation of ``^`` operator when left operand is not a :class:`ParserElement` - """ - if isinstance(other, str_type): - other = self._literalStringClass(other) - if not isinstance(other, ParserElement): - raise TypeError( - "Cannot combine element of type {} with ParserElement".format( - type(other).__name__ - ) - ) - return other ^ self - - def __and__(self, other) -> "ParserElement": - """ - 
Implementation of ``&`` operator - returns :class:`Each` - """ - if isinstance(other, str_type): - other = self._literalStringClass(other) - if not isinstance(other, ParserElement): - raise TypeError( - "Cannot combine element of type {} with ParserElement".format( - type(other).__name__ - ) - ) - return Each([self, other]) - - def __rand__(self, other) -> "ParserElement": - """ - Implementation of ``&`` operator when left operand is not a :class:`ParserElement` - """ - if isinstance(other, str_type): - other = self._literalStringClass(other) - if not isinstance(other, ParserElement): - raise TypeError( - "Cannot combine element of type {} with ParserElement".format( - type(other).__name__ - ) - ) - return other & self - - def __invert__(self) -> "ParserElement": - """ - Implementation of ``~`` operator - returns :class:`NotAny` - """ - return NotAny(self) - - # disable __iter__ to override legacy use of sequential access to __getitem__ to - # iterate over a sequence - __iter__ = None - - def __getitem__(self, key): - """ - use ``[]`` indexing notation as a short form for expression repetition: - - - ``expr[n]`` is equivalent to ``expr*n`` - - ``expr[m, n]`` is equivalent to ``expr*(m, n)`` - - ``expr[n, ...]`` or ``expr[n,]`` is equivalent - to ``expr*n + ZeroOrMore(expr)`` - (read as "at least n instances of ``expr``") - - ``expr[..., n]`` is equivalent to ``expr*(0, n)`` - (read as "0 to n instances of ``expr``") - - ``expr[...]`` and ``expr[0, ...]`` are equivalent to ``ZeroOrMore(expr)`` - - ``expr[1, ...]`` is equivalent to ``OneOrMore(expr)`` - - ``None`` may be used in place of ``...``. - - Note that ``expr[..., n]`` and ``expr[m, n]``do not raise an exception - if more than ``n`` ``expr``s exist in the input stream. If this behavior is - desired, then write ``expr[..., n] + ~expr``. - """ - - # convert single arg keys to tuples - try: - if isinstance(key, str_type): - key = (key,) - iter(key) - except TypeError: - key = (key, key) - - if len(key) > 2: - raise TypeError( - "only 1 or 2 index arguments supported ({}{})".format( - key[:5], "... [{}]".format(len(key)) if len(key) > 5 else "" - ) - ) - - # clip to 2 elements - ret = self * tuple(key[:2]) - return ret - - def __call__(self, name: str = None) -> "ParserElement": - """ - Shortcut for :class:`set_results_name`, with ``list_all_matches=False``. - - If ``name`` is given with a trailing ``'*'`` character, then ``list_all_matches`` will be - passed as ``True``. - - If ``name` is omitted, same as calling :class:`copy`. - - Example:: - - # these are equivalent - userdata = Word(alphas).set_results_name("name") + Word(nums + "-").set_results_name("socsecno") - userdata = Word(alphas)("name") + Word(nums + "-")("socsecno") - """ - if name is not None: - return self._setResultsName(name) - else: - return self.copy() - - def suppress(self) -> "ParserElement": - """ - Suppresses the output of this :class:`ParserElement`; useful to keep punctuation from - cluttering up returned output. - """ - return Suppress(self) - - def ignore_whitespace(self, recursive: bool = True) -> "ParserElement": - """ - Enables the skipping of whitespace before matching the characters in the - :class:`ParserElement`'s defined pattern. 
- - :param recursive: If ``True`` (the default), also enable whitespace skipping in child elements (if any) - """ - self.skipWhitespace = True - return self - - def leave_whitespace(self, recursive: bool = True) -> "ParserElement": - """ - Disables the skipping of whitespace before matching the characters in the - :class:`ParserElement`'s defined pattern. This is normally only used internally by - the pyparsing module, but may be needed in some whitespace-sensitive grammars. - - :param recursive: If true (the default), also disable whitespace skipping in child elements (if any) - """ - self.skipWhitespace = False - return self - - def set_whitespace_chars( - self, chars: Union[Set[str], str], copy_defaults: bool = False - ) -> "ParserElement": - """ - Overrides the default whitespace chars - """ - self.skipWhitespace = True - self.whiteChars = set(chars) - self.copyDefaultWhiteChars = copy_defaults - return self - - def parse_with_tabs(self) -> "ParserElement": - """ - Overrides default behavior to expand ```` s to spaces before parsing the input string. - Must be called before ``parse_string`` when the input grammar contains elements that - match ```` characters. - """ - self.keepTabs = True - return self - - def ignore(self, other: "ParserElement") -> "ParserElement": - """ - Define expression to be ignored (e.g., comments) while doing pattern - matching; may be called repeatedly, to define multiple comment or other - ignorable patterns. - - Example:: - - patt = Word(alphas)[1, ...] - patt.parse_string('ablaj /* comment */ lskjd') - # -> ['ablaj'] - - patt.ignore(c_style_comment) - patt.parse_string('ablaj /* comment */ lskjd') - # -> ['ablaj', 'lskjd'] - """ - import typing - - if isinstance(other, str_type): - other = Suppress(other) - - if isinstance(other, Suppress): - if other not in self.ignoreExprs: - self.ignoreExprs.append(other) - else: - self.ignoreExprs.append(Suppress(other.copy())) - return self - - def set_debug_actions( - self, - start_action: DebugStartAction, - success_action: DebugSuccessAction, - exception_action: DebugExceptionAction, - ) -> "ParserElement": - """ - Customize display of debugging messages while doing pattern matching: - - - ``start_action`` - method to be called when an expression is about to be parsed; - should have the signature ``fn(input_string: str, location: int, expression: ParserElement, cache_hit: bool)`` - - - ``success_action`` - method to be called when an expression has successfully parsed; - should have the signature ``fn(input_string: str, start_location: int, end_location: int, expression: ParserELement, parsed_tokens: ParseResults, cache_hit: bool)`` - - - ``exception_action`` - method to be called when expression fails to parse; - should have the signature ``fn(input_string: str, location: int, expression: ParserElement, exception: Exception, cache_hit: bool)`` - """ - self.debugActions = self.DebugActions( - start_action or _default_start_debug_action, - success_action or _default_success_debug_action, - exception_action or _default_exception_debug_action, - ) - self.debug = True - return self - - def set_debug(self, flag: bool = True) -> "ParserElement": - """ - Enable display of debugging messages while doing pattern matching. - Set ``flag`` to ``True`` to enable, ``False`` to disable. 
- - Example:: - - wd = Word(alphas).set_name("alphaword") - integer = Word(nums).set_name("numword") - term = wd | integer - - # turn on debugging for wd - wd.set_debug() - - term[1, ...].parse_string("abc 123 xyz 890") - - prints:: - - Match alphaword at loc 0(1,1) - Matched alphaword -> ['abc'] - Match alphaword at loc 3(1,4) - Exception raised:Expected alphaword (at char 4), (line:1, col:5) - Match alphaword at loc 7(1,8) - Matched alphaword -> ['xyz'] - Match alphaword at loc 11(1,12) - Exception raised:Expected alphaword (at char 12), (line:1, col:13) - Match alphaword at loc 15(1,16) - Exception raised:Expected alphaword (at char 15), (line:1, col:16) - - The output shown is that produced by the default debug actions - custom debug actions can be - specified using :class:`set_debug_actions`. Prior to attempting - to match the ``wd`` expression, the debugging message ``"Match at loc (,)"`` - is shown. Then if the parse succeeds, a ``"Matched"`` message is shown, or an ``"Exception raised"`` - message is shown. Also note the use of :class:`set_name` to assign a human-readable name to the expression, - which makes debugging and exception messages easier to understand - for instance, the default - name created for the :class:`Word` expression without calling ``set_name`` is ``"W:(A-Za-z)"``. - """ - if flag: - self.set_debug_actions( - _default_start_debug_action, - _default_success_debug_action, - _default_exception_debug_action, - ) - else: - self.debug = False - return self - - @property - def default_name(self) -> str: - if self._defaultName is None: - self._defaultName = self._generateDefaultName() - return self._defaultName - - @abstractmethod - def _generateDefaultName(self): - """ - Child classes must define this method, which defines how the ``default_name`` is set. - """ - - def set_name(self, name: str) -> "ParserElement": - """ - Define name for this expression, makes debugging and exception messages clearer. - Example:: - Word(nums).parse_string("ABC") # -> Exception: Expected W:(0-9) (at char 0), (line:1, col:1) - Word(nums).set_name("integer").parse_string("ABC") # -> Exception: Expected integer (at char 0), (line:1, col:1) - """ - self.customName = name - self.errmsg = "Expected " + self.name - if __diag__.enable_debug_on_named_expressions: - self.set_debug() - return self - - @property - def name(self) -> str: - # This will use a user-defined name if available, but otherwise defaults back to the auto-generated name - return self.customName if self.customName is not None else self.default_name - - def __str__(self) -> str: - return self.name - - def __repr__(self) -> str: - return str(self) - - def streamline(self) -> "ParserElement": - self.streamlined = True - self._defaultName = None - return self - - def recurse(self) -> Sequence["ParserElement"]: - return [] - - def _checkRecursion(self, parseElementList): - subRecCheckList = parseElementList[:] + [self] - for e in self.recurse(): - e._checkRecursion(subRecCheckList) - - def validate(self, validateTrace=None) -> None: - """ - Check defined expressions for valid structure, check for infinite recursive definitions. - """ - self._checkRecursion([]) - - def parse_file( - self, - file_or_filename: Union[str, Path, TextIO], - encoding: str = "utf-8", - parse_all: bool = False, - *, - parseAll: bool = False, - ) -> ParseResults: - """ - Execute the parse expression on the given file or filename. - If a filename is specified (instead of a file object), - the entire file is opened, read, and closed before parsing. 
- """ - parseAll = parseAll or parse_all - try: - file_contents = file_or_filename.read() - except AttributeError: - with open(file_or_filename, "r", encoding=encoding) as f: - file_contents = f.read() - try: - return self.parse_string(file_contents, parseAll) - except ParseBaseException as exc: - if ParserElement.verbose_stacktrace: - raise - else: - # catch and re-raise exception from here, clears out pyparsing internal stack trace - raise exc.with_traceback(None) - - def __eq__(self, other): - if self is other: - return True - elif isinstance(other, str_type): - return self.matches(other, parse_all=True) - elif isinstance(other, ParserElement): - return vars(self) == vars(other) - return False - - def __hash__(self): - return id(self) - - def matches( - self, test_string: str, parse_all: bool = True, *, parseAll: bool = True - ) -> bool: - """ - Method for quick testing of a parser against a test string. Good for simple - inline microtests of sub expressions while building up larger parser. - - Parameters: - - ``test_string`` - to test against this expression for a match - - ``parse_all`` - (default= ``True``) - flag to pass to :class:`parse_string` when running tests - - Example:: - - expr = Word(nums) - assert expr.matches("100") - """ - parseAll = parseAll and parse_all - try: - self.parse_string(str(test_string), parse_all=parseAll) - return True - except ParseBaseException: - return False - - def run_tests( - self, - tests: Union[str, List[str]], - parse_all: bool = True, - comment: typing.Optional[Union["ParserElement", str]] = "#", - full_dump: bool = True, - print_results: bool = True, - failure_tests: bool = False, - post_parse: Callable[[str, ParseResults], str] = None, - file: typing.Optional[TextIO] = None, - with_line_numbers: bool = False, - *, - parseAll: bool = True, - fullDump: bool = True, - printResults: bool = True, - failureTests: bool = False, - postParse: Callable[[str, ParseResults], str] = None, - ) -> Tuple[bool, List[Tuple[str, Union[ParseResults, Exception]]]]: - """ - Execute the parse expression on a series of test strings, showing each - test, the parsed results or where the parse failed. Quick and easy way to - run a parse expression against a list of sample strings. 
- - Parameters: - - ``tests`` - a list of separate test strings, or a multiline string of test strings - - ``parse_all`` - (default= ``True``) - flag to pass to :class:`parse_string` when running tests - - ``comment`` - (default= ``'#'``) - expression for indicating embedded comments in the test - string; pass None to disable comment filtering - - ``full_dump`` - (default= ``True``) - dump results as list followed by results names in nested outline; - if False, only dump nested list - - ``print_results`` - (default= ``True``) prints test output to stdout - - ``failure_tests`` - (default= ``False``) indicates if these tests are expected to fail parsing - - ``post_parse`` - (default= ``None``) optional callback for successful parse results; called as - `fn(test_string, parse_results)` and returns a string to be added to the test output - - ``file`` - (default= ``None``) optional file-like object to which test output will be written; - if None, will default to ``sys.stdout`` - - ``with_line_numbers`` - default= ``False``) show test strings with line and column numbers - - Returns: a (success, results) tuple, where success indicates that all tests succeeded - (or failed if ``failure_tests`` is True), and the results contain a list of lines of each - test's output - - Example:: - - number_expr = pyparsing_common.number.copy() - - result = number_expr.run_tests(''' - # unsigned integer - 100 - # negative integer - -100 - # float with scientific notation - 6.02e23 - # integer with scientific notation - 1e-12 - ''') - print("Success" if result[0] else "Failed!") - - result = number_expr.run_tests(''' - # stray character - 100Z - # missing leading digit before '.' - -.100 - # too many '.' - 3.14.159 - ''', failure_tests=True) - print("Success" if result[0] else "Failed!") - - prints:: - - # unsigned integer - 100 - [100] - - # negative integer - -100 - [-100] - - # float with scientific notation - 6.02e23 - [6.02e+23] - - # integer with scientific notation - 1e-12 - [1e-12] - - Success - - # stray character - 100Z - ^ - FAIL: Expected end of text (at char 3), (line:1, col:4) - - # missing leading digit before '.' - -.100 - ^ - FAIL: Expected {real number with scientific notation | real number | signed integer} (at char 0), (line:1, col:1) - - # too many '.' - 3.14.159 - ^ - FAIL: Expected end of text (at char 4), (line:1, col:5) - - Success - - Each test string must be on a single line. If you want to test a string that spans multiple - lines, create a test like this:: - - expr.run_tests(r"this is a test\\n of strings that spans \\n 3 lines") - - (Note that this is a raw string literal, you must include the leading ``'r'``.) 
- """ - from .testing import pyparsing_test - - parseAll = parseAll and parse_all - fullDump = fullDump and full_dump - printResults = printResults and print_results - failureTests = failureTests or failure_tests - postParse = postParse or post_parse - if isinstance(tests, str_type): - line_strip = type(tests).strip - tests = [line_strip(test_line) for test_line in tests.rstrip().splitlines()] - if isinstance(comment, str_type): - comment = Literal(comment) - if file is None: - file = sys.stdout - print_ = file.write - - result: Union[ParseResults, Exception] - allResults = [] - comments = [] - success = True - NL = Literal(r"\n").add_parse_action(replace_with("\n")).ignore(quoted_string) - BOM = "\ufeff" - for t in tests: - if comment is not None and comment.matches(t, False) or comments and not t: - comments.append( - pyparsing_test.with_line_numbers(t) if with_line_numbers else t - ) - continue - if not t: - continue - out = [ - "\n" + "\n".join(comments) if comments else "", - pyparsing_test.with_line_numbers(t) if with_line_numbers else t, - ] - comments = [] - try: - # convert newline marks to actual newlines, and strip leading BOM if present - t = NL.transform_string(t.lstrip(BOM)) - result = self.parse_string(t, parse_all=parseAll) - except ParseBaseException as pe: - fatal = "(FATAL)" if isinstance(pe, ParseFatalException) else "" - out.append(pe.explain()) - out.append("FAIL: " + str(pe)) - if ParserElement.verbose_stacktrace: - out.extend(traceback.format_tb(pe.__traceback__)) - success = success and failureTests - result = pe - except Exception as exc: - out.append("FAIL-EXCEPTION: {}: {}".format(type(exc).__name__, exc)) - if ParserElement.verbose_stacktrace: - out.extend(traceback.format_tb(exc.__traceback__)) - success = success and failureTests - result = exc - else: - success = success and not failureTests - if postParse is not None: - try: - pp_value = postParse(t, result) - if pp_value is not None: - if isinstance(pp_value, ParseResults): - out.append(pp_value.dump()) - else: - out.append(str(pp_value)) - else: - out.append(result.dump()) - except Exception as e: - out.append(result.dump(full=fullDump)) - out.append( - "{} failed: {}: {}".format( - postParse.__name__, type(e).__name__, e - ) - ) - else: - out.append(result.dump(full=fullDump)) - out.append("") - - if printResults: - print_("\n".join(out)) - - allResults.append((t, result)) - - return success, allResults - - def create_diagram( - self, - output_html: Union[TextIO, Path, str], - vertical: int = 3, - show_results_names: bool = False, - show_groups: bool = False, - **kwargs, - ) -> None: - """ - Create a railroad diagram for the parser. - - Parameters: - - output_html (str or file-like object) - output target for generated - diagram HTML - - vertical (int) - threshold for formatting multiple alternatives vertically - instead of horizontally (default=3) - - show_results_names - bool flag whether diagram should show annotations for - defined results names - - show_groups - bool flag whether groups should be highlighted with an unlabeled surrounding box - Additional diagram-formatting keyword arguments can also be included; - see railroad.Diagram class. 
- """ - - try: - from .diagram import to_railroad, railroad_to_html - except ImportError as ie: - raise Exception( - "must ``pip install pyparsing[diagrams]`` to generate parser railroad diagrams" - ) from ie - - self.streamline() - - railroad = to_railroad( - self, - vertical=vertical, - show_results_names=show_results_names, - show_groups=show_groups, - diagram_kwargs=kwargs, - ) - if isinstance(output_html, (str, Path)): - with open(output_html, "w", encoding="utf-8") as diag_file: - diag_file.write(railroad_to_html(railroad)) - else: - # we were passed a file-like object, just write to it - output_html.write(railroad_to_html(railroad)) - - setDefaultWhitespaceChars = set_default_whitespace_chars - inlineLiteralsUsing = inline_literals_using - setResultsName = set_results_name - setBreak = set_break - setParseAction = set_parse_action - addParseAction = add_parse_action - addCondition = add_condition - setFailAction = set_fail_action - tryParse = try_parse - canParseNext = can_parse_next - resetCache = reset_cache - enableLeftRecursion = enable_left_recursion - enablePackrat = enable_packrat - parseString = parse_string - scanString = scan_string - searchString = search_string - transformString = transform_string - setWhitespaceChars = set_whitespace_chars - parseWithTabs = parse_with_tabs - setDebugActions = set_debug_actions - setDebug = set_debug - defaultName = default_name - setName = set_name - parseFile = parse_file - runTests = run_tests - ignoreWhitespace = ignore_whitespace - leaveWhitespace = leave_whitespace - - -class _PendingSkip(ParserElement): - # internal placeholder class to hold a place were '...' is added to a parser element, - # once another ParserElement is added, this placeholder will be replaced with a SkipTo - def __init__(self, expr: ParserElement, must_skip: bool = False): - super().__init__() - self.anchor = expr - self.must_skip = must_skip - - def _generateDefaultName(self): - return str(self.anchor + Empty()).replace("Empty", "...") - - def __add__(self, other) -> "ParserElement": - skipper = SkipTo(other).set_name("...")("_skipped*") - if self.must_skip: - - def must_skip(t): - if not t._skipped or t._skipped.as_list() == [""]: - del t[0] - t.pop("_skipped", None) - - def show_skip(t): - if t._skipped.as_list()[-1:] == [""]: - t.pop("_skipped") - t["_skipped"] = "missing <" + repr(self.anchor) + ">" - - return ( - self.anchor + skipper().add_parse_action(must_skip) - | skipper().add_parse_action(show_skip) - ) + other - - return self.anchor + skipper + other - - def __repr__(self): - return self.defaultName - - def parseImpl(self, *args): - raise Exception( - "use of `...` expression without following SkipTo target expression" - ) - - -class Token(ParserElement): - """Abstract :class:`ParserElement` subclass, for defining atomic - matching patterns. - """ - - def __init__(self): - super().__init__(savelist=False) - - def _generateDefaultName(self): - return type(self).__name__ - - -class Empty(Token): - """ - An empty token, will always match. - """ - - def __init__(self): - super().__init__() - self.mayReturnEmpty = True - self.mayIndexError = False - - -class NoMatch(Token): - """ - A token that will never match. - """ - - def __init__(self): - super().__init__() - self.mayReturnEmpty = True - self.mayIndexError = False - self.errmsg = "Unmatchable token" - - def parseImpl(self, instring, loc, doActions=True): - raise ParseException(instring, loc, self.errmsg, self) - - -class Literal(Token): - """ - Token to exactly match a specified string. 
- - Example:: - - Literal('blah').parse_string('blah') # -> ['blah'] - Literal('blah').parse_string('blahfooblah') # -> ['blah'] - Literal('blah').parse_string('bla') # -> Exception: Expected "blah" - - For case-insensitive matching, use :class:`CaselessLiteral`. - - For keyword matching (force word break before and after the matched string), - use :class:`Keyword` or :class:`CaselessKeyword`. - """ - - def __init__(self, match_string: str = "", *, matchString: str = ""): - super().__init__() - match_string = matchString or match_string - self.match = match_string - self.matchLen = len(match_string) - try: - self.firstMatchChar = match_string[0] - except IndexError: - raise ValueError("null string passed to Literal; use Empty() instead") - self.errmsg = "Expected " + self.name - self.mayReturnEmpty = False - self.mayIndexError = False - - # Performance tuning: modify __class__ to select - # a parseImpl optimized for single-character check - if self.matchLen == 1 and type(self) is Literal: - self.__class__ = _SingleCharLiteral - - def _generateDefaultName(self): - return repr(self.match) - - def parseImpl(self, instring, loc, doActions=True): - if instring[loc] == self.firstMatchChar and instring.startswith( - self.match, loc - ): - return loc + self.matchLen, self.match - raise ParseException(instring, loc, self.errmsg, self) - - -class _SingleCharLiteral(Literal): - def parseImpl(self, instring, loc, doActions=True): - if instring[loc] == self.firstMatchChar: - return loc + 1, self.match - raise ParseException(instring, loc, self.errmsg, self) - - -ParserElement._literalStringClass = Literal - - -class Keyword(Token): - """ - Token to exactly match a specified string as a keyword, that is, - it must be immediately followed by a non-keyword character. Compare - with :class:`Literal`: - - - ``Literal("if")`` will match the leading ``'if'`` in - ``'ifAndOnlyIf'``. - - ``Keyword("if")`` will not; it will only match the leading - ``'if'`` in ``'if x=1'``, or ``'if(y==2)'`` - - Accepts two optional constructor arguments in addition to the - keyword string: - - - ``identChars`` is a string of characters that would be valid - identifier characters, defaulting to all alphanumerics + "_" and - "$" - - ``caseless`` allows case-insensitive matching, default is ``False``. - - Example:: - - Keyword("start").parse_string("start") # -> ['start'] - Keyword("start").parse_string("starting") # -> Exception - - For case-insensitive matching, use :class:`CaselessKeyword`. 
- """ - - DEFAULT_KEYWORD_CHARS = alphanums + "_$" - - def __init__( - self, - match_string: str = "", - ident_chars: typing.Optional[str] = None, - caseless: bool = False, - *, - matchString: str = "", - identChars: typing.Optional[str] = None, - ): - super().__init__() - identChars = identChars or ident_chars - if identChars is None: - identChars = Keyword.DEFAULT_KEYWORD_CHARS - match_string = matchString or match_string - self.match = match_string - self.matchLen = len(match_string) - try: - self.firstMatchChar = match_string[0] - except IndexError: - raise ValueError("null string passed to Keyword; use Empty() instead") - self.errmsg = "Expected {} {}".format(type(self).__name__, self.name) - self.mayReturnEmpty = False - self.mayIndexError = False - self.caseless = caseless - if caseless: - self.caselessmatch = match_string.upper() - identChars = identChars.upper() - self.identChars = set(identChars) - - def _generateDefaultName(self): - return repr(self.match) - - def parseImpl(self, instring, loc, doActions=True): - errmsg = self.errmsg - errloc = loc - if self.caseless: - if instring[loc : loc + self.matchLen].upper() == self.caselessmatch: - if loc == 0 or instring[loc - 1].upper() not in self.identChars: - if ( - loc >= len(instring) - self.matchLen - or instring[loc + self.matchLen].upper() not in self.identChars - ): - return loc + self.matchLen, self.match - else: - # followed by keyword char - errmsg += ", was immediately followed by keyword character" - errloc = loc + self.matchLen - else: - # preceded by keyword char - errmsg += ", keyword was immediately preceded by keyword character" - errloc = loc - 1 - # else no match just raise plain exception - - else: - if ( - instring[loc] == self.firstMatchChar - and self.matchLen == 1 - or instring.startswith(self.match, loc) - ): - if loc == 0 or instring[loc - 1] not in self.identChars: - if ( - loc >= len(instring) - self.matchLen - or instring[loc + self.matchLen] not in self.identChars - ): - return loc + self.matchLen, self.match - else: - # followed by keyword char - errmsg += ( - ", keyword was immediately followed by keyword character" - ) - errloc = loc + self.matchLen - else: - # preceded by keyword char - errmsg += ", keyword was immediately preceded by keyword character" - errloc = loc - 1 - # else no match just raise plain exception - - raise ParseException(instring, errloc, errmsg, self) - - @staticmethod - def set_default_keyword_chars(chars) -> None: - """ - Overrides the default characters used by :class:`Keyword` expressions. - """ - Keyword.DEFAULT_KEYWORD_CHARS = chars - - setDefaultKeywordChars = set_default_keyword_chars - - -class CaselessLiteral(Literal): - """ - Token to match a specified string, ignoring case of letters. - Note: the matched results will always be in the case of the given - match string, NOT the case of the input text. - - Example:: - - CaselessLiteral("CMD")[1, ...].parse_string("cmd CMD Cmd10") - # -> ['CMD', 'CMD', 'CMD'] - - (Contrast with example for :class:`CaselessKeyword`.) - """ - - def __init__(self, match_string: str = "", *, matchString: str = ""): - match_string = matchString or match_string - super().__init__(match_string.upper()) - # Preserve the defining literal. 
- self.returnString = match_string - self.errmsg = "Expected " + self.name - - def parseImpl(self, instring, loc, doActions=True): - if instring[loc : loc + self.matchLen].upper() == self.match: - return loc + self.matchLen, self.returnString - raise ParseException(instring, loc, self.errmsg, self) - - -class CaselessKeyword(Keyword): - """ - Caseless version of :class:`Keyword`. - - Example:: - - CaselessKeyword("CMD")[1, ...].parse_string("cmd CMD Cmd10") - # -> ['CMD', 'CMD'] - - (Contrast with example for :class:`CaselessLiteral`.) - """ - - def __init__( - self, - match_string: str = "", - ident_chars: typing.Optional[str] = None, - *, - matchString: str = "", - identChars: typing.Optional[str] = None, - ): - identChars = identChars or ident_chars - match_string = matchString or match_string - super().__init__(match_string, identChars, caseless=True) - - -class CloseMatch(Token): - """A variation on :class:`Literal` which matches "close" matches, - that is, strings with at most 'n' mismatching characters. - :class:`CloseMatch` takes parameters: - - - ``match_string`` - string to be matched - - ``caseless`` - a boolean indicating whether to ignore casing when comparing characters - - ``max_mismatches`` - (``default=1``) maximum number of - mismatches allowed to count as a match - - The results from a successful parse will contain the matched text - from the input string and the following named results: - - - ``mismatches`` - a list of the positions within the - match_string where mismatches were found - - ``original`` - the original match_string used to compare - against the input string - - If ``mismatches`` is an empty list, then the match was an exact - match. - - Example:: - - patt = CloseMatch("ATCATCGAATGGA") - patt.parse_string("ATCATCGAAXGGA") # -> (['ATCATCGAAXGGA'], {'mismatches': [[9]], 'original': ['ATCATCGAATGGA']}) - patt.parse_string("ATCAXCGAAXGGA") # -> Exception: Expected 'ATCATCGAATGGA' (with up to 1 mismatches) (at char 0), (line:1, col:1) - - # exact match - patt.parse_string("ATCATCGAATGGA") # -> (['ATCATCGAATGGA'], {'mismatches': [[]], 'original': ['ATCATCGAATGGA']}) - - # close match allowing up to 2 mismatches - patt = CloseMatch("ATCATCGAATGGA", max_mismatches=2) - patt.parse_string("ATCAXCGAAXGGA") # -> (['ATCAXCGAAXGGA'], {'mismatches': [[4, 9]], 'original': ['ATCATCGAATGGA']}) - """ - - def __init__( - self, - match_string: str, - max_mismatches: int = None, - *, - maxMismatches: int = 1, - caseless=False, - ): - maxMismatches = max_mismatches if max_mismatches is not None else maxMismatches - super().__init__() - self.match_string = match_string - self.maxMismatches = maxMismatches - self.errmsg = "Expected {!r} (with up to {} mismatches)".format( - self.match_string, self.maxMismatches - ) - self.caseless = caseless - self.mayIndexError = False - self.mayReturnEmpty = False - - def _generateDefaultName(self): - return "{}:{!r}".format(type(self).__name__, self.match_string) - - def parseImpl(self, instring, loc, doActions=True): - start = loc - instrlen = len(instring) - maxloc = start + len(self.match_string) - - if maxloc <= instrlen: - match_string = self.match_string - match_stringloc = 0 - mismatches = [] - maxMismatches = self.maxMismatches - - for match_stringloc, s_m in enumerate( - zip(instring[loc:maxloc], match_string) - ): - src, mat = s_m - if self.caseless: - src, mat = src.lower(), mat.lower() - - if src != mat: - mismatches.append(match_stringloc) - if len(mismatches) > maxMismatches: - break - else: - loc = start + match_stringloc + 1 - 
results = ParseResults([instring[start:loc]]) - results["original"] = match_string - results["mismatches"] = mismatches - return loc, results - - raise ParseException(instring, loc, self.errmsg, self) - - -class Word(Token): - """Token for matching words composed of allowed character sets. - Parameters: - - ``init_chars`` - string of all characters that should be used to - match as a word; "ABC" will match "AAA", "ABAB", "CBAC", etc.; - if ``body_chars`` is also specified, then this is the string of - initial characters - - ``body_chars`` - string of characters that - can be used for matching after a matched initial character as - given in ``init_chars``; if omitted, same as the initial characters - (default=``None``) - - ``min`` - minimum number of characters to match (default=1) - - ``max`` - maximum number of characters to match (default=0) - - ``exact`` - exact number of characters to match (default=0) - - ``as_keyword`` - match as a keyword (default=``False``) - - ``exclude_chars`` - characters that might be - found in the input ``body_chars`` string but which should not be - accepted for matching ;useful to define a word of all - printables except for one or two characters, for instance - (default=``None``) - - :class:`srange` is useful for defining custom character set strings - for defining :class:`Word` expressions, using range notation from - regular expression character sets. - - A common mistake is to use :class:`Word` to match a specific literal - string, as in ``Word("Address")``. Remember that :class:`Word` - uses the string argument to define *sets* of matchable characters. - This expression would match "Add", "AAA", "dAred", or any other word - made up of the characters 'A', 'd', 'r', 'e', and 's'. To match an - exact literal string, use :class:`Literal` or :class:`Keyword`. - - pyparsing includes helper strings for building Words: - - - :class:`alphas` - - :class:`nums` - - :class:`alphanums` - - :class:`hexnums` - - :class:`alphas8bit` (alphabetic characters in ASCII range 128-255 - - accented, tilded, umlauted, etc.) - - :class:`punc8bit` (non-alphabetic characters in ASCII range - 128-255 - currency, symbols, superscripts, diacriticals, etc.) - - :class:`printables` (any non-whitespace character) - - ``alphas``, ``nums``, and ``printables`` are also defined in several - Unicode sets - see :class:`pyparsing_unicode``. 
- - Example:: - - # a word composed of digits - integer = Word(nums) # equivalent to Word("0123456789") or Word(srange("0-9")) - - # a word with a leading capital, and zero or more lowercase - capital_word = Word(alphas.upper(), alphas.lower()) - - # hostnames are alphanumeric, with leading alpha, and '-' - hostname = Word(alphas, alphanums + '-') - - # roman numeral (not a strict parser, accepts invalid mix of characters) - roman = Word("IVXLCDM") - - # any string of non-whitespace characters, except for ',' - csv_value = Word(printables, exclude_chars=",") - """ - - def __init__( - self, - init_chars: str = "", - body_chars: typing.Optional[str] = None, - min: int = 1, - max: int = 0, - exact: int = 0, - as_keyword: bool = False, - exclude_chars: typing.Optional[str] = None, - *, - initChars: typing.Optional[str] = None, - bodyChars: typing.Optional[str] = None, - asKeyword: bool = False, - excludeChars: typing.Optional[str] = None, - ): - initChars = initChars or init_chars - bodyChars = bodyChars or body_chars - asKeyword = asKeyword or as_keyword - excludeChars = excludeChars or exclude_chars - super().__init__() - if not initChars: - raise ValueError( - "invalid {}, initChars cannot be empty string".format( - type(self).__name__ - ) - ) - - initChars = set(initChars) - self.initChars = initChars - if excludeChars: - excludeChars = set(excludeChars) - initChars -= excludeChars - if bodyChars: - bodyChars = set(bodyChars) - excludeChars - self.initCharsOrig = "".join(sorted(initChars)) - - if bodyChars: - self.bodyCharsOrig = "".join(sorted(bodyChars)) - self.bodyChars = set(bodyChars) - else: - self.bodyCharsOrig = "".join(sorted(initChars)) - self.bodyChars = set(initChars) - - self.maxSpecified = max > 0 - - if min < 1: - raise ValueError( - "cannot specify a minimum length < 1; use Opt(Word()) if zero-length word is permitted" - ) - - self.minLen = min - - if max > 0: - self.maxLen = max - else: - self.maxLen = _MAX_INT - - if exact > 0: - self.maxLen = exact - self.minLen = exact - - self.errmsg = "Expected " + self.name - self.mayIndexError = False - self.asKeyword = asKeyword - - # see if we can make a regex for this Word - if " " not in self.initChars | self.bodyChars and (min == 1 and exact == 0): - if self.bodyChars == self.initChars: - if max == 0: - repeat = "+" - elif max == 1: - repeat = "" - else: - repeat = "{{{},{}}}".format( - self.minLen, "" if self.maxLen == _MAX_INT else self.maxLen - ) - self.reString = "[{}]{}".format( - _collapse_string_to_ranges(self.initChars), - repeat, - ) - elif len(self.initChars) == 1: - if max == 0: - repeat = "*" - else: - repeat = "{{0,{}}}".format(max - 1) - self.reString = "{}[{}]{}".format( - re.escape(self.initCharsOrig), - _collapse_string_to_ranges(self.bodyChars), - repeat, - ) - else: - if max == 0: - repeat = "*" - elif max == 2: - repeat = "" - else: - repeat = "{{0,{}}}".format(max - 1) - self.reString = "[{}][{}]{}".format( - _collapse_string_to_ranges(self.initChars), - _collapse_string_to_ranges(self.bodyChars), - repeat, - ) - if self.asKeyword: - self.reString = r"\b" + self.reString + r"\b" - - try: - self.re = re.compile(self.reString) - except re.error: - self.re = None - else: - self.re_match = self.re.match - self.__class__ = _WordRegex - - def _generateDefaultName(self): - def charsAsStr(s): - max_repr_len = 16 - s = _collapse_string_to_ranges(s, re_escape=False) - if len(s) > max_repr_len: - return s[: max_repr_len - 3] + "..." 
- else: - return s - - if self.initChars != self.bodyChars: - base = "W:({}, {})".format( - charsAsStr(self.initChars), charsAsStr(self.bodyChars) - ) - else: - base = "W:({})".format(charsAsStr(self.initChars)) - - # add length specification - if self.minLen > 1 or self.maxLen != _MAX_INT: - if self.minLen == self.maxLen: - if self.minLen == 1: - return base[2:] - else: - return base + "{{{}}}".format(self.minLen) - elif self.maxLen == _MAX_INT: - return base + "{{{},...}}".format(self.minLen) - else: - return base + "{{{},{}}}".format(self.minLen, self.maxLen) - return base - - def parseImpl(self, instring, loc, doActions=True): - if instring[loc] not in self.initChars: - raise ParseException(instring, loc, self.errmsg, self) - - start = loc - loc += 1 - instrlen = len(instring) - bodychars = self.bodyChars - maxloc = start + self.maxLen - maxloc = min(maxloc, instrlen) - while loc < maxloc and instring[loc] in bodychars: - loc += 1 - - throwException = False - if loc - start < self.minLen: - throwException = True - elif self.maxSpecified and loc < instrlen and instring[loc] in bodychars: - throwException = True - elif self.asKeyword: - if ( - start > 0 - and instring[start - 1] in bodychars - or loc < instrlen - and instring[loc] in bodychars - ): - throwException = True - - if throwException: - raise ParseException(instring, loc, self.errmsg, self) - - return loc, instring[start:loc] - - -class _WordRegex(Word): - def parseImpl(self, instring, loc, doActions=True): - result = self.re_match(instring, loc) - if not result: - raise ParseException(instring, loc, self.errmsg, self) - - loc = result.end() - return loc, result.group() - - -class Char(_WordRegex): - """A short-cut class for defining :class:`Word` ``(characters, exact=1)``, - when defining a match of any single character in a string of - characters. - """ - - def __init__( - self, - charset: str, - as_keyword: bool = False, - exclude_chars: typing.Optional[str] = None, - *, - asKeyword: bool = False, - excludeChars: typing.Optional[str] = None, - ): - asKeyword = asKeyword or as_keyword - excludeChars = excludeChars or exclude_chars - super().__init__( - charset, exact=1, asKeyword=asKeyword, excludeChars=excludeChars - ) - self.reString = "[{}]".format(_collapse_string_to_ranges(self.initChars)) - if asKeyword: - self.reString = r"\b{}\b".format(self.reString) - self.re = re.compile(self.reString) - self.re_match = self.re.match - - -class Regex(Token): - r"""Token for matching strings that match a given regular - expression. Defined with string specifying the regular expression in - a form recognized by the stdlib Python `re module `_. - If the given regex contains named groups (defined using ``(?P...)``), - these will be preserved as named :class:`ParseResults`. - - If instead of the Python stdlib ``re`` module you wish to use a different RE module - (such as the ``regex`` module), you can do so by building your ``Regex`` object with - a compiled RE that was compiled using ``regex``. 
- - Example:: - - realnum = Regex(r"[+-]?\d+\.\d*") - # ref: https://stackoverflow.com/questions/267399/how-do-you-match-only-valid-roman-numerals-with-a-regular-expression - roman = Regex(r"M{0,4}(CM|CD|D?{0,3})(XC|XL|L?X{0,3})(IX|IV|V?I{0,3})") - - # named fields in a regex will be returned as named results - date = Regex(r'(?P\d{4})-(?P\d\d?)-(?P\d\d?)') - - # the Regex class will accept re's compiled using the regex module - import regex - parser = pp.Regex(regex.compile(r'[0-9]')) - """ - - def __init__( - self, - pattern: Any, - flags: Union[re.RegexFlag, int] = 0, - as_group_list: bool = False, - as_match: bool = False, - *, - asGroupList: bool = False, - asMatch: bool = False, - ): - """The parameters ``pattern`` and ``flags`` are passed - to the ``re.compile()`` function as-is. See the Python - `re module `_ module for an - explanation of the acceptable patterns and flags. - """ - super().__init__() - asGroupList = asGroupList or as_group_list - asMatch = asMatch or as_match - - if isinstance(pattern, str_type): - if not pattern: - raise ValueError("null string passed to Regex; use Empty() instead") - - self._re = None - self.reString = self.pattern = pattern - self.flags = flags - - elif hasattr(pattern, "pattern") and hasattr(pattern, "match"): - self._re = pattern - self.pattern = self.reString = pattern.pattern - self.flags = flags - - else: - raise TypeError( - "Regex may only be constructed with a string or a compiled RE object" - ) - - self.errmsg = "Expected " + self.name - self.mayIndexError = False - self.asGroupList = asGroupList - self.asMatch = asMatch - if self.asGroupList: - self.parseImpl = self.parseImplAsGroupList - if self.asMatch: - self.parseImpl = self.parseImplAsMatch - - @cached_property - def re(self): - if self._re: - return self._re - else: - try: - return re.compile(self.pattern, self.flags) - except re.error: - raise ValueError( - "invalid pattern ({!r}) passed to Regex".format(self.pattern) - ) - - @cached_property - def re_match(self): - return self.re.match - - @cached_property - def mayReturnEmpty(self): - return self.re_match("") is not None - - def _generateDefaultName(self): - return "Re:({})".format(repr(self.pattern).replace("\\\\", "\\")) - - def parseImpl(self, instring, loc, doActions=True): - result = self.re_match(instring, loc) - if not result: - raise ParseException(instring, loc, self.errmsg, self) - - loc = result.end() - ret = ParseResults(result.group()) - d = result.groupdict() - if d: - for k, v in d.items(): - ret[k] = v - return loc, ret - - def parseImplAsGroupList(self, instring, loc, doActions=True): - result = self.re_match(instring, loc) - if not result: - raise ParseException(instring, loc, self.errmsg, self) - - loc = result.end() - ret = result.groups() - return loc, ret - - def parseImplAsMatch(self, instring, loc, doActions=True): - result = self.re_match(instring, loc) - if not result: - raise ParseException(instring, loc, self.errmsg, self) - - loc = result.end() - ret = result - return loc, ret - - def sub(self, repl: str) -> ParserElement: - r""" - Return :class:`Regex` with an attached parse action to transform the parsed - result as if called using `re.sub(expr, repl, string) `_. - - Example:: - - make_html = Regex(r"(\w+):(.*?):").sub(r"<\1>\2") - print(make_html.transform_string("h1:main title:")) - # prints "

<h1>main title</h1>
" - """ - if self.asGroupList: - raise TypeError("cannot use sub() with Regex(asGroupList=True)") - - if self.asMatch and callable(repl): - raise TypeError("cannot use sub() with a callable with Regex(asMatch=True)") - - if self.asMatch: - - def pa(tokens): - return tokens[0].expand(repl) - - else: - - def pa(tokens): - return self.re.sub(repl, tokens[0]) - - return self.add_parse_action(pa) - - -class QuotedString(Token): - r""" - Token for matching strings that are delimited by quoting characters. - - Defined with the following parameters: - - - ``quote_char`` - string of one or more characters defining the - quote delimiting string - - ``esc_char`` - character to re_escape quotes, typically backslash - (default= ``None``) - - ``esc_quote`` - special quote sequence to re_escape an embedded quote - string (such as SQL's ``""`` to re_escape an embedded ``"``) - (default= ``None``) - - ``multiline`` - boolean indicating whether quotes can span - multiple lines (default= ``False``) - - ``unquote_results`` - boolean indicating whether the matched text - should be unquoted (default= ``True``) - - ``end_quote_char`` - string of one or more characters defining the - end of the quote delimited string (default= ``None`` => same as - quote_char) - - ``convert_whitespace_escapes`` - convert escaped whitespace - (``'\t'``, ``'\n'``, etc.) to actual whitespace - (default= ``True``) - - Example:: - - qs = QuotedString('"') - print(qs.search_string('lsjdf "This is the quote" sldjf')) - complex_qs = QuotedString('{{', end_quote_char='}}') - print(complex_qs.search_string('lsjdf {{This is the "quote"}} sldjf')) - sql_qs = QuotedString('"', esc_quote='""') - print(sql_qs.search_string('lsjdf "This is the quote with ""embedded"" quotes" sldjf')) - - prints:: - - [['This is the quote']] - [['This is the "quote"']] - [['This is the quote with "embedded" quotes']] - """ - ws_map = ((r"\t", "\t"), (r"\n", "\n"), (r"\f", "\f"), (r"\r", "\r")) - - def __init__( - self, - quote_char: str = "", - esc_char: typing.Optional[str] = None, - esc_quote: typing.Optional[str] = None, - multiline: bool = False, - unquote_results: bool = True, - end_quote_char: typing.Optional[str] = None, - convert_whitespace_escapes: bool = True, - *, - quoteChar: str = "", - escChar: typing.Optional[str] = None, - escQuote: typing.Optional[str] = None, - unquoteResults: bool = True, - endQuoteChar: typing.Optional[str] = None, - convertWhitespaceEscapes: bool = True, - ): - super().__init__() - escChar = escChar or esc_char - escQuote = escQuote or esc_quote - unquoteResults = unquoteResults and unquote_results - endQuoteChar = endQuoteChar or end_quote_char - convertWhitespaceEscapes = ( - convertWhitespaceEscapes and convert_whitespace_escapes - ) - quote_char = quoteChar or quote_char - - # remove white space from quote chars - wont work anyway - quote_char = quote_char.strip() - if not quote_char: - raise ValueError("quote_char cannot be the empty string") - - if endQuoteChar is None: - endQuoteChar = quote_char - else: - endQuoteChar = endQuoteChar.strip() - if not endQuoteChar: - raise ValueError("endQuoteChar cannot be the empty string") - - self.quoteChar = quote_char - self.quoteCharLen = len(quote_char) - self.firstQuoteChar = quote_char[0] - self.endQuoteChar = endQuoteChar - self.endQuoteCharLen = len(endQuoteChar) - self.escChar = escChar - self.escQuote = escQuote - self.unquoteResults = unquoteResults - self.convertWhitespaceEscapes = convertWhitespaceEscapes - - sep = "" - inner_pattern = "" - - if escQuote: - 
inner_pattern += r"{}(?:{})".format(sep, re.escape(escQuote)) - sep = "|" - - if escChar: - inner_pattern += r"{}(?:{}.)".format(sep, re.escape(escChar)) - sep = "|" - self.escCharReplacePattern = re.escape(self.escChar) + "(.)" - - if len(self.endQuoteChar) > 1: - inner_pattern += ( - "{}(?:".format(sep) - + "|".join( - "(?:{}(?!{}))".format( - re.escape(self.endQuoteChar[:i]), - re.escape(self.endQuoteChar[i:]), - ) - for i in range(len(self.endQuoteChar) - 1, 0, -1) - ) - + ")" - ) - sep = "|" - - if multiline: - self.flags = re.MULTILINE | re.DOTALL - inner_pattern += r"{}(?:[^{}{}])".format( - sep, - _escape_regex_range_chars(self.endQuoteChar[0]), - (_escape_regex_range_chars(escChar) if escChar is not None else ""), - ) - else: - self.flags = 0 - inner_pattern += r"{}(?:[^{}\n\r{}])".format( - sep, - _escape_regex_range_chars(self.endQuoteChar[0]), - (_escape_regex_range_chars(escChar) if escChar is not None else ""), - ) - - self.pattern = "".join( - [ - re.escape(self.quoteChar), - "(?:", - inner_pattern, - ")*", - re.escape(self.endQuoteChar), - ] - ) - - try: - self.re = re.compile(self.pattern, self.flags) - self.reString = self.pattern - self.re_match = self.re.match - except re.error: - raise ValueError( - "invalid pattern {!r} passed to Regex".format(self.pattern) - ) - - self.errmsg = "Expected " + self.name - self.mayIndexError = False - self.mayReturnEmpty = True - - def _generateDefaultName(self): - if self.quoteChar == self.endQuoteChar and isinstance(self.quoteChar, str_type): - return "string enclosed in {!r}".format(self.quoteChar) - - return "quoted string, starting with {} ending with {}".format( - self.quoteChar, self.endQuoteChar - ) - - def parseImpl(self, instring, loc, doActions=True): - result = ( - instring[loc] == self.firstQuoteChar - and self.re_match(instring, loc) - or None - ) - if not result: - raise ParseException(instring, loc, self.errmsg, self) - - loc = result.end() - ret = result.group() - - if self.unquoteResults: - - # strip off quotes - ret = ret[self.quoteCharLen : -self.endQuoteCharLen] - - if isinstance(ret, str_type): - # replace escaped whitespace - if "\\" in ret and self.convertWhitespaceEscapes: - for wslit, wschar in self.ws_map: - ret = ret.replace(wslit, wschar) - - # replace escaped characters - if self.escChar: - ret = re.sub(self.escCharReplacePattern, r"\g<1>", ret) - - # replace escaped quotes - if self.escQuote: - ret = ret.replace(self.escQuote, self.endQuoteChar) - - return loc, ret - - -class CharsNotIn(Token): - """Token for matching words composed of characters *not* in a given - set (will include whitespace in matched characters if not listed in - the provided exclusion set - see example). Defined with string - containing all disallowed characters, and an optional minimum, - maximum, and/or exact length. The default value for ``min`` is - 1 (a minimum value < 1 is not valid); the default values for - ``max`` and ``exact`` are 0, meaning no maximum or exact - length restriction. 
- - Example:: - - # define a comma-separated-value as anything that is not a ',' - csv_value = CharsNotIn(',') - print(delimited_list(csv_value).parse_string("dkls,lsdkjf,s12 34,@!#,213")) - - prints:: - - ['dkls', 'lsdkjf', 's12 34', '@!#', '213'] - """ - - def __init__( - self, - not_chars: str = "", - min: int = 1, - max: int = 0, - exact: int = 0, - *, - notChars: str = "", - ): - super().__init__() - self.skipWhitespace = False - self.notChars = not_chars or notChars - self.notCharsSet = set(self.notChars) - - if min < 1: - raise ValueError( - "cannot specify a minimum length < 1; use " - "Opt(CharsNotIn()) if zero-length char group is permitted" - ) - - self.minLen = min - - if max > 0: - self.maxLen = max - else: - self.maxLen = _MAX_INT - - if exact > 0: - self.maxLen = exact - self.minLen = exact - - self.errmsg = "Expected " + self.name - self.mayReturnEmpty = self.minLen == 0 - self.mayIndexError = False - - def _generateDefaultName(self): - not_chars_str = _collapse_string_to_ranges(self.notChars) - if len(not_chars_str) > 16: - return "!W:({}...)".format(self.notChars[: 16 - 3]) - else: - return "!W:({})".format(self.notChars) - - def parseImpl(self, instring, loc, doActions=True): - notchars = self.notCharsSet - if instring[loc] in notchars: - raise ParseException(instring, loc, self.errmsg, self) - - start = loc - loc += 1 - maxlen = min(start + self.maxLen, len(instring)) - while loc < maxlen and instring[loc] not in notchars: - loc += 1 - - if loc - start < self.minLen: - raise ParseException(instring, loc, self.errmsg, self) - - return loc, instring[start:loc] - - -class White(Token): - """Special matching class for matching whitespace. Normally, - whitespace is ignored by pyparsing grammars. This class is included - when some whitespace structures are significant. Define with - a string containing the whitespace characters to be matched; default - is ``" \\t\\r\\n"``. Also takes optional ``min``, - ``max``, and ``exact`` arguments, as defined for the - :class:`Word` class. 
- """ - - whiteStrs = { - " ": "", - "\t": "", - "\n": "", - "\r": "", - "\f": "", - "\u00A0": "", - "\u1680": "", - "\u180E": "", - "\u2000": "", - "\u2001": "", - "\u2002": "", - "\u2003": "", - "\u2004": "", - "\u2005": "", - "\u2006": "", - "\u2007": "", - "\u2008": "", - "\u2009": "", - "\u200A": "", - "\u200B": "", - "\u202F": "", - "\u205F": "", - "\u3000": "", - } - - def __init__(self, ws: str = " \t\r\n", min: int = 1, max: int = 0, exact: int = 0): - super().__init__() - self.matchWhite = ws - self.set_whitespace_chars( - "".join(c for c in self.whiteStrs if c not in self.matchWhite), - copy_defaults=True, - ) - # self.leave_whitespace() - self.mayReturnEmpty = True - self.errmsg = "Expected " + self.name - - self.minLen = min - - if max > 0: - self.maxLen = max - else: - self.maxLen = _MAX_INT - - if exact > 0: - self.maxLen = exact - self.minLen = exact - - def _generateDefaultName(self): - return "".join(White.whiteStrs[c] for c in self.matchWhite) - - def parseImpl(self, instring, loc, doActions=True): - if instring[loc] not in self.matchWhite: - raise ParseException(instring, loc, self.errmsg, self) - start = loc - loc += 1 - maxloc = start + self.maxLen - maxloc = min(maxloc, len(instring)) - while loc < maxloc and instring[loc] in self.matchWhite: - loc += 1 - - if loc - start < self.minLen: - raise ParseException(instring, loc, self.errmsg, self) - - return loc, instring[start:loc] - - -class PositionToken(Token): - def __init__(self): - super().__init__() - self.mayReturnEmpty = True - self.mayIndexError = False - - -class GoToColumn(PositionToken): - """Token to advance to a specific column of input text; useful for - tabular report scraping. - """ - - def __init__(self, colno: int): - super().__init__() - self.col = colno - - def preParse(self, instring, loc): - if col(loc, instring) != self.col: - instrlen = len(instring) - if self.ignoreExprs: - loc = self._skipIgnorables(instring, loc) - while ( - loc < instrlen - and instring[loc].isspace() - and col(loc, instring) != self.col - ): - loc += 1 - return loc - - def parseImpl(self, instring, loc, doActions=True): - thiscol = col(loc, instring) - if thiscol > self.col: - raise ParseException(instring, loc, "Text not in expected column", self) - newloc = loc + self.col - thiscol - ret = instring[loc:newloc] - return newloc, ret - - -class LineStart(PositionToken): - r"""Matches if current position is at the beginning of a line within - the parse string - - Example:: - - test = '''\ - AAA this line - AAA and this line - AAA but not this one - B AAA and definitely not this one - ''' - - for t in (LineStart() + 'AAA' + restOfLine).search_string(test): - print(t) - - prints:: - - ['AAA', ' this line'] - ['AAA', ' and this line'] - - """ - - def __init__(self): - super().__init__() - self.leave_whitespace() - self.orig_whiteChars = set() | self.whiteChars - self.whiteChars.discard("\n") - self.skipper = Empty().set_whitespace_chars(self.whiteChars) - self.errmsg = "Expected start of line" - - def preParse(self, instring, loc): - if loc == 0: - return loc - else: - ret = self.skipper.preParse(instring, loc) - if "\n" in self.orig_whiteChars: - while instring[ret : ret + 1] == "\n": - ret = self.skipper.preParse(instring, ret + 1) - return ret - - def parseImpl(self, instring, loc, doActions=True): - if col(loc, instring) == 1: - return loc, [] - raise ParseException(instring, loc, self.errmsg, self) - - -class LineEnd(PositionToken): - """Matches if current position is at the end of a line within the - parse string - """ - - 
def __init__(self): - super().__init__() - self.whiteChars.discard("\n") - self.set_whitespace_chars(self.whiteChars, copy_defaults=False) - self.errmsg = "Expected end of line" - - def parseImpl(self, instring, loc, doActions=True): - if loc < len(instring): - if instring[loc] == "\n": - return loc + 1, "\n" - else: - raise ParseException(instring, loc, self.errmsg, self) - elif loc == len(instring): - return loc + 1, [] - else: - raise ParseException(instring, loc, self.errmsg, self) - - -class StringStart(PositionToken): - """Matches if current position is at the beginning of the parse - string - """ - - def __init__(self): - super().__init__() - self.errmsg = "Expected start of text" - - def parseImpl(self, instring, loc, doActions=True): - if loc != 0: - # see if entire string up to here is just whitespace and ignoreables - if loc != self.preParse(instring, 0): - raise ParseException(instring, loc, self.errmsg, self) - return loc, [] - - -class StringEnd(PositionToken): - """ - Matches if current position is at the end of the parse string - """ - - def __init__(self): - super().__init__() - self.errmsg = "Expected end of text" - - def parseImpl(self, instring, loc, doActions=True): - if loc < len(instring): - raise ParseException(instring, loc, self.errmsg, self) - elif loc == len(instring): - return loc + 1, [] - elif loc > len(instring): - return loc, [] - else: - raise ParseException(instring, loc, self.errmsg, self) - - -class WordStart(PositionToken): - """Matches if the current position is at the beginning of a - :class:`Word`, and is not preceded by any character in a given - set of ``word_chars`` (default= ``printables``). To emulate the - ``\b`` behavior of regular expressions, use - ``WordStart(alphanums)``. ``WordStart`` will also match at - the beginning of the string being parsed, or at the beginning of - a line. - """ - - def __init__(self, word_chars: str = printables, *, wordChars: str = printables): - wordChars = word_chars if wordChars == printables else wordChars - super().__init__() - self.wordChars = set(wordChars) - self.errmsg = "Not at the start of a word" - - def parseImpl(self, instring, loc, doActions=True): - if loc != 0: - if ( - instring[loc - 1] in self.wordChars - or instring[loc] not in self.wordChars - ): - raise ParseException(instring, loc, self.errmsg, self) - return loc, [] - - -class WordEnd(PositionToken): - """Matches if the current position is at the end of a :class:`Word`, - and is not followed by any character in a given set of ``word_chars`` - (default= ``printables``). To emulate the ``\b`` behavior of - regular expressions, use ``WordEnd(alphanums)``. ``WordEnd`` - will also match at the end of the string being parsed, or at the end - of a line. - """ - - def __init__(self, word_chars: str = printables, *, wordChars: str = printables): - wordChars = word_chars if wordChars == printables else wordChars - super().__init__() - self.wordChars = set(wordChars) - self.skipWhitespace = False - self.errmsg = "Not at the end of a word" - - def parseImpl(self, instring, loc, doActions=True): - instrlen = len(instring) - if instrlen > 0 and loc < instrlen: - if ( - instring[loc] in self.wordChars - or instring[loc - 1] not in self.wordChars - ): - raise ParseException(instring, loc, self.errmsg, self) - return loc, [] - - -class ParseExpression(ParserElement): - """Abstract subclass of ParserElement, for combining and - post-processing parsed tokens. 
- """ - - def __init__(self, exprs: typing.Iterable[ParserElement], savelist: bool = False): - super().__init__(savelist) - self.exprs: List[ParserElement] - if isinstance(exprs, _generatorType): - exprs = list(exprs) - - if isinstance(exprs, str_type): - self.exprs = [self._literalStringClass(exprs)] - elif isinstance(exprs, ParserElement): - self.exprs = [exprs] - elif isinstance(exprs, Iterable): - exprs = list(exprs) - # if sequence of strings provided, wrap with Literal - if any(isinstance(expr, str_type) for expr in exprs): - exprs = ( - self._literalStringClass(e) if isinstance(e, str_type) else e - for e in exprs - ) - self.exprs = list(exprs) - else: - try: - self.exprs = list(exprs) - except TypeError: - self.exprs = [exprs] - self.callPreparse = False - - def recurse(self) -> Sequence[ParserElement]: - return self.exprs[:] - - def append(self, other) -> ParserElement: - self.exprs.append(other) - self._defaultName = None - return self - - def leave_whitespace(self, recursive: bool = True) -> ParserElement: - """ - Extends ``leave_whitespace`` defined in base class, and also invokes ``leave_whitespace`` on - all contained expressions. - """ - super().leave_whitespace(recursive) - - if recursive: - self.exprs = [e.copy() for e in self.exprs] - for e in self.exprs: - e.leave_whitespace(recursive) - return self - - def ignore_whitespace(self, recursive: bool = True) -> ParserElement: - """ - Extends ``ignore_whitespace`` defined in base class, and also invokes ``leave_whitespace`` on - all contained expressions. - """ - super().ignore_whitespace(recursive) - if recursive: - self.exprs = [e.copy() for e in self.exprs] - for e in self.exprs: - e.ignore_whitespace(recursive) - return self - - def ignore(self, other) -> ParserElement: - if isinstance(other, Suppress): - if other not in self.ignoreExprs: - super().ignore(other) - for e in self.exprs: - e.ignore(self.ignoreExprs[-1]) - else: - super().ignore(other) - for e in self.exprs: - e.ignore(self.ignoreExprs[-1]) - return self - - def _generateDefaultName(self): - return "{}:({})".format(self.__class__.__name__, str(self.exprs)) - - def streamline(self) -> ParserElement: - if self.streamlined: - return self - - super().streamline() - - for e in self.exprs: - e.streamline() - - # collapse nested :class:`And`'s of the form ``And(And(And(a, b), c), d)`` to ``And(a, b, c, d)`` - # but only if there are no parse actions or resultsNames on the nested And's - # (likewise for :class:`Or`'s and :class:`MatchFirst`'s) - if len(self.exprs) == 2: - other = self.exprs[0] - if ( - isinstance(other, self.__class__) - and not other.parseAction - and other.resultsName is None - and not other.debug - ): - self.exprs = other.exprs[:] + [self.exprs[1]] - self._defaultName = None - self.mayReturnEmpty |= other.mayReturnEmpty - self.mayIndexError |= other.mayIndexError - - other = self.exprs[-1] - if ( - isinstance(other, self.__class__) - and not other.parseAction - and other.resultsName is None - and not other.debug - ): - self.exprs = self.exprs[:-1] + other.exprs[:] - self._defaultName = None - self.mayReturnEmpty |= other.mayReturnEmpty - self.mayIndexError |= other.mayIndexError - - self.errmsg = "Expected " + str(self) - - return self - - def validate(self, validateTrace=None) -> None: - tmp = (validateTrace if validateTrace is not None else [])[:] + [self] - for e in self.exprs: - e.validate(tmp) - self._checkRecursion([]) - - def copy(self) -> ParserElement: - ret = super().copy() - ret.exprs = [e.copy() for e in self.exprs] - return ret - - 
def _setResultsName(self, name, listAllMatches=False): - if ( - __diag__.warn_ungrouped_named_tokens_in_collection - and Diagnostics.warn_ungrouped_named_tokens_in_collection - not in self.suppress_warnings_ - ): - for e in self.exprs: - if ( - isinstance(e, ParserElement) - and e.resultsName - and Diagnostics.warn_ungrouped_named_tokens_in_collection - not in e.suppress_warnings_ - ): - warnings.warn( - "{}: setting results name {!r} on {} expression " - "collides with {!r} on contained expression".format( - "warn_ungrouped_named_tokens_in_collection", - name, - type(self).__name__, - e.resultsName, - ), - stacklevel=3, - ) - - return super()._setResultsName(name, listAllMatches) - - ignoreWhitespace = ignore_whitespace - leaveWhitespace = leave_whitespace - - -class And(ParseExpression): - """ - Requires all given :class:`ParseExpression` s to be found in the given order. - Expressions may be separated by whitespace. - May be constructed using the ``'+'`` operator. - May also be constructed using the ``'-'`` operator, which will - suppress backtracking. - - Example:: - - integer = Word(nums) - name_expr = Word(alphas)[1, ...] - - expr = And([integer("id"), name_expr("name"), integer("age")]) - # more easily written as: - expr = integer("id") + name_expr("name") + integer("age") - """ - - class _ErrorStop(Empty): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - self.leave_whitespace() - - def _generateDefaultName(self): - return "-" - - def __init__( - self, exprs_arg: typing.Iterable[ParserElement], savelist: bool = True - ): - exprs: List[ParserElement] = list(exprs_arg) - if exprs and Ellipsis in exprs: - tmp = [] - for i, expr in enumerate(exprs): - if expr is Ellipsis: - if i < len(exprs) - 1: - skipto_arg: ParserElement = (Empty() + exprs[i + 1]).exprs[-1] - tmp.append(SkipTo(skipto_arg)("_skipped*")) - else: - raise Exception( - "cannot construct And with sequence ending in ..." 
- ) - else: - tmp.append(expr) - exprs[:] = tmp - super().__init__(exprs, savelist) - if self.exprs: - self.mayReturnEmpty = all(e.mayReturnEmpty for e in self.exprs) - if not isinstance(self.exprs[0], White): - self.set_whitespace_chars( - self.exprs[0].whiteChars, - copy_defaults=self.exprs[0].copyDefaultWhiteChars, - ) - self.skipWhitespace = self.exprs[0].skipWhitespace - else: - self.skipWhitespace = False - else: - self.mayReturnEmpty = True - self.callPreparse = True - - def streamline(self) -> ParserElement: - # collapse any _PendingSkip's - if self.exprs: - if any( - isinstance(e, ParseExpression) - and e.exprs - and isinstance(e.exprs[-1], _PendingSkip) - for e in self.exprs[:-1] - ): - for i, e in enumerate(self.exprs[:-1]): - if e is None: - continue - if ( - isinstance(e, ParseExpression) - and e.exprs - and isinstance(e.exprs[-1], _PendingSkip) - ): - e.exprs[-1] = e.exprs[-1] + self.exprs[i + 1] - self.exprs[i + 1] = None - self.exprs = [e for e in self.exprs if e is not None] - - super().streamline() - - # link any IndentedBlocks to the prior expression - for prev, cur in zip(self.exprs, self.exprs[1:]): - # traverse cur or any first embedded expr of cur looking for an IndentedBlock - # (but watch out for recursive grammar) - seen = set() - while cur: - if id(cur) in seen: - break - seen.add(id(cur)) - if isinstance(cur, IndentedBlock): - prev.add_parse_action( - lambda s, l, t, cur_=cur: setattr( - cur_, "parent_anchor", col(l, s) - ) - ) - break - subs = cur.recurse() - cur = next(iter(subs), None) - - self.mayReturnEmpty = all(e.mayReturnEmpty for e in self.exprs) - return self - - def parseImpl(self, instring, loc, doActions=True): - # pass False as callPreParse arg to _parse for first element, since we already - # pre-parsed the string as part of our And pre-parsing - loc, resultlist = self.exprs[0]._parse( - instring, loc, doActions, callPreParse=False - ) - errorStop = False - for e in self.exprs[1:]: - # if isinstance(e, And._ErrorStop): - if type(e) is And._ErrorStop: - errorStop = True - continue - if errorStop: - try: - loc, exprtokens = e._parse(instring, loc, doActions) - except ParseSyntaxException: - raise - except ParseBaseException as pe: - pe.__traceback__ = None - raise ParseSyntaxException._from_exception(pe) - except IndexError: - raise ParseSyntaxException( - instring, len(instring), self.errmsg, self - ) - else: - loc, exprtokens = e._parse(instring, loc, doActions) - if exprtokens or exprtokens.haskeys(): - resultlist += exprtokens - return loc, resultlist - - def __iadd__(self, other): - if isinstance(other, str_type): - other = self._literalStringClass(other) - return self.append(other) # And([self, other]) - - def _checkRecursion(self, parseElementList): - subRecCheckList = parseElementList[:] + [self] - for e in self.exprs: - e._checkRecursion(subRecCheckList) - if not e.mayReturnEmpty: - break - - def _generateDefaultName(self): - inner = " ".join(str(e) for e in self.exprs) - # strip off redundant inner {}'s - while len(inner) > 1 and inner[0 :: len(inner) - 1] == "{}": - inner = inner[1:-1] - return "{" + inner + "}" - - -class Or(ParseExpression): - """Requires that at least one :class:`ParseExpression` is found. If - two expressions match, the expression that matches the longest - string will be used. May be constructed using the ``'^'`` - operator. - - Example:: - - # construct Or using '^' operator - - number = Word(nums) ^ Combine(Word(nums) + '.' 
+ Word(nums)) - print(number.search_string("123 3.1416 789")) - - prints:: - - [['123'], ['3.1416'], ['789']] - """ - - def __init__(self, exprs: typing.Iterable[ParserElement], savelist: bool = False): - super().__init__(exprs, savelist) - if self.exprs: - self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs) - self.skipWhitespace = all(e.skipWhitespace for e in self.exprs) - else: - self.mayReturnEmpty = True - - def streamline(self) -> ParserElement: - super().streamline() - if self.exprs: - self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs) - self.saveAsList = any(e.saveAsList for e in self.exprs) - self.skipWhitespace = all( - e.skipWhitespace and not isinstance(e, White) for e in self.exprs - ) - else: - self.saveAsList = False - return self - - def parseImpl(self, instring, loc, doActions=True): - maxExcLoc = -1 - maxException = None - matches = [] - fatals = [] - if all(e.callPreparse for e in self.exprs): - loc = self.preParse(instring, loc) - for e in self.exprs: - try: - loc2 = e.try_parse(instring, loc, raise_fatal=True) - except ParseFatalException as pfe: - pfe.__traceback__ = None - pfe.parserElement = e - fatals.append(pfe) - maxException = None - maxExcLoc = -1 - except ParseException as err: - if not fatals: - err.__traceback__ = None - if err.loc > maxExcLoc: - maxException = err - maxExcLoc = err.loc - except IndexError: - if len(instring) > maxExcLoc: - maxException = ParseException( - instring, len(instring), e.errmsg, self - ) - maxExcLoc = len(instring) - else: - # save match among all matches, to retry longest to shortest - matches.append((loc2, e)) - - if matches: - # re-evaluate all matches in descending order of length of match, in case attached actions - # might change whether or how much they match of the input. 
- matches.sort(key=itemgetter(0), reverse=True) - - if not doActions: - # no further conditions or parse actions to change the selection of - # alternative, so the first match will be the best match - best_expr = matches[0][1] - return best_expr._parse(instring, loc, doActions) - - longest = -1, None - for loc1, expr1 in matches: - if loc1 <= longest[0]: - # already have a longer match than this one will deliver, we are done - return longest - - try: - loc2, toks = expr1._parse(instring, loc, doActions) - except ParseException as err: - err.__traceback__ = None - if err.loc > maxExcLoc: - maxException = err - maxExcLoc = err.loc - else: - if loc2 >= loc1: - return loc2, toks - # didn't match as much as before - elif loc2 > longest[0]: - longest = loc2, toks - - if longest != (-1, None): - return longest - - if fatals: - if len(fatals) > 1: - fatals.sort(key=lambda e: -e.loc) - if fatals[0].loc == fatals[1].loc: - fatals.sort(key=lambda e: (-e.loc, -len(str(e.parserElement)))) - max_fatal = fatals[0] - raise max_fatal - - if maxException is not None: - maxException.msg = self.errmsg - raise maxException - else: - raise ParseException( - instring, loc, "no defined alternatives to match", self - ) - - def __ixor__(self, other): - if isinstance(other, str_type): - other = self._literalStringClass(other) - return self.append(other) # Or([self, other]) - - def _generateDefaultName(self): - return "{" + " ^ ".join(str(e) for e in self.exprs) + "}" - - def _setResultsName(self, name, listAllMatches=False): - if ( - __diag__.warn_multiple_tokens_in_named_alternation - and Diagnostics.warn_multiple_tokens_in_named_alternation - not in self.suppress_warnings_ - ): - if any( - isinstance(e, And) - and Diagnostics.warn_multiple_tokens_in_named_alternation - not in e.suppress_warnings_ - for e in self.exprs - ): - warnings.warn( - "{}: setting results name {!r} on {} expression " - "will return a list of all parsed tokens in an And alternative, " - "in prior versions only the first token was returned; enclose " - "contained argument in Group".format( - "warn_multiple_tokens_in_named_alternation", - name, - type(self).__name__, - ), - stacklevel=3, - ) - - return super()._setResultsName(name, listAllMatches) - - -class MatchFirst(ParseExpression): - """Requires that at least one :class:`ParseExpression` is found. If - more than one expression matches, the first one listed is the one that will - match. May be constructed using the ``'|'`` operator. - - Example:: - - # construct MatchFirst using '|' operator - - # watch the order of expressions to match - number = Word(nums) | Combine(Word(nums) + '.' + Word(nums)) - print(number.search_string("123 3.1416 789")) # Fail! -> [['123'], ['3'], ['1416'], ['789']] - - # put more selective expression first - number = Combine(Word(nums) + '.' 
+ Word(nums)) | Word(nums) - print(number.search_string("123 3.1416 789")) # Better -> [['123'], ['3.1416'], ['789']] - """ - - def __init__(self, exprs: typing.Iterable[ParserElement], savelist: bool = False): - super().__init__(exprs, savelist) - if self.exprs: - self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs) - self.skipWhitespace = all(e.skipWhitespace for e in self.exprs) - else: - self.mayReturnEmpty = True - - def streamline(self) -> ParserElement: - if self.streamlined: - return self - - super().streamline() - if self.exprs: - self.saveAsList = any(e.saveAsList for e in self.exprs) - self.mayReturnEmpty = any(e.mayReturnEmpty for e in self.exprs) - self.skipWhitespace = all( - e.skipWhitespace and not isinstance(e, White) for e in self.exprs - ) - else: - self.saveAsList = False - self.mayReturnEmpty = True - return self - - def parseImpl(self, instring, loc, doActions=True): - maxExcLoc = -1 - maxException = None - - for e in self.exprs: - try: - return e._parse( - instring, - loc, - doActions, - ) - except ParseFatalException as pfe: - pfe.__traceback__ = None - pfe.parserElement = e - raise - except ParseException as err: - if err.loc > maxExcLoc: - maxException = err - maxExcLoc = err.loc - except IndexError: - if len(instring) > maxExcLoc: - maxException = ParseException( - instring, len(instring), e.errmsg, self - ) - maxExcLoc = len(instring) - - if maxException is not None: - maxException.msg = self.errmsg - raise maxException - else: - raise ParseException( - instring, loc, "no defined alternatives to match", self - ) - - def __ior__(self, other): - if isinstance(other, str_type): - other = self._literalStringClass(other) - return self.append(other) # MatchFirst([self, other]) - - def _generateDefaultName(self): - return "{" + " | ".join(str(e) for e in self.exprs) + "}" - - def _setResultsName(self, name, listAllMatches=False): - if ( - __diag__.warn_multiple_tokens_in_named_alternation - and Diagnostics.warn_multiple_tokens_in_named_alternation - not in self.suppress_warnings_ - ): - if any( - isinstance(e, And) - and Diagnostics.warn_multiple_tokens_in_named_alternation - not in e.suppress_warnings_ - for e in self.exprs - ): - warnings.warn( - "{}: setting results name {!r} on {} expression " - "will return a list of all parsed tokens in an And alternative, " - "in prior versions only the first token was returned; enclose " - "contained argument in Group".format( - "warn_multiple_tokens_in_named_alternation", - name, - type(self).__name__, - ), - stacklevel=3, - ) - - return super()._setResultsName(name, listAllMatches) - - -class Each(ParseExpression): - """Requires all given :class:`ParseExpression` s to be found, but in - any order. Expressions may be separated by whitespace. - - May be constructed using the ``'&'`` operator. 
- - Example:: - - color = one_of("RED ORANGE YELLOW GREEN BLUE PURPLE BLACK WHITE BROWN") - shape_type = one_of("SQUARE CIRCLE TRIANGLE STAR HEXAGON OCTAGON") - integer = Word(nums) - shape_attr = "shape:" + shape_type("shape") - posn_attr = "posn:" + Group(integer("x") + ',' + integer("y"))("posn") - color_attr = "color:" + color("color") - size_attr = "size:" + integer("size") - - # use Each (using operator '&') to accept attributes in any order - # (shape and posn are required, color and size are optional) - shape_spec = shape_attr & posn_attr & Opt(color_attr) & Opt(size_attr) - - shape_spec.run_tests(''' - shape: SQUARE color: BLACK posn: 100, 120 - shape: CIRCLE size: 50 color: BLUE posn: 50,80 - color:GREEN size:20 shape:TRIANGLE posn:20,40 - ''' - ) - - prints:: - - shape: SQUARE color: BLACK posn: 100, 120 - ['shape:', 'SQUARE', 'color:', 'BLACK', 'posn:', ['100', ',', '120']] - - color: BLACK - - posn: ['100', ',', '120'] - - x: 100 - - y: 120 - - shape: SQUARE - - - shape: CIRCLE size: 50 color: BLUE posn: 50,80 - ['shape:', 'CIRCLE', 'size:', '50', 'color:', 'BLUE', 'posn:', ['50', ',', '80']] - - color: BLUE - - posn: ['50', ',', '80'] - - x: 50 - - y: 80 - - shape: CIRCLE - - size: 50 - - - color: GREEN size: 20 shape: TRIANGLE posn: 20,40 - ['color:', 'GREEN', 'size:', '20', 'shape:', 'TRIANGLE', 'posn:', ['20', ',', '40']] - - color: GREEN - - posn: ['20', ',', '40'] - - x: 20 - - y: 40 - - shape: TRIANGLE - - size: 20 - """ - - def __init__(self, exprs: typing.Iterable[ParserElement], savelist: bool = True): - super().__init__(exprs, savelist) - if self.exprs: - self.mayReturnEmpty = all(e.mayReturnEmpty for e in self.exprs) - else: - self.mayReturnEmpty = True - self.skipWhitespace = True - self.initExprGroups = True - self.saveAsList = True - - def streamline(self) -> ParserElement: - super().streamline() - if self.exprs: - self.mayReturnEmpty = all(e.mayReturnEmpty for e in self.exprs) - else: - self.mayReturnEmpty = True - return self - - def parseImpl(self, instring, loc, doActions=True): - if self.initExprGroups: - self.opt1map = dict( - (id(e.expr), e) for e in self.exprs if isinstance(e, Opt) - ) - opt1 = [e.expr for e in self.exprs if isinstance(e, Opt)] - opt2 = [ - e - for e in self.exprs - if e.mayReturnEmpty and not isinstance(e, (Opt, Regex, ZeroOrMore)) - ] - self.optionals = opt1 + opt2 - self.multioptionals = [ - e.expr.set_results_name(e.resultsName, list_all_matches=True) - for e in self.exprs - if isinstance(e, _MultipleMatch) - ] - self.multirequired = [ - e.expr.set_results_name(e.resultsName, list_all_matches=True) - for e in self.exprs - if isinstance(e, OneOrMore) - ] - self.required = [ - e for e in self.exprs if not isinstance(e, (Opt, ZeroOrMore, OneOrMore)) - ] - self.required += self.multirequired - self.initExprGroups = False - - tmpLoc = loc - tmpReqd = self.required[:] - tmpOpt = self.optionals[:] - multis = self.multioptionals[:] - matchOrder = [] - - keepMatching = True - failed = [] - fatals = [] - while keepMatching: - tmpExprs = tmpReqd + tmpOpt + multis - failed.clear() - fatals.clear() - for e in tmpExprs: - try: - tmpLoc = e.try_parse(instring, tmpLoc, raise_fatal=True) - except ParseFatalException as pfe: - pfe.__traceback__ = None - pfe.parserElement = e - fatals.append(pfe) - failed.append(e) - except ParseException: - failed.append(e) - else: - matchOrder.append(self.opt1map.get(id(e), e)) - if e in tmpReqd: - tmpReqd.remove(e) - elif e in tmpOpt: - tmpOpt.remove(e) - if len(failed) == len(tmpExprs): - keepMatching = False - - # 
look for any ParseFatalExceptions - if fatals: - if len(fatals) > 1: - fatals.sort(key=lambda e: -e.loc) - if fatals[0].loc == fatals[1].loc: - fatals.sort(key=lambda e: (-e.loc, -len(str(e.parserElement)))) - max_fatal = fatals[0] - raise max_fatal - - if tmpReqd: - missing = ", ".join([str(e) for e in tmpReqd]) - raise ParseException( - instring, - loc, - "Missing one or more required elements ({})".format(missing), - ) - - # add any unmatched Opts, in case they have default values defined - matchOrder += [e for e in self.exprs if isinstance(e, Opt) and e.expr in tmpOpt] - - total_results = ParseResults([]) - for e in matchOrder: - loc, results = e._parse(instring, loc, doActions) - total_results += results - - return loc, total_results - - def _generateDefaultName(self): - return "{" + " & ".join(str(e) for e in self.exprs) + "}" - - -class ParseElementEnhance(ParserElement): - """Abstract subclass of :class:`ParserElement`, for combining and - post-processing parsed tokens. - """ - - def __init__(self, expr: Union[ParserElement, str], savelist: bool = False): - super().__init__(savelist) - if isinstance(expr, str_type): - if issubclass(self._literalStringClass, Token): - expr = self._literalStringClass(expr) - elif issubclass(type(self), self._literalStringClass): - expr = Literal(expr) - else: - expr = self._literalStringClass(Literal(expr)) - self.expr = expr - if expr is not None: - self.mayIndexError = expr.mayIndexError - self.mayReturnEmpty = expr.mayReturnEmpty - self.set_whitespace_chars( - expr.whiteChars, copy_defaults=expr.copyDefaultWhiteChars - ) - self.skipWhitespace = expr.skipWhitespace - self.saveAsList = expr.saveAsList - self.callPreparse = expr.callPreparse - self.ignoreExprs.extend(expr.ignoreExprs) - - def recurse(self) -> Sequence[ParserElement]: - return [self.expr] if self.expr is not None else [] - - def parseImpl(self, instring, loc, doActions=True): - if self.expr is not None: - return self.expr._parse(instring, loc, doActions, callPreParse=False) - else: - raise ParseException(instring, loc, "No expression defined", self) - - def leave_whitespace(self, recursive: bool = True) -> ParserElement: - super().leave_whitespace(recursive) - - if recursive: - self.expr = self.expr.copy() - if self.expr is not None: - self.expr.leave_whitespace(recursive) - return self - - def ignore_whitespace(self, recursive: bool = True) -> ParserElement: - super().ignore_whitespace(recursive) - - if recursive: - self.expr = self.expr.copy() - if self.expr is not None: - self.expr.ignore_whitespace(recursive) - return self - - def ignore(self, other) -> ParserElement: - if isinstance(other, Suppress): - if other not in self.ignoreExprs: - super().ignore(other) - if self.expr is not None: - self.expr.ignore(self.ignoreExprs[-1]) - else: - super().ignore(other) - if self.expr is not None: - self.expr.ignore(self.ignoreExprs[-1]) - return self - - def streamline(self) -> ParserElement: - super().streamline() - if self.expr is not None: - self.expr.streamline() - return self - - def _checkRecursion(self, parseElementList): - if self in parseElementList: - raise RecursiveGrammarException(parseElementList + [self]) - subRecCheckList = parseElementList[:] + [self] - if self.expr is not None: - self.expr._checkRecursion(subRecCheckList) - - def validate(self, validateTrace=None) -> None: - if validateTrace is None: - validateTrace = [] - tmp = validateTrace[:] + [self] - if self.expr is not None: - self.expr.validate(tmp) - self._checkRecursion([]) - - def _generateDefaultName(self): - 
return "{}:({})".format(self.__class__.__name__, str(self.expr)) - - ignoreWhitespace = ignore_whitespace - leaveWhitespace = leave_whitespace - - -class IndentedBlock(ParseElementEnhance): - """ - Expression to match one or more expressions at a given indentation level. - Useful for parsing text where structure is implied by indentation (like Python source code). - """ - - class _Indent(Empty): - def __init__(self, ref_col: int): - super().__init__() - self.errmsg = "expected indent at column {}".format(ref_col) - self.add_condition(lambda s, l, t: col(l, s) == ref_col) - - class _IndentGreater(Empty): - def __init__(self, ref_col: int): - super().__init__() - self.errmsg = "expected indent at column greater than {}".format(ref_col) - self.add_condition(lambda s, l, t: col(l, s) > ref_col) - - def __init__( - self, expr: ParserElement, *, recursive: bool = False, grouped: bool = True - ): - super().__init__(expr, savelist=True) - # if recursive: - # raise NotImplementedError("IndentedBlock with recursive is not implemented") - self._recursive = recursive - self._grouped = grouped - self.parent_anchor = 1 - - def parseImpl(self, instring, loc, doActions=True): - # advance parse position to non-whitespace by using an Empty() - # this should be the column to be used for all subsequent indented lines - anchor_loc = Empty().preParse(instring, loc) - - # see if self.expr matches at the current location - if not it will raise an exception - # and no further work is necessary - self.expr.try_parse(instring, anchor_loc, doActions) - - indent_col = col(anchor_loc, instring) - peer_detect_expr = self._Indent(indent_col) - - inner_expr = Empty() + peer_detect_expr + self.expr - if self._recursive: - sub_indent = self._IndentGreater(indent_col) - nested_block = IndentedBlock( - self.expr, recursive=self._recursive, grouped=self._grouped - ) - nested_block.set_debug(self.debug) - nested_block.parent_anchor = indent_col - inner_expr += Opt(sub_indent + nested_block) - - inner_expr.set_name(f"inner {hex(id(inner_expr))[-4:].upper()}@{indent_col}") - block = OneOrMore(inner_expr) - - trailing_undent = self._Indent(self.parent_anchor) | StringEnd() - - if self._grouped: - wrapper = Group - else: - wrapper = lambda expr: expr - return (wrapper(block) + Optional(trailing_undent)).parseImpl( - instring, anchor_loc, doActions - ) - - -class AtStringStart(ParseElementEnhance): - """Matches if expression matches at the beginning of the parse - string:: - - AtStringStart(Word(nums)).parse_string("123") - # prints ["123"] - - AtStringStart(Word(nums)).parse_string(" 123") - # raises ParseException - """ - - def __init__(self, expr: Union[ParserElement, str]): - super().__init__(expr) - self.callPreparse = False - - def parseImpl(self, instring, loc, doActions=True): - if loc != 0: - raise ParseException(instring, loc, "not found at string start") - return super().parseImpl(instring, loc, doActions) - - -class AtLineStart(ParseElementEnhance): - r"""Matches if an expression matches at the beginning of a line within - the parse string - - Example:: - - test = '''\ - AAA this line - AAA and this line - AAA but not this one - B AAA and definitely not this one - ''' - - for t in (AtLineStart('AAA') + restOfLine).search_string(test): - print(t) - - prints:: - - ['AAA', ' this line'] - ['AAA', ' and this line'] - - """ - - def __init__(self, expr: Union[ParserElement, str]): - super().__init__(expr) - self.callPreparse = False - - def parseImpl(self, instring, loc, doActions=True): - if col(loc, instring) != 1: - raise 
ParseException(instring, loc, "not found at line start") - return super().parseImpl(instring, loc, doActions) - - -class FollowedBy(ParseElementEnhance): - """Lookahead matching of the given parse expression. - ``FollowedBy`` does *not* advance the parsing position within - the input string, it only verifies that the specified parse - expression matches at the current position. ``FollowedBy`` - always returns a null token list. If any results names are defined - in the lookahead expression, those *will* be returned for access by - name. - - Example:: - - # use FollowedBy to match a label only if it is followed by a ':' - data_word = Word(alphas) - label = data_word + FollowedBy(':') - attr_expr = Group(label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join)) - - attr_expr[1, ...].parse_string("shape: SQUARE color: BLACK posn: upper left").pprint() - - prints:: - - [['shape', 'SQUARE'], ['color', 'BLACK'], ['posn', 'upper left']] - """ - - def __init__(self, expr: Union[ParserElement, str]): - super().__init__(expr) - self.mayReturnEmpty = True - - def parseImpl(self, instring, loc, doActions=True): - # by using self._expr.parse and deleting the contents of the returned ParseResults list - # we keep any named results that were defined in the FollowedBy expression - _, ret = self.expr._parse(instring, loc, doActions=doActions) - del ret[:] - - return loc, ret - - -class PrecededBy(ParseElementEnhance): - """Lookbehind matching of the given parse expression. - ``PrecededBy`` does not advance the parsing position within the - input string, it only verifies that the specified parse expression - matches prior to the current position. ``PrecededBy`` always - returns a null token list, but if a results name is defined on the - given expression, it is returned. - - Parameters: - - - expr - expression that must match prior to the current parse - location - - retreat - (default= ``None``) - (int) maximum number of characters - to lookbehind prior to the current parse location - - If the lookbehind expression is a string, :class:`Literal`, - :class:`Keyword`, or a :class:`Word` or :class:`CharsNotIn` - with a specified exact or maximum length, then the retreat - parameter is not required. Otherwise, retreat must be specified to - give a maximum number of characters to look back from - the current parse position for a lookbehind match. 
- - Example:: - - # VB-style variable names with type prefixes - int_var = PrecededBy("#") + pyparsing_common.identifier - str_var = PrecededBy("$") + pyparsing_common.identifier - - """ - - def __init__( - self, expr: Union[ParserElement, str], retreat: typing.Optional[int] = None - ): - super().__init__(expr) - self.expr = self.expr().leave_whitespace() - self.mayReturnEmpty = True - self.mayIndexError = False - self.exact = False - if isinstance(expr, str_type): - retreat = len(expr) - self.exact = True - elif isinstance(expr, (Literal, Keyword)): - retreat = expr.matchLen - self.exact = True - elif isinstance(expr, (Word, CharsNotIn)) and expr.maxLen != _MAX_INT: - retreat = expr.maxLen - self.exact = True - elif isinstance(expr, PositionToken): - retreat = 0 - self.exact = True - self.retreat = retreat - self.errmsg = "not preceded by " + str(expr) - self.skipWhitespace = False - self.parseAction.append(lambda s, l, t: t.__delitem__(slice(None, None))) - - def parseImpl(self, instring, loc=0, doActions=True): - if self.exact: - if loc < self.retreat: - raise ParseException(instring, loc, self.errmsg) - start = loc - self.retreat - _, ret = self.expr._parse(instring, start) - else: - # retreat specified a maximum lookbehind window, iterate - test_expr = self.expr + StringEnd() - instring_slice = instring[max(0, loc - self.retreat) : loc] - last_expr = ParseException(instring, loc, self.errmsg) - for offset in range(1, min(loc, self.retreat + 1) + 1): - try: - # print('trying', offset, instring_slice, repr(instring_slice[loc - offset:])) - _, ret = test_expr._parse( - instring_slice, len(instring_slice) - offset - ) - except ParseBaseException as pbe: - last_expr = pbe - else: - break - else: - raise last_expr - return loc, ret - - -class Located(ParseElementEnhance): - """ - Decorates a returned token with its starting and ending - locations in the input string. - - This helper adds the following results names: - - - ``locn_start`` - location where matched expression begins - - ``locn_end`` - location where matched expression ends - - ``value`` - the actual parsed results - - Be careful if the input text contains ```` characters, you - may want to call :class:`ParserElement.parse_with_tabs` - - Example:: - - wd = Word(alphas) - for match in Located(wd).search_string("ljsdf123lksdjjf123lkkjj1222"): - print(match) - - prints:: - - [0, ['ljsdf'], 5] - [8, ['lksdjjf'], 15] - [18, ['lkkjj'], 23] - - """ - - def parseImpl(self, instring, loc, doActions=True): - start = loc - loc, tokens = self.expr._parse(instring, start, doActions, callPreParse=False) - ret_tokens = ParseResults([start, tokens, loc]) - ret_tokens["locn_start"] = start - ret_tokens["value"] = tokens - ret_tokens["locn_end"] = loc - if self.resultsName: - # must return as a list, so that the name will be attached to the complete group - return loc, [ret_tokens] - else: - return loc, ret_tokens - - -class NotAny(ParseElementEnhance): - """ - Lookahead to disallow matching with the given parse expression. - ``NotAny`` does *not* advance the parsing position within the - input string, it only verifies that the specified parse expression - does *not* match at the current position. Also, ``NotAny`` does - *not* skip over leading whitespace. ``NotAny`` always returns - a null token list. May be constructed using the ``'~'`` operator. 
- - Example:: - - AND, OR, NOT = map(CaselessKeyword, "AND OR NOT".split()) - - # take care not to mistake keywords for identifiers - ident = ~(AND | OR | NOT) + Word(alphas) - boolean_term = Opt(NOT) + ident - - # very crude boolean expression - to support parenthesis groups and - # operation hierarchy, use infix_notation - boolean_expr = boolean_term + ((AND | OR) + boolean_term)[...] - - # integers that are followed by "." are actually floats - integer = Word(nums) + ~Char(".") - """ - - def __init__(self, expr: Union[ParserElement, str]): - super().__init__(expr) - # do NOT use self.leave_whitespace(), don't want to propagate to exprs - # self.leave_whitespace() - self.skipWhitespace = False - - self.mayReturnEmpty = True - self.errmsg = "Found unwanted token, " + str(self.expr) - - def parseImpl(self, instring, loc, doActions=True): - if self.expr.can_parse_next(instring, loc): - raise ParseException(instring, loc, self.errmsg, self) - return loc, [] - - def _generateDefaultName(self): - return "~{" + str(self.expr) + "}" - - -class _MultipleMatch(ParseElementEnhance): - def __init__( - self, - expr: ParserElement, - stop_on: typing.Optional[Union[ParserElement, str]] = None, - *, - stopOn: typing.Optional[Union[ParserElement, str]] = None, - ): - super().__init__(expr) - stopOn = stopOn or stop_on - self.saveAsList = True - ender = stopOn - if isinstance(ender, str_type): - ender = self._literalStringClass(ender) - self.stopOn(ender) - - def stopOn(self, ender) -> ParserElement: - if isinstance(ender, str_type): - ender = self._literalStringClass(ender) - self.not_ender = ~ender if ender is not None else None - return self - - def parseImpl(self, instring, loc, doActions=True): - self_expr_parse = self.expr._parse - self_skip_ignorables = self._skipIgnorables - check_ender = self.not_ender is not None - if check_ender: - try_not_ender = self.not_ender.tryParse - - # must be at least one (but first see if we are the stopOn sentinel; - # if so, fail) - if check_ender: - try_not_ender(instring, loc) - loc, tokens = self_expr_parse(instring, loc, doActions) - try: - hasIgnoreExprs = not not self.ignoreExprs - while 1: - if check_ender: - try_not_ender(instring, loc) - if hasIgnoreExprs: - preloc = self_skip_ignorables(instring, loc) - else: - preloc = loc - loc, tmptokens = self_expr_parse(instring, preloc, doActions) - if tmptokens or tmptokens.haskeys(): - tokens += tmptokens - except (ParseException, IndexError): - pass - - return loc, tokens - - def _setResultsName(self, name, listAllMatches=False): - if ( - __diag__.warn_ungrouped_named_tokens_in_collection - and Diagnostics.warn_ungrouped_named_tokens_in_collection - not in self.suppress_warnings_ - ): - for e in [self.expr] + self.expr.recurse(): - if ( - isinstance(e, ParserElement) - and e.resultsName - and Diagnostics.warn_ungrouped_named_tokens_in_collection - not in e.suppress_warnings_ - ): - warnings.warn( - "{}: setting results name {!r} on {} expression " - "collides with {!r} on contained expression".format( - "warn_ungrouped_named_tokens_in_collection", - name, - type(self).__name__, - e.resultsName, - ), - stacklevel=3, - ) - - return super()._setResultsName(name, listAllMatches) - - -class OneOrMore(_MultipleMatch): - """ - Repetition of one or more of the given expression. 
- - Parameters: - - expr - expression that must match one or more times - - stop_on - (default= ``None``) - expression for a terminating sentinel - (only required if the sentinel would ordinarily match the repetition - expression) - - Example:: - - data_word = Word(alphas) - label = data_word + FollowedBy(':') - attr_expr = Group(label + Suppress(':') + OneOrMore(data_word).set_parse_action(' '.join)) - - text = "shape: SQUARE posn: upper left color: BLACK" - attr_expr[1, ...].parse_string(text).pprint() # Fail! read 'color' as data instead of next label -> [['shape', 'SQUARE color']] - - # use stop_on attribute for OneOrMore to avoid reading label string as part of the data - attr_expr = Group(label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join)) - OneOrMore(attr_expr).parse_string(text).pprint() # Better -> [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'BLACK']] - - # could also be written as - (attr_expr * (1,)).parse_string(text).pprint() - """ - - def _generateDefaultName(self): - return "{" + str(self.expr) + "}..." - - -class ZeroOrMore(_MultipleMatch): - """ - Optional repetition of zero or more of the given expression. - - Parameters: - - ``expr`` - expression that must match zero or more times - - ``stop_on`` - expression for a terminating sentinel - (only required if the sentinel would ordinarily match the repetition - expression) - (default= ``None``) - - Example: similar to :class:`OneOrMore` - """ - - def __init__( - self, - expr: ParserElement, - stop_on: typing.Optional[Union[ParserElement, str]] = None, - *, - stopOn: typing.Optional[Union[ParserElement, str]] = None, - ): - super().__init__(expr, stopOn=stopOn or stop_on) - self.mayReturnEmpty = True - - def parseImpl(self, instring, loc, doActions=True): - try: - return super().parseImpl(instring, loc, doActions) - except (ParseException, IndexError): - return loc, ParseResults([], name=self.resultsName) - - def _generateDefaultName(self): - return "[" + str(self.expr) + "]..." - - -class _NullToken: - def __bool__(self): - return False - - def __str__(self): - return "" - - -class Opt(ParseElementEnhance): - """ - Optional matching of the given expression. - - Parameters: - - ``expr`` - expression that must match zero or more times - - ``default`` (optional) - value to be returned if the optional expression is not found. 
- - Example:: - - # US postal code can be a 5-digit zip, plus optional 4-digit qualifier - zip = Combine(Word(nums, exact=5) + Opt('-' + Word(nums, exact=4))) - zip.run_tests(''' - # traditional ZIP code - 12345 - - # ZIP+4 form - 12101-0001 - - # invalid ZIP - 98765- - ''') - - prints:: - - # traditional ZIP code - 12345 - ['12345'] - - # ZIP+4 form - 12101-0001 - ['12101-0001'] - - # invalid ZIP - 98765- - ^ - FAIL: Expected end of text (at char 5), (line:1, col:6) - """ - - __optionalNotMatched = _NullToken() - - def __init__( - self, expr: Union[ParserElement, str], default: Any = __optionalNotMatched - ): - super().__init__(expr, savelist=False) - self.saveAsList = self.expr.saveAsList - self.defaultValue = default - self.mayReturnEmpty = True - - def parseImpl(self, instring, loc, doActions=True): - self_expr = self.expr - try: - loc, tokens = self_expr._parse(instring, loc, doActions, callPreParse=False) - except (ParseException, IndexError): - default_value = self.defaultValue - if default_value is not self.__optionalNotMatched: - if self_expr.resultsName: - tokens = ParseResults([default_value]) - tokens[self_expr.resultsName] = default_value - else: - tokens = [default_value] - else: - tokens = [] - return loc, tokens - - def _generateDefaultName(self): - inner = str(self.expr) - # strip off redundant inner {}'s - while len(inner) > 1 and inner[0 :: len(inner) - 1] == "{}": - inner = inner[1:-1] - return "[" + inner + "]" - - -Optional = Opt - - -class SkipTo(ParseElementEnhance): - """ - Token for skipping over all undefined text until the matched - expression is found. - - Parameters: - - ``expr`` - target expression marking the end of the data to be skipped - - ``include`` - if ``True``, the target expression is also parsed - (the skipped text and target expression are returned as a 2-element - list) (default= ``False``). 
- - ``ignore`` - (default= ``None``) used to define grammars (typically quoted strings and - comments) that might contain false matches to the target expression - - ``fail_on`` - (default= ``None``) define expressions that are not allowed to be - included in the skipped test; if found before the target expression is found, - the :class:`SkipTo` is not a match - - Example:: - - report = ''' - Outstanding Issues Report - 1 Jan 2000 - - # | Severity | Description | Days Open - -----+----------+-------------------------------------------+----------- - 101 | Critical | Intermittent system crash | 6 - 94 | Cosmetic | Spelling error on Login ('log|n') | 14 - 79 | Minor | System slow when running too many reports | 47 - ''' - integer = Word(nums) - SEP = Suppress('|') - # use SkipTo to simply match everything up until the next SEP - # - ignore quoted strings, so that a '|' character inside a quoted string does not match - # - parse action will call token.strip() for each matched token, i.e., the description body - string_data = SkipTo(SEP, ignore=quoted_string) - string_data.set_parse_action(token_map(str.strip)) - ticket_expr = (integer("issue_num") + SEP - + string_data("sev") + SEP - + string_data("desc") + SEP - + integer("days_open")) - - for tkt in ticket_expr.search_string(report): - print tkt.dump() - - prints:: - - ['101', 'Critical', 'Intermittent system crash', '6'] - - days_open: '6' - - desc: 'Intermittent system crash' - - issue_num: '101' - - sev: 'Critical' - ['94', 'Cosmetic', "Spelling error on Login ('log|n')", '14'] - - days_open: '14' - - desc: "Spelling error on Login ('log|n')" - - issue_num: '94' - - sev: 'Cosmetic' - ['79', 'Minor', 'System slow when running too many reports', '47'] - - days_open: '47' - - desc: 'System slow when running too many reports' - - issue_num: '79' - - sev: 'Minor' - """ - - def __init__( - self, - other: Union[ParserElement, str], - include: bool = False, - ignore: bool = None, - fail_on: typing.Optional[Union[ParserElement, str]] = None, - *, - failOn: Union[ParserElement, str] = None, - ): - super().__init__(other) - failOn = failOn or fail_on - self.ignoreExpr = ignore - self.mayReturnEmpty = True - self.mayIndexError = False - self.includeMatch = include - self.saveAsList = False - if isinstance(failOn, str_type): - self.failOn = self._literalStringClass(failOn) - else: - self.failOn = failOn - self.errmsg = "No match found for " + str(self.expr) - - def parseImpl(self, instring, loc, doActions=True): - startloc = loc - instrlen = len(instring) - self_expr_parse = self.expr._parse - self_failOn_canParseNext = ( - self.failOn.canParseNext if self.failOn is not None else None - ) - self_ignoreExpr_tryParse = ( - self.ignoreExpr.tryParse if self.ignoreExpr is not None else None - ) - - tmploc = loc - while tmploc <= instrlen: - if self_failOn_canParseNext is not None: - # break if failOn expression matches - if self_failOn_canParseNext(instring, tmploc): - break - - if self_ignoreExpr_tryParse is not None: - # advance past ignore expressions - while 1: - try: - tmploc = self_ignoreExpr_tryParse(instring, tmploc) - except ParseBaseException: - break - - try: - self_expr_parse(instring, tmploc, doActions=False, callPreParse=False) - except (ParseException, IndexError): - # no match, advance loc in string - tmploc += 1 - else: - # matched skipto expr, done - break - - else: - # ran off the end of the input string without matching skipto expr, fail - raise ParseException(instring, loc, self.errmsg, self) - - # build up return values - loc = tmploc 
- skiptext = instring[startloc:loc] - skipresult = ParseResults(skiptext) - - if self.includeMatch: - loc, mat = self_expr_parse(instring, loc, doActions, callPreParse=False) - skipresult += mat - - return loc, skipresult - - -class Forward(ParseElementEnhance): - """ - Forward declaration of an expression to be defined later - - used for recursive grammars, such as algebraic infix notation. - When the expression is known, it is assigned to the ``Forward`` - variable using the ``'<<'`` operator. - - Note: take care when assigning to ``Forward`` not to overlook - precedence of operators. - - Specifically, ``'|'`` has a lower precedence than ``'<<'``, so that:: - - fwd_expr << a | b | c - - will actually be evaluated as:: - - (fwd_expr << a) | b | c - - thereby leaving b and c out as parseable alternatives. It is recommended that you - explicitly group the values inserted into the ``Forward``:: - - fwd_expr << (a | b | c) - - Converting to use the ``'<<='`` operator instead will avoid this problem. - - See :class:`ParseResults.pprint` for an example of a recursive - parser created using ``Forward``. - """ - - def __init__(self, other: typing.Optional[Union[ParserElement, str]] = None): - self.caller_frame = traceback.extract_stack(limit=2)[0] - super().__init__(other, savelist=False) - self.lshift_line = None - - def __lshift__(self, other): - if hasattr(self, "caller_frame"): - del self.caller_frame - if isinstance(other, str_type): - other = self._literalStringClass(other) - self.expr = other - self.mayIndexError = self.expr.mayIndexError - self.mayReturnEmpty = self.expr.mayReturnEmpty - self.set_whitespace_chars( - self.expr.whiteChars, copy_defaults=self.expr.copyDefaultWhiteChars - ) - self.skipWhitespace = self.expr.skipWhitespace - self.saveAsList = self.expr.saveAsList - self.ignoreExprs.extend(self.expr.ignoreExprs) - self.lshift_line = traceback.extract_stack(limit=2)[-2] - return self - - def __ilshift__(self, other): - return self << other - - def __or__(self, other): - caller_line = traceback.extract_stack(limit=2)[-2] - if ( - __diag__.warn_on_match_first_with_lshift_operator - and caller_line == self.lshift_line - and Diagnostics.warn_on_match_first_with_lshift_operator - not in self.suppress_warnings_ - ): - warnings.warn( - "using '<<' operator with '|' is probably an error, use '<<='", - stacklevel=2, - ) - ret = super().__or__(other) - return ret - - def __del__(self): - # see if we are getting dropped because of '=' reassignment of var instead of '<<=' or '<<' - if ( - self.expr is None - and __diag__.warn_on_assignment_to_Forward - and Diagnostics.warn_on_assignment_to_Forward not in self.suppress_warnings_ - ): - warnings.warn_explicit( - "Forward defined here but no expression attached later using '<<=' or '<<'", - UserWarning, - filename=self.caller_frame.filename, - lineno=self.caller_frame.lineno, - ) - - def parseImpl(self, instring, loc, doActions=True): - if ( - self.expr is None - and __diag__.warn_on_parse_using_empty_Forward - and Diagnostics.warn_on_parse_using_empty_Forward - not in self.suppress_warnings_ - ): - # walk stack until parse_string, scan_string, search_string, or transform_string is found - parse_fns = [ - "parse_string", - "scan_string", - "search_string", - "transform_string", - ] - tb = traceback.extract_stack(limit=200) - for i, frm in enumerate(reversed(tb), start=1): - if frm.name in parse_fns: - stacklevel = i + 1 - break - else: - stacklevel = 2 - warnings.warn( - "Forward expression was never assigned a value, will not parse any 
input", - stacklevel=stacklevel, - ) - if not ParserElement._left_recursion_enabled: - return super().parseImpl(instring, loc, doActions) - # ## Bounded Recursion algorithm ## - # Recursion only needs to be processed at ``Forward`` elements, since they are - # the only ones that can actually refer to themselves. The general idea is - # to handle recursion stepwise: We start at no recursion, then recurse once, - # recurse twice, ..., until more recursion offers no benefit (we hit the bound). - # - # The "trick" here is that each ``Forward`` gets evaluated in two contexts - # - to *match* a specific recursion level, and - # - to *search* the bounded recursion level - # and the two run concurrently. The *search* must *match* each recursion level - # to find the best possible match. This is handled by a memo table, which - # provides the previous match to the next level match attempt. - # - # See also "Left Recursion in Parsing Expression Grammars", Medeiros et al. - # - # There is a complication since we not only *parse* but also *transform* via - # actions: We do not want to run the actions too often while expanding. Thus, - # we expand using `doActions=False` and only run `doActions=True` if the next - # recursion level is acceptable. - with ParserElement.recursion_lock: - memo = ParserElement.recursion_memos - try: - # we are parsing at a specific recursion expansion - use it as-is - prev_loc, prev_result = memo[loc, self, doActions] - if isinstance(prev_result, Exception): - raise prev_result - return prev_loc, prev_result.copy() - except KeyError: - act_key = (loc, self, True) - peek_key = (loc, self, False) - # we are searching for the best recursion expansion - keep on improving - # both `doActions` cases must be tracked separately here! - prev_loc, prev_peek = memo[peek_key] = ( - loc - 1, - ParseException( - instring, loc, "Forward recursion without base case", self - ), - ) - if doActions: - memo[act_key] = memo[peek_key] - while True: - try: - new_loc, new_peek = super().parseImpl(instring, loc, False) - except ParseException: - # we failed before getting any match – do not hide the error - if isinstance(prev_peek, Exception): - raise - new_loc, new_peek = prev_loc, prev_peek - # the match did not get better: we are done - if new_loc <= prev_loc: - if doActions: - # replace the match for doActions=False as well, - # in case the action did backtrack - prev_loc, prev_result = memo[peek_key] = memo[act_key] - del memo[peek_key], memo[act_key] - return prev_loc, prev_result.copy() - del memo[peek_key] - return prev_loc, prev_peek.copy() - # the match did get better: see if we can improve further - else: - if doActions: - try: - memo[act_key] = super().parseImpl(instring, loc, True) - except ParseException as e: - memo[peek_key] = memo[act_key] = (new_loc, e) - raise - prev_loc, prev_peek = memo[peek_key] = new_loc, new_peek - - def leave_whitespace(self, recursive: bool = True) -> ParserElement: - self.skipWhitespace = False - return self - - def ignore_whitespace(self, recursive: bool = True) -> ParserElement: - self.skipWhitespace = True - return self - - def streamline(self) -> ParserElement: - if not self.streamlined: - self.streamlined = True - if self.expr is not None: - self.expr.streamline() - return self - - def validate(self, validateTrace=None) -> None: - if validateTrace is None: - validateTrace = [] - - if self not in validateTrace: - tmp = validateTrace[:] + [self] - if self.expr is not None: - self.expr.validate(tmp) - self._checkRecursion([]) - - def 
_generateDefaultName(self): - # Avoid infinite recursion by setting a temporary _defaultName - self._defaultName = ": ..." - - # Use the string representation of main expression. - retString = "..." - try: - if self.expr is not None: - retString = str(self.expr)[:1000] - else: - retString = "None" - finally: - return self.__class__.__name__ + ": " + retString - - def copy(self) -> ParserElement: - if self.expr is not None: - return super().copy() - else: - ret = Forward() - ret <<= self - return ret - - def _setResultsName(self, name, list_all_matches=False): - if ( - __diag__.warn_name_set_on_empty_Forward - and Diagnostics.warn_name_set_on_empty_Forward - not in self.suppress_warnings_ - ): - if self.expr is None: - warnings.warn( - "{}: setting results name {!r} on {} expression " - "that has no contained expression".format( - "warn_name_set_on_empty_Forward", name, type(self).__name__ - ), - stacklevel=3, - ) - - return super()._setResultsName(name, list_all_matches) - - ignoreWhitespace = ignore_whitespace - leaveWhitespace = leave_whitespace - - -class TokenConverter(ParseElementEnhance): - """ - Abstract subclass of :class:`ParseExpression`, for converting parsed results. - """ - - def __init__(self, expr: Union[ParserElement, str], savelist=False): - super().__init__(expr) # , savelist) - self.saveAsList = False - - -class Combine(TokenConverter): - """Converter to concatenate all matching tokens to a single string. - By default, the matching patterns must also be contiguous in the - input string; this can be disabled by specifying - ``'adjacent=False'`` in the constructor. - - Example:: - - real = Word(nums) + '.' + Word(nums) - print(real.parse_string('3.1416')) # -> ['3', '.', '1416'] - # will also erroneously match the following - print(real.parse_string('3. 1416')) # -> ['3', '.', '1416'] - - real = Combine(Word(nums) + '.' + Word(nums)) - print(real.parse_string('3.1416')) # -> ['3.1416'] - # no match when there are internal spaces - print(real.parse_string('3. 1416')) # -> Exception: Expected W:(0123...) - """ - - def __init__( - self, - expr: ParserElement, - join_string: str = "", - adjacent: bool = True, - *, - joinString: typing.Optional[str] = None, - ): - super().__init__(expr) - joinString = joinString if joinString is not None else join_string - # suppress whitespace-stripping in contained parse expressions, but re-enable it on the Combine itself - if adjacent: - self.leave_whitespace() - self.adjacent = adjacent - self.skipWhitespace = True - self.joinString = joinString - self.callPreparse = True - - def ignore(self, other) -> ParserElement: - if self.adjacent: - ParserElement.ignore(self, other) - else: - super().ignore(other) - return self - - def postParse(self, instring, loc, tokenlist): - retToks = tokenlist.copy() - del retToks[:] - retToks += ParseResults( - ["".join(tokenlist._asStringList(self.joinString))], modal=self.modalResults - ) - - if self.resultsName and retToks.haskeys(): - return [retToks] - else: - return retToks - - -class Group(TokenConverter): - """Converter to return the matched tokens as a list - useful for - returning tokens of :class:`ZeroOrMore` and :class:`OneOrMore` expressions. - - The optional ``aslist`` argument when set to True will return the - parsed tokens as a Python list instead of a pyparsing ParseResults. 
- - Example:: - - ident = Word(alphas) - num = Word(nums) - term = ident | num - func = ident + Opt(delimited_list(term)) - print(func.parse_string("fn a, b, 100")) - # -> ['fn', 'a', 'b', '100'] - - func = ident + Group(Opt(delimited_list(term))) - print(func.parse_string("fn a, b, 100")) - # -> ['fn', ['a', 'b', '100']] - """ - - def __init__(self, expr: ParserElement, aslist: bool = False): - super().__init__(expr) - self.saveAsList = True - self._asPythonList = aslist - - def postParse(self, instring, loc, tokenlist): - if self._asPythonList: - return ParseResults.List( - tokenlist.asList() - if isinstance(tokenlist, ParseResults) - else list(tokenlist) - ) - else: - return [tokenlist] - - -class Dict(TokenConverter): - """Converter to return a repetitive expression as a list, but also - as a dictionary. Each element can also be referenced using the first - token in the expression as its key. Useful for tabular report - scraping when the first column can be used as a item key. - - The optional ``asdict`` argument when set to True will return the - parsed tokens as a Python dict instead of a pyparsing ParseResults. - - Example:: - - data_word = Word(alphas) - label = data_word + FollowedBy(':') - - text = "shape: SQUARE posn: upper left color: light blue texture: burlap" - attr_expr = (label + Suppress(':') + OneOrMore(data_word, stop_on=label).set_parse_action(' '.join)) - - # print attributes as plain groups - print(attr_expr[1, ...].parse_string(text).dump()) - - # instead of OneOrMore(expr), parse using Dict(Group(expr)[1, ...]) - Dict will auto-assign names - result = Dict(Group(attr_expr)[1, ...]).parse_string(text) - print(result.dump()) - - # access named fields as dict entries, or output as dict - print(result['shape']) - print(result.as_dict()) - - prints:: - - ['shape', 'SQUARE', 'posn', 'upper left', 'color', 'light blue', 'texture', 'burlap'] - [['shape', 'SQUARE'], ['posn', 'upper left'], ['color', 'light blue'], ['texture', 'burlap']] - - color: 'light blue' - - posn: 'upper left' - - shape: 'SQUARE' - - texture: 'burlap' - SQUARE - {'color': 'light blue', 'posn': 'upper left', 'texture': 'burlap', 'shape': 'SQUARE'} - - See more examples at :class:`ParseResults` of accessing fields by results name. - """ - - def __init__(self, expr: ParserElement, asdict: bool = False): - super().__init__(expr) - self.saveAsList = True - self._asPythonDict = asdict - - def postParse(self, instring, loc, tokenlist): - for i, tok in enumerate(tokenlist): - if len(tok) == 0: - continue - - ikey = tok[0] - if isinstance(ikey, int): - ikey = str(ikey).strip() - - if len(tok) == 1: - tokenlist[ikey] = _ParseResultsWithOffset("", i) - - elif len(tok) == 2 and not isinstance(tok[1], ParseResults): - tokenlist[ikey] = _ParseResultsWithOffset(tok[1], i) - - else: - try: - dictvalue = tok.copy() # ParseResults(i) - except Exception: - exc = TypeError( - "could not extract dict values from parsed results" - " - Dict expression must contain Grouped expressions" - ) - raise exc from None - - del dictvalue[0] - - if len(dictvalue) != 1 or ( - isinstance(dictvalue, ParseResults) and dictvalue.haskeys() - ): - tokenlist[ikey] = _ParseResultsWithOffset(dictvalue, i) - else: - tokenlist[ikey] = _ParseResultsWithOffset(dictvalue[0], i) - - if self._asPythonDict: - return [tokenlist.as_dict()] if self.resultsName else tokenlist.as_dict() - else: - return [tokenlist] if self.resultsName else tokenlist - - -class Suppress(TokenConverter): - """Converter for ignoring the results of a parsed expression. 
-
-    Example::
-
-        source = "a, b, c,d"
-        wd = Word(alphas)
-        wd_list1 = wd + (',' + wd)[...]
-        print(wd_list1.parse_string(source))
-
-        # often, delimiters that are useful during parsing are just in the
-        # way afterward - use Suppress to keep them out of the parsed output
-        wd_list2 = wd + (Suppress(',') + wd)[...]
-        print(wd_list2.parse_string(source))
-
-        # Skipped text (using '...') can be suppressed as well
-        source = "lead in START relevant text END trailing text"
-        start_marker = Keyword("START")
-        end_marker = Keyword("END")
-        find_body = Suppress(...) + start_marker + ... + end_marker
-        print(find_body.parse_string(source))
-
-    prints::
-
-        ['a', ',', 'b', ',', 'c', ',', 'd']
-        ['a', 'b', 'c', 'd']
-        ['START', 'relevant text ', 'END']
-
-    (See also :class:`delimited_list`.)
-    """
-
-    def __init__(self, expr: Union[ParserElement, str], savelist: bool = False):
-        if expr is ...:
-            expr = _PendingSkip(NoMatch())
-        super().__init__(expr)
-
-    def __add__(self, other) -> "ParserElement":
-        if isinstance(self.expr, _PendingSkip):
-            return Suppress(SkipTo(other)) + other
-        else:
-            return super().__add__(other)
-
-    def __sub__(self, other) -> "ParserElement":
-        if isinstance(self.expr, _PendingSkip):
-            return Suppress(SkipTo(other)) - other
-        else:
-            return super().__sub__(other)
-
-    def postParse(self, instring, loc, tokenlist):
-        return []
-
-    def suppress(self) -> ParserElement:
-        return self
-
-
-def trace_parse_action(f: ParseAction) -> ParseAction:
-    """Decorator for debugging parse actions.
-
-    When the parse action is called, this decorator will print
-    ``">> entering method-name(line:<current_source_line>, <parse_location>, <matched_tokens>)"``.
-    When the parse action completes, the decorator will print
-    ``"<<"`` followed by the returned value, or any exception that the parse action raised.
-
-    Example::
-
-        wd = Word(alphas)
-
-        @trace_parse_action
-        def remove_duplicate_chars(tokens):
-            return ''.join(sorted(set(''.join(tokens))))
-
-        wds = wd[1, ...].set_parse_action(remove_duplicate_chars)
-        print(wds.parse_string("slkdjs sld sldd sdlf sdljf"))
-
-    prints::
-
-        >>entering remove_duplicate_chars(line: 'slkdjs sld sldd sdlf sdljf', 0, (['slkdjs', 'sld', 'sldd', 'sdlf', 'sdljf'], {}))
-        <<leaving remove_duplicate_chars (ret: 'dfjkls')
-        ['dfjkls']
-    """
-    f = _trim_arity(f)
-
-    def z(*paArgs):
-        thisFunc = f.__name__
-        s, l, t = paArgs[-3:]
-        if len(paArgs) > 3:
-            thisFunc = paArgs[0].__class__.__name__ + "." + thisFunc
-        sys.stderr.write(
-            ">>entering {}(line: {!r}, {}, {!r})\n".format(thisFunc, line(l, s), l, t)
-        )
-        try:
-            ret = f(*paArgs)
-        except Exception as exc:
-            sys.stderr.write("<<leaving {} (exception: {})\n".format(thisFunc, exc))
-            raise
-        sys.stderr.write("<<leaving {} (ret: {!r})\n".format(thisFunc, ret))
-        return ret
-
-    z.__name__ = f.__name__
-    return z
-
-
-def srange(s: str) -> str:
-    r"""Helper to easily define string ranges for use in :class:`Word`
-    construction. Borrows syntax from regexp ``'[]'`` string range
-    definitions::
-
-        srange("[0-9]")   -> "0123456789"
-        srange("[a-z]")   -> "abcdefghijklmnopqrstuvwxyz"
-        srange("[a-z$_]") -> "abcdefghijklmnopqrstuvwxyz$_"
-
-    The input string must be enclosed in []'s, and the returned string
-    is the expanded character set joined into a single string. The
-    values enclosed in the []'s may be:
-
-    - a single character
-    - an escaped character with a leading backslash (such as ``\-``
-      or ``\]``)
-    - an escaped hex character with a leading ``'\x'``
-      (``\x21``, which is a ``'!'`` character) (``\0x##``
-      is also supported for backwards compatibility)
-    - an escaped octal character with a leading ``'\0'``
-      (``\041``, which is a ``'!'`` character)
-    - a range of any of the above, separated by a dash (``'a-z'``,
-      etc.)
-    - any combination of the above (``'aeiouy'``,
-      ``'a-zA-Z0-9_$'``, etc.)
- """ - _expanded = ( - lambda p: p - if not isinstance(p, ParseResults) - else "".join(chr(c) for c in range(ord(p[0]), ord(p[1]) + 1)) - ) - try: - return "".join(_expanded(part) for part in _reBracketExpr.parse_string(s).body) - except Exception: - return "" - - -def token_map(func, *args) -> ParseAction: - """Helper to define a parse action by mapping a function to all - elements of a :class:`ParseResults` list. If any additional args are passed, - they are forwarded to the given function as additional arguments - after the token, as in - ``hex_integer = Word(hexnums).set_parse_action(token_map(int, 16))``, - which will convert the parsed data to an integer using base 16. - - Example (compare the last to example in :class:`ParserElement.transform_string`:: - - hex_ints = Word(hexnums)[1, ...].set_parse_action(token_map(int, 16)) - hex_ints.run_tests(''' - 00 11 22 aa FF 0a 0d 1a - ''') - - upperword = Word(alphas).set_parse_action(token_map(str.upper)) - upperword[1, ...].run_tests(''' - my kingdom for a horse - ''') - - wd = Word(alphas).set_parse_action(token_map(str.title)) - wd[1, ...].set_parse_action(' '.join).run_tests(''' - now is the winter of our discontent made glorious summer by this sun of york - ''') - - prints:: - - 00 11 22 aa FF 0a 0d 1a - [0, 17, 34, 170, 255, 10, 13, 26] - - my kingdom for a horse - ['MY', 'KINGDOM', 'FOR', 'A', 'HORSE'] - - now is the winter of our discontent made glorious summer by this sun of york - ['Now Is The Winter Of Our Discontent Made Glorious Summer By This Sun Of York'] - """ - - def pa(s, l, t): - return [func(tokn, *args) for tokn in t] - - func_name = getattr(func, "__name__", getattr(func, "__class__").__name__) - pa.__name__ = func_name - - return pa - - -def autoname_elements() -> None: - """ - Utility to simplify mass-naming of parser elements, for - generating railroad diagram with named subdiagrams. 
- """ - for name, var in sys._getframe().f_back.f_locals.items(): - if isinstance(var, ParserElement) and not var.customName: - var.set_name(name) - - -dbl_quoted_string = Combine( - Regex(r'"(?:[^"\n\r\\]|(?:"")|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*') + '"' -).set_name("string enclosed in double quotes") - -sgl_quoted_string = Combine( - Regex(r"'(?:[^'\n\r\\]|(?:'')|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*") + "'" -).set_name("string enclosed in single quotes") - -quoted_string = Combine( - Regex(r'"(?:[^"\n\r\\]|(?:"")|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*') + '"' - | Regex(r"'(?:[^'\n\r\\]|(?:'')|(?:\\(?:[^x]|x[0-9a-fA-F]+)))*") + "'" -).set_name("quotedString using single or double quotes") - -unicode_string = Combine("u" + quoted_string.copy()).set_name("unicode string literal") - - -alphas8bit = srange(r"[\0xc0-\0xd6\0xd8-\0xf6\0xf8-\0xff]") -punc8bit = srange(r"[\0xa1-\0xbf\0xd7\0xf7]") - -# build list of built-in expressions, for future reference if a global default value -# gets updated -_builtin_exprs: List[ParserElement] = [ - v for v in vars().values() if isinstance(v, ParserElement) -] - -# backward compatibility names -tokenMap = token_map -conditionAsParseAction = condition_as_parse_action -nullDebugAction = null_debug_action -sglQuotedString = sgl_quoted_string -dblQuotedString = dbl_quoted_string -quotedString = quoted_string -unicodeString = unicode_string -lineStart = line_start -lineEnd = line_end -stringStart = string_start -stringEnd = string_end -traceParseAction = trace_parse_action diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/vision.cpp b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/vision.cpp deleted file mode 100644 index c9a2cd4f20e6f58be1c5783d67c64232dd59b560..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/layers/csrc/vision.cpp +++ /dev/null @@ -1,117 +0,0 @@ -// Copyright (c) Facebook, Inc. and its affiliates. - -#include -#include "ROIAlignRotated/ROIAlignRotated.h" -#include "box_iou_rotated/box_iou_rotated.h" -#include "cocoeval/cocoeval.h" -#include "deformable/deform_conv.h" -#include "nms_rotated/nms_rotated.h" - -namespace detectron2 { - -#if defined(WITH_CUDA) || defined(WITH_HIP) -extern int get_cudart_version(); -#endif - -std::string get_cuda_version() { -#if defined(WITH_CUDA) || defined(WITH_HIP) - std::ostringstream oss; - -#if defined(WITH_CUDA) - oss << "CUDA "; -#else - oss << "HIP "; -#endif - - // copied from - // https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/cuda/detail/CUDAHooks.cpp#L231 - auto printCudaStyleVersion = [&](int v) { - oss << (v / 1000) << "." << (v / 10 % 100); - if (v % 10 != 0) { - oss << "." 
<< (v % 10); - } - }; - printCudaStyleVersion(get_cudart_version()); - return oss.str(); -#else // neither CUDA nor HIP - return std::string("not available"); -#endif -} - -bool has_cuda() { -#if defined(WITH_CUDA) - return true; -#else - return false; -#endif -} - -// similar to -// https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/Version.cpp -std::string get_compiler_version() { - std::ostringstream ss; -#if defined(__GNUC__) -#ifndef __clang__ - -#if ((__GNUC__ <= 4) && (__GNUC_MINOR__ <= 8)) -#error "GCC >= 4.9 is required!" -#endif - - { ss << "GCC " << __GNUC__ << "." << __GNUC_MINOR__; } -#endif -#endif - -#if defined(__clang_major__) - { - ss << "clang " << __clang_major__ << "." << __clang_minor__ << "." - << __clang_patchlevel__; - } -#endif - -#if defined(_MSC_VER) - { ss << "MSVC " << _MSC_FULL_VER; } -#endif - return ss.str(); -} - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("get_compiler_version", &get_compiler_version, "get_compiler_version"); - m.def("get_cuda_version", &get_cuda_version, "get_cuda_version"); - m.def("has_cuda", &has_cuda, "has_cuda"); - - m.def("deform_conv_forward", &deform_conv_forward, "deform_conv_forward"); - m.def( - "deform_conv_backward_input", - &deform_conv_backward_input, - "deform_conv_backward_input"); - m.def( - "deform_conv_backward_filter", - &deform_conv_backward_filter, - "deform_conv_backward_filter"); - m.def( - "modulated_deform_conv_forward", - &modulated_deform_conv_forward, - "modulated_deform_conv_forward"); - m.def( - "modulated_deform_conv_backward", - &modulated_deform_conv_backward, - "modulated_deform_conv_backward"); - - m.def("COCOevalAccumulate", &COCOeval::Accumulate, "COCOeval::Accumulate"); - m.def( - "COCOevalEvaluateImages", - &COCOeval::EvaluateImages, - "COCOeval::EvaluateImages"); - pybind11::class_(m, "InstanceAnnotation") - .def(pybind11::init()); - pybind11::class_(m, "ImageEvaluation") - .def(pybind11::init<>()); -} - -TORCH_LIBRARY(detectron2, m) { - m.def("nms_rotated", &nms_rotated); - m.def("box_iou_rotated", &box_iou_rotated); - m.def("roi_align_rotated_forward", &ROIAlignRotated_forward); - m.def("roi_align_rotated_backward", &ROIAlignRotated_backward); -} -} // namespace detectron2 diff --git a/spaces/BAAI/vid2vid-zero/gradio_demo/app_running.py b/spaces/BAAI/vid2vid-zero/gradio_demo/app_running.py deleted file mode 100644 index 1f6105342c1b84c6adbab5e5724d8105af3df348..0000000000000000000000000000000000000000 --- a/spaces/BAAI/vid2vid-zero/gradio_demo/app_running.py +++ /dev/null @@ -1,169 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import os - -import gradio as gr - -from gradio_demo.runner import Runner - - -def create_demo(runner: Runner, - pipe: InferencePipeline | None = None) -> gr.Blocks: - hf_token = os.getenv('HF_TOKEN') - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - with gr.Box(): - gr.Markdown('Input Data') - input_video = gr.File(label='Input video') - input_prompt = gr.Textbox( - label='Input prompt', - max_lines=1, - placeholder='A car is moving on the road.') - gr.Markdown(''' - - Upload a video and write a `Input Prompt` that describes the video. 
- ''') - - with gr.Column(): - with gr.Box(): - gr.Markdown('Input Parameters') - with gr.Row(): - model_path = gr.Text( - label='Path to off-the-shelf model', - value='CompVis/stable-diffusion-v1-4', - max_lines=1) - resolution = gr.Dropdown(choices=['512', '768'], - value='512', - label='Resolution', - visible=False) - - with gr.Accordion('Advanced settings', open=False): - sample_start_idx = gr.Number( - label='Start Frame Index',value=0) - sample_frame_rate = gr.Number( - label='Frame Rate',value=1) - n_sample_frames = gr.Number( - label='Number of Frames',value=8) - guidance_scale = gr.Number( - label='Guidance Scale', value=7.5) - seed = gr.Slider(label='Seed', - minimum=0, - maximum=100000, - step=1, - randomize=True, - value=33) - input_token = gr.Text(label='Hugging Face Write Token', - placeholder='', - visible=False if hf_token else True) - gr.Markdown(''' - - Upload input video or choose an exmple blow - - Set hyperparameters & click start - - It takes a few minutes to download model first - ''') - - with gr.Row(): - with gr.Column(): - validation_prompt = gr.Text( - label='Validation Prompt', - placeholder= - 'prompt to test the model, e.g: a Lego man is surfing') - - remove_gpu_after_running = gr.Checkbox( - label='Remove GPU after running', - value=False, - interactive=bool(os.getenv('SPACE_ID')), - visible=False) - - with gr.Row(): - result = gr.Video(label='Result') - - # examples - with gr.Row(): - examples = [ - [ - 'CompVis/stable-diffusion-v1-4', - "data/car-moving.mp4", - 'A car is moving on the road.', - 8, 0, 1, - 'A jeep car is moving on the desert.', - 7.5, 512, 33, - False, None, - ], - - [ - 'CompVis/stable-diffusion-v1-4', - "data/black-swan.mp4", - 'A blackswan is swimming on the water.', - 8, 0, 4, - 'A white swan is swimming on the water.', - 7.5, 512, 33, - False, None, - ], - - [ - 'CompVis/stable-diffusion-v1-4', - "data/child-riding.mp4", - 'A child is riding a bike on the road.', - 8, 0, 1, - 'A lego child is riding a bike on the road.', - 7.5, 512, 33, - False, None, - ], - - [ - 'CompVis/stable-diffusion-v1-4', - "data/car-turn.mp4", - 'A jeep car is moving on the road.', - 8, 0, 6, - 'A jeep car is moving on the snow.', - 7.5, 512, 33, - False, None, - ], - - [ - 'CompVis/stable-diffusion-v1-4', - "data/rabbit-watermelon.mp4", - 'A rabbit is eating a watermelon.', - 8, 0, 6, - 'A puppy is eating an orange.', - 7.5, 512, 33, - False, None, - ], - - ] - gr.Examples(examples=examples, - fn=runner.run_vid2vid_zero, - inputs=[ - model_path, input_video, input_prompt, - n_sample_frames, sample_start_idx, sample_frame_rate, - validation_prompt, guidance_scale, resolution, seed, - remove_gpu_after_running, - input_token, - ], - outputs=result, - cache_examples=os.getenv('SYSTEM') == 'spaces' - ) - - # run - run_button_vid2vid_zero = gr.Button('Start vid2vid-zero') - run_button_vid2vid_zero.click( - fn=runner.run_vid2vid_zero, - inputs=[ - model_path, input_video, input_prompt, - n_sample_frames, sample_start_idx, sample_frame_rate, - validation_prompt, guidance_scale, resolution, seed, - remove_gpu_after_running, - input_token, - ], - outputs=result) - - return demo - - -if __name__ == '__main__': - hf_token = os.getenv('HF_TOKEN') - runner = Runner(hf_token) - demo = create_demo(runner) - demo.queue(max_size=1).launch(share=False) diff --git a/spaces/BFH/BKMotionsAI/app.py b/spaces/BFH/BKMotionsAI/app.py deleted file mode 100644 index bc29914d0329cc553aa8404f8a162f7a0aba7ae9..0000000000000000000000000000000000000000 --- a/spaces/BFH/BKMotionsAI/app.py +++ 
/dev/null @@ -1,86 +0,0 @@ -#!/usr/bin/env python -# coding: utf-8 - -import gradio as gr -import numpy as np -import requests -from transformers import AutoModelForSequenceClassification, AutoTokenizer, TextClassificationPipeline, pipeline -from langdetect import detect -from matplotlib import pyplot as plt -import imageio - -# Load the model -model = AutoModelForSequenceClassification.from_pretrained("saved_model") -tokenizer = AutoTokenizer.from_pretrained("saved_model") -pipe = TextClassificationPipeline(model=model, tokenizer=tokenizer) - -# Function called by the UI -def attribution(text): - - # Clean the plot - plt.clf() - - # Detect the language - language = detect(text) - - # Translate the input in german if necessary - if language == 'fr': - translator = pipeline("translation", model="Helsinki-NLP/opus-mt-fr-de") - translatedText = translator(text[0:1000]) - text = translatedText[0]["translation_text"] - elif language != 'de': - return "The language is not recognized, it must be either in German or in French.", None - - # Set the bars of the bar chart - bars = "" - if language == 'fr': - bars = ("DDPS", "DFI", "AS-MPC", "DFJP", "DEFR", "DETEC", "DFAE", "Parl", "ChF", "DFF", "AF", "TF") - else: - bars = ("VBS", "EDI", "AB-BA", "EJPD", "WBF", "UVEK", "EDA", "Parl", "BK", "EFD", "BV", "BGer") - - # Make the prediction with the 1000 first characters - results = pipe(text[0:1000], return_all_scores=True) - rates = [row["score"] for row in results[0]] - - # Bar chart - y_pos = np.arange(len(bars)) - plt.barh(y_pos, rates) - plt.yticks(y_pos, bars) - - # Set the output text - name = "" - maxRate = np.max(rates) - maxIndex = np.argmax(rates) - - # ML model not sure if highest probability < 60% - if maxRate < 0.6: - # de / fr - if language == 'de': - name = "Das ML-Modell ist nicht sicher. Das Departement könnte sein : \n\n" - else: - name = "Le modèle ML n'est pas sûr. Le département pourrait être : \n\n" - i = 0 - # Show each department that has a probability > 10% - while i == 0: - if rates[maxIndex] >= 0.1: - name = name + "\t" + str(rates[maxIndex])[2:4] + "%" + "\t\t\t\t\t" + bars[maxIndex] + "\n" - rates[maxIndex] = 0 - maxIndex = np.argmax(rates) - else: - i = 1 - # ML model pretty sure, show only one department - else: - name = str(maxRate)[2:4] + "%" + "\t\t\t\t\t\t" + bars[maxIndex] - - # Save the bar chart as png and load it (enables better display) - plt.savefig('rates.png') - im = imageio.imread('rates.png') - - return name, im - - -# display the UI -interface = gr.Interface(fn=attribution, - inputs=[gr.inputs.Textbox(lines=20, placeholder="Geben Sie bitte den Titel und den Sumbmitted Text des Vorstoss ein.\nVeuillez entrer le titre et le Submitted Text de la requête.")], - outputs=['text', 'image']) -interface.launch() \ No newline at end of file diff --git a/spaces/Banbri/zcvzcv/src/app/interface/zoom/index.tsx b/spaces/Banbri/zcvzcv/src/app/interface/zoom/index.tsx deleted file mode 100644 index 5c8d31a3af1c80f8a9ef15330bb84c0d2c3069de..0000000000000000000000000000000000000000 --- a/spaces/Banbri/zcvzcv/src/app/interface/zoom/index.tsx +++ /dev/null @@ -1,35 +0,0 @@ -import { useStore } from "@/app/store" -import { VerticalSlider } from "@/components/ui/vertical-slider" -import { cn } from "@/lib/utils" - -export function Zoom() { - const zoomLevel = useStore((state) => state.zoomLevel) - const setZoomLevel = useStore((state) => state.setZoomLevel) - const isGeneratingStory = useStore((state) => state.isGeneratingStory) - - return ( -
-    {/* wrapper classNames and the slider's range props were not recovered */}
-    <div>
-      <div>Zoom</div>
-      <VerticalSlider
-        onValueChange={(value) => setZoomLevel(value[0] || 10)}
-        value={[zoomLevel]}
-        className="h-64 md:h-80"
-        orientation="vertical"
-      />
-    </div>
- ) -} \ No newline at end of file diff --git a/spaces/Bart92/RVC_HF/colab_for_mdx.py b/spaces/Bart92/RVC_HF/colab_for_mdx.py deleted file mode 100644 index 274846d0b5395865a05fce0da86b96d26ac06999..0000000000000000000000000000000000000000 --- a/spaces/Bart92/RVC_HF/colab_for_mdx.py +++ /dev/null @@ -1,71 +0,0 @@ -import json -import os -import gc -import psutil -import requests -import subprocess -import time -import logging -import sys -import shutil -now_dir = os.getcwd() -sys.path.append(now_dir) -first_cell_executed = False -file_folder = "Colab-for-MDX_B" -def first_cell_ran(): - global first_cell_executed - if first_cell_executed: - #print("The 'first_cell_ran' function has already been executed.") - return - - - - first_cell_executed = True - os.makedirs("tmp_models", exist_ok=True) - - - - class hide_opt: # hide outputs - def __enter__(self): - self._original_stdout = sys.stdout - sys.stdout = open(os.devnull, "w") - - def __exit__(self, exc_type, exc_val, exc_tb): - sys.stdout.close() - sys.stdout = self._original_stdout - - def get_size(bytes, suffix="B"): # read ram - global svmem - factor = 1024 - for unit in ["", "K", "M", "G", "T", "P"]: - if bytes < factor: - return f"{bytes:.2f}{unit}{suffix}" - bytes /= factor - svmem = psutil.virtual_memory() - - - def use_uvr_without_saving(): - print("Notice: files won't be saved to personal drive.") - print(f"Downloading {file_folder}...", end=" ") - with hide_opt(): - #os.chdir(mounting_path) - items_to_move = ["demucs", "diffq","julius","model","separated","tracks","mdx.py","MDX-Net_Colab.ipynb"] - subprocess.run(["git", "clone", "https://github.com/NaJeongMo/Colab-for-MDX_B.git"]) - for item_name in items_to_move: - item_path = os.path.join(file_folder, item_name) - if os.path.exists(item_path): - if os.path.isfile(item_path): - shutil.move(item_path, now_dir) - elif os.path.isdir(item_path): - shutil.move(item_path, now_dir) - try: - shutil.rmtree(file_folder) - except PermissionError: - print(f"No se pudo eliminar la carpeta {file_folder}. 
Puede estar relacionada con Git.") - - - use_uvr_without_saving() - print("done!") - if not os.path.exists("tracks"): - os.mkdir("tracks") -first_cell_ran() \ No newline at end of file diff --git a/spaces/BartPoint/VoiceChange_Beta/infer_pack/transforms.py b/spaces/BartPoint/VoiceChange_Beta/infer_pack/transforms.py deleted file mode 100644 index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000 --- a/spaces/BartPoint/VoiceChange_Beta/infer_pack/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - 
min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) 
- 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/Benson/text-generation/Examples/Avakin Life Pc.md b/spaces/Benson/text-generation/Examples/Avakin Life Pc.md deleted file mode 100644 index 02926a30996e18c4a25723d7ec42b59c5bfc562f..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Avakin Life Pc.md +++ /dev/null @@ -1,50 +0,0 @@ - -
- H2: Cómo descargar e instalar Avakin Life en PC
- H3: Cómo usar BlueStacks para jugar Avakin Life en PC
- H3: Cómo usar el sitio web oficial de Avakin para jugar Avakin Life en PC
- H2: Beneficios de jugar Avakin Life en PC
- H3: Mejores gráficos y rendimiento
- H3: Más control y personalización
- H3: Comunicación y traducción más fáciles
- H2: Conclusión: Comienza tu segunda vida en el PC hoy
- H4: Preguntas frecuentes
Tabla 2: Artículo con formato HTML

Avakin Life PC: Cómo jugar al mundo virtual en 3D en tu ordenador

-

Si estás buscando un juego de rol que te permita crear tu propio avatar, explorar un mundo virtual y conocer nuevos amigos, entonces deberías echar un vistazo a Avakin Life. Avakin Life es un juego de mundo virtual en 3D de Lockwood Publishing que está disponible en dispositivos iOS y Android. Puede personalizar su apariencia, estilo y hogar, ir a aventuras, unirse a concursos de moda y socializar con millones de jugadores de todo el mundo.

-

avakin life pc


Download Zip 🆗 https://bltlly.com/2v6Jwg



-

Pero ¿sabías que también puedes jugar Avakin Life en tu PC? Sí, lo has oído bien. Puedes disfrutar de este increíble juego en una pantalla más grande, con mejores gráficos, rendimiento y control. En este artículo, le mostraremos cómo descargar e instalar Avakin Life en su computadora usando dos métodos diferentes. También te contaremos los beneficios de jugar a Avakin Life en PC y responderemos algunas preguntas frecuentes. ¡Así que, empecemos!

-

Cómo descargar e instalar Avakin Life en PC

-

Hay dos formas de jugar Avakin Life en tu PC. Uno es utilizar un software emulador como BlueStacks, que le permite ejecutar aplicaciones y juegos de Android en su ordenador. La otra es utilizar el sitio web oficial de Avakin, que ofrece una versión web del juego a la que puedes acceder a través de tu navegador. Estos son los pasos para cada método:

-

Cómo usar BlueStacks para jugar Avakin Life en PC

-
    - -
  1. Inicie BlueStacks e inicie sesión en su cuenta de Google. Esto le permitirá acceder a la Google Play Store.
  2. Busque Avakin Life en Google Play Store y haga clic en el botón de instalación. Alternativamente, puede descargar el archivo APK de una fuente de confianza y arrastrarlo y soltarlo en BlueStacks.
  3. Una vez completada la instalación, haga clic en el icono de Avakin Life en la pantalla de inicio de BlueStacks para comenzar a jugar.
-

Cómo usar el sitio web oficial de Avakin para jugar Avakin Life en PC

-
    -
  1. Vaya al sitio web oficial de Avakin (https://avakin.com) y haga clic en el botón "Descargar" en la esquina superior derecha.
  2. Seleccione su plataforma preferida entre las opciones disponibles. Puede elegir entre Windows, Mac, Linux o Web.
  3. Si elige Web, será redirigido a una página donde puede jugar Avakin Life directamente en su navegador. Tendrá que iniciar sesión con su cuenta de Facebook o crear una nueva cuenta con su dirección de correo electrónico.
  4. Si elige cualquiera de las otras plataformas, tendrá que descargar e instalar un pequeño archivo lanzador que le permitirá jugar Avakin Life en su computadora. Siga las instrucciones en la pantalla para completar el proceso.
  5. Una vez instalado el lanzador, ábralo e inicie sesión con su cuenta de Facebook o dirección de correo electrónico. A continuación, puede comenzar a jugar Avakin Life en su PC.
-

Beneficios de jugar Avakin Life en PC

-

Ahora que sabe cómo jugar Avakin Life en su PC, es posible que se pregunte por qué debe hacerlo. Bueno, hay muchas ventajas de jugar a este juego en un ordenador en lugar de en un dispositivo móvil. Estas son algunas de ellas:

-

Mejores gráficos y rendimiento

- -

Más control y personalización

-

Otro beneficio de jugar Avakin Life en PC es que puedes tener más opciones de control y personalización. Puede utilizar el teclado y el ratón para navegar por el juego, que puede ser más conveniente y preciso que el uso de una pantalla táctil. También puedes ajustar la configuración del juego según tus preferencias, como la resolución, el sonido y el idioma. Incluso puedes usar trucos y hacks para mejorar tu juego, como conseguir monedas, gemas o objetos ilimitados. Sin embargo, ten cuidado de no abusar de estas características o podrías ser expulsado del juego.

-

Comunicación y traducción más fáciles

-

Un tercer beneficio de jugar Avakin Life en PC es que puedes comunicarte y traducir más fácilmente con otros jugadores. Puede usar su teclado para escribir más rápido y cómodamente que usando un teclado virtual. También puedes usar chat de voz o video chat para hablar con tus amigos o hacer otros nuevos. También puedes usar herramientas de traducción para entender e interactuar con jugadores de diferentes países y culturas. Puedes aprender nuevos idiomas, intercambiar ideas y divertirte con gente de todo el mundo.

-

Conclusión: Comience su segunda vida en el PC hoy

-

Avakin Life es un fantástico juego que te permite crear tu propio avatar, explorar un mundo virtual y conocer nuevos amigos. Pero si quieres llevar tu experiencia de juego al siguiente nivel, deberías intentar jugar a Avakin Life en PC. Puede disfrutar de mejores gráficos, rendimiento, control y personalización. También puede comunicarse y traducir más fácilmente con otros jugadores. Jugar a Avakin Life en PC te hará sentir que estás viviendo una segunda vida en un mundo virtual en 3D.

-

- -

Esperamos que este artículo te haya ayudado a aprender a jugar Avakin Life en PC y por qué deberías hacerlo. Si tiene alguna pregunta o comentario, háganoslo saber en los comentarios a continuación. Nos encantaría saber de usted.

-

Preguntas frecuentes

-
    -
  • Q: ¿Avakin Life es libre de jugar?
  • A: Sí, Avakin Life es gratis para jugar en dispositivos móviles y PC. Sin embargo, hay algunos elementos del juego y características que requieren dinero real para comprar. También puedes ver anuncios u ofertas completas para ganar monedas y gemas gratis.
  • Q: ¿Es Avakin Life seguro para los niños?
  • A: Avakin Life está clasificado 12+ por la App Store y 13+ por la Google Play Store. Contiene violencia leve, contenido sexual, desnudez, blasfemia, alcohol, tabaco y drogas. También permite a los usuarios chatear con extraños en línea, lo que puede plantear algunos riesgos. Por lo tanto, se recomienda la orientación y supervisión de los padres para los jugadores más jóvenes.
  • Q: ¿Cómo puedo actualizar Avakin Life en PC?
  • A: Si está utilizando BlueStacks para jugar Avakin Life en PC, puede actualizar el juego yendo a la Google Play Store y haciendo clic en el botón de actualización. Si estás usando el sitio web oficial de Avakin para jugar a Avakin Life en PC, no necesitas actualizar el juego manualmente, ya que se actualizará automáticamente.
  • Q: ¿Cómo puedo eliminar mi cuenta de Avakin Life?
  • A: Si desea eliminar su cuenta de Avakin Life, debe ponerse en contacto con el equipo de atención al cliente a través de su sitio web (https://avakin.com/ support/) o correo electrónico (support@avakin.com). Deberá proporcionar su nombre de usuario, dirección de correo electrónico, ID de dispositivo y razón para eliminar su cuenta. Una vez procesada su solicitud, su cuenta será eliminada permanentemente.
  • Q: ¿Cómo me comunico con el soporte de Avakin Life?

64aa2da5cf
-
-
\ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Bus De Conduccin De Telolet 3d Mod Apk V1.2. 4b.md b/spaces/Benson/text-generation/Examples/Descargar Bus De Conduccin De Telolet 3d Mod Apk V1.2. 4b.md deleted file mode 100644 index 6e1baf5bb90310755622487cfab50a5fb6f4dd66..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Bus De Conduccin De Telolet 3d Mod Apk V1.2. 4b.md +++ /dev/null @@ -1,60 +0,0 @@ - -

Descargar Telolet autobús de conducción 3D Mod APK v1.2. 4b y disfrutar de la diversión de conducir un autobús realista en Indonesia

-

Si eres un fan de los juegos de conducción de autobuses, es posible que hayas oído hablar de Telolet Bus Driving 3D, un juego revolucionario en el género de la conducción árcade sin fin con gráficos y control realistas en 3D. En este juego, usted puede viajar a través de los coches de tráfico de la carretera de Indonesia con un autobús muy fresco y hacer que los niños felices tocando la bocina de su único autobús telolet. Pero lo que si quieres disfrutar del juego sin limitaciones o interrupciones? Bueno, puedes hacerlo descargando Telolet Bus Driving 3D Mod APK v1.2. 4b, que te da dinero ilimitado, todos los autobuses desbloqueados, y sin anuncios. En este artículo, te contaremos más sobre este juego, sus características y cómo descargarlo e instalarlo en tu dispositivo.

-

descargar bus de conducción de telolet 3d mod apk v1.2. 4b


DOWNLOAD ►►► https://bltlly.com/2v6JJz



-

¿Qué es Telolet Bus Driving 3D?

-

Telolet Bus Driving 3D es un juego desarrollado por LOCOS, un estudio de juegos indonesio que tiene como objetivo crear juegos divertidos y atractivos para todos. El juego se inspiró en el fenómeno viral de "Om Telolet Om", que significa "Señor, toca la bocina, señor" en indonesio. Esta es una frase que los niños gritan a los conductores de autobús para pedirles que toquen sus distintivos cuernos telolet, que producen un sonido musical. El juego fue lanzado en diciembre de 2016 y desde entonces ha ganado más de 10 millones de descargas en Google Play Store.

-

Características de Telolet Bus Driving 3D

-

Telolet Bus Driving 3D no es solo un juego de conducción simple. Tiene muchas características que lo hacen destacar de otros juegos del mismo género. Estos son algunos de ellos:

-

Impresionantes gráficos 3D

-

El juego tiene increíbles gráficos en 3D que te hacen sentir como si estuvieras conduciendo un autobús real en Indonesia. Puedes ver los detalles del autobús, el tráfico, el medio ambiente y los niños que te animan cuando tocas la bocina.

-

Manejo del coche suave y realista

- -

Muchos autobuses para elegir

-

El juego tiene muchos autobuses para elegir, cada uno con su propio diseño, color, velocidad y melodía de cuerno telolet. Puedes desbloquear nuevos buses ganando monedas o usando la versión mod APK.

-

3 lugares famosos en Indonesia

-

El juego tiene 3 lugares famosos en Indonesia que puedes explorar: Pantura, Kampoeng y Cipali. Cada lugar tiene su propio paisaje, tráfico y desafíos.

-

-

3 modos de juego

-

El juego tiene 3 modos de juego: One Way, Rush Hour y Two Way. En el modo One Way, conduce en una carretera de un solo sentido con tráfico moderado. En el modo de hora punta, se enfrenta a un atasco de tráfico pesado y tiene que evitar colisiones. En el modo de dos vías, se conduce en una carretera de dos vías con tráfico entrante y tiene que adelantar a otros vehículos.

-

Tipos ricos de tráfico NPC Indonesia

-

El juego tiene ricos tipos de tráfico NPC Indonesia que hacen el juego más realista y desafiante. Usted encontrará coches, camiones, motocicletas, autobuses y otros vehículos que tienen diferentes comportamientos y velocidades. También verás peatones, animales y obstáculos en la carretera.

-

Actualizaciones de atributos

-

El juego tiene actualizaciones de atributos que le permiten mejorar el rendimiento y la apariencia de su autobús. Puedes actualizar tu velocidad, freno, bocina y color usando las monedas que ganes del juego o la versión mod APK.

-

Misiones diarias difíciles

-

El juego tiene desafiantes misiones diarias que te dan recompensas y objetivos adicionales. Puede completar varias tareas, como conducir cierta distancia, tocar la bocina un cierto número de veces, adelantar un cierto número de vehículos y más.

-

Tablas de clasificación en línea y logros

-

El juego tiene tablas de clasificación en línea y logros que le permiten competir con otros jugadores y mostrar sus habilidades. Puedes posicionarte en las tablas de clasificación globales y regionales al ganar altas puntuaciones y monedas. También puedes desbloquear logros al completar varios desafíos e hitos.

- -

Telolet Bus Driving 3D es un juego divertido y adictivo que te mantendrá entretenido durante horas. Sin embargo, si quieres disfrutar del juego sin limitaciones ni interrupciones, debes descargar Telolet Bus Driving 3D Mod APK v1.2. 4b, que le da los siguientes beneficios:

-

Dinero ilimitado

-

Con la versión APK mod, usted tendrá dinero ilimitado que se puede utilizar para comprar y actualizar cualquier autobús que desee. No tienes que preocuparte por quedarte sin monedas o gastar dinero real para conseguir más.

-

Todos los autobuses desbloqueados

-

Con la versión mod APK, tendrás todos los buses desbloqueados desde el principio. No tienes que jugar durante horas o completar misiones para desbloquear nuevos autobuses. Puedes elegir el autobús que quieras y disfrutar de sus características únicas.

-

No hay anuncios

-

Con la versión mod APK, no tendrás anuncios que interrumpan tu juego o te molesten. No tienes que ver videos o hacer clic en banners para obtener monedas o recompensas adicionales. Puedes jugar el juego sin problemas y sin distracciones.

-

Cómo descargar e instalar Telolet Bus Driving 3D Mod APK v1.2. 4b?

-

Si está interesado en descargar e instalar Telolet Bus Driving 3D Mod APK v1.2. 4b en su dispositivo, puede seguir estos sencillos pasos:

-

Paso 1: Descargar el archivo APK de una fuente de confianza

-

El primer paso es descargar el archivo APK de una fuente de confianza que proporciona descargas seguras y libres de virus. Puede utilizar este enlace para descargar el archivo directamente a su dispositivo o transferirlo desde su PC.

-

Paso 2: Habilitar fuentes desconocidas en el dispositivo

-

El segundo paso es habilitar fuentes desconocidas en su dispositivo para que pueda instalar aplicaciones desde fuentes distintas de Google Play Store. Para hacer esto, vaya a Configuración > Seguridad > Fuentes desconocidas y conéctelo.

-

Paso 3: Instalar el archivo APK y disfrutar del juego

- -

Esperamos que este artículo le haya ayudado a aprender más sobre Telolet Bus Driving 3D Mod APK v1.2. 4b y cómo descargarlo e instalarlo en su dispositivo. Este es un gran juego para los entusiastas de la conducción de autobuses que quieren experimentar la emoción de conducir un autobús realista en Indonesia con un cuerno musical. ¡Descárgalo ahora y diviértete!

-

Conclusión

-

Telolet Bus Driving 3D es un juego innovador en el género de la conducción árcade sin fin con gráficos y control 3D realistas. Fue inspirado por el fenómeno viral de "Om Telolet Om", que significa "Señor, toca la bocina, señor" en indonesio. El juego tiene muchas características que lo hacen destacar de otros juegos en el mismo género, tales como impresionantes gráficos en 3D, manejo de automóviles suave y realista, muchos autobuses para elegir, 3 lugares famosos en Indonesia, 3 modos de juego, ricos tipos de tráfico NPC Indonesia, actualizaciones de atributos, misiones diarias desafiantes, y tablas de clasificación en línea y logros. Sin embargo, si quieres disfrutar del juego sin limitaciones ni interrupciones, debes descargar Telolet Bus Driving 3D Mod APK v1.2. 4b, que le da dinero ilimitado, todos los autobuses desbloqueados, y sin anuncios. Para descargar e instalar la versión mod APK, solo tiene que seguir tres sencillos pasos: descargar el archivo APK de una fuente de confianza, habilitar fuentes desconocidas en su dispositivo, e instalar el archivo APK y disfrutar del juego. Este es un gran juego para los entusiastas de la conducción de autobuses que quieren experimentar la emoción de conducir un autobús realista en Indonesia con un cuerno musical. ¡Descárgalo ahora y diviértete!

-

Preguntas frecuentes

-

Aquí hay algunas preguntas frecuentes sobre Telolet Bus Driving 3D Mod APK v1.2. 4b:

-

Es Telolet autobús de conducción 3D Mod APK v1.2. 4b seguro para descargar e instalar?

-

Sí, Telolet autobús de conducción 3D Mod APK v1.2. 4b es seguro para descargar e instalar siempre y cuando utilice una fuente de confianza que proporciona descargas libres de virus. Puede utilizar este enlace para descargar el archivo de forma segura.

- -

No, no es necesario rootear el dispositivo para usar Telolet Bus Driving 3D Mod APK v1.2. 4b. Solo necesitas habilitar fuentes desconocidas en la configuración de tu dispositivo e instalar el archivo APK como de costumbre.

-

Será Telolet autobús de conducción 3D Mod APK v1.2. 4b afectar mi progreso original del juego?

-

No, Telolet Bus Driving 3D Mod APK v1.2. 4b no afectará su progreso original del juego. Puedes jugar ambas versiones por separado y cambiar entre ellas cuando quieras.

-

¿Puedo jugar Telolet autobús de conducción 3D Mod APK v1.2. 4b en línea con otros jugadores?

-

Sí, se puede jugar Telolet autobús de conducción 3D Mod APK v1.2. 4b en línea con otros jugadores y competir en las tablas de clasificación y logros. Sin embargo, es posible que encuentre algunos problemas de compatibilidad con los jugadores que utilizan la versión original del juego.

-

¿Cómo puedo contactar al desarrollador de Telolet Bus Driving 3D Mod APK v1.2. 4b si tengo alguna pregunta o comentario?

-

Puede ponerse en contacto con el desarrollador de Telolet Bus Driving 3D Mod APK v1.2. 4b enviando un correo electrónico a locosgames@gmail.com o visitando su página de Facebook en https://www.facebook.com/locosgames/ Estarán encantados de saber de usted y responder a sus preguntas o comentarios.

64aa2da5cf
-
-
\ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/json.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/json.py deleted file mode 100644 index ea94493f21e6f5583469d882d08203381ee31117..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/rich/json.py +++ /dev/null @@ -1,140 +0,0 @@ -from pathlib import Path -from json import loads, dumps -from typing import Any, Callable, Optional, Union - -from .text import Text -from .highlighter import JSONHighlighter, NullHighlighter - - -class JSON: - """A renderable which pretty prints JSON. - - Args: - json (str): JSON encoded data. - indent (Union[None, int, str], optional): Number of characters to indent by. Defaults to 2. - highlight (bool, optional): Enable highlighting. Defaults to True. - skip_keys (bool, optional): Skip keys not of a basic type. Defaults to False. - ensure_ascii (bool, optional): Escape all non-ascii characters. Defaults to False. - check_circular (bool, optional): Check for circular references. Defaults to True. - allow_nan (bool, optional): Allow NaN and Infinity values. Defaults to True. - default (Callable, optional): A callable that converts values that can not be encoded - in to something that can be JSON encoded. Defaults to None. - sort_keys (bool, optional): Sort dictionary keys. Defaults to False. - """ - - def __init__( - self, - json: str, - indent: Union[None, int, str] = 2, - highlight: bool = True, - skip_keys: bool = False, - ensure_ascii: bool = False, - check_circular: bool = True, - allow_nan: bool = True, - default: Optional[Callable[[Any], Any]] = None, - sort_keys: bool = False, - ) -> None: - data = loads(json) - json = dumps( - data, - indent=indent, - skipkeys=skip_keys, - ensure_ascii=ensure_ascii, - check_circular=check_circular, - allow_nan=allow_nan, - default=default, - sort_keys=sort_keys, - ) - highlighter = JSONHighlighter() if highlight else NullHighlighter() - self.text = highlighter(json) - self.text.no_wrap = True - self.text.overflow = None - - @classmethod - def from_data( - cls, - data: Any, - indent: Union[None, int, str] = 2, - highlight: bool = True, - skip_keys: bool = False, - ensure_ascii: bool = False, - check_circular: bool = True, - allow_nan: bool = True, - default: Optional[Callable[[Any], Any]] = None, - sort_keys: bool = False, - ) -> "JSON": - """Encodes a JSON object from arbitrary data. - - Args: - data (Any): An object that may be encoded in to JSON - indent (Union[None, int, str], optional): Number of characters to indent by. Defaults to 2. - highlight (bool, optional): Enable highlighting. Defaults to True. - default (Callable, optional): Optional callable which will be called for objects that cannot be serialized. Defaults to None. - skip_keys (bool, optional): Skip keys not of a basic type. Defaults to False. - ensure_ascii (bool, optional): Escape all non-ascii characters. Defaults to False. - check_circular (bool, optional): Check for circular references. Defaults to True. - allow_nan (bool, optional): Allow NaN and Infinity values. Defaults to True. - default (Callable, optional): A callable that converts values that can not be encoded - in to something that can be JSON encoded. Defaults to None. - sort_keys (bool, optional): Sort dictionary keys. Defaults to False. - - Returns: - JSON: New JSON object from the given data. 
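-
-        Example (an illustrative sketch; it assumes the standalone ``rich``
-        package layout rather than this vendored copy)::
-
-            from rich.console import Console
-            from rich.json import JSON
-
-            Console().print(JSON.from_data({"name": "rich", "tags": [1, 2, 3]}))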
- """ - json_instance: "JSON" = cls.__new__(cls) - json = dumps( - data, - indent=indent, - skipkeys=skip_keys, - ensure_ascii=ensure_ascii, - check_circular=check_circular, - allow_nan=allow_nan, - default=default, - sort_keys=sort_keys, - ) - highlighter = JSONHighlighter() if highlight else NullHighlighter() - json_instance.text = highlighter(json) - json_instance.text.no_wrap = True - json_instance.text.overflow = None - return json_instance - - def __rich__(self) -> Text: - return self.text - - -if __name__ == "__main__": - - import argparse - import sys - - parser = argparse.ArgumentParser(description="Pretty print json") - parser.add_argument( - "path", - metavar="PATH", - help="path to file, or - for stdin", - ) - parser.add_argument( - "-i", - "--indent", - metavar="SPACES", - type=int, - help="Number of spaces in an indent", - default=2, - ) - args = parser.parse_args() - - from pip._vendor.rich.console import Console - - console = Console() - error_console = Console(stderr=True) - - try: - if args.path == "-": - json_data = sys.stdin.read() - else: - json_data = Path(args.path).read_text() - except Exception as error: - error_console.print(f"Unable to read {args.path!r}; {error}") - sys.exit(-1) - - console.print(JSON(json_data, indent=args.indent), soft_wrap=True) diff --git a/spaces/CAMP-ViL/Xplainer/app.py b/spaces/CAMP-ViL/Xplainer/app.py deleted file mode 100644 index b2b673afd870e6fb48cf1f3d791007c931d20c6b..0000000000000000000000000000000000000000 --- a/spaces/CAMP-ViL/Xplainer/app.py +++ /dev/null @@ -1,137 +0,0 @@ -from pathlib import Path - -import gradio as gr -import numpy as np -from matplotlib import pyplot as plt - -from descriptors import disease_descriptors_chexpert, disease_descriptors_chestxray14 -from model import InferenceModel - - -def plot_bars(model_output): - # sort model_output by overall_probability - model_output = {k: v for k, v in sorted(model_output.items(), key=lambda item: item[1]['overall_probability'], reverse=True)} - - # Create a figure with as many subplots as there are diseases, arranged vertically - fig, axs = plt.subplots(len(model_output), 1, figsize=(10, 5 * len(model_output))) - # axs is not iterable if only one subplot is created, so make it a list - if len(model_output) == 1: - axs = [axs] - - for ax, (disease, data) in zip(axs, model_output.items()): - desc_probs = list(data['descriptor_probabilities'].items()) - # sort descending - desc_probs = sorted(desc_probs, key=lambda item: item[1], reverse=True) - - my_probs = [p[1] for p in desc_probs] - min_prob = min(my_probs) - max_prob = max(my_probs) - my_labels = [p[0] for p in desc_probs] - - # Convert probabilities to differences from 0.5 - diffs = np.abs(np.array(my_probs) - 0.5) - - # Set colors based on sign of difference - colors = ['red' if p < 0.5 else 'forestgreen' for p in my_probs] - - # Plot bars with appropriate colors and left offsets - left = [p if p < 0.5 else 0.5 for p in my_probs] - bars = ax.barh(my_labels, diffs, left=left, color=colors, alpha=0.3) - - for i, bar in enumerate(bars): - ax.text(min_prob - 0.04, bar.get_y() + bar.get_height() / 2, my_labels[i], ha='left', va='center', color='black', fontsize=15) - - ax.set_xlim(min(min_prob - 0.05, 0.49), max(max_prob + 0.05, 0.51)) - - # Invert the y-axis to show bars with values less than 0.5 to the left of the center - ax.invert_yaxis() - - ax.set_yticks([]) - - # Add a title for the disease - if data['overall_probability'] >= 0.5: - ax.set_title(f"{disease} : score of {data['overall_probability']:.2f}") - else: - 
ax.set_title(f"No {disease} : score of {data['overall_probability']:.2f}") - - # make title larger and bold - ax.title.set_fontsize(15) - ax.title.set_fontweight(600) - - # Save the plot - plt.tight_layout() # Adjust subplot parameters to give specified padding - file_path = 'plot.png' - plt.savefig(file_path) - plt.close(fig) - - return file_path - - -def classify_image(inference_model, image_path, diseases_to_predict): - descriptors_with_indication = [d + " indicating " + disease for disease, descriptors in diseases_to_predict.items() for d in descriptors] - probs, negative_probs = inference_model.get_descriptor_probs(image_path=Path(image_path), descriptors=descriptors_with_indication, - do_negative_prompting=True, demo=True) - - disease_probs, negative_disease_probs = inference_model.get_diseases_probs(diseases_to_predict, pos_probs=probs, negative_probs=negative_probs) - - model_output = {} - for idx, disease in enumerate(diseases_to_predict.keys()): - model_output[disease] = { - 'overall_probability': disease_probs[disease], - 'descriptor_probabilities': {descriptor: probs[f'{descriptor} indicating {disease}'].item() for descriptor in - diseases_to_predict[disease]} - } - - file_path = plot_bars(model_output) - return file_path - - -# Define the function you want to wrap -def process_input(image_path, prompt_names: list, disease_name: str, descriptors: str): - diseases_to_predict = {} - - for prompt in prompt_names: - if prompt == 'Custom': - diseases_to_predict[disease_name] = descriptors.split('\n') - else: - if prompt in disease_descriptors_chexpert: - diseases_to_predict[prompt] = disease_descriptors_chexpert[prompt] - else: # only chestxray14 - diseases_to_predict[prompt] = disease_descriptors_chestxray14[prompt] - - # classify - model = InferenceModel() - output = classify_image(model, image_path, diseases_to_predict) - - return output - -with open("article.md", "r") as f: - article = f.read() -with open("description.md", "r") as f: - description = f.read() - -# Define the Gradio interface -iface = gr.Interface( - fn=process_input, - examples = [['examples/enlarged_cardiomediastinum.jpg', ['Enlarged Cardiomediastinum'], '', ''],['examples/edema.jpg', ['Edema'], '', ''], - ['examples/support_devices.jpg', ['Custom'], 'Pacemaker', 'metalic object\nimplant on the left side of the chest\nimplanted cardiac device']], - inputs=[gr.inputs.Image(type="filepath"), gr.inputs.CheckboxGroup( - choices=['Enlarged Cardiomediastinum', 'Cardiomegaly', 'Lung Opacity', 'Lung Lesion', 'Edema', 'Consolidation', 'Pneumonia', - 'Atelectasis', 'Pneumothorax', 'Pleural Effusion', 'Pleural Other', 'Fracture', 'Support Devices', - 'Infiltration', 'Mass', 'Nodule', 'Emphysema', 'Fibrosis', 'Pleural Thickening', 'Hernia', - 'Custom'], - default=['Enlarged Cardiomediastinum', 'Cardiomegaly', 'Lung Opacity', 'Lung Lesion', 'Edema', 'Consolidation', 'Pneumonia', - 'Atelectasis', 'Pneumothorax', 'Pleural Effusion', 'Pleural Other', 'Fracture', 'Support Devices'], - label='Select to use predefined disease descriptors. 
Select "Custom" to define your own observations.'), - gr.inputs.Textbox(lines=2, placeholder="Name of pathology for which you want to define custom observations", label='Pathology:'), - gr.inputs.Textbox(lines=2, placeholder="Add your custom (positive) observations separated by a new line" - "\n Note: Each descriptor will automatically be embedded into our prompt format: There is/are (no) indicating " - "\n Example:\n\n Opacity\nPleural Effusion\nConsolidation" - , label='Custom Observations:')], - article=article, - description=description, - outputs=gr.outputs.Image(type="filepath") -) - -# Launch the interface -iface.launch() diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_templates/layout.html b/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_templates/layout.html deleted file mode 100644 index 7280406960f90844f60619e1d1ebc5ee7562a046..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/openvqa/docs/_templates/layout.html +++ /dev/null @@ -1,35 +0,0 @@ -{% extends "!layout.html" %} - - -{% block menu %} - -{{ super() }} -{% endblock %} - -{% block footer %} -{{ super() }} - - - - - - - -{% endblock %} \ No newline at end of file diff --git a/spaces/CVPR/LIVE/cuda_utils.h b/spaces/CVPR/LIVE/cuda_utils.h deleted file mode 100644 index 1e4609babc129a27397df72879bd6c8f55e71d1a..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/cuda_utils.h +++ /dev/null @@ -1,53 +0,0 @@ -#pragma once - -#ifdef __CUDACC__ - #include - #include -#endif -#include -#include -#include - -#ifdef __CUDACC__ -#define checkCuda(x) do { if((x)!=cudaSuccess) { \ - printf("CUDA Runtime Error: %s at %s:%d\n",\ - cudaGetErrorString(x),__FILE__,__LINE__);\ - exit(1);}} while(0) -#endif - -template -DEVICE -inline T infinity() { -#ifdef __CUDA_ARCH__ - const unsigned long long ieee754inf = 0x7ff0000000000000; - return __longlong_as_double(ieee754inf); -#else - return std::numeric_limits::infinity(); -#endif -} - -template <> -DEVICE -inline double infinity() { -#ifdef __CUDA_ARCH__ - return __longlong_as_double(0x7ff0000000000000ULL); -#else - return std::numeric_limits::infinity(); -#endif -} - -template <> -DEVICE -inline float infinity() { -#ifdef __CUDA_ARCH__ - return __int_as_float(0x7f800000); -#else - return std::numeric_limits::infinity(); -#endif -} - -inline void cuda_synchronize() { -#ifdef __CUDACC__ - checkCuda(cudaDeviceSynchronize()); -#endif -} diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_eigen.cpp b/spaces/CVPR/LIVE/pybind11/tests/test_eigen.cpp deleted file mode 100644 index 56aa1a4a6fe6b60a1d85c54cd40ee70ddde3528f..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/tests/test_eigen.cpp +++ /dev/null @@ -1,327 +0,0 @@ -/* - tests/eigen.cpp -- automatic conversion of Eigen types - - Copyright (c) 2016 Wenzel Jakob - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. -*/ - -#include "pybind11_tests.h" -#include "constructor_stats.h" -#include -#include - -#if defined(_MSC_VER) -# pragma warning(disable: 4996) // C4996: std::unary_negation is deprecated -#endif - -#include - -using MatrixXdR = Eigen::Matrix; - - - -// Sets/resets a testing reference matrix to have values of 10*r + c, where r and c are the -// (1-based) row/column number. 
-template void reset_ref(M &x) { - for (int i = 0; i < x.rows(); i++) for (int j = 0; j < x.cols(); j++) - x(i, j) = 11 + 10*i + j; -} - -// Returns a static, column-major matrix -Eigen::MatrixXd &get_cm() { - static Eigen::MatrixXd *x; - if (!x) { - x = new Eigen::MatrixXd(3, 3); - reset_ref(*x); - } - return *x; -} -// Likewise, but row-major -MatrixXdR &get_rm() { - static MatrixXdR *x; - if (!x) { - x = new MatrixXdR(3, 3); - reset_ref(*x); - } - return *x; -} -// Resets the values of the static matrices returned by get_cm()/get_rm() -void reset_refs() { - reset_ref(get_cm()); - reset_ref(get_rm()); -} - -// Returns element 2,1 from a matrix (used to test copy/nocopy) -double get_elem(Eigen::Ref m) { return m(2, 1); }; - - -// Returns a matrix with 10*r + 100*c added to each matrix element (to help test that the matrix -// reference is referencing rows/columns correctly). -template Eigen::MatrixXd adjust_matrix(MatrixArgType m) { - Eigen::MatrixXd ret(m); - for (int c = 0; c < m.cols(); c++) for (int r = 0; r < m.rows(); r++) - ret(r, c) += 10*r + 100*c; - return ret; -} - -struct CustomOperatorNew { - CustomOperatorNew() = default; - - Eigen::Matrix4d a = Eigen::Matrix4d::Zero(); - Eigen::Matrix4d b = Eigen::Matrix4d::Identity(); - - EIGEN_MAKE_ALIGNED_OPERATOR_NEW; -}; - -TEST_SUBMODULE(eigen, m) { - using FixedMatrixR = Eigen::Matrix; - using FixedMatrixC = Eigen::Matrix; - using DenseMatrixR = Eigen::Matrix; - using DenseMatrixC = Eigen::Matrix; - using FourRowMatrixC = Eigen::Matrix; - using FourColMatrixC = Eigen::Matrix; - using FourRowMatrixR = Eigen::Matrix; - using FourColMatrixR = Eigen::Matrix; - using SparseMatrixR = Eigen::SparseMatrix; - using SparseMatrixC = Eigen::SparseMatrix; - - // various tests - m.def("double_col", [](const Eigen::VectorXf &x) -> Eigen::VectorXf { return 2.0f * x; }); - m.def("double_row", [](const Eigen::RowVectorXf &x) -> Eigen::RowVectorXf { return 2.0f * x; }); - m.def("double_complex", [](const Eigen::VectorXcf &x) -> Eigen::VectorXcf { return 2.0f * x; }); - m.def("double_threec", [](py::EigenDRef x) { x *= 2; }); - m.def("double_threer", [](py::EigenDRef x) { x *= 2; }); - m.def("double_mat_cm", [](Eigen::MatrixXf x) -> Eigen::MatrixXf { return 2.0f * x; }); - m.def("double_mat_rm", [](DenseMatrixR x) -> DenseMatrixR { return 2.0f * x; }); - - // test_eigen_ref_to_python - // Different ways of passing via Eigen::Ref; the first and second are the Eigen-recommended - m.def("cholesky1", [](Eigen::Ref x) -> Eigen::MatrixXd { return x.llt().matrixL(); }); - m.def("cholesky2", [](const Eigen::Ref &x) -> Eigen::MatrixXd { return x.llt().matrixL(); }); - m.def("cholesky3", [](const Eigen::Ref &x) -> Eigen::MatrixXd { return x.llt().matrixL(); }); - m.def("cholesky4", [](Eigen::Ref x) -> Eigen::MatrixXd { return x.llt().matrixL(); }); - - // test_eigen_ref_mutators - // Mutators: these add some value to the given element using Eigen, but Eigen should be mapping into - // the numpy array data and so the result should show up there. There are three versions: one that - // works on a contiguous-row matrix (numpy's default), one for a contiguous-column matrix, and one - // for any matrix. 
- auto add_rm = [](Eigen::Ref x, int r, int c, double v) { x(r,c) += v; }; - auto add_cm = [](Eigen::Ref x, int r, int c, double v) { x(r,c) += v; }; - - // Mutators (Eigen maps into numpy variables): - m.def("add_rm", add_rm); // Only takes row-contiguous - m.def("add_cm", add_cm); // Only takes column-contiguous - // Overloaded versions that will accept either row or column contiguous: - m.def("add1", add_rm); - m.def("add1", add_cm); - m.def("add2", add_cm); - m.def("add2", add_rm); - // This one accepts a matrix of any stride: - m.def("add_any", [](py::EigenDRef x, int r, int c, double v) { x(r,c) += v; }); - - // Return mutable references (numpy maps into eigen variables) - m.def("get_cm_ref", []() { return Eigen::Ref(get_cm()); }); - m.def("get_rm_ref", []() { return Eigen::Ref(get_rm()); }); - // The same references, but non-mutable (numpy maps into eigen variables, but is !writeable) - m.def("get_cm_const_ref", []() { return Eigen::Ref(get_cm()); }); - m.def("get_rm_const_ref", []() { return Eigen::Ref(get_rm()); }); - - m.def("reset_refs", reset_refs); // Restores get_{cm,rm}_ref to original values - - // Increments and returns ref to (same) matrix - m.def("incr_matrix", [](Eigen::Ref m, double v) { - m += Eigen::MatrixXd::Constant(m.rows(), m.cols(), v); - return m; - }, py::return_value_policy::reference); - - // Same, but accepts a matrix of any strides - m.def("incr_matrix_any", [](py::EigenDRef m, double v) { - m += Eigen::MatrixXd::Constant(m.rows(), m.cols(), v); - return m; - }, py::return_value_policy::reference); - - // Returns an eigen slice of even rows - m.def("even_rows", [](py::EigenDRef m) { - return py::EigenDMap( - m.data(), (m.rows() + 1) / 2, m.cols(), - py::EigenDStride(m.outerStride(), 2 * m.innerStride())); - }, py::return_value_policy::reference); - - // Returns an eigen slice of even columns - m.def("even_cols", [](py::EigenDRef m) { - return py::EigenDMap( - m.data(), m.rows(), (m.cols() + 1) / 2, - py::EigenDStride(2 * m.outerStride(), m.innerStride())); - }, py::return_value_policy::reference); - - // Returns diagonals: a vector-like object with an inner stride != 1 - m.def("diagonal", [](const Eigen::Ref &x) { return x.diagonal(); }); - m.def("diagonal_1", [](const Eigen::Ref &x) { return x.diagonal<1>(); }); - m.def("diagonal_n", [](const Eigen::Ref &x, int index) { return x.diagonal(index); }); - - // Return a block of a matrix (gives non-standard strides) - m.def("block", [](const Eigen::Ref &x, int start_row, int start_col, int block_rows, int block_cols) { - return x.block(start_row, start_col, block_rows, block_cols); - }); - - // test_eigen_return_references, test_eigen_keepalive - // return value referencing/copying tests: - class ReturnTester { - Eigen::MatrixXd mat = create(); - public: - ReturnTester() { print_created(this); } - ~ReturnTester() { print_destroyed(this); } - static Eigen::MatrixXd create() { return Eigen::MatrixXd::Ones(10, 10); } - static const Eigen::MatrixXd createConst() { return Eigen::MatrixXd::Ones(10, 10); } - Eigen::MatrixXd &get() { return mat; } - Eigen::MatrixXd *getPtr() { return &mat; } - const Eigen::MatrixXd &view() { return mat; } - const Eigen::MatrixXd *viewPtr() { return &mat; } - Eigen::Ref ref() { return mat; } - Eigen::Ref refConst() { return mat; } - Eigen::Block block(int r, int c, int nrow, int ncol) { return mat.block(r, c, nrow, ncol); } - Eigen::Block blockConst(int r, int c, int nrow, int ncol) const { return mat.block(r, c, nrow, ncol); } - py::EigenDMap corners() { return 
py::EigenDMap(mat.data(), - py::EigenDStride(mat.outerStride() * (mat.outerSize()-1), mat.innerStride() * (mat.innerSize()-1))); } - py::EigenDMap cornersConst() const { return py::EigenDMap(mat.data(), - py::EigenDStride(mat.outerStride() * (mat.outerSize()-1), mat.innerStride() * (mat.innerSize()-1))); } - }; - using rvp = py::return_value_policy; - py::class_(m, "ReturnTester") - .def(py::init<>()) - .def_static("create", &ReturnTester::create) - .def_static("create_const", &ReturnTester::createConst) - .def("get", &ReturnTester::get, rvp::reference_internal) - .def("get_ptr", &ReturnTester::getPtr, rvp::reference_internal) - .def("view", &ReturnTester::view, rvp::reference_internal) - .def("view_ptr", &ReturnTester::view, rvp::reference_internal) - .def("copy_get", &ReturnTester::get) // Default rvp: copy - .def("copy_view", &ReturnTester::view) // " - .def("ref", &ReturnTester::ref) // Default for Ref is to reference - .def("ref_const", &ReturnTester::refConst) // Likewise, but const - .def("ref_safe", &ReturnTester::ref, rvp::reference_internal) - .def("ref_const_safe", &ReturnTester::refConst, rvp::reference_internal) - .def("copy_ref", &ReturnTester::ref, rvp::copy) - .def("copy_ref_const", &ReturnTester::refConst, rvp::copy) - .def("block", &ReturnTester::block) - .def("block_safe", &ReturnTester::block, rvp::reference_internal) - .def("block_const", &ReturnTester::blockConst, rvp::reference_internal) - .def("copy_block", &ReturnTester::block, rvp::copy) - .def("corners", &ReturnTester::corners, rvp::reference_internal) - .def("corners_const", &ReturnTester::cornersConst, rvp::reference_internal) - ; - - // test_special_matrix_objects - // Returns a DiagonalMatrix with diagonal (1,2,3,...) - m.def("incr_diag", [](int k) { - Eigen::DiagonalMatrix m(k); - for (int i = 0; i < k; i++) m.diagonal()[i] = i+1; - return m; - }); - - // Returns a SelfAdjointView referencing the lower triangle of m - m.def("symmetric_lower", [](const Eigen::MatrixXi &m) { - return m.selfadjointView(); - }); - // Returns a SelfAdjointView referencing the lower triangle of m - m.def("symmetric_upper", [](const Eigen::MatrixXi &m) { - return m.selfadjointView(); - }); - - // Test matrix for various functions below. 
- Eigen::MatrixXf mat(5, 6); - mat << 0, 3, 0, 0, 0, 11, - 22, 0, 0, 0, 17, 11, - 7, 5, 0, 1, 0, 11, - 0, 0, 0, 0, 0, 11, - 0, 0, 14, 0, 8, 11; - - // test_fixed, and various other tests - m.def("fixed_r", [mat]() -> FixedMatrixR { return FixedMatrixR(mat); }); - m.def("fixed_r_const", [mat]() -> const FixedMatrixR { return FixedMatrixR(mat); }); - m.def("fixed_c", [mat]() -> FixedMatrixC { return FixedMatrixC(mat); }); - m.def("fixed_copy_r", [](const FixedMatrixR &m) -> FixedMatrixR { return m; }); - m.def("fixed_copy_c", [](const FixedMatrixC &m) -> FixedMatrixC { return m; }); - // test_mutator_descriptors - m.def("fixed_mutator_r", [](Eigen::Ref) {}); - m.def("fixed_mutator_c", [](Eigen::Ref) {}); - m.def("fixed_mutator_a", [](py::EigenDRef) {}); - // test_dense - m.def("dense_r", [mat]() -> DenseMatrixR { return DenseMatrixR(mat); }); - m.def("dense_c", [mat]() -> DenseMatrixC { return DenseMatrixC(mat); }); - m.def("dense_copy_r", [](const DenseMatrixR &m) -> DenseMatrixR { return m; }); - m.def("dense_copy_c", [](const DenseMatrixC &m) -> DenseMatrixC { return m; }); - // test_sparse, test_sparse_signature - m.def("sparse_r", [mat]() -> SparseMatrixR { return Eigen::SparseView(mat); }); - m.def("sparse_c", [mat]() -> SparseMatrixC { return Eigen::SparseView(mat); }); - m.def("sparse_copy_r", [](const SparseMatrixR &m) -> SparseMatrixR { return m; }); - m.def("sparse_copy_c", [](const SparseMatrixC &m) -> SparseMatrixC { return m; }); - // test_partially_fixed - m.def("partial_copy_four_rm_r", [](const FourRowMatrixR &m) -> FourRowMatrixR { return m; }); - m.def("partial_copy_four_rm_c", [](const FourColMatrixR &m) -> FourColMatrixR { return m; }); - m.def("partial_copy_four_cm_r", [](const FourRowMatrixC &m) -> FourRowMatrixC { return m; }); - m.def("partial_copy_four_cm_c", [](const FourColMatrixC &m) -> FourColMatrixC { return m; }); - - // test_cpp_casting - // Test that we can cast a numpy object to a Eigen::MatrixXd explicitly - m.def("cpp_copy", [](py::handle m) { return m.cast()(1, 0); }); - m.def("cpp_ref_c", [](py::handle m) { return m.cast>()(1, 0); }); - m.def("cpp_ref_r", [](py::handle m) { return m.cast>()(1, 0); }); - m.def("cpp_ref_any", [](py::handle m) { return m.cast>()(1, 0); }); - - - // test_nocopy_wrapper - // Test that we can prevent copying into an argument that would normally copy: First a version - // that would allow copying (if types or strides don't match) for comparison: - m.def("get_elem", &get_elem); - // Now this alternative that calls the tells pybind to fail rather than copy: - m.def("get_elem_nocopy", [](Eigen::Ref m) -> double { return get_elem(m); }, - py::arg().noconvert()); - // Also test a row-major-only no-copy const ref: - m.def("get_elem_rm_nocopy", [](Eigen::Ref> &m) -> long { return m(2, 1); }, - py::arg().noconvert()); - - // test_issue738 - // Issue #738: 1xN or Nx1 2D matrices were neither accepted nor properly copied with an - // incompatible stride value on the length-1 dimension--but that should be allowed (without - // requiring a copy!) because the stride value can be safely ignored on a size-1 dimension. - m.def("iss738_f1", &adjust_matrix &>, py::arg().noconvert()); - m.def("iss738_f2", &adjust_matrix> &>, py::arg().noconvert()); - - // test_issue1105 - // Issue #1105: when converting from a numpy two-dimensional (Nx1) or (1xN) value into a dense - // eigen Vector or RowVector, the argument would fail to load because the numpy copy would fail: - // numpy won't broadcast a Nx1 into a 1-dimensional vector. 
- m.def("iss1105_col", [](Eigen::VectorXd) { return true; }); - m.def("iss1105_row", [](Eigen::RowVectorXd) { return true; }); - - // test_named_arguments - // Make sure named arguments are working properly: - m.def("matrix_multiply", [](const py::EigenDRef A, const py::EigenDRef B) - -> Eigen::MatrixXd { - if (A.cols() != B.rows()) throw std::domain_error("Nonconformable matrices!"); - return A * B; - }, py::arg("A"), py::arg("B")); - - // test_custom_operator_new - py::class_(m, "CustomOperatorNew") - .def(py::init<>()) - .def_readonly("a", &CustomOperatorNew::a) - .def_readonly("b", &CustomOperatorNew::b); - - // test_eigen_ref_life_support - // In case of a failure (the caster's temp array does not live long enough), creating - // a new array (np.ones(10)) increases the chances that the temp array will be garbage - // collected and/or that its memory will be overridden with different values. - m.def("get_elem_direct", [](Eigen::Ref v) { - py::module::import("numpy").attr("ones")(10); - return v(5); - }); - m.def("get_elem_indirect", [](std::vector> v) { - py::module::import("numpy").attr("ones")(10); - return v[0](5); - }); -} diff --git a/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/iterator_traversal_tags.h b/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/iterator_traversal_tags.h deleted file mode 100644 index 73cd1f76af298ab1e88aad2c91c9266be77d793f..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/iterator_traversal_tags.h +++ /dev/null @@ -1,41 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -namespace thrust -{ - -// define Boost's traversal tags -struct no_traversal_tag {}; - -struct incrementable_traversal_tag - : no_traversal_tag {}; - -struct single_pass_traversal_tag - : incrementable_traversal_tag {}; - -struct forward_traversal_tag - : single_pass_traversal_tag {}; - -struct bidirectional_traversal_tag - : forward_traversal_tag {}; - -struct random_access_traversal_tag - : bidirectional_traversal_tag {}; - -} // end thrust - diff --git a/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/reverse_iterator_base.h b/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/reverse_iterator_base.h deleted file mode 100644 index 68fa1f2f818a456bc53f7cb81aaa425a63e475ff..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/iterator/detail/reverse_iterator_base.h +++ /dev/null @@ -1,42 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
- * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include -#include - -namespace thrust -{ - -template class reverse_iterator; - -namespace detail -{ - -template - struct reverse_iterator_base -{ - typedef thrust::iterator_adaptor< - thrust::reverse_iterator, - BidirectionalIterator - > type; -}; // end reverse_iterator_base - -} // end detail - -} // end thrust - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/pointer.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/pointer.h deleted file mode 100644 index f198385ce23fe6c391cb999e39c769f789f4729b..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/pointer.h +++ /dev/null @@ -1,321 +0,0 @@ -/* - * Copyright 2008-2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in ccudaliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include -#include -#include -#include -#include - -namespace thrust -{ -namespace cuda_cub -{ - -template -class pointer; - -} // end cuda_cub -} // end thrust - - -// specialize thrust::iterator_traits to avoid problems with the name of -// pointer's constructor shadowing its nested pointer type -// do this before pointer is defined so the specialization is correctly -// used inside the definition -namespace thrust -{ - -template -struct iterator_traits > -{ -private: - typedef thrust::cuda_cub::pointer ptr; - -public: - typedef typename ptr::iterator_category iterator_category; - typedef typename ptr::value_type value_type; - typedef typename ptr::difference_type difference_type; - typedef ptr pointer; - typedef typename ptr::reference reference; -}; // end iterator_traits - -namespace cuda_cub { - -// forward declaration of reference for pointer -template -class reference; - -// XXX nvcc + msvc have trouble instantiating reference below -// this is a workaround -template -struct reference_msvc_workaround -{ - typedef thrust::cuda_cub::reference type; -}; // end reference_msvc_workaround - - -/*! \p pointer stores a pointer to an object allocated in memory available to the cuda system. - * This type provides type safety when dispatching standard algorithms on ranges resident - * in cuda memory. - * - * \p pointer has pointer semantics: it may be dereferenced and manipulated with pointer arithmetic. - * - * \p pointer can be created with the function \p cuda::malloc, or by explicitly calling its constructor - * with a raw pointer. - * - * The raw pointer encapsulated by a \p pointer may be obtained by eiter its get member function - * or the \p raw_pointer_cast function. - * - * \note \p pointer is not a "smart" pointer; it is the programmer's responsibility to deallocate memory - * pointed to by \p pointer. - * - * \tparam T specifies the type of the pointee. 
- * - * \see cuda::malloc - * \see cuda::free - * \see raw_pointer_cast - */ -template -class pointer - : public thrust::pointer< - T, - thrust::cuda_cub::tag, - thrust::cuda_cub::reference, - thrust::cuda_cub::pointer > -{ - -private: - typedef thrust::pointer< - T, - thrust::cuda_cub::tag, - typename reference_msvc_workaround::type, - thrust::cuda_cub::pointer > - super_t; - -public: - /*! \p pointer's no-argument constructor initializes its encapsulated pointer to \c 0. - */ - __host__ __device__ - pointer() : super_t() {} - - #if THRUST_CPP_DIALECT >= 2011 - // NOTE: This is needed so that Thrust smart pointers can be used in - // `std::unique_ptr`. - __host__ __device__ - pointer(decltype(nullptr)) : super_t(nullptr) {} - #endif - - /*! This constructor allows construction of a pointer from a T*. - * - * \param ptr A raw pointer to copy from, presumed to point to a location in memory - * accessible by the \p cuda system. - * \tparam OtherT \p OtherT shall be convertible to \p T. - */ - template - __host__ __device__ explicit pointer(OtherT *ptr) : super_t(ptr) - { - } - - /*! This constructor allows construction from another pointer-like object with related type. - * - * \param other The \p OtherPointer to copy. - * \tparam OtherPointer The system tag associated with \p OtherPointer shall be convertible - * to \p thrust::system::cuda::tag and its element type shall be convertible to \p T. - */ - template - __host__ __device__ - pointer(const OtherPointer &other, - typename thrust::detail::enable_if_pointer_is_convertible< - OtherPointer, - pointer>::type * = 0) : super_t(other) - { - } - - /*! This constructor allows construction from another pointer-like object with \p void type. - * - * \param other The \p OtherPointer to copy. - * \tparam OtherPointer The system tag associated with \p OtherPointer shall be convertible - * to \p thrust::system::cuda::tag and its element type shall be \p void. - */ - template - __host__ __device__ - explicit - pointer(const OtherPointer &other, - typename thrust::detail::enable_if_void_pointer_is_system_convertible< - OtherPointer, - pointer>::type * = 0) : super_t(other) - { - } - - /*! Assignment operator allows assigning from another pointer-like object with related type. - * - * \param other The other pointer-like object to assign from. - * \tparam OtherPointer The system tag associated with \p OtherPointer shall be convertible - * to \p thrust::system::cuda::tag and its element type shall be convertible to \p T. - */ - template - __host__ __device__ - typename thrust::detail::enable_if_pointer_is_convertible< - OtherPointer, - pointer, - pointer &>::type - operator=(const OtherPointer &other) - { - return super_t::operator=(other); - } - - #if THRUST_CPP_DIALECT >= 2011 - // NOTE: This is needed so that Thrust smart pointers can be used in - // `std::unique_ptr`. - __host__ __device__ - pointer& operator=(decltype(nullptr)) - { - super_t::operator=(nullptr); - return *this; - } - #endif -}; // struct pointer - -/*! \p reference is a wrapped reference to an object stored in memory available to the \p cuda system. - * \p reference is the type of the result of dereferencing a \p cuda::pointer. - * - * \tparam T Specifies the type of the referenced object. - */ -template -class reference - : public thrust::reference< - T, - thrust::cuda_cub::pointer, - thrust::cuda_cub::reference > -{ - -private: - typedef thrust::reference< - T, - thrust::cuda_cub::pointer, - thrust::cuda_cub::reference > - super_t; - -public: - /*! 
\cond - */ - - typedef typename super_t::value_type value_type; - typedef typename super_t::pointer pointer; - - /*! \endcond - */ - - /*! This constructor initializes this \p reference to refer to an object - * pointed to by the given \p pointer. After this \p reference is constructed, - * it shall refer to the object pointed to by \p ptr. - * - * \param ptr A \p pointer to copy from. - */ - __host__ __device__ explicit reference(const pointer &ptr) - : super_t(ptr) - { - } - - /*! This constructor accepts a const reference to another \p reference of related type. - * After this \p reference is constructed, it shall refer to the same object as \p other. - * - * \param other A \p reference to copy from. - * \tparam OtherT The element type of the other \p reference. - * - * \note This constructor is templated primarily to allow initialization of reference - * from reference. - */ - template - __host__ __device__ - reference(const reference &other, - typename thrust::detail::enable_if_convertible< - typename reference::pointer, - pointer>::type * = 0) - : super_t(other) - { - } - - /*! Copy assignment operator copy assigns from another \p reference of related type. - * - * \param other The other \p reference to assign from. - * \return *this - * \tparam OtherT The element type of the other \p reference. - */ - template - __host__ __device__ - reference & - operator=(const reference &other); - - /*! Assignment operator assigns from a \p value_type. - * - * \param x The \p value_type to assign from. - * \return *this - */ - __host__ __device__ - reference & - operator=(const value_type &x); -}; // struct reference - -/*! Exchanges the values of two objects referred to by \p reference. - * \p x The first \p reference of interest. - * \p y The second \p reference of interest. - */ -template -__host__ __device__ void swap(reference x, reference y); - -} // end cuda_cub - -namespace system { - - -/*! \addtogroup system_backends Systems - * \ingroup system - * \{ - */ - -/*! \namespace thrust::system::cuda - * \brief \p thrust::system::cuda is the namespace containing functionality for allocating, manipulating, - * and deallocating memory available to Thrust's CUDA backend system. - * The identifiers are provided in a separate namespace underneath thrust::system - * for import convenience but are also aliased in the top-level thrust::cuda - * namespace for easy access. - * - */ - -namespace cuda { -using thrust::cuda_cub::pointer; -using thrust::cuda_cub::reference; -} // end cuda - -/*! \} - */ - -} // end system - -/*! \namespace thrust::cuda - * \brief \p thrust::cuda is a top-level alias for \p thrust::system::cuda. */ -namespace cuda { -using thrust::cuda_cub::pointer; -using thrust::cuda_cub::reference; -} // end cuda - -} // end thrust - -#include diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/generate.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/generate.h deleted file mode 100644 index ac38be51617dc0cd61008035bc3e64a7544ac0c1..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/generate.h +++ /dev/null @@ -1,22 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. 
- * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system has no special generate functions - diff --git a/spaces/CVPR/WALT/mmdet/datasets/pipelines/transforms.py b/spaces/CVPR/WALT/mmdet/datasets/pipelines/transforms.py deleted file mode 100644 index 5166fc09bd16ab7f4a5b59485fe7976bfd2dfdd2..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/datasets/pipelines/transforms.py +++ /dev/null @@ -1,1812 +0,0 @@ -import copy -import inspect - -import mmcv -import numpy as np -from numpy import random - -from mmdet.core import PolygonMasks -from mmdet.core.evaluation.bbox_overlaps import bbox_overlaps -from ..builder import PIPELINES - -try: - from imagecorruptions import corrupt -except ImportError: - corrupt = None - -try: - import albumentations - from albumentations import Compose -except ImportError: - albumentations = None - Compose = None - - -@PIPELINES.register_module() -class Resize(object): - """Resize images & bbox & mask. - - This transform resizes the input image to some scale. Bboxes and masks are - then resized with the same scale factor. If the input dict contains the key - "scale", then the scale in the input dict is used, otherwise the specified - scale in the init method is used. If the input dict contains the key - "scale_factor" (if MultiScaleFlipAug does not give img_scale but - scale_factor), the actual scale will be computed by image shape and - scale_factor. - - `img_scale` can either be a tuple (single-scale) or a list of tuple - (multi-scale). There are 3 multiscale modes: - - - ``ratio_range is not None``: randomly sample a ratio from the ratio \ - range and multiply it with the image scale. - - ``ratio_range is None`` and ``multiscale_mode == "range"``: randomly \ - sample a scale from the multiscale range. - - ``ratio_range is None`` and ``multiscale_mode == "value"``: randomly \ - sample a scale from multiple scales. - - Args: - img_scale (tuple or list[tuple]): Images scales for resizing. - multiscale_mode (str): Either "range" or "value". - ratio_range (tuple[float]): (min_ratio, max_ratio) - keep_ratio (bool): Whether to keep the aspect ratio when resizing the - image. - bbox_clip_border (bool, optional): Whether clip the objects outside - the border of the image. Defaults to True. - backend (str): Image resize backend, choices are 'cv2' and 'pillow'. - These two backends generates slightly different results. Defaults - to 'cv2'. - override (bool, optional): Whether to override `scale` and - `scale_factor` so as to call resize twice. Default False. If True, - after the first resizing, the existed `scale` and `scale_factor` - will be ignored so the second resizing can be allowed. - This option is a work-around for multiple times of resize in DETR. - Defaults to False. 
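    Example:
        A minimal usage sketch (hypothetical values; the results dict here
        only carries an image, while a real pipeline adds bbox/mask/seg keys):

        >>> import numpy as np
        >>> # single scale, keep the aspect ratio
        >>> resize = Resize(img_scale=(400, 300), keep_ratio=True)
        >>> results = dict(img=np.zeros((600, 800, 3), dtype=np.uint8))
        >>> results = resize(results)
        >>> results['img'].shape
        (300, 400, 3)
        >>> # multi-scale: sample the short edge between 640 and 800
        >>> resize = Resize(img_scale=[(1333, 640), (1333, 800)],
        ...                 multiscale_mode='range')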
- """ - - def __init__(self, - img_scale=None, - multiscale_mode='range', - ratio_range=None, - keep_ratio=True, - bbox_clip_border=True, - backend='cv2', - override=False): - if img_scale is None: - self.img_scale = None - else: - if isinstance(img_scale, list): - self.img_scale = img_scale - else: - self.img_scale = [img_scale] - assert mmcv.is_list_of(self.img_scale, tuple) - - if ratio_range is not None: - # mode 1: given a scale and a range of image ratio - assert len(self.img_scale) == 1 - else: - # mode 2: given multiple scales or a range of scales - assert multiscale_mode in ['value', 'range'] - - self.backend = backend - self.multiscale_mode = multiscale_mode - self.ratio_range = ratio_range - self.keep_ratio = keep_ratio - # TODO: refactor the override option in Resize - self.override = override - self.bbox_clip_border = bbox_clip_border - - @staticmethod - def random_select(img_scales): - """Randomly select an img_scale from given candidates. - - Args: - img_scales (list[tuple]): Images scales for selection. - - Returns: - (tuple, int): Returns a tuple ``(img_scale, scale_dix)``, \ - where ``img_scale`` is the selected image scale and \ - ``scale_idx`` is the selected index in the given candidates. - """ - - assert mmcv.is_list_of(img_scales, tuple) - scale_idx = np.random.randint(len(img_scales)) - img_scale = img_scales[scale_idx] - return img_scale, scale_idx - - @staticmethod - def random_sample(img_scales): - """Randomly sample an img_scale when ``multiscale_mode=='range'``. - - Args: - img_scales (list[tuple]): Images scale range for sampling. - There must be two tuples in img_scales, which specify the lower - and upper bound of image scales. - - Returns: - (tuple, None): Returns a tuple ``(img_scale, None)``, where \ - ``img_scale`` is sampled scale and None is just a placeholder \ - to be consistent with :func:`random_select`. - """ - - assert mmcv.is_list_of(img_scales, tuple) and len(img_scales) == 2 - img_scale_long = [max(s) for s in img_scales] - img_scale_short = [min(s) for s in img_scales] - long_edge = np.random.randint( - min(img_scale_long), - max(img_scale_long) + 1) - short_edge = np.random.randint( - min(img_scale_short), - max(img_scale_short) + 1) - img_scale = (long_edge, short_edge) - return img_scale, None - - @staticmethod - def random_sample_ratio(img_scale, ratio_range): - """Randomly sample an img_scale when ``ratio_range`` is specified. - - A ratio will be randomly sampled from the range specified by - ``ratio_range``. Then it would be multiplied with ``img_scale`` to - generate sampled scale. - - Args: - img_scale (tuple): Images scale base to multiply with ratio. - ratio_range (tuple[float]): The minimum and maximum ratio to scale - the ``img_scale``. - - Returns: - (tuple, None): Returns a tuple ``(scale, None)``, where \ - ``scale`` is sampled ratio multiplied with ``img_scale`` and \ - None is just a placeholder to be consistent with \ - :func:`random_select`. - """ - - assert isinstance(img_scale, tuple) and len(img_scale) == 2 - min_ratio, max_ratio = ratio_range - assert min_ratio <= max_ratio - ratio = np.random.random_sample() * (max_ratio - min_ratio) + min_ratio - scale = int(img_scale[0] * ratio), int(img_scale[1] * ratio) - return scale, None - - def _random_scale(self, results): - """Randomly sample an img_scale according to ``ratio_range`` and - ``multiscale_mode``. - - If ``ratio_range`` is specified, a ratio will be sampled and be - multiplied with ``img_scale``. 
- If multiple scales are specified by ``img_scale``, a scale will be - sampled according to ``multiscale_mode``. - Otherwise, single scale will be used. - - Args: - results (dict): Result dict from :obj:`dataset`. - - Returns: - dict: Two new keys 'scale` and 'scale_idx` are added into \ - ``results``, which would be used by subsequent pipelines. - """ - - if self.ratio_range is not None: - scale, scale_idx = self.random_sample_ratio( - self.img_scale[0], self.ratio_range) - elif len(self.img_scale) == 1: - scale, scale_idx = self.img_scale[0], 0 - elif self.multiscale_mode == 'range': - scale, scale_idx = self.random_sample(self.img_scale) - elif self.multiscale_mode == 'value': - scale, scale_idx = self.random_select(self.img_scale) - else: - raise NotImplementedError - - results['scale'] = scale - results['scale_idx'] = scale_idx - - def _resize_img(self, results): - """Resize images with ``results['scale']``.""" - for key in results.get('img_fields', ['img']): - if self.keep_ratio: - img, scale_factor = mmcv.imrescale( - results[key], - results['scale'], - return_scale=True, - backend=self.backend) - # the w_scale and h_scale has minor difference - # a real fix should be done in the mmcv.imrescale in the future - new_h, new_w = img.shape[:2] - h, w = results[key].shape[:2] - w_scale = new_w / w - h_scale = new_h / h - else: - img, w_scale, h_scale = mmcv.imresize( - results[key], - results['scale'], - return_scale=True, - backend=self.backend) - results[key] = img - - scale_factor = np.array([w_scale, h_scale, w_scale, h_scale], - dtype=np.float32) - results['img_shape'] = img.shape - # in case that there is no padding - results['pad_shape'] = img.shape - results['scale_factor'] = scale_factor - results['keep_ratio'] = self.keep_ratio - - def _resize_bboxes(self, results): - """Resize bounding boxes with ``results['scale_factor']``.""" - for key in results.get('bbox_fields', []): - bboxes = results[key] * results['scale_factor'] - if self.bbox_clip_border: - img_shape = results['img_shape'] - bboxes[:, 0::2] = np.clip(bboxes[:, 0::2], 0, img_shape[1]) - bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0]) - results[key] = bboxes - - def _resize_masks(self, results): - """Resize masks with ``results['scale']``""" - for key in results.get('mask_fields', []): - if results[key] is None: - continue - if self.keep_ratio: - results[key] = results[key].rescale(results['scale']) - else: - results[key] = results[key].resize(results['img_shape'][:2]) - - def _resize_seg(self, results): - """Resize semantic segmentation map with ``results['scale']``.""" - for key in results.get('seg_fields', []): - if self.keep_ratio: - gt_seg = mmcv.imrescale( - results[key], - results['scale'], - interpolation='nearest', - backend=self.backend) - else: - gt_seg = mmcv.imresize( - results[key], - results['scale'], - interpolation='nearest', - backend=self.backend) - results['gt_semantic_seg'] = gt_seg - - def __call__(self, results): - """Call function to resize images, bounding boxes, masks, semantic - segmentation map. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Resized results, 'img_shape', 'pad_shape', 'scale_factor', \ - 'keep_ratio' keys are added into result dict. 
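        Example:
            A sketch of the ``scale_factor`` path (assumes no earlier
            transform has already set 'scale'; shapes are illustrative):

            >>> import numpy as np
            >>> results = dict(img=np.zeros((100, 200, 3), dtype=np.uint8),
            ...                scale_factor=2.0)
            >>> results = Resize(keep_ratio=False)(results)
            >>> results['img'].shape
            (200, 400, 3)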
- """ - - if 'scale' not in results: - if 'scale_factor' in results: - img_shape = results['img'].shape[:2] - scale_factor = results['scale_factor'] - assert isinstance(scale_factor, float) - results['scale'] = tuple( - [int(x * scale_factor) for x in img_shape][::-1]) - else: - self._random_scale(results) - else: - if not self.override: - assert 'scale_factor' not in results, ( - 'scale and scale_factor cannot be both set.') - else: - results.pop('scale') - if 'scale_factor' in results: - results.pop('scale_factor') - self._random_scale(results) - - self._resize_img(results) - self._resize_bboxes(results) - self._resize_masks(results) - self._resize_seg(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(img_scale={self.img_scale}, ' - repr_str += f'multiscale_mode={self.multiscale_mode}, ' - repr_str += f'ratio_range={self.ratio_range}, ' - repr_str += f'keep_ratio={self.keep_ratio}, ' - repr_str += f'bbox_clip_border={self.bbox_clip_border})' - return repr_str - - -@PIPELINES.register_module() -class RandomFlip(object): - """Flip the image & bbox & mask. - - If the input dict contains the key "flip", then the flag will be used, - otherwise it will be randomly decided by a ratio specified in the init - method. - - When random flip is enabled, ``flip_ratio``/``direction`` can either be a - float/string or tuple of float/string. There are 3 flip modes: - - - ``flip_ratio`` is float, ``direction`` is string: the image will be - ``direction``ly flipped with probability of ``flip_ratio`` . - E.g., ``flip_ratio=0.5``, ``direction='horizontal'``, - then image will be horizontally flipped with probability of 0.5. - - ``flip_ratio`` is float, ``direction`` is list of string: the image wil - be ``direction[i]``ly flipped with probability of - ``flip_ratio/len(direction)``. - E.g., ``flip_ratio=0.5``, ``direction=['horizontal', 'vertical']``, - then image will be horizontally flipped with probability of 0.25, - vertically with probability of 0.25. - - ``flip_ratio`` is list of float, ``direction`` is list of string: - given ``len(flip_ratio) == len(direction)``, the image wil - be ``direction[i]``ly flipped with probability of ``flip_ratio[i]``. - E.g., ``flip_ratio=[0.3, 0.5]``, ``direction=['horizontal', - 'vertical']``, then image will be horizontally flipped with probability - of 0.3, vertically with probability of 0.5 - - Args: - flip_ratio (float | list[float], optional): The flipping probability. - Default: None. - direction(str | list[str], optional): The flipping direction. Options - are 'horizontal', 'vertical', 'diagonal'. Default: 'horizontal'. - If input is a list, the length must equal ``flip_ratio``. Each - element in ``flip_ratio`` indicates the flip probability of - corresponding direction. 
- """ - - def __init__(self, flip_ratio=None, direction='horizontal'): - if isinstance(flip_ratio, list): - assert mmcv.is_list_of(flip_ratio, float) - assert 0 <= sum(flip_ratio) <= 1 - elif isinstance(flip_ratio, float): - assert 0 <= flip_ratio <= 1 - elif flip_ratio is None: - pass - else: - raise ValueError('flip_ratios must be None, float, ' - 'or list of float') - self.flip_ratio = flip_ratio - - valid_directions = ['horizontal', 'vertical', 'diagonal'] - if isinstance(direction, str): - assert direction in valid_directions - elif isinstance(direction, list): - assert mmcv.is_list_of(direction, str) - assert set(direction).issubset(set(valid_directions)) - else: - raise ValueError('direction must be either str or list of str') - self.direction = direction - - if isinstance(flip_ratio, list): - assert len(self.flip_ratio) == len(self.direction) - - def bbox_flip(self, bboxes, img_shape, direction): - """Flip bboxes horizontally. - - Args: - bboxes (numpy.ndarray): Bounding boxes, shape (..., 4*k) - img_shape (tuple[int]): Image shape (height, width) - direction (str): Flip direction. Options are 'horizontal', - 'vertical'. - - Returns: - numpy.ndarray: Flipped bounding boxes. - """ - - assert bboxes.shape[-1] % 4 == 0 - flipped = bboxes.copy() - if direction == 'horizontal': - w = img_shape[1] - flipped[..., 0::4] = w - bboxes[..., 2::4] - flipped[..., 2::4] = w - bboxes[..., 0::4] - elif direction == 'vertical': - h = img_shape[0] - flipped[..., 1::4] = h - bboxes[..., 3::4] - flipped[..., 3::4] = h - bboxes[..., 1::4] - elif direction == 'diagonal': - w = img_shape[1] - h = img_shape[0] - flipped[..., 0::4] = w - bboxes[..., 2::4] - flipped[..., 1::4] = h - bboxes[..., 3::4] - flipped[..., 2::4] = w - bboxes[..., 0::4] - flipped[..., 3::4] = h - bboxes[..., 1::4] - else: - raise ValueError(f"Invalid flipping direction '{direction}'") - return flipped - - def __call__(self, results): - """Call function to flip bounding boxes, masks, semantic segmentation - maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Flipped results, 'flip', 'flip_direction' keys are added \ - into result dict. 
- """ - - if 'flip' not in results: - if isinstance(self.direction, list): - # None means non-flip - direction_list = self.direction + [None] - else: - # None means non-flip - direction_list = [self.direction, None] - - if isinstance(self.flip_ratio, list): - non_flip_ratio = 1 - sum(self.flip_ratio) - flip_ratio_list = self.flip_ratio + [non_flip_ratio] - else: - non_flip_ratio = 1 - self.flip_ratio - # exclude non-flip - single_ratio = self.flip_ratio / (len(direction_list) - 1) - flip_ratio_list = [single_ratio] * (len(direction_list) - - 1) + [non_flip_ratio] - - cur_dir = np.random.choice(direction_list, p=flip_ratio_list) - - results['flip'] = cur_dir is not None - if 'flip_direction' not in results: - results['flip_direction'] = cur_dir - if results['flip']: - # flip image - for key in results.get('img_fields', ['img']): - results[key] = mmcv.imflip( - results[key], direction=results['flip_direction']) - # flip bboxes - for key in results.get('bbox_fields', []): - results[key] = self.bbox_flip(results[key], - results['img_shape'], - results['flip_direction']) - # flip masks - for key in results.get('mask_fields', []): - results[key] = results[key].flip(results['flip_direction']) - - # flip segs - for key in results.get('seg_fields', []): - results[key] = mmcv.imflip( - results[key], direction=results['flip_direction']) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(flip_ratio={self.flip_ratio})' - - -@PIPELINES.register_module() -class Pad(object): - """Pad the image & mask. - - There are two padding modes: (1) pad to a fixed size and (2) pad to the - minimum size that is divisible by some number. - Added keys are "pad_shape", "pad_fixed_size", "pad_size_divisor", - - Args: - size (tuple, optional): Fixed padding size. - size_divisor (int, optional): The divisor of padded size. - pad_val (float, optional): Padding value, 0 by default. - """ - - def __init__(self, size=None, size_divisor=None, pad_val=0): - self.size = size - self.size_divisor = size_divisor - self.pad_val = pad_val - # only one of size and size_divisor should be valid - assert size is not None or size_divisor is not None - assert size is None or size_divisor is None - - def _pad_img(self, results): - """Pad images according to ``self.size``.""" - for key in results.get('img_fields', ['img']): - if self.size is not None: - padded_img = mmcv.impad( - results[key], shape=self.size, pad_val=self.pad_val) - elif self.size_divisor is not None: - padded_img = mmcv.impad_to_multiple( - results[key], self.size_divisor, pad_val=self.pad_val) - results[key] = padded_img - results['pad_shape'] = padded_img.shape - results['pad_fixed_size'] = self.size - results['pad_size_divisor'] = self.size_divisor - - def _pad_masks(self, results): - """Pad masks according to ``results['pad_shape']``.""" - pad_shape = results['pad_shape'][:2] - for key in results.get('mask_fields', []): - results[key] = results[key].pad(pad_shape, pad_val=self.pad_val) - - def _pad_seg(self, results): - """Pad semantic segmentation map according to - ``results['pad_shape']``.""" - for key in results.get('seg_fields', []): - results[key] = mmcv.impad( - results[key], shape=results['pad_shape'][:2]) - - def __call__(self, results): - """Call function to pad images, masks, semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Updated result dict. 
- """ - self._pad_img(results) - self._pad_masks(results) - self._pad_seg(results) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(size={self.size}, ' - repr_str += f'size_divisor={self.size_divisor}, ' - repr_str += f'pad_val={self.pad_val})' - return repr_str - - -@PIPELINES.register_module() -class Normalize(object): - """Normalize the image. - - Added key is "img_norm_cfg". - - Args: - mean (sequence): Mean values of 3 channels. - std (sequence): Std values of 3 channels. - to_rgb (bool): Whether to convert the image from BGR to RGB, - default is true. - """ - - def __init__(self, mean, std, to_rgb=True): - self.mean = np.array(mean, dtype=np.float32) - self.std = np.array(std, dtype=np.float32) - self.to_rgb = to_rgb - - def __call__(self, results): - """Call function to normalize images. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Normalized results, 'img_norm_cfg' key is added into - result dict. - """ - for key in results.get('img_fields', ['img']): - results[key] = mmcv.imnormalize(results[key], self.mean, self.std, - self.to_rgb) - results['img_norm_cfg'] = dict( - mean=self.mean, std=self.std, to_rgb=self.to_rgb) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(mean={self.mean}, std={self.std}, to_rgb={self.to_rgb})' - return repr_str - - -@PIPELINES.register_module() -class RandomCrop(object): - """Random crop the image & bboxes & masks. - - The absolute `crop_size` is sampled based on `crop_type` and `image_size`, - then the cropped results are generated. - - Args: - crop_size (tuple): The relative ratio or absolute pixels of - height and width. - crop_type (str, optional): one of "relative_range", "relative", - "absolute", "absolute_range". "relative" randomly crops - (h * crop_size[0], w * crop_size[1]) part from an input of size - (h, w). "relative_range" uniformly samples relative crop size from - range [crop_size[0], 1] and [crop_size[1], 1] for height and width - respectively. "absolute" crops from an input with absolute size - (crop_size[0], crop_size[1]). "absolute_range" uniformly samples - crop_h in range [crop_size[0], min(h, crop_size[1])] and crop_w - in range [crop_size[0], min(w, crop_size[1])]. Default "absolute". - allow_negative_crop (bool, optional): Whether to allow a crop that does - not contain any bbox area. Default False. - bbox_clip_border (bool, optional): Whether clip the objects outside - the border of the image. Defaults to True. - - Note: - - If the image is smaller than the absolute crop size, return the - original image. - - The keys for bboxes, labels and masks must be aligned. That is, - `gt_bboxes` corresponds to `gt_labels` and `gt_masks`, and - `gt_bboxes_ignore` corresponds to `gt_labels_ignore` and - `gt_masks_ignore`. - - If the crop does not contain any gt-bbox region and - `allow_negative_crop` is set to False, skip this image. 
- """ - - def __init__(self, - crop_size, - crop_type='absolute', - allow_negative_crop=False, - bbox_clip_border=True): - if crop_type not in [ - 'relative_range', 'relative', 'absolute', 'absolute_range' - ]: - raise ValueError(f'Invalid crop_type {crop_type}.') - if crop_type in ['absolute', 'absolute_range']: - assert crop_size[0] > 0 and crop_size[1] > 0 - assert isinstance(crop_size[0], int) and isinstance( - crop_size[1], int) - else: - assert 0 < crop_size[0] <= 1 and 0 < crop_size[1] <= 1 - self.crop_size = crop_size - self.crop_type = crop_type - self.allow_negative_crop = allow_negative_crop - self.bbox_clip_border = bbox_clip_border - # The key correspondence from bboxes to labels and masks. - self.bbox2label = { - 'gt_bboxes': 'gt_labels', - 'gt_bboxes_ignore': 'gt_labels_ignore' - } - self.bbox2mask = { - 'gt_bboxes': 'gt_masks', - 'gt_bboxes_ignore': 'gt_masks_ignore' - } - - def _crop_data(self, results, crop_size, allow_negative_crop): - """Function to randomly crop images, bounding boxes, masks, semantic - segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - crop_size (tuple): Expected absolute size after cropping, (h, w). - allow_negative_crop (bool): Whether to allow a crop that does not - contain any bbox area. Default to False. - - Returns: - dict: Randomly cropped results, 'img_shape' key in result dict is - updated according to crop size. - """ - assert crop_size[0] > 0 and crop_size[1] > 0 - for key in results.get('img_fields', ['img']): - img = results[key] - margin_h = max(img.shape[0] - crop_size[0], 0) - margin_w = max(img.shape[1] - crop_size[1], 0) - offset_h = np.random.randint(0, margin_h + 1) - offset_w = np.random.randint(0, margin_w + 1) - crop_y1, crop_y2 = offset_h, offset_h + crop_size[0] - crop_x1, crop_x2 = offset_w, offset_w + crop_size[1] - - # crop the image - img = img[crop_y1:crop_y2, crop_x1:crop_x2, ...] - img_shape = img.shape - results[key] = img - results['img_shape'] = img_shape - - # crop bboxes accordingly and clip to the image boundary - for key in results.get('bbox_fields', []): - # e.g. gt_bboxes and gt_bboxes_ignore - bbox_offset = np.array([offset_w, offset_h, offset_w, offset_h], - dtype=np.float32) - bboxes = results[key] - bbox_offset - if self.bbox_clip_border: - bboxes[:, 0::2] = np.clip(bboxes[:, 0::2], 0, img_shape[1]) - bboxes[:, 1::2] = np.clip(bboxes[:, 1::2], 0, img_shape[0]) - valid_inds = (bboxes[:, 2] > bboxes[:, 0]) & ( - bboxes[:, 3] > bboxes[:, 1]) - # If the crop does not contain any gt-bbox area and - # allow_negative_crop is False, skip this image. - if (key == 'gt_bboxes' and not valid_inds.any() - and not allow_negative_crop): - return None - results[key] = bboxes[valid_inds, :] - # label fields. e.g. gt_labels and gt_labels_ignore - label_key = self.bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][valid_inds] - - # mask fields, e.g. gt_masks and gt_masks_ignore - mask_key = self.bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][ - valid_inds.nonzero()[0]].crop( - np.asarray([crop_x1, crop_y1, crop_x2, crop_y2])) - - - # crop semantic seg - for key in results.get('seg_fields', []): - results[key] = results[key][crop_y1:crop_y2, crop_x1:crop_x2] - - return results - - def _get_crop_size(self, image_size): - """Randomly generates the absolute crop size based on `crop_type` and - `image_size`. - - Args: - image_size (tuple): (h, w). - - Returns: - crop_size (tuple): (crop_h, crop_w) in absolute pixels. 
- """ - h, w = image_size - if self.crop_type == 'absolute': - return (min(self.crop_size[0], h), min(self.crop_size[1], w)) - elif self.crop_type == 'absolute_range': - assert self.crop_size[0] <= self.crop_size[1] - crop_h = np.random.randint( - min(h, self.crop_size[0]), - min(h, self.crop_size[1]) + 1) - crop_w = np.random.randint( - min(w, self.crop_size[0]), - min(w, self.crop_size[1]) + 1) - return crop_h, crop_w - elif self.crop_type == 'relative': - crop_h, crop_w = self.crop_size - return int(h * crop_h + 0.5), int(w * crop_w + 0.5) - elif self.crop_type == 'relative_range': - crop_size = np.asarray(self.crop_size, dtype=np.float32) - crop_h, crop_w = crop_size + np.random.rand(2) * (1 - crop_size) - return int(h * crop_h + 0.5), int(w * crop_w + 0.5) - - def __call__(self, results): - """Call function to randomly crop images, bounding boxes, masks, - semantic segmentation maps. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Randomly cropped results, 'img_shape' key in result dict is - updated according to crop size. - """ - image_size = results['img'].shape[:2] - crop_size = self._get_crop_size(image_size) - results = self._crop_data(results, crop_size, self.allow_negative_crop) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(crop_size={self.crop_size}, ' - repr_str += f'crop_type={self.crop_type}, ' - repr_str += f'allow_negative_crop={self.allow_negative_crop}, ' - repr_str += f'bbox_clip_border={self.bbox_clip_border})' - return repr_str - - -@PIPELINES.register_module() -class SegRescale(object): - """Rescale semantic segmentation maps. - - Args: - scale_factor (float): The scale factor of the final output. - backend (str): Image rescale backend, choices are 'cv2' and 'pillow'. - These two backends generates slightly different results. Defaults - to 'cv2'. - """ - - def __init__(self, scale_factor=1, backend='cv2'): - self.scale_factor = scale_factor - self.backend = backend - - def __call__(self, results): - """Call function to scale the semantic segmentation map. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with semantic segmentation map scaled. - """ - - for key in results.get('seg_fields', []): - if self.scale_factor != 1: - results[key] = mmcv.imrescale( - results[key], - self.scale_factor, - interpolation='nearest', - backend=self.backend) - return results - - def __repr__(self): - return self.__class__.__name__ + f'(scale_factor={self.scale_factor})' - - -@PIPELINES.register_module() -class PhotoMetricDistortion(object): - """Apply photometric distortion to image sequentially, every transformation - is applied with a probability of 0.5. The position of random contrast is in - second or second to last. - - 1. random brightness - 2. random contrast (mode 0) - 3. convert color from BGR to HSV - 4. random saturation - 5. random hue - 6. convert color from HSV to BGR - 7. random contrast (mode 1) - 8. randomly swap channels - - Args: - brightness_delta (int): delta of brightness. - contrast_range (tuple): range of contrast. - saturation_range (tuple): range of saturation. - hue_delta (int): delta of hue. 
- """ - - def __init__(self, - brightness_delta=32, - contrast_range=(0.5, 1.5), - saturation_range=(0.5, 1.5), - hue_delta=18): - self.brightness_delta = brightness_delta - self.contrast_lower, self.contrast_upper = contrast_range - self.saturation_lower, self.saturation_upper = saturation_range - self.hue_delta = hue_delta - - def __call__(self, results): - """Call function to perform photometric distortion on images. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with images distorted. - """ - - if 'img_fields' in results: - assert results['img_fields'] == ['img'], \ - 'Only single img_fields is allowed' - img = results['img'] - assert img.dtype == np.float32, \ - 'PhotoMetricDistortion needs the input image of dtype np.float32,'\ - ' please set "to_float32=True" in "LoadImageFromFile" pipeline' - # random brightness - if random.randint(2): - delta = random.uniform(-self.brightness_delta, - self.brightness_delta) - img += delta - - # mode == 0 --> do random contrast first - # mode == 1 --> do random contrast last - mode = random.randint(2) - if mode == 1: - if random.randint(2): - alpha = random.uniform(self.contrast_lower, - self.contrast_upper) - img *= alpha - - # convert color from BGR to HSV - img = mmcv.bgr2hsv(img) - - # random saturation - if random.randint(2): - img[..., 1] *= random.uniform(self.saturation_lower, - self.saturation_upper) - - # random hue - if random.randint(2): - img[..., 0] += random.uniform(-self.hue_delta, self.hue_delta) - img[..., 0][img[..., 0] > 360] -= 360 - img[..., 0][img[..., 0] < 0] += 360 - - # convert color from HSV to BGR - img = mmcv.hsv2bgr(img) - - # random contrast - if mode == 0: - if random.randint(2): - alpha = random.uniform(self.contrast_lower, - self.contrast_upper) - img *= alpha - - # randomly swap channels - if random.randint(2): - img = img[..., random.permutation(3)] - - results['img'] = img - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(\nbrightness_delta={self.brightness_delta},\n' - repr_str += 'contrast_range=' - repr_str += f'{(self.contrast_lower, self.contrast_upper)},\n' - repr_str += 'saturation_range=' - repr_str += f'{(self.saturation_lower, self.saturation_upper)},\n' - repr_str += f'hue_delta={self.hue_delta})' - return repr_str - - -@PIPELINES.register_module() -class Expand(object): - """Random expand the image & bboxes. - - Randomly place the original image on a canvas of 'ratio' x original image - size filled with mean values. The ratio is in the range of ratio_range. - - Args: - mean (tuple): mean value of dataset. - to_rgb (bool): if need to convert the order of mean to align with RGB. - ratio_range (tuple): range of expand ratio. - prob (float): probability of applying this transformation - """ - - def __init__(self, - mean=(0, 0, 0), - to_rgb=True, - ratio_range=(1, 4), - seg_ignore_label=None, - prob=0.5): - self.to_rgb = to_rgb - self.ratio_range = ratio_range - if to_rgb: - self.mean = mean[::-1] - else: - self.mean = mean - self.min_ratio, self.max_ratio = ratio_range - self.seg_ignore_label = seg_ignore_label - self.prob = prob - - def __call__(self, results): - """Call function to expand images, bounding boxes. - - Args: - results (dict): Result dict from loading pipeline. 
- - Returns: - dict: Result dict with images, bounding boxes expanded - """ - - if random.uniform(0, 1) > self.prob: - return results - - if 'img_fields' in results: - assert results['img_fields'] == ['img'], \ - 'Only single img_fields is allowed' - img = results['img'] - - h, w, c = img.shape - ratio = random.uniform(self.min_ratio, self.max_ratio) - # speedup expand when meets large image - if np.all(self.mean == self.mean[0]): - expand_img = np.empty((int(h * ratio), int(w * ratio), c), - img.dtype) - expand_img.fill(self.mean[0]) - else: - expand_img = np.full((int(h * ratio), int(w * ratio), c), - self.mean, - dtype=img.dtype) - left = int(random.uniform(0, w * ratio - w)) - top = int(random.uniform(0, h * ratio - h)) - expand_img[top:top + h, left:left + w] = img - - results['img'] = expand_img - # expand bboxes - for key in results.get('bbox_fields', []): - results[key] = results[key] + np.tile( - (left, top), 2).astype(results[key].dtype) - - # expand masks - for key in results.get('mask_fields', []): - results[key] = results[key].expand( - int(h * ratio), int(w * ratio), top, left) - - # expand segs - for key in results.get('seg_fields', []): - gt_seg = results[key] - expand_gt_seg = np.full((int(h * ratio), int(w * ratio)), - self.seg_ignore_label, - dtype=gt_seg.dtype) - expand_gt_seg[top:top + h, left:left + w] = gt_seg - results[key] = expand_gt_seg - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(mean={self.mean}, to_rgb={self.to_rgb}, ' - repr_str += f'ratio_range={self.ratio_range}, ' - repr_str += f'seg_ignore_label={self.seg_ignore_label})' - return repr_str - - -@PIPELINES.register_module() -class MinIoURandomCrop(object): - """Random crop the image & bboxes, the cropped patches have minimum IoU - requirement with original image & bboxes, the IoU threshold is randomly - selected from min_ious. - - Args: - min_ious (tuple): minimum IoU threshold for all intersections with - bounding boxes - min_crop_size (float): minimum crop's size (i.e. h,w := a*h, a*w, - where a >= min_crop_size). - bbox_clip_border (bool, optional): Whether clip the objects outside - the border of the image. Defaults to True. - - Note: - The keys for bboxes, labels and masks should be paired. That is, \ - `gt_bboxes` corresponds to `gt_labels` and `gt_masks`, and \ - `gt_bboxes_ignore` to `gt_labels_ignore` and `gt_masks_ignore`. - """ - - def __init__(self, - min_ious=(0.1, 0.3, 0.5, 0.7, 0.9), - min_crop_size=0.3, - bbox_clip_border=True): - # 1: return ori img - self.min_ious = min_ious - self.sample_mode = (1, *min_ious, 0) - self.min_crop_size = min_crop_size - self.bbox_clip_border = bbox_clip_border - self.bbox2label = { - 'gt_bboxes': 'gt_labels', - 'gt_bboxes_ignore': 'gt_labels_ignore' - } - self.bbox2mask = { - 'gt_bboxes': 'gt_masks', - 'gt_bboxes_ignore': 'gt_masks_ignore' - } - - def __call__(self, results): - """Call function to crop images and bounding boxes with minimum IoU - constraint. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with images and bounding boxes cropped, \ - 'img_shape' key is updated. 
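        Example:
            A sketch with a single ground-truth box (values are illustrative;
            the output is random and may simply be the uncropped input when
            the sampled mode is 1):

            >>> import numpy as np
            >>> crop = MinIoURandomCrop(min_ious=(0.5,), min_crop_size=0.3)
            >>> results = dict(
            ...     img=np.zeros((100, 100, 3), dtype=np.uint8),
            ...     gt_bboxes=np.array([[10., 10., 60., 60.]],
            ...                        dtype=np.float32),
            ...     gt_labels=np.array([1]),
            ...     bbox_fields=['gt_bboxes'])
            >>> results = crop(results)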
- """ - - if 'img_fields' in results: - assert results['img_fields'] == ['img'], \ - 'Only single img_fields is allowed' - img = results['img'] - assert 'bbox_fields' in results - boxes = [results[key] for key in results['bbox_fields']] - boxes = np.concatenate(boxes, 0) - h, w, c = img.shape - while True: - mode = random.choice(self.sample_mode) - self.mode = mode - if mode == 1: - return results - - min_iou = mode - for i in range(50): - new_w = random.uniform(self.min_crop_size * w, w) - new_h = random.uniform(self.min_crop_size * h, h) - - # h / w in [0.5, 2] - if new_h / new_w < 0.5 or new_h / new_w > 2: - continue - - left = random.uniform(w - new_w) - top = random.uniform(h - new_h) - - patch = np.array( - (int(left), int(top), int(left + new_w), int(top + new_h))) - # Line or point crop is not allowed - if patch[2] == patch[0] or patch[3] == patch[1]: - continue - overlaps = bbox_overlaps( - patch.reshape(-1, 4), boxes.reshape(-1, 4)).reshape(-1) - if len(overlaps) > 0 and overlaps.min() < min_iou: - continue - - # center of boxes should inside the crop img - # only adjust boxes and instance masks when the gt is not empty - if len(overlaps) > 0: - # adjust boxes - def is_center_of_bboxes_in_patch(boxes, patch): - center = (boxes[:, :2] + boxes[:, 2:]) / 2 - mask = ((center[:, 0] > patch[0]) * - (center[:, 1] > patch[1]) * - (center[:, 0] < patch[2]) * - (center[:, 1] < patch[3])) - return mask - - mask = is_center_of_bboxes_in_patch(boxes, patch) - if not mask.any(): - continue - for key in results.get('bbox_fields', []): - boxes = results[key].copy() - mask = is_center_of_bboxes_in_patch(boxes, patch) - boxes = boxes[mask] - if self.bbox_clip_border: - boxes[:, 2:] = boxes[:, 2:].clip(max=patch[2:]) - boxes[:, :2] = boxes[:, :2].clip(min=patch[:2]) - boxes -= np.tile(patch[:2], 2) - - results[key] = boxes - # labels - label_key = self.bbox2label.get(key) - if label_key in results: - results[label_key] = results[label_key][mask] - - # mask fields - mask_key = self.bbox2mask.get(key) - if mask_key in results: - results[mask_key] = results[mask_key][ - mask.nonzero()[0]].crop(patch) - # adjust the img no matter whether the gt is empty before crop - img = img[patch[1]:patch[3], patch[0]:patch[2]] - results['img'] = img - results['img_shape'] = img.shape - - # seg fields - for key in results.get('seg_fields', []): - results[key] = results[key][patch[1]:patch[3], - patch[0]:patch[2]] - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(min_ious={self.min_ious}, ' - repr_str += f'min_crop_size={self.min_crop_size}, ' - repr_str += f'bbox_clip_border={self.bbox_clip_border})' - return repr_str - - -@PIPELINES.register_module() -class Corrupt(object): - """Corruption augmentation. - - Corruption transforms implemented based on - `imagecorruptions `_. - - Args: - corruption (str): Corruption name. - severity (int, optional): The severity of corruption. Default: 1. - """ - - def __init__(self, corruption, severity=1): - self.corruption = corruption - self.severity = severity - - def __call__(self, results): - """Call function to corrupt image. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Result dict with images corrupted. 
- """ - - if corrupt is None: - raise RuntimeError('imagecorruptions is not installed') - if 'img_fields' in results: - assert results['img_fields'] == ['img'], \ - 'Only single img_fields is allowed' - results['img'] = corrupt( - results['img'].astype(np.uint8), - corruption_name=self.corruption, - severity=self.severity) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(corruption={self.corruption}, ' - repr_str += f'severity={self.severity})' - return repr_str - - -@PIPELINES.register_module() -class Albu(object): - """Albumentation augmentation. - - Adds custom transformations from Albumentations library. - Please, visit `https://albumentations.readthedocs.io` - to get more information. - - An example of ``transforms`` is as followed: - - .. code-block:: - - [ - dict( - type='ShiftScaleRotate', - shift_limit=0.0625, - scale_limit=0.0, - rotate_limit=0, - interpolation=1, - p=0.5), - dict( - type='RandomBrightnessContrast', - brightness_limit=[0.1, 0.3], - contrast_limit=[0.1, 0.3], - p=0.2), - dict(type='ChannelShuffle', p=0.1), - dict( - type='OneOf', - transforms=[ - dict(type='Blur', blur_limit=3, p=1.0), - dict(type='MedianBlur', blur_limit=3, p=1.0) - ], - p=0.1), - ] - - Args: - transforms (list[dict]): A list of albu transformations - bbox_params (dict): Bbox_params for albumentation `Compose` - keymap (dict): Contains {'input key':'albumentation-style key'} - skip_img_without_anno (bool): Whether to skip the image if no ann left - after aug - """ - - def __init__(self, - transforms, - bbox_params=None, - keymap=None, - update_pad_shape=False, - skip_img_without_anno=False): - if Compose is None: - raise RuntimeError('albumentations is not installed') - - # Args will be modified later, copying it will be safer - transforms = copy.deepcopy(transforms) - if bbox_params is not None: - bbox_params = copy.deepcopy(bbox_params) - if keymap is not None: - keymap = copy.deepcopy(keymap) - self.transforms = transforms - self.filter_lost_elements = False - self.update_pad_shape = update_pad_shape - self.skip_img_without_anno = skip_img_without_anno - - # A simple workaround to remove masks without boxes - if (isinstance(bbox_params, dict) and 'label_fields' in bbox_params - and 'filter_lost_elements' in bbox_params): - self.filter_lost_elements = True - self.origin_label_fields = bbox_params['label_fields'] - bbox_params['label_fields'] = ['idx_mapper'] - del bbox_params['filter_lost_elements'] - - self.bbox_params = ( - self.albu_builder(bbox_params) if bbox_params else None) - self.aug = Compose([self.albu_builder(t) for t in self.transforms], - bbox_params=self.bbox_params) - - if not keymap: - self.keymap_to_albu = { - 'img': 'image', - 'gt_masks': 'masks', - 'gt_bboxes': 'bboxes' - } - else: - self.keymap_to_albu = keymap - self.keymap_back = {v: k for k, v in self.keymap_to_albu.items()} - - def albu_builder(self, cfg): - """Import a module from albumentations. - - It inherits some of :func:`build_from_cfg` logic. - - Args: - cfg (dict): Config dict. It should at least contain the key "type". - - Returns: - obj: The constructed object. 
- """ - - assert isinstance(cfg, dict) and 'type' in cfg - args = cfg.copy() - - obj_type = args.pop('type') - if mmcv.is_str(obj_type): - if albumentations is None: - raise RuntimeError('albumentations is not installed') - obj_cls = getattr(albumentations, obj_type) - elif inspect.isclass(obj_type): - obj_cls = obj_type - else: - raise TypeError( - f'type must be a str or valid type, but got {type(obj_type)}') - - if 'transforms' in args: - args['transforms'] = [ - self.albu_builder(transform) - for transform in args['transforms'] - ] - - return obj_cls(**args) - - @staticmethod - def mapper(d, keymap): - """Dictionary mapper. Renames keys according to keymap provided. - - Args: - d (dict): old dict - keymap (dict): {'old_key':'new_key'} - Returns: - dict: new dict. - """ - - updated_dict = {} - for k, v in zip(d.keys(), d.values()): - new_k = keymap.get(k, k) - updated_dict[new_k] = d[k] - return updated_dict - - def __call__(self, results): - # dict to albumentations format - results = self.mapper(results, self.keymap_to_albu) - # TODO: add bbox_fields - if 'bboxes' in results: - # to list of boxes - if isinstance(results['bboxes'], np.ndarray): - results['bboxes'] = [x for x in results['bboxes']] - # add pseudo-field for filtration - if self.filter_lost_elements: - results['idx_mapper'] = np.arange(len(results['bboxes'])) - - # TODO: Support mask structure in albu - if 'masks' in results: - if isinstance(results['masks'], PolygonMasks): - raise NotImplementedError( - 'Albu only supports BitMap masks now') - ori_masks = results['masks'] - if albumentations.__version__ < '0.5': - results['masks'] = results['masks'].masks - else: - results['masks'] = [mask for mask in results['masks'].masks] - - results = self.aug(**results) - - if 'bboxes' in results: - if isinstance(results['bboxes'], list): - results['bboxes'] = np.array( - results['bboxes'], dtype=np.float32) - results['bboxes'] = results['bboxes'].reshape(-1, 4) - - # filter label_fields - if self.filter_lost_elements: - - for label in self.origin_label_fields: - results[label] = np.array( - [results[label][i] for i in results['idx_mapper']]) - if 'masks' in results: - results['masks'] = np.array( - [results['masks'][i] for i in results['idx_mapper']]) - results['masks'] = ori_masks.__class__( - results['masks'], results['image'].shape[0], - results['image'].shape[1]) - - if (not len(results['idx_mapper']) - and self.skip_img_without_anno): - return None - - if 'gt_labels' in results: - if isinstance(results['gt_labels'], list): - results['gt_labels'] = np.array(results['gt_labels']) - results['gt_labels'] = results['gt_labels'].astype(np.int64) - - # back to the original format - results = self.mapper(results, self.keymap_back) - - # update final shape - if self.update_pad_shape: - results['pad_shape'] = results['img'].shape - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ + f'(transforms={self.transforms})' - return repr_str - - -@PIPELINES.register_module() -class RandomCenterCropPad(object): - """Random center crop and random around padding for CornerNet. - - This operation generates randomly cropped image from the original image and - pads it simultaneously. Different from :class:`RandomCrop`, the output - shape may not equal to ``crop_size`` strictly. We choose a random value - from ``ratios`` and the output shape could be larger or smaller than - ``crop_size``. The padding operation is also different from :class:`Pad`, - here we use around padding instead of right-bottom padding. 
- - The relation between output image (padding image) and original image: - - .. code:: text - - output image - - +----------------------------+ - | padded area | - +------|----------------------------|----------+ - | | cropped area | | - | | +---------------+ | | - | | | . center | | | original image - | | | range | | | - | | +---------------+ | | - +------|----------------------------|----------+ - | padded area | - +----------------------------+ - - There are 5 main areas in the figure: - - - output image: output image of this operation, also called padding - image in following instruction. - - original image: input image of this operation. - - padded area: non-intersect area of output image and original image. - - cropped area: the overlap of output image and original image. - - center range: a smaller area where random center chosen from. - center range is computed by ``border`` and original image's shape - to avoid our random center is too close to original image's border. - - Also this operation act differently in train and test mode, the summary - pipeline is listed below. - - Train pipeline: - - 1. Choose a ``random_ratio`` from ``ratios``, the shape of padding image - will be ``random_ratio * crop_size``. - 2. Choose a ``random_center`` in center range. - 3. Generate padding image with center matches the ``random_center``. - 4. Initialize the padding image with pixel value equals to ``mean``. - 5. Copy the cropped area to padding image. - 6. Refine annotations. - - Test pipeline: - - 1. Compute output shape according to ``test_pad_mode``. - 2. Generate padding image with center matches the original image - center. - 3. Initialize the padding image with pixel value equals to ``mean``. - 4. Copy the ``cropped area`` to padding image. - - Args: - crop_size (tuple | None): expected size after crop, final size will - computed according to ratio. Requires (h, w) in train mode, and - None in test mode. - ratios (tuple): random select a ratio from tuple and crop image to - (crop_size[0] * ratio) * (crop_size[1] * ratio). - Only available in train mode. - border (int): max distance from center select area to image border. - Only available in train mode. - mean (sequence): Mean values of 3 channels. - std (sequence): Std values of 3 channels. - to_rgb (bool): Whether to convert the image from BGR to RGB. - test_mode (bool): whether involve random variables in transform. - In train mode, crop_size is fixed, center coords and ratio is - random selected from predefined lists. In test mode, crop_size - is image's original shape, center coords and ratio is fixed. - test_pad_mode (tuple): padding method and padding shape value, only - available in test mode. Default is using 'logical_or' with - 127 as padding shape value. - - - 'logical_or': final_shape = input_shape | padding_shape_value - - 'size_divisor': final_shape = int( - ceil(input_shape / padding_shape_value) * padding_shape_value) - bbox_clip_border (bool, optional): Whether clip the objects outside - the border of the image. Defaults to True. 
- """ - - def __init__(self, - crop_size=None, - ratios=(0.9, 1.0, 1.1), - border=128, - mean=None, - std=None, - to_rgb=None, - test_mode=False, - test_pad_mode=('logical_or', 127), - bbox_clip_border=True): - if test_mode: - assert crop_size is None, 'crop_size must be None in test mode' - assert ratios is None, 'ratios must be None in test mode' - assert border is None, 'border must be None in test mode' - assert isinstance(test_pad_mode, (list, tuple)) - assert test_pad_mode[0] in ['logical_or', 'size_divisor'] - else: - assert isinstance(crop_size, (list, tuple)) - assert crop_size[0] > 0 and crop_size[1] > 0, ( - 'crop_size must > 0 in train mode') - assert isinstance(ratios, (list, tuple)) - assert test_pad_mode is None, ( - 'test_pad_mode must be None in train mode') - - self.crop_size = crop_size - self.ratios = ratios - self.border = border - # We do not set default value to mean, std and to_rgb because these - # hyper-parameters are easy to forget but could affect the performance. - # Please use the same setting as Normalize for performance assurance. - assert mean is not None and std is not None and to_rgb is not None - self.to_rgb = to_rgb - self.input_mean = mean - self.input_std = std - if to_rgb: - self.mean = mean[::-1] - self.std = std[::-1] - else: - self.mean = mean - self.std = std - self.test_mode = test_mode - self.test_pad_mode = test_pad_mode - self.bbox_clip_border = bbox_clip_border - - def _get_border(self, border, size): - """Get final border for the target size. - - This function generates a ``final_border`` according to image's shape. - The area between ``final_border`` and ``size - final_border`` is the - ``center range``. We randomly choose center from the ``center range`` - to avoid our random center is too close to original image's border. - Also ``center range`` should be larger than 0. - - Args: - border (int): The initial border, default is 128. - size (int): The width or height of original image. - Returns: - int: The final border. - """ - k = 2 * border / size - i = pow(2, np.ceil(np.log2(np.ceil(k))) + (k == int(k))) - return border // i - - def _filter_boxes(self, patch, boxes): - """Check whether the center of each box is in the patch. - - Args: - patch (list[int]): The cropped area, [left, top, right, bottom]. - boxes (numpy array, (N x 4)): Ground truth boxes. - - Returns: - mask (numpy array, (N,)): Each box is inside or outside the patch. - """ - center = (boxes[:, :2] + boxes[:, 2:]) / 2 - mask = (center[:, 0] > patch[0]) * (center[:, 1] > patch[1]) * ( - center[:, 0] < patch[2]) * ( - center[:, 1] < patch[3]) - return mask - - def _crop_image_and_paste(self, image, center, size): - """Crop image with a given center and size, then paste the cropped - image to a blank image with two centers align. - - This function is equivalent to generating a blank image with ``size`` - as its shape. Then cover it on the original image with two centers ( - the center of blank image and the random center of original image) - aligned. The overlap area is paste from the original image and the - outside area is filled with ``mean pixel``. - - Args: - image (np array, H x W x C): Original image. - center (list[int]): Target crop center coord. - size (list[int]): Target crop size. [target_h, target_w] - - Returns: - cropped_img (np array, target_h x target_w x C): Cropped image. 
- border (np array, 4): The distance of four border of - ``cropped_img`` to the original image area, [top, bottom, - left, right] - patch (list[int]): The cropped area, [left, top, right, bottom]. - """ - center_y, center_x = center - target_h, target_w = size - img_h, img_w, img_c = image.shape - - x0 = max(0, center_x - target_w // 2) - x1 = min(center_x + target_w // 2, img_w) - y0 = max(0, center_y - target_h // 2) - y1 = min(center_y + target_h // 2, img_h) - patch = np.array((int(x0), int(y0), int(x1), int(y1))) - - left, right = center_x - x0, x1 - center_x - top, bottom = center_y - y0, y1 - center_y - - cropped_center_y, cropped_center_x = target_h // 2, target_w // 2 - cropped_img = np.zeros((target_h, target_w, img_c), dtype=image.dtype) - for i in range(img_c): - cropped_img[:, :, i] += self.mean[i] - y_slice = slice(cropped_center_y - top, cropped_center_y + bottom) - x_slice = slice(cropped_center_x - left, cropped_center_x + right) - cropped_img[y_slice, x_slice, :] = image[y0:y1, x0:x1, :] - - border = np.array([ - cropped_center_y - top, cropped_center_y + bottom, - cropped_center_x - left, cropped_center_x + right - ], - dtype=np.float32) - - return cropped_img, border, patch - - def _train_aug(self, results): - """Random crop and around padding the original image. - - Args: - results (dict): Image infomations in the augment pipeline. - - Returns: - results (dict): The updated dict. - """ - img = results['img'] - h, w, c = img.shape - boxes = results['gt_bboxes'] - while True: - scale = random.choice(self.ratios) - new_h = int(self.crop_size[0] * scale) - new_w = int(self.crop_size[1] * scale) - h_border = self._get_border(self.border, h) - w_border = self._get_border(self.border, w) - - for i in range(50): - center_x = random.randint(low=w_border, high=w - w_border) - center_y = random.randint(low=h_border, high=h - h_border) - - cropped_img, border, patch = self._crop_image_and_paste( - img, [center_y, center_x], [new_h, new_w]) - - mask = self._filter_boxes(patch, boxes) - # if image do not have valid bbox, any crop patch is valid. - if not mask.any() and len(boxes) > 0: - continue - - results['img'] = cropped_img - results['img_shape'] = cropped_img.shape - results['pad_shape'] = cropped_img.shape - - x0, y0, x1, y1 = patch - - left_w, top_h = center_x - x0, center_y - y0 - cropped_center_x, cropped_center_y = new_w // 2, new_h // 2 - - # crop bboxes accordingly and clip to the image boundary - for key in results.get('bbox_fields', []): - mask = self._filter_boxes(patch, results[key]) - bboxes = results[key][mask] - bboxes[:, 0:4:2] += cropped_center_x - left_w - x0 - bboxes[:, 1:4:2] += cropped_center_y - top_h - y0 - if self.bbox_clip_border: - bboxes[:, 0:4:2] = np.clip(bboxes[:, 0:4:2], 0, new_w) - bboxes[:, 1:4:2] = np.clip(bboxes[:, 1:4:2], 0, new_h) - keep = (bboxes[:, 2] > bboxes[:, 0]) & ( - bboxes[:, 3] > bboxes[:, 1]) - bboxes = bboxes[keep] - results[key] = bboxes - if key in ['gt_bboxes']: - if 'gt_labels' in results: - labels = results['gt_labels'][mask] - labels = labels[keep] - results['gt_labels'] = labels - if 'gt_masks' in results: - raise NotImplementedError( - 'RandomCenterCropPad only supports bbox.') - - # crop semantic seg - for key in results.get('seg_fields', []): - raise NotImplementedError( - 'RandomCenterCropPad only supports bbox.') - return results - - def _test_aug(self, results): - """Around padding the original image without cropping. - - The padding mode and value are from ``test_pad_mode``. 
- - Args: - results (dict): Image infomations in the augment pipeline. - - Returns: - results (dict): The updated dict. - """ - img = results['img'] - h, w, c = img.shape - results['img_shape'] = img.shape - if self.test_pad_mode[0] in ['logical_or']: - target_h = h | self.test_pad_mode[1] - target_w = w | self.test_pad_mode[1] - elif self.test_pad_mode[0] in ['size_divisor']: - divisor = self.test_pad_mode[1] - target_h = int(np.ceil(h / divisor)) * divisor - target_w = int(np.ceil(w / divisor)) * divisor - else: - raise NotImplementedError( - 'RandomCenterCropPad only support two testing pad mode:' - 'logical-or and size_divisor.') - - cropped_img, border, _ = self._crop_image_and_paste( - img, [h // 2, w // 2], [target_h, target_w]) - results['img'] = cropped_img - results['pad_shape'] = cropped_img.shape - results['border'] = border - return results - - def __call__(self, results): - img = results['img'] - assert img.dtype == np.float32, ( - 'RandomCenterCropPad needs the input image of dtype np.float32,' - ' please set "to_float32=True" in "LoadImageFromFile" pipeline') - h, w, c = img.shape - assert c == len(self.mean) - if self.test_mode: - return self._test_aug(results) - else: - return self._train_aug(results) - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(crop_size={self.crop_size}, ' - repr_str += f'ratios={self.ratios}, ' - repr_str += f'border={self.border}, ' - repr_str += f'mean={self.input_mean}, ' - repr_str += f'std={self.input_std}, ' - repr_str += f'to_rgb={self.to_rgb}, ' - repr_str += f'test_mode={self.test_mode}, ' - repr_str += f'test_pad_mode={self.test_pad_mode}, ' - repr_str += f'bbox_clip_border={self.bbox_clip_border})' - return repr_str - - -@PIPELINES.register_module() -class CutOut(object): - """CutOut operation. - - Randomly drop some regions of image used in - `Cutout `_. - - Args: - n_holes (int | tuple[int, int]): Number of regions to be dropped. - If it is given as a list, number of holes will be randomly - selected from the closed interval [`n_holes[0]`, `n_holes[1]`]. - cutout_shape (tuple[int, int] | list[tuple[int, int]]): The candidate - shape of dropped regions. It can be `tuple[int, int]` to use a - fixed cutout shape, or `list[tuple[int, int]]` to randomly choose - shape from the list. - cutout_ratio (tuple[float, float] | list[tuple[float, float]]): The - candidate ratio of dropped regions. It can be `tuple[float, float]` - to use a fixed ratio or `list[tuple[float, float]]` to randomly - choose ratio from the list. Please note that `cutout_shape` - and `cutout_ratio` cannot be both given at the same time. - fill_in (tuple[float, float, float] | tuple[int, int, int]): The value - of pixel to fill in the dropped regions. Default: (0, 0, 0). - """ - - def __init__(self, - n_holes, - cutout_shape=None, - cutout_ratio=None, - fill_in=(0, 0, 0)): - - assert (cutout_shape is None) ^ (cutout_ratio is None), \ - 'Either cutout_shape or cutout_ratio should be specified.' 
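The constructor asserts above imply complementary train and test configurations; a small illustrative pair follows, where the crop size and ratio values are placeholders rather than defaults from this file.

train_crop = RandomCenterCropPad(
    crop_size=(511, 511),
    ratios=(0.6, 0.8, 1.0, 1.2),
    border=128,
    mean=[0, 0, 0], std=[1, 1, 1], to_rgb=True,
    test_mode=False, test_pad_mode=None)   # test_pad_mode must be None in train mode
test_crop = RandomCenterCropPad(
    crop_size=None, ratios=None, border=None,
    mean=[0, 0, 0], std=[1, 1, 1], to_rgb=True,
    test_mode=True, test_pad_mode=('logical_or', 127))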
- assert (isinstance(cutout_shape, (list, tuple)) - or isinstance(cutout_ratio, (list, tuple))) - if isinstance(n_holes, tuple): - assert len(n_holes) == 2 and 0 <= n_holes[0] < n_holes[1] - else: - n_holes = (n_holes, n_holes) - self.n_holes = n_holes - self.fill_in = fill_in - self.with_ratio = cutout_ratio is not None - self.candidates = cutout_ratio if self.with_ratio else cutout_shape - if not isinstance(self.candidates, list): - self.candidates = [self.candidates] - - def __call__(self, results): - """Call function to drop some regions of image.""" - h, w, c = results['img'].shape - n_holes = np.random.randint(self.n_holes[0], self.n_holes[1] + 1) - for _ in range(n_holes): - x1 = np.random.randint(0, w) - y1 = np.random.randint(0, h) - index = np.random.randint(0, len(self.candidates)) - if not self.with_ratio: - cutout_w, cutout_h = self.candidates[index] - else: - cutout_w = int(self.candidates[index][0] * w) - cutout_h = int(self.candidates[index][1] * h) - - x2 = np.clip(x1 + cutout_w, 0, w) - y2 = np.clip(y1 + cutout_h, 0, h) - results['img'][y1:y2, x1:x2, :] = self.fill_in - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(n_holes={self.n_holes}, ' - repr_str += (f'cutout_ratio={self.candidates}, ' if self.with_ratio - else f'cutout_shape={self.candidates}, ') - repr_str += f'fill_in={self.fill_in})' - return repr_str diff --git a/spaces/CVPR/WALT/mmdet/models/utils/transformer.py b/spaces/CVPR/WALT/mmdet/models/utils/transformer.py deleted file mode 100644 index 83870eead42f4b0bf73c9e19248d5512d3d044c5..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/utils/transformer.py +++ /dev/null @@ -1,860 +0,0 @@ -import torch -import torch.nn as nn -from mmcv.cnn import (Linear, build_activation_layer, build_norm_layer, - xavier_init) - -from .builder import TRANSFORMER - - -class MultiheadAttention(nn.Module): - """A warpper for torch.nn.MultiheadAttention. - - This module implements MultiheadAttention with residual connection, - and positional encoding used in DETR is also passed as input. - - Args: - embed_dims (int): The embedding dimension. - num_heads (int): Parallel attention heads. Same as - `nn.MultiheadAttention`. - dropout (float): A Dropout layer on attn_output_weights. Default 0.0. - """ - - def __init__(self, embed_dims, num_heads, dropout=0.0): - super(MultiheadAttention, self).__init__() - assert embed_dims % num_heads == 0, 'embed_dims must be ' \ - f'divisible by num_heads. got {embed_dims} and {num_heads}.' - self.embed_dims = embed_dims - self.num_heads = num_heads - self.dropout = dropout - self.attn = nn.MultiheadAttention(embed_dims, num_heads, dropout) - self.dropout = nn.Dropout(dropout) - - def forward(self, - x, - key=None, - value=None, - residual=None, - query_pos=None, - key_pos=None, - attn_mask=None, - key_padding_mask=None): - """Forward function for `MultiheadAttention`. - - Args: - x (Tensor): The input query with shape [num_query, bs, - embed_dims]. Same in `nn.MultiheadAttention.forward`. - key (Tensor): The key tensor with shape [num_key, bs, - embed_dims]. Same in `nn.MultiheadAttention.forward`. - Default None. If None, the `query` will be used. - value (Tensor): The value tensor with same shape as `key`. - Same in `nn.MultiheadAttention.forward`. Default None. - If None, the `key` will be used. - residual (Tensor): The tensor used for addition, with the - same shape as `x`. Default None. If None, `x` will be used. 
- query_pos (Tensor): The positional encoding for query, with - the same shape as `x`. Default None. If not None, it will - be added to `x` before forward function. - key_pos (Tensor): The positional encoding for `key`, with the - same shape as `key`. Default None. If not None, it will - be added to `key` before forward function. If None, and - `query_pos` has the same shape as `key`, then `query_pos` - will be used for `key_pos`. - attn_mask (Tensor): ByteTensor mask with shape [num_query, - num_key]. Same in `nn.MultiheadAttention.forward`. - Default None. - key_padding_mask (Tensor): ByteTensor with shape [bs, num_key]. - Same in `nn.MultiheadAttention.forward`. Default None. - - Returns: - Tensor: forwarded results with shape [num_query, bs, embed_dims]. - """ - query = x - if key is None: - key = query - if value is None: - value = key - if residual is None: - residual = x - if key_pos is None: - if query_pos is not None and key is not None: - if query_pos.shape == key.shape: - key_pos = query_pos - if query_pos is not None: - query = query + query_pos - if key_pos is not None: - key = key + key_pos - out = self.attn( - query, - key, - value=value, - attn_mask=attn_mask, - key_padding_mask=key_padding_mask)[0] - - return residual + self.dropout(out) - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(embed_dims={self.embed_dims}, ' - repr_str += f'num_heads={self.num_heads}, ' - repr_str += f'dropout={self.dropout})' - return repr_str - - -class FFN(nn.Module): - """Implements feed-forward networks (FFNs) with residual connection. - - Args: - embed_dims (int): The feature dimension. Same as - `MultiheadAttention`. - feedforward_channels (int): The hidden dimension of FFNs. - num_fcs (int, optional): The number of fully-connected layers in - FFNs. Defaults to 2. - act_cfg (dict, optional): The activation config for FFNs. - dropout (float, optional): Probability of an element to be - zeroed. Default 0.0. - add_residual (bool, optional): Add resudual connection. - Defaults to True. - """ - - def __init__(self, - embed_dims, - feedforward_channels, - num_fcs=2, - act_cfg=dict(type='ReLU', inplace=True), - dropout=0.0, - add_residual=True): - super(FFN, self).__init__() - assert num_fcs >= 2, 'num_fcs should be no less ' \ - f'than 2. got {num_fcs}.' 
- self.embed_dims = embed_dims - self.feedforward_channels = feedforward_channels - self.num_fcs = num_fcs - self.act_cfg = act_cfg - self.dropout = dropout - self.activate = build_activation_layer(act_cfg) - - layers = nn.ModuleList() - in_channels = embed_dims - for _ in range(num_fcs - 1): - layers.append( - nn.Sequential( - Linear(in_channels, feedforward_channels), self.activate, - nn.Dropout(dropout))) - in_channels = feedforward_channels - layers.append(Linear(feedforward_channels, embed_dims)) - self.layers = nn.Sequential(*layers) - self.dropout = nn.Dropout(dropout) - self.add_residual = add_residual - - def forward(self, x, residual=None): - """Forward function for `FFN`.""" - out = self.layers(x) - if not self.add_residual: - return out - if residual is None: - residual = x - return residual + self.dropout(out) - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(embed_dims={self.embed_dims}, ' - repr_str += f'feedforward_channels={self.feedforward_channels}, ' - repr_str += f'num_fcs={self.num_fcs}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'dropout={self.dropout}, ' - repr_str += f'add_residual={self.add_residual})' - return repr_str - - -class TransformerEncoderLayer(nn.Module): - """Implements one encoder layer in DETR transformer. - - Args: - embed_dims (int): The feature dimension. Same as `FFN`. - num_heads (int): Parallel attention heads. - feedforward_channels (int): The hidden dimension for FFNs. - dropout (float): Probability of an element to be zeroed. Default 0.0. - order (tuple[str]): The order for encoder layer. Valid examples are - ('selfattn', 'norm', 'ffn', 'norm') and ('norm', 'selfattn', - 'norm', 'ffn'). Default ('selfattn', 'norm', 'ffn', 'norm'). - act_cfg (dict): The activation config for FFNs. Default ReLU. - norm_cfg (dict): Config dict for normalization layer. Default - layer normalization. - num_fcs (int): The number of fully-connected layers for FFNs. - Default 2. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - dropout=0.0, - order=('selfattn', 'norm', 'ffn', 'norm'), - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - num_fcs=2): - super(TransformerEncoderLayer, self).__init__() - assert isinstance(order, tuple) and len(order) == 4 - assert set(order) == set(['selfattn', 'norm', 'ffn']) - self.embed_dims = embed_dims - self.num_heads = num_heads - self.feedforward_channels = feedforward_channels - self.dropout = dropout - self.order = order - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.num_fcs = num_fcs - self.pre_norm = order[0] == 'norm' - self.self_attn = MultiheadAttention(embed_dims, num_heads, dropout) - self.ffn = FFN(embed_dims, feedforward_channels, num_fcs, act_cfg, - dropout) - self.norms = nn.ModuleList() - self.norms.append(build_norm_layer(norm_cfg, embed_dims)[1]) - self.norms.append(build_norm_layer(norm_cfg, embed_dims)[1]) - - def forward(self, x, pos=None, attn_mask=None, key_padding_mask=None): - """Forward function for `TransformerEncoderLayer`. - - Args: - x (Tensor): The input query with shape [num_key, bs, - embed_dims]. Same in `MultiheadAttention.forward`. - pos (Tensor): The positional encoding for query. Default None. - Same as `query_pos` in `MultiheadAttention.forward`. - attn_mask (Tensor): ByteTensor mask with shape [num_key, - num_key]. Same in `MultiheadAttention.forward`. Default None. - key_padding_mask (Tensor): ByteTensor with shape [bs, num_key]. 
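A short usage sketch for the residual `MultiheadAttention` and `FFN` blocks above; the tensor sizes are arbitrary and only need to follow the [num_query, bs, embed_dims] layout documented in `forward`.

import torch

attn = MultiheadAttention(embed_dims=256, num_heads=8, dropout=0.1)
ffn = FFN(embed_dims=256, feedforward_channels=1024, num_fcs=2, dropout=0.1)
x = torch.randn(100, 2, 256)     # [num_query, bs, embed_dims]
pos = torch.randn(100, 2, 256)   # positional encoding added to query/key
x = attn(x, query_pos=pos)       # self-attention plus residual connection
x = ffn(x)                       # feed-forward plus residual connection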
- Same in `MultiheadAttention.forward`. Default None. - - Returns: - Tensor: forwarded results with shape [num_key, bs, embed_dims]. - """ - norm_cnt = 0 - inp_residual = x - for layer in self.order: - if layer == 'selfattn': - # self attention - query = key = value = x - x = self.self_attn( - query, - key, - value, - inp_residual if self.pre_norm else None, - query_pos=pos, - key_pos=pos, - attn_mask=attn_mask, - key_padding_mask=key_padding_mask) - inp_residual = x - elif layer == 'norm': - x = self.norms[norm_cnt](x) - norm_cnt += 1 - elif layer == 'ffn': - x = self.ffn(x, inp_residual if self.pre_norm else None) - return x - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(embed_dims={self.embed_dims}, ' - repr_str += f'num_heads={self.num_heads}, ' - repr_str += f'feedforward_channels={self.feedforward_channels}, ' - repr_str += f'dropout={self.dropout}, ' - repr_str += f'order={self.order}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'norm_cfg={self.norm_cfg}, ' - repr_str += f'num_fcs={self.num_fcs})' - return repr_str - - -class TransformerDecoderLayer(nn.Module): - """Implements one decoder layer in DETR transformer. - - Args: - embed_dims (int): The feature dimension. Same as - `TransformerEncoderLayer`. - num_heads (int): Parallel attention heads. - feedforward_channels (int): Same as `TransformerEncoderLayer`. - dropout (float): Same as `TransformerEncoderLayer`. Default 0.0. - order (tuple[str]): The order for decoder layer. Valid examples are - ('selfattn', 'norm', 'multiheadattn', 'norm', 'ffn', 'norm') and - ('norm', 'selfattn', 'norm', 'multiheadattn', 'norm', 'ffn'). - Default the former. - act_cfg (dict): Same as `TransformerEncoderLayer`. Default ReLU. - norm_cfg (dict): Config dict for normalization layer. Default - layer normalization. - num_fcs (int): The number of fully-connected layers in FFNs. - """ - - def __init__(self, - embed_dims, - num_heads, - feedforward_channels, - dropout=0.0, - order=('selfattn', 'norm', 'multiheadattn', 'norm', 'ffn', - 'norm'), - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - num_fcs=2): - super(TransformerDecoderLayer, self).__init__() - assert isinstance(order, tuple) and len(order) == 6 - assert set(order) == set(['selfattn', 'norm', 'multiheadattn', 'ffn']) - self.embed_dims = embed_dims - self.num_heads = num_heads - self.feedforward_channels = feedforward_channels - self.dropout = dropout - self.order = order - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.num_fcs = num_fcs - self.pre_norm = order[0] == 'norm' - self.self_attn = MultiheadAttention(embed_dims, num_heads, dropout) - self.multihead_attn = MultiheadAttention(embed_dims, num_heads, - dropout) - self.ffn = FFN(embed_dims, feedforward_channels, num_fcs, act_cfg, - dropout) - self.norms = nn.ModuleList() - # 3 norm layers in official DETR's TransformerDecoderLayer - for _ in range(3): - self.norms.append(build_norm_layer(norm_cfg, embed_dims)[1]) - - def forward(self, - x, - memory, - memory_pos=None, - query_pos=None, - memory_attn_mask=None, - target_attn_mask=None, - memory_key_padding_mask=None, - target_key_padding_mask=None): - """Forward function for `TransformerDecoderLayer`. - - Args: - x (Tensor): Input query with shape [num_query, bs, embed_dims]. - memory (Tensor): Tensor got from `TransformerEncoder`, with shape - [num_key, bs, embed_dims]. - memory_pos (Tensor): The positional encoding for `memory`. Default - None. 
Same as `key_pos` in `MultiheadAttention.forward`. - query_pos (Tensor): The positional encoding for `query`. Default - None. Same as `query_pos` in `MultiheadAttention.forward`. - memory_attn_mask (Tensor): ByteTensor mask for `memory`, with - shape [num_key, num_key]. Same as `attn_mask` in - `MultiheadAttention.forward`. Default None. - target_attn_mask (Tensor): ByteTensor mask for `x`, with shape - [num_query, num_query]. Same as `attn_mask` in - `MultiheadAttention.forward`. Default None. - memory_key_padding_mask (Tensor): ByteTensor for `memory`, with - shape [bs, num_key]. Same as `key_padding_mask` in - `MultiheadAttention.forward`. Default None. - target_key_padding_mask (Tensor): ByteTensor for `x`, with shape - [bs, num_query]. Same as `key_padding_mask` in - `MultiheadAttention.forward`. Default None. - - Returns: - Tensor: forwarded results with shape [num_query, bs, embed_dims]. - """ - norm_cnt = 0 - inp_residual = x - for layer in self.order: - if layer == 'selfattn': - query = key = value = x - x = self.self_attn( - query, - key, - value, - inp_residual if self.pre_norm else None, - query_pos, - key_pos=query_pos, - attn_mask=target_attn_mask, - key_padding_mask=target_key_padding_mask) - inp_residual = x - elif layer == 'norm': - x = self.norms[norm_cnt](x) - norm_cnt += 1 - elif layer == 'multiheadattn': - query = x - key = value = memory - x = self.multihead_attn( - query, - key, - value, - inp_residual if self.pre_norm else None, - query_pos, - key_pos=memory_pos, - attn_mask=memory_attn_mask, - key_padding_mask=memory_key_padding_mask) - inp_residual = x - elif layer == 'ffn': - x = self.ffn(x, inp_residual if self.pre_norm else None) - return x - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(embed_dims={self.embed_dims}, ' - repr_str += f'num_heads={self.num_heads}, ' - repr_str += f'feedforward_channels={self.feedforward_channels}, ' - repr_str += f'dropout={self.dropout}, ' - repr_str += f'order={self.order}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'norm_cfg={self.norm_cfg}, ' - repr_str += f'num_fcs={self.num_fcs})' - return repr_str - - -class TransformerEncoder(nn.Module): - """Implements the encoder in DETR transformer. - - Args: - num_layers (int): The number of `TransformerEncoderLayer`. - embed_dims (int): Same as `TransformerEncoderLayer`. - num_heads (int): Same as `TransformerEncoderLayer`. - feedforward_channels (int): Same as `TransformerEncoderLayer`. - dropout (float): Same as `TransformerEncoderLayer`. Default 0.0. - order (tuple[str]): Same as `TransformerEncoderLayer`. - act_cfg (dict): Same as `TransformerEncoderLayer`. Default ReLU. - norm_cfg (dict): Same as `TransformerEncoderLayer`. Default - layer normalization. - num_fcs (int): Same as `TransformerEncoderLayer`. Default 2. 
- """ - - def __init__(self, - num_layers, - embed_dims, - num_heads, - feedforward_channels, - dropout=0.0, - order=('selfattn', 'norm', 'ffn', 'norm'), - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - num_fcs=2): - super(TransformerEncoder, self).__init__() - assert isinstance(order, tuple) and len(order) == 4 - assert set(order) == set(['selfattn', 'norm', 'ffn']) - self.num_layers = num_layers - self.embed_dims = embed_dims - self.num_heads = num_heads - self.feedforward_channels = feedforward_channels - self.dropout = dropout - self.order = order - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.num_fcs = num_fcs - self.pre_norm = order[0] == 'norm' - self.layers = nn.ModuleList() - for _ in range(num_layers): - self.layers.append( - TransformerEncoderLayer(embed_dims, num_heads, - feedforward_channels, dropout, order, - act_cfg, norm_cfg, num_fcs)) - self.norm = build_norm_layer(norm_cfg, - embed_dims)[1] if self.pre_norm else None - - def forward(self, x, pos=None, attn_mask=None, key_padding_mask=None): - """Forward function for `TransformerEncoder`. - - Args: - x (Tensor): Input query. Same in `TransformerEncoderLayer.forward`. - pos (Tensor): Positional encoding for query. Default None. - Same in `TransformerEncoderLayer.forward`. - attn_mask (Tensor): ByteTensor attention mask. Default None. - Same in `TransformerEncoderLayer.forward`. - key_padding_mask (Tensor): Same in - `TransformerEncoderLayer.forward`. Default None. - - Returns: - Tensor: Results with shape [num_key, bs, embed_dims]. - """ - for layer in self.layers: - x = layer(x, pos, attn_mask, key_padding_mask) - if self.norm is not None: - x = self.norm(x) - return x - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(num_layers={self.num_layers}, ' - repr_str += f'embed_dims={self.embed_dims}, ' - repr_str += f'num_heads={self.num_heads}, ' - repr_str += f'feedforward_channels={self.feedforward_channels}, ' - repr_str += f'dropout={self.dropout}, ' - repr_str += f'order={self.order}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'norm_cfg={self.norm_cfg}, ' - repr_str += f'num_fcs={self.num_fcs})' - return repr_str - - -class TransformerDecoder(nn.Module): - """Implements the decoder in DETR transformer. - - Args: - num_layers (int): The number of `TransformerDecoderLayer`. - embed_dims (int): Same as `TransformerDecoderLayer`. - num_heads (int): Same as `TransformerDecoderLayer`. - feedforward_channels (int): Same as `TransformerDecoderLayer`. - dropout (float): Same as `TransformerDecoderLayer`. Default 0.0. - order (tuple[str]): Same as `TransformerDecoderLayer`. - act_cfg (dict): Same as `TransformerDecoderLayer`. Default ReLU. - norm_cfg (dict): Same as `TransformerDecoderLayer`. Default - layer normalization. - num_fcs (int): Same as `TransformerDecoderLayer`. Default 2. 
- """ - - def __init__(self, - num_layers, - embed_dims, - num_heads, - feedforward_channels, - dropout=0.0, - order=('selfattn', 'norm', 'multiheadattn', 'norm', 'ffn', - 'norm'), - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - num_fcs=2, - return_intermediate=False): - super(TransformerDecoder, self).__init__() - assert isinstance(order, tuple) and len(order) == 6 - assert set(order) == set(['selfattn', 'norm', 'multiheadattn', 'ffn']) - self.num_layers = num_layers - self.embed_dims = embed_dims - self.num_heads = num_heads - self.feedforward_channels = feedforward_channels - self.dropout = dropout - self.order = order - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.num_fcs = num_fcs - self.return_intermediate = return_intermediate - self.layers = nn.ModuleList() - for _ in range(num_layers): - self.layers.append( - TransformerDecoderLayer(embed_dims, num_heads, - feedforward_channels, dropout, order, - act_cfg, norm_cfg, num_fcs)) - self.norm = build_norm_layer(norm_cfg, embed_dims)[1] - - def forward(self, - x, - memory, - memory_pos=None, - query_pos=None, - memory_attn_mask=None, - target_attn_mask=None, - memory_key_padding_mask=None, - target_key_padding_mask=None): - """Forward function for `TransformerDecoder`. - - Args: - x (Tensor): Input query. Same in `TransformerDecoderLayer.forward`. - memory (Tensor): Same in `TransformerDecoderLayer.forward`. - memory_pos (Tensor): Same in `TransformerDecoderLayer.forward`. - Default None. - query_pos (Tensor): Same in `TransformerDecoderLayer.forward`. - Default None. - memory_attn_mask (Tensor): Same in - `TransformerDecoderLayer.forward`. Default None. - target_attn_mask (Tensor): Same in - `TransformerDecoderLayer.forward`. Default None. - memory_key_padding_mask (Tensor): Same in - `TransformerDecoderLayer.forward`. Default None. - target_key_padding_mask (Tensor): Same in - `TransformerDecoderLayer.forward`. Default None. - - Returns: - Tensor: Results with shape [num_query, bs, embed_dims]. - """ - intermediate = [] - for layer in self.layers: - x = layer(x, memory, memory_pos, query_pos, memory_attn_mask, - target_attn_mask, memory_key_padding_mask, - target_key_padding_mask) - if self.return_intermediate: - intermediate.append(self.norm(x)) - if self.norm is not None: - x = self.norm(x) - if self.return_intermediate: - intermediate.pop() - intermediate.append(x) - if self.return_intermediate: - return torch.stack(intermediate) - return x.unsqueeze(0) - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(num_layers={self.num_layers}, ' - repr_str += f'embed_dims={self.embed_dims}, ' - repr_str += f'num_heads={self.num_heads}, ' - repr_str += f'feedforward_channels={self.feedforward_channels}, ' - repr_str += f'dropout={self.dropout}, ' - repr_str += f'order={self.order}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'norm_cfg={self.norm_cfg}, ' - repr_str += f'num_fcs={self.num_fcs}, ' - repr_str += f'return_intermediate={self.return_intermediate})' - return repr_str - - -@TRANSFORMER.register_module() -class Transformer(nn.Module): - """Implements the DETR transformer. 
- - Following the official DETR implementation, this module copy-paste - from torch.nn.Transformer with modifications: - - * positional encodings are passed in MultiheadAttention - * extra LN at the end of encoder is removed - * decoder returns a stack of activations from all decoding layers - - See `paper: End-to-End Object Detection with Transformers - `_ for details. - - Args: - embed_dims (int): The feature dimension. - num_heads (int): Parallel attention heads. Same as - `nn.MultiheadAttention`. - num_encoder_layers (int): Number of `TransformerEncoderLayer`. - num_decoder_layers (int): Number of `TransformerDecoderLayer`. - feedforward_channels (int): The hidden dimension for FFNs used in both - encoder and decoder. - dropout (float): Probability of an element to be zeroed. Default 0.0. - act_cfg (dict): Activation config for FFNs used in both encoder - and decoder. Default ReLU. - norm_cfg (dict): Config dict for normalization used in both encoder - and decoder. Default layer normalization. - num_fcs (int): The number of fully-connected layers in FFNs, which is - used for both encoder and decoder. - pre_norm (bool): Whether the normalization layer is ordered - first in the encoder and decoder. Default False. - return_intermediate_dec (bool): Whether to return the intermediate - output from each TransformerDecoderLayer or only the last - TransformerDecoderLayer. Default False. If False, the returned - `hs` has shape [num_decoder_layers, bs, num_query, embed_dims]. - If True, the returned `hs` will have shape [1, bs, num_query, - embed_dims]. - """ - - def __init__(self, - embed_dims=512, - num_heads=8, - num_encoder_layers=6, - num_decoder_layers=6, - feedforward_channels=2048, - dropout=0.0, - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - num_fcs=2, - pre_norm=False, - return_intermediate_dec=False): - super(Transformer, self).__init__() - self.embed_dims = embed_dims - self.num_heads = num_heads - self.num_encoder_layers = num_encoder_layers - self.num_decoder_layers = num_decoder_layers - self.feedforward_channels = feedforward_channels - self.dropout = dropout - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.num_fcs = num_fcs - self.pre_norm = pre_norm - self.return_intermediate_dec = return_intermediate_dec - if self.pre_norm: - encoder_order = ('norm', 'selfattn', 'norm', 'ffn') - decoder_order = ('norm', 'selfattn', 'norm', 'multiheadattn', - 'norm', 'ffn') - else: - encoder_order = ('selfattn', 'norm', 'ffn', 'norm') - decoder_order = ('selfattn', 'norm', 'multiheadattn', 'norm', - 'ffn', 'norm') - self.encoder = TransformerEncoder(num_encoder_layers, embed_dims, - num_heads, feedforward_channels, - dropout, encoder_order, act_cfg, - norm_cfg, num_fcs) - self.decoder = TransformerDecoder(num_decoder_layers, embed_dims, - num_heads, feedforward_channels, - dropout, decoder_order, act_cfg, - norm_cfg, num_fcs, - return_intermediate_dec) - - def init_weights(self, distribution='uniform'): - """Initialize the transformer weights.""" - # follow the official DETR to init parameters - for m in self.modules(): - if hasattr(m, 'weight') and m.weight.dim() > 1: - xavier_init(m, distribution=distribution) - - def forward(self, x, mask, query_embed, pos_embed): - """Forward function for `Transformer`. - - Args: - x (Tensor): Input query with shape [bs, c, h, w] where - c = embed_dims. - mask (Tensor): The key_padding_mask used for encoder and decoder, - with shape [bs, h, w]. 
- query_embed (Tensor): The query embedding for decoder, with shape - [num_query, c]. - pos_embed (Tensor): The positional encoding for encoder and - decoder, with the same shape as `x`. - - Returns: - tuple[Tensor]: results of decoder containing the following tensor. - - - out_dec: Output from decoder. If return_intermediate_dec \ - is True output has shape [num_dec_layers, bs, - num_query, embed_dims], else has shape [1, bs, \ - num_query, embed_dims]. - - memory: Output results from encoder, with shape \ - [bs, embed_dims, h, w]. - """ - bs, c, h, w = x.shape - x = x.flatten(2).permute(2, 0, 1) # [bs, c, h, w] -> [h*w, bs, c] - pos_embed = pos_embed.flatten(2).permute(2, 0, 1) - query_embed = query_embed.unsqueeze(1).repeat( - 1, bs, 1) # [num_query, dim] -> [num_query, bs, dim] - mask = mask.flatten(1) # [bs, h, w] -> [bs, h*w] - memory = self.encoder( - x, pos=pos_embed, attn_mask=None, key_padding_mask=mask) - target = torch.zeros_like(query_embed) - # out_dec: [num_layers, num_query, bs, dim] - out_dec = self.decoder( - target, - memory, - memory_pos=pos_embed, - query_pos=query_embed, - memory_attn_mask=None, - target_attn_mask=None, - memory_key_padding_mask=mask, - target_key_padding_mask=None) - out_dec = out_dec.transpose(1, 2) - memory = memory.permute(1, 2, 0).reshape(bs, c, h, w) - return out_dec, memory - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(embed_dims={self.embed_dims}, ' - repr_str += f'num_heads={self.num_heads}, ' - repr_str += f'num_encoder_layers={self.num_encoder_layers}, ' - repr_str += f'num_decoder_layers={self.num_decoder_layers}, ' - repr_str += f'feedforward_channels={self.feedforward_channels}, ' - repr_str += f'dropout={self.dropout}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'norm_cfg={self.norm_cfg}, ' - repr_str += f'num_fcs={self.num_fcs}, ' - repr_str += f'pre_norm={self.pre_norm}, ' - repr_str += f'return_intermediate_dec={self.return_intermediate_dec})' - return repr_str - - -@TRANSFORMER.register_module() -class DynamicConv(nn.Module): - """Implements Dynamic Convolution. - - This module generate parameters for each sample and - use bmm to implement 1*1 convolution. Code is modified - from the `official github repo `_ . - - Args: - in_channels (int): The input feature channel. - Defaults to 256. - feat_channels (int): The inner feature channel. - Defaults to 64. - out_channels (int, optional): The output feature channel. - When not specified, it will be set to `in_channels` - by default - input_feat_shape (int): The shape of input feature. - Defaults to 7. - act_cfg (dict): The activation config for DynamicConv. - norm_cfg (dict): Config dict for normalization layer. Default - layer normalization. 
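A usage sketch for the DETR-style `Transformer` above with dummy inputs; every size below is a placeholder, and the shapes simply follow the `forward` docstring.

import torch

model = Transformer(embed_dims=256, num_heads=8,
                    num_encoder_layers=6, num_decoder_layers=6,
                    feedforward_channels=2048, return_intermediate_dec=True)
model.init_weights()
bs, c, h, w, num_query = 2, 256, 25, 34, 100
x = torch.randn(bs, c, h, w)                     # feature map with c == embed_dims
mask = torch.zeros(bs, h, w, dtype=torch.bool)   # key_padding_mask; True marks padded pixels
query_embed = torch.randn(num_query, c)          # one embedding per object query
pos_embed = torch.randn(bs, c, h, w)             # positional encoding, same shape as x
out_dec, memory = model(x, mask, query_embed, pos_embed)
# out_dec: [num_decoder_layers, bs, num_query, embed_dims] since return_intermediate_dec=True
# memory:  [bs, embed_dims, h, w]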
- """ - - def __init__(self, - in_channels=256, - feat_channels=64, - out_channels=None, - input_feat_shape=7, - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN')): - super(DynamicConv, self).__init__() - self.in_channels = in_channels - self.feat_channels = feat_channels - self.out_channels_raw = out_channels - self.input_feat_shape = input_feat_shape - self.act_cfg = act_cfg - self.norm_cfg = norm_cfg - self.out_channels = out_channels if out_channels else in_channels - - self.num_params_in = self.in_channels * self.feat_channels - self.num_params_out = self.out_channels * self.feat_channels - self.dynamic_layer = nn.Linear( - self.in_channels, self.num_params_in + self.num_params_out) - - self.norm_in = build_norm_layer(norm_cfg, self.feat_channels)[1] - self.norm_out = build_norm_layer(norm_cfg, self.out_channels)[1] - - self.activation = build_activation_layer(act_cfg) - - num_output = self.out_channels * input_feat_shape**2 - self.fc_layer = nn.Linear(num_output, self.out_channels) - self.fc_norm = build_norm_layer(norm_cfg, self.out_channels)[1] - - def forward(self, param_feature, input_feature): - """Forward function for `DynamicConv`. - - Args: - param_feature (Tensor): The feature can be used - to generate the parameter, has shape - (num_all_proposals, in_channels). - input_feature (Tensor): Feature that - interact with parameters, has shape - (num_all_proposals, in_channels, H, W). - - Returns: - Tensor: The output feature has shape - (num_all_proposals, out_channels). - """ - num_proposals = param_feature.size(0) - input_feature = input_feature.view(num_proposals, self.in_channels, - -1).permute(2, 0, 1) - - input_feature = input_feature.permute(1, 0, 2) - parameters = self.dynamic_layer(param_feature) - - param_in = parameters[:, :self.num_params_in].view( - -1, self.in_channels, self.feat_channels) - param_out = parameters[:, -self.num_params_out:].view( - -1, self.feat_channels, self.out_channels) - - # input_feature has shape (num_all_proposals, H*W, in_channels) - # param_in has shape (num_all_proposals, in_channels, feat_channels) - # feature has shape (num_all_proposals, H*W, feat_channels) - features = torch.bmm(input_feature, param_in) - features = self.norm_in(features) - features = self.activation(features) - - # param_out has shape (batch_size, feat_channels, out_channels) - features = torch.bmm(features, param_out) - features = self.norm_out(features) - features = self.activation(features) - - features = features.flatten(1) - features = self.fc_layer(features) - features = self.fc_norm(features) - features = self.activation(features) - - return features - - def __repr__(self): - """str: a string that describes the module""" - repr_str = self.__class__.__name__ - repr_str += f'(in_channels={self.in_channels}, ' - repr_str += f'feat_channels={self.feat_channels}, ' - repr_str += f'out_channels={self.out_channels_raw}, ' - repr_str += f'input_feat_shape={self.input_feat_shape}, ' - repr_str += f'act_cfg={self.act_cfg}, ' - repr_str += f'norm_cfg={self.norm_cfg})' - return repr_str diff --git a/spaces/Chintan-Donda/KKMS-KSSW-HF/README.md b/spaces/Chintan-Donda/KKMS-KSSW-HF/README.md deleted file mode 100644 index 6c48e3df3407bfdfdd569d3a4d76b5e647bfc11e..0000000000000000000000000000000000000000 --- a/spaces/Chintan-Donda/KKMS-KSSW-HF/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: KKMS KSSW -emoji: 🔥 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference 
at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/CikeyQI/Yunzai/Yunzai/lib/config/redis.js b/spaces/CikeyQI/Yunzai/Yunzai/lib/config/redis.js deleted file mode 100644 index e4b84c0b575f603b0f3c70f91c0a4c1dbc62f138..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/Yunzai/Yunzai/lib/config/redis.js +++ /dev/null @@ -1,76 +0,0 @@ -import cfg from "./config.js" -import common from "../common/common.js" -import { createClient } from "redis" -import { exec } from "node:child_process" - -/** - * 初始化全局redis客户端 - */ -export default async function redisInit() { - const rc = cfg.redis - const redisUn = rc.username || "" - let redisPw = rc.password ? `:${rc.password}` : "" - if (rc.username || rc.password) - redisPw += "@" - const redisUrl = `redis://${redisUn}${redisPw}${rc.host}:${rc.port}/${rc.db}` - let client = createClient({ url: redisUrl }) - - try { - logger.info(`正在连接 ${logger.blue(redisUrl)}`) - await client.connect() - } catch (err) { - logger.error(`Redis 错误:${logger.red(err)}`) - - const cmd = "redis-server --save 900 1 --save 300 10 --daemonize yes" + await aarch64() - logger.info("正在启动 Redis...") - await execSync(cmd) - await common.sleep(1000) - - try { - client = createClient({ url: redisUrl }) - await client.connect() - } catch (err) { - logger.error(`Redis 错误:${logger.red(err)}`) - logger.error(`请先启动 Redis:${logger.blue(cmd)}`) - process.exit() - } - } - - client.on("error", async err => { - logger.error(`Redis 错误:${logger.red(err)}`) - const cmd = "redis-server --save 900 1 --save 300 10 --daemonize yes" + await aarch64() - logger.error(`请先启动 Redis:${cmd}`) - process.exit() - }) - - /** 全局变量 redis */ - global.redis = client - logger.info("Redis 连接成功") - return client -} - -async function aarch64() { - if (process.platform == "win32") - return "" - /** 判断arch */ - const arch = await execSync("uname -m") - if (arch.stdout && arch.stdout.includes("aarch64")) { - /** 判断redis版本 */ - let v = await execSync("redis-server -v") - if (v.stdout) { - v = v.stdout.match(/v=(\d)./) - /** 忽略arm警告 */ - if (v && v[1] >= 6) - return " --ignore-warnings ARM64-COW-BUG" - } - } - return "" -} - -function execSync (cmd) { - return new Promise((resolve, reject) => { - exec(cmd, (error, stdout, stderr) => { - resolve({ error, stdout, stderr }) - }) - }) -} \ No newline at end of file diff --git a/spaces/Cletrason/Cletrason-toad-mario-movie/config.py b/spaces/Cletrason/Cletrason-toad-mario-movie/config.py deleted file mode 100644 index e0c738d8cbad66bbe1666284aef926c326849701..0000000000000000000000000000000000000000 --- a/spaces/Cletrason/Cletrason-toad-mario-movie/config.py +++ /dev/null @@ -1 +0,0 @@ -save_memory = False diff --git a/spaces/Crow34/Comicdraw/README.md b/spaces/Crow34/Comicdraw/README.md deleted file mode 100644 index 6925a719ee6ea7d8e74bc7e9540838b5bb189f26..0000000000000000000000000000000000000000 --- a/spaces/Crow34/Comicdraw/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Comicdraw -emoji: 💻 -colorFrom: red -colorTo: green -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DESUCLUB/BLLAMA/README.md b/spaces/DESUCLUB/BLLAMA/README.md deleted file mode 100644 index 015ec39bc1e717522b0b9dbf89106deb414e7ef9..0000000000000000000000000000000000000000 --- a/spaces/DESUCLUB/BLLAMA/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -license: apache-2.0 -title: 'BLLAMA: ALPACA with BLIP 2' 
-sdk: gradio -emoji: 🔥 -colorFrom: red -colorTo: purple -pinned: true -app_file: generate.py ---- -## 🦙🌲🤏 BLLAMA: A BLIP2 + ALPACA-LORA Pipeline - -# Training - This is just a pipeline involving the use of both ALPACA and BLIP-2, without any prior finetuning. You can refer to the details in ALPACA_LORA's repo [here](https://github.com/tloen/alpaca-lora) and the BLIP-2 training details on their GitHub page [here](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). For the pipeline, I have used the BLIP-2 model found on HuggingSpace [here](https://huggingface.co/spaces/Salesforce/BLIP2) - - - -## Acknowledgements -Once again, I would like to credit the Salesforce team for creating BLIP2, as well as tloen, the original creator of alpaca-lora. I would also like to credit Meta, the original -creators of LLAMA, as well as the people behind the HuggingFace implementation of ALPACA \ No newline at end of file diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/E_B_D_T_.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/E_B_D_T_.py deleted file mode 100644 index 42d10700d2d93613b5b5e2ea7b7cc86d295dedb2..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/E_B_D_T_.py +++ /dev/null @@ -1,823 +0,0 @@ -from fontTools.misc import sstruct -from fontTools.misc.textTools import ( - bytechr, - byteord, - bytesjoin, - strjoin, - safeEval, - readHex, - hexStr, - deHexStr, -) -from .BitmapGlyphMetrics import ( - BigGlyphMetrics, - bigGlyphMetricsFormat, - SmallGlyphMetrics, - smallGlyphMetricsFormat, -) -from . import DefaultTable -import itertools -import os -import struct -import logging - - -log = logging.getLogger(__name__) - -ebdtTableVersionFormat = """ - > # big endian - version: 16.16F -""" - -ebdtComponentFormat = """ - > # big endian - glyphCode: H - xOffset: b - yOffset: b -""" - - -class table_E_B_D_T_(DefaultTable.DefaultTable): - - # Keep a reference to the name of the data locator table. - locatorName = "EBLC" - - # This method can be overridden in subclasses to support new formats - # without changing the other implementation. Also can be used as a - # convenience method for coverting a font file to an alternative format. - def getImageFormatClass(self, imageFormat): - return ebdt_bitmap_classes[imageFormat] - - def decompile(self, data, ttFont): - # Get the version but don't advance the slice. - # Most of the lookup for this table is done relative - # to the begining so slice by the offsets provided - # in the EBLC table. - sstruct.unpack2(ebdtTableVersionFormat, data, self) - - # Keep a dict of glyphs that have been seen so they aren't remade. - # This dict maps intervals of data to the BitmapGlyph. - glyphDict = {} - - # Pull out the EBLC table and loop through glyphs. - # A strike is a concept that spans both tables. - # The actual bitmap data is stored in the EBDT. - locator = ttFont[self.__class__.locatorName] - self.strikeData = [] - for curStrike in locator.strikes: - bitmapGlyphDict = {} - self.strikeData.append(bitmapGlyphDict) - for indexSubTable in curStrike.indexSubTables: - dataIter = zip(indexSubTable.names, indexSubTable.locations) - for curName, curLoc in dataIter: - # Don't create duplicate data entries for the same glyphs. - # Instead just use the structures that already exist if they exist. 
- if curLoc in glyphDict: - curGlyph = glyphDict[curLoc] - else: - curGlyphData = data[slice(*curLoc)] - imageFormatClass = self.getImageFormatClass( - indexSubTable.imageFormat - ) - curGlyph = imageFormatClass(curGlyphData, ttFont) - glyphDict[curLoc] = curGlyph - bitmapGlyphDict[curName] = curGlyph - - def compile(self, ttFont): - - dataList = [] - dataList.append(sstruct.pack(ebdtTableVersionFormat, self)) - dataSize = len(dataList[0]) - - # Keep a dict of glyphs that have been seen so they aren't remade. - # This dict maps the id of the BitmapGlyph to the interval - # in the data. - glyphDict = {} - - # Go through the bitmap glyph data. Just in case the data for a glyph - # changed the size metrics should be recalculated. There are a variety - # of formats and they get stored in the EBLC table. That is why - # recalculation is defered to the EblcIndexSubTable class and just - # pass what is known about bitmap glyphs from this particular table. - locator = ttFont[self.__class__.locatorName] - for curStrike, curGlyphDict in zip(locator.strikes, self.strikeData): - for curIndexSubTable in curStrike.indexSubTables: - dataLocations = [] - for curName in curIndexSubTable.names: - # Handle the data placement based on seeing the glyph or not. - # Just save a reference to the location if the glyph has already - # been saved in compile. This code assumes that glyphs will only - # be referenced multiple times from indexFormat5. By luck the - # code may still work when referencing poorly ordered fonts with - # duplicate references. If there is a font that is unlucky the - # respective compile methods for the indexSubTables will fail - # their assertions. All fonts seem to follow this assumption. - # More complicated packing may be needed if a counter-font exists. - glyph = curGlyphDict[curName] - objectId = id(glyph) - if objectId not in glyphDict: - data = glyph.compile(ttFont) - data = curIndexSubTable.padBitmapData(data) - startByte = dataSize - dataSize += len(data) - endByte = dataSize - dataList.append(data) - dataLoc = (startByte, endByte) - glyphDict[objectId] = dataLoc - else: - dataLoc = glyphDict[objectId] - dataLocations.append(dataLoc) - # Just use the new data locations in the indexSubTable. - # The respective compile implementations will take care - # of any of the problems in the convertion that may arise. - curIndexSubTable.locations = dataLocations - - return bytesjoin(dataList) - - def toXML(self, writer, ttFont): - # When exporting to XML if one of the data export formats - # requires metrics then those metrics may be in the locator. - # In this case populate the bitmaps with "export metrics". - if ttFont.bitmapGlyphDataFormat in ("row", "bitwise"): - locator = ttFont[self.__class__.locatorName] - for curStrike, curGlyphDict in zip(locator.strikes, self.strikeData): - for curIndexSubTable in curStrike.indexSubTables: - for curName in curIndexSubTable.names: - glyph = curGlyphDict[curName] - # I'm not sure which metrics have priority here. - # For now if both metrics exist go with glyph metrics. 
- if hasattr(glyph, "metrics"): - glyph.exportMetrics = glyph.metrics - else: - glyph.exportMetrics = curIndexSubTable.metrics - glyph.exportBitDepth = curStrike.bitmapSizeTable.bitDepth - - writer.simpletag("header", [("version", self.version)]) - writer.newline() - locator = ttFont[self.__class__.locatorName] - for strikeIndex, bitmapGlyphDict in enumerate(self.strikeData): - writer.begintag("strikedata", [("index", strikeIndex)]) - writer.newline() - for curName, curBitmap in bitmapGlyphDict.items(): - curBitmap.toXML(strikeIndex, curName, writer, ttFont) - writer.endtag("strikedata") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name == "header": - self.version = safeEval(attrs["version"]) - elif name == "strikedata": - if not hasattr(self, "strikeData"): - self.strikeData = [] - strikeIndex = safeEval(attrs["index"]) - - bitmapGlyphDict = {} - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - if name[4:].startswith(_bitmapGlyphSubclassPrefix[4:]): - imageFormat = safeEval(name[len(_bitmapGlyphSubclassPrefix) :]) - glyphName = attrs["name"] - imageFormatClass = self.getImageFormatClass(imageFormat) - curGlyph = imageFormatClass(None, None) - curGlyph.fromXML(name, attrs, content, ttFont) - assert glyphName not in bitmapGlyphDict, ( - "Duplicate glyphs with the same name '%s' in the same strike." - % glyphName - ) - bitmapGlyphDict[glyphName] = curGlyph - else: - log.warning("%s being ignored by %s", name, self.__class__.__name__) - - # Grow the strike data array to the appropriate size. The XML - # format allows the strike index value to be out of order. - if strikeIndex >= len(self.strikeData): - self.strikeData += [None] * (strikeIndex + 1 - len(self.strikeData)) - assert ( - self.strikeData[strikeIndex] is None - ), "Duplicate strike EBDT indices." - self.strikeData[strikeIndex] = bitmapGlyphDict - - -class EbdtComponent(object): - def toXML(self, writer, ttFont): - writer.begintag("ebdtComponent", [("name", self.name)]) - writer.newline() - for componentName in sstruct.getformat(ebdtComponentFormat)[1][1:]: - writer.simpletag(componentName, value=getattr(self, componentName)) - writer.newline() - writer.endtag("ebdtComponent") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - self.name = attrs["name"] - componentNames = set(sstruct.getformat(ebdtComponentFormat)[1][1:]) - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - if name in componentNames: - vars(self)[name] = safeEval(attrs["value"]) - else: - log.warning("unknown name '%s' being ignored by EbdtComponent.", name) - - -# Helper functions for dealing with binary. 
- - -def _data2binary(data, numBits): - binaryList = [] - for curByte in data: - value = byteord(curByte) - numBitsCut = min(8, numBits) - for i in range(numBitsCut): - if value & 0x1: - binaryList.append("1") - else: - binaryList.append("0") - value = value >> 1 - numBits -= numBitsCut - return strjoin(binaryList) - - -def _binary2data(binary): - byteList = [] - for bitLoc in range(0, len(binary), 8): - byteString = binary[bitLoc : bitLoc + 8] - curByte = 0 - for curBit in reversed(byteString): - curByte = curByte << 1 - if curBit == "1": - curByte |= 1 - byteList.append(bytechr(curByte)) - return bytesjoin(byteList) - - -def _memoize(f): - class memodict(dict): - def __missing__(self, key): - ret = f(key) - if len(key) == 1: - self[key] = ret - return ret - - return memodict().__getitem__ - - -# 00100111 -> 11100100 per byte, not to be confused with little/big endian. -# Bitmap data per byte is in the order that binary is written on the page -# with the least significant bit as far right as possible. This is the -# opposite of what makes sense algorithmically and hence this function. -@_memoize -def _reverseBytes(data): - if len(data) != 1: - return bytesjoin(map(_reverseBytes, data)) - byte = byteord(data) - result = 0 - for i in range(8): - result = result << 1 - result |= byte & 1 - byte = byte >> 1 - return bytechr(result) - - -# This section of code is for reading and writing image data to/from XML. - - -def _writeRawImageData(strikeIndex, glyphName, bitmapObject, writer, ttFont): - writer.begintag("rawimagedata") - writer.newline() - writer.dumphex(bitmapObject.imageData) - writer.endtag("rawimagedata") - writer.newline() - - -def _readRawImageData(bitmapObject, name, attrs, content, ttFont): - bitmapObject.imageData = readHex(content) - - -def _writeRowImageData(strikeIndex, glyphName, bitmapObject, writer, ttFont): - metrics = bitmapObject.exportMetrics - del bitmapObject.exportMetrics - bitDepth = bitmapObject.exportBitDepth - del bitmapObject.exportBitDepth - - writer.begintag( - "rowimagedata", bitDepth=bitDepth, width=metrics.width, height=metrics.height - ) - writer.newline() - for curRow in range(metrics.height): - rowData = bitmapObject.getRow(curRow, bitDepth=bitDepth, metrics=metrics) - writer.simpletag("row", value=hexStr(rowData)) - writer.newline() - writer.endtag("rowimagedata") - writer.newline() - - -def _readRowImageData(bitmapObject, name, attrs, content, ttFont): - bitDepth = safeEval(attrs["bitDepth"]) - metrics = SmallGlyphMetrics() - metrics.width = safeEval(attrs["width"]) - metrics.height = safeEval(attrs["height"]) - - dataRows = [] - for element in content: - if not isinstance(element, tuple): - continue - name, attr, content = element - # Chop off 'imagedata' from the tag to get just the option. - if name == "row": - dataRows.append(deHexStr(attr["value"])) - bitmapObject.setRows(dataRows, bitDepth=bitDepth, metrics=metrics) - - -def _writeBitwiseImageData(strikeIndex, glyphName, bitmapObject, writer, ttFont): - metrics = bitmapObject.exportMetrics - del bitmapObject.exportMetrics - bitDepth = bitmapObject.exportBitDepth - del bitmapObject.exportBitDepth - - # A dict for mapping binary to more readable/artistic ASCII characters. 
- binaryConv = {"0": ".", "1": "@"} - - writer.begintag( - "bitwiseimagedata", - bitDepth=bitDepth, - width=metrics.width, - height=metrics.height, - ) - writer.newline() - for curRow in range(metrics.height): - rowData = bitmapObject.getRow( - curRow, bitDepth=1, metrics=metrics, reverseBytes=True - ) - rowData = _data2binary(rowData, metrics.width) - # Make the output a readable ASCII art form. - rowData = strjoin(map(binaryConv.get, rowData)) - writer.simpletag("row", value=rowData) - writer.newline() - writer.endtag("bitwiseimagedata") - writer.newline() - - -def _readBitwiseImageData(bitmapObject, name, attrs, content, ttFont): - bitDepth = safeEval(attrs["bitDepth"]) - metrics = SmallGlyphMetrics() - metrics.width = safeEval(attrs["width"]) - metrics.height = safeEval(attrs["height"]) - - # A dict for mapping from ASCII to binary. All characters are considered - # a '1' except space, period and '0' which maps to '0'. - binaryConv = {" ": "0", ".": "0", "0": "0"} - - dataRows = [] - for element in content: - if not isinstance(element, tuple): - continue - name, attr, content = element - if name == "row": - mapParams = zip(attr["value"], itertools.repeat("1")) - rowData = strjoin(itertools.starmap(binaryConv.get, mapParams)) - dataRows.append(_binary2data(rowData)) - - bitmapObject.setRows( - dataRows, bitDepth=bitDepth, metrics=metrics, reverseBytes=True - ) - - -def _writeExtFileImageData(strikeIndex, glyphName, bitmapObject, writer, ttFont): - try: - folder = os.path.dirname(writer.file.name) - except AttributeError: - # fall back to current directory if output file's directory isn't found - folder = "." - folder = os.path.join(folder, "bitmaps") - filename = glyphName + bitmapObject.fileExtension - if not os.path.isdir(folder): - os.makedirs(folder) - folder = os.path.join(folder, "strike%d" % strikeIndex) - if not os.path.isdir(folder): - os.makedirs(folder) - - fullPath = os.path.join(folder, filename) - writer.simpletag("extfileimagedata", value=fullPath) - writer.newline() - - with open(fullPath, "wb") as file: - file.write(bitmapObject.imageData) - - -def _readExtFileImageData(bitmapObject, name, attrs, content, ttFont): - fullPath = attrs["value"] - with open(fullPath, "rb") as file: - bitmapObject.imageData = file.read() - - -# End of XML writing code. - -# Important information about the naming scheme. Used for identifying formats -# in XML. -_bitmapGlyphSubclassPrefix = "ebdt_bitmap_format_" - - -class BitmapGlyph(object): - - # For the external file format. This can be changed in subclasses. This way - # when the extfile option is turned on files have the form: glyphName.ext - # The default is just a flat binary file with no meaning. - fileExtension = ".bin" - - # Keep track of reading and writing of various forms. - xmlDataFunctions = { - "raw": (_writeRawImageData, _readRawImageData), - "row": (_writeRowImageData, _readRowImageData), - "bitwise": (_writeBitwiseImageData, _readBitwiseImageData), - "extfile": (_writeExtFileImageData, _readExtFileImageData), - } - - def __init__(self, data, ttFont): - self.data = data - self.ttFont = ttFont - # TODO Currently non-lazy decompilation is untested here... - # if not ttFont.lazy: - # self.decompile() - # del self.data - - def __getattr__(self, attr): - # Allow lazy decompile. 
- if attr[:2] == "__": - raise AttributeError(attr) - if attr == "data": - raise AttributeError(attr) - self.decompile() - del self.data - return getattr(self, attr) - - def ensureDecompiled(self, recurse=False): - if hasattr(self, "data"): - self.decompile() - del self.data - - # Not a fan of this but it is needed for safer safety checking. - def getFormat(self): - return safeEval(self.__class__.__name__[len(_bitmapGlyphSubclassPrefix) :]) - - def toXML(self, strikeIndex, glyphName, writer, ttFont): - writer.begintag(self.__class__.__name__, [("name", glyphName)]) - writer.newline() - - self.writeMetrics(writer, ttFont) - # Use the internal write method to write using the correct output format. - self.writeData(strikeIndex, glyphName, writer, ttFont) - - writer.endtag(self.__class__.__name__) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - self.readMetrics(name, attrs, content, ttFont) - for element in content: - if not isinstance(element, tuple): - continue - name, attr, content = element - if not name.endswith("imagedata"): - continue - # Chop off 'imagedata' from the tag to get just the option. - option = name[: -len("imagedata")] - assert option in self.__class__.xmlDataFunctions - self.readData(name, attr, content, ttFont) - - # Some of the glyphs have the metrics. This allows for metrics to be - # added if the glyph format has them. Default behavior is to do nothing. - def writeMetrics(self, writer, ttFont): - pass - - # The opposite of write metrics. - def readMetrics(self, name, attrs, content, ttFont): - pass - - def writeData(self, strikeIndex, glyphName, writer, ttFont): - try: - writeFunc, readFunc = self.__class__.xmlDataFunctions[ - ttFont.bitmapGlyphDataFormat - ] - except KeyError: - writeFunc = _writeRawImageData - writeFunc(strikeIndex, glyphName, self, writer, ttFont) - - def readData(self, name, attrs, content, ttFont): - # Chop off 'imagedata' from the tag to get just the option. - option = name[: -len("imagedata")] - writeFunc, readFunc = self.__class__.xmlDataFunctions[option] - readFunc(self, name, attrs, content, ttFont) - - -# A closure for creating a mixin for the two types of metrics handling. -# Most of the code is very similar so its easier to deal with here. -# Everything works just by passing the class that the mixin is for. -def _createBitmapPlusMetricsMixin(metricsClass): - # Both metrics names are listed here to make meaningful error messages. - metricStrings = [BigGlyphMetrics.__name__, SmallGlyphMetrics.__name__] - curMetricsName = metricsClass.__name__ - # Find which metrics this is for and determine the opposite name. - metricsId = metricStrings.index(curMetricsName) - oppositeMetricsName = metricStrings[1 - metricsId] - - class BitmapPlusMetricsMixin(object): - def writeMetrics(self, writer, ttFont): - self.metrics.toXML(writer, ttFont) - - def readMetrics(self, name, attrs, content, ttFont): - for element in content: - if not isinstance(element, tuple): - continue - name, attrs, content = element - if name == curMetricsName: - self.metrics = metricsClass() - self.metrics.fromXML(name, attrs, content, ttFont) - elif name == oppositeMetricsName: - log.warning( - "Warning: %s being ignored in format %d.", - oppositeMetricsName, - self.getFormat(), - ) - - return BitmapPlusMetricsMixin - - -# Since there are only two types of mixin's just create them here. 
-BitmapPlusBigMetricsMixin = _createBitmapPlusMetricsMixin(BigGlyphMetrics) -BitmapPlusSmallMetricsMixin = _createBitmapPlusMetricsMixin(SmallGlyphMetrics) - -# Data that is bit aligned can be tricky to deal with. These classes implement -# helper functionality for dealing with the data and getting a particular row -# of bitwise data. Also helps implement fancy data export/import in XML. -class BitAlignedBitmapMixin(object): - def _getBitRange(self, row, bitDepth, metrics): - rowBits = bitDepth * metrics.width - bitOffset = row * rowBits - return (bitOffset, bitOffset + rowBits) - - def getRow(self, row, bitDepth=1, metrics=None, reverseBytes=False): - if metrics is None: - metrics = self.metrics - assert 0 <= row and row < metrics.height, "Illegal row access in bitmap" - - # Loop through each byte. This can cover two bytes in the original data or - # a single byte if things happen to be aligned. The very last entry might - # not be aligned so take care to trim the binary data to size and pad with - # zeros in the row data. Bit aligned data is somewhat tricky. - # - # Example of data cut. Data cut represented in x's. - # '|' represents byte boundary. - # data = ...0XX|XXXXXX00|000... => XXXXXXXX - # or - # data = ...0XX|XXXX0000|000... => XXXXXX00 - # or - # data = ...000|XXXXXXXX|000... => XXXXXXXX - # or - # data = ...000|00XXXX00|000... => XXXX0000 - # - dataList = [] - bitRange = self._getBitRange(row, bitDepth, metrics) - stepRange = bitRange + (8,) - for curBit in range(*stepRange): - endBit = min(curBit + 8, bitRange[1]) - numBits = endBit - curBit - cutPoint = curBit % 8 - firstByteLoc = curBit // 8 - secondByteLoc = endBit // 8 - if firstByteLoc < secondByteLoc: - numBitsCut = 8 - cutPoint - else: - numBitsCut = endBit - curBit - curByte = _reverseBytes(self.imageData[firstByteLoc]) - firstHalf = byteord(curByte) >> cutPoint - firstHalf = ((1 << numBitsCut) - 1) & firstHalf - newByte = firstHalf - if firstByteLoc < secondByteLoc and secondByteLoc < len(self.imageData): - curByte = _reverseBytes(self.imageData[secondByteLoc]) - secondHalf = byteord(curByte) << numBitsCut - newByte = (firstHalf | secondHalf) & ((1 << numBits) - 1) - dataList.append(bytechr(newByte)) - - # The way the data is kept is opposite the algorithm used. - data = bytesjoin(dataList) - if not reverseBytes: - data = _reverseBytes(data) - return data - - def setRows(self, dataRows, bitDepth=1, metrics=None, reverseBytes=False): - if metrics is None: - metrics = self.metrics - if not reverseBytes: - dataRows = list(map(_reverseBytes, dataRows)) - - # Keep track of a list of ordinal values as they are easier to modify - # than a list of strings. Map to actual strings later. 
- numBytes = (self._getBitRange(len(dataRows), bitDepth, metrics)[0] + 7) // 8 - ordDataList = [0] * numBytes - for row, data in enumerate(dataRows): - bitRange = self._getBitRange(row, bitDepth, metrics) - stepRange = bitRange + (8,) - for curBit, curByte in zip(range(*stepRange), data): - endBit = min(curBit + 8, bitRange[1]) - cutPoint = curBit % 8 - firstByteLoc = curBit // 8 - secondByteLoc = endBit // 8 - if firstByteLoc < secondByteLoc: - numBitsCut = 8 - cutPoint - else: - numBitsCut = endBit - curBit - curByte = byteord(curByte) - firstByte = curByte & ((1 << numBitsCut) - 1) - ordDataList[firstByteLoc] |= firstByte << cutPoint - if firstByteLoc < secondByteLoc and secondByteLoc < numBytes: - secondByte = (curByte >> numBitsCut) & ((1 << 8 - numBitsCut) - 1) - ordDataList[secondByteLoc] |= secondByte - - # Save the image data with the bits going the correct way. - self.imageData = _reverseBytes(bytesjoin(map(bytechr, ordDataList))) - - -class ByteAlignedBitmapMixin(object): - def _getByteRange(self, row, bitDepth, metrics): - rowBytes = (bitDepth * metrics.width + 7) // 8 - byteOffset = row * rowBytes - return (byteOffset, byteOffset + rowBytes) - - def getRow(self, row, bitDepth=1, metrics=None, reverseBytes=False): - if metrics is None: - metrics = self.metrics - assert 0 <= row and row < metrics.height, "Illegal row access in bitmap" - byteRange = self._getByteRange(row, bitDepth, metrics) - data = self.imageData[slice(*byteRange)] - if reverseBytes: - data = _reverseBytes(data) - return data - - def setRows(self, dataRows, bitDepth=1, metrics=None, reverseBytes=False): - if metrics is None: - metrics = self.metrics - if reverseBytes: - dataRows = map(_reverseBytes, dataRows) - self.imageData = bytesjoin(dataRows) - - -class ebdt_bitmap_format_1( - ByteAlignedBitmapMixin, BitmapPlusSmallMetricsMixin, BitmapGlyph -): - def decompile(self): - self.metrics = SmallGlyphMetrics() - dummy, data = sstruct.unpack2(smallGlyphMetricsFormat, self.data, self.metrics) - self.imageData = data - - def compile(self, ttFont): - data = sstruct.pack(smallGlyphMetricsFormat, self.metrics) - return data + self.imageData - - -class ebdt_bitmap_format_2( - BitAlignedBitmapMixin, BitmapPlusSmallMetricsMixin, BitmapGlyph -): - def decompile(self): - self.metrics = SmallGlyphMetrics() - dummy, data = sstruct.unpack2(smallGlyphMetricsFormat, self.data, self.metrics) - self.imageData = data - - def compile(self, ttFont): - data = sstruct.pack(smallGlyphMetricsFormat, self.metrics) - return data + self.imageData - - -class ebdt_bitmap_format_5(BitAlignedBitmapMixin, BitmapGlyph): - def decompile(self): - self.imageData = self.data - - def compile(self, ttFont): - return self.imageData - - -class ebdt_bitmap_format_6( - ByteAlignedBitmapMixin, BitmapPlusBigMetricsMixin, BitmapGlyph -): - def decompile(self): - self.metrics = BigGlyphMetrics() - dummy, data = sstruct.unpack2(bigGlyphMetricsFormat, self.data, self.metrics) - self.imageData = data - - def compile(self, ttFont): - data = sstruct.pack(bigGlyphMetricsFormat, self.metrics) - return data + self.imageData - - -class ebdt_bitmap_format_7( - BitAlignedBitmapMixin, BitmapPlusBigMetricsMixin, BitmapGlyph -): - def decompile(self): - self.metrics = BigGlyphMetrics() - dummy, data = sstruct.unpack2(bigGlyphMetricsFormat, self.data, self.metrics) - self.imageData = data - - def compile(self, ttFont): - data = sstruct.pack(bigGlyphMetricsFormat, self.metrics) - return data + self.imageData - - -class ComponentBitmapGlyph(BitmapGlyph): - def toXML(self, 
strikeIndex, glyphName, writer, ttFont): - writer.begintag(self.__class__.__name__, [("name", glyphName)]) - writer.newline() - - self.writeMetrics(writer, ttFont) - - writer.begintag("components") - writer.newline() - for curComponent in self.componentArray: - curComponent.toXML(writer, ttFont) - writer.endtag("components") - writer.newline() - - writer.endtag(self.__class__.__name__) - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - self.readMetrics(name, attrs, content, ttFont) - for element in content: - if not isinstance(element, tuple): - continue - name, attr, content = element - if name == "components": - self.componentArray = [] - for compElement in content: - if not isinstance(compElement, tuple): - continue - name, attrs, content = compElement - if name == "ebdtComponent": - curComponent = EbdtComponent() - curComponent.fromXML(name, attrs, content, ttFont) - self.componentArray.append(curComponent) - else: - log.warning("'%s' being ignored in component array.", name) - - -class ebdt_bitmap_format_8(BitmapPlusSmallMetricsMixin, ComponentBitmapGlyph): - def decompile(self): - self.metrics = SmallGlyphMetrics() - dummy, data = sstruct.unpack2(smallGlyphMetricsFormat, self.data, self.metrics) - data = data[1:] - - (numComponents,) = struct.unpack(">H", data[:2]) - data = data[2:] - self.componentArray = [] - for i in range(numComponents): - curComponent = EbdtComponent() - dummy, data = sstruct.unpack2(ebdtComponentFormat, data, curComponent) - curComponent.name = self.ttFont.getGlyphName(curComponent.glyphCode) - self.componentArray.append(curComponent) - - def compile(self, ttFont): - dataList = [] - dataList.append(sstruct.pack(smallGlyphMetricsFormat, self.metrics)) - dataList.append(b"\0") - dataList.append(struct.pack(">H", len(self.componentArray))) - for curComponent in self.componentArray: - curComponent.glyphCode = ttFont.getGlyphID(curComponent.name) - dataList.append(sstruct.pack(ebdtComponentFormat, curComponent)) - return bytesjoin(dataList) - - -class ebdt_bitmap_format_9(BitmapPlusBigMetricsMixin, ComponentBitmapGlyph): - def decompile(self): - self.metrics = BigGlyphMetrics() - dummy, data = sstruct.unpack2(bigGlyphMetricsFormat, self.data, self.metrics) - (numComponents,) = struct.unpack(">H", data[:2]) - data = data[2:] - self.componentArray = [] - for i in range(numComponents): - curComponent = EbdtComponent() - dummy, data = sstruct.unpack2(ebdtComponentFormat, data, curComponent) - curComponent.name = self.ttFont.getGlyphName(curComponent.glyphCode) - self.componentArray.append(curComponent) - - def compile(self, ttFont): - dataList = [] - dataList.append(sstruct.pack(bigGlyphMetricsFormat, self.metrics)) - dataList.append(struct.pack(">H", len(self.componentArray))) - for curComponent in self.componentArray: - curComponent.glyphCode = ttFont.getGlyphID(curComponent.name) - dataList.append(sstruct.pack(ebdtComponentFormat, curComponent)) - return bytesjoin(dataList) - - -# Dictionary of bitmap formats to the class representing that format -# currently only the ones listed in this map are the ones supported. 
-ebdt_bitmap_classes = { - 1: ebdt_bitmap_format_1, - 2: ebdt_bitmap_format_2, - 5: ebdt_bitmap_format_5, - 6: ebdt_bitmap_format_6, - 7: ebdt_bitmap_format_7, - 8: ebdt_bitmap_format_8, - 9: ebdt_bitmap_format_9, -} diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/button.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/button.py deleted file mode 100644 index 4d932c75becc1a324630fe6b1ad2442e229500b0..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/components/button.py +++ /dev/null @@ -1,121 +0,0 @@ -"""gr.Button() component.""" - -from __future__ import annotations - -from typing import Callable, Literal - -from gradio_client.documentation import document, set_documentation_group -from gradio_client.serializing import StringSerializable - -from gradio.components.base import Component, IOComponent, _Keywords -from gradio.deprecation import warn_deprecation, warn_style_method_deprecation -from gradio.events import Clickable - -set_documentation_group("component") - - -@document() -class Button(Clickable, IOComponent, StringSerializable): - """ - Used to create a button, that can be assigned arbitrary click() events. The label (value) of the button can be used as an input or set via the output of a function. - - Preprocessing: passes the button value as a {str} into the function - Postprocessing: expects a {str} to be returned from a function, which is set as the label of the button - Demos: blocks_inputs, blocks_kinematics - """ - - def __init__( - self, - value: str | Callable = "Run", - *, - variant: Literal["primary", "secondary", "stop"] = "secondary", - size: Literal["sm", "lg"] | None = None, - visible: bool = True, - interactive: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - scale: int | None = None, - min_width: int | None = None, - **kwargs, - ): - """ - Parameters: - value: Default text for the button to display. If callable, the function will be called whenever the app loads to set the initial value of the component. - variant: 'primary' for main call-to-action, 'secondary' for a more subdued style, 'stop' for a stop button. - size: Size of the button. Can be "sm" or "lg". - visible: If False, component will be hidden. - interactive: If False, the Button will be in a disabled state. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. - scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer. - min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first. 
- """ - IOComponent.__init__( - self, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - value=value, - interactive=interactive, - scale=scale, - min_width=min_width, - **kwargs, - ) - if variant == "plain": - warn_deprecation("'plain' variant deprecated, using 'secondary' instead.") - variant = "secondary" - self.variant = variant - self.size = size - - def get_config(self): - return { - "value": self.value, - "variant": self.variant, - "size": self.size, - "interactive": self.interactive, - "scale": self.scale, - "min_width": self.min_width, - **Component.get_config(self), - } - - @staticmethod - def update( - value: str | Literal[_Keywords.NO_VALUE] | None = _Keywords.NO_VALUE, - variant: Literal["primary", "secondary", "stop"] | None = None, - size: Literal["sm", "lg"] | None = None, - visible: bool | None = None, - interactive: bool | None = None, - scale: int | None = None, - min_width: int | None = None, - ): - return { - "variant": variant, - "size": size, - "visible": visible, - "value": value, - "interactive": interactive, - "scale": scale, - "min_width": min_width, - "__type__": "update", - } - - def style( - self, - *, - full_width: bool | None = None, - size: Literal["sm", "lg"] | None = None, - **kwargs, - ): - """ - This method is deprecated. Please set these arguments in the constructor instead. - """ - warn_style_method_deprecation() - if full_width is not None: - warn_deprecation( - "Use `scale` in place of full_width in the constructor. " - "scale=1 will make the button expand, whereas 0 will not." - ) - self.scale = 1 if full_width else None - if size is not None: - self.size = size - return self diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/UploadText-f599be03.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/UploadText-f599be03.js deleted file mode 100644 index 1881b800fea5ad4be06623d130edb8c06c14ad29..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/UploadText-f599be03.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as h,e as S,s as T,N as g,P as c,O as y,K as U,p as q,M as l,R as v,n as b,A as w,a4 as A}from"./index-1d65707a.js";import{X as C}from"./Blocks-c9e1499d.js";function K(t){let e,o=t[1](t[2][t[0]])+"",i,r,s,n,_=t[1]("or")+"",d,m,k,f=t[1]("interface.click_to_upload")+"",u;return{c(){e=g("div"),i=c(o),r=y(),s=g("span"),n=c("- "),d=c(_),m=c(" -"),k=y(),u=c(f),U(s,"class","or svelte-1ck5uk8"),U(e,"class","wrap svelte-1ck5uk8")},m(a,p){q(a,e,p),l(e,i),l(e,r),l(e,s),l(s,n),l(s,d),l(s,m),l(e,k),l(e,u)},p(a,[p]){p&3&&o!==(o=a[1](a[2][a[0]])+"")&&v(i,o),p&2&&_!==(_=a[1]("or")+"")&&v(d,_),p&2&&f!==(f=a[1]("interface.click_to_upload")+"")&&v(u,f)},i:b,o:b,d(a){a&&w(e)}}}function M(t,e,o){let i;A(t,C,n=>o(1,i=n));let{type:r="file"}=e;const s={image:"interface.drop_image",video:"interface.drop_video",audio:"interface.drop_audio",file:"interface.drop_file",csv:"interface.drop_csv"};return t.$$set=n=>{"type"in n&&o(0,r=n.type)},[r,i,s]}class P extends h{constructor(e){super(),S(this,e,M,K,T,{type:0})}}export{P as U}; -//# sourceMappingURL=UploadText-f599be03.js.map diff --git a/spaces/DaFujaTyping/hf-Chat-ui/src/styles/main.css b/spaces/DaFujaTyping/hf-Chat-ui/src/styles/main.css deleted file mode 100644 index 6ea57c50974dab960f23ce8440bfd576f10ddb52..0000000000000000000000000000000000000000 --- a/spaces/DaFujaTyping/hf-Chat-ui/src/styles/main.css +++ /dev/null @@ -1,17 +0,0 @@ -@import 
"./highlight-js.css"; - -@tailwind base; -@tailwind components; -@tailwind utilities; - -@layer components { - .btn { - @apply inline-flex flex-shrink-0 cursor-pointer select-none items-center justify-center whitespace-nowrap outline-none transition-all focus:ring disabled:cursor-default; - } -} - -@layer utilities { - .scrollbar-custom { - @apply scrollbar-thin scrollbar-track-transparent scrollbar-thumb-black/10 scrollbar-thumb-rounded-full scrollbar-w-1 hover:scrollbar-thumb-black/20 dark:scrollbar-thumb-white/10 dark:hover:scrollbar-thumb-white/20; - } -} diff --git a/spaces/DragGan/DragGan-Inversion/viz/capture_widget.py b/spaces/DragGan/DragGan-Inversion/viz/capture_widget.py deleted file mode 100644 index 79cc4f80c5bba2cf1e67593e85fb85cd7963ed89..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/viz/capture_widget.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -import os -import re -import numpy as np -import imgui -import PIL.Image -from gui_utils import imgui_utils -from . import renderer -import torch -import torchvision - -# ---------------------------------------------------------------------------- - - -class CaptureWidget: - def __init__(self, viz): - self.viz = viz - self.path = os.path.abspath(os.path.join( - os.path.dirname(__file__), '..', '_screenshots')) - self.dump_image = False - self.dump_gui = False - self.defer_frames = 0 - self.disabled_time = 0 - - def dump_png(self, image): - viz = self.viz - try: - _height, _width, channels = image.shape - print(viz.result) - assert image.dtype == np.uint8 - os.makedirs(self.path, exist_ok=True) - file_id = 0 - for entry in os.scandir(self.path): - if entry.is_file(): - match = re.fullmatch(r'(\d+).*', entry.name) - if match: - file_id = max(file_id, int(match.group(1)) + 1) - if channels == 1: - pil_image = PIL.Image.fromarray(image[:, :, 0], 'L') - else: - pil_image = PIL.Image.fromarray(image[:, :, :3], 'RGB') - pil_image.save(os.path.join(self.path, f'{file_id:05d}.png')) - np.save(os.path.join( - self.path, f'{file_id:05d}.npy'), viz.result.w) - except: - viz.result.error = renderer.CapturedException() - - @imgui_utils.scoped_by_object_id - def __call__(self, show=True): - viz = self.viz - if show: - with imgui_utils.grayed_out(self.disabled_time != 0): - imgui.text('Capture') - imgui.same_line(viz.label_w) - - _changed, self.path = imgui_utils.input_text('##path', self.path, 1024, - flags=( - imgui.INPUT_TEXT_AUTO_SELECT_ALL | imgui.INPUT_TEXT_ENTER_RETURNS_TRUE), - width=(-1), - help_text='PATH') - if imgui.is_item_hovered() and not imgui.is_item_active() and self.path != '': - imgui.set_tooltip(self.path) - imgui.text(' ') - imgui.same_line(viz.label_w) - if imgui_utils.button('Save image', width=viz.button_w, enabled=(self.disabled_time == 0 and 'image' in viz.result)): - self.dump_image = True - self.defer_frames = 2 - self.disabled_time = 0.5 - imgui.same_line() - if imgui_utils.button('Save GUI', width=viz.button_w, enabled=(self.disabled_time == 0)): - self.dump_gui = True - self.defer_frames = 2 - self.disabled_time = 0.5 - - self.disabled_time = 
max(self.disabled_time - viz.frame_delta, 0) - if self.defer_frames > 0: - self.defer_frames -= 1 - elif self.dump_image: - if 'image' in viz.result: - self.dump_png(viz.result.image) - self.dump_image = False - elif self.dump_gui: - viz.capture_next_frame() - self.dump_gui = False - captured_frame = viz.pop_captured_frame() - if captured_frame is not None: - self.dump_png(captured_frame) - -# ---------------------------------------------------------------------------- diff --git a/spaces/DragGan/DragGan/training/dataset.py b/spaces/DragGan/DragGan/training/dataset.py deleted file mode 100644 index 68c356e3b89b63211e0b4bdde88babcffd26d59e..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan/training/dataset.py +++ /dev/null @@ -1,238 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Streaming images and labels from datasets created with dataset_tool.py.""" - -import os -import numpy as np -import zipfile -import PIL.Image -import json -import torch -import dnnlib - -try: - import pyspng -except ImportError: - pyspng = None - -#---------------------------------------------------------------------------- - -class Dataset(torch.utils.data.Dataset): - def __init__(self, - name, # Name of the dataset. - raw_shape, # Shape of the raw image data (NCHW). - max_size = None, # Artificially limit the size of the dataset. None = no limit. Applied before xflip. - use_labels = False, # Enable conditioning labels? False = label dimension is zero. - xflip = False, # Artificially double the size of the dataset via x-flips. Applied after max_size. - random_seed = 0, # Random seed to use when applying max_size. - ): - self._name = name - self._raw_shape = list(raw_shape) - self._use_labels = use_labels - self._raw_labels = None - self._label_shape = None - - # Apply max_size. - self._raw_idx = np.arange(self._raw_shape[0], dtype=np.int64) - if (max_size is not None) and (self._raw_idx.size > max_size): - np.random.RandomState(random_seed).shuffle(self._raw_idx) - self._raw_idx = np.sort(self._raw_idx[:max_size]) - - # Apply xflip. 
- self._xflip = np.zeros(self._raw_idx.size, dtype=np.uint8) - if xflip: - self._raw_idx = np.tile(self._raw_idx, 2) - self._xflip = np.concatenate([self._xflip, np.ones_like(self._xflip)]) - - def _get_raw_labels(self): - if self._raw_labels is None: - self._raw_labels = self._load_raw_labels() if self._use_labels else None - if self._raw_labels is None: - self._raw_labels = np.zeros([self._raw_shape[0], 0], dtype=np.float32) - assert isinstance(self._raw_labels, np.ndarray) - assert self._raw_labels.shape[0] == self._raw_shape[0] - assert self._raw_labels.dtype in [np.float32, np.int64] - if self._raw_labels.dtype == np.int64: - assert self._raw_labels.ndim == 1 - assert np.all(self._raw_labels >= 0) - return self._raw_labels - - def close(self): # to be overridden by subclass - pass - - def _load_raw_image(self, raw_idx): # to be overridden by subclass - raise NotImplementedError - - def _load_raw_labels(self): # to be overridden by subclass - raise NotImplementedError - - def __getstate__(self): - return dict(self.__dict__, _raw_labels=None) - - def __del__(self): - try: - self.close() - except: - pass - - def __len__(self): - return self._raw_idx.size - - def __getitem__(self, idx): - image = self._load_raw_image(self._raw_idx[idx]) - assert isinstance(image, np.ndarray) - assert list(image.shape) == self.image_shape - assert image.dtype == np.uint8 - if self._xflip[idx]: - assert image.ndim == 3 # CHW - image = image[:, :, ::-1] - return image.copy(), self.get_label(idx) - - def get_label(self, idx): - label = self._get_raw_labels()[self._raw_idx[idx]] - if label.dtype == np.int64: - onehot = np.zeros(self.label_shape, dtype=np.float32) - onehot[label] = 1 - label = onehot - return label.copy() - - def get_details(self, idx): - d = dnnlib.EasyDict() - d.raw_idx = int(self._raw_idx[idx]) - d.xflip = (int(self._xflip[idx]) != 0) - d.raw_label = self._get_raw_labels()[d.raw_idx].copy() - return d - - @property - def name(self): - return self._name - - @property - def image_shape(self): - return list(self._raw_shape[1:]) - - @property - def num_channels(self): - assert len(self.image_shape) == 3 # CHW - return self.image_shape[0] - - @property - def resolution(self): - assert len(self.image_shape) == 3 # CHW - assert self.image_shape[1] == self.image_shape[2] - return self.image_shape[1] - - @property - def label_shape(self): - if self._label_shape is None: - raw_labels = self._get_raw_labels() - if raw_labels.dtype == np.int64: - self._label_shape = [int(np.max(raw_labels)) + 1] - else: - self._label_shape = raw_labels.shape[1:] - return list(self._label_shape) - - @property - def label_dim(self): - assert len(self.label_shape) == 1 - return self.label_shape[0] - - @property - def has_labels(self): - return any(x != 0 for x in self.label_shape) - - @property - def has_onehot_labels(self): - return self._get_raw_labels().dtype == np.int64 - -#---------------------------------------------------------------------------- - -class ImageFolderDataset(Dataset): - def __init__(self, - path, # Path to directory or zip. - resolution = None, # Ensure specific resolution, None = highest available. - **super_kwargs, # Additional arguments for the Dataset base class. 
- ): - self._path = path - self._zipfile = None - - if os.path.isdir(self._path): - self._type = 'dir' - self._all_fnames = {os.path.relpath(os.path.join(root, fname), start=self._path) for root, _dirs, files in os.walk(self._path) for fname in files} - elif self._file_ext(self._path) == '.zip': - self._type = 'zip' - self._all_fnames = set(self._get_zipfile().namelist()) - else: - raise IOError('Path must point to a directory or zip') - - PIL.Image.init() - self._image_fnames = sorted(fname for fname in self._all_fnames if self._file_ext(fname) in PIL.Image.EXTENSION) - if len(self._image_fnames) == 0: - raise IOError('No image files found in the specified path') - - name = os.path.splitext(os.path.basename(self._path))[0] - raw_shape = [len(self._image_fnames)] + list(self._load_raw_image(0).shape) - if resolution is not None and (raw_shape[2] != resolution or raw_shape[3] != resolution): - raise IOError('Image files do not match the specified resolution') - super().__init__(name=name, raw_shape=raw_shape, **super_kwargs) - - @staticmethod - def _file_ext(fname): - return os.path.splitext(fname)[1].lower() - - def _get_zipfile(self): - assert self._type == 'zip' - if self._zipfile is None: - self._zipfile = zipfile.ZipFile(self._path) - return self._zipfile - - def _open_file(self, fname): - if self._type == 'dir': - return open(os.path.join(self._path, fname), 'rb') - if self._type == 'zip': - return self._get_zipfile().open(fname, 'r') - return None - - def close(self): - try: - if self._zipfile is not None: - self._zipfile.close() - finally: - self._zipfile = None - - def __getstate__(self): - return dict(super().__getstate__(), _zipfile=None) - - def _load_raw_image(self, raw_idx): - fname = self._image_fnames[raw_idx] - with self._open_file(fname) as f: - if pyspng is not None and self._file_ext(fname) == '.png': - image = pyspng.load(f.read()) - else: - image = np.array(PIL.Image.open(f)) - if image.ndim == 2: - image = image[:, :, np.newaxis] # HW => HWC - image = image.transpose(2, 0, 1) # HWC => CHW - return image - - def _load_raw_labels(self): - fname = 'dataset.json' - if fname not in self._all_fnames: - return None - with self._open_file(fname) as f: - labels = json.load(f)['labels'] - if labels is None: - return None - labels = dict(labels) - labels = [labels[fname.replace('\\', '/')] for fname in self._image_fnames] - labels = np.array(labels) - labels = labels.astype({1: np.int64, 2: np.float32}[labels.ndim]) - return labels - -#---------------------------------------------------------------------------- diff --git a/spaces/Dusan/clickbaitonator/fudge/util.py b/spaces/Dusan/clickbaitonator/fudge/util.py deleted file mode 100644 index dd4d20bfc4705b7f48c0cf883c0471376a870ab6..0000000000000000000000000000000000000000 --- a/spaces/Dusan/clickbaitonator/fudge/util.py +++ /dev/null @@ -1,110 +0,0 @@ -import os -import time -import sys -from contextlib import contextmanager - -import torch - -from fudge.constants import * - -@contextmanager -def suppress_stdout(): - with open(os.devnull, "w") as devnull: - old_stdout = sys.stdout - sys.stdout = devnull - try: - yield - finally: - sys.stdout = old_stdout - - -def save_checkpoint(state, save_path): - os.makedirs(os.path.dirname(save_path), exist_ok=True) - torch.save(state, save_path) - - -def freeze(module): - for param in module.parameters(): - param.requires_grad = False - - -def num_params(model): - return sum(p.numel() for p in model.parameters() if p.requires_grad) - - -def clamp(x, limit): - return max(-limit, min(x, 
limit)) - - -def pad_to_length(tensor, length, dim, value=0): - """ - Pad tensor to given length in given dim using given value (value should be numeric) - """ - assert tensor.size(dim) <= length - if tensor.size(dim) < length: - zeros_shape = list(tensor.shape) - zeros_shape[dim] = length - tensor.size(dim) - zeros_shape = tuple(zeros_shape) - return torch.cat([tensor, torch.zeros(zeros_shape).type(tensor.type()).to(tensor.device).fill_(value)], dim=dim) - else: - return tensor - - -def pad_mask(lengths: torch.LongTensor) -> torch.ByteTensor: - """ - Create a mask of seq x batch where seq = max(lengths), with 0 in padding locations and 1 otherwise. - """ - # lengths: bs. Ex: [2, 3, 1] - max_seqlen = torch.max(lengths) - expanded_lengths = lengths.unsqueeze(0).repeat((max_seqlen, 1)) # [[2, 3, 1], [2, 3, 1], [2, 3, 1]] - indices = torch.arange(max_seqlen).unsqueeze(1).repeat((1, lengths.size(0))).to(lengths.device) # [[0, 0, 0], [1, 1, 1], [2, 2, 2]] - - return expanded_lengths > indices # pad locations are 0. #[[1, 1, 1], [1, 1, 0], [0, 1, 0]]. seqlen x bs - - -class ProgressMeter(object): - """ - Display meter - """ - def __init__(self, num_batches, meters, prefix=""): - self.batch_fmtstr = self._get_batch_fmtstr(num_batches) - self.meters = meters - self.prefix = prefix - - def display(self, batch): - entries = [self.prefix + self.batch_fmtstr.format(batch)] - entries.append(time.ctime(time.time())) - entries += [str(meter) for meter in self.meters] - print('\t'.join(entries)) - - def _get_batch_fmtstr(self, num_batches): - num_digits = len(str(num_batches // 1)) - fmt = '{:' + str(num_digits) + 'd}' - return '[' + fmt + '/' + fmt.format(num_batches) + ']' - - -class AverageMeter(object): - """ - Display meter - Computes and stores the average and current value - """ - def __init__(self, name, fmt=':f'): - self.name = name - self.fmt = fmt - self.reset() - - def reset(self): - self.val = 0 - self.avg = 0 - self.sum = 0 - self.count = 0 - - def update(self, val, n=1): - self.val = val - self.sum += val * n - self.count += n - self.avg = self.sum / self.count - - def __str__(self): - fmtstr = '{name} {val' + self.fmt + '} ({avg' + self.fmt + '})' - return fmtstr.format(**self.__dict__) \ No newline at end of file diff --git a/spaces/ECCV2022/bytetrack/yolox/layers/csrc/vision.cpp b/spaces/ECCV2022/bytetrack/yolox/layers/csrc/vision.cpp deleted file mode 100644 index 7663d0faf5c58542624d2f01730618b9aa9d4a25..0000000000000000000000000000000000000000 --- a/spaces/ECCV2022/bytetrack/yolox/layers/csrc/vision.cpp +++ /dev/null @@ -1,13 +0,0 @@ -#include "cocoeval/cocoeval.h" - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("COCOevalAccumulate", &COCOeval::Accumulate, "COCOeval::Accumulate"); - m.def( - "COCOevalEvaluateImages", - &COCOeval::EvaluateImages, - "COCOeval::EvaluateImages"); - pybind11::class_(m, "InstanceAnnotation") - .def(pybind11::init()); - pybind11::class_(m, "ImageEvaluation") - .def(pybind11::init<>()); -} diff --git a/spaces/EDGAhab/Aatrox-Talking/transforms.py b/spaces/EDGAhab/Aatrox-Talking/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/EDGAhab/Aatrox-Talking/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - 
unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + 
F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/Egrt/LicenseGAN/esrgan.py b/spaces/Egrt/LicenseGAN/esrgan.py deleted file mode 100644 index 913b4086f840ea16a7197fc07dd722dbd8e1843c..0000000000000000000000000000000000000000 --- a/spaces/Egrt/LicenseGAN/esrgan.py +++ /dev/null @@ -1,85 +0,0 @@ -import numpy as np -import torch -import torch.backends.cudnn as cudnn -from PIL import Image -from nets.SwinIR import Generator -from utils.utils import cvtColor, preprocess_input - - -class ESRGAN(object): - #-----------------------------------------# - # 注意修改model_path - #-----------------------------------------# - _defaults = { - #-----------------------------------------------# - # model_path指向logs文件夹下的权值文件 - #-----------------------------------------------# - "model_path" : 'model_data/Generator_SwinIR.pth', - 
#-----------------------------------------------# - # 上采样的倍数,和训练时一样 - #-----------------------------------------------# - "scale_factor" : 4, - #-----------------------------------------------# - # hr_shape - #-----------------------------------------------# - "hr_shape" : [128, 224], - #-------------------------------# - # 是否使用Cuda - # 没有GPU可以设置成False - #-------------------------------# - "cuda" : False, - } - - #---------------------------------------------------# - # 初始化SRGAN - #---------------------------------------------------# - def __init__(self, **kwargs): - self.__dict__.update(self._defaults) - for name, value in kwargs.items(): - setattr(self, name, value) - self.generate() - - def generate(self): - # self.net = Generator(self.scale_factor) - self.net = Generator(upscale=self.scale_factor, img_size=tuple(self.hr_shape), - window_size=8, img_range=1., depths=[3, 3, 3, 3], - embed_dim=60, num_heads=[3, 3, 3, 3], mlp_ratio=2, upsampler='pixelshuffledirect') - - device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - self.net.load_state_dict(torch.load(self.model_path, map_location=device)) - self.net = self.net.eval() - print('{} model, and classes loaded.'.format(self.model_path)) - - if self.cuda: - self.net = torch.nn.DataParallel(self.net) - cudnn.benchmark = True - self.net = self.net.cuda() - - def generate_1x1_image(self, image): - #---------------------------------------------------------# - # 在这里将图像转换成RGB图像,防止灰度图在预测时报错。 - # 代码仅仅支持RGB图像的预测,所有其它类型的图像都会转化成RGB - #---------------------------------------------------------# - image = cvtColor(image) - #---------------------------------------------------------# - # 添加上batch_size维度,并进行归一化 - #---------------------------------------------------------# - image_data = np.expand_dims(np.transpose(preprocess_input(np.array(image, dtype=np.float32), [0.5,0.5,0.5], [0.5,0.5,0.5]), [2,0,1]), 0) - - with torch.no_grad(): - image_data = torch.from_numpy(image_data).type(torch.FloatTensor) - if self.cuda: - image_data = image_data.cuda() - - #---------------------------------------------------------# - # 将图像输入网络当中进行预测! - #---------------------------------------------------------# - hr_image = self.net(image_data)[0] - #---------------------------------------------------------# - # 将归一化的结果再转成rgb格式 - #---------------------------------------------------------# - hr_image = (hr_image.cpu().data.numpy().transpose(1, 2, 0) * 0.5 + 0.5) - hr_image = np.clip(hr_image * 255, 0, 255) - - hr_image = Image.fromarray(np.uint8(hr_image)) - return hr_image diff --git a/spaces/ElainaFanBoy/MusicGen/audiocraft/modules/conditioners.py b/spaces/ElainaFanBoy/MusicGen/audiocraft/modules/conditioners.py deleted file mode 100644 index 82792316024b88d4c5c38b0a28f443627771d509..0000000000000000000000000000000000000000 --- a/spaces/ElainaFanBoy/MusicGen/audiocraft/modules/conditioners.py +++ /dev/null @@ -1,990 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -from collections import defaultdict -from copy import deepcopy -from dataclasses import dataclass, field -from itertools import chain -import logging -import math -import random -import re -import typing as tp -import warnings - -from einops import rearrange -from num2words import num2words -import spacy -from transformers import T5EncoderModel, T5Tokenizer # type: ignore -import torchaudio -import torch -from torch import nn -from torch import Tensor -import torch.nn.functional as F -from torch.nn.utils.rnn import pad_sequence - -from .streaming import StreamingModule -from .transformer import create_sin_embedding -from ..data.audio_dataset import SegmentInfo -from ..utils.autocast import TorchAutocast -from ..utils.utils import hash_trick, length_to_mask, collate - - -logger = logging.getLogger(__name__) -TextCondition = tp.Optional[str] # a text condition can be a string or None (if doesn't exist) -ConditionType = tp.Tuple[Tensor, Tensor] # condition, mask - - -class WavCondition(tp.NamedTuple): - wav: Tensor - length: Tensor - path: tp.List[tp.Optional[str]] = [] - - -def nullify_condition(condition: ConditionType, dim: int = 1): - """This function transforms an input condition to a null condition. - The way it is done by converting it to a single zero vector similarly - to how it is done inside WhiteSpaceTokenizer and NoopTokenizer. - - Args: - condition (ConditionType): a tuple of condition and mask (tp.Tuple[Tensor, Tensor]) - dim (int): the dimension that will be truncated (should be the time dimension) - WARNING!: dim should not be the batch dimension! - Returns: - ConditionType: a tuple of null condition and mask - """ - assert dim != 0, "dim cannot be the batch dimension!" - assert type(condition) == tuple and \ - type(condition[0]) == Tensor and \ - type(condition[1]) == Tensor, "'nullify_condition' got an unexpected input type!" - cond, mask = condition - B = cond.shape[0] - last_dim = cond.dim() - 1 - out = cond.transpose(dim, last_dim) - out = 0. * out[..., :1] - out = out.transpose(dim, last_dim) - mask = torch.zeros((B, 1), device=out.device).int() - assert cond.dim() == out.dim() - return out, mask - - -def nullify_wav(wav: Tensor) -> WavCondition: - """Create a nullified WavCondition from a wav tensor with appropriate shape. - - Args: - wav (Tensor): tensor of shape [B, T] - Returns: - WavCondition: wav condition with nullified wav. - """ - null_wav, _ = nullify_condition((wav, torch.zeros_like(wav)), dim=wav.dim() - 1) - return WavCondition( - wav=null_wav, - length=torch.tensor([0] * wav.shape[0], device=wav.device), - path=['null_wav'] * wav.shape[0] - ) - - -@dataclass -class ConditioningAttributes: - text: tp.Dict[str, tp.Optional[str]] = field(default_factory=dict) - wav: tp.Dict[str, WavCondition] = field(default_factory=dict) - - def __getitem__(self, item): - return getattr(self, item) - - @property - def text_attributes(self): - return self.text.keys() - - @property - def wav_attributes(self): - return self.wav.keys() - - @property - def attributes(self): - return {"text": self.text_attributes, "wav": self.wav_attributes} - - def to_flat_dict(self): - return { - **{f"text.{k}": v for k, v in self.text.items()}, - **{f"wav.{k}": v for k, v in self.wav.items()}, - } - - @classmethod - def from_flat_dict(cls, x): - out = cls() - for k, v in x.items(): - kind, att = k.split(".") - out[kind][att] = v - return out - - -class SegmentWithAttributes(SegmentInfo): - """Base class for all dataclasses that are used for conditioning. 
- All child classes should implement `to_condition_attributes` that converts - the existing attributes to a dataclass of type ConditioningAttributes. - """ - def to_condition_attributes(self) -> ConditioningAttributes: - raise NotImplementedError() - - -class Tokenizer: - """Base class for all tokenizers - (in case we want to introduce more advances tokenizers in the future). - """ - def __call__(self, texts: tp.List[tp.Optional[str]]) -> tp.Tuple[Tensor, Tensor]: - raise NotImplementedError() - - -class WhiteSpaceTokenizer(Tokenizer): - """This tokenizer should be used for natural language descriptions. - For example: - ["he didn't, know he's going home.", 'shorter sentence'] => - [[78, 62, 31, 4, 78, 25, 19, 34], - [59, 77, 0, 0, 0, 0, 0, 0]] - """ - PUNCTUATIONS = "?:!.,;" - - def __init__(self, n_bins: int, pad_idx: int = 0, language: str = "en_core_web_sm", - lemma: bool = True, stopwords: bool = True) -> None: - self.n_bins = n_bins - self.pad_idx = pad_idx - self.lemma = lemma - self.stopwords = stopwords - try: - self.nlp = spacy.load(language) - except IOError: - spacy.cli.download(language) # type: ignore - self.nlp = spacy.load(language) - - @tp.no_type_check - def __call__( - self, - texts: tp.List[tp.Optional[str]], - return_text: bool = False - ) -> tp.Tuple[Tensor, Tensor]: - """Take a list of strings and convert them to a tensor of indices. - - Args: - texts (tp.List[str]): List of strings. - return_text (bool, optional): Whether to return text as additional tuple item. Defaults to False. - Returns: - tp.Tuple[Tensor, Tensor]: - - Indices of words in the LUT. - - And a mask indicating where the padding tokens are - """ - output, lengths = [], [] - texts = deepcopy(texts) - for i, text in enumerate(texts): - # if current sample doesn't have a certain attribute, replace with pad token - if text is None: - output.append(Tensor([self.pad_idx])) - lengths.append(0) - continue - - # convert numbers to words - text = re.sub(r"(\d+)", lambda x: num2words(int(x.group(0))), text) # type: ignore - # normalize text - text = self.nlp(text) # type: ignore - # remove stopwords - if self.stopwords: - text = [w for w in text if not w.is_stop] # type: ignore - # remove punctuations - text = [w for w in text if w.text not in self.PUNCTUATIONS] # type: ignore - # lemmatize if needed - text = [getattr(t, "lemma_" if self.lemma else "text") for t in text] # type: ignore - - texts[i] = " ".join(text) - lengths.append(len(text)) - # convert to tensor - tokens = Tensor([hash_trick(w, self.n_bins) for w in text]) - output.append(tokens) - - mask = length_to_mask(torch.IntTensor(lengths)).int() - padded_output = pad_sequence(output, padding_value=self.pad_idx).int().t() - if return_text: - return padded_output, mask, texts # type: ignore - return padded_output, mask - - -class NoopTokenizer(Tokenizer): - """This tokenizer should be used for global conditioners such as: artist, genre, key, etc. - The difference between this and WhiteSpaceTokenizer is that NoopTokenizer does not split - strings, so "Jeff Buckley" will get it's own index. Whereas WhiteSpaceTokenizer will - split it to ["Jeff", "Buckley"] and return an index per word. 
- - For example: - ["Queen", "ABBA", "Jeff Buckley"] => [43, 55, 101] - ["Metal", "Rock", "Classical"] => [0, 223, 51] - """ - def __init__(self, n_bins: int, pad_idx: int = 0): - self.n_bins = n_bins - self.pad_idx = pad_idx - - def __call__(self, texts: tp.List[tp.Optional[str]]) -> tp.Tuple[Tensor, Tensor]: - output, lengths = [], [] - for text in texts: - # if current sample doesn't have a certain attribute, replace with pad token - if text is None: - output.append(self.pad_idx) - lengths.append(0) - else: - output.append(hash_trick(text, self.n_bins)) - lengths.append(1) - - tokens = torch.LongTensor(output).unsqueeze(1) - mask = length_to_mask(torch.IntTensor(lengths)).int() - return tokens, mask - - -class BaseConditioner(nn.Module): - """Base model for all conditioner modules. We allow the output dim to be different - than the hidden dim for two reasons: 1) keep our LUTs small when the vocab is large; - 2) make all condition dims consistent. - - Args: - dim (int): Hidden dim of the model (text-encoder/LUT). - output_dim (int): Output dim of the conditioner. - """ - def __init__(self, dim, output_dim): - super().__init__() - self.dim = dim - self.output_dim = output_dim - self.output_proj = nn.Linear(dim, output_dim) - - def tokenize(self, *args, **kwargs) -> tp.Any: - """Should be any part of the processing that will lead to a synchronization - point, e.g. BPE tokenization with transfer to the GPU. - - The returned value will be saved and return later when calling forward(). - """ - raise NotImplementedError() - - def forward(self, inputs: tp.Any) -> ConditionType: - """Gets input that should be used as conditioning (e.g, genre, description or a waveform). - Outputs a ConditionType, after the input data was embedded as a dense vector. - - Returns: - ConditionType: - - A tensor of size [B, T, D] where B is the batch size, T is the length of the - output embedding and D is the dimension of the embedding. - - And a mask indicating where the padding tokens. - """ - raise NotImplementedError() - - -class TextConditioner(BaseConditioner): - ... - - -class LUTConditioner(TextConditioner): - """Lookup table TextConditioner. - - Args: - n_bins (int): Number of bins. - dim (int): Hidden dim of the model (text-encoder/LUT). - output_dim (int): Output dim of the conditioner. - tokenizer (str): Name of the tokenizer. - pad_idx (int, optional): Index for padding token. Defaults to 0. - """ - def __init__(self, n_bins: int, dim: int, output_dim: int, tokenizer: str, pad_idx: int = 0): - super().__init__(dim, output_dim) - self.embed = nn.Embedding(n_bins, dim) - self.tokenizer: Tokenizer - if tokenizer == "whitespace": - self.tokenizer = WhiteSpaceTokenizer(n_bins, pad_idx=pad_idx) - elif tokenizer == "noop": - self.tokenizer = NoopTokenizer(n_bins, pad_idx=pad_idx) - else: - raise ValueError(f"unrecognized tokenizer `{tokenizer}`.") - - def tokenize(self, x: tp.List[tp.Optional[str]]) -> tp.Tuple[torch.Tensor, torch.Tensor]: - device = self.embed.weight.device - tokens, mask = self.tokenizer(x) - tokens, mask = tokens.to(device), mask.to(device) - return tokens, mask - - def forward(self, inputs: tp.Tuple[torch.Tensor, torch.Tensor]) -> ConditionType: - tokens, mask = inputs - embeds = self.embed(tokens) - embeds = self.output_proj(embeds) - embeds = (embeds * mask.unsqueeze(-1)) - return embeds, mask - - -class T5Conditioner(TextConditioner): - """T5-based TextConditioner. - - Args: - name (str): Name of the T5 model. - output_dim (int): Output dim of the conditioner. 
- finetune (bool): Whether to fine-tune T5 at train time. - device (str): Device for T5 Conditioner. - autocast_dtype (tp.Optional[str], optional): Autocast dtype. - word_dropout (float, optional): Word dropout probability. - normalize_text (bool, optional): Whether to apply text normalization. - """ - MODELS = ["t5-small", "t5-base", "t5-large", "t5-3b", "t5-11b", - "google/flan-t5-small", "google/flan-t5-base", "google/flan-t5-large", - "google/flan-t5-xl", "google/flan-t5-xxl"] - MODELS_DIMS = { - "t5-small": 512, - "t5-base": 768, - "t5-large": 1024, - "t5-3b": 1024, - "t5-11b": 1024, - "google/flan-t5-small": 512, - "google/flan-t5-base": 768, - "google/flan-t5-large": 1024, - "google/flan-t5-3b": 1024, - "google/flan-t5-11b": 1024, - } - - def __init__(self, name: str, output_dim: int, finetune: bool, device: str, - autocast_dtype: tp.Optional[str] = 'float32', word_dropout: float = 0., - normalize_text: bool = False): - assert name in self.MODELS, f"unrecognized t5 model name (should in {self.MODELS})" - super().__init__(self.MODELS_DIMS[name], output_dim) - self.device = device - self.name = name - self.finetune = finetune - self.word_dropout = word_dropout - - if autocast_dtype is None or self.device == 'cpu': - self.autocast = TorchAutocast(enabled=False) - if self.device != 'cpu': - logger.warning("T5 has no autocast, this might lead to NaN") - else: - dtype = getattr(torch, autocast_dtype) - assert isinstance(dtype, torch.dtype) - logger.info(f"T5 will be evaluated with autocast as {autocast_dtype}") - self.autocast = TorchAutocast(enabled=True, device_type=self.device, dtype=dtype) - # Let's disable logging temporarily because T5 will vomit some errors otherwise. - # thanks https://gist.github.com/simon-weber/7853144 - previous_level = logging.root.manager.disable - logging.disable(logging.ERROR) - with warnings.catch_warnings(): - warnings.simplefilter("ignore") - try: - self.t5_tokenizer = T5Tokenizer.from_pretrained(name) - t5 = T5EncoderModel.from_pretrained(name).train(mode=finetune) - finally: - logging.disable(previous_level) - if finetune: - self.t5 = t5 - else: - # this makes sure that the t5 models is not part - # of the saved checkpoint - self.__dict__["t5"] = t5.to(device) - - self.normalize_text = normalize_text - if normalize_text: - self.text_normalizer = WhiteSpaceTokenizer(1, lemma=True, stopwords=True) - - def tokenize(self, x: tp.List[tp.Optional[str]]) -> tp.Dict[str, torch.Tensor]: - # if current sample doesn't have a certain attribute, replace with empty string - entries: tp.List[str] = [xi if xi is not None else "" for xi in x] - if self.normalize_text: - _, _, entries = self.text_normalizer(entries, return_text=True) - if self.word_dropout > 0. 
and self.training: - new_entries = [] - for entry in entries: - words = [word for word in entry.split(" ") if random.random() >= self.word_dropout] - new_entries.append(" ".join(words)) - entries = new_entries - - empty_idx = torch.LongTensor([i for i, xi in enumerate(entries) if xi == ""]) - - inputs = self.t5_tokenizer(entries, return_tensors="pt", padding=True).to(self.device) - mask = inputs["attention_mask"] - mask[empty_idx, :] = 0 # zero-out index where the input is non-existant - return inputs - - def forward(self, inputs: tp.Dict[str, torch.Tensor]) -> ConditionType: - mask = inputs["attention_mask"] - with torch.set_grad_enabled(self.finetune), self.autocast: - embeds = self.t5(**inputs).last_hidden_state - embeds = self.output_proj(embeds.to(self.output_proj.weight)) - embeds = (embeds * mask.unsqueeze(-1)) - return embeds, mask - - -class WaveformConditioner(BaseConditioner): - """Base class for all conditioners that take a waveform as input. - Classes that inherit must implement `_get_wav_embedding` that outputs - a continuous tensor, and `_downsampling_factor` that returns the down-sampling - factor of the embedding model. - - Args: - dim (int): The internal representation dimension. - output_dim (int): Output dimension. - device (tp.Union[torch.device, str]): Device. - """ - def __init__(self, dim: int, output_dim: int, device: tp.Union[torch.device, str]): - super().__init__(dim, output_dim) - self.device = device - - def tokenize(self, wav_length: WavCondition) -> WavCondition: - wav, length, path = wav_length - assert length is not None - return WavCondition(wav.to(self.device), length.to(self.device), path) - - def _get_wav_embedding(self, wav: Tensor) -> Tensor: - """Gets as input a wav and returns a dense vector of conditions.""" - raise NotImplementedError() - - def _downsampling_factor(self): - """Returns the downsampling factor of the embedding model.""" - raise NotImplementedError() - - def forward(self, inputs: WavCondition) -> ConditionType: - """ - Args: - input (WavCondition): Tuple of (waveform, lengths). - Returns: - ConditionType: Dense vector representing the conditioning along with its' mask. - """ - wav, lengths, path = inputs - with torch.no_grad(): - embeds = self._get_wav_embedding(wav) - embeds = embeds.to(self.output_proj.weight) - embeds = self.output_proj(embeds) - - if lengths is not None: - lengths = lengths / self._downsampling_factor() - mask = length_to_mask(lengths, max_len=embeds.shape[1]).int() # type: ignore - else: - mask = torch.ones_like(embeds) - embeds = (embeds * mask.unsqueeze(2).to(self.device)) - - return embeds, mask - - -class ChromaStemConditioner(WaveformConditioner): - """Chroma conditioner that uses DEMUCS to first filter out drums and bass. The is followed by - the insight the drums and bass often dominate the chroma, leading to the chroma not containing the - information about melody. - - Args: - output_dim (int): Output dimension for the conditioner. - sample_rate (int): Sample rate for the chroma extractor. - n_chroma (int): Number of chroma for the chroma extractor. - radix2_exp (int): Radix2 exponent for the chroma extractor. - duration (float): Duration used during training. This is later used for correct padding - in case we are using chroma as prefix. - match_len_on_eval (bool, optional): If True then all chromas are padded to the training - duration. Defaults to False. 
- eval_wavs (str, optional): Path to a json egg with waveform, this waveforms are used as - conditions during eval (for cases where we don't want to leak test conditions like MusicCaps). - Defaults to None. - n_eval_wavs (int, optional): Limits the number of waveforms used for conditioning. Defaults to 0. - device (tp.Union[torch.device, str], optional): Device for the conditioner. - **kwargs: Additional parameters for the chroma extractor. - """ - def __init__(self, output_dim: int, sample_rate: int, n_chroma: int, radix2_exp: int, - duration: float, match_len_on_eval: bool = True, eval_wavs: tp.Optional[str] = None, - n_eval_wavs: int = 0, device: tp.Union[torch.device, str] = "cpu", **kwargs): - from demucs import pretrained - super().__init__(dim=n_chroma, output_dim=output_dim, device=device) - self.autocast = TorchAutocast(enabled=device != "cpu", device_type=self.device, dtype=torch.float32) - self.sample_rate = sample_rate - self.match_len_on_eval = match_len_on_eval - self.duration = duration - self.__dict__["demucs"] = pretrained.get_model('htdemucs').to(device) - self.stem2idx = {'drums': 0, 'bass': 1, 'other': 2, 'vocal': 3} - self.stem_idx = torch.LongTensor([self.stem2idx['vocal'], self.stem2idx['other']]).to(device) - self.chroma = ChromaExtractor(sample_rate=sample_rate, n_chroma=n_chroma, radix2_exp=radix2_exp, - device=device, **kwargs) - self.chroma_len = self._get_chroma_len() - - def _downsampling_factor(self): - return self.chroma.winhop - - def _get_chroma_len(self): - """Get length of chroma during training""" - dummy_wav = torch.zeros((1, self.sample_rate * self.duration), device=self.device) - dummy_chr = self.chroma(dummy_wav) - return dummy_chr.shape[1] - - @torch.no_grad() - def _get_filtered_wav(self, wav): - from demucs.apply import apply_model - from demucs.audio import convert_audio - with self.autocast: - wav = convert_audio(wav, self.sample_rate, self.demucs.samplerate, self.demucs.audio_channels) - stems = apply_model(self.demucs, wav, device=self.device) - stems = stems[:, self.stem_idx] # extract stem - stems = stems.sum(1) # merge extracted stems - stems = stems.mean(1, keepdim=True) # mono - stems = convert_audio(stems, self.demucs.samplerate, self.sample_rate, 1) - return stems - - @torch.no_grad() - def _get_wav_embedding(self, wav): - # avoid 0-size tensors when we are working with null conds - if wav.shape[-1] == 1: - return self.chroma(wav) - stems = self._get_filtered_wav(wav) - chroma = self.chroma(stems) - - if self.match_len_on_eval: - b, t, c = chroma.shape - if t > self.chroma_len: - chroma = chroma[:, :self.chroma_len] - logger.debug(f'chroma was truncated! ({t} -> {chroma.shape[1]})') - elif t < self.chroma_len: - # chroma = F.pad(chroma, (0, 0, 0, self.chroma_len - t)) - n_repeat = int(math.ceil(self.chroma_len / t)) - chroma = chroma.repeat(1, n_repeat, 1) - chroma = chroma[:, :self.chroma_len] - logger.debug(f'chroma was zero-padded! ({t} -> {chroma.shape[1]})') - return chroma - - -class ChromaExtractor(nn.Module): - """Chroma extraction class, handles chroma extraction and quantization. - - Args: - sample_rate (int): Sample rate. - n_chroma (int): Number of chroma to consider. - radix2_exp (int): Radix2 exponent. - nfft (tp.Optional[int], optional): Number of FFT. - winlen (tp.Optional[int], optional): Window length. - winhop (tp.Optional[int], optional): Window hop size. - argmax (bool, optional): Whether to use argmax. Defaults to False. - norm (float, optional): Norm for chroma normalization. Defaults to inf. 
- device (tp.Union[torch.device, str], optional): Device to use. Defaults to cpu. - """ - def __init__(self, sample_rate: int, n_chroma: int = 12, radix2_exp: int = 12, - nfft: tp.Optional[int] = None, winlen: tp.Optional[int] = None, winhop: tp.Optional[int] = None, - argmax: bool = False, norm: float = torch.inf, device: tp.Union[torch.device, str] = "cpu"): - super().__init__() - from librosa import filters - self.device = device - self.autocast = TorchAutocast(enabled=device != "cpu", device_type=self.device, dtype=torch.float32) - self.winlen = winlen or 2 ** radix2_exp - self.nfft = nfft or self.winlen - self.winhop = winhop or (self.winlen // 4) - self.sr = sample_rate - self.n_chroma = n_chroma - self.norm = norm - self.argmax = argmax - self.window = torch.hann_window(self.winlen).to(device) - self.fbanks = torch.from_numpy(filters.chroma(sr=sample_rate, n_fft=self.nfft, tuning=0, - n_chroma=self.n_chroma)).to(device) - self.spec = torchaudio.transforms.Spectrogram(n_fft=self.nfft, win_length=self.winlen, - hop_length=self.winhop, power=2, center=True, - pad=0, normalized=True).to(device) - - def forward(self, wav): - with self.autocast: - T = wav.shape[-1] - # in case we are getting a wav that was dropped out (nullified) - # make sure wav length is no less that nfft - if T < self.nfft: - pad = self.nfft - T - r = 0 if pad % 2 == 0 else 1 - wav = F.pad(wav, (pad // 2, pad // 2 + r), 'constant', 0) - assert wav.shape[-1] == self.nfft, f'expected len {self.nfft} but got {wav.shape[-1]}' - spec = self.spec(wav).squeeze(1) - raw_chroma = torch.einsum("cf,...ft->...ct", self.fbanks, spec) - norm_chroma = torch.nn.functional.normalize(raw_chroma, p=self.norm, dim=-2, eps=1e-6) - norm_chroma = rearrange(norm_chroma, "b d t -> b t d") - - if self.argmax: - idx = norm_chroma.argmax(-1, keepdims=True) - norm_chroma[:] = 0 - norm_chroma.scatter_(dim=-1, index=idx, value=1) - - return norm_chroma - - -def dropout_condition(sample: ConditioningAttributes, condition_type: str, condition: str): - """Utility function for nullifying an attribute inside an ConditioningAttributes object. - If the condition is of type "wav", then nullify it using "nullify_condition". - If the condition is of any other type, set its' value to None. - Works in-place. - """ - if condition_type not in ["text", "wav"]: - raise ValueError( - "dropout_condition got an unexpected condition type!" - f" expected 'wav' or 'text' but got '{condition_type}'" - ) - - if condition not in getattr(sample, condition_type): - raise ValueError( - "dropout_condition received an unexpected condition!" - f" expected wav={sample.wav.keys()} and text={sample.text.keys()}" - f"but got '{condition}' of type '{condition_type}'!" - ) - - if condition_type == "wav": - wav, length, path = sample.wav[condition] - sample.wav[condition] = nullify_wav(wav) - else: - sample.text[condition] = None - - return sample - - -class DropoutModule(nn.Module): - """Base class for all dropout modules.""" - def __init__(self, seed: int = 1234): - super().__init__() - self.rng = torch.Generator() - self.rng.manual_seed(seed) - - -class AttributeDropout(DropoutModule): - """Applies dropout with a given probability per attribute. This is different from the behavior of - ClassifierFreeGuidanceDropout as this allows for attributes to be dropped out separately. For example, - "artist" can be dropped while "genre" remains. This is in contrast to ClassifierFreeGuidanceDropout - where if "artist" is dropped "genre" must also be dropped. 
- - Args: - p (tp.Dict[str, float]): A dict mapping between attributes and dropout probability. For example: - ... - "genre": 0.1, - "artist": 0.5, - "wav": 0.25, - ... - active_on_eval (bool, optional): Whether the dropout is active at eval. Default to False. - seed (int, optional): Random seed. - """ - def __init__(self, p: tp.Dict[str, tp.Dict[str, float]], active_on_eval: bool = False, seed: int = 1234): - super().__init__(seed=seed) - self.active_on_eval = active_on_eval - # construct dict that return the values from p otherwise 0 - self.p = {} - for condition_type, probs in p.items(): - self.p[condition_type] = defaultdict(lambda: 0, probs) - - def forward(self, samples: tp.List[ConditioningAttributes]) -> tp.List[ConditioningAttributes]: - """ - Args: - samples (tp.List[ConditioningAttributes]): List of conditions. - Returns: - tp.List[ConditioningAttributes]: List of conditions after certain attributes were set to None. - """ - if not self.training and not self.active_on_eval: - return samples - - samples = deepcopy(samples) - - for condition_type, ps in self.p.items(): # for condition types [text, wav] - for condition, p in ps.items(): # for attributes of each type (e.g., [artist, genre]) - if torch.rand(1, generator=self.rng).item() < p: - for sample in samples: - dropout_condition(sample, condition_type, condition) - - return samples - - def __repr__(self): - return f"AttributeDropout({dict(self.p)})" - - -class ClassifierFreeGuidanceDropout(DropoutModule): - """Applies Classifier Free Guidance dropout, meaning all attributes - are dropped with the same probability. - - Args: - p (float): Probability to apply condition dropout during training. - seed (int): Random seed. - """ - def __init__(self, p: float, seed: int = 1234): - super().__init__(seed=seed) - self.p = p - - def forward(self, samples: tp.List[ConditioningAttributes]) -> tp.List[ConditioningAttributes]: - """ - Args: - samples (tp.List[ConditioningAttributes]): List of conditions. - Returns: - tp.List[ConditioningAttributes]: List of conditions after all attributes were set to None. - """ - if not self.training: - return samples - - # decide on which attributes to drop in a batched fashion - drop = torch.rand(1, generator=self.rng).item() < self.p - if not drop: - return samples - - # nullify conditions of all attributes - samples = deepcopy(samples) - - for condition_type in ["wav", "text"]: - for sample in samples: - for condition in sample.attributes[condition_type]: - dropout_condition(sample, condition_type, condition) - - return samples - - def __repr__(self): - return f"ClassifierFreeGuidanceDropout(p={self.p})" - - -class ConditioningProvider(nn.Module): - """Main class to provide conditions given all the supported conditioners. - - Args: - conditioners (dict): Dictionary of conditioners. - merge_text_conditions_p (float, optional): Probability to merge all text sources - into a single text condition. Defaults to 0. - drop_desc_p (float, optional): Probability to drop the original description - when merging all text sources into a single text condition. Defaults to 0. - device (tp.Union[torch.device, str], optional): Device for conditioners and output condition types. 
- """ - def __init__( - self, - conditioners: tp.Dict[str, BaseConditioner], - merge_text_conditions_p: float = 0, - drop_desc_p: float = 0, - device: tp.Union[torch.device, str] = "cpu", - ): - super().__init__() - self.device = device - self.merge_text_conditions_p = merge_text_conditions_p - self.drop_desc_p = drop_desc_p - self.conditioners = nn.ModuleDict(conditioners) - - @property - def text_conditions(self): - return [k for k, v in self.conditioners.items() if isinstance(v, TextConditioner)] - - @property - def wav_conditions(self): - return [k for k, v in self.conditioners.items() if isinstance(v, WaveformConditioner)] - - @property - def has_wav_condition(self): - return len(self.wav_conditions) > 0 - - def tokenize(self, inputs: tp.List[ConditioningAttributes]) -> tp.Dict[str, tp.Any]: - """Match attributes/wavs with existing conditioners in self, and compute tokenize them accordingly. - This should be called before starting any real GPU work to avoid synchronization points. - This will return a dict matching conditioner names to their arbitrary tokenized representations. - - Args: - inputs (list[ConditioningAttribres]): List of ConditioningAttributes objects containing - text and wav conditions. - """ - assert all([type(x) == ConditioningAttributes for x in inputs]), \ - "got unexpected types input for conditioner! should be tp.List[ConditioningAttributes]" \ - f" but types were {set([type(x) for x in inputs])}" - - output = {} - text = self._collate_text(inputs) - wavs = self._collate_wavs(inputs) - - assert set(text.keys() | wavs.keys()).issubset(set(self.conditioners.keys())), \ - f"got an unexpected attribute! Expected {self.conditioners.keys()}, got {text.keys(), wavs.keys()}" - - for attribute, batch in chain(text.items(), wavs.items()): - output[attribute] = self.conditioners[attribute].tokenize(batch) - return output - - def forward(self, tokenized: tp.Dict[str, tp.Any]) -> tp.Dict[str, ConditionType]: - """Compute pairs of `(embedding, mask)` using the configured conditioners - and the tokenized representations. The output is for example: - - { - "genre": (torch.Tensor([B, 1, D_genre]), torch.Tensor([B, 1])), - "description": (torch.Tensor([B, T_desc, D_desc]), torch.Tensor([B, T_desc])), - ... - } - - Args: - tokenized (dict): Dict of tokenized representations as returned by `tokenize()`. - """ - output = {} - for attribute, inputs in tokenized.items(): - condition, mask = self.conditioners[attribute](inputs) - output[attribute] = (condition, mask) - return output - - def _collate_text(self, samples: tp.List[ConditioningAttributes]) -> tp.Dict[str, tp.List[tp.Optional[str]]]: - """Given a list of ConditioningAttributes objects, compile a dictionary where the keys - are the attributes and the values are the aggregated input per attribute. 
- For example: - Input: - [ - ConditioningAttributes(text={"genre": "Rock", "description": "A rock song with a guitar solo"}, wav=...), - ConditioningAttributes(text={"genre": "Hip-hop", "description": "A hip-hop verse"}, wav=...), - ] - Output: - { - "genre": ["Rock", "Hip-hop"], - "description": ["A rock song with a guitar solo", "A hip-hop verse"] - } - """ - batch_per_attribute: tp.Dict[str, tp.List[tp.Optional[str]]] = defaultdict(list) - - def _merge_conds(cond, merge_text_conditions_p=0, drop_desc_p=0): - def is_valid(k, v): - k_valid = k in ['key', 'bpm', 'genre', 'moods', 'instrument'] - v_valid = v is not None and isinstance(v, (int, float, str, list)) - return k_valid and v_valid - - def process_value(v): - if isinstance(v, (int, float, str)): - return v - if isinstance(v, list): - return ", ".join(v) - else: - RuntimeError(f"unknown type for text value! ({type(v), v})") - - desc = cond.text['description'] - meta_data = "" - if random.uniform(0, 1) < merge_text_conditions_p: - meta_pairs = [f'{k}: {process_value(v)}' for k, v in cond.text.items() if is_valid(k, v)] - random.shuffle(meta_pairs) - meta_data = ". ".join(meta_pairs) - desc = desc if not random.uniform(0, 1) < drop_desc_p else None - - if desc is None: - desc = meta_data if len(meta_data) > 1 else None - else: - desc = desc.rstrip('.') + ". " + meta_data - cond.text['description'] = desc.strip() if desc else None - - if self.training and self.merge_text_conditions_p: - for sample in samples: - _merge_conds(sample, self.merge_text_conditions_p, self.drop_desc_p) - - texts = [x.text for x in samples] - for text in texts: - for condition in self.text_conditions: - batch_per_attribute[condition].append(text[condition]) - - return batch_per_attribute - - def _collate_wavs(self, samples: tp.List[ConditioningAttributes]): - """Generate a dict where the keys are attributes by which we fetch similar wavs, - and the values are Tensors of wavs according to said attribtues. - - *Note*: by the time the samples reach this function, each sample should have some waveform - inside the "wav" attribute. It should be either: - 1. A real waveform - 2. A null waveform due to the sample having no similar waveforms (nullified by the dataset) - 3. A null waveform due to it being dropped in a dropout module (nullified by dropout) - - Args: - samples (tp.List[ConditioningAttributes]): List of ConditioningAttributes samples. - Returns: - dict: A dicionary mapping an attribute name to wavs. - """ - wavs = defaultdict(list) - lens = defaultdict(list) - paths = defaultdict(list) - out = {} - - for sample in samples: - for attribute in self.wav_conditions: - wav, length, path = sample.wav[attribute] - wavs[attribute].append(wav.flatten()) - lens[attribute].append(length) - paths[attribute].append(path) - - # stack all wavs to a single tensor - for attribute in self.wav_conditions: - stacked_wav, _ = collate(wavs[attribute], dim=0) - out[attribute] = WavCondition(stacked_wav.unsqueeze(1), - torch.cat(lens['self_wav']), paths[attribute]) # type: ignore - - return out - - -class ConditionFuser(StreamingModule): - """Condition fuser handles the logic to combine the different conditions - to the actual model input. - - Args: - fuse2cond (tp.Dict[str, str]): A dictionary that says how to fuse - each condition. For example: - { - "prepend": ["description"], - "sum": ["genre", "bpm"], - "cross": ["description"], - } - cross_attention_pos_emb (bool, optional): Use positional embeddings in cross attention. 
- cross_attention_pos_emb_scale (int): Scale for positional embeddings in cross attention if used. - """ - FUSING_METHODS = ["sum", "prepend", "cross", "input_interpolate"] - - def __init__(self, fuse2cond: tp.Dict[str, tp.List[str]], cross_attention_pos_emb: bool = False, - cross_attention_pos_emb_scale: float = 1.0): - super().__init__() - assert all( - [k in self.FUSING_METHODS for k in fuse2cond.keys()] - ), f"got invalid fuse method, allowed methods: {self.FUSING_MEHTODS}" - self.cross_attention_pos_emb = cross_attention_pos_emb - self.cross_attention_pos_emb_scale = cross_attention_pos_emb_scale - self.fuse2cond: tp.Dict[str, tp.List[str]] = fuse2cond - self.cond2fuse: tp.Dict[str, str] = {} - for fuse_method, conditions in fuse2cond.items(): - for condition in conditions: - self.cond2fuse[condition] = fuse_method - - def forward( - self, - input: Tensor, - conditions: tp.Dict[str, ConditionType] - ) -> tp.Tuple[Tensor, tp.Optional[Tensor]]: - """Fuse the conditions to the provided model input. - - Args: - input (Tensor): Transformer input. - conditions (tp.Dict[str, ConditionType]): Dict of conditions. - Returns: - tp.Tuple[Tensor, Tensor]: The first tensor is the transformer input - after the conditions have been fused. The second output tensor is the tensor - used for cross-attention or None if no cross attention inputs exist. - """ - B, T, _ = input.shape - - if 'offsets' in self._streaming_state: - first_step = False - offsets = self._streaming_state['offsets'] - else: - first_step = True - offsets = torch.zeros(input.shape[0], dtype=torch.long, device=input.device) - - assert set(conditions.keys()).issubset(set(self.cond2fuse.keys())), \ - f"given conditions contain unknown attributes for fuser, " \ - f"expected {self.cond2fuse.keys()}, got {conditions.keys()}" - cross_attention_output = None - for cond_type, (cond, cond_mask) in conditions.items(): - op = self.cond2fuse[cond_type] - if op == "sum": - input += cond - elif op == "input_interpolate": - cond = rearrange(cond, "b t d -> b d t") - cond = F.interpolate(cond, size=input.shape[1]) - input += rearrange(cond, "b d t -> b t d") - elif op == "prepend": - if first_step: - input = torch.cat([cond, input], dim=1) - elif op == "cross": - if cross_attention_output is not None: - cross_attention_output = torch.cat([cross_attention_output, cond], dim=1) - else: - cross_attention_output = cond - else: - raise ValueError(f"unknown op ({op})") - - if self.cross_attention_pos_emb and cross_attention_output is not None: - positions = torch.arange( - cross_attention_output.shape[1], - device=cross_attention_output.device - ).view(1, -1, 1) - pos_emb = create_sin_embedding(positions, cross_attention_output.shape[-1]) - cross_attention_output = cross_attention_output + self.cross_attention_pos_emb_scale * pos_emb - - if self._is_streaming: - self._streaming_state['offsets'] = offsets + T - - return input, cross_attention_output diff --git a/spaces/Enderfga/mtCNN_sysu/README.md b/spaces/Enderfga/mtCNN_sysu/README.md deleted file mode 100644 index 80c91a6c9a0263aa2439688a4f568148a1c7e16e..0000000000000000000000000000000000000000 --- a/spaces/Enderfga/mtCNN_sysu/README.md +++ /dev/null @@ -1,136 +0,0 @@ ---- -title: MtCNN Sysu -emoji: 📈 -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: openrail ---- -# Joint Face Detection and Alignment using Multi-task Cascaded Convolutional Networks - -This repo contains the code, data and trained models for the paper [Joint Face Detection and 
Alignment using Multi-task Cascaded Convolutional Networks](https://arxiv.org/ftp/arxiv/papers/1604/1604.02878.pdf). - -## Overview - -MTCNN is a popular algorithm for face detection that uses multiple neural networks to detect faces in images. It is capable of detecting faces under various lighting and pose conditions and can detect multiple faces in an image. - -We have implemented MTCNN using the pytorch framework. Pytorch is a popular deep learning framework that provides tools for building and training neural networks. - -![](https://img.enderfga.cn/img/image-20221208152130975.png) - -![](https://img.enderfga.cn/img/image-20221208152231511.png) -## Description of file -```shell -├── README.md # explanatory document -├── get_data.py # Generate corresponding training data depending on the input “--net” -├── img # mid.png is used for testing visualization effects,other images are the corresponding results. -│ ├── mid.png -│   ├── onet.png -│   ├── pnet.png -│   ├── rnet.png -│   ├── result.png -│   └── result.jpg -├── model_store # Our pre-trained model -│   ├── onet_epoch_20.pt -│   ├── pnet_epoch_20.pt -│   └── rnet_epoch_20.pt -├── requirements.txt # Environmental version requirements -├── test.py # Specify different "--net" to get the corresponding visualization results -├── test.sh # Used to test mid.png, which will test the output visualization of three networks -├── train.out # Our complete training log for this experiment -├── train.py # Specify different "--net" for the training of the corresponding network -├── train.sh # Generate data from start to finish and train -└── utils # Some common tool functions and modules - ├── config.py - ├── dataloader.py - ├── detect.py - ├── models.py - ├── tool.py - └── vision.py -``` -## Requirements - -* numpy==1.21.4 -* matplotlib==3.5.0 -* opencv-python==4.4.0.42 -* torch==1.13.0+cu116 - -## How to Install - -- ```shell - conda create -n env python=3.8 -y - conda activate env - ``` -- ```shell - pip install -r requirements.txt - ``` - -## Preprocessing - -- download [WIDER_FACE](http://shuoyang1213.me/WIDERFACE/) face detection data then store it into ./data_set/face_detection -- download [CNN_FacePoint](http://mmlab.ie.cuhk.edu.hk/archive/CNN_FacePoint.htm) face detection and landmark data then store it into ./data_set/face_landmark - -### Preprocessed Data - -```shell -# Before training Pnet -python get_data.py --net=pnet -# Before training Rnet, please use your trained model path -python get_data.py --net=rnet --pnet_path=./model_store/pnet_epoch_20.pt -# Before training Onet, please use your trained model path -python get_data.py --net=onet --pnet_path=./model_store/pnet_epoch_20.pt --rnet_path=./model_store/rnet_epoch_20.pt -``` - -## How to Run - -### Train - -```shell -python train.py --net=pnet/rnet/onet #Specify the corresponding network to start training -bash train.sh #Alternatively, use the sh file to train in order -``` - -The checkpoints will be saved in a subfolder of `./model_store/*`. - -#### Finetuning from an existing checkpoint - -```shell -python train.py --net=pnet/rnet/onet --load=[model path] -``` - -model path should be a subdirectory in the `./model_store/` directory, e.g. 
`--load=./model_store/pnet_epoch_20.pt` - -### Evaluate - -#### Use the sh file to test in order - -```shell -bash test.sh -``` - -#### To detect a single image - -```shell -python test.py --net=pnet/rnet/onet --path=test.jpg -``` - -#### To detect a video stream from a camera - -```shell -python test.py --input_mode=0 -``` - -#### The result of "--net=pnet" - -![](https://img.enderfga.cn/img/20221208160900.png) - -#### The result of "--net=rnet" - -![](https://img.enderfga.cn/img/image-20221208155022083.png) - -#### The result of "--net=onet" - -![](https://img.enderfga.cn/img/image-20221208155044451.png) diff --git a/spaces/Falpx/DeepDanbooru_string/app.py b/spaces/Falpx/DeepDanbooru_string/app.py deleted file mode 100644 index 49019837c9207cc68cb37be0342f3bc44fd0decb..0000000000000000000000000000000000000000 --- a/spaces/Falpx/DeepDanbooru_string/app.py +++ /dev/null @@ -1,185 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import argparse -import functools -import os -import html -import pathlib -import tarfile - -import deepdanbooru as dd -import gradio as gr -import huggingface_hub -import numpy as np -import PIL.Image -import tensorflow as tf -import piexif -import piexif.helper - -TITLE = 'DeepDanbooru String' - -TOKEN = os.environ['TOKEN'] -MODEL_REPO = 'CikeyQI/DeepDanbooru_string' -MODEL_FILENAME = 'model-resnet_custom_v3.h5' -LABEL_FILENAME = 'tags.txt' - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument('--score-slider-step', type=float, default=0.05) - parser.add_argument('--score-threshold', type=float, default=0.5) - parser.add_argument('--theme', type=str, default='dark-grass') - parser.add_argument('--live', action='store_true') - parser.add_argument('--share', action='store_true') - parser.add_argument('--port', type=int) - parser.add_argument('--disable-queue', - dest='enable_queue', - action='store_false') - parser.add_argument('--allow-flagging', type=str, default='never') - return parser.parse_args() - - -def load_sample_image_paths() -> list[pathlib.Path]: - image_dir = pathlib.Path('images') - if not image_dir.exists(): - dataset_repo = 'hysts/sample-images-TADNE' - path = huggingface_hub.hf_hub_download(dataset_repo, - 'images.tar.gz', - repo_type='dataset', - use_auth_token=TOKEN) - with tarfile.open(path) as f: - f.extractall() - return sorted(image_dir.glob('*')) - - -def load_model() -> tf.keras.Model: - path = huggingface_hub.hf_hub_download(MODEL_REPO, - MODEL_FILENAME, - use_auth_token=TOKEN) - model = tf.keras.models.load_model(path) - return model - - -def load_labels() -> list[str]: - path = huggingface_hub.hf_hub_download(MODEL_REPO, - LABEL_FILENAME, - use_auth_token=TOKEN) - with open(path) as f: - labels = [line.strip() for line in f.readlines()] - return labels - -def plaintext_to_html(text): - text = "
<p>" + "<br>\n".join([f"{html.escape(x)}" for x in text.split('\n')]) + "</p>
" - return text - -def predict(image: PIL.Image.Image, score_threshold: float, - model: tf.keras.Model, labels: list[str]) -> dict[str, float]: - rawimage = image - _, height, width, _ = model.input_shape - image = np.asarray(image) - image = tf.image.resize(image, - size=(height, width), - method=tf.image.ResizeMethod.AREA, - preserve_aspect_ratio=True) - image = image.numpy() - image = dd.image.transform_and_pad_image(image, width, height) - image = image / 255. - probs = model.predict(image[None, ...])[0] - probs = probs.astype(float) - res = dict() - for prob, label in zip(probs.tolist(), labels): - if prob < score_threshold: - continue - res[label] = prob - b = dict(sorted(res.items(),key=lambda item:item[1], reverse=True)) - a = ', '.join(list(b.keys())).replace('_',' ').replace('(','\(').replace(')','\)') - c = ', '.join(list(b.keys())) - - items = rawimage.info - geninfo = '' - - if "exif" in rawimage.info: - exif = piexif.load(rawimage.info["exif"]) - exif_comment = (exif or {}).get("Exif", {}).get(piexif.ExifIFD.UserComment, b'') - try: - exif_comment = piexif.helper.UserComment.load(exif_comment) - except ValueError: - exif_comment = exif_comment.decode('utf8', errors="ignore") - - items['exif comment'] = exif_comment - geninfo = exif_comment - - for field in ['jfif', 'jfif_version', 'jfif_unit', 'jfif_density', 'dpi', 'exif', - 'loop', 'background', 'timestamp', 'duration']: - items.pop(field, None) - - geninfo = items.get('parameters', geninfo) - - info = f""" -
<p><h4>PNG Info</h4></p>
-""" - for key, text in items.items(): - info += f""" -
<div>
-<p><b>{plaintext_to_html(str(key))}</b></p>
-<p>{plaintext_to_html(str(text))}</p>
-</div>
-""".strip()+"\n" - - if len(info) == 0: - message = "Nothing found in the image." - info = f"
<div><p>{message}</p></div>
" - - return (a,c,res,info) - - -def main(): - args = parse_args() - model = load_model() - labels = load_labels() - - func = functools.partial(predict, model=model, labels=labels) - func = functools.update_wrapper(func, predict) - - gr.Interface( - func, - [ - gr.inputs.Image(type='pil', label='Input'), - gr.inputs.Slider(0, - 1, - step=args.score_slider_step, - default=args.score_threshold, - label='Score Threshold'), - ], - [ - gr.outputs.Textbox(label='Output (string)'), - gr.outputs.Textbox(label='Output (raw string)'), - gr.outputs.Label(label='Output (label)'), - gr.outputs.HTML() - ], - examples=[ - ['miku.jpg',0.5], - ['miku2.jpg',0.5] - ], - title=TITLE, - description=''' -Demo for [KichangKim/DeepDanbooru](https://github.com/KichangKim/DeepDanbooru) with "ready to copy" prompt and a prompt analyzer. - -Modified from [hysts/DeepDanbooru](https://huggingface.co/spaces/hysts/DeepDanbooru) - -PNG Info code forked from [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) - ''', - theme=args.theme, - allow_flagging=args.allow_flagging, - live=args.live, - ).launch( - enable_queue=args.enable_queue, - server_port=args.port, - share=args.share, - ) - - -if __name__ == '__main__': - main() diff --git a/spaces/Faridmaruf/RVCV2MODEL/lib/infer_pack/models_onnx.py b/spaces/Faridmaruf/RVCV2MODEL/lib/infer_pack/models_onnx.py deleted file mode 100644 index 963e67b29f828e9fdd096397952054fe77cf3d10..0000000000000000000000000000000000000000 --- a/spaces/Faridmaruf/RVCV2MODEL/lib/infer_pack/models_onnx.py +++ /dev/null @@ -1,819 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - 
hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - 
resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / 
self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in 
enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMsNSFsidM(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - version, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - if version == "v1": - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - else: - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, 
- upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - self.speaker_map = None - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def construct_spkmixmap(self, n_speaker): - self.speaker_map = torch.zeros((n_speaker, 1, 1, self.gin_channels)) - for i in range(n_speaker): - self.speaker_map[i] = self.emb_g(torch.LongTensor([[i]])) - self.speaker_map = self.speaker_map.unsqueeze(0) - - def forward(self, phone, phone_lengths, pitch, nsff0, g, rnd, max_len=None): - if self.speaker_map is not None: # [N, S] * [S, B, 1, H] - g = g.reshape((g.shape[0], g.shape[1], 1, 1, 1)) # [N, S, B, 1, 1] - g = g * self.speaker_map # [N, S, B, 1, H] - g = torch.sum(g, dim=1) # [N, 1, B, 1, H] - g = g.transpose(0, -1).transpose(0, -2).squeeze(0) # [B, H, N] - else: - g = g.unsqueeze(0) - g = self.emb_g(g).transpose(1, 2) - - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * rnd) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 
16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/Fernando22/freegpt-webui/client/js/change-language.js b/spaces/Fernando22/freegpt-webui/client/js/change-language.js deleted file mode 100644 index ce87f6f60c7a9acca5e1902612930ef677f3fb65..0000000000000000000000000000000000000000 --- a/spaces/Fernando22/freegpt-webui/client/js/change-language.js +++ /dev/null @@ -1,47 +0,0 @@ -document.addEventListener('DOMContentLoaded', fetchLanguages); - -async function fetchLanguages() { - try { - const [languagesResponse, currentLanguageResponse] = await Promise.all([ - fetch(`${url_prefix}/get-languages`), - fetch(`${url_prefix}/get-locale`) - ]); - - const languages = await languagesResponse.json(); - const currentLanguage = await currentLanguageResponse.text(); - - const languageSelect = document.getElementById('language'); - languages.forEach(lang => { - const option = document.createElement('option'); - option.value = lang; - option.textContent = lang; - languageSelect.appendChild(option); - }); - - const savedLanguage = localStorage.getItem("language") || currentLanguage; - setLanguageOnPageLoad(savedLanguage); - } catch (error) { - console.error("Failed to fetch languages or current language"); - } -} - -function setLanguageOnPageLoad(language) { - document.getElementById("language").value = language; -} - -function changeLanguage(lang) { - fetch(`${url_prefix}/change-language`, { - method: "POST", - headers: { - "Content-Type": "application/json", - }, - body: JSON.stringify({ language: lang }), - 
}).then((response) => { - if (response.ok) { - localStorage.setItem("language", lang); - location.reload(); - } else { - console.error("Failed to change language"); - } - }); -} diff --git a/spaces/GabeIsHaxkee/E/Dockerfile b/spaces/GabeIsHaxkee/E/Dockerfile deleted file mode 100644 index 3a4dc66fdb50519fca2a6eaf64cbe0ea05b09a3f..0000000000000000000000000000000000000000 --- a/spaces/GabeIsHaxkee/E/Dockerfile +++ /dev/null @@ -1,13 +0,0 @@ -FROM python:3.9 - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - -COPY . . - -EXPOSE 7860 - -CMD ["shiny", "run", "app.py", "--host", "0.0.0.0", "--port", "7860"] \ No newline at end of file diff --git a/spaces/GaenKoki/voicevox/test/test_synthesis_engine_base.py b/spaces/GaenKoki/voicevox/test/test_synthesis_engine_base.py deleted file mode 100644 index 63f976a0ee5ec012c2ce832e014fb5ee960ebecb..0000000000000000000000000000000000000000 --- a/spaces/GaenKoki/voicevox/test/test_synthesis_engine_base.py +++ /dev/null @@ -1,411 +0,0 @@ -from typing import List, Union -from unittest import TestCase -from unittest.mock import Mock - -import numpy - -from voicevox_engine.model import AccentPhrase, AudioQuery, Mora -from voicevox_engine.synthesis_engine import SynthesisEngine - - -def yukarin_s_mock(length: int, phoneme_list: numpy.ndarray, speaker_id: numpy.ndarray): - result = [] - # mockとしての適当な処理、特に意味はない - for i in range(length): - result.append(round(float(phoneme_list[i] * 0.0625 + speaker_id), 2)) - return numpy.array(result) - - -def yukarin_sa_mock( - length: int, - vowel_phoneme_list: numpy.ndarray, - consonant_phoneme_list: numpy.ndarray, - start_accent_list: numpy.ndarray, - end_accent_list: numpy.ndarray, - start_accent_phrase_list: numpy.ndarray, - end_accent_phrase_list: numpy.ndarray, - speaker_id: numpy.ndarray, -): - result = [] - # mockとしての適当な処理、特に意味はない - for i in range(length): - result.append( - round( - float( - ( - vowel_phoneme_list[0][i] - + consonant_phoneme_list[0][i] - + start_accent_list[0][i] - + end_accent_list[0][i] - + start_accent_phrase_list[0][i] - + end_accent_phrase_list[0][i] - ) - * 0.0625 - + speaker_id - ), - 2, - ) - ) - return numpy.array(result)[numpy.newaxis] - - -def decode_mock( - length: int, - phoneme_size: int, - f0: numpy.ndarray, - phoneme: numpy.ndarray, - speaker_id: Union[numpy.ndarray, int], -): - result = [] - # mockとしての適当な処理、特に意味はない - for i in range(length): - # decode forwardはデータサイズがlengthの256倍になるのでとりあえず256回データをresultに入れる - for _ in range(256): - result.append( - float( - f0[i][0] * (numpy.where(phoneme[i] == 1)[0] / phoneme_size) - + speaker_id - ) - ) - return numpy.array(result) - - -def koreha_arimasuka_base_expected(): - return [ - AccentPhrase( - moras=[ - Mora( - text="コ", - consonant="k", - consonant_length=2.44, - vowel="o", - vowel_length=2.88, - pitch=4.38, - ), - Mora( - text="レ", - consonant="r", - consonant_length=3.06, - vowel="e", - vowel_length=1.88, - pitch=4.0, - ), - Mora( - text="ワ", - consonant="w", - consonant_length=3.62, - vowel="a", - vowel_length=1.44, - pitch=4.19, - ), - ], - accent=3, - pause_mora=None, - is_interrogative=False, - ), - AccentPhrase( - moras=[ - Mora( - text="ア", - consonant=None, - consonant_length=None, - vowel="a", - vowel_length=1.44, - pitch=1.44, - ), - Mora( - text="リ", - consonant="r", - consonant_length=3.06, - vowel="i", - vowel_length=2.31, - pitch=4.44, - ), - Mora( - text="マ", - consonant="m", - consonant_length=2.62, - vowel="a", - vowel_length=1.44, - 
pitch=3.12, - ), - Mora( - text="ス", - consonant="s", - consonant_length=3.19, - vowel="U", - vowel_length=1.38, - pitch=0.0, - ), - Mora( - text="カ", - consonant="k", - consonant_length=2.44, - vowel="a", - vowel_length=1.44, - pitch=2.94, - ), - ], - accent=3, - pause_mora=None, - is_interrogative=False, - ), - ] - - -def create_mock_query(accent_phrases): - return AudioQuery( - accent_phrases=accent_phrases, - speedScale=1, - pitchScale=0, - intonationScale=1, - volumeScale=1, - prePhonemeLength=0.1, - postPhonemeLength=0.1, - outputSamplingRate=24000, - outputStereo=False, - kana="", - ) - - -class MockCore: - yukarin_s_forward = Mock(side_effect=yukarin_s_mock) - yukarin_sa_forward = Mock(side_effect=yukarin_sa_mock) - decode_forward = Mock(side_effect=decode_mock) - - def metas(self): - return "" - - def supported_devices(self): - return "" - - def is_model_loaded(self, speaker_id): - return True - - -class TestSynthesisEngineBase(TestCase): - def setUp(self): - super().setUp() - self.synthesis_engine = SynthesisEngine( - core=MockCore(), - ) - self.synthesis_engine._synthesis_impl = Mock() - - def create_accent_phrases_test_base(self, text: str, expected: List[AccentPhrase]): - actual = self.synthesis_engine.create_accent_phrases(text, 1) - self.assertEqual( - expected, - actual, - "case(text:" + text + ")", - ) - - def create_synthesis_test_base( - self, - text: str, - expected: List[AccentPhrase], - enable_interrogative_upspeak: bool, - ): - """音声合成時に疑問文モーラ処理を行っているかどうかを検証 - (https://github.com/VOICEVOX/voicevox_engine/issues/272#issuecomment-1022610866) - """ - accent_phrases = self.synthesis_engine.create_accent_phrases(text, 1) - query = create_mock_query(accent_phrases=accent_phrases) - self.synthesis_engine.synthesis( - query, 0, enable_interrogative_upspeak=enable_interrogative_upspeak - ) - # _synthesis_implの第一引数に与えられたqueryを検証 - actual = self.synthesis_engine._synthesis_impl.call_args[0][0].accent_phrases - - self.assertEqual( - expected, - actual, - "case(text:" + text + ")", - ) - - def test_create_accent_phrases(self): - """accent_phrasesの作成時では疑問文モーラ処理を行わない - (https://github.com/VOICEVOX/voicevox_engine/issues/272#issuecomment-1022610866) - """ - expected = koreha_arimasuka_base_expected() - expected[-1].is_interrogative = True - self.create_accent_phrases_test_base(text="これはありますか?", expected=expected) - - def test_synthesis_interrogative(self): - expected = koreha_arimasuka_base_expected() - expected[-1].is_interrogative = True - expected[-1].moras += [ - Mora( - text="ア", - consonant=None, - consonant_length=None, - vowel="a", - vowel_length=0.15, - pitch=expected[-1].moras[-1].pitch + 0.3, - ) - ] - self.create_synthesis_test_base( - text="これはありますか?", - expected=expected, - enable_interrogative_upspeak=True, - ) - - expected = koreha_arimasuka_base_expected() - expected[-1].is_interrogative = True - self.create_synthesis_test_base( - text="これはありますか?", - expected=expected, - enable_interrogative_upspeak=False, - ) - - expected = koreha_arimasuka_base_expected() - self.create_synthesis_test_base( - text="これはありますか", - expected=expected, - enable_interrogative_upspeak=True, - ) - - def nn_base_expected(): - return [ - AccentPhrase( - moras=[ - Mora( - text="ン", - consonant=None, - consonant_length=None, - vowel="N", - vowel_length=1.25, - pitch=1.44, - ) - ], - accent=1, - pause_mora=None, - is_interrogative=False, - ) - ] - - expected = nn_base_expected() - self.create_synthesis_test_base( - text="ん", - expected=expected, - enable_interrogative_upspeak=True, - ) - - 
expected = nn_base_expected() - expected[-1].is_interrogative = True - expected[-1].moras += [ - Mora( - text="ン", - consonant=None, - consonant_length=None, - vowel="N", - vowel_length=0.15, - pitch=expected[-1].moras[-1].pitch + 0.3, - ) - ] - self.create_synthesis_test_base( - text="ん?", - expected=expected, - enable_interrogative_upspeak=True, - ) - - expected = nn_base_expected() - expected[-1].is_interrogative = True - self.create_synthesis_test_base( - text="ん?", - expected=expected, - enable_interrogative_upspeak=False, - ) - - def ltu_base_expected(): - return [ - AccentPhrase( - moras=[ - Mora( - text="ッ", - consonant=None, - consonant_length=None, - vowel="cl", - vowel_length=1.69, - pitch=0.0, - ) - ], - accent=1, - pause_mora=None, - is_interrogative=False, - ) - ] - - expected = ltu_base_expected() - self.create_synthesis_test_base( - text="っ", - expected=expected, - enable_interrogative_upspeak=True, - ) - - expected = ltu_base_expected() - expected[-1].is_interrogative = True - self.create_synthesis_test_base( - text="っ?", - expected=expected, - enable_interrogative_upspeak=True, - ) - - expected = ltu_base_expected() - expected[-1].is_interrogative = True - self.create_synthesis_test_base( - text="っ?", - expected=expected, - enable_interrogative_upspeak=False, - ) - - def su_base_expected(): - return [ - AccentPhrase( - moras=[ - Mora( - text="ス", - consonant="s", - consonant_length=3.19, - vowel="u", - vowel_length=3.5, - pitch=5.94, - ) - ], - accent=1, - pause_mora=None, - is_interrogative=False, - ) - ] - - expected = su_base_expected() - self.create_synthesis_test_base( - text="す", - expected=expected, - enable_interrogative_upspeak=True, - ) - - expected = su_base_expected() - expected[-1].is_interrogative = True - expected[-1].moras += [ - Mora( - text="ウ", - consonant=None, - consonant_length=None, - vowel="u", - vowel_length=0.15, - pitch=expected[-1].moras[-1].pitch + 0.3, - ) - ] - self.create_synthesis_test_base( - text="す?", - expected=expected, - enable_interrogative_upspeak=True, - ) - - expected = su_base_expected() - expected[-1].is_interrogative = True - self.create_synthesis_test_base( - text="す?", - expected=expected, - enable_interrogative_upspeak=False, - ) diff --git a/spaces/Gen-Sim/Gen-Sim/notebooks/affordance.py b/spaces/Gen-Sim/Gen-Sim/notebooks/affordance.py deleted file mode 100644 index cfb3da7d1eab04bd940fc9331e8d9c78c4c8a3ed..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/notebooks/affordance.py +++ /dev/null @@ -1,246 +0,0 @@ -import os -import sys -import json - -import numpy as np -from cliport import tasks -from cliport import agents -from cliport.utils import utils - -import torch -import cv2 -from cliport.dataset import RavensDataset -from cliport.environments.environment import Environment -from torch.utils.data import DataLoader -import IPython - -import matplotlib -import numpy as np -import matplotlib.pyplot as plt - -train_demos = 10 # number training demonstrations used to train agent -n_eval = 1 # number of evaluation instances -mode = 'test' # val or test - -agent_name = 'cliport' -model_task = 'place-red-in-green' # multi-task agent conditioned with language goals -task_type = 'gpt5_mixcliport2' # gpt5_mixcliport2 -model_folder = f'exps/exp-{task_type}_task_new_demo{train_demos}_2023-08-01_16-13-10-smaller' # path to pre-trained checkpoint -ckpt_name = 'last.ckpt' # name of checkpoint to load - -draw_grasp_lines = True -affordance_heatmap_scale = 30 - -### Uncomment the task you want to evaluate on ### 
-# eval_task = 'align-rope' -# eval_task = 'assembling-kits-seq-seen-colors' -# eval_task = 'assembling-kits-seq-unseen-colors' -# eval_task = 'packing-shapes' -# eval_task = 'packing-boxes-pairs-seen-colors' -# eval_task = 'packing-boxes-pairs-unseen-colors' -# eval_task = 'packing-seen-google-objects-seq' -# eval_task = 'packing-unseen-google-objects-seq' -# eval_task = 'packing-seen-google-objects-group' -# eval_task = 'packing-unseen-google-objects-group' -# eval_task = 'put-block-in-bowl-seen-colors' -# eval_task = 'put-block-in-bowl-unseen-colors' -eval_task = 'place-red-in-green' -# eval_task = 'stack-block-pyramid-seq-unseen-colors' -# eval_task = 'separating-piles-seen-colors' -# eval_task = 'separating-piles-unseen-colors' -# eval_task = 'towers-of-hanoi-seq-seen-colors' -# eval_task = 'towers-of-hanoi-seq-unseen-colors' - - -root_dir = os.environ['GENSIM_ROOT'] -assets_root = os.path.join(root_dir, 'cliport/environments/assets/') -config_file = 'eval.yaml' - -vcfg = utils.load_hydra_config(os.path.join(root_dir, f'cliport/cfg/{config_file}')) -vcfg['data_dir'] = os.path.join(root_dir, 'data') -vcfg['mode'] = mode - -vcfg['model_task'] = model_task -vcfg['eval_task'] = eval_task -vcfg['agent'] = agent_name - -# Model and training config paths -model_path = os.path.join(root_dir, model_folder) -if model_folder[-7:] == 'smaller': - vcfg['train_config'] = f"{model_path}/{model_folder[9:-8]}-{vcfg['agent']}-n{train_demos}-train/.hydra/config.yaml" - vcfg['model_path'] = f"{model_path}/{model_folder[9:-8]}-{vcfg['agent']}-n{train_demos}-train/checkpoints/" -else: - vcfg['train_config'] = f"{model_path}/{model_folder[9:]}-{vcfg['agent']}-n{train_demos}-train/.hydra/config.yaml" - vcfg['model_path'] = f"{model_path}/{model_folder[9:]}-{vcfg['agent']}-n{train_demos}-train/checkpoints/" -tcfg = utils.load_hydra_config(vcfg['train_config']) - -# Load dataset -ds = RavensDataset(os.path.join(vcfg['data_dir'], f'{vcfg["eval_task"]}-{vcfg["mode"]}'), - tcfg, - n_demos=n_eval, - augment=False) - -eval_run = 0 -name = '{}-{}-{}-{}'.format(vcfg['eval_task'], vcfg['agent'], n_eval, eval_run) -print(f'\nEval ID: {name}\n') - -# Initialize agent -utils.set_seed(eval_run, torch=True) -agent = agents.names[vcfg['agent']](name, tcfg, DataLoader(ds), DataLoader(ds)) - -# Load checkpoint -ckpt_path = os.path.join(vcfg['model_path'], ckpt_name) -print(f'\nLoading checkpoint: {ckpt_path}') -agent.load(ckpt_path) - - - -env = Environment( - assets_root, - disp=False, - shared_memory=False, - hz=480, - record_cfg=vcfg['record'] -) - - - - -episode = 0 -num_eval_instances = min(n_eval, ds.n_episodes) - -for i in range(num_eval_instances): - print(f'\nEvaluation Instance: {i + 1}/{num_eval_instances}') - - # Load episode - episode, seed = ds.load(i) - goal = episode[-1] - total_reward = 0 - np.random.seed(seed) - - # Set task - task_name = vcfg['eval_task'] - task = tasks.names[task_name]() - task.mode = mode - - # Set environment - env.seed(seed) - env.set_task(task) - obs = env.reset() - info = env.info - reward = 0 - - step = 0 - done = False - - # Rollout - while (step <= task.max_steps) and not done: - print(f"Step: {step} ({task.max_steps} max)") - - # Get batch - if step == task.max_steps-1: - batch = ds.process_goal((obs, None, reward, info), perturb_params=None) - else: - batch = ds.process_sample((obs, None, reward, info), augment=False) - - fig, axs = plt.subplots(2, 2, figsize=(13, 7)) - - # Get color and depth inputs - img = batch['img'] - img = torch.from_numpy(img) - color = 
np.uint8(img.detach().cpu().numpy())[:,:,:3] - color = color.transpose(1,0,2) - depth = np.array(img.detach().cpu().numpy())[:,:,3] - depth = depth.transpose(1,0) - - # Display input color - axs[0,0].imshow(color) - axs[0,0].axes.xaxis.set_visible(False) - axs[0,0].axes.yaxis.set_visible(False) - axs[0,0].set_title('Input RGB') - - # Display input depth - axs[0,1].imshow(depth) - axs[0,1].axes.xaxis.set_visible(False) - axs[0,1].axes.yaxis.set_visible(False) - axs[0,1].set_title('Input Depth') - - # Display predicted pick affordance - axs[1,0].imshow(color) - axs[1,0].axes.xaxis.set_visible(False) - axs[1,0].axes.yaxis.set_visible(False) - axs[1,0].set_title('Pick Affordance') - - # Display predicted place affordance - axs[1,1].imshow(color) - axs[1,1].axes.xaxis.set_visible(False) - axs[1,1].axes.yaxis.set_visible(False) - axs[1,1].set_title('Place Affordance') - - # Get action predictions - l = str(info['lang_goal']) - act = agent.act(obs, info, goal=None) - pick, place = act['pick'], act['place'] - - # Visualize pick affordance - pick_inp = {'inp_img': batch['img'], 'lang_goal': l} - pick_conf = agent.attn_forward(pick_inp)[0] - print("pick_conf:", pick_conf.shape, pick, place) - # IPython.embed() - logits = pick_conf.detach().cpu().numpy() - - pick_conf = pick_conf.detach().cpu().numpy() - argmax = np.argmax(pick_conf) - argmax = np.unravel_index(argmax, shape=pick_conf.shape) - p0 = argmax[:2] - - p0_theta = (argmax[2] * (2 * np.pi / pick_conf.shape[2])) * -1.0 - - line_len = 30 - pick0 = (pick[0] + line_len/2.0 * np.sin(p0_theta), pick[1] + line_len/2.0 * np.cos(p0_theta)) - pick1 = (pick[0] - line_len/2.0 * np.sin(p0_theta), pick[1] - line_len/2.0 * np.cos(p0_theta)) - - if draw_grasp_lines: - axs[1,0].plot((pick1[0], pick0[0]), (pick1[1], pick0[1]), color='r', linewidth=1) - - # Visualize place affordance - place_inp = {'inp_img': batch['img'], 'p0': pick, 'lang_goal': l} - place_conf = agent.trans_forward(place_inp)[0] - - place_conf = place_conf.permute(1, 2, 0) - place_conf = place_conf.detach().cpu().numpy() - argmax = np.argmax(place_conf) - argmax = np.unravel_index(argmax, shape=place_conf.shape) - p1_pix = argmax[:2] - p1_theta = (argmax[2] * (2 * np.pi / place_conf.shape[2]) + p0_theta) * -1.0 - - line_len = 30 - place0 = (place[0] + line_len/2.0 * np.sin(p1_theta), place[1] + line_len/2.0 * np.cos(p1_theta)) - place1 = (place[0] - line_len/2.0 * np.sin(p1_theta), place[1] - line_len/2.0 * np.cos(p1_theta)) - - if draw_grasp_lines: - axs[1,1].plot((place1[0], place0[0]), (place1[1], place0[1]), color='g', linewidth=1) - - # Overlay affordances on RGB input - pick_logits_disp = np.uint8(logits * 255 * affordance_heatmap_scale).transpose(2,1,0) - place_logits_disp = np.uint8(np.sum(place_conf, axis=2)[:,:,None] * 255 * affordance_heatmap_scale).transpose(1,0,2)# .transpose(1,2,0) - - pick_logits_disp_masked = np.ma.masked_where(pick_logits_disp < 0, pick_logits_disp) - place_logits_disp_masked = np.ma.masked_where(place_logits_disp < 0, place_logits_disp) - # IPython.embed() - - axs[1][0].imshow(pick_logits_disp_masked, alpha=0.75) - axs[1][1].imshow(place_logits_disp_masked, cmap='viridis', alpha=0.75) - - print(f"Lang Goal: {str(info['lang_goal'])}") - print(os.getcwd()) - plt.savefig(f'./test_{step}.png') - - # Act with the predicted actions - obs, reward, done, info = env.step(act) - step += 1 - - if done: - print("Done. Success.") - else: - print("Max steps reached. 
Task failed.") \ No newline at end of file diff --git a/spaces/Giuliano/Conversational-Wikipedia/README.md b/spaces/Giuliano/Conversational-Wikipedia/README.md deleted file mode 100644 index f46b0dd8b1037b3022e33a5a15cc34217d51cf5e..0000000000000000000000000000000000000000 --- a/spaces/Giuliano/Conversational-Wikipedia/README.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: Conversational Wikipedia -emoji: 📚 -colorFrom: yellow -colorTo: yellow -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/Gradio-Blocks/clip-guided-faces/README.md b/spaces/Gradio-Blocks/clip-guided-faces/README.md deleted file mode 100644 index 7d0853f83371dfd32eb842cd855f39e782663dd7..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/clip-guided-faces/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Clip Guided Faces -emoji: 👁 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.0.6 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/fovea_head.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/fovea_head.py deleted file mode 100644 index c8ccea787cba3d092284d4a5e209adaf6521c86a..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/models/dense_heads/fovea_head.py +++ /dev/null @@ -1,341 +0,0 @@ -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule, normal_init -from mmcv.ops import DeformConv2d - -from mmdet.core import multi_apply, multiclass_nms -from ..builder import HEADS -from .anchor_free_head import AnchorFreeHead - -INF = 1e8 - - -class FeatureAlign(nn.Module): - - def __init__(self, - in_channels, - out_channels, - kernel_size=3, - deform_groups=4): - super(FeatureAlign, self).__init__() - offset_channels = kernel_size * kernel_size * 2 - self.conv_offset = nn.Conv2d( - 4, deform_groups * offset_channels, 1, bias=False) - self.conv_adaption = DeformConv2d( - in_channels, - out_channels, - kernel_size=kernel_size, - padding=(kernel_size - 1) // 2, - deform_groups=deform_groups) - self.relu = nn.ReLU(inplace=True) - - def init_weights(self): - normal_init(self.conv_offset, std=0.1) - normal_init(self.conv_adaption, std=0.01) - - def 
forward(self, x, shape): - offset = self.conv_offset(shape) - x = self.relu(self.conv_adaption(x, offset)) - return x - - -@HEADS.register_module() -class FoveaHead(AnchorFreeHead): - """FoveaBox: Beyond Anchor-based Object Detector - https://arxiv.org/abs/1904.03797 - """ - - def __init__(self, - num_classes, - in_channels, - base_edge_list=(16, 32, 64, 128, 256), - scale_ranges=((8, 32), (16, 64), (32, 128), (64, 256), (128, - 512)), - sigma=0.4, - with_deform=False, - deform_groups=4, - **kwargs): - self.base_edge_list = base_edge_list - self.scale_ranges = scale_ranges - self.sigma = sigma - self.with_deform = with_deform - self.deform_groups = deform_groups - super().__init__(num_classes, in_channels, **kwargs) - - def _init_layers(self): - # box branch - super()._init_reg_convs() - self.conv_reg = nn.Conv2d(self.feat_channels, 4, 3, padding=1) - - # cls branch - if not self.with_deform: - super()._init_cls_convs() - self.conv_cls = nn.Conv2d( - self.feat_channels, self.cls_out_channels, 3, padding=1) - else: - self.cls_convs = nn.ModuleList() - self.cls_convs.append( - ConvModule( - self.feat_channels, (self.feat_channels * 4), - 3, - stride=1, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.norm_cfg is None)) - self.cls_convs.append( - ConvModule((self.feat_channels * 4), (self.feat_channels * 4), - 1, - stride=1, - padding=0, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - bias=self.norm_cfg is None)) - self.feature_adaption = FeatureAlign( - self.feat_channels, - self.feat_channels, - kernel_size=3, - deform_groups=self.deform_groups) - self.conv_cls = nn.Conv2d( - int(self.feat_channels * 4), - self.cls_out_channels, - 3, - padding=1) - - def init_weights(self): - super().init_weights() - if self.with_deform: - self.feature_adaption.init_weights() - - def forward_single(self, x): - cls_feat = x - reg_feat = x - for reg_layer in self.reg_convs: - reg_feat = reg_layer(reg_feat) - bbox_pred = self.conv_reg(reg_feat) - if self.with_deform: - cls_feat = self.feature_adaption(cls_feat, bbox_pred.exp()) - for cls_layer in self.cls_convs: - cls_feat = cls_layer(cls_feat) - cls_score = self.conv_cls(cls_feat) - return cls_score, bbox_pred - - def _get_points_single(self, *args, **kwargs): - y, x = super()._get_points_single(*args, **kwargs) - return y + 0.5, x + 0.5 - - def loss(self, - cls_scores, - bbox_preds, - gt_bbox_list, - gt_label_list, - img_metas, - gt_bboxes_ignore=None): - assert len(cls_scores) == len(bbox_preds) - - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - points = self.get_points(featmap_sizes, bbox_preds[0].dtype, - bbox_preds[0].device) - num_imgs = cls_scores[0].size(0) - flatten_cls_scores = [ - cls_score.permute(0, 2, 3, 1).reshape(-1, self.cls_out_channels) - for cls_score in cls_scores - ] - flatten_bbox_preds = [ - bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4) - for bbox_pred in bbox_preds - ] - flatten_cls_scores = torch.cat(flatten_cls_scores) - flatten_bbox_preds = torch.cat(flatten_bbox_preds) - flatten_labels, flatten_bbox_targets = self.get_targets( - gt_bbox_list, gt_label_list, featmap_sizes, points) - - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - pos_inds = ((flatten_labels >= 0) - & (flatten_labels < self.num_classes)).nonzero().view(-1) - num_pos = len(pos_inds) - - loss_cls = self.loss_cls( - flatten_cls_scores, flatten_labels, avg_factor=num_pos + num_imgs) - if num_pos > 0: - pos_bbox_preds = flatten_bbox_preds[pos_inds] - pos_bbox_targets = flatten_bbox_targets[pos_inds] - 
pos_weights = pos_bbox_targets.new_zeros( - pos_bbox_targets.size()) + 1.0 - loss_bbox = self.loss_bbox( - pos_bbox_preds, - pos_bbox_targets, - pos_weights, - avg_factor=num_pos) - else: - loss_bbox = torch.tensor( - 0, - dtype=flatten_bbox_preds.dtype, - device=flatten_bbox_preds.device) - return dict(loss_cls=loss_cls, loss_bbox=loss_bbox) - - def get_targets(self, gt_bbox_list, gt_label_list, featmap_sizes, points): - label_list, bbox_target_list = multi_apply( - self._get_target_single, - gt_bbox_list, - gt_label_list, - featmap_size_list=featmap_sizes, - point_list=points) - flatten_labels = [ - torch.cat([ - labels_level_img.flatten() for labels_level_img in labels_level - ]) for labels_level in zip(*label_list) - ] - flatten_bbox_targets = [ - torch.cat([ - bbox_targets_level_img.reshape(-1, 4) - for bbox_targets_level_img in bbox_targets_level - ]) for bbox_targets_level in zip(*bbox_target_list) - ] - flatten_labels = torch.cat(flatten_labels) - flatten_bbox_targets = torch.cat(flatten_bbox_targets) - return flatten_labels, flatten_bbox_targets - - def _get_target_single(self, - gt_bboxes_raw, - gt_labels_raw, - featmap_size_list=None, - point_list=None): - - gt_areas = torch.sqrt((gt_bboxes_raw[:, 2] - gt_bboxes_raw[:, 0]) * - (gt_bboxes_raw[:, 3] - gt_bboxes_raw[:, 1])) - label_list = [] - bbox_target_list = [] - # for each pyramid, find the cls and box target - for base_len, (lower_bound, upper_bound), stride, featmap_size, \ - (y, x) in zip(self.base_edge_list, self.scale_ranges, - self.strides, featmap_size_list, point_list): - # FG cat_id: [0, num_classes -1], BG cat_id: num_classes - labels = gt_labels_raw.new_zeros(featmap_size) + self.num_classes - bbox_targets = gt_bboxes_raw.new(featmap_size[0], featmap_size[1], - 4) + 1 - # scale assignment - hit_indices = ((gt_areas >= lower_bound) & - (gt_areas <= upper_bound)).nonzero().flatten() - if len(hit_indices) == 0: - label_list.append(labels) - bbox_target_list.append(torch.log(bbox_targets)) - continue - _, hit_index_order = torch.sort(-gt_areas[hit_indices]) - hit_indices = hit_indices[hit_index_order] - gt_bboxes = gt_bboxes_raw[hit_indices, :] / stride - gt_labels = gt_labels_raw[hit_indices] - half_w = 0.5 * (gt_bboxes[:, 2] - gt_bboxes[:, 0]) - half_h = 0.5 * (gt_bboxes[:, 3] - gt_bboxes[:, 1]) - # valid fovea area: left, right, top, down - pos_left = torch.ceil( - gt_bboxes[:, 0] + (1 - self.sigma) * half_w - 0.5).long().\ - clamp(0, featmap_size[1] - 1) - pos_right = torch.floor( - gt_bboxes[:, 0] + (1 + self.sigma) * half_w - 0.5).long().\ - clamp(0, featmap_size[1] - 1) - pos_top = torch.ceil( - gt_bboxes[:, 1] + (1 - self.sigma) * half_h - 0.5).long().\ - clamp(0, featmap_size[0] - 1) - pos_down = torch.floor( - gt_bboxes[:, 1] + (1 + self.sigma) * half_h - 0.5).long().\ - clamp(0, featmap_size[0] - 1) - for px1, py1, px2, py2, label, (gt_x1, gt_y1, gt_x2, gt_y2) in \ - zip(pos_left, pos_top, pos_right, pos_down, gt_labels, - gt_bboxes_raw[hit_indices, :]): - labels[py1:py2 + 1, px1:px2 + 1] = label - bbox_targets[py1:py2 + 1, px1:px2 + 1, 0] = \ - (stride * x[py1:py2 + 1, px1:px2 + 1] - gt_x1) / base_len - bbox_targets[py1:py2 + 1, px1:px2 + 1, 1] = \ - (stride * y[py1:py2 + 1, px1:px2 + 1] - gt_y1) / base_len - bbox_targets[py1:py2 + 1, px1:px2 + 1, 2] = \ - (gt_x2 - stride * x[py1:py2 + 1, px1:px2 + 1]) / base_len - bbox_targets[py1:py2 + 1, px1:px2 + 1, 3] = \ - (gt_y2 - stride * y[py1:py2 + 1, px1:px2 + 1]) / base_len - bbox_targets = bbox_targets.clamp(min=1. / 16, max=16.) 
- label_list.append(labels) - bbox_target_list.append(torch.log(bbox_targets)) - return label_list, bbox_target_list - - def get_bboxes(self, - cls_scores, - bbox_preds, - img_metas, - cfg=None, - rescale=None): - assert len(cls_scores) == len(bbox_preds) - num_levels = len(cls_scores) - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - points = self.get_points( - featmap_sizes, - bbox_preds[0].dtype, - bbox_preds[0].device, - flatten=True) - result_list = [] - for img_id in range(len(img_metas)): - cls_score_list = [ - cls_scores[i][img_id].detach() for i in range(num_levels) - ] - bbox_pred_list = [ - bbox_preds[i][img_id].detach() for i in range(num_levels) - ] - img_shape = img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - det_bboxes = self._get_bboxes_single(cls_score_list, - bbox_pred_list, featmap_sizes, - points, img_shape, - scale_factor, cfg, rescale) - result_list.append(det_bboxes) - return result_list - - def _get_bboxes_single(self, - cls_scores, - bbox_preds, - featmap_sizes, - point_list, - img_shape, - scale_factor, - cfg, - rescale=False): - cfg = self.test_cfg if cfg is None else cfg - assert len(cls_scores) == len(bbox_preds) == len(point_list) - det_bboxes = [] - det_scores = [] - for cls_score, bbox_pred, featmap_size, stride, base_len, (y, x) \ - in zip(cls_scores, bbox_preds, featmap_sizes, self.strides, - self.base_edge_list, point_list): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - scores = cls_score.permute(1, 2, 0).reshape( - -1, self.cls_out_channels).sigmoid() - bbox_pred = bbox_pred.permute(1, 2, 0).reshape(-1, 4).exp() - nms_pre = cfg.get('nms_pre', -1) - if (nms_pre > 0) and (scores.shape[0] > nms_pre): - max_scores, _ = scores.max(dim=1) - _, topk_inds = max_scores.topk(nms_pre) - bbox_pred = bbox_pred[topk_inds, :] - scores = scores[topk_inds, :] - y = y[topk_inds] - x = x[topk_inds] - x1 = (stride * x - base_len * bbox_pred[:, 0]).\ - clamp(min=0, max=img_shape[1] - 1) - y1 = (stride * y - base_len * bbox_pred[:, 1]).\ - clamp(min=0, max=img_shape[0] - 1) - x2 = (stride * x + base_len * bbox_pred[:, 2]).\ - clamp(min=0, max=img_shape[1] - 1) - y2 = (stride * y + base_len * bbox_pred[:, 3]).\ - clamp(min=0, max=img_shape[0] - 1) - bboxes = torch.stack([x1, y1, x2, y2], -1) - det_bboxes.append(bboxes) - det_scores.append(scores) - det_bboxes = torch.cat(det_bboxes) - if rescale: - det_bboxes /= det_bboxes.new_tensor(scale_factor) - det_scores = torch.cat(det_scores) - padding = det_scores.new_zeros(det_scores.shape[0], 1) - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - det_scores = torch.cat([det_scores, padding], dim=1) - det_bboxes, det_labels = multiclass_nms(det_bboxes, det_scores, - cfg.score_thr, cfg.nms, - cfg.max_per_img) - return det_bboxes, det_labels diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/cgnet.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/cgnet.py deleted file mode 100644 index eff8d9458c877c5db894957e0b1b4597e40da6ab..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/_base_/models/cgnet.py +++ /dev/null @@ -1,35 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', eps=1e-03, requires_grad=True) -model = dict( - type='EncoderDecoder', - backbone=dict( - type='CGNet', - norm_cfg=norm_cfg, - in_channels=3, - num_channels=(32, 64, 128), - num_blocks=(3, 21), - dilations=(2, 4), - reductions=(8, 16)), - 
decode_head=dict( - type='FCNHead', - in_channels=256, - in_index=2, - channels=256, - num_convs=0, - concat_input=False, - dropout_ratio=0, - num_classes=19, - norm_cfg=norm_cfg, - loss_decode=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0, - class_weight=[ - 2.5959933, 6.7415504, 3.5354059, 9.8663225, 9.690899, 9.369352, - 10.289121, 9.953208, 4.3097677, 9.490387, 7.674431, 9.396905, - 10.347791, 6.3927646, 10.226669, 10.241062, 10.280587, - 10.396974, 10.055647 - ])), - # model training and testing settings - train_cfg=dict(sampler=None), - test_cfg=dict(mode='whole')) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x1024_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x1024_40k_cityscapes.py deleted file mode 100644 index 401c6ea7330d45d8f7604a1da63fc6e15faea424..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r50-d8_512x1024_40k_cityscapes.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = [ - '../_base_/models/fcn_r50-d8.py', '../_base_/datasets/cityscapes.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_40k.py' -] diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r101-d8_512x512_80k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r101-d8_512x512_80k_ade20k.py deleted file mode 100644 index fb7c3d55d57b09296ea24889b218f9a0fb997463..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/pspnet/pspnet_r101-d8_512x512_80k_ade20k.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './pspnet_r50-d8_512x512_80k_ade20k.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/losses/balancer.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/losses/balancer.py deleted file mode 100644 index 8a0ac8adebab8cdee8f82351965195dc02800d18..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/losses/balancer.py +++ /dev/null @@ -1,136 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp - -import flashy -import torch -from torch import autograd - - -class Balancer: - """Loss balancer. - - The loss balancer combines losses together to compute gradients for the backward. - Given `y = f(...)`, and a number of losses `l1(y, ...)`, `l2(y, ...)`, with `...` - not having any dependence on `f`, the balancer can efficiently normalize the partial gradients - `d l1 / d y`, `d l2 / dy` before summing them in order to achieve a desired ratio between - the losses. For instance if `weights = {'l1': 2, 'l2': 1}`, 66% of the gradient - going into `f(...)` will come from `l1` on average, and 33% from `l2`. This allows for an easy - interpration of the weights even if the intrisic scale of `l1`, `l2` ... is unknown. - - Noting `g1 = d l1 / dy`, etc., the balanced gradient `G` will be - (with `avg` an exponential moving average over the updates), - - G = sum_i total_norm * g_i / avg(||g_i||) * w_i / sum(w_i) - - If `balance_grads` is False, this is deactivated, and instead the gradient will just be the - standard sum of the partial gradients with the given weights. 
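-
-    As a rough worked example (numbers purely illustrative): with `weights = {'l1': 2, 'l2': 1}`,
-    `total_norm = 1` and running averages `avg(||g1||) = 10`, `avg(||g2||) = 0.1`, the rule above
-    gives `G = (2/3) * g1 / 10 + (1/3) * g2 / 0.1`, so the two losses contribute to the gradient
-    in the intended 2:1 ratio no matter how different their raw scales are.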
-
-    A call to the backward method of the balancer will compute the partial gradients,
-    combining all the losses and potentially rescaling the gradients,
-    which can help stabilize the training and reason about multiple losses with varying scales.
-    The obtained gradient with respect to `y` is then back-propagated to `f(...)`.
-
-    Expected usage:
-
-        weights = {'loss_a': 1, 'loss_b': 4}
-        balancer = Balancer(weights, ...)
-        losses: dict = {}
-        losses['loss_a'] = compute_loss_a(x, y)
-        losses['loss_b'] = compute_loss_b(x, y)
-        if model.training():
-            effective_loss = balancer.backward(losses, x)
-
-    Args:
-        weights (dict[str, float]): Weight coefficient for each loss. The balancer expects the losses keys
-            from the backward method to match the weights keys to assign a weight to each of the provided losses.
-        balance_grads (bool): Whether to rescale gradients so that weights reflect the fraction of the
-            overall gradient, rather than a constant multiplier.
-        total_norm (float): Reference norm when rescaling gradients, ignored otherwise.
-        ema_decay (float): EMA decay for averaging the norms.
-        per_batch_item (bool): Whether to compute the averaged norm per batch item or not. This only holds
-            when rescaling the gradients.
-        epsilon (float): Epsilon value for numerical stability.
-        monitor (bool): If True, stores in `self.metrics` the relative ratio between the norm of the gradients
-            coming from each loss, when calling `backward()`.
-    """
-    def __init__(self, weights: tp.Dict[str, float], balance_grads: bool = True, total_norm: float = 1.,
-                 ema_decay: float = 0.999, per_batch_item: bool = True, epsilon: float = 1e-12,
-                 monitor: bool = False):
-        self.weights = weights
-        self.per_batch_item = per_batch_item
-        self.total_norm = total_norm or 1.
-        self.averager = flashy.averager(ema_decay or 1.)
-        self.epsilon = epsilon
-        self.monitor = monitor
-        self.balance_grads = balance_grads
-        self._metrics: tp.Dict[str, tp.Any] = {}
-
-    @property
-    def metrics(self):
-        return self._metrics
-
-    def backward(self, losses: tp.Dict[str, torch.Tensor], input: torch.Tensor) -> torch.Tensor:
-        """Compute the backward and return the effective train loss, i.e. the loss obtained from
-        computing the effective weights. If `balance_grads` is True, the effective weights
-        are the ones that need to be applied to each gradient to respect the desired relative
-        scale of gradients coming from each loss.
-
-        Args:
-            losses (Dict[str, torch.Tensor]): dictionary with the same keys as `self.weights`.
-            input (torch.Tensor): the input of the losses, typically the output of the model.
-                This should be the single point of dependence between the losses
-                and the model being trained.
-        """
-        norms = {}
-        grads = {}
-        for name, loss in losses.items():
-            # Compute the partial derivative of the loss with respect to the input.
-            grad, = autograd.grad(loss, [input], retain_graph=True)
-            if self.per_batch_item:
-                # We do not average the gradient over the batch dimension.
-                dims = tuple(range(1, grad.dim()))
-                norm = grad.norm(dim=dims, p=2).mean()
-            else:
-                norm = grad.norm(p=2)
-            norms[name] = norm
-            grads[name] = grad
-
-        count = 1
-        if self.per_batch_item:
-            count = len(grad)
-        # Average norms across workers. Theoretically we should average the
-        # squared norm, then take the sqrt, but it worked fine like that.
-        avg_norms = flashy.distrib.average_metrics(self.averager(norms), count)
-        # We approximate the total norm of the gradient as the sum of the norms.
- # Obviously this can be very incorrect if all gradients are aligned, but it works fine. - total = sum(avg_norms.values()) - - self._metrics = {} - if self.monitor: - # Store the ratio of the total gradient represented by each loss. - for k, v in avg_norms.items(): - self._metrics[f'ratio_{k}'] = v / total - - total_weights = sum([self.weights[k] for k in avg_norms]) - assert total_weights > 0. - desired_ratios = {k: w / total_weights for k, w in self.weights.items()} - - out_grad = torch.zeros_like(input) - effective_loss = torch.tensor(0., device=input.device, dtype=input.dtype) - for name, avg_norm in avg_norms.items(): - if self.balance_grads: - # g_balanced = g / avg(||g||) * total_norm * desired_ratio - scale = desired_ratios[name] * self.total_norm / (self.epsilon + avg_norm) - else: - # We just do regular weighted sum of the gradients. - scale = self.weights[name] - out_grad.add_(grads[name], alpha=scale) - effective_loss += scale * losses[name].detach() - # Send the computed partial derivative with respect to the output of the model to the model. - input.backward(out_grad) - return effective_loss diff --git a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/README - Old.md b/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/README - Old.md deleted file mode 100644 index 6dcad10e406b0ab0988debb40bac46ff8cfb33f7..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/README - Old.md +++ /dev/null @@ -1,135 +0,0 @@ ---- -title: "MusicGen+ V1.2.7 (HuggingFace Version)" -emoji: "🎼" -colorFrom: "green" -colorTo: "blue" -sdk: "gradio" -sdk_version: "3.35.2" -app_file: app.py -pinned: true ---- - -# Audiocraft -![docs badge](https://github.com/facebookresearch/audiocraft/workflows/audiocraft_docs/badge.svg) -![linter badge](https://github.com/facebookresearch/audiocraft/workflows/audiocraft_linter/badge.svg) -![tests badge](https://github.com/facebookresearch/audiocraft/workflows/audiocraft_tests/badge.svg) - -Audiocraft is a PyTorch library for deep learning research on audio generation. At the moment, it contains the code for MusicGen, a state-of-the-art controllable text-to-music model. - -## MusicGen - -Audiocraft provides the code and models for MusicGen, [a simple and controllable model for music generation][arxiv]. MusicGen is a single stage auto-regressive -Transformer model trained over a 32kHz EnCodec tokenizer with 4 codebooks sampled at 50 Hz. Unlike existing methods like [MusicLM](https://arxiv.org/abs/2301.11325), MusicGen doesn't require a self-supervised semantic representation, and it generates -all 4 codebooks in one pass. By introducing a small delay between the codebooks, we show we can predict -them in parallel, thus having only 50 auto-regressive steps per second of audio. -Check out our [sample page][musicgen_samples] or test the available demo! - - - Open In Colab - - - Open in HugginFace - -
-
-We use 20K hours of licensed music to train MusicGen. Specifically, we rely on an internal dataset of 10K high-quality music tracks, and on the ShutterStock and Pond5 music data.
-
-## Installation
-Audiocraft requires Python 3.9, PyTorch 2.0.0, and a GPU with at least 16 GB of memory (for the medium-sized model). To install Audiocraft, you can run the following:
-
-```shell
-# Best to make sure you have torch installed first, in particular before installing xformers.
-# Don't run this if you already have PyTorch installed.
-pip install 'torch>=2.0'
-# Then proceed to one of the following
-pip install -U audiocraft # stable release
-pip install -U git+https://git@github.com/facebookresearch/audiocraft#egg=audiocraft # bleeding edge
-pip install -e . # or if you cloned the repo locally
-```
-
-## Usage
-We offer a number of ways to interact with MusicGen:
-1. A demo is also available on the [`facebook/MusicGen` HuggingFace Space](https://huggingface.co/spaces/facebook/MusicGen) (huge thanks to all the HF team for their support).
-2. You can run the extended demo on a Colab: [colab notebook](https://colab.research.google.com/drive/1fxGqfg96RBUvGxZ1XXN07s3DthrKUl4-?usp=sharing).
-3. You can use the gradio demo locally by running `python app.py`.
-4. You can play with MusicGen by running the jupyter notebook at [`demo.ipynb`](./demo.ipynb) locally (if you have a GPU).
-5. Finally, check out [@camenduru Colab page](https://github.com/camenduru/MusicGen-colab), which is regularly
-   updated with contributions from @camenduru and the community.
-
-## API
-
-We provide a simple API and 4 pre-trained models. The pre-trained models are:
-- `small`: 300M model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-small)
-- `medium`: 1.5B model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-medium)
-- `melody`: 1.5B model, text to music and text+melody to music - [🤗 Hub](https://huggingface.co/facebook/musicgen-melody)
-- `large`: 3.3B model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-large)
-
-We observe the best trade-off between quality and compute with the `medium` or `melody` model.
-In order to use MusicGen locally, **you must have a GPU**. We recommend 16GB of memory, but smaller
-GPUs will be able to generate short sequences, or longer sequences with the `small` model.
-
-**Note**: Please make sure to have [ffmpeg](https://ffmpeg.org/download.html) installed when using newer versions of `torchaudio`.
-You can install it with:
-```
-apt-get install ffmpeg
-```
-
-See below for a quick example of using the API.
-
-```python
-import torchaudio
-from audiocraft.models import MusicGen
-from audiocraft.data.audio import audio_write
-
-model = MusicGen.get_pretrained('melody')
-model.set_generation_params(duration=8) # generate 8 seconds.
-wav = model.generate_unconditional(4) # generates 4 unconditional audio samples
-descriptions = ['happy rock', 'energetic EDM', 'sad jazz']
-wav = model.generate(descriptions) # generates 3 samples.
-
-melody, sr = torchaudio.load('./assets/bach.mp3')
-# generates using the melody from the given audio and the provided descriptions.
-wav = model.generate_with_chroma(descriptions, melody[None].expand(3, -1, -1), sr)
-
-for idx, one_wav in enumerate(wav):
-    # Will save under {idx}.wav, with loudness normalization at -14 db LUFS.
-    audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness", loudness_compressor=True)
-```
-
-## Model Card
-
-See [the model card page](./MODEL_CARD.md).
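For reference, here is a minimal sketch that builds on the Python example in the API section above and shows the sampling controls exposed by `set_generation_params`; the keyword names are assumed from the same audiocraft release and should be checked against the installed version.

```python
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained('small')  # smallest checkpoint, text-to-music only
# duration in seconds, top-k sampling, and the classifier-free guidance coefficient
model.set_generation_params(duration=12, use_sampling=True, top_k=250, cfg_coef=3.0)
wav = model.generate(['lo-fi hip hop beat with warm piano'])  # tensor of shape [batch, channels, samples]
audio_write('lofi_sample', wav[0].cpu(), model.sample_rate, strategy="loudness")
```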
- -## FAQ - -#### Will the training code be released? - -Yes. We will soon release the training code for MusicGen and EnCodec. - - -#### I need help on Windows - -@FurkanGozukara made a complete tutorial for [Audiocraft/MusicGen on Windows](https://youtu.be/v-YpvPkhdO4) - -#### I need help for running the demo on Colab - -Check [@camenduru tutorial on Youtube](https://www.youtube.com/watch?v=EGfxuTy9Eeo). - - -## Citation -``` -@article{copet2023simple, - title={Simple and Controllable Music Generation}, - author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez}, - year={2023}, - journal={arXiv preprint arXiv:2306.05284}, -} -``` - -## License -* The code in this repository is released under the MIT license as found in the [LICENSE file](LICENSE). -* The weights in this repository are released under the CC-BY-NC 4.0 license as found in the [LICENSE_weights file](LICENSE_weights). - -[arxiv]: https://arxiv.org/abs/2306.05284 -[musicgen_samples]: https://ai.honu.io/papers/musicgen/ diff --git a/spaces/HMinions/new-Bing-with_your_cookies/README.md b/spaces/HMinions/new-Bing-with_your_cookies/README.md deleted file mode 100644 index 3763a6531c275a7e6e1d23d916fc76b2b9788d38..0000000000000000000000000000000000000000 --- a/spaces/HMinions/new-Bing-with_your_cookies/README.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -title: New-Bing-with Your Cookies -emoji: 🐨 -colorFrom: green -colorTo: pink -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: other -duplicated_from: hOTZR/new-Bing-with_your_cookies ---- -## Inspired By: -- [EdgeGPT](https://github.com/acheong08/EdgeGPT) -- [DiscordBot-EdgeGPT](https://github.com/FuseFairy/DiscordBot-EdgeGPT) -- [chatdemo](https://github.com/simpx/chatdemo) -- [Chatbot](https://medium.datadriveninvestor.com/build-your-own-chatbot-using-chatgpt-for-inspiration-2a2ae6ebb288) - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/fs_zen2_large_afqmc.sh b/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/fs_zen2_large_afqmc.sh deleted file mode 100644 index 1f44844a127b5bb39226c56b70bba85957dd735a..0000000000000000000000000000000000000000 --- a/spaces/HaloMaster/chinesesummary/fengshen/examples/zen2_finetune/fs_zen2_large_afqmc.sh +++ /dev/null @@ -1,93 +0,0 @@ -#!/bin/bash -#SBATCH --job-name=zen2_large_afqmc # create a short name for your job -#SBATCH --nodes=1 # node count -#SBATCH --ntasks=1 # total number of tasks across all nodes -#SBATCH --cpus-per-task=30 # cpu-cores per task (>1 if multi-threaded tasks) -#SBATCH --gres=gpu:1 # number of gpus per node -#SBATCH --mail-type=ALL # send email when job begins, ends or failed etc. -#SBATCH -o %x-%j.log # output and error file name (%x=job name, %j=job id) - - -export CUDA_VISIBLE_DEVICES='1' -export TORCH_EXTENSIONS_DIR=/cognitive_comp/ganruyi/tmp/torch_extendsions - -MODEL_NAME=zen2_large - -TASK=afqmc - -ZERO_STAGE=1 -STRATEGY=deepspeed_stage_${ZERO_STAGE} - -ROOT_DIR=/cognitive_comp/ganruyi/experiments/classification_finetune/${MODEL_NAME}_${TASK} -if [ ! -d ${ROOT_DIR} ];then - mkdir -p ${ROOT_DIR} - echo ${ROOT_DIR} created!!!!!!!!!!!!!! -else - echo ${ROOT_DIR} exist!!!!!!!!!!!!!!! 
-fi - -DATA_DIR=/cognitive_comp/yangping/data/ChineseCLUE_DATA/${TASK}_public/ -PRETRAINED_MODEL_PATH=/cognitive_comp/ganruyi/hf_models/zen/zh_zen_large_2.0 - -CHECKPOINT_PATH=${ROOT_DIR}/ckpt/ -OUTPUT_PATH=${ROOT_DIR}/predict.json - -DATA_ARGS="\ - --data_dir $DATA_DIR \ - --train_data train.json \ - --valid_data dev.json \ - --test_data test.json \ - --train_batchsize 32 \ - --valid_batchsize 16 \ - --max_seq_length 128 \ - --texta_name sentence \ - --label_name label \ - --id_name id \ - --task_name afqmc \ - " - -MODEL_ARGS="\ - --learning_rate 2e-5 \ - --weight_decay 0.1 \ - --warmup_ratio 0.01 \ - --num_labels 2 \ - " - -MODEL_CHECKPOINT_ARGS="\ - --monitor val_acc \ - --save_top_k 3 \ - --mode max \ - --every_n_train_steps 100 \ - --save_weights_only True \ - --dirpath $CHECKPOINT_PATH \ - --filename model-{epoch:02d}-{val_acc:.4f} \ - " - -TRAINER_ARGS="\ - --max_epochs 10 \ - --gpus 1 \ - --check_val_every_n_epoch 1 \ - --val_check_interval 100 \ - --default_root_dir $ROOT_DIR \ - " - - -options=" \ - --pretrained_model_path $PRETRAINED_MODEL_PATH \ - --vocab_file $PRETRAINED_MODEL_PATH/vocab.txt \ - --do_lower_case \ - --output_save_path $OUTPUT_PATH \ - $DATA_ARGS \ - $MODEL_ARGS \ - $MODEL_CHECKPOINT_ARGS \ - $TRAINER_ARGS \ -" -SCRIPT_PATH=/cognitive_comp/ganruyi/Fengshenbang-LM/fengshen/examples/zen2_finetune/fengshen_sequence_level_ft_task.py -/home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - -# SINGULARITY_PATH=/cognitive_comp/ganruyi/pytorch21_06_py3_docker_image_v2.sif -# python3 $SCRIPT_PATH $options -# source activate base -# singularity exec --nv -B /cognitive_comp/:/cognitive_comp/ $SINGULARITY_PATH /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options -# /home/ganruyi/anaconda3/bin/python $SCRIPT_PATH $options - diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/models/__init__.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/models/__init__.py deleted file mode 100644 index 3e3039b7081a9e3228c8abefb6391a75b4864439..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/wav2vec/unsupervised/models/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from .wav2vec_u import Wav2vec_U - - -__all__ = [ - "Wav2vec_U", -] diff --git a/spaces/Harveenchadha/BioGPT/app.py b/spaces/Harveenchadha/BioGPT/app.py deleted file mode 100644 index c7c7cbd343dda5ee536931b0ad0c3728e30866ed..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/BioGPT/app.py +++ /dev/null @@ -1,37 +0,0 @@ -import torch -import gradio as gr -from transformers import BioGptTokenizer, BioGptForCausalLM, set_seed - -tokenizer = BioGptTokenizer.from_pretrained("microsoft/biogpt") -model = BioGptForCausalLM.from_pretrained("microsoft/biogpt") - -sentence = "COVID-19 is" - - -set_seed(42) - -def get_beam_output(sentence): - inputs = tokenizer(sentence, return_tensors="pt") - with torch.no_grad(): - beam_output = model.generate(**inputs, - min_length=100, - max_length=1024, - num_beams=5, - early_stopping=True - ) - output=tokenizer.decode(beam_output[0], skip_special_tokens=True) - return output - - -txt1 = gr.Textbox( - label="Input", - lines=3, - ) - -txt2 = gr.Textbox( - label="Output", - lines=10, - ) - -demo = gr.Interface(fn=get_beam_output, inputs=txt1, outputs=txt2) -demo.launch() \ No newline at end of file diff --git a/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/utils/inference/__init__.py b/spaces/Harveenchadha/Vakyansh-Tamil-TTS/ttsv/utils/inference/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Has-ai/text-speech/app.py b/spaces/Has-ai/text-speech/app.py deleted file mode 100644 index c652b507995accca288a88cec2ba16f225f964da..0000000000000000000000000000000000000000 --- a/spaces/Has-ai/text-speech/app.py +++ /dev/null @@ -1,92 +0,0 @@ - - -import logging -from typing import cast - -import gradio as gr -from balacoon_tts import TTS -from huggingface_hub import hf_hub_download, list_repo_files - - -# global tts module, initialized from a model selected -tts = None - - -def main(): - logging.basicConfig(level=logging.INFO) - - with gr.Blocks() as demo: - gr.Markdown( - """ -
-            <h1 align="center">Clone your voice</h1>
- - 1. Write an utterance to generate, - 2. Select the model to synthesize with - 3. Select speaker - 4. Hit "Generate" and listen to the result! - - When you select model for the first time, - it will take a little time to download it. - """ - ) - with gr.Row(variant="panel"): - text = gr.Textbox(label="Text", placeholder="Type something here...") - - with gr.Row(): - with gr.Column(variant="panel"): - repo_files = list_repo_files(repo_id="balacoon/tts") - model_files = [x for x in repo_files if x.endswith("_cpu.addon")] - model_name = gr.Dropdown( - label="Model", - choices=model_files, - ) - with gr.Column(variant="panel"): - speaker = gr.Dropdown(label="Speaker", choices=[]) - - def set_model(model_name_str: str): - """ - gets value from `model_name`, loads model, - re-initializes tts object, gets list of - speakers that model supports and set them to `speaker` - """ - model_path = hf_hub_download( - repo_id="balacoon/tts", filename=model_name_str - ) - global tts - tts = TTS(model_path) - speakers = tts.get_speakers() - value = speakers[-1] - return gr.Dropdown.update( - choices=speakers, value=value, visible=True - ) - - model_name.change(set_model, inputs=model_name, outputs=speaker) - - with gr.Row(variant="panel"): - generate = gr.Button("Generate") - with gr.Row(variant="panel"): - audio = gr.Audio() - - def synthesize_audio(text_str: str, speaker_str: str = ""): - """ - gets utterance to synthesize from `text` Textbox - and speaker name from `speaker` dropdown list. - speaker name might be empty for single-speaker models. - Synthesizes the waveform and updates `audio` with it. - """ - if not text_str: - logging.info("text or speaker are not provided") - return None - global tts - if len(text_str) > 1024: - text_str = text_str[:1024] - samples = cast(TTS, tts).synthesize(text_str, speaker_str) - return gr.Audio.update(value=(cast(TTS, tts).get_sampling_rate(), samples)) - - generate.click(synthesize_audio, inputs=[text, speaker], outputs=audio) - - demo.launch() - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/Hermit591/anime-remove-background/app.py b/spaces/Hermit591/anime-remove-background/app.py deleted file mode 100644 index 230a0d5f8a3da6ab18ecb8db1cd90016a489b96a..0000000000000000000000000000000000000000 --- a/spaces/Hermit591/anime-remove-background/app.py +++ /dev/null @@ -1,52 +0,0 @@ -import gradio as gr -import huggingface_hub -import onnxruntime as rt -import numpy as np -import cv2 - - -def get_mask(img, s=1024): - img = (img / 255).astype(np.float32) - h, w = h0, w0 = img.shape[:-1] - h, w = (s, int(s * w / h)) if h > w else (int(s * h / w), s) - ph, pw = s - h, s - w - img_input = np.zeros([s, s, 3], dtype=np.float32) - img_input[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] = cv2.resize(img, (w, h)) - img_input = np.transpose(img_input, (2, 0, 1)) - img_input = img_input[np.newaxis, :] - mask = rmbg_model.run(None, {'img': img_input})[0][0] - mask = np.transpose(mask, (1, 2, 0)) - mask = mask[ph // 2:ph // 2 + h, pw // 2:pw // 2 + w] - mask = cv2.resize(mask, (w0, h0))[:, :, np.newaxis] - return mask - - -def rmbg_fn(img): - mask = get_mask(img) - img = (mask * img + 255 * (1 - mask)).astype(np.uint8) - mask = (mask * 255).astype(np.uint8) - img = np.concatenate([img, mask], axis=2, dtype=np.uint8) - mask = mask.repeat(3, axis=2) - return mask, img - - -if __name__ == "__main__": - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] - model_path = huggingface_hub.hf_hub_download("skytnt/anime-seg", "isnetis.onnx") - 
rmbg_model = rt.InferenceSession(model_path, providers=providers) - app = gr.Blocks() - with app: - gr.Markdown("# Anime Remove Background\n\n" - "![visitor badge](https://visitor-badge.glitch.me/badge?page_id=skytnt.animeseg)\n\n" - "demo for [https://github.com/SkyTNT/anime-segmentation/](https://github.com/SkyTNT/anime-segmentation/)") - with gr.Row(): - with gr.Column(): - input_img = gr.Image(label="input image") - examples_data = [[f"examples/{x:02d}.jpg"] for x in range(1, 4)] - examples = gr.Dataset(components=[input_img], samples=examples_data) - run_btn = gr.Button(variant="primary") - output_mask = gr.Image(label="mask") - output_img = gr.Image(label="result", image_mode="RGBA") - examples.click(lambda x: x[0], [examples], [input_img]) - run_btn.click(rmbg_fn, [input_img], [output_mask, output_img]) - app.launch() diff --git a/spaces/HighCWu/starganv2vc-paddle/README.md b/spaces/HighCWu/starganv2vc-paddle/README.md deleted file mode 100644 index ec4483aa8bf52fa9ed7aa6bb7e04f89ab8f6a509..0000000000000000000000000000000000000000 --- a/spaces/HighCWu/starganv2vc-paddle/README.md +++ /dev/null @@ -1,84 +0,0 @@ ---- -title: StarGANv2 Voice Conversion on PaddlePaddle -emoji: 🗣️ -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: mit ---- - -# StarGANv2-VC-Paddle -[![Baidu AI Studio](https://img.shields.io/static/v1?label=Baidu&message=AI%20Studio%20Free%20A100&color=blue)](https://aistudio.baidu.com/aistudio/projectdetail/3955253) -[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/HighCWu/starganv2vc-paddle) - -A paddlepaddle version of [StarGANv2-VC](https://github.com/yl4579/StarGANv2-VC). - -Download pretrained models [here](https://aistudio.baidu.com/aistudio/datasetdetail/145012). - -Getting started with free v100/a100 in [AI Studio](https://aistudio.baidu.com/aistudio/projectdetail/3955253) or fast try with [HugginFace Spaces](https://huggingface.co/spaces/HighCWu/starganv2vc-paddle). - ---- - -Original PyTorch Repo [README](https://github.com/yl4579/StarGANv2-VC) 👇 - ---- - - -# StarGANv2-VC: A Diverse, Unsupervised, Non-parallel Framework for Natural-Sounding Voice Conversion - -### Yinghao Aaron Li, Ali Zare, Nima Mesgarani - -> We present an unsupervised non-parallel many-to-many voice conversion (VC) method using a generative adversarial network (GAN) called StarGAN v2. Using a combination of adversarial source classifier loss and perceptual loss, our model significantly outperforms previous VC models. Although our model is trained only with 20 English speakers, it generalizes to a variety of voice conversion tasks, such as any-to-many, cross-lingual, and singing conversion. Using a style encoder, our framework can also convert plain reading speech into stylistic speech, such as emotional and falsetto speech. Subjective and objective evaluation experiments on a non-parallel many-to-many voice conversion task revealed that our model produces natural sounding voices, close to the sound quality of state-of-the-art text-tospeech (TTS) based voice conversion methods without the need for text labels. Moreover, our model is completely convolutional and with a faster-than-real-time vocoder such as Parallel WaveGAN can perform real-time voice conversion. - -Paper: https://arxiv.org/abs/2107.10394 - -Audio samples: https://starganv2-vc.github.io/ - -## Pre-requisites -1. Python >= 3.7 -2. 
Clone this repository:
-```bash
-git clone https://github.com/yl4579/StarGANv2-VC.git
-cd StarGANv2-VC
-```
-3. Install Python requirements:
-```bash
-pip install SoundFile torchaudio munch parallel_wavegan torch pydub
-```
-4. Download and extract the [VCTK dataset](https://datashare.ed.ac.uk/handle/10283/3443)
-and use [VCTK.ipynb](https://github.com/yl4579/StarGANv2-VC/blob/main/Data/VCTK.ipynb) to prepare the data (downsample to 24 kHz etc.). You can also [download the dataset](https://drive.google.com/file/d/1t7QQbu4YC_P1mv9puA_KgSomSFDsSzD6/view?usp=sharing) we have prepared, unzip it to the `Data` folder, and use the provided `config.yml` to reproduce our models.
-
-## Training
-```bash
-python train.py --config_path ./Configs/config.yml
-```
-Please specify the training and validation data in the `config.yml` file. Change `num_domains` to the number of speakers in the dataset. The data list format needs to be `filename.wav|speaker_number`; see [train_list.txt](https://github.com/yl4579/StarGANv2-VC/blob/main/Data/train_list.txt) as an example (a small sketch for generating such a list is included at the end of this README).
-
-Checkpoints and TensorBoard logs will be saved at `log_dir`. To speed up training, you may want to make `batch_size` as large as your GPU RAM can take. However, please note that `batch_size = 5` will take around 10G of GPU RAM.
-
-## Inference
-
-Please refer to [inference.ipynb](https://github.com/yl4579/StarGANv2-VC/blob/main/Demo/inference.ipynb) for details.
-
-The pretrained StarGANv2 and ParallelWaveGAN on the VCTK corpus can be downloaded at [StarGANv2 Link](https://drive.google.com/file/d/1nzTyyl-9A1Hmqya2Q_f2bpZkUoRjbZsY/view?usp=sharing) and [ParallelWaveGAN Link](https://drive.google.com/file/d/1q8oSAzwkqi99oOGXDZyLypCiz0Qzn3Ab/view?usp=sharing). Please unzip them to `Models` and `Vocoder` respectively and run each cell in the notebook.
-
-## ASR & F0 Models
-
-The pretrained F0 and ASR models are provided under the `Utils` folder. Both the F0 and ASR models are trained with melspectrograms preprocessed using [meldataset.py](https://github.com/yl4579/StarGANv2-VC/blob/main/meldataset.py), and both models are trained on speech data only.
-
-The ASR model is trained on an English corpus, but it appears to work when training StarGANv2 models in other languages such as Japanese. The F0 model also appears to work with singing data. For the best performance, however, training your own ASR and F0 models is encouraged for non-English and non-speech data.
-
-You can edit [meldataset.py](https://github.com/yl4579/StarGANv2-VC/blob/main/meldataset.py) with your own melspectrogram preprocessing, but the provided pretrained models will no longer work. You will need to train your own ASR and F0 models with the new preprocessing. You may refer to the repos [Diamondfan/CTC_pytorch](https://github.com/Diamondfan/CTC_pytorch) and [keums/melodyExtraction_JDC](https://github.com/keums/melodyExtraction_JDC) to train your own ASR and F0 models, for example.
-
-## References
-- [clovaai/stargan-v2](https://github.com/clovaai/stargan-v2)
-- [kan-bayashi/ParallelWaveGAN](https://github.com/kan-bayashi/ParallelWaveGAN)
-- [tosaka-m/japanese_realtime_tts](https://github.com/tosaka-m/japanese_realtime_tts)
-- [keums/melodyExtraction_JDC](https://github.com/keums/melodyExtraction_JDC)
-- [Diamondfan/CTC_pytorch](https://github.com/Diamondfan/CTC_pytorch)
-
-## Acknowledgement
-The author would like to thank [@tosaka-m](https://github.com/tosaka-m) for his great repository and valuable discussions.
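-
-## Data list sketch
-
-The `filename.wav|speaker_number` format mentioned in the Training section can be produced with a few lines of Python. The sketch below is only an illustration and assumes your wavs are organized as `Data/<speaker>/<utterance>.wav`; adapt the paths to your own layout:
-
-```python
-import os
-import glob
-
-# Assumed layout: Data/<speaker>/<utterance>.wav; each speaker folder becomes one label id.
-data_root = "Data"
-speakers = sorted(d for d in os.listdir(data_root) if os.path.isdir(os.path.join(data_root, d)))
-
-lines = []
-for idx, speaker in enumerate(speakers):
-    for wav_path in sorted(glob.glob(os.path.join(data_root, speaker, "*.wav"))):
-        lines.append(f"{wav_path}|{idx}")
-
-# Write one `path|speaker_number` entry per line, as expected by train_list.txt.
-with open(os.path.join(data_root, "train_list.txt"), "w") as f:
-    f.write("\n".join(lines))
-```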
diff --git a/spaces/Hugorowan/BardJukebox/README.md b/spaces/Hugorowan/BardJukebox/README.md deleted file mode 100644 index 226df0a3c4a472c47580dd7fd88c90eda9a3b56c..0000000000000000000000000000000000000000 --- a/spaces/Hugorowan/BardJukebox/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: BardJukebox -emoji: 🌍 -colorFrom: blue -colorTo: green -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: true -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/stft.py b/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/stft.py deleted file mode 100644 index 63fcd431e2d7746b696aaa0d4172bc04ffb88efa..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/textless_nlp/gslm/unit2speech/tacotron2/stft.py +++ /dev/null @@ -1,141 +0,0 @@ -""" -BSD 3-Clause License - -Copyright (c) 2017, Prem Seetharaman -All rights reserved. - -* Redistribution and use in source and binary forms, with or without - modification, are permitted provided that the following conditions are met: - -* Redistributions of source code must retain the above copyright notice, - this list of conditions and the following disclaimer. - -* Redistributions in binary form must reproduce the above copyright notice, this - list of conditions and the following disclaimer in the - documentation and/or other materials provided with the distribution. - -* Neither the name of the copyright holder nor the names of its - contributors may be used to endorse or promote products derived from this - software without specific prior written permission. - -THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND -ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR -ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES -(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON -ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-""" - -import torch -import numpy as np -import torch.nn.functional as F -from torch.autograd import Variable -from scipy.signal import get_window -from librosa.util import pad_center, tiny -from .audio_processing import window_sumsquare - - -class STFT(torch.nn.Module): - """adapted from Prem Seetharaman's https://github.com/pseeth/pytorch-stft""" - def __init__(self, filter_length=800, hop_length=200, win_length=800, - window='hann'): - super(STFT, self).__init__() - self.filter_length = filter_length - self.hop_length = hop_length - self.win_length = win_length - self.window = window - self.forward_transform = None - scale = self.filter_length / self.hop_length - fourier_basis = np.fft.fft(np.eye(self.filter_length)) - - cutoff = int((self.filter_length / 2 + 1)) - fourier_basis = np.vstack([np.real(fourier_basis[:cutoff, :]), - np.imag(fourier_basis[:cutoff, :])]) - - forward_basis = torch.FloatTensor(fourier_basis[:, None, :]) - inverse_basis = torch.FloatTensor( - np.linalg.pinv(scale * fourier_basis).T[:, None, :]) - - if window is not None: - assert(filter_length >= win_length) - # get window and zero center pad it to filter_length - fft_window = get_window(window, win_length, fftbins=True) - fft_window = pad_center(fft_window, filter_length) - fft_window = torch.from_numpy(fft_window).float() - - # window the bases - forward_basis *= fft_window - inverse_basis *= fft_window - - self.register_buffer('forward_basis', forward_basis.float()) - self.register_buffer('inverse_basis', inverse_basis.float()) - - def transform(self, input_data): - num_batches = input_data.size(0) - num_samples = input_data.size(1) - - self.num_samples = num_samples - - # similar to librosa, reflect-pad the input - input_data = input_data.view(num_batches, 1, num_samples) - input_data = F.pad( - input_data.unsqueeze(1), - (int(self.filter_length / 2), int(self.filter_length / 2), 0, 0), - mode='reflect') - input_data = input_data.squeeze(1) - - forward_transform = F.conv1d( - input_data, - Variable(self.forward_basis, requires_grad=False), - stride=self.hop_length, - padding=0) - - cutoff = int((self.filter_length / 2) + 1) - real_part = forward_transform[:, :cutoff, :] - imag_part = forward_transform[:, cutoff:, :] - - magnitude = torch.sqrt(real_part**2 + imag_part**2) - phase = torch.autograd.Variable( - torch.atan2(imag_part.data, real_part.data)) - - return magnitude, phase - - def inverse(self, magnitude, phase): - recombine_magnitude_phase = torch.cat( - [magnitude*torch.cos(phase), magnitude*torch.sin(phase)], dim=1) - - inverse_transform = F.conv_transpose1d( - recombine_magnitude_phase, - Variable(self.inverse_basis, requires_grad=False), - stride=self.hop_length, - padding=0) - - if self.window is not None: - window_sum = window_sumsquare( - self.window, magnitude.size(-1), hop_length=self.hop_length, - win_length=self.win_length, n_fft=self.filter_length, - dtype=np.float32) - # remove modulation effects - approx_nonzero_indices = torch.from_numpy( - np.where(window_sum > tiny(window_sum))[0]) - window_sum = torch.autograd.Variable( - torch.from_numpy(window_sum), requires_grad=False) - window_sum = window_sum.cuda() if magnitude.is_cuda else window_sum - inverse_transform[:, :, approx_nonzero_indices] /= window_sum[approx_nonzero_indices] - - # scale by hop ratio - inverse_transform *= float(self.filter_length) / self.hop_length - - inverse_transform = inverse_transform[:, :, int(self.filter_length/2):] - inverse_transform = inverse_transform[:, :, :-int(self.filter_length/2):] - - return 
inverse_transform - - def forward(self, input_data): - self.magnitude, self.phase = self.transform(input_data) - reconstruction = self.inverse(self.magnitude, self.phase) - return reconstruction diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/binarizer.py b/spaces/ICML2022/OFA/fairseq/fairseq/binarizer.py deleted file mode 100644 index ae4d02a6dbbb523b76eb8684e87e38c74fe7c4a1..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/binarizer.py +++ /dev/null @@ -1,80 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from collections import Counter -from typing import Dict - -import torch - -from fairseq.file_chunker_utils import Chunker -from fairseq.file_io import PathManager -from fairseq.tokenizer import tokenize_line - - -class Binarizer: - @staticmethod - def binarize( - filename, - dict, - consumer, - tokenize=tokenize_line, - append_eos=True, - reverse_order=False, - offset=0, - end=-1, - already_numberized=False, - ) -> Dict[str, int]: - nseq, ntok = 0, 0 - replaced = Counter() - - def replaced_consumer(word, idx): - if idx == dict.unk_index and word != dict.unk_word: - replaced.update([word]) - - with Chunker( - PathManager.get_local_path(filename), offset, end - ) as line_iterator: - for line in line_iterator: - if already_numberized: - id_strings = line.strip().split() - id_list = [int(id_string) for id_string in id_strings] - if reverse_order: - id_list.reverse() - if append_eos: - id_list.append(dict.eos()) - ids = torch.IntTensor(id_list) - else: - ids = dict.encode_line( - line=line, - line_tokenizer=tokenize, - add_if_not_exist=False, - consumer=replaced_consumer, - append_eos=append_eos, - reverse_order=reverse_order, - ) - nseq += 1 - ntok += len(ids) - consumer(ids) - return { - "nseq": nseq, - "nunk": sum(replaced.values()), - "ntok": ntok, - "replaced": replaced, - } - - @staticmethod - def binarize_alignments( - filename, alignment_parser, consumer, offset=0, end=-1 - ) -> Dict[str, int]: - nseq = 0 - - with Chunker( - PathManager.get_local_path(filename), offset, end - ) as line_iterator: - for line in line_iterator: - ids = alignment_parser(line) - nseq += 1 - consumer(ids) - return {"nseq": nseq} diff --git a/spaces/ICML2022/YourTTS/README.md b/spaces/ICML2022/YourTTS/README.md deleted file mode 100644 index 334d7b7a700b3ef950a22b594a0200c3aa62f499..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/YourTTS/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: YourTTS -emoji: 🔥 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.1.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/backbone/position_encoding.py b/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/backbone/position_encoding.py deleted file mode 100644 index eac7e896bbe85a670824bfe8ef487d0535d5bd99..0000000000000000000000000000000000000000 --- a/spaces/IDEA-Research/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/backbone/position_encoding.py +++ /dev/null @@ -1,186 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. 
-# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# DINO -# Copyright (c) 2022 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Conditional DETR -# Copyright (c) 2021 Microsoft. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Copied from DETR (https://github.com/facebookresearch/detr) -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -# ------------------------------------------------------------------------ - -""" -Various positional encodings for the transformer. -""" -import math - -import torch -from torch import nn - -from groundingdino.util.misc import NestedTensor - - -class PositionEmbeddingSine(nn.Module): - """ - This is a more standard version of the position embedding, very similar to the one - used by the Attention is all you need paper, generalized to work on images. - """ - - def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None): - super().__init__() - self.num_pos_feats = num_pos_feats - self.temperature = temperature - self.normalize = normalize - if scale is not None and normalize is False: - raise ValueError("normalize should be True if scale is passed") - if scale is None: - scale = 2 * math.pi - self.scale = scale - - def forward(self, tensor_list: NestedTensor): - x = tensor_list.tensors - mask = tensor_list.mask - assert mask is not None - not_mask = ~mask - y_embed = not_mask.cumsum(1, dtype=torch.float32) - x_embed = not_mask.cumsum(2, dtype=torch.float32) - if self.normalize: - eps = 1e-6 - # if os.environ.get("SHILONG_AMP", None) == '1': - # eps = 1e-4 - # else: - # eps = 1e-6 - y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale - x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale - - dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device) - dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats) - - pos_x = x_embed[:, :, :, None] / dim_t - pos_y = y_embed[:, :, :, None] / dim_t - pos_x = torch.stack( - (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4 - ).flatten(3) - pos_y = torch.stack( - (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4 - ).flatten(3) - pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2) - return pos - - -class PositionEmbeddingSineHW(nn.Module): - """ - This is a more standard version of the position embedding, very similar to the one - used by the Attention is all you need paper, generalized to work on images. 
- """ - - def __init__( - self, num_pos_feats=64, temperatureH=10000, temperatureW=10000, normalize=False, scale=None - ): - super().__init__() - self.num_pos_feats = num_pos_feats - self.temperatureH = temperatureH - self.temperatureW = temperatureW - self.normalize = normalize - if scale is not None and normalize is False: - raise ValueError("normalize should be True if scale is passed") - if scale is None: - scale = 2 * math.pi - self.scale = scale - - def forward(self, tensor_list: NestedTensor): - x = tensor_list.tensors - mask = tensor_list.mask - assert mask is not None - not_mask = ~mask - y_embed = not_mask.cumsum(1, dtype=torch.float32) - x_embed = not_mask.cumsum(2, dtype=torch.float32) - - # import ipdb; ipdb.set_trace() - - if self.normalize: - eps = 1e-6 - y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale - x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale - - dim_tx = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device) - dim_tx = self.temperatureW ** (2 * (torch.div(dim_tx, 2, rounding_mode='floor')) / self.num_pos_feats) - pos_x = x_embed[:, :, :, None] / dim_tx - - dim_ty = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device) - dim_ty = self.temperatureH ** (2 * (torch.div(dim_ty, 2, rounding_mode='floor')) / self.num_pos_feats) - pos_y = y_embed[:, :, :, None] / dim_ty - - pos_x = torch.stack( - (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4 - ).flatten(3) - pos_y = torch.stack( - (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4 - ).flatten(3) - pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2) - - # import ipdb; ipdb.set_trace() - - return pos - - -class PositionEmbeddingLearned(nn.Module): - """ - Absolute pos embedding, learned. - """ - - def __init__(self, num_pos_feats=256): - super().__init__() - self.row_embed = nn.Embedding(50, num_pos_feats) - self.col_embed = nn.Embedding(50, num_pos_feats) - self.reset_parameters() - - def reset_parameters(self): - nn.init.uniform_(self.row_embed.weight) - nn.init.uniform_(self.col_embed.weight) - - def forward(self, tensor_list: NestedTensor): - x = tensor_list.tensors - h, w = x.shape[-2:] - i = torch.arange(w, device=x.device) - j = torch.arange(h, device=x.device) - x_emb = self.col_embed(i) - y_emb = self.row_embed(j) - pos = ( - torch.cat( - [ - x_emb.unsqueeze(0).repeat(h, 1, 1), - y_emb.unsqueeze(1).repeat(1, w, 1), - ], - dim=-1, - ) - .permute(2, 0, 1) - .unsqueeze(0) - .repeat(x.shape[0], 1, 1, 1) - ) - return pos - - -def build_position_encoding(args): - N_steps = args.hidden_dim // 2 - if args.position_embedding in ("v2", "sine"): - # TODO find a better way of exposing other arguments - position_embedding = PositionEmbeddingSineHW( - N_steps, - temperatureH=args.pe_temperatureH, - temperatureW=args.pe_temperatureW, - normalize=True, - ) - elif args.position_embedding in ("v3", "learned"): - position_embedding = PositionEmbeddingLearned(N_steps) - else: - raise ValueError(f"not supported {args.position_embedding}") - - return position_embedding diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/src/components/helpers/scaleHelper.tsx b/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/src/components/helpers/scaleHelper.tsx deleted file mode 100644 index 815ceaac472a18915b33e78c70231b88e5dd2eee..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/src/components/helpers/scaleHelper.tsx +++ /dev/null @@ -1,18 +0,0 
@@ -// Copyright (c) Meta Platforms, Inc. and affiliates. -// All rights reserved. - -// This source code is licensed under the license found in the -// LICENSE file in the root directory of this source tree. - - -// Helper function for handling image scaling needed for SAM -const handleImageScale = (image: HTMLImageElement) => { - // Input images to SAM must be resized so the longest side is 1024 - const LONG_SIDE_LENGTH = 1024; - let w = image.naturalWidth; - let h = image.naturalHeight; - const samScale = LONG_SIDE_LENGTH / Math.max(h, w); - return { height: h, width: w, samScale }; -}; - -export { handleImageScale }; diff --git a/spaces/JDWebProgrammer/space-weather/README.md b/spaces/JDWebProgrammer/space-weather/README.md deleted file mode 100644 index cd3a2385a709844d2f2fcb6bd80596abbad99ea1..0000000000000000000000000000000000000000 --- a/spaces/JDWebProgrammer/space-weather/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Space Weather -emoji: 📈 -colorFrom: indigo -colorTo: purple -sdk: gradio -sdk_version: 4.1.2 -app_file: app.py -pinned: false -license: mit ---- - -Machine learning functions applied to GOES XRS data diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/utils/dummy_pt_objects.py b/spaces/Jackflack09/diffuse-custom/diffusers/utils/dummy_pt_objects.py deleted file mode 100644 index 23afb51cf30c0273507d296a47e96da087ea5f2d..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/utils/dummy_pt_objects.py +++ /dev/null @@ -1,527 +0,0 @@ -# This file is autogenerated by the command `make fix-copies`, do not edit. -# flake8: noqa - -from ..utils import DummyObject, requires_backends - - -class ModelMixin(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class AutoencoderKL(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class Transformer2DModel(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class UNet1DModel(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class UNet2DConditionModel(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class UNet2DModel(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - 
@classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class VQModel(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -def get_constant_schedule(*args, **kwargs): - requires_backends(get_constant_schedule, ["torch"]) - - -def get_constant_schedule_with_warmup(*args, **kwargs): - requires_backends(get_constant_schedule_with_warmup, ["torch"]) - - -def get_cosine_schedule_with_warmup(*args, **kwargs): - requires_backends(get_cosine_schedule_with_warmup, ["torch"]) - - -def get_cosine_with_hard_restarts_schedule_with_warmup(*args, **kwargs): - requires_backends(get_cosine_with_hard_restarts_schedule_with_warmup, ["torch"]) - - -def get_linear_schedule_with_warmup(*args, **kwargs): - requires_backends(get_linear_schedule_with_warmup, ["torch"]) - - -def get_polynomial_decay_schedule_with_warmup(*args, **kwargs): - requires_backends(get_polynomial_decay_schedule_with_warmup, ["torch"]) - - -def get_scheduler(*args, **kwargs): - requires_backends(get_scheduler, ["torch"]) - - -class DiffusionPipeline(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class DanceDiffusionPipeline(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class DDIMPipeline(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class DDPMPipeline(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class KarrasVePipeline(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class LDMPipeline(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class LDMSuperResolutionPipeline(metaclass=DummyObject): - _backends = ["torch"] - - 
def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class PNDMPipeline(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class RePaintPipeline(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class ScoreSdeVePipeline(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class DDIMScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class DDPMScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class DPMSolverMultistepScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class EulerAncestralDiscreteScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class EulerDiscreteScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class HeunDiscreteScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class IPNDMScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, 
["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class KarrasVeScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class KDPM2AncestralDiscreteScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class KDPM2DiscreteScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class PNDMScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class RePaintScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class SchedulerMixin(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class ScoreSdeVeScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class VQDiffusionScheduler(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - -class EMAModel(metaclass=DummyObject): - _backends = ["torch"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch"]) diff --git a/spaces/JamesStratford/Identify-Pest-Predators-Demo/model.py b/spaces/JamesStratford/Identify-Pest-Predators-Demo/model.py deleted file mode 100644 index 
c687d06b14798be8e94cf6de2e8bd7117a30fbd7..0000000000000000000000000000000000000000 --- a/spaces/JamesStratford/Identify-Pest-Predators-Demo/model.py +++ /dev/null @@ -1,22 +0,0 @@ -import pytorch_lightning as pl -from transformers import AutoModelForImageClassification, ViTConfig, AutoConfig -import torch - -class ImageClassificationModel(pl.LightningModule): - def __init__(self, HF_MODEL_NAME): - super().__init__() - config = AutoConfig.from_pretrained(HF_MODEL_NAME, num_labels=6) - self.model = AutoModelForImageClassification.from_config(config=config) - self.model.config = config - self.model.config.id2label = { - 0: "None", - 1: "Cat", - 2: "Mouse", - 3: "Possum", - 4: "Rat", - 5: "Stoat", - } - self.model.config.label2id = {v: k for k, v in self.model.config.id2label.items()} - - def forward(self, x): - return self.model(x) \ No newline at end of file diff --git a/spaces/Jasonyoyo/CodeFormer/CodeFormer/facelib/utils/__init__.py b/spaces/Jasonyoyo/CodeFormer/CodeFormer/facelib/utils/__init__.py deleted file mode 100644 index f03b1c2bafcd7759cb7e8722a0c6715f201a46dc..0000000000000000000000000000000000000000 --- a/spaces/Jasonyoyo/CodeFormer/CodeFormer/facelib/utils/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -from .face_utils import align_crop_face_landmarks, compute_increased_bbox, get_valid_bboxes, paste_face_back -from .misc import img2tensor, load_file_from_url, download_pretrained_models, scandir - -__all__ = [ - 'align_crop_face_landmarks', 'compute_increased_bbox', 'get_valid_bboxes', 'load_file_from_url', - 'download_pretrained_models', 'paste_face_back', 'img2tensor', 'scandir' -] diff --git a/spaces/JohanDL/GPT4Readability/app.py b/spaces/JohanDL/GPT4Readability/app.py deleted file mode 100644 index 544021798b44c990e75d4c13510bef3c34d69a92..0000000000000000000000000000000000000000 --- a/spaces/JohanDL/GPT4Readability/app.py +++ /dev/null @@ -1,93 +0,0 @@ -import gradio as gr -import os -import subprocess -import shutil -import openai - -css_style = """ -.gradio-container { - font-family: "IBM Plex Mono"; -} -""" - -def process_repository(url, model): - - - # Split the URL to get the repo name - repo_name = url.split('/')[-1] - if repo_name.endswith('.git'): - repo_name = repo_name[:-4] - - # Change permissions - subprocess.run(['chmod', 'u+w', '.']) - - # Clone the repo - subprocess.run(['git', 'clone', url], check=True) - - try: - # Change directory to the cloned repo - os.chdir(repo_name) - - # Run your package command - subprocess.run(['gpt4readability', '.', '--function', 'readme','--include-md', 'false', '--model', model]) - - # Open the README.md file and return its contents - with open('README.md', 'r') as readme_file: - readme_contents = readme_file.read() - - return readme_contents - finally: - # Change back to the original directory - os.chdir('..') - - # Delete the repo directory - if os.path.exists(repo_name): - shutil.rmtree(repo_name) - -def generate_repo(url, api_key, model): - if api_key: - os.environ['OPENAI_API_KEY'] = api_key.strip() - # if model == 'gpt-4': - # try: - # response = openai.Completion.create( - # model="gpt-4", # or whatever the exact model ID is - # prompt="test", - # max_tokens=5 - # ) - # print("Access to GPT-4 confirmed!") - # except: - # return "The API key either does not have access to GPT-4 or is not valid." 
- return process_repository(url, model) - else: - return "Please add a valid OpenAI API Key (you can get them [here](https://platform.openai.com/account/api-keys))" - -with gr.Blocks(css=css_style) as demo: - gr.Markdown(f""" - # Hello from GPT4Readability (v0.1.3) - - *Project by Dennis Loevlie ([@DennisLoevlie](https://twitter.com/DennisLoevlie))* - [![License Badge](https://img.shields.io/github/license/loevlie/GPT4Readability)](https://github.com/loevlie/GPT4Readability/blob/main/LICENSE) - - Welcome to GPT4Readability, a tool designed to help you generate README.md files and suggest improvements for your code repositories. - - - You can find the source code at [GPT4Readability](https://github.com/loevlie/GPT4Readability). - - It's making use of the [langchain](https://github.com/hwchase17/langchain) library. - - ## Here's how to get started: - 1. Please enter your API Key ([Need more information?](https://platform.openai.com/account/api-keys)) - 2. Provide the GitHub Repository URL that you'd like to analyze - 3. Select a model (Please note, the gpt-4 API isn't available to all as of July 2023) - 4. Click to generate a README or suggestions markdown file - """) - - openai_api_key = gr.Textbox( - label="OpenAI API Key", placeholder="sk-...", type="password") - url = gr.Textbox(label="GitHub Repository URL") - model = gr.Dropdown(["gpt-3.5-turbo", "gpt-4"], type="value", label='Model Type') - output = gr.Markdown(label="README.md") - btn = gr.Button("Generate README.md") - btn.click(fn=generate_repo, inputs=[url, openai_api_key, model], outputs=[output], api_name="Generate README.md") - - -demo.queue(concurrency_count=20) -demo.launch(share=False) \ No newline at end of file diff --git a/spaces/Jonni/03-Streamlit-Vido_ASR-NLP/README.md b/spaces/Jonni/03-Streamlit-Vido_ASR-NLP/README.md deleted file mode 100644 index ec01dba379092a7b7504b4abfbc35a65e588967a..0000000000000000000000000000000000000000 --- a/spaces/Jonni/03-Streamlit-Vido_ASR-NLP/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 03-Streamlit-Vido ASR-NLP -emoji: 🐢 -colorFrom: yellow -colorTo: indigo -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Joom/Front-end-code-generation-from-images/compiler/Utils.py b/spaces/Joom/Front-end-code-generation-from-images/compiler/Utils.py deleted file mode 100644 index d84bae6559c8e752d4c034663cae22dd7b631952..0000000000000000000000000000000000000000 --- a/spaces/Joom/Front-end-code-generation-from-images/compiler/Utils.py +++ /dev/null @@ -1,51 +0,0 @@ -__author__ = 'Taneem Jan, taneemishere.github.io' - -import string -import random - - -class Utils: - @staticmethod - def get_random_text(length_text=10, space_number=1, with_upper_case=True): - results = [] - while len(results) < length_text: - char = random.choice(string.ascii_letters[:26]) - results.append(char) - if with_upper_case: - results[0] = results[0].upper() - - current_spaces = [] - while len(current_spaces) < space_number: - space_pos = random.randint(2, length_text - 3) - if space_pos in current_spaces: - break - results[space_pos] = " " - if with_upper_case: - results[space_pos + 1] = results[space_pos - 1].upper() - - current_spaces.append(space_pos) - - return ''.join(results) - - @staticmethod - def get_ios_id(length=10): - results = [] - - while len(results) < length: - char = random.choice(string.digits + string.ascii_letters) - results.append(char) 
- - results[3] = "-" - results[6] = "-" - - return ''.join(results) - - @staticmethod - def get_android_id(length=10): - results = [] - - while len(results) < length: - char = random.choice(string.ascii_letters) - results.append(char) - - return ''.join(results) diff --git a/spaces/KPCGD/bingo/src/components/tailwind-indicator.tsx b/spaces/KPCGD/bingo/src/components/tailwind-indicator.tsx deleted file mode 100644 index f2a1291213dd67055fcebe67fab574c8441338df..0000000000000000000000000000000000000000 --- a/spaces/KPCGD/bingo/src/components/tailwind-indicator.tsx +++ /dev/null @@ -1,14 +0,0 @@ -export function TailwindIndicator() { - if (process.env.NODE_ENV === 'production') return null - - return ( -
-    <div className="fixed bottom-1 left-1 z-50 flex h-6 w-6 items-center justify-center rounded-full bg-gray-800 p-3 font-mono text-xs text-white">
-      <div className="block sm:hidden">xs</div>
-      <div className="hidden sm:block md:hidden">sm</div>
-      <div className="hidden md:block lg:hidden">md</div>
-      <div className="hidden lg:block xl:hidden">lg</div>
-      <div className="hidden xl:block 2xl:hidden">xl</div>
-      <div className="hidden 2xl:block">2xl</div>
-    </div>
- ) -} diff --git a/spaces/Kamtera/persian-tts-mimic3/README.md b/spaces/Kamtera/persian-tts-mimic3/README.md deleted file mode 100644 index 0f3341bdbfdf5f8b21bf353a27db1068b6c197ad..0000000000000000000000000000000000000000 --- a/spaces/Kamtera/persian-tts-mimic3/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Persian Tts Mimic3 -emoji: 🌍 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/KevlarVK/content_summarizer/README.md b/spaces/KevlarVK/content_summarizer/README.md deleted file mode 100644 index e0bf4f6d01a9c5e105d3203cc006f1c81266e664..0000000000000000000000000000000000000000 --- a/spaces/KevlarVK/content_summarizer/README.md +++ /dev/null @@ -1,30 +0,0 @@ ---- -title: Content Summarizer -emoji: 🔥 -colorFrom: purple -colorTo: green -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -### Content Summarizer - -The Content Summarizer is a project that can generate summaries for various types of content including text, URLs, audio, video, and YouTube. It utilizes the transformers library and leverages the BART-large-CNN, T5-small and Whisper-tiny.en models to provide effective summarization. - -It contains two options for summarization: - - Overall summary - - Auto-Chapters summary - -#### Overall summary -The overall summary is generated using BART-large-CNN with chunk split algorithm. - -#### Auto Chapters summary -In this type, the text content is split using clustering techniques and chunk split algorithm and uses BART-large-CNN and T5-small for summarization which gives blocks of summary with headings for each. - -To run the app, install the packages from requirements.txt and execute the command `streamlit run app.py` from the root of this project. 
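-
-For reference, here is a minimal sketch of the "chunk split + BART-large-CNN" idea described above. It is not the app's actual implementation (see the repository code for that); the chunk size and generation lengths are illustrative:
-
-```python
-from transformers import pipeline
-
-# Split long text into roughly fixed-size word chunks, summarize each chunk with
-# BART-large-CNN, and join the partial summaries into one overall summary.
-summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
-
-def summarize_long_text(text: str, chunk_words: int = 700) -> str:
-    words = text.split()
-    chunks = [" ".join(words[i:i + chunk_words]) for i in range(0, len(words), chunk_words)]
-    partial = [summarizer(chunk, max_length=150, min_length=40, do_sample=False)[0]["summary_text"]
-               for chunk in chunks]
-    return " ".join(partial)
-```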
- -This repository has also been added as a space in huggingface: https://huggingface.co/spaces/KevlarVK/content_summarizer diff --git a/spaces/Kirokowa/hakurei-waifu-diffusion/app.py b/spaces/Kirokowa/hakurei-waifu-diffusion/app.py deleted file mode 100644 index ccef706bf3035fe470bf6a4f5bd701b18bf59133..0000000000000000000000000000000000000000 --- a/spaces/Kirokowa/hakurei-waifu-diffusion/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/hakurei/waifu-diffusion").launch() \ No newline at end of file diff --git a/spaces/KuraiYuki/openai-reverse-proxy/server.js b/spaces/KuraiYuki/openai-reverse-proxy/server.js deleted file mode 100644 index 04a48b7a429c4d0ad0b772ba1edf503e349eda21..0000000000000000000000000000000000000000 --- a/spaces/KuraiYuki/openai-reverse-proxy/server.js +++ /dev/null @@ -1,32 +0,0 @@ -const express = require('express'); -const proxy = require('express-http-proxy'); -const app = express(); -const targetUrl = 'https://api.openai.com'; -const openaiKey = process.env.OPENAI_KEY -const port = 7860; -const baseUrl = getExternalUrl(process.env.SPACE_ID); - -app.use('/api', proxy(targetUrl, { - proxyReqOptDecorator: (proxyReqOpts, srcReq) => { - // Modify the request headers if necessary - proxyReqOpts.headers['Authorization'] = 'Bearer '+openaiKey; - return proxyReqOpts; - }, -})); - -app.get("/", (req, res) => { - res.send(`This is your OpenAI Reverse Proxy URL: ${baseUrl}`); -}); - -function getExternalUrl(spaceId) { - try { - const [username, spacename] = spaceId.split("/"); - return `https://${username}-${spacename.replace(/_/g, "-")}.hf.space/api/v1`; - } catch (e) { - return ""; - } -} - -app.listen(port, () => { - console.log(`Reverse proxy server running on ${baseUrl}`); -}); \ No newline at end of file diff --git a/spaces/KyanChen/FunSR/models/baselines/OverNet.py b/spaces/KyanChen/FunSR/models/baselines/OverNet.py deleted file mode 100644 index 92bec65fbc7e1b6ded88c34bacdd35b194e20857..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/FunSR/models/baselines/OverNet.py +++ /dev/null @@ -1,239 +0,0 @@ -import torch.nn as nn -import torch -import torch.nn.functional as F -import math -from models import register - - -class MeanShift(nn.Module): - def __init__(self, mean_rgb, sub): - super(MeanShift, self).__init__() - - sign = -1 if sub else 1 - r = mean_rgb[0] * sign - g = mean_rgb[1] * sign - b = mean_rgb[2] * sign - - self.shifter = nn.Conv2d(3, 3, 1, 1, 0) - self.shifter.weight.data = torch.eye(3).view(3, 3, 1, 1) - self.shifter.bias.data = torch.Tensor([r, g, b]) - - # Freeze the mean shift layer - for params in self.shifter.parameters(): - params.requires_grad = False - - def forward(self, x): - x = self.shifter(x) - return x - -class Scale(nn.Module): - - def __init__(self, init_value=1e-3): - super(Scale, self).__init__() - self.scale = nn.Parameter(torch.FloatTensor([init_value])) - - def forward(self, input): - return input * self.scale - -class SE(nn.Module): - def __init__(self, channel, reduction=16): - super(SE, self).__init__() - - self.avg_pool = nn.AdaptiveAvgPool2d(1) - - self.conv = nn.Sequential( - nn.Conv2d(channel, channel // reduction, 1, padding=0, bias=True), - nn.ReLU(inplace=True), - nn.Conv2d(channel // reduction, channel, 1, padding=0, bias=True), - nn.Sigmoid() - ) - - def forward(self, x): - y = self.avg_pool(x) - y = self.conv(y) - return x * y - - -class ResidualBlock(nn.Module): - def __init__(self, - wn, in_channels, out_channels): - super(ResidualBlock, self).__init__() - 
self.res_scale = Scale(1) - self.x_scale = Scale(1) - self.SE = SE(64, reduction=16) - body = [] - expand = 6 - linear = 0.8 - body.append( - wn(nn.Conv2d(64, 64*expand, 1, padding=1//2))) - body.append(nn.ReLU(inplace=True)) - body.append( - wn(nn.Conv2d(64*expand, int(64*linear), 1, padding=1//2))) - body.append( - wn(nn.Conv2d(int(64*linear), 64, 3, padding=3//2))) - self.body = nn.Sequential(*body) - - - def forward(self, x): - - out = self.body(x) - out = self.SE(out) - out = self.res_scale(out) + self.x_scale(x) - return out - - - -class BasicConv2d(nn.Module): - def __init__(self, wn, in_planes, out_planes, kernel_size, stride, padding=0): - super(BasicConv2d, self).__init__() - self.conv = wn(nn.Conv2d(in_planes, out_planes, - kernel_size=kernel_size, stride=stride, - padding=padding, bias=True)) - - self.LR = nn.ReLU(inplace=True) - - def forward(self, x): - x = self.conv(x) - x = self.LR(x) - return x - - - -class UpsampleBlock(nn.Module): - def __init__(self, n_channels, upscale, wn, group=1): - super(UpsampleBlock, self).__init__() - - self.up = _UpsampleBlock(n_channels, upscale=upscale, wn=wn, group=group) - - - def forward(self, x, upscale): - return self.up(x) - - -class _UpsampleBlock(nn.Module): - def __init__(self, n_channels, upscale, wn, group=1): - super(_UpsampleBlock, self).__init__() - - modules = [] - - if upscale == 2 or upscale == 4 or upscale == 8: - for _ in range(int(math.log(upscale, 2))): - modules += [wn(nn.Conv2d(n_channels, 4 * n_channels, 3, 1, 1, groups=group)), - nn.ReLU(inplace=True)] - modules += [nn.PixelShuffle(2)] - - elif upscale == 3: - modules += [wn(nn.Conv2d(n_channels, 9 * n_channels, 3, 1, 1, groups=group)), - nn.ReLU(inplace=True)] - modules += [nn.PixelShuffle(3)] - - elif upscale == 5: - modules += [wn(nn.Conv2d(n_channels, 25 * n_channels, 3, 1, 1, groups=group)), - nn.ReLU(inplace=True)] - modules += [nn.PixelShuffle(5)] - - self.body = nn.Sequential(*modules) - - def forward(self, x): - out = self.body(x) - return out - -#Local Dense Groups (LDGs) -class LDGs(nn.Module): - def __init__(self, - in_channels, out_channels, wn, - group=1): - super(LDGs, self).__init__() - - self.RB1 = ResidualBlock(wn, in_channels, out_channels) - self.RB2 = ResidualBlock(wn, in_channels, out_channels) - self.RB3 = ResidualBlock(wn, in_channels, out_channels) - - self.reduction1 = BasicConv2d(wn, in_channels*2, out_channels, 1, 1, 0) - self.reduction2 = BasicConv2d(wn, in_channels*3, out_channels, 1, 1, 0) - self.reduction3 = BasicConv2d(wn, in_channels*4, out_channels, 1, 1, 0) - - def forward(self, x): - c0 = o0 = x - - RB1 = self.RB1(o0) - concat1 = torch.cat([c0, RB1], dim=1) - out1 = self.reduction1(concat1) - - RB2 = self.RB2(out1) - concat2 = torch.cat([concat1, RB2], dim=1) - out2 = self.reduction2(concat2) - - RB3 = self.RB3(out2) - concat3 = torch.cat([concat2, RB3], dim=1) - out3 = self.reduction3(concat3) - - return out3 - - -@register('overnet') -class OverNet(nn.Module): - - def __init__(self, upscale=5, group=4, *args, **kwargs): - super(OverNet, self).__init__() - wn = lambda x: torch.nn.utils.weight_norm(x) - self.upscale = upscale - - # self.sub_mean = MeanShift((0.4488, 0.4371, 0.4040), sub=True) - # self.add_mean = MeanShift((0.4488, 0.4371, 0.4040), sub=False) - - self.entry_1 = wn(nn.Conv2d(3, 64, 3, 1, 1)) - - self.GDG1 = LDGs(64, 64, wn=wn) - self.GDG2 = LDGs(64, 64, wn=wn) - self.GDG3 = LDGs(64, 64, wn=wn) - - self.reduction1 = BasicConv2d(wn, 64*2, 64, 1, 1, 0) - self.reduction2 = BasicConv2d(wn, 64*3, 64, 1, 1, 0) - 
self.reduction3 = BasicConv2d(wn, 64*4, 64, 1, 1, 0) - - self.reduction = BasicConv2d(wn, 64*3, 64, 1, 1, 0) - - self.Global_skip = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(64, 64, 1, 1, 0), nn.ReLU(inplace=True)) - - self.upsample = UpsampleBlock(64, upscale=upscale, wn=wn, group=group) - - self.exit1 = wn(nn.Conv2d(64, 3, 3, 1, 1)) - - self.res_scale = Scale(1) - self.x_scale = Scale(1) - - def forward(self, x, out_size): - ori_h, ori_w = x.shape[-2:] - target_h, target_w = out_size - # x = self.sub_mean(x) - skip = x - - x = self.entry_1(x) - - c0 = o0 = x - - GDG1 = self.GDG1(o0) - concat1 = torch.cat([c0, GDG1], dim=1) - out1 = self.reduction1(concat1) - - GDG2 = self.GDG2(out1) - concat2 = torch.cat([concat1, GDG2], dim=1) - out2 = self.reduction2(concat2) - - GDG3 = self.GDG3(out2) - concat3 = torch.cat([concat2, GDG3], dim=1) - out3 = self.reduction3(concat3) - - output = self.reduction(torch.cat((out1, out2, out3),1)) - output = self.res_scale(output) + self.x_scale(self.Global_skip(x)) - - output = self.upsample(output, upscale=self.upscale) - - output = F.interpolate(output, out_size, mode='bicubic', align_corners=False) - skip = F.interpolate(skip, out_size, mode='bicubic', align_corners=False) - - output = self.exit1(output) + skip - # output = self.add_mean(output) - - return output diff --git a/spaces/KyanChen/FunSR/models/models.py b/spaces/KyanChen/FunSR/models/models.py deleted file mode 100644 index 0625a2d2fd2d8ee147930ec3cda0a643c895d8aa..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/FunSR/models/models.py +++ /dev/null @@ -1,23 +0,0 @@ -import copy - - -models = {} - - -def register(name): - def decorator(cls): - models[name] = cls - return cls - return decorator - - -def make(model_spec, args=None, load_sd=False): - if args is not None: - model_args = copy.deepcopy(model_spec['args']) - model_args.update(args) - else: - model_args = model_spec['args'] - model = models[model_spec['name']](**model_args) - if load_sd: - model.load_state_dict(model_spec['sd'], strict=True) - return model diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/point_rend.py b/spaces/KyanChen/RSPrompter/mmdet/models/detectors/point_rend.py deleted file mode 100644 index 5062ac0c945e79bd53e66e1642aec51113475cad..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/detectors/point_rend.py +++ /dev/null @@ -1,35 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from mmengine.config import ConfigDict - -from mmdet.registry import MODELS -from mmdet.utils import OptConfigType, OptMultiConfig -from .two_stage import TwoStageDetector - - -@MODELS.register_module() -class PointRend(TwoStageDetector): - """PointRend: Image Segmentation as Rendering - - This detector is the implementation of - `PointRend `_. 
- - """ - - def __init__(self, - backbone: ConfigDict, - rpn_head: ConfigDict, - roi_head: ConfigDict, - train_cfg: ConfigDict, - test_cfg: ConfigDict, - neck: OptConfigType = None, - data_preprocessor: OptConfigType = None, - init_cfg: OptMultiConfig = None) -> None: - super().__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - init_cfg=init_cfg, - data_preprocessor=data_preprocessor) diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/assigners/__init__.py b/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/assigners/__init__.py deleted file mode 100644 index 90ae8f8e76b06b482ecaa200e02ff482ae4ff4a5..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/task_modules/assigners/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .approx_max_iou_assigner import ApproxMaxIoUAssigner -from .assign_result import AssignResult -from .atss_assigner import ATSSAssigner -from .base_assigner import BaseAssigner -from .center_region_assigner import CenterRegionAssigner -from .dynamic_soft_label_assigner import DynamicSoftLabelAssigner -from .grid_assigner import GridAssigner -from .hungarian_assigner import HungarianAssigner -from .iou2d_calculator import BboxOverlaps2D -from .match_cost import (BBoxL1Cost, ClassificationCost, CrossEntropyLossCost, - DiceCost, FocalLossCost, IoUCost) -from .max_iou_assigner import MaxIoUAssigner -from .multi_instance_assigner import MultiInstanceAssigner -from .point_assigner import PointAssigner -from .region_assigner import RegionAssigner -from .sim_ota_assigner import SimOTAAssigner -from .task_aligned_assigner import TaskAlignedAssigner -from .uniform_assigner import UniformAssigner - -__all__ = [ - 'BaseAssigner', 'MaxIoUAssigner', 'ApproxMaxIoUAssigner', 'AssignResult', - 'PointAssigner', 'ATSSAssigner', 'CenterRegionAssigner', 'GridAssigner', - 'HungarianAssigner', 'RegionAssigner', 'UniformAssigner', 'SimOTAAssigner', - 'TaskAlignedAssigner', 'BBoxL1Cost', 'ClassificationCost', - 'CrossEntropyLossCost', 'DiceCost', 'FocalLossCost', 'IoUCost', - 'BboxOverlaps2D', 'DynamicSoftLabelAssigner', 'MultiInstanceAssigner' -] diff --git a/spaces/LEL-A/translated-german-alpaca-validation/README.md b/spaces/LEL-A/translated-german-alpaca-validation/README.md deleted file mode 100644 index db8e9c2c3e3ae9123b207ed1e1ddf02bc3d2e17b..0000000000000000000000000000000000000000 --- a/spaces/LEL-A/translated-german-alpaca-validation/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Alpaca Dataset Validation with Argilla -emoji: 🦙 🏷️ -colorFrom: purple -colorTo: red -sdk: docker -app_port: 6900 -fullWidth: true -tags: -- argilla -- somosnlp -duplicated_from: dvilasuero/alpaca-cleaned-de ---- diff --git a/spaces/Lamai/LAMAIGPT/tests/local_cache_test.py b/spaces/Lamai/LAMAIGPT/tests/local_cache_test.py deleted file mode 100644 index bb10862656bb500f319ac231ff5bd5438d6fe7e2..0000000000000000000000000000000000000000 --- a/spaces/Lamai/LAMAIGPT/tests/local_cache_test.py +++ /dev/null @@ -1,67 +0,0 @@ -# sourcery skip: snake-case-functions -"""Tests for LocalCache class""" -import os -import sys -import unittest - -import pytest - -from autogpt.memory.local import LocalCache - - -def mock_config() -> dict: - """Mock the Config class""" - return type( - "MockConfig", - (object,), - { - "debug_mode": False, - "continuous_mode": False, - "speak_mode": False, - "memory_index": "auto-gpt", - }, - ) - - 
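A note on the `mock_config` pattern just above: the same throwaway config object can be written with `types.SimpleNamespace`, which avoids the dynamic `type(...)` call (an equivalent sketch, not the project's code):

```python
# Equivalent stand-in config using SimpleNamespace: plain attribute access,
# no dynamically created class.
from types import SimpleNamespace

def mock_config() -> SimpleNamespace:
    return SimpleNamespace(
        debug_mode=False,
        continuous_mode=False,
        speak_mode=False,
        memory_index="auto-gpt",
    )
```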
-@pytest.mark.integration_test -class TestLocalCache(unittest.TestCase): - """Tests for LocalCache class""" - - def setUp(self) -> None: - """Set up the test environment""" - self.cfg = mock_config() - self.cache = LocalCache(self.cfg) - - def test_add(self) -> None: - """Test adding a text to the cache""" - text = "Sample text" - self.cache.add(text) - self.assertIn(text, self.cache.data.texts) - - def test_clear(self) -> None: - """Test clearing the cache""" - self.cache.clear() - self.assertEqual(self.cache.data.texts, []) - - def test_get(self) -> None: - """Test getting a text from the cache""" - text = "Sample text" - self.cache.add(text) - result = self.cache.get(text) - self.assertEqual(result, [text]) - - def test_get_relevant(self) -> None: - """Test getting relevant texts from the cache""" - text1 = "Sample text 1" - text2 = "Sample text 2" - self.cache.add(text1) - self.cache.add(text2) - result = self.cache.get_relevant(text1, 1) - self.assertEqual(result, [text1]) - - def test_get_stats(self) -> None: - """Test getting the cache stats""" - text = "Sample text" - self.cache.add(text) - stats = self.cache.get_stats() - self.assertEqual(stats, (4, self.cache.data.embeddings.shape)) diff --git a/spaces/Lippppxy/AiAnimeVoice/text/__init__.py b/spaces/Lippppxy/AiAnimeVoice/text/__init__.py deleted file mode 100644 index 663c4b6416affb53c9dc56dddbc8b2b65d4bf518..0000000000000000000000000000000000000000 --- a/spaces/Lippppxy/AiAnimeVoice/text/__init__.py +++ /dev/null @@ -1,57 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners -from text.symbols import symbols - - -# Mappings from symbol to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence, clean_text - - -def cleaned_text_to_sequence(cleaned_text): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. 
- Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [_symbol_to_id[symbol] for symbol in cleaned_text if symbol in _symbol_to_id.keys()] - return sequence - - -def sequence_to_text(sequence): - '''Converts a sequence of IDs back to a string''' - result = '' - for symbol_id in sequence: - s = _id_to_symbol[symbol_id] - result += s - return result - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/Liu-LAB/GPT-academic/crazy_functions/test_project/python/dqn/dqn.py b/spaces/Liu-LAB/GPT-academic/crazy_functions/test_project/python/dqn/dqn.py deleted file mode 100644 index 6cea64d39baa7ff4c1e549869aaa4b0ae17779a9..0000000000000000000000000000000000000000 --- a/spaces/Liu-LAB/GPT-academic/crazy_functions/test_project/python/dqn/dqn.py +++ /dev/null @@ -1,245 +0,0 @@ -from typing import Any, Dict, List, Optional, Tuple, Type, Union - -import gym -import numpy as np -import torch as th -from torch.nn import functional as F - -from stable_baselines3.common import logger -from stable_baselines3.common.off_policy_algorithm import OffPolicyAlgorithm -from stable_baselines3.common.preprocessing import maybe_transpose -from stable_baselines3.common.type_aliases import GymEnv, MaybeCallback, Schedule -from stable_baselines3.common.utils import get_linear_fn, is_vectorized_observation, polyak_update -from stable_baselines3.dqn.policies import DQNPolicy - - -class DQN(OffPolicyAlgorithm): - """ - Deep Q-Network (DQN) - - Paper: https://arxiv.org/abs/1312.5602, https://www.nature.com/articles/nature14236 - Default hyperparameters are taken from the nature paper, - except for the optimizer and learning rate that were taken from Stable Baselines defaults. - - :param policy: The policy model to use (MlpPolicy, CnnPolicy, ...) - :param env: The environment to learn from (if registered in Gym, can be str) - :param learning_rate: The learning rate, it can be a function - of the current progress remaining (from 1 to 0) - :param buffer_size: size of the replay buffer - :param learning_starts: how many steps of the model to collect transitions for before learning starts - :param batch_size: Minibatch size for each gradient update - :param tau: the soft update coefficient ("Polyak update", between 0 and 1) default 1 for hard update - :param gamma: the discount factor - :param train_freq: Update the model every ``train_freq`` steps. Alternatively pass a tuple of frequency and unit - like ``(5, "step")`` or ``(2, "episode")``. - :param gradient_steps: How many gradient steps to do after each rollout (see ``train_freq``) - Set to ``-1`` means to do as many gradient steps as steps done in the environment - during the rollout. - :param optimize_memory_usage: Enable a memory efficient variant of the replay buffer - at a cost of more complexity. - See https://github.com/DLR-RM/stable-baselines3/issues/37#issuecomment-637501195 - :param target_update_interval: update the target network every ``target_update_interval`` - environment steps. 
- :param exploration_fraction: fraction of entire training period over which the exploration rate is reduced - :param exploration_initial_eps: initial value of random action probability - :param exploration_final_eps: final value of random action probability - :param max_grad_norm: The maximum value for the gradient clipping - :param tensorboard_log: the log location for tensorboard (if None, no logging) - :param create_eval_env: Whether to create a second environment that will be - used for evaluating the agent periodically. (Only available when passing string for the environment) - :param policy_kwargs: additional arguments to be passed to the policy on creation - :param verbose: the verbosity level: 0 no output, 1 info, 2 debug - :param seed: Seed for the pseudo random generators - :param device: Device (cpu, cuda, ...) on which the code should be run. - Setting it to auto, the code will be run on the GPU if possible. - :param _init_setup_model: Whether or not to build the network at the creation of the instance - """ - - def __init__( - self, - policy: Union[str, Type[DQNPolicy]], - env: Union[GymEnv, str], - learning_rate: Union[float, Schedule] = 1e-4, - buffer_size: int = 1000000, - learning_starts: int = 50000, - batch_size: Optional[int] = 32, - tau: float = 1.0, - gamma: float = 0.99, - train_freq: Union[int, Tuple[int, str]] = 4, - gradient_steps: int = 1, - optimize_memory_usage: bool = False, - target_update_interval: int = 10000, - exploration_fraction: float = 0.1, - exploration_initial_eps: float = 1.0, - exploration_final_eps: float = 0.05, - max_grad_norm: float = 10, - tensorboard_log: Optional[str] = None, - create_eval_env: bool = False, - policy_kwargs: Optional[Dict[str, Any]] = None, - verbose: int = 0, - seed: Optional[int] = None, - device: Union[th.device, str] = "auto", - _init_setup_model: bool = True, - ): - - super(DQN, self).__init__( - policy, - env, - DQNPolicy, - learning_rate, - buffer_size, - learning_starts, - batch_size, - tau, - gamma, - train_freq, - gradient_steps, - action_noise=None, # No action noise - policy_kwargs=policy_kwargs, - tensorboard_log=tensorboard_log, - verbose=verbose, - device=device, - create_eval_env=create_eval_env, - seed=seed, - sde_support=False, - optimize_memory_usage=optimize_memory_usage, - supported_action_spaces=(gym.spaces.Discrete,), - ) - - self.exploration_initial_eps = exploration_initial_eps - self.exploration_final_eps = exploration_final_eps - self.exploration_fraction = exploration_fraction - self.target_update_interval = target_update_interval - self.max_grad_norm = max_grad_norm - # "epsilon" for the epsilon-greedy exploration - self.exploration_rate = 0.0 - # Linear schedule will be defined in `_setup_model()` - self.exploration_schedule = None - self.q_net, self.q_net_target = None, None - - if _init_setup_model: - self._setup_model() - - def _setup_model(self) -> None: - super(DQN, self)._setup_model() - self._create_aliases() - self.exploration_schedule = get_linear_fn( - self.exploration_initial_eps, self.exploration_final_eps, self.exploration_fraction - ) - - def _create_aliases(self) -> None: - self.q_net = self.policy.q_net - self.q_net_target = self.policy.q_net_target - - def _on_step(self) -> None: - """ - Update the exploration rate and target network if needed. - This method is called in ``collect_rollouts()`` after each step in the environment. 
- """ - if self.num_timesteps % self.target_update_interval == 0: - polyak_update(self.q_net.parameters(), self.q_net_target.parameters(), self.tau) - - self.exploration_rate = self.exploration_schedule(self._current_progress_remaining) - logger.record("rollout/exploration rate", self.exploration_rate) - - def train(self, gradient_steps: int, batch_size: int = 100) -> None: - # Update learning rate according to schedule - self._update_learning_rate(self.policy.optimizer) - - losses = [] - for _ in range(gradient_steps): - # Sample replay buffer - replay_data = self.replay_buffer.sample(batch_size, env=self._vec_normalize_env) - - with th.no_grad(): - # Compute the next Q-values using the target network - next_q_values = self.q_net_target(replay_data.next_observations) - # Follow greedy policy: use the one with the highest value - next_q_values, _ = next_q_values.max(dim=1) - # Avoid potential broadcast issue - next_q_values = next_q_values.reshape(-1, 1) - # 1-step TD target - target_q_values = replay_data.rewards + (1 - replay_data.dones) * self.gamma * next_q_values - - # Get current Q-values estimates - current_q_values = self.q_net(replay_data.observations) - - # Retrieve the q-values for the actions from the replay buffer - current_q_values = th.gather(current_q_values, dim=1, index=replay_data.actions.long()) - - # Compute Huber loss (less sensitive to outliers) - loss = F.smooth_l1_loss(current_q_values, target_q_values) - losses.append(loss.item()) - - # Optimize the policy - self.policy.optimizer.zero_grad() - loss.backward() - # Clip gradient norm - th.nn.utils.clip_grad_norm_(self.policy.parameters(), self.max_grad_norm) - self.policy.optimizer.step() - - # Increase update counter - self._n_updates += gradient_steps - - logger.record("train/n_updates", self._n_updates, exclude="tensorboard") - logger.record("train/loss", np.mean(losses)) - - def predict( - self, - observation: np.ndarray, - state: Optional[np.ndarray] = None, - mask: Optional[np.ndarray] = None, - deterministic: bool = False, - ) -> Tuple[np.ndarray, Optional[np.ndarray]]: - """ - Overrides the base_class predict function to include epsilon-greedy exploration. - - :param observation: the input observation - :param state: The last states (can be None, used in recurrent policies) - :param mask: The last masks (can be None, used in recurrent policies) - :param deterministic: Whether or not to return deterministic actions. 
- :return: the model's action and the next state - (used in recurrent policies) - """ - if not deterministic and np.random.rand() < self.exploration_rate: - if is_vectorized_observation(maybe_transpose(observation, self.observation_space), self.observation_space): - n_batch = observation.shape[0] - action = np.array([self.action_space.sample() for _ in range(n_batch)]) - else: - action = np.array(self.action_space.sample()) - else: - action, state = self.policy.predict(observation, state, mask, deterministic) - return action, state - - def learn( - self, - total_timesteps: int, - callback: MaybeCallback = None, - log_interval: int = 4, - eval_env: Optional[GymEnv] = None, - eval_freq: int = -1, - n_eval_episodes: int = 5, - tb_log_name: str = "DQN", - eval_log_path: Optional[str] = None, - reset_num_timesteps: bool = True, - ) -> OffPolicyAlgorithm: - - return super(DQN, self).learn( - total_timesteps=total_timesteps, - callback=callback, - log_interval=log_interval, - eval_env=eval_env, - eval_freq=eval_freq, - n_eval_episodes=n_eval_episodes, - tb_log_name=tb_log_name, - eval_log_path=eval_log_path, - reset_num_timesteps=reset_num_timesteps, - ) - - def _excluded_save_params(self) -> List[str]: - return super(DQN, self)._excluded_save_params() + ["q_net", "q_net_target"] - - def _get_torch_save_params(self) -> Tuple[List[str], List[str]]: - state_dicts = ["policy", "policy.optimizer"] - - return state_dicts, [] diff --git a/spaces/LucasCodeBreak/MusicGen/audiocraft/modules/codebooks_patterns.py b/spaces/LucasCodeBreak/MusicGen/audiocraft/modules/codebooks_patterns.py deleted file mode 100644 index c5b35cbea8cff84aa56116dbdd860fc72a913a13..0000000000000000000000000000000000000000 --- a/spaces/LucasCodeBreak/MusicGen/audiocraft/modules/codebooks_patterns.py +++ /dev/null @@ -1,539 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from collections import namedtuple -from dataclasses import dataclass -from functools import lru_cache -import logging -import typing as tp - -from abc import ABC, abstractmethod -import torch - -LayoutCoord = namedtuple('LayoutCoord', ['t', 'q']) # (timestep, codebook index) -PatternLayout = tp.List[tp.List[LayoutCoord]] # Sequence of coordinates -logger = logging.getLogger(__name__) - - -@dataclass -class Pattern: - """Base implementation of a pattern over a sequence with multiple codebooks. - - The codebook pattern consists in a layout, defining for each sequence step - the list of coordinates of each codebook timestep in the resulting interleaved sequence. - The first item of the pattern is always an empty list in order to properly insert a special token - to start with. For convenience, we also keep track of ``n_q`` the number of codebooks used for the pattern - and ``timesteps`` the number of timesteps corresponding to the original sequence. - - The pattern provides convenient methods to build and revert interleaved sequences from it: - ``build_pattern_sequence`` maps a given a dense input tensor of multi-codebook sequence from [B, K, T] - to the interleaved sequence of shape [B, K, S] applying the pattern, with S being the batch size, - K being the number of codebooks, T the number of original timesteps and S the number of sequence steps - for the output sequence. 
The unfilled positions are replaced with a special token and the built sequence - is returned along with a mask indicating valid tokens. - ``revert_pattern_sequence`` maps back an interleaved sequence of shape [B, K, S] to the original alignment - of codebooks across timesteps to an output tensor of shape [B, K, T], using again a special token and a mask - to fill and specify invalid positions if needed. - See the dedicated methods for more details. - """ - # Pattern layout, for each sequence step, we have a list of coordinates - # corresponding to the original codebook timestep and position. - # The first list is always an empty list in order to properly insert - # a special token to start with. - layout: PatternLayout - timesteps: int - n_q: int - - def __post_init__(self): - assert len(self.layout) > 0 - assert self.layout[0] == [] - self._validate_layout() - self._build_reverted_sequence_scatter_indexes = lru_cache(100)(self._build_reverted_sequence_scatter_indexes) - self._build_pattern_sequence_scatter_indexes = lru_cache(100)(self._build_pattern_sequence_scatter_indexes) - logger.info("New pattern, time steps: %d, sequence steps: %d", self.timesteps, len(self.layout)) - - def _validate_layout(self): - """Runs checks on the layout to ensure a valid pattern is defined. - A pattern is considered invalid if: - - Multiple timesteps for a same codebook are defined in the same sequence step - - The timesteps for a given codebook are not in ascending order as we advance in the sequence - (this would mean that we have future timesteps before past timesteps). - """ - q_timesteps = {q: 0 for q in range(self.n_q)} - for s, seq_coords in enumerate(self.layout): - if len(seq_coords) > 0: - qs = set() - for coord in seq_coords: - qs.add(coord.q) - last_q_timestep = q_timesteps[coord.q] - assert coord.t >= last_q_timestep, \ - f"Past timesteps are found in the sequence for codebook = {coord.q} at step {s}" - q_timesteps[coord.q] = coord.t - # each sequence step contains at max 1 coordinate per codebook - assert len(qs) == len(seq_coords), \ - f"Multiple entries for a same codebook are found at step {s}" - - @property - def num_sequence_steps(self): - return len(self.layout) - 1 - - @property - def max_delay(self): - max_t_in_seq_coords = 0 - for seq_coords in self.layout[1:]: - for coords in seq_coords: - max_t_in_seq_coords = max(max_t_in_seq_coords, coords.t + 1) - return max_t_in_seq_coords - self.timesteps - - @property - def valid_layout(self): - valid_step = len(self.layout) - self.max_delay - return self.layout[:valid_step] - - def get_sequence_coords_with_timestep(self, t: int, q: tp.Optional[int] = None): - """Get codebook coordinates in the layout that corresponds to the specified timestep t - and optionally to the codebook q. Coordinates are returned as a tuple with the sequence step - and the actual codebook coordinates. 
- """ - assert t <= self.timesteps, "provided timesteps is greater than the pattern's number of timesteps" - if q is not None: - assert q <= self.n_q, "provided number of codebooks is greater than the pattern's number of codebooks" - coords = [] - for s, seq_codes in enumerate(self.layout): - for code in seq_codes: - if code.t == t and (q is None or code.q == q): - coords.append((s, code)) - return coords - - def get_steps_with_timestep(self, t: int, q: tp.Optional[int] = None) -> tp.List[int]: - return [step for step, coords in self.get_sequence_coords_with_timestep(t, q)] - - def get_first_step_with_timesteps(self, t: int, q: tp.Optional[int] = None) -> tp.Optional[int]: - steps_with_timesteps = self.get_steps_with_timestep(t, q) - return steps_with_timesteps[0] if len(steps_with_timesteps) > 0 else None - - def _build_pattern_sequence_scatter_indexes(self, timesteps: int, n_q: int, keep_only_valid_steps: bool, - device: tp.Union[torch.device, str] = 'cpu'): - """Build scatter indexes corresponding to the pattern, up to the provided sequence_steps. - - Args: - timesteps (int): Maximum number of timesteps steps to consider. - keep_only_valid_steps (bool): Restrict the pattern layout to match only valid steps. - device (Union[torch.device, str]): Device for created tensors. - Returns: - indexes (torch.Tensor): Indexes corresponding to the sequence, of shape [K, S]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes, of shape [K, S]. - """ - assert n_q == self.n_q, f"invalid number of codebooks for the sequence and the pattern: {n_q} != {self.n_q}" - assert timesteps <= self.timesteps, "invalid number of timesteps used to build the sequence from the pattern" - # use the proper layout based on whether we limit ourselves to valid steps only or not, - # note that using the valid_layout will result in a truncated sequence up to the valid steps - ref_layout = self.valid_layout if keep_only_valid_steps else self.layout - # single item indexing being super slow with pytorch vs. numpy, so we use numpy here - indexes = torch.zeros(n_q, len(ref_layout), dtype=torch.long).numpy() - mask = torch.zeros(n_q, len(ref_layout), dtype=torch.bool).numpy() - # fill indexes with last sequence step value that will correspond to our special token - # the last value is n_q * timesteps as we have flattened z and append special token as the last token - # which will correspond to the index: n_q * timesteps - indexes[:] = n_q * timesteps - # iterate over the pattern and fill scattered indexes and mask - for s, sequence_coords in enumerate(ref_layout): - for coords in sequence_coords: - if coords.t < timesteps: - indexes[coords.q, s] = coords.t + coords.q * timesteps - mask[coords.q, s] = 1 - indexes = torch.from_numpy(indexes).to(device) - mask = torch.from_numpy(mask).to(device) - return indexes, mask - - def build_pattern_sequence(self, z: torch.Tensor, special_token: int, keep_only_valid_steps: bool = False): - """Build sequence corresponding to the pattern from the input tensor z. - The sequence is built using up to sequence_steps if specified, and non-pattern - coordinates are filled with the special token. - - Args: - z (torch.Tensor): Input tensor of multi-codebooks sequence, of shape [B, K, T]. - special_token (int): Special token used to fill non-pattern coordinates in the new sequence. - keep_only_valid_steps (bool): Build a sequence from the pattern up to valid (= fully defined) steps. - Steps that are beyond valid steps will be replaced by the special_token in that case. 
- Returns: - values (torch.Tensor): Interleaved sequence matching the pattern, of shape [B, K, S] with S - corresponding either to the sequence_steps if provided, otherwise to the length of the pattern. - indexes (torch.Tensor): Indexes corresponding to the interleaved sequence, of shape [K, S]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, S]. - """ - B, K, T = z.shape - indexes, mask = self._build_pattern_sequence_scatter_indexes( - T, K, keep_only_valid_steps=keep_only_valid_steps, device=str(z.device) - ) - z = z.view(B, -1) - # we append the special token as the last index of our flattened z tensor - z = torch.cat([z, torch.zeros_like(z[:, :1]) + special_token], dim=1) - values = z[:, indexes.view(-1)] - values = values.view(B, K, indexes.shape[-1]) - return values, indexes, mask - - def _build_reverted_sequence_scatter_indexes(self, sequence_steps: int, n_q: int, - keep_only_valid_steps: bool = False, - is_model_output: bool = False, - device: tp.Union[torch.device, str] = 'cpu'): - """Builds scatter indexes required to retrieve the original multi-codebook sequence - from interleaving pattern. - - Args: - sequence_steps (int): Sequence steps. - n_q (int): Number of codebooks. - keep_only_valid_steps (bool): Build a sequence from the pattern up to valid (= fully defined) steps. - Steps that are beyond valid steps will be replaced by the special_token in that case. - is_model_output (bool): Whether to keep the sequence item corresponding to initial special token or not. - device (Union[torch.device, str]): Device for created tensors. - Returns: - torch.Tensor: Indexes for reconstructing the output, of shape [K, T]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, T]. - """ - ref_layout = self.valid_layout if keep_only_valid_steps else self.layout - # TODO(jade): Do we want to further truncate to only valid timesteps here as well? - timesteps = self.timesteps - assert n_q == self.n_q, f"invalid number of codebooks for the sequence and the pattern: {n_q} != {self.n_q}" - assert sequence_steps <= len(ref_layout), \ - f"sequence to revert is longer than the defined pattern: {sequence_steps} > {len(ref_layout)}" - - # ensure we take the appropriate indexes to keep the model output from the first special token as well - if is_model_output: - ref_layout = ref_layout[1:] - - # single item indexing being super slow with pytorch vs. numpy, so we use numpy here - indexes = torch.zeros(n_q, timesteps, dtype=torch.long).numpy() - mask = torch.zeros(n_q, timesteps, dtype=torch.bool).numpy() - # fill indexes with last sequence step value that will correspond to our special token - indexes[:] = n_q * sequence_steps - for s, sequence_codes in enumerate(ref_layout): - if s < sequence_steps: - for code in sequence_codes: - if code.t < timesteps: - indexes[code.q, code.t] = s + code.q * sequence_steps - mask[code.q, code.t] = 1 - indexes = torch.from_numpy(indexes).to(device) - mask = torch.from_numpy(mask).to(device) - return indexes, mask - - def revert_pattern_sequence(self, s: torch.Tensor, special_token: int, keep_only_valid_steps: bool = False): - """Revert a sequence built from the pattern back to the original multi-codebook sequence without interleaving. - The sequence is reverted using up to timesteps if specified, and non-pattern coordinates - are filled with the special token. - - Args: - s (torch.Tensor): Interleaved sequence tensor obtained from the pattern, of shape [B, K, S]. 
- special_token (int or float): Special token used to fill non-pattern coordinates in the new sequence. - Returns: - values (torch.Tensor): Interleaved sequence matching the pattern, of shape [B, K, T] with T - corresponding either to the timesteps if provided, or the total timesteps in pattern otherwise. - indexes (torch.Tensor): Indexes corresponding to the interleaved sequence, of shape [K, T]. - mask (torch.Tensor): Mask corresponding to indexes that matches valid indexes of shape [K, T]. - """ - B, K, S = s.shape - indexes, mask = self._build_reverted_sequence_scatter_indexes( - S, K, keep_only_valid_steps, is_model_output=False, device=str(s.device) - ) - s = s.view(B, -1) - # we append the special token as the last index of our flattened z tensor - s = torch.cat([s, torch.zeros_like(s[:, :1]) + special_token], dim=1) - values = s[:, indexes.view(-1)] - values = values.view(B, K, indexes.shape[-1]) - return values, indexes, mask - - def revert_pattern_logits(self, logits: torch.Tensor, special_token: float, keep_only_valid_steps: bool = False): - """Revert model logits obtained on a sequence built from the pattern - back to a tensor matching the original sequence. - - This method is similar to ``revert_pattern_sequence`` with the following specificities: - 1. It is designed to work with the extra cardinality dimension - 2. We return the logits for the first sequence item that matches the special_token and - which matching target in the original sequence is the first item of the sequence, - while we skip the last logits as there is no matching target - """ - B, card, K, S = logits.shape - indexes, mask = self._build_reverted_sequence_scatter_indexes( - S, K, keep_only_valid_steps, is_model_output=True, device=logits.device - ) - logits = logits.reshape(B, card, -1) - # we append the special token as the last index of our flattened z tensor - logits = torch.cat([logits, torch.zeros_like(logits[:, :, :1]) + special_token], dim=-1) # [B, card, K x S] - values = logits[:, :, indexes.view(-1)] - values = values.view(B, card, K, indexes.shape[-1]) - return values, indexes, mask - - -class CodebooksPatternProvider(ABC): - """Abstraction around providing pattern for interleaving codebooks. - - The CodebooksPatternProvider abstraction allows to implement various strategies to - define interleaving pattern of sequences composed of multiple codebooks. For a given - number of codebooks `n_q`, the pattern provider can generate a specified pattern - corresponding to a sequence of `T` timesteps with `n_q` parallel codebooks. This pattern - can be used to construct a new sequence from the original codes respecting the specified - pattern. The pattern is defined as a list of list of code coordinates, code coordinate - being a tuple with the original timestep and codebook to build the new sequence. - Note that all patterns must start with an empty list that is then used to insert a first - sequence step of special tokens in the newly generated sequence. - - Args: - n_q (int): number of codebooks. - cached (bool): if True, patterns for a given length are cached. In general - that should be true for efficiency reason to avoid synchronization points. - """ - def __init__(self, n_q: int, cached: bool = True): - assert n_q > 0 - self.n_q = n_q - self.get_pattern = lru_cache(100)(self.get_pattern) # type: ignore - - @abstractmethod - def get_pattern(self, timesteps: int) -> Pattern: - """Builds pattern with specific interleaving between codebooks. - - Args: - timesteps (int): Total numer of timesteps. 
- """ - raise NotImplementedError() - - -class DelayedPatternProvider(CodebooksPatternProvider): - """Provider for delayed pattern across delayed codebooks. - Codebooks are delayed in the sequence and sequence steps will contain codebooks - from different timesteps. - - Example: - Taking timesteps=4 and n_q=3, delays=None, the multi-codebook sequence: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - The resulting sequence obtained from the returned pattern is: - [[S, 1, 2, 3, 4], - [S, S, 1, 2, 3], - [S, S, S, 1, 2]] - (with S being a special token) - - Args: - n_q (int): Number of codebooks. - delays (Optional[List[int]]): Delay for each of the codebooks. - If delays not defined, each codebook is delayed by 1 compared to the previous one. - flatten_first (int): Flatten the first N timesteps. - empty_initial (int): Prepend with N empty list of coordinates. - """ - def __init__(self, n_q: int, delays: tp.Optional[tp.List[int]] = None, - flatten_first: int = 0, empty_initial: int = 0): - super().__init__(n_q) - if delays is None: - delays = list(range(n_q)) - self.delays = delays - self.flatten_first = flatten_first - self.empty_initial = empty_initial - assert len(self.delays) == self.n_q - assert sorted(self.delays) == self.delays - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - max_delay = max(self.delays) - if self.empty_initial: - out += [[] for _ in range(self.empty_initial)] - if self.flatten_first: - for t in range(min(timesteps, self.flatten_first)): - for q in range(self.n_q): - out.append([LayoutCoord(t, q)]) - for t in range(self.flatten_first, timesteps + max_delay): - v = [] - for q, delay in enumerate(self.delays): - t_for_q = t - delay - if t_for_q >= self.flatten_first: - v.append(LayoutCoord(t_for_q, q)) - out.append(v) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) - - -class ParallelPatternProvider(DelayedPatternProvider): - """Provider for parallel pattern across codebooks. - This pattern provider is a special case of the delayed pattern with actually no delay, - hence delays=repeat(0, n_q). - - Args: - n_q (int): Number of codebooks. - """ - def __init__(self, n_q: int): - super().__init__(n_q, [0] * n_q) - - -class UnrolledPatternProvider(CodebooksPatternProvider): - """Provider for unrolling codebooks pattern. - This pattern provider enables to represent the codebook flattened completely or only to some extend - while also specifying a given delay between the flattened codebooks representation, allowing to - unroll the codebooks in the sequence. - - Example: - 1. Flattening of the codebooks. - By default, the pattern provider will fully flatten the codebooks such as flattening=range(n_q), - taking n_q = 3 and timesteps = 4: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result into: - [[S, S, 1, S, S, 2, S, S, 3, S, S, 4], - [S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [1, S, S, 2, S, S, 3, S, S, 4, S, S]] - 2. Partial flattening of the codebooks. The ``flattening`` parameter allows to specify the inner step - for each of the codebook, allowing to define which codebook to flatten (or keep in parallel), for example - taking n_q = 3, timesteps = 4 and flattening = [0, 1, 1]: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result into: - [[S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [S, 1, S, S, 2, S, S, 3, S, S, 4, S], - [1, S, S, 2, S, S, 3, S, S, 4, S, S]] - 3. Flattening with delay. The ``delay`` parameter allows to further unroll the sequence of codebooks - allowing to specify the delay per codebook. 
Note that the delay between codebooks flattened to the - same inner timestep should be coherent. For example, taking n_q = 3, timesteps = 4, flattening = [0, 1, 1] - and delays = [0, 3, 3]: - [[1, 2, 3, 4], - [1, 2, 3, 4], - [1, 2, 3, 4]] - will result into: - [[S, S, S, 1, S, 2, S, 3, S, 4], - [S, S, S, 1, S, 2, S, 3, S, 4], - [1, 2, 3, S, 4, S, 5, S, 6, S]] - - Args: - n_q (int): Number of codebooks. - flattening (Optional[List[int]]): Flattening schema over the codebooks. If not defined, - the codebooks will be flattened to 1 codebook per step, meaning that the sequence will - have n_q extra steps for each timestep. - delays (Optional[List[int]]): Delay for each of the codebooks. If not defined, - no delay is added and therefore will default to [0] * ``n_q``. - Note that two codebooks that will be flattened to the same inner step - should have the same delay, otherwise the pattern is considered as invalid. - """ - FlattenedCodebook = namedtuple('FlattenedCodebook', ['codebooks', 'delay']) - - def __init__(self, n_q: int, flattening: tp.Optional[tp.List[int]] = None, - delays: tp.Optional[tp.List[int]] = None): - super().__init__(n_q) - if flattening is None: - flattening = list(range(n_q)) - if delays is None: - delays = [0] * n_q - assert len(flattening) == n_q - assert len(delays) == n_q - assert sorted(flattening) == flattening - assert sorted(delays) == delays - self._flattened_codebooks = self._build_flattened_codebooks(delays, flattening) - self.max_delay = max(delays) - - def _build_flattened_codebooks(self, delays: tp.List[int], flattening: tp.List[int]): - """Build a flattened codebooks representation as a dictionary of inner step - and the actual codebook indices corresponding to the flattened codebook. For convenience, we - also store the delay associated to the flattened codebook to avoid maintaining an extra mapping. - """ - flattened_codebooks: dict = {} - for q, (inner_step, delay) in enumerate(zip(flattening, delays)): - if inner_step not in flattened_codebooks: - flat_codebook = UnrolledPatternProvider.FlattenedCodebook(codebooks=[q], delay=delay) - else: - flat_codebook = flattened_codebooks[inner_step] - assert flat_codebook.delay == delay, ( - "Delay and flattening between codebooks is inconsistent: ", - "two codebooks flattened to the same position should have the same delay." - ) - flat_codebook.codebooks.append(q) - flattened_codebooks[inner_step] = flat_codebook - return flattened_codebooks - - @property - def _num_inner_steps(self): - """Number of inner steps to unroll between timesteps in order to flatten the codebooks. - """ - return max([inner_step for inner_step in self._flattened_codebooks.keys()]) + 1 - - def num_virtual_steps(self, timesteps: int) -> int: - return timesteps * self._num_inner_steps + 1 - - def get_pattern(self, timesteps: int) -> Pattern: - """Builds pattern for delay across codebooks. - - Args: - timesteps (int): Total numer of timesteps. 
- """ - # the PatternLayout is built as a tuple of sequence position and list of coordinates - # so that it can be reordered properly given the required delay between codebooks of given timesteps - indexed_out: list = [(-1, [])] - max_timesteps = timesteps + self.max_delay - for t in range(max_timesteps): - # for each timestep, we unroll the flattened codebooks, - # emitting the sequence step with the corresponding delay - for step in range(self._num_inner_steps): - if step in self._flattened_codebooks: - # we have codebooks at this virtual step to emit - step_codebooks = self._flattened_codebooks[step] - t_for_q = t + step_codebooks.delay - coords = [LayoutCoord(t, q) for q in step_codebooks.codebooks] - if t_for_q < max_timesteps and t < max_timesteps: - indexed_out.append((t_for_q, coords)) - else: - # there is no codebook in this virtual step so we emit an empty list - indexed_out.append((t, [])) - out = [coords for _, coords in sorted(indexed_out)] - return Pattern(out, n_q=self.n_q, timesteps=timesteps) - - -class VALLEPattern(CodebooksPatternProvider): - """Almost VALL-E style pattern. We futher allow some delays for the - codebooks other than the first one. - - Args: - n_q (int): Number of codebooks. - delays (Optional[List[int]]): Delay for each of the codebooks. - If delays not defined, each codebook is delayed by 1 compared to the previous one. - """ - def __init__(self, n_q: int, delays: tp.Optional[tp.List[int]] = None): - super().__init__(n_q) - if delays is None: - delays = [0] * (n_q - 1) - self.delays = delays - assert len(self.delays) == self.n_q - 1 - assert sorted(self.delays) == self.delays - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - for t in range(timesteps): - out.append([LayoutCoord(t, 0)]) - max_delay = max(self.delays) - for t in range(timesteps + max_delay): - v = [] - for q, delay in enumerate(self.delays): - t_for_q = t - delay - if t_for_q >= 0: - v.append(LayoutCoord(t_for_q, q + 1)) - out.append(v) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) - - -class MusicLMPattern(CodebooksPatternProvider): - """Almost MusicLM style pattern. This is equivalent to full flattening - but in a different order. - - Args: - n_q (int): Number of codebooks. - group_by (int): Number of codebooks to group together. - """ - def __init__(self, n_q: int, group_by: int = 2): - super().__init__(n_q) - self.group_by = group_by - - def get_pattern(self, timesteps: int) -> Pattern: - out: PatternLayout = [[]] - for offset in range(0, self.n_q, self.group_by): - for t in range(timesteps): - for q in range(offset, offset + self.group_by): - out.append([LayoutCoord(t, q)]) - return Pattern(out, n_q=self.n_q, timesteps=timesteps) diff --git a/spaces/MWilinski/bot/tests/api/question_answering/test_response.py b/spaces/MWilinski/bot/tests/api/question_answering/test_response.py deleted file mode 100644 index 3f0a3297713eb451917cb7d5e860628924b921e6..0000000000000000000000000000000000000000 --- a/spaces/MWilinski/bot/tests/api/question_answering/test_response.py +++ /dev/null @@ -1,27 +0,0 @@ -import pytest -from api.question_answering.response import Response - - -def test_set_answer(): - r = Response() - r.set_answer('Hello, World!') - assert r.get_answer() == 'Hello, World!' 
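For orientation, a minimal `Response` object that would make this first test pass could look like the following sketch (a hypothetical stand-in, not the actual `api.question_answering.response.Response` class):

```python
# Hypothetical minimal Response: stores an answer string and returns it.
class Response:
    def __init__(self) -> None:
        self._answer = ""

    def set_answer(self, answer: str) -> None:
        self._answer = answer

    def get_answer(self, include_sources: bool = False) -> str:
        return self._answer
```

The remaining tests below additionally exercise source deduplication and source-aware formatting, which a real implementation would also need to cover.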
- - -def test_set_sources(): - r = Response() - r.set_sources(['source1', 'source1', 'source2']) - assert len(r.get_sources()) == 2 - - -def test_get_sources_as_text(): - r = Response() - r.set_sources(['source1', 'source2']) - assert isinstance(r.get_sources_as_text(), str) - - -def test_get_response_include_sources(): - r = Response() - r.set_answer('Hello, World!') - r.set_sources(['source1', 'source2']) - assert len(r.get_answer(include_sources=True)) > len('Hello, World!') diff --git a/spaces/MarcusSu1216/XingTong/onnx_export.py b/spaces/MarcusSu1216/XingTong/onnx_export.py deleted file mode 100644 index a70a912cc1b6dd908ff6496bbc6fa8dd576e233b..0000000000000000000000000000000000000000 --- a/spaces/MarcusSu1216/XingTong/onnx_export.py +++ /dev/null @@ -1,54 +0,0 @@ -import torch -from onnxexport.model_onnx import SynthesizerTrn -import utils - -def main(NetExport): - path = "SoVits4.0" - if NetExport: - device = torch.device("cpu") - hps = utils.get_hparams_from_file(f"checkpoints/{path}/config.json") - SVCVITS = SynthesizerTrn( - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - **hps.model) - _ = utils.load_checkpoint(f"checkpoints/{path}/model.pth", SVCVITS, None) - _ = SVCVITS.eval().to(device) - for i in SVCVITS.parameters(): - i.requires_grad = False - - n_frame = 10 - test_hidden_unit = torch.rand(1, n_frame, 256) - test_pitch = torch.rand(1, n_frame) - test_mel2ph = torch.arange(0, n_frame, dtype=torch.int64)[None] # torch.LongTensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]).unsqueeze(0) - test_uv = torch.ones(1, n_frame, dtype=torch.float32) - test_noise = torch.randn(1, 192, n_frame) - test_sid = torch.LongTensor([0]) - input_names = ["c", "f0", "mel2ph", "uv", "noise", "sid"] - output_names = ["audio", ] - - torch.onnx.export(SVCVITS, - ( - test_hidden_unit.to(device), - test_pitch.to(device), - test_mel2ph.to(device), - test_uv.to(device), - test_noise.to(device), - test_sid.to(device) - ), - f"checkpoints/{path}/model.onnx", - dynamic_axes={ - "c": [0, 1], - "f0": [1], - "mel2ph": [1], - "uv": [1], - "noise": [2], - }, - do_constant_folding=False, - opset_version=16, - verbose=False, - input_names=input_names, - output_names=output_names) - - -if __name__ == '__main__': - main(True) diff --git a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/tools/merge_lvis_coco.py b/spaces/MattyWhite/ChatGPT-ImageCaptioner2/tools/merge_lvis_coco.py deleted file mode 100644 index abc2b673a30541fd71679a549acd9a53f7693183..0000000000000000000000000000000000000000 --- a/spaces/MattyWhite/ChatGPT-ImageCaptioner2/tools/merge_lvis_coco.py +++ /dev/null @@ -1,202 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-from collections import defaultdict -import torch -import sys -import json -import numpy as np - -from detectron2.structures import Boxes, pairwise_iou -COCO_PATH = 'datasets/coco/annotations/instances_train2017.json' -IMG_PATH = 'datasets/coco/train2017/' -LVIS_PATH = 'datasets/lvis/lvis_v1_train.json' -NO_SEG = False -if NO_SEG: - SAVE_PATH = 'datasets/lvis/lvis_v1_train+coco_box.json' -else: - SAVE_PATH = 'datasets/lvis/lvis_v1_train+coco_mask.json' -THRESH = 0.7 -DEBUG = False - -# This mapping is extracted from the official LVIS mapping: -# https://github.com/lvis-dataset/lvis-api/blob/master/data/coco_to_synset.json -COCO_SYNSET_CATEGORIES = [ - {"synset": "person.n.01", "coco_cat_id": 1}, - {"synset": "bicycle.n.01", "coco_cat_id": 2}, - {"synset": "car.n.01", "coco_cat_id": 3}, - {"synset": "motorcycle.n.01", "coco_cat_id": 4}, - {"synset": "airplane.n.01", "coco_cat_id": 5}, - {"synset": "bus.n.01", "coco_cat_id": 6}, - {"synset": "train.n.01", "coco_cat_id": 7}, - {"synset": "truck.n.01", "coco_cat_id": 8}, - {"synset": "boat.n.01", "coco_cat_id": 9}, - {"synset": "traffic_light.n.01", "coco_cat_id": 10}, - {"synset": "fireplug.n.01", "coco_cat_id": 11}, - {"synset": "stop_sign.n.01", "coco_cat_id": 13}, - {"synset": "parking_meter.n.01", "coco_cat_id": 14}, - {"synset": "bench.n.01", "coco_cat_id": 15}, - {"synset": "bird.n.01", "coco_cat_id": 16}, - {"synset": "cat.n.01", "coco_cat_id": 17}, - {"synset": "dog.n.01", "coco_cat_id": 18}, - {"synset": "horse.n.01", "coco_cat_id": 19}, - {"synset": "sheep.n.01", "coco_cat_id": 20}, - {"synset": "beef.n.01", "coco_cat_id": 21}, - {"synset": "elephant.n.01", "coco_cat_id": 22}, - {"synset": "bear.n.01", "coco_cat_id": 23}, - {"synset": "zebra.n.01", "coco_cat_id": 24}, - {"synset": "giraffe.n.01", "coco_cat_id": 25}, - {"synset": "backpack.n.01", "coco_cat_id": 27}, - {"synset": "umbrella.n.01", "coco_cat_id": 28}, - {"synset": "bag.n.04", "coco_cat_id": 31}, - {"synset": "necktie.n.01", "coco_cat_id": 32}, - {"synset": "bag.n.06", "coco_cat_id": 33}, - {"synset": "frisbee.n.01", "coco_cat_id": 34}, - {"synset": "ski.n.01", "coco_cat_id": 35}, - {"synset": "snowboard.n.01", "coco_cat_id": 36}, - {"synset": "ball.n.06", "coco_cat_id": 37}, - {"synset": "kite.n.03", "coco_cat_id": 38}, - {"synset": "baseball_bat.n.01", "coco_cat_id": 39}, - {"synset": "baseball_glove.n.01", "coco_cat_id": 40}, - {"synset": "skateboard.n.01", "coco_cat_id": 41}, - {"synset": "surfboard.n.01", "coco_cat_id": 42}, - {"synset": "tennis_racket.n.01", "coco_cat_id": 43}, - {"synset": "bottle.n.01", "coco_cat_id": 44}, - {"synset": "wineglass.n.01", "coco_cat_id": 46}, - {"synset": "cup.n.01", "coco_cat_id": 47}, - {"synset": "fork.n.01", "coco_cat_id": 48}, - {"synset": "knife.n.01", "coco_cat_id": 49}, - {"synset": "spoon.n.01", "coco_cat_id": 50}, - {"synset": "bowl.n.03", "coco_cat_id": 51}, - {"synset": "banana.n.02", "coco_cat_id": 52}, - {"synset": "apple.n.01", "coco_cat_id": 53}, - {"synset": "sandwich.n.01", "coco_cat_id": 54}, - {"synset": "orange.n.01", "coco_cat_id": 55}, - {"synset": "broccoli.n.01", "coco_cat_id": 56}, - {"synset": "carrot.n.01", "coco_cat_id": 57}, - # {"synset": "frank.n.02", "coco_cat_id": 58}, - {"synset": "sausage.n.01", "coco_cat_id": 58}, - {"synset": "pizza.n.01", "coco_cat_id": 59}, - {"synset": "doughnut.n.02", "coco_cat_id": 60}, - {"synset": "cake.n.03", "coco_cat_id": 61}, - {"synset": "chair.n.01", "coco_cat_id": 62}, - {"synset": "sofa.n.01", "coco_cat_id": 63}, - {"synset": "pot.n.04", "coco_cat_id": 64}, - 
{"synset": "bed.n.01", "coco_cat_id": 65}, - {"synset": "dining_table.n.01", "coco_cat_id": 67}, - {"synset": "toilet.n.02", "coco_cat_id": 70}, - {"synset": "television_receiver.n.01", "coco_cat_id": 72}, - {"synset": "laptop.n.01", "coco_cat_id": 73}, - {"synset": "mouse.n.04", "coco_cat_id": 74}, - {"synset": "remote_control.n.01", "coco_cat_id": 75}, - {"synset": "computer_keyboard.n.01", "coco_cat_id": 76}, - {"synset": "cellular_telephone.n.01", "coco_cat_id": 77}, - {"synset": "microwave.n.02", "coco_cat_id": 78}, - {"synset": "oven.n.01", "coco_cat_id": 79}, - {"synset": "toaster.n.02", "coco_cat_id": 80}, - {"synset": "sink.n.01", "coco_cat_id": 81}, - {"synset": "electric_refrigerator.n.01", "coco_cat_id": 82}, - {"synset": "book.n.01", "coco_cat_id": 84}, - {"synset": "clock.n.01", "coco_cat_id": 85}, - {"synset": "vase.n.01", "coco_cat_id": 86}, - {"synset": "scissors.n.01", "coco_cat_id": 87}, - {"synset": "teddy.n.01", "coco_cat_id": 88}, - {"synset": "hand_blower.n.01", "coco_cat_id": 89}, - {"synset": "toothbrush.n.01", "coco_cat_id": 90}, -] - - -def get_bbox(ann): - bbox = ann['bbox'] - return [bbox[0], bbox[1], bbox[0] + bbox[2], bbox[1] + bbox[3]] - - -if __name__ == '__main__': - file_name_key = 'file_name' if 'v0.5' in LVIS_PATH else 'coco_url' - coco_data = json.load(open(COCO_PATH, 'r')) - lvis_data = json.load(open(LVIS_PATH, 'r')) - - coco_cats = coco_data['categories'] - lvis_cats = lvis_data['categories'] - - num_find = 0 - num_not_find = 0 - num_twice = 0 - coco2lviscats = {} - synset2lvisid = {x['synset']: x['id'] for x in lvis_cats} - # cocoid2synset = {x['coco_cat_id']: x['synset'] for x in COCO_SYNSET_CATEGORIES} - coco2lviscats = {x['coco_cat_id']: synset2lvisid[x['synset']] \ - for x in COCO_SYNSET_CATEGORIES if x['synset'] in synset2lvisid} - print(len(coco2lviscats)) - - lvis_file2id = {x[file_name_key][-16:]: x['id'] for x in lvis_data['images']} - lvis_id2img = {x['id']: x for x in lvis_data['images']} - lvis_catid2name = {x['id']: x['name'] for x in lvis_data['categories']} - - coco_file2anns = {} - coco_id2img = {x['id']: x for x in coco_data['images']} - coco_img2anns = defaultdict(list) - for ann in coco_data['annotations']: - coco_img = coco_id2img[ann['image_id']] - file_name = coco_img['file_name'][-16:] - if ann['category_id'] in coco2lviscats and \ - file_name in lvis_file2id: - lvis_image_id = lvis_file2id[file_name] - lvis_image = lvis_id2img[lvis_image_id] - lvis_cat_id = coco2lviscats[ann['category_id']] - if lvis_cat_id in lvis_image['neg_category_ids']: - continue - if DEBUG: - import cv2 - img_path = IMG_PATH + file_name - img = cv2.imread(img_path) - print(lvis_catid2name[lvis_cat_id]) - print('neg', [lvis_catid2name[x] for x in lvis_image['neg_category_ids']]) - cv2.imshow('img', img) - cv2.waitKey() - ann['category_id'] = lvis_cat_id - ann['image_id'] = lvis_image_id - coco_img2anns[file_name].append(ann) - - lvis_img2anns = defaultdict(list) - for ann in lvis_data['annotations']: - lvis_img = lvis_id2img[ann['image_id']] - file_name = lvis_img[file_name_key][-16:] - lvis_img2anns[file_name].append(ann) - - ann_id_count = 0 - anns = [] - for file_name in lvis_img2anns: - coco_anns = coco_img2anns[file_name] - lvis_anns = lvis_img2anns[file_name] - ious = pairwise_iou( - Boxes(torch.tensor([get_bbox(x) for x in coco_anns])), - Boxes(torch.tensor([get_bbox(x) for x in lvis_anns])) - ) - - for ann in lvis_anns: - ann_id_count = ann_id_count + 1 - ann['id'] = ann_id_count - anns.append(ann) - - for i, ann in enumerate(coco_anns): - if 
len(ious[i]) == 0 or ious[i].max() < THRESH: - ann_id_count = ann_id_count + 1 - ann['id'] = ann_id_count - anns.append(ann) - else: - duplicated = False - for j in range(len(ious[i])): - if ious[i, j] >= THRESH and \ - coco_anns[i]['category_id'] == lvis_anns[j]['category_id']: - duplicated = True - if not duplicated: - ann_id_count = ann_id_count + 1 - ann['id'] = ann_id_count - anns.append(ann) - if NO_SEG: - for ann in anns: - del ann['segmentation'] - lvis_data['annotations'] = anns - - print('# Images', len(lvis_data['images'])) - print('# Anns', len(lvis_data['annotations'])) - json.dump(lvis_data, open(SAVE_PATH, 'w')) diff --git a/spaces/MedicalAILabo/Xp-age/README.md b/spaces/MedicalAILabo/Xp-age/README.md deleted file mode 100644 index ba4936232c6636691207dd3726b6ebfc341dcf50..0000000000000000000000000000000000000000 --- a/spaces/MedicalAILabo/Xp-age/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Xp Age -emoji: 🌍 -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/pipelines/__init__.py b/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/pipelines/__init__.py deleted file mode 100644 index 8b9046b07bb4ddea7a707a392b42e72db7c9df67..0000000000000000000000000000000000000000 --- a/spaces/Mellow-ai/PhotoAI_Mellow/annotator/uniformer/mmseg/datasets/pipelines/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -from .compose import Compose -from .formating import (Collect, ImageToTensor, ToDataContainer, ToTensor, - Transpose, to_tensor) -from .loading import LoadAnnotations, LoadImageFromFile -from .test_time_aug import MultiScaleFlipAug -from .transforms import (CLAHE, AdjustGamma, Normalize, Pad, - PhotoMetricDistortion, RandomCrop, RandomFlip, - RandomRotate, Rerange, Resize, RGB2Gray, SegRescale) - -__all__ = [ - 'Compose', 'to_tensor', 'ToTensor', 'ImageToTensor', 'ToDataContainer', - 'Transpose', 'Collect', 'LoadAnnotations', 'LoadImageFromFile', - 'MultiScaleFlipAug', 'Resize', 'RandomFlip', 'Pad', 'RandomCrop', - 'Normalize', 'SegRescale', 'PhotoMetricDistortion', 'RandomRotate', - 'AdjustGamma', 'CLAHE', 'Rerange', 'RGB2Gray' -] diff --git a/spaces/Miuzarte/SUI-svc-3.0/mel_processing.py b/spaces/Miuzarte/SUI-svc-3.0/mel_processing.py deleted file mode 100644 index 99c5b35beb83f3b288af0fac5b49ebf2c69f062c..0000000000000000000000000000000000000000 --- a/spaces/Miuzarte/SUI-svc-3.0/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = 
dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/MrBodean/VoiceClone/samples/README.md b/spaces/MrBodean/VoiceClone/samples/README.md deleted file mode 100644 index 1a392d86e42f72e83954619f563f4881da327236..0000000000000000000000000000000000000000 --- a/spaces/MrBodean/VoiceClone/samples/README.md +++ /dev/null @@ -1,22 +0,0 @@ -The audio files in this folder are provided for toolbox testing and -benchmarking purposes. 
These are the same reference utterances -used by the SV2TTS authors to generate the audio samples located at: -https://google.github.io/tacotron/publications/speaker_adaptation/index.html - -The `p240_00000.mp3` and `p260_00000.mp3` files are compressed -versions of audios from the VCTK corpus available at: -https://datashare.is.ed.ac.uk/handle/10283/3443 -VCTK.txt contains the copyright notices and licensing information. - -The `1320_00000.mp3`, `3575_00000.mp3`, `6829_00000.mp3` -and `8230_00000.mp3` files are compressed versions of audios -from the LibriSpeech dataset available at: https://openslr.org/12 -For these files, the following notice applies: -``` -LibriSpeech (c) 2014 by Vassil Panayotov - -LibriSpeech ASR corpus is licensed under a -Creative Commons Attribution 4.0 International License. - -See . -``` diff --git a/spaces/Mycroft756/artificialguybr-StickersRedmond/README.md b/spaces/Mycroft756/artificialguybr-StickersRedmond/README.md deleted file mode 100644 index f0b675537b9de64d28117479349da02e1826cd7a..0000000000000000000000000000000000000000 --- a/spaces/Mycroft756/artificialguybr-StickersRedmond/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Artificialguybr StickersRedmond -emoji: 🏃 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/layers/transformer_test.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/layers/transformer_test.py deleted file mode 100644 index 841feb9948cb69abe1b1b73364b6f09fa2bde836..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/layers/transformer_test.py +++ /dev/null @@ -1,253 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Tests for Keras-based transformer block layer.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -from absl.testing import parameterized -import numpy as np -import tensorflow as tf - -from tensorflow.python.keras import keras_parameterized # pylint: disable=g-direct-tensorflow-import -from official.nlp.modeling.layers import transformer - - -# This decorator runs the test in V1, V2-Eager, and V2-Functional mode. It -# guarantees forward compatibility of this code for the V2 switchover. 
-@keras_parameterized.run_all_keras_modes -@parameterized.named_parameters(('base', transformer.Transformer), - ('xla', transformer.CompiledTransformer)) -class TransformerLayerTest(keras_parameterized.TestCase): - - def tearDown(self): - super(TransformerLayerTest, self).tearDown() - tf.keras.mixed_precision.experimental.set_policy('float32') - - def test_layer_creation(self, transformer_cls): - test_layer = transformer_cls( - num_attention_heads=10, - intermediate_size=2048, - intermediate_activation='relu') - sequence_length = 21 - width = 80 - # Create a 3-dimensional input (the first dimension is implicit). - data_tensor = tf.keras.Input(shape=(sequence_length, width)) - output_tensor = test_layer(data_tensor) - # The default output of a transformer layer should be the same as the input. - self.assertEqual(data_tensor.shape.as_list(), output_tensor.shape.as_list()) - - def test_layer_creation_with_mask(self, transformer_cls): - test_layer = transformer_cls( - num_attention_heads=10, - intermediate_size=2048, - intermediate_activation='relu') - sequence_length = 21 - width = 80 - # Create a 3-dimensional input (the first dimension is implicit). - data_tensor = tf.keras.Input(shape=(sequence_length, width)) - # Create a 2-dimensional input (the first dimension is implicit). - mask_tensor = tf.keras.Input(shape=(sequence_length, sequence_length)) - output_tensor = test_layer([data_tensor, mask_tensor]) - # The default output of a transformer layer should be the same as the input. - self.assertEqual(data_tensor.shape.as_list(), output_tensor.shape.as_list()) - - def test_layer_creation_with_incorrect_mask_fails(self, transformer_cls): - test_layer = transformer_cls( - num_attention_heads=10, - intermediate_size=2048, - intermediate_activation='relu') - sequence_length = 21 - width = 80 - # Create a 3-dimensional input (the first dimension is implicit). - data_tensor = tf.keras.Input(shape=(sequence_length, width)) - # Create a 2-dimensional input (the first dimension is implicit). - mask_tensor = tf.keras.Input(shape=(sequence_length, sequence_length - 3)) - with self.assertRaisesRegex(ValueError, 'When passing a mask tensor.*'): - _ = test_layer([data_tensor, mask_tensor]) - - def test_layer_invocation(self, transformer_cls): - test_layer = transformer_cls( - num_attention_heads=10, - intermediate_size=2048, - intermediate_activation='relu') - sequence_length = 21 - width = 80 - # Create a 3-dimensional input (the first dimension is implicit). - data_tensor = tf.keras.Input(shape=(sequence_length, width)) - output_tensor = test_layer(data_tensor) - - # Create a model from the test layer. - model = tf.keras.Model(data_tensor, output_tensor) - - # Invoke the model on test data. We can't validate the output data itself - # (the NN is too complex) but this will rule out structural runtime errors. - batch_size = 6 - input_data = 10 * np.random.random_sample( - (batch_size, sequence_length, width)) - _ = model.predict(input_data) - - def test_layer_invocation_with_mask(self, transformer_cls): - test_layer = transformer_cls( - num_attention_heads=10, - intermediate_size=2048, - intermediate_activation='relu') - sequence_length = 21 - width = 80 - # Create a 3-dimensional input (the first dimension is implicit). - data_tensor = tf.keras.Input(shape=(sequence_length, width)) - # Create a 2-dimensional input (the first dimension is implicit). 
- mask_tensor = tf.keras.Input(shape=(sequence_length, sequence_length)) - output_tensor = test_layer([data_tensor, mask_tensor]) - - # Create a model from the test layer. - model = tf.keras.Model([data_tensor, mask_tensor], output_tensor) - - # Invoke the model on test data. We can't validate the output data itself - # (the NN is too complex) but this will rule out structural runtime errors. - batch_size = 6 - input_data = 10 * np.random.random_sample( - (batch_size, sequence_length, width)) - # The attention mask should be of shape (batch, from_seq_len, to_seq_len), - # which here is (batch, sequence_length, sequence_length) - mask_data = np.random.randint( - 2, size=(batch_size, sequence_length, sequence_length)) - _ = model.predict([input_data, mask_data]) - - def test_layer_output_range(self, transformer_cls): - test_layer = transformer_cls( - num_attention_heads=10, - intermediate_size=2048, - intermediate_activation='relu') - sequence_length = 21 - width = 80 - - batch_size = 6 - input_data = 10 * np.random.random_sample( - (batch_size, sequence_length, width)) - mask_data = np.random.randint( - 2, size=(batch_size, sequence_length, sequence_length)) - output_tensor = test_layer([input_data, mask_data]) - - # The layer only attends to the first token and outputs the first token - # embeeding. - new_layer = transformer_cls( - num_attention_heads=10, - intermediate_size=2048, - intermediate_activation='relu', - output_range=1) - _ = new_layer([input_data, mask_data]) - new_layer.set_weights(test_layer.get_weights()) - new_output_tensor = new_layer([input_data, mask_data]) - self.assertAllClose(new_output_tensor, output_tensor[:, 0:1, :]) - - def test_layer_invocation_with_float16_dtype(self, transformer_cls): - tf.keras.mixed_precision.experimental.set_policy('mixed_float16') - test_layer = transformer_cls( - num_attention_heads=10, - intermediate_size=2048, - intermediate_activation='relu') - sequence_length = 21 - width = 80 - # Create a 3-dimensional input (the first dimension is implicit). - data_tensor = tf.keras.Input(shape=(sequence_length, width)) - # Create a 2-dimensional input (the first dimension is implicit). - mask_tensor = tf.keras.Input(shape=(sequence_length, sequence_length)) - output_tensor = test_layer([data_tensor, mask_tensor]) - - # Create a model from the test layer. - model = tf.keras.Model([data_tensor, mask_tensor], output_tensor) - - # Invoke the model on test data. We can't validate the output data itself - # (the NN is too complex) but this will rule out structural runtime errors. - batch_size = 6 - input_data = (10 * np.random.random_sample( - (batch_size, sequence_length, width))) - # The attention mask should be of shape (batch, from_seq_len, to_seq_len), - # which here is (batch, sequence_length, sequence_length) - mask_data = np.random.randint( - 2, size=(batch_size, sequence_length, sequence_length)) - _ = model.predict([input_data, mask_data]) - - def test_transform_with_initializer(self, transformer_cls): - test_layer = transformer_cls( - num_attention_heads=10, - intermediate_size=2048, - intermediate_activation='relu', - kernel_initializer=tf.keras.initializers.TruncatedNormal(stddev=0.02)) - sequence_length = 21 - width = 80 - # Create a 3-dimensional input (the first dimension is implicit). - data_tensor = tf.keras.Input(shape=(sequence_length, width)) - output = test_layer(data_tensor) - # The default output of a transformer layer should be the same as the input. 
- self.assertEqual(data_tensor.shape.as_list(), output.shape.as_list()) - - def test_dynamic_layer_sequence(self, transformer_cls): - test_layer = transformer_cls( - num_attention_heads=10, - intermediate_size=2048, - intermediate_activation='relu', - kernel_initializer=tf.keras.initializers.TruncatedNormal(stddev=0.02)) - # Create a 3-dimensional input (the first dimension is implicit). - width = 30 - input_tensor = tf.keras.Input(shape=(None, width)) - output_tensor = test_layer(input_tensor) - model = tf.keras.Model(input_tensor, output_tensor) - - input_length = 17 - input_data = np.ones((1, input_length, width)) - output_data = model.predict(input_data) - - self.assertAllEqual([1, input_length, width], output_data.shape) - - -def _create_cache(batch_size, init_decode_length, num_heads, head_size): - return { - 'key': - tf.zeros([batch_size, init_decode_length, num_heads, head_size], - dtype=tf.float32), - 'value': - tf.zeros([batch_size, init_decode_length, num_heads, head_size], - dtype=tf.float32) - } - - -@keras_parameterized.run_all_keras_modes -class TransformerDecoderLayerTest(keras_parameterized.TestCase): - - def test_decoder_block_with_cache(self): - num_attention_heads = 2 - hidden_size = 16 - decoder_block = transformer.TransformerDecoderLayer( - num_attention_heads=num_attention_heads, - intermediate_size=32, - intermediate_activation='relu', - dropout_rate=0.1, - attention_dropout_rate=0.1) - # Forward path. - dummy_tensor = tf.zeros([2, 4, 16], dtype=tf.float32) - dummy_mask = tf.zeros([2, 4, 4], dtype=tf.float32) - inputs = [dummy_tensor, dummy_tensor, dummy_mask, dummy_mask] - cache = _create_cache(2, 0, num_attention_heads, - hidden_size // num_attention_heads) - output, cache = decoder_block(inputs, cache) - self.assertEqual(output.shape, (2, 4, hidden_size)) - self.assertEqual(cache['value'].shape, (2, 4, 2, 8)) - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/NCTCMumbai/NCTC/models/official/recommendation/popen_helper.py b/spaces/NCTCMumbai/NCTC/models/official/recommendation/popen_helper.py deleted file mode 100644 index dcdca4ced8e0b45294023c4675d16efd875694b7..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/recommendation/popen_helper.py +++ /dev/null @@ -1,64 +0,0 @@ -# Copyright 2018 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-# ============================================================================== -"""Helper file for running the async data generation process in OSS.""" - -import contextlib -import multiprocessing -import multiprocessing.pool - - -def get_forkpool(num_workers, init_worker=None, closing=True): - pool = multiprocessing.Pool(processes=num_workers, initializer=init_worker) - return contextlib.closing(pool) if closing else pool - - -def get_threadpool(num_workers, init_worker=None, closing=True): - pool = multiprocessing.pool.ThreadPool(processes=num_workers, - initializer=init_worker) - return contextlib.closing(pool) if closing else pool - - -class FauxPool(object): - """Mimic a pool using for loops. - - This class is used in place of proper pools when true determinism is desired - for testing or debugging. - """ - def __init__(self, *args, **kwargs): - pass - - def map(self, func, iterable, chunksize=None): - return [func(i) for i in iterable] - - def imap(self, func, iterable, chunksize=1): - for i in iterable: - yield func(i) - - def close(self): - pass - - def terminate(self): - pass - - def join(self): - pass - -def get_fauxpool(num_workers, init_worker=None, closing=True): - pool = FauxPool(processes=num_workers, initializer=init_worker) - return contextlib.closing(pool) if closing else pool - - -def worker_job(): - return "worker" diff --git a/spaces/Nekomaru180/rvc-model/infer_pack/transforms.py b/spaces/Nekomaru180/rvc-model/infer_pack/transforms.py deleted file mode 100644 index a11f799e023864ff7082c1f49c0cc18351a13b47..0000000000000000000000000000000000000000 --- a/spaces/Nekomaru180/rvc-model/infer_pack/transforms.py +++ /dev/null @@ -1,209 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = {"tails": tails, "tail_bound": tail_bound} - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum(inputs[..., None] >= bin_locations, dim=-1) - 1 - - -def unconstrained_rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails="linear", - tail_bound=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == "linear": - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - 
unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError("{} tails are not implemented.".format(tails)) - - ( - outputs[inside_interval_mask], - logabsdet[inside_interval_mask], - ) = rational_quadratic_spline( - inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, - right=tail_bound, - bottom=-tail_bound, - top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - ) - - return outputs, logabsdet - - -def rational_quadratic_spline( - inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0.0, - right=1.0, - bottom=0.0, - top=1.0, - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE, -): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError("Input to a transform is not within its domain") - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError("Minimal bin width too large for the number of bins") - if min_bin_height * num_bins > 1.0: - raise ValueError("Minimal bin height too large for the number of bins") - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode="constant", value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode="constant", value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) + input_heights * (input_delta - input_derivatives) - b = input_heights * input_derivatives - (inputs - input_cumheights) * ( - input_derivatives + input_derivatives_plus_one - 2 * input_delta - ) - c = -input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * 
input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * ( - input_delta * theta.pow(2) + input_derivatives * theta_one_minus_theta - ) - denominator = input_delta + ( - (input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta - ) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * ( - input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2) - ) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/NewBing520997/bingo/Dockerfile b/spaces/NewBing520997/bingo/Dockerfile deleted file mode 100644 index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000 --- a/spaces/NewBing520997/bingo/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM weaigc/bingo:latest - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -CMD npm start diff --git a/spaces/NimaKL/spamd/app.py b/spaces/NimaKL/spamd/app.py deleted file mode 100644 index 07b5707dcb2319a63fc155635674d0438f5d04da..0000000000000000000000000000000000000000 --- a/spaces/NimaKL/spamd/app.py +++ /dev/null @@ -1,62 +0,0 @@ -import streamlit as st -from transformers import pipeline -from textblob import TextBlob -from transformers import BertForSequenceClassification, AdamW, BertConfig -st.set_page_config(layout='wide', initial_sidebar_state='expanded') -col1, col2= st.columns(2) -with col2: - text = st.text_input("Enter the text you'd like to analyze for spam.") - aButton = st.button('Analyze') -with col1: - st.title("Spamd: Turkish Spam Detector") - st.markdown("Message spam detection tool for Turkish language. Due the small size of the dataset, I decided to go with transformers technology Google BERT. Using the Turkish pre-trained model BERTurk, I imporved the accuracy of the tool by 18 percent compared to the previous model which used fastText.") - st.markdown("Original file is located at") - st.markdown("https://colab.research.google.com/drive/1QuorqAuLsmomesZHsaQHEZgzbPEM8YTH") - -import torch -import numpy as np -from transformers import AutoTokenizer -tokenizer = AutoTokenizer.from_pretrained("dbmdz/bert-base-turkish-uncased") -from transformers import AutoModel -model = BertForSequenceClassification.from_pretrained("NimaKL/spamd_model") -token_id = [] -attention_masks = [] -def preprocessing(input_text, tokenizer): - ''' - Returns with the following fields: - - input_ids: list of token ids - - token_type_ids: list of token type ids - - attention_mask: list of indices (0,1) specifying which tokens should considered by the model (return_attention_mask = True). 
- ''' - return tokenizer.encode_plus( - input_text, - add_special_tokens = True, - max_length = 32, - pad_to_max_length = True, - return_attention_mask = True, - return_tensors = 'pt' - ) -device = 'cpu' - -def predict(new_sentence): - # We need Token IDs and Attention Mask for inference on the new sentence - test_ids = [] - test_attention_mask = [] - # Apply the tokenizer - encoding = preprocessing(new_sentence, tokenizer) - # Extract IDs and Attention Mask - test_ids.append(encoding['input_ids']) - test_attention_mask.append(encoding['attention_mask']) - test_ids = torch.cat(test_ids, dim = 0) - test_attention_mask = torch.cat(test_attention_mask, dim = 0) - # Forward pass, calculate logit predictions - with torch.no_grad(): - output = model(test_ids.to(device), token_type_ids = None, attention_mask = test_attention_mask.to(device)) - prediction = 'Spam' if np.argmax(output.logits.cpu().numpy()).flatten().item() == 1 else 'Normal' - pred = 'Predicted Class: '+ prediction - return pred - -if text or aButton: - with col2: - with st.spinner('Wait for it...'): - st.success(predict(text)) \ No newline at end of file diff --git a/spaces/Nunchakuka/FrenchAnonymizer/speaker_encoder/data_objects/speaker_verification_dataset.py b/spaces/Nunchakuka/FrenchAnonymizer/speaker_encoder/data_objects/speaker_verification_dataset.py deleted file mode 100644 index cecd8ed8ac100b80d5087fa47f22f92c84fea032..0000000000000000000000000000000000000000 --- a/spaces/Nunchakuka/FrenchAnonymizer/speaker_encoder/data_objects/speaker_verification_dataset.py +++ /dev/null @@ -1,56 +0,0 @@ -from speaker_encoder.data_objects.random_cycler import RandomCycler -from speaker_encoder.data_objects.speaker_batch import SpeakerBatch -from speaker_encoder.data_objects.speaker import Speaker -from speaker_encoder.params_data import partials_n_frames -from torch.utils.data import Dataset, DataLoader -from pathlib import Path - -# TODO: improve with a pool of speakers for data efficiency - -class SpeakerVerificationDataset(Dataset): - def __init__(self, datasets_root: Path): - self.root = datasets_root - speaker_dirs = [f for f in self.root.glob("*") if f.is_dir()] - if len(speaker_dirs) == 0: - raise Exception("No speakers found. 
Make sure you are pointing to the directory " - "containing all preprocessed speaker directories.") - self.speakers = [Speaker(speaker_dir) for speaker_dir in speaker_dirs] - self.speaker_cycler = RandomCycler(self.speakers) - - def __len__(self): - return int(1e10) - - def __getitem__(self, index): - return next(self.speaker_cycler) - - def get_logs(self): - log_string = "" - for log_fpath in self.root.glob("*.txt"): - with log_fpath.open("r") as log_file: - log_string += "".join(log_file.readlines()) - return log_string - - -class SpeakerVerificationDataLoader(DataLoader): - def __init__(self, dataset, speakers_per_batch, utterances_per_speaker, sampler=None, - batch_sampler=None, num_workers=0, pin_memory=False, timeout=0, - worker_init_fn=None): - self.utterances_per_speaker = utterances_per_speaker - - super().__init__( - dataset=dataset, - batch_size=speakers_per_batch, - shuffle=False, - sampler=sampler, - batch_sampler=batch_sampler, - num_workers=num_workers, - collate_fn=self.collate, - pin_memory=pin_memory, - drop_last=False, - timeout=timeout, - worker_init_fn=worker_init_fn - ) - - def collate(self, speakers): - return SpeakerBatch(speakers, self.utterances_per_speaker, partials_n_frames) - \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/vggblock.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/vggblock.py deleted file mode 100644 index ee5ee19a34816c7350c21fba7c4907fec8ca7a61..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/vggblock.py +++ /dev/null @@ -1,116 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from __future__ import absolute_import, division, print_function, unicode_literals - -from collections.abc import Iterable -from itertools import repeat - -import torch -import torch.nn as nn - - -def _pair(v): - if isinstance(v, Iterable): - assert len(v) == 2, "len(v) != 2" - return v - return tuple(repeat(v, 2)) - - -def infer_conv_output_dim(conv_op, input_dim, sample_inchannel): - sample_seq_len = 200 - sample_bsz = 10 - x = torch.randn(sample_bsz, sample_inchannel, sample_seq_len, input_dim) - # N x C x H x W - # N: sample_bsz, C: sample_inchannel, H: sample_seq_len, W: input_dim - x = conv_op(x) - # N x C x H x W - x = x.transpose(1, 2) - # N x H x C x W - bsz, seq = x.size()[:2] - per_channel_dim = x.size()[3] - # bsz: N, seq: H, CxW the rest - return x.contiguous().view(bsz, seq, -1).size(-1), per_channel_dim - - -class VGGBlock(torch.nn.Module): - """ - VGG motibated cnn module https://arxiv.org/pdf/1409.1556.pdf - - Args: - in_channels: (int) number of input channels (typically 1) - out_channels: (int) number of output channels - conv_kernel_size: convolution channels - pooling_kernel_size: the size of the pooling window to take a max over - num_conv_layers: (int) number of convolution layers - input_dim: (int) input dimension - conv_stride: the stride of the convolving kernel. - Can be a single number or a tuple (sH, sW) Default: 1 - padding: implicit paddings on both sides of the input. - Can be a single number or a tuple (padH, padW). Default: None - layer_norm: (bool) if layer norm is going to be applied. Default: False - - Shape: - Input: BxCxTxfeat, i.e. (batch_size, input_size, timesteps, features) - Output: BxCxTxfeat, i.e. 
(batch_size, input_size, timesteps, features) - """ - - def __init__( - self, - in_channels, - out_channels, - conv_kernel_size, - pooling_kernel_size, - num_conv_layers, - input_dim, - conv_stride=1, - padding=None, - layer_norm=False, - ): - assert ( - input_dim is not None - ), "Need input_dim for LayerNorm and infer_conv_output_dim" - super(VGGBlock, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.conv_kernel_size = _pair(conv_kernel_size) - self.pooling_kernel_size = _pair(pooling_kernel_size) - self.num_conv_layers = num_conv_layers - self.padding = ( - tuple(e // 2 for e in self.conv_kernel_size) - if padding is None - else _pair(padding) - ) - self.conv_stride = _pair(conv_stride) - - self.layers = nn.ModuleList() - for layer in range(num_conv_layers): - conv_op = nn.Conv2d( - in_channels if layer == 0 else out_channels, - out_channels, - self.conv_kernel_size, - stride=self.conv_stride, - padding=self.padding, - ) - self.layers.append(conv_op) - if layer_norm: - conv_output_dim, per_channel_dim = infer_conv_output_dim( - conv_op, input_dim, in_channels if layer == 0 else out_channels - ) - self.layers.append(nn.LayerNorm(per_channel_dim)) - input_dim = per_channel_dim - self.layers.append(nn.ReLU()) - - if self.pooling_kernel_size is not None: - pool_op = nn.MaxPool2d(kernel_size=self.pooling_kernel_size, ceil_mode=True) - self.layers.append(pool_op) - self.total_output_dim, self.output_dim = infer_conv_output_dim( - pool_op, input_dim, out_channels - ) - - def forward(self, x): - for i, _ in enumerate(self.layers): - x = self.layers[i](x) - return x diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/multilingual/data_scripts/download_lotus.sh b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/multilingual/data_scripts/download_lotus.sh deleted file mode 100644 index c08c701314a8e575637deff78381ab02c2ef6728..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/multilingual/data_scripts/download_lotus.sh +++ /dev/null @@ -1,46 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - - -if [ -z $WORKDIR_ROOT ] ; -then - echo "please specify your working directory root in environment variable WORKDIR_ROOT. Exitting..." 
- exit -fi - - -SRCDIR=$WORKDIR_ROOT/indic_languages_corpus -DESTDIR=${WORKDIR_ROOT}/ML50/raw/ -mkdir -p $SRCDIR -mkdir -p $DESTDIR - -cd $SRCDIR -wget http://lotus.kuee.kyoto-u.ac.jp/WAT/indic-multilingual/indic_languages_corpus.tar.gz -tar -xvzf indic_languages_corpus.tar.gz - -SRC_EXTRACT_DIR=$SRCDIR/indic_languages_corpus/bilingual - -cp $SRC_EXTRACT_DIR/ml-en/train.ml $DESTDIR/train.ml_IN-en_XX.ml_IN -cp $SRC_EXTRACT_DIR/ml-en/train.en $DESTDIR/train.ml_IN-en_XX.en_XX -cp $SRC_EXTRACT_DIR/ml-en/dev.ml $DESTDIR/valid.ml_IN-en_XX.ml_IN -cp $SRC_EXTRACT_DIR/ml-en/dev.en $DESTDIR/valid.ml_IN-en_XX.en_XX -cp $SRC_EXTRACT_DIR/ml-en/test.ml $DESTDIR/test.ml_IN-en_XX.ml_IN -cp $SRC_EXTRACT_DIR/ml-en/test.en $DESTDIR/test.ml_IN-en_XX.en_XX - -cp $SRC_EXTRACT_DIR/ur-en/train.ur $DESTDIR/train.ur_PK-en_XX.ur_PK -cp $SRC_EXTRACT_DIR/ur-en/train.en $DESTDIR/train.ur_PK-en_XX.en_XX -cp $SRC_EXTRACT_DIR/ur-en/dev.ur $DESTDIR/valid.ur_PK-en_XX.ur_PK -cp $SRC_EXTRACT_DIR/ur-en/dev.en $DESTDIR/valid.ur_PK-en_XX.en_XX -cp $SRC_EXTRACT_DIR/ur-en/test.ur $DESTDIR/test.ur_PK-en_XX.ur_PK -cp $SRC_EXTRACT_DIR/ur-en/test.en $DESTDIR/test.ur_PK-en_XX.en_XX - -cp $SRC_EXTRACT_DIR/te-en/train.te $DESTDIR/train.te_IN-en_XX.te_IN -cp $SRC_EXTRACT_DIR/te-en/train.en $DESTDIR/train.te_IN-en_XX.en_XX -cp $SRC_EXTRACT_DIR/te-en/dev.te $DESTDIR/valid.te_IN-en_XX.te_IN -cp $SRC_EXTRACT_DIR/te-en/dev.en $DESTDIR/valid.te_IN-en_XX.en_XX -cp $SRC_EXTRACT_DIR/te-en/test.te $DESTDIR/test.te_IN-en_XX.te_IN -cp $SRC_EXTRACT_DIR/te-en/test.en $DESTDIR/test.te_IN-en_XX.en_XX diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/quantization/scalar/modules/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/quantization/scalar/modules/__init__.py deleted file mode 100644 index 8031d9cdb23f2bc72596f8bc9cfa4965f96e3e6c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/quantization/scalar/modules/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .qact import ActivationQuantizer # NOQA -from .qconv import IntConv2d # NOQA -from .qemb import IntEmbedding # NOQA -from .qlinear import IntLinear # NOQA diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/utils.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/utils.py deleted file mode 100644 index f61a8d38d456edf7605c31a87d09413e778658f3..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/utils.py +++ /dev/null @@ -1,829 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import contextlib -import copy -import importlib -import logging -import os -import sys -import warnings -from itertools import accumulate -from typing import Callable, Dict, List, Optional, TYPE_CHECKING - -import torch -import torch.nn.functional as F -from torch import Tensor -import collections - -if TYPE_CHECKING: - from fairseq.modules.multihead_attention import MultiheadAttention - -try: - from amp_C import multi_tensor_l2norm - - multi_tensor_l2norm_available = True -except ImportError: - multi_tensor_l2norm_available = False - -try: - import torch_xla.core.xla_model as xm -except ImportError: - xm = None - - -logger = logging.getLogger(__name__) - - -MANIFOLD_PATH_SEP = "|" - - -class FileContentsAction(argparse.Action): - def __init__(self, option_strings, dest, nargs=None, **kwargs): - if nargs is not None: - raise ValueError("nargs not allowed") - super(FileContentsAction, self).__init__(option_strings, dest, **kwargs) - - def __call__(self, parser, namespace, values, option_string=None): - from fairseq.file_io import PathManager - - if PathManager.isfile(values): - with PathManager.open(values) as f: - argument = f.read().strip() - else: - argument = values - setattr(namespace, self.dest, argument) - - -def split_paths(paths: str, separator=os.pathsep) -> List[str]: - return ( - paths.split(separator) if "://" not in paths else paths.split(MANIFOLD_PATH_SEP) - ) - - -def load_ensemble_for_inference(filenames, task, model_arg_overrides=None): - from fairseq import checkpoint_utils - - deprecation_warning( - "utils.load_ensemble_for_inference is deprecated. " - "Please use checkpoint_utils.load_model_ensemble instead." - ) - return checkpoint_utils.load_model_ensemble( - filenames, arg_overrides=model_arg_overrides, task=task - ) - - -def apply_to_sample(f, sample): - if hasattr(sample, "__len__") and len(sample) == 0: - return {} - - def _apply(x): - if torch.is_tensor(x): - return f(x) - elif isinstance(x, collections.OrderedDict): - # OrderedDict has attributes that needs to be preserved - od = collections.OrderedDict((key, _apply(value)) for key, value in x.items()) - od.__dict__ = x.__dict__ - return od - elif isinstance(x, dict): - return {key: _apply(value) for key, value in x.items()} - elif isinstance(x, list): - return [_apply(x) for x in x] - elif isinstance(x, tuple): - return tuple(_apply(x) for x in x) - elif isinstance(x, set): - return {_apply(x) for x in x} - else: - return x - - return _apply(sample) - - -def move_to_cuda(sample, device=None): - device = device or torch.cuda.current_device() - - def _move_to_cuda(tensor): - # non_blocking is ignored if tensor is not pinned, so we can always set - # to True (see github.com/PyTorchLightning/pytorch-lightning/issues/620) - return tensor.to(device=device, non_blocking=True) - - return apply_to_sample(_move_to_cuda, sample) - - -def move_to_cpu(sample): - def _move_to_cpu(tensor): - # PyTorch has poor support for half tensors (float16) on CPU. - # Move any such tensors to float32. 
- if tensor.dtype in {torch.bfloat16, torch.float16}: - tensor = tensor.to(dtype=torch.float32) - return tensor.cpu() - - return apply_to_sample(_move_to_cpu, sample) - - -def move_to_tpu(sample): - - import torch_xla.core.xla_model as xm - - device = xm.xla_device() - - def _move_to_tpu(tensor): - return tensor.to(device) - - return apply_to_sample(_move_to_tpu, sample) - - -def get_incremental_state( - module: "MultiheadAttention", - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]], - key: str, -) -> Optional[Dict[str, Optional[Tensor]]]: - """Helper for getting incremental state for an nn.Module.""" - return module.get_incremental_state(incremental_state, key) - - -def set_incremental_state( - module: "MultiheadAttention", - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]], - key: str, - value: Dict[str, Optional[Tensor]], -) -> Optional[Dict[str, Dict[str, Optional[Tensor]]]]: - """Helper for setting incremental state for an nn.Module.""" - if incremental_state is not None: - result = module.set_incremental_state(incremental_state, key, value) - if result is not None: - incremental_state = result - return incremental_state - - -def load_align_dict(replace_unk): - if replace_unk is None: - align_dict = None - elif isinstance(replace_unk, str) and len(replace_unk) > 0: - # Load alignment dictionary for unknown word replacement if it was passed as an argument. - align_dict = {} - with open(replace_unk, "r") as f: - for line in f: - cols = line.split() - align_dict[cols[0]] = cols[1] - else: - # No alignment dictionary provided but we still want to perform unknown word replacement by copying the - # original source word. - align_dict = {} - return align_dict - - -def print_embed_overlap(embed_dict, vocab_dict): - embed_keys = set(embed_dict.keys()) - vocab_keys = set(vocab_dict.symbols) - overlap = len(embed_keys & vocab_keys) - logger.info("found {}/{} types in embedding file".format(overlap, len(vocab_dict))) - - -def parse_embedding(embed_path): - """Parse embedding text file into a dictionary of word and embedding tensors. - - The first line can have vocabulary size and dimension. The following lines - should contain word and embedding separated by spaces. - - Example: - 2 5 - the -0.0230 -0.0264 0.0287 0.0171 0.1403 - at -0.0395 -0.1286 0.0275 0.0254 -0.0932 - """ - embed_dict = {} - with open(embed_path) as f_embed: - next(f_embed) # skip header - for line in f_embed: - pieces = line.rstrip().split(" ") - embed_dict[pieces[0]] = torch.Tensor( - [float(weight) for weight in pieces[1:]] - ) - return embed_dict - - -def load_embedding(embed_dict, vocab, embedding): - for idx in range(len(vocab)): - token = vocab[idx] - if token in embed_dict: - embedding.weight.data[idx] = embed_dict[token] - return embedding - - -def replace_unk(hypo_str, src_str, alignment, align_dict, unk): - from fairseq import tokenizer - - # Tokens are strings here - hypo_tokens = tokenizer.tokenize_line(hypo_str) - # TODO: Very rare cases where the replacement is '' should be handled gracefully - src_tokens = tokenizer.tokenize_line(src_str) + [""] - for i, ht in enumerate(hypo_tokens): - if ht == unk: - src_token = src_tokens[alignment[i]] - # Either take the corresponding value in the aligned dictionary or just copy the original value. 
- hypo_tokens[i] = align_dict.get(src_token, src_token) - return " ".join(hypo_tokens) - - -def post_process_prediction( - hypo_tokens, - src_str, - alignment, - align_dict, - tgt_dict, - remove_bpe=None, - extra_symbols_to_ignore=None, -): - hypo_str = tgt_dict.string( - hypo_tokens, remove_bpe, extra_symbols_to_ignore=extra_symbols_to_ignore - ) - if align_dict is not None: - hypo_str = replace_unk( - hypo_str, src_str, alignment, align_dict, tgt_dict.unk_string() - ) - if align_dict is not None or remove_bpe is not None: - # Convert back to tokens for evaluating with unk replacement or without BPE - # Note that the dictionary can be modified inside the method. - hypo_tokens = tgt_dict.encode_line(hypo_str, add_if_not_exist=True) - return hypo_tokens, hypo_str, alignment - - -def make_positions(tensor, padding_idx: int, onnx_trace: bool = False): - """Replace non-padding symbols with their position numbers. - - Position numbers begin at padding_idx+1. Padding symbols are ignored. - """ - # The series of casts and type-conversions here are carefully - # balanced to both work with ONNX export and XLA. In particular XLA - # prefers ints, cumsum defaults to output longs, and ONNX doesn't know - # how to handle the dtype kwarg in cumsum. - mask = tensor.ne(padding_idx).int() - return (torch.cumsum(mask, dim=1).type_as(mask) * mask).long() + padding_idx - - -def strip_pad(tensor, pad): - return tensor[tensor.ne(pad)] - - -def buffered_arange(max): - if not hasattr(buffered_arange, "buf"): - buffered_arange.buf = torch.LongTensor() - if max > buffered_arange.buf.numel(): - buffered_arange.buf.resize_(max) - torch.arange(max, out=buffered_arange.buf) - return buffered_arange.buf[:max] - - -def convert_padding_direction( - src_tokens, padding_idx, right_to_left: bool = False, left_to_right: bool = False -): - assert right_to_left ^ left_to_right - pad_mask = src_tokens.eq(padding_idx) - if not pad_mask.any(): - # no padding, return early - return src_tokens - if left_to_right and not pad_mask[:, 0].any(): - # already right padded - return src_tokens - if right_to_left and not pad_mask[:, -1].any(): - # already left padded - return src_tokens - max_len = src_tokens.size(1) - buffered = torch.empty(0).long() - if max_len > 0: - torch.arange(max_len, out=buffered) - range = buffered.type_as(src_tokens).expand_as(src_tokens) - num_pads = pad_mask.long().sum(dim=1, keepdim=True) - if right_to_left: - index = torch.remainder(range - num_pads, max_len) - else: - index = torch.remainder(range + num_pads, max_len) - return src_tokens.gather(1, index) - - -def item(tensor): - # tpu-comment: making this a no-op for xla devices. 
- if torch.is_tensor(tensor) and tensor.device.type == "xla": - return tensor.detach() - if hasattr(tensor, "item"): - return tensor.item() - if hasattr(tensor, "__getitem__"): - return tensor[0] - return tensor - - -def multi_tensor_total_norm(grads, chunk_size=2048 * 32) -> torch.Tensor: - per_device_grads = {} - norms = [] - for grad in grads: - device = grad.device - cur_device_grads = per_device_grads.get(device) - if cur_device_grads is None: - cur_device_grads = [] - per_device_grads[device] = cur_device_grads - cur_device_grads.append(grad) - for device in per_device_grads.keys(): - cur_device_grads = per_device_grads[device] - if device.type == "cuda": - # TODO(msb) return has_inf - has_inf = torch.zeros((1, 1), dtype=torch.int, device=device) - with torch.cuda.device(device): - norm = multi_tensor_l2norm( - chunk_size, has_inf, [cur_device_grads], False - ) - norms.append(norm[0].to(torch.cuda.current_device())) - else: - norms += [torch.norm(g, p=2, dtype=torch.float32) for g in cur_device_grads] - total_norm = torch.norm(torch.stack(norms)) - return total_norm - - -@torch.no_grad() -def clip_grad_norm_(params, max_norm, aggregate_norm_fn=None) -> torch.Tensor: - def grad_exists(p): - return p is not None and getattr(p, "grad", None) is not None - - if isinstance(params, torch.Tensor): - params = [params] - params = list(params) - grads = [ - p.grad.detach() for p in params if grad_exists(p) and not hasattr(p, "expert") - ] - expert_grads = [ - p.grad.detach() for p in params if grad_exists(p) and hasattr(p, "expert") - ] - - if len(grads) == 0: - if len(params) > 0: - return params[0].new_tensor(0.0) - else: - return torch.tensor(0.0) - - if len(grads) == 1: - total_norm = torch.norm(grads[0], p=2, dtype=torch.float32) - else: - if multi_tensor_l2norm_available: - total_norm = multi_tensor_total_norm(grads) - else: - if torch.cuda.is_available(): - warnings.warn( - "amp_C fused kernels unavailable, disabling multi_tensor_l2norm; " - "you may get better performance by installing NVIDIA's apex library" - ) - device = torch.cuda.current_device() - elif grads[0].device.type == "xla": - device = grads[0].device - else: - device = torch.device("cpu") - total_norm = torch.norm( - torch.stack( - [torch.norm(g, p=2, dtype=torch.float32).to(device) for g in grads] - ) - ) - - if aggregate_norm_fn is not None: - total_norm = aggregate_norm_fn(total_norm) - - if max_norm > 0: - max_norm = float(max_norm) - clip_coef = (max_norm / (total_norm + 1e-6)).clamp_(max=1) - for g in grads + expert_grads: - g.mul_(clip_coef) - return total_norm - - -def fill_with_neg_inf(t): - """FP16-compatible function that fills a tensor with -inf.""" - return t.float().fill_(float("-inf")).type_as(t) - - -def _match_types(arg1, arg2): - """Convert the numerical argument to the same type as the other argument""" - - def upgrade(arg_number, arg_structure): - if isinstance(arg_structure, tuple): - return tuple([arg_number] * len(arg_structure)) - elif isinstance(arg_structure, dict): - arg = copy.deepcopy(arg_structure) - for k in arg: - arg[k] = upgrade(arg_number, arg_structure[k]) - return arg - else: - return arg_number - - if isinstance(arg1, float) or isinstance(arg1, int): - return upgrade(arg1, arg2), arg2 - elif isinstance(arg2, float) or isinstance(arg2, int): - return arg1, upgrade(arg2, arg1) - - return arg1, arg2 - - -def resolve_max_positions(*args): - """Resolve max position constraints from multiple sources.""" - - def map_value_update(d1, d2): - updated_value = copy.deepcopy(d1) - for key in d2: - 
if key not in updated_value: - updated_value[key] = d2[key] - else: - updated_value[key] = min(d1[key], d2[key]) - return updated_value - - def nullsafe_min(l): - minim = None - for item in l: - if minim is None: - minim = item - elif item is not None and item < minim: - minim = item - return minim - - max_positions = None - for arg in args: - if max_positions is None: - max_positions = arg - elif arg is not None: - max_positions, arg = _match_types(max_positions, arg) - if isinstance(arg, float) or isinstance(arg, int): - max_positions = min(max_positions, arg) - elif isinstance(arg, dict): - max_positions = map_value_update(max_positions, arg) - else: - max_positions = tuple(map(nullsafe_min, zip(max_positions, arg))) - - return max_positions - - -def import_user_module(args): - module_path = getattr(args, "user_dir", None) - if module_path is not None: - module_path = os.path.abspath(args.user_dir) - if not os.path.exists(module_path) and not os.path.isfile( - os.path.dirname(module_path) - ): - fairseq_rel_path = os.path.join(os.path.dirname(__file__), args.user_dir) - if os.path.exists(fairseq_rel_path): - module_path = fairseq_rel_path - else: - fairseq_rel_path = os.path.join( - os.path.dirname(__file__), "..", args.user_dir - ) - if os.path.exists(fairseq_rel_path): - module_path = fairseq_rel_path - else: - raise FileNotFoundError(module_path) - - # ensure that user modules are only imported once - import_user_module.memo = getattr(import_user_module, "memo", set()) - if module_path not in import_user_module.memo: - import_user_module.memo.add(module_path) - - module_parent, module_name = os.path.split(module_path) - if module_name not in sys.modules: - sys.path.insert(0, module_parent) - importlib.import_module(module_name) - - tasks_path = os.path.join(module_path, "tasks") - if os.path.exists(tasks_path): - from fairseq.tasks import import_tasks - - import_tasks(tasks_path, f"{module_name}.tasks") - - models_path = os.path.join(module_path, "models") - if os.path.exists(models_path): - from fairseq.models import import_models - - import_models(models_path, f"{module_name}.models") - else: - raise ImportError( - "Failed to import --user-dir={} because the corresponding module name " - "({}) is not globally unique. 
Please rename the directory to " - "something unique and try again.".format(module_path, module_name) - ) - - -def softmax(x, dim: int, onnx_trace: bool = False): - if onnx_trace: - return F.softmax(x.float(), dim=dim) - else: - return F.softmax(x, dim=dim, dtype=torch.float32) - - -def log_softmax(x, dim: int, onnx_trace: bool = False): - if onnx_trace: - return F.log_softmax(x.float(), dim=dim) - else: - return F.log_softmax(x, dim=dim, dtype=torch.float32) - - -def get_perplexity(loss, round=2, base=2): - from fairseq.logging.meters import safe_round - - if loss is None: - return 0.0 - try: - return safe_round(base ** loss, round) - except OverflowError: - return float("inf") - - -def deprecation_warning(message, stacklevel=3): - # don't use DeprecationWarning, since it's ignored by default - warnings.warn(message, stacklevel=stacklevel) - - -def get_activation_fn(activation: str) -> Callable: - """Returns the activation function corresponding to `activation`""" - from fairseq.modules import gelu, gelu_accurate - - if activation == "relu": - return F.relu - elif activation == "gelu": - return gelu - elif activation == "gelu_fast": - deprecation_warning( - "--activation-fn=gelu_fast has been renamed to gelu_accurate" - ) - return gelu_accurate - elif activation == "gelu_accurate": - return gelu_accurate - elif activation == "tanh": - return torch.tanh - elif activation == "linear": - return lambda x: x - else: - raise RuntimeError("--activation-fn {} not supported".format(activation)) - - -def get_available_activation_fns() -> List: - return [ - "relu", - "gelu", - "gelu_fast", # deprecated - "gelu_accurate", - "tanh", - "linear", - ] - - -@contextlib.contextmanager -def model_eval(model): - is_training = model.training - model.eval() - yield - model.train(is_training) - - -def has_parameters(module): - try: - next(module.parameters()) - return True - except StopIteration: - return False - - -def get_rng_state(): - state = {"torch_rng_state": torch.get_rng_state()} - if xm is not None: - state["xla_rng_state"] = xm.get_rng_state() - if torch.cuda.is_available(): - state["cuda_rng_state"] = torch.cuda.get_rng_state() - return state - - -def set_rng_state(state): - torch.set_rng_state(state["torch_rng_state"]) - if xm is not None: - xm.set_rng_state(state["xla_rng_state"]) - if torch.cuda.is_available(): - torch.cuda.set_rng_state(state["cuda_rng_state"]) - - -class set_torch_seed(object): - def __init__(self, seed): - assert isinstance(seed, int) - self.rng_state = get_rng_state() - - torch.manual_seed(seed) - if xm is not None: - xm.set_rng_state(seed) - if torch.cuda.is_available(): - torch.cuda.manual_seed(seed) - - def __enter__(self): - return self - - def __exit__(self, *exc): - set_rng_state(self.rng_state) - - -def parse_alignment(line): - """ - Parses a single line from the alignment file. - - Args: - line (str): String containing the alignment of the format: - <src_idx_1>-<tgt_idx_1> <src_idx_2>-<tgt_idx_2> .. - <src_idx_m>-<tgt_idx_m>. All indices are 0 indexed. - - Returns: - torch.IntTensor: packed alignments of shape (2 * m). 
- """ - alignments = line.strip().split() - parsed_alignment = torch.IntTensor(2 * len(alignments)) - for idx, alignment in enumerate(alignments): - src_idx, tgt_idx = alignment.split("-") - parsed_alignment[2 * idx] = int(src_idx) - parsed_alignment[2 * idx + 1] = int(tgt_idx) - return parsed_alignment - - -def get_token_to_word_mapping(tokens, exclude_list): - n = len(tokens) - word_start = [int(token not in exclude_list) for token in tokens] - word_idx = list(accumulate(word_start)) - token_to_word = {i: word_idx[i] for i in range(n)} - return token_to_word - - -def extract_hard_alignment(attn, src_sent, tgt_sent, pad, eos): - tgt_valid = ( - ((tgt_sent != pad) & (tgt_sent != eos)).nonzero(as_tuple=False).squeeze(dim=-1) - ) - src_invalid = ( - ((src_sent == pad) | (src_sent == eos)).nonzero(as_tuple=False).squeeze(dim=-1) - ) - src_token_to_word = get_token_to_word_mapping(src_sent, [eos, pad]) - tgt_token_to_word = get_token_to_word_mapping(tgt_sent, [eos, pad]) - alignment = [] - if len(tgt_valid) != 0 and len(src_invalid) < len(src_sent): - attn_valid = attn[tgt_valid] - attn_valid[:, src_invalid] = float("-inf") - _, src_indices = attn_valid.max(dim=1) - for tgt_idx, src_idx in zip(tgt_valid, src_indices): - alignment.append( - ( - src_token_to_word[src_idx.item()] - 1, - tgt_token_to_word[tgt_idx.item()] - 1, - ) - ) - return alignment - - -def extract_soft_alignment(attn, src_sent, tgt_sent, pad, eos): - tgt_valid = ((tgt_sent != pad)).nonzero(as_tuple=False) - src_valid = ((src_sent != pad)).nonzero(as_tuple=False).squeeze(dim=-1) - alignment = [] - if len(tgt_valid) != 0 and len(src_valid) != 0: - attn_valid = attn[tgt_valid, src_valid] - alignment = [ - ["{:.6f}".format(p) for p in src_probs.tolist()] for src_probs in attn_valid - ] - return alignment - - -def new_arange(x, *size): - """ - Return a Tensor of `size` filled with a range function on the device of x. - If size is empty, using the size of the variable x. 
- """ - if len(size) == 0: - size = x.size() - return torch.arange(size[-1], device=x.device).expand(*size).contiguous() - - -def get_tpu_device(): - return xm.xla_device() - - -def tpu_data_loader(itr): - import torch_xla.core.xla_model as xm - import torch_xla.distributed.parallel_loader as pl - from fairseq.data import iterators - - xm.rendezvous("tpu_data_loader") # wait for all workers - xm.mark_step() - device = xm.xla_device() - return iterators.CountingIterator( - pl.ParallelLoader(itr, [device]).per_device_loader(device), - start=getattr(itr, "n", 0), - total=len(itr), - ) - - -def is_xla_tensor(tensor): - return torch.is_tensor(tensor) and tensor.device.type == "xla" - - -def index_put(tensor, indices, value): - if is_xla_tensor(tensor): - for _ in range(indices.dim(), tensor.dim()): - indices = indices.unsqueeze(-1) - if indices.size(-1) < tensor.size(-1): - indices = indices.expand_as(tensor) - tensor = torch.mul(tensor, ~indices) + torch.mul(value, indices) - else: - tensor[indices] = value - return tensor - - -def xla_device_to_cpu(dat): - import torch_xla.core.xla_model as xm - - return xm._maybe_convert_to_cpu(dat) - - -class CudaEnvironment(object): - def __init__(self): - cur_device = torch.cuda.current_device() - prop = torch.cuda.get_device_properties("cuda:{}".format(cur_device)) - self.name = prop.name - self.major = prop.major - self.minor = prop.minor - self.total_memory_in_GB = prop.total_memory / 1024 / 1024 / 1024 - - @staticmethod - def pretty_print_cuda_env_list(cuda_env_list): - """ - Given a list of CudaEnviorments, pretty print them - """ - num_workers = len(cuda_env_list) - center = "CUDA enviroments for all {} workers".format(num_workers) - banner_len = 40 - len(center) // 2 - first_line = "*" * banner_len + center + "*" * banner_len - logger.info(first_line) - for r, env in enumerate(cuda_env_list): - logger.info( - "rank {:3d}: ".format(r) - + "capabilities = {:2d}.{:<2d} ; ".format(env.major, env.minor) - + "total memory = {:.3f} GB ; ".format(env.total_memory_in_GB) - + "name = {:40s}".format(env.name) - ) - logger.info(first_line) - - -def csv_str_list(x): - return x.split(",") - - -def eval_str_list(x, type=float): - if x is None: - return None - if isinstance(x, str): - x = eval(x) - try: - return list(map(type, x)) - except TypeError: - return [type(x)] - - -def eval_str_dict(x, type=dict): - if x is None: - return None - if isinstance(x, str): - x = eval(x) - return x - - -def eval_bool(x, default=False): - if x is None: - return default - try: - return bool(eval(x)) - except TypeError: - return default - - -def reset_logging(): - root = logging.getLogger() - for handler in root.handlers: - root.removeHandler(handler) - root.setLevel(os.environ.get("LOGLEVEL", "INFO").upper()) - handler = logging.StreamHandler(sys.stdout) - handler.setFormatter( - logging.Formatter( - fmt="%(asctime)s | %(levelname)s | %(name)s | %(message)s", - datefmt="%Y-%m-%d %H:%M:%S", - ) - ) - root.addHandler(handler) - - -def safe_getattr(obj, k, default=None): - """Returns obj[k] if it exists and is not None, otherwise returns default.""" - from omegaconf import OmegaConf - - if OmegaConf.is_config(obj): - return obj[k] if k in obj and obj[k] is not None else default - - return getattr(obj, k, default) - - -def safe_hasattr(obj, k): - """Returns True if the given key exists and is not None.""" - return getattr(obj, k, None) is not None diff --git a/spaces/OFA-Sys/OFA-Text2Image_Generation/README.md b/spaces/OFA-Sys/OFA-Text2Image_Generation/README.md deleted file mode 
100644 index 26865124a64a5e401e2c20c325ffed188212cf47..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Text2Image_Generation/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: OFA-Text2Image_Generation -emoji: 🎨 -colorFrom: pink -colorTo: pink -sdk: static -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference - diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/.github/ISSUE_TEMPLATE/documentation.md b/spaces/OFA-Sys/OFA-vqa/fairseq/.github/ISSUE_TEMPLATE/documentation.md deleted file mode 100644 index 3a6e2e9ea4bb71102122c17ff53051eb3770cb5e..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/.github/ISSUE_TEMPLATE/documentation.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -name: 📚 Documentation/Typos -about: Report an issue related to documentation or a typo -labels: 'documentation, needs triage' ---- - -## 📚 Documentation - -For typos and doc fixes, please go ahead and: - -1. Create an issue. -2. Fix the typo. -3. Submit a PR. - -Thanks! diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/hubert/simple_kmeans/README.md b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/hubert/simple_kmeans/README.md deleted file mode 100644 index cd17da3b3e6f3e39083f7a76a56ff46c3a63b929..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/hubert/simple_kmeans/README.md +++ /dev/null @@ -1,71 +0,0 @@ -# Sharded Feature Extraction and K-means Application - -This folder contains scripts for preparing HUBERT labels from tsv files. The -steps are: -1. feature extraction -2. k-means clustering -3. k-means application - - -## Data preparation - -`*.tsv` files contain a list of audio files, where the first line is the root directory and -each following line is the subpath of one audio file: -``` -<root-dir> -<audio-path-1> -<audio-path-2> -... -``` - - -## Feature extraction - -### MFCC feature -Suppose the tsv file is at `${tsv_dir}/${split}.tsv`. To extract 39-D -mfcc+delta+ddelta features for the 1st iteration HUBERT training, run: -```sh -python dump_mfcc_feature.py ${tsv_dir} ${split} ${nshard} ${rank} ${feat_dir} -``` -This would shard the tsv file into `${nshard}` shards and extract features for the -`${rank}`-th shard, where rank is an integer in `[0, nshard-1]`. Features would -be saved at `${feat_dir}/${split}_${rank}_${nshard}.{npy,len}`. - - -### HUBERT feature -To extract features from the `${layer}`-th transformer layer of a trained -HUBERT model saved at `${ckpt_path}`, run: -```sh -python dump_hubert_feature.py ${tsv_dir} ${split} ${ckpt_path} ${layer} ${nshard} ${rank} ${feat_dir} -``` -Features would also be saved at `${feat_dir}/${split}_${rank}_${nshard}.{npy,len}`. - -- if out-of-memory, decrease the chunk size with `--max_chunk` - - -## K-means clustering -To fit a k-means model with `${n_clusters}` clusters on 10% of the `${split}` data, run -```sh -python learn_kmeans.py ${feat_dir} ${split} ${nshard} ${km_path} ${n_cluster} --percent 0.1 -``` -This saves the k-means model to `${km_path}`. 
- -- set `--percent -1` to use all data -- more kmeans options can be found with the `-h` flag - - -## K-means application -To apply a trained k-means model `${km_path}` to obtain labels for `${split}`, run -```sh -python dump_km_label.py ${feat_dir} ${split} ${km_path} ${nshard} ${rank} ${lab_dir} -``` -This would extract labels for the `${rank}`-th shard out of `${nshard}` shards -and dump them to `${lab_dir}/${split}_${rank}_${nshard}.km` - - -Finally, merge shards for `${split}` by running -```sh -for rank in $(seq 0 $((nshard - 1))); do - cat $lab_dir/${split}_${rank}_${nshard}.km -done > $lab_dir/${split}.km -``` diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_text_joint_to_text/tasks/speech_text_joint.py b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_text_joint_to_text/tasks/speech_text_joint.py deleted file mode 100644 index f2b3966d2d6b103f3dc2ff170c12ab9663875684..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/speech_text_joint_to_text/tasks/speech_text_joint.py +++ /dev/null @@ -1,372 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -import logging -import os -from argparse import Namespace -from pathlib import Path - -import torch -from fairseq.data import ( - encoders, - Dictionary, - ResamplingDataset, - TransformEosLangPairDataset, - ConcatDataset, -) -from fairseq.data.iterators import GroupedEpochBatchIterator -from fairseq.data.audio.multi_modality_dataset import ( - MultiModalityDataset, - LangPairMaskDataset, - ModalityDatasetItem, -) -from fairseq.data.audio.speech_to_text_dataset import SpeechToTextDataset, SpeechToTextDatasetCreator -from fairseq.data.audio.speech_to_text_joint_dataset import ( - S2TJointDataConfig, - SpeechToTextJointDatasetCreator, -) -from fairseq.tasks import register_task -from fairseq.tasks.speech_to_text import SpeechToTextTask -from fairseq.tasks.translation import load_langpair_dataset - -logger = logging.getLogger(__name__) -LANG_TAG_TEMPLATE = "<lang:{}>" - - -@register_task("speech_text_joint_to_text") -class SpeechTextJointToTextTask(SpeechToTextTask): - """ - Task for joint training speech and text to text. 
- """ - - @classmethod - def add_args(cls, parser): - """Add task-specific arguments to the parser.""" - super(SpeechTextJointToTextTask, cls).add_args(parser) - ### - parser.add_argument( - "--parallel-text-data", - default="", - help="path to parallel text data directory", - ) - parser.add_argument( - "--max-tokens-text", - type=int, - metavar="N", - help="maximum tokens for encoder text input ", - ) - parser.add_argument( - "--max-positions-text", - type=int, - metavar="N", - default=400, - help="maximum tokens for per encoder text input ", - ) - parser.add_argument( - "--langpairs", - default=None, - metavar="S", - help='language pairs for text training, separated with ","', - ) - parser.add_argument( - "--speech-sample-ratio", - default=1, - type=float, - metavar="N", - help="Multiple Ratio for speech dataset with transcripts ", - ) - parser.add_argument( - "--text-sample-ratio", - default=1, - type=float, - metavar="N", - help="Multiple Ratio for text set ", - ) - parser.add_argument( - "--update-mix-data", - action="store_true", - help="use mixed data in one update when update-freq > 1", - ) - parser.add_argument( - "--load-speech-only", - action="store_true", - help="load speech data only", - ) - parser.add_argument( - "--mask-text-ratio", - type=float, - metavar="V", - default=0.0, - help="mask V source tokens for text only mode", - ) - parser.add_argument( - "--mask-text-type", - default="random", - choices=["random", "tail"], - help="mask text typed", - ) - parser.add_argument( - "--noise-token", - default="", - help="noise token for masking src text tokens if mask-text-ratio > 0", - ) - parser.add_argument( - "--infer-target-lang", - default="", - metavar="S", - help="target language for inference", - ) - - def __init__(self, args, src_dict, tgt_dict, infer_tgt_lang_id=None): - super().__init__(args, tgt_dict) - self.src_dict = src_dict - self.data_cfg = S2TJointDataConfig(Path(args.data) / args.config_yaml) - assert self.tgt_dict.pad() == self.src_dict.pad() - assert self.tgt_dict.eos() == self.src_dict.eos() - self.speech_only = args.load_speech_only - self._infer_tgt_lang_id = infer_tgt_lang_id - - @classmethod - def setup_task(cls, args, **kwargs): - """Setup the task (e.g., load dictionaries).""" - data_cfg = S2TJointDataConfig(Path(args.data) / args.config_yaml) - tgt_dict_path = Path(args.data) / data_cfg.vocab_filename - src_dict_path = Path(args.data) / data_cfg.src_vocab_filename - if (not os.path.isfile(src_dict_path)) or (not os.path.isfile(tgt_dict_path)): - raise FileNotFoundError("Dict not found: {}".format(args.data)) - src_dict = Dictionary.load(src_dict_path.as_posix()) - tgt_dict = Dictionary.load(tgt_dict_path.as_posix()) - - print("| src dictionary: {} types".format(len(src_dict))) - print("| tgt dictionary: {} types".format(len(tgt_dict))) - - if args.parallel_text_data != "": - if not os.path.isabs(args.parallel_text_data): - args.parallel_text_data = os.path.join( - args.data, args.parallel_text_data - ) - - if args.langpairs is None: - raise Exception( - "Could not infer language pair, please provide it explicitly" - ) - infer_tgt_lang_id = None - if args.infer_target_lang != "" and data_cfg.prepend_tgt_lang_tag_no_change: - tgt_lang_tag = SpeechToTextDataset.LANG_TAG_TEMPLATE.format( - args.infer_target_lang - ) - infer_tgt_lang_id = tgt_dict.index(tgt_lang_tag) - assert infer_tgt_lang_id != tgt_dict.unk() - return cls(args, src_dict, tgt_dict, infer_tgt_lang_id=infer_tgt_lang_id) - - def load_langpair_dataset(self, prepend_tgt_lang_tag=False, 
sampling_alpha=1.0, epoch=0): - lang_pairs = [] - text_dataset = None - split = "train" - for lp in self.args.langpairs.split(","): - src, tgt = lp.split("-") - text_dataset = load_langpair_dataset( - self.args.parallel_text_data, - split, - src, - self.src_dict, - tgt, - self.tgt_dict, - combine=True, - dataset_impl=None, - upsample_primary=1, - left_pad_source=False, - left_pad_target=False, - max_source_positions=self.args.max_positions_text, - max_target_positions=self.args.max_target_positions, - load_alignments=False, - truncate_source=False, - ) - if prepend_tgt_lang_tag: - # TODO - text_dataset = TransformEosLangPairDataset( - text_dataset, - src_eos=self.src_dict.eos(), - tgt_bos=self.tgt_dict.eos(), # 'prev_output_tokens' starts with eos - new_tgt_bos=self.tgt_dict.index(LANG_TAG_TEMPLATE.format(tgt)), - ) - lang_pairs.append(text_dataset) - if len(lang_pairs) > 1: - if sampling_alpha != 1.0: - size_ratios = SpeechToTextDatasetCreator.get_size_ratios( - self.args.langpairs.split(","), - [len(s) for s in lang_pairs], - alpha=sampling_alpha, - ) - lang_pairs = [ - ResamplingDataset( - d, size_ratio=r, epoch=epoch, replace=(r >= 1.0) - ) - for d, r in zip(lang_pairs, size_ratios) - ] - return ConcatDataset(lang_pairs) - return text_dataset - - def inference_step( - self, generator, models, sample, prefix_tokens=None, constraints=None - ): - with torch.no_grad(): - return generator.generate( - models, - sample, - prefix_tokens=prefix_tokens, - constraints=constraints, - bos_token=self._infer_tgt_lang_id, - ) - - def build_src_tokenizer(self, args): - logger.info(f"src-pre-tokenizer: {self.data_cfg.src_pre_tokenizer}") - return encoders.build_tokenizer(Namespace(**self.data_cfg.src_pre_tokenizer)) - - def build_src_bpe(self, args): - logger.info(f"tokenizer: {self.data_cfg.src_bpe_tokenizer}") - return encoders.build_bpe(Namespace(**self.data_cfg.src_bpe_tokenizer)) - - def load_dataset(self, split, epoch=1, combine=False, **kwargs): - """Load a given dataset split. 
- - Args: - split (str): name of the split (e.g., train, valid, test) - """ - is_train_split = split.startswith("train") - pre_tokenizer = self.build_tokenizer(self.args) - bpe_tokenizer = self.build_bpe(self.args) - src_pre_tokenizer = self.build_src_tokenizer(self.args) - src_bpe_tokenizer = self.build_src_bpe(self.args) - ast_dataset = SpeechToTextJointDatasetCreator.from_tsv( - self.args.data, - self.data_cfg, - split, - self.tgt_dict, - src_dict=None if self.speech_only else self.src_dict, - pre_tokenizer=pre_tokenizer, - bpe_tokenizer=bpe_tokenizer, - src_pre_tokenizer=src_pre_tokenizer, - src_bpe_tokenizer=src_bpe_tokenizer, - is_train_split=is_train_split, - epoch=epoch, - seed=self.args.seed, - ) - noise_token_id = -1 - text_dataset = None - if self.args.parallel_text_data != "" and is_train_split: - text_dataset = self.load_langpair_dataset( - self.data_cfg.prepend_tgt_lang_tag_no_change, - 1.0, - epoch=epoch, - ) - if self.args.mask_text_ratio > 0: - # add mask - noise_token_id = ( - self.src_dict.unk() - if self.args.noise_token == "" - else self.src_dict.index(self.args.noise_token) - ) - text_dataset = LangPairMaskDataset( - text_dataset, - src_bos=self.src_dict.bos(), - src_eos=self.src_dict.eos(), - noise_id=noise_token_id, - mask_ratio=self.args.mask_text_ratio, - mask_type=self.args.mask_text_type, - ) - - if text_dataset is not None: - mdsets = [ - ModalityDatasetItem( - "sup_speech", - ast_dataset, - (self.args.max_source_positions, self.args.max_target_positions), - self.args.max_tokens, - self.args.batch_size, - ), - ModalityDatasetItem( - "text", - text_dataset, - (self.args.max_positions_text, self.args.max_target_positions), - self.args.max_tokens_text - if self.args.max_tokens_text is not None - else self.args.max_tokens, - self.args.batch_size, - ), - ] - ast_dataset = MultiModalityDataset(mdsets) - self.datasets[split] = ast_dataset - - @property - def target_dictionary(self): - """Return the :class:`~fairseq.data.Dictionary` for the language - model.""" - return self.tgt_dict - - @property - def source_dictionary(self): - """Return the source :class:`~fairseq.data.Dictionary` (if applicable - for this task).""" - return None if self.speech_only else self.src_dict - - def get_batch_iterator( - self, - dataset, - max_tokens=None, - max_sentences=None, - max_positions=None, - ignore_invalid_inputs=False, - required_batch_size_multiple=1, - seed=1, - num_shards=1, - shard_id=0, - num_workers=0, - epoch=0, - data_buffer_size=0, - disable_iterator_cache=False, - ): - - if not isinstance(dataset, MultiModalityDataset): - return super(SpeechTextJointToTextTask, self).get_batch_iterator( - dataset, - max_tokens, - max_sentences, - max_positions, - ignore_invalid_inputs, - required_batch_size_multiple, - seed, - num_shards, - shard_id, - num_workers, - epoch, - data_buffer_size, - disable_iterator_cache, - ) - - mult_ratio = [self.args.speech_sample_ratio, self.args.text_sample_ratio] - assert len(dataset.datasets) == 2 - - # initialize the dataset with the correct starting epoch - dataset.set_epoch(epoch) - - batch_samplers = dataset.get_batch_samplers( - mult_ratio, required_batch_size_multiple, seed - ) - - # return a reusable, sharded iterator - epoch_iter = GroupedEpochBatchIterator( - dataset=dataset, - collate_fn=dataset.collater, - batch_samplers=batch_samplers, - seed=seed, - num_shards=num_shards, - shard_id=shard_id, - num_workers=num_workers, - epoch=epoch, - mult_rate=1 if self.args.update_mix_data else max(self.args.update_freq), - 
buffer_size=data_buffer_size, - ) - self.dataset_to_epoch_iter[dataset] = {} # refresh it every epoch - return epoch_iter diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/hubert/__init__.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/hubert/__init__.py deleted file mode 100644 index a1b0eabbdbcaf12b15bb96b329ab1e276256f79a..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/models/hubert/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .hubert import * # noqa -from .hubert_asr import * # noqa diff --git a/spaces/OFA-Sys/OFA-vqa/run_scripts/refcoco/train_refcoco.sh b/spaces/OFA-Sys/OFA-vqa/run_scripts/refcoco/train_refcoco.sh deleted file mode 100644 index 735d48c93f9da9c5c254d795d18eddfa08199ef8..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/run_scripts/refcoco/train_refcoco.sh +++ /dev/null @@ -1,97 +0,0 @@ -#!/usr/bin/env - -log_dir=./refcoco_logs -save_dir=./refcoco_checkpoints -mkdir -p $log_dir $save_dir - -bpe_dir=../../utils/BPE -user_dir=../../ofa_module - -data_dir=../../dataset/refcoco_data -data=${data_dir}/refcoco_train.tsv,${data_dir}/refcoco_val.tsv -restore_file=../../checkpoints/ofa_large.pt -selected_cols=0,4,2,3 - -task=refcoco -arch=ofa_large -criterion=ajust_label_smoothed_cross_entropy -label_smoothing=0.1 -lr=3e-5 -max_epoch=5 -warmup_ratio=0.06 -batch_size=4 -update_freq=8 -resnet_drop_path_rate=0.0 -encoder_drop_path_rate=0.2 -decoder_drop_path_rate=0.2 -dropout=0.1 -attention_dropout=0.0 -max_src_length=80 -max_tgt_length=20 -num_bins=1000 -patch_image_size=512 - -for max_epoch in {10,}; do - echo "max_epoch "${max_epoch} - for lr in {3e-5,}; do - echo "lr "${lr} - for patch_image_size in {512,}; do - echo "patch_image_size "${patch_image_size} - - log_file=${log_dir}/${max_epoch}"_"${lr}"_"${patch_image_size}".log" - save_path=${save_dir}/${max_epoch}"_"${lr}"_"${patch_image_size} - mkdir -p $save_path - - CUDA_VISIBLE_DEVICES=0,1,2,3 python3 ../../train.py \ - $data \ - --selected-cols=${selected_cols} \ - --bpe-dir=${bpe_dir} \ - --user-dir=${user_dir} \ - --restore-file=${restore_file} \ - --reset-optimizer --reset-dataloader --reset-meters \ - --save-dir=${save_path} \ - --task=${task} \ - --arch=${arch} \ - --criterion=${criterion} \ - --label-smoothing=${label_smoothing} \ - --batch-size=${batch_size} \ - --update-freq=${update_freq} \ - --encoder-normalize-before \ - --decoder-normalize-before \ - --share-decoder-input-output-embed \ - --share-all-embeddings \ - --layernorm-embedding \ - --patch-layernorm-embedding \ - --code-layernorm-embedding \ - --resnet-drop-path-rate=${resnet_drop_path_rate} \ - --encoder-drop-path-rate=${encoder_drop_path_rate} \ - --decoder-drop-path-rate=${decoder_drop_path_rate} \ - --dropout=${dropout} \ - --attention-dropout=${attention_dropout} \ - --weight-decay=0.01 --optimizer=adam --adam-betas="(0.9,0.999)" --adam-eps=1e-08 --clip-norm=1.0 \ - --lr-scheduler=polynomial_decay --lr=${lr} \ - --max-epoch=${max_epoch} --warmup-ratio=${warmup_ratio} \ - --log-format=simple --log-interval=10 \ - --fixed-validation-seed=7 \ - --no-epoch-checkpoints --keep-best-checkpoints=1 \ - --save-interval=1 --validate-interval=1 \ - --save-interval-updates=500 --validate-interval-updates=500 \ - --eval-acc \ - --eval-args='{"beam":5,"min_len":4,"max_len_a":0,"max_len_b":4}' \ - 
--best-checkpoint-metric=score --maximize-best-checkpoint-metric \ - --max-src-length=${max_src_length} \ - --max-tgt-length=${max_tgt_length} \ - --find-unused-parameters \ - --add-type-embedding \ - --scale-attn \ - --scale-fc \ - --scale-heads \ - --disable-entangle \ - --num-bins=${num_bins} \ - --patch-image-size=${patch_image_size} \ - --fp16 \ - --fp16-scale-window=512 \ - --num-workers=0 >> ${log_file} 2>&1 - done - done -done \ No newline at end of file diff --git a/spaces/OIUGLK/bingo/src/lib/hooks/use-bing.ts b/spaces/OIUGLK/bingo/src/lib/hooks/use-bing.ts deleted file mode 100644 index dcdb1667ced0cba299b0825c0e91c4732411308c..0000000000000000000000000000000000000000 --- a/spaces/OIUGLK/bingo/src/lib/hooks/use-bing.ts +++ /dev/null @@ -1,173 +0,0 @@ -'use client' - -import { useState, useCallback, useEffect, useMemo } from 'react' -import { useAtom, useAtomValue } from 'jotai' -import { chatFamily, bingConversationStyleAtom, GreetMessages, hashAtom, voiceAtom } from '@/state' -import { setConversationMessages } from './chat-history' -import { ChatMessageModel, BotId, FileItem } from '@/lib/bots/bing/types' -import { nanoid } from '../utils' -import { TTS } from '../bots/bing/tts' - -export function useBing(botId: BotId = 'bing') { - const chatAtom = useMemo(() => chatFamily({ botId, page: 'singleton' }), [botId]) - const [enableTTS] = useAtom(voiceAtom) - const speaker = useMemo(() => new TTS(), []) - const [hash, setHash] = useAtom(hashAtom) - const bingConversationStyle = useAtomValue(bingConversationStyleAtom) - const [chatState, setChatState] = useAtom(chatAtom) - const [input, setInput] = useState('') - const [attachmentList, setAttachmentList] = useState([]) - - const updateMessage = useCallback( - (messageId: string, updater: (message: ChatMessageModel) => void) => { - setChatState((draft) => { - const message = draft.messages.find((m) => m.id === messageId) - if (message) { - updater(message) - } - }) - }, - [setChatState], - ) - - const sendMessage = useCallback( - async (input: string, options = {}) => { - const botMessageId = nanoid() - const imageUrl = attachmentList?.[0]?.status === 'loaded' ? attachmentList[0].url : undefined - setChatState((draft) => { - const text = imageUrl ? `${input}\n\n![image](${imageUrl})` : input - draft.messages.push({ id: nanoid(), text, author: 'user' }, { id: botMessageId, text: '', author: 'bot' }) - setAttachmentList([]) - }) - const abortController = new AbortController() - setChatState((draft) => { - draft.generatingMessageId = botMessageId - draft.abortController = abortController - }) - speaker.reset() - await chatState.bot.sendMessage({ - prompt: input, - imageUrl: /\?bcid=([^&]+)/.test(imageUrl ?? '') ? 
`https://www.bing.com/images/blob?bcid=${RegExp.$1}` : imageUrl, - options: { - ...options, - bingConversationStyle, - }, - signal: abortController.signal, - onEvent(event) { - if (event.type === 'UPDATE_ANSWER') { - updateMessage(botMessageId, (message) => { - if (event.data.text.length > message.text.length) { - message.text = event.data.text - } - - if (event.data.spokenText && enableTTS) { - speaker.speak(event.data.spokenText) - } - - message.throttling = event.data.throttling || message.throttling - message.sourceAttributions = event.data.sourceAttributions || message.sourceAttributions - message.suggestedResponses = event.data.suggestedResponses || message.suggestedResponses - }) - } else if (event.type === 'ERROR') { - updateMessage(botMessageId, (message) => { - message.error = event.error - }) - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } else if (event.type === 'DONE') { - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } - }, - }) - }, - [botId, attachmentList, chatState.bot, setChatState, updateMessage], - ) - - const uploadImage = useCallback(async (imgUrl: string) => { - setAttachmentList([{ url: imgUrl, status: 'loading' }]) - const response = await chatState.bot.uploadImage(imgUrl, bingConversationStyle) - if (response?.blobId) { - setAttachmentList([{ url: `/api/blob?bcid=${response.blobId}`, status: 'loaded' }]) - } else { - setAttachmentList([{ url: imgUrl, status: 'error' }]) - } - }, [chatState.bot]) - - const resetConversation = useCallback(() => { - chatState.bot.resetConversation() - speaker.abort() - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - draft.messages = [{ author: 'bot', text: GreetMessages[Math.floor(GreetMessages.length * Math.random())], id: nanoid() }] - draft.conversationId = nanoid() - }) - }, [chatState.bot, setChatState]) - - const stopGenerating = useCallback(() => { - chatState.abortController?.abort() - if (chatState.generatingMessageId) { - updateMessage(chatState.generatingMessageId, (message) => { - if (!message.text && !message.error) { - message.text = 'Cancelled' - } - }) - } - setChatState((draft) => { - draft.generatingMessageId = '' - }) - }, [chatState.abortController, chatState.generatingMessageId, setChatState, updateMessage]) - - useEffect(() => { - if (chatState.messages.length) { - setConversationMessages(botId, chatState.conversationId, chatState.messages) - } - }, [botId, chatState.conversationId, chatState.messages]) - - useEffect(() => { - if (hash === 'reset') { - resetConversation() - setHash('') - } - }, [hash, setHash]) - - const chat = useMemo( - () => ({ - botId, - bot: chatState.bot, - isSpeaking: speaker.isSpeaking, - messages: chatState.messages, - sendMessage, - setInput, - input, - resetConversation, - generating: !!chatState.generatingMessageId, - stopGenerating, - uploadImage, - setAttachmentList, - attachmentList, - }), - [ - botId, - bingConversationStyle, - chatState.bot, - chatState.generatingMessageId, - chatState.messages, - speaker.isSpeaking, - setInput, - input, - setAttachmentList, - attachmentList, - resetConversation, - sendMessage, - stopGenerating, - ], - ) - - return chat -} diff --git a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/themes/default/index.ts b/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/themes/default/index.ts deleted file mode 100644 index 
c228ed40791b5c9c112a1859c53f9b41d1634c24..0000000000000000000000000000000000000000 --- a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/themes/default/index.ts +++ /dev/null @@ -1,40 +0,0 @@ -import { Theme } from '../interface'; - -const icons = [ - `🎨`, - `🌈`, - `⚙️`, - `💻`, - `📚`, - `🐯`, - `🐤`, - `🐼`, - `🐏`, - `🍀`, -]; - -export type DefaultSoundNames = 'button-click' | 'triple'; - -import soundButtonClickUrl from './sounds/sound-button-click.mp3'; -import soundTripleUrl from './sounds/sound-triple.mp3'; -export const defaultSounds: Theme['sounds'] = [ - { - name: 'button-click', - src: soundButtonClickUrl, - }, - { - name: 'triple', - src: soundTripleUrl, - }, -]; - -export const defaultTheme: Theme = { - name: 'default', - icons: icons.map((icon) => ({ - name: icon, - content: icon, - clickSound: 'button-click', - tripleSound: 'triple', - })), - sounds: defaultSounds, -}; diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/engine/train_loop.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/engine/train_loop.py deleted file mode 100644 index c4a86b52a5604f2b5799abac299ca4726345b7a6..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/engine/train_loop.py +++ /dev/null @@ -1,417 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -import logging -import numpy as np -import time -import weakref -from typing import List, Mapping, Optional -import torch -from torch.nn.parallel import DataParallel, DistributedDataParallel - -import detectron2.utils.comm as comm -from detectron2.utils.events import EventStorage, get_event_storage -from detectron2.utils.logger import _log_api_usage - -__all__ = ["HookBase", "TrainerBase", "SimpleTrainer", "AMPTrainer"] - - -class HookBase: - """ - Base class for hooks that can be registered with :class:`TrainerBase`. - - Each hook can implement 4 methods. The way they are called is demonstrated - in the following snippet: - :: - hook.before_train() - for iter in range(start_iter, max_iter): - hook.before_step() - trainer.run_step() - hook.after_step() - iter += 1 - hook.after_train() - - Notes: - 1. In the hook method, users can access ``self.trainer`` to access more - properties about the context (e.g., model, current iteration, or config - if using :class:`DefaultTrainer`). - - 2. A hook that does something in :meth:`before_step` can often be - implemented equivalently in :meth:`after_step`. - If the hook takes non-trivial time, it is strongly recommended to - implement the hook in :meth:`after_step` instead of :meth:`before_step`. - The convention is that :meth:`before_step` should only take negligible time. - - Following this convention will allow hooks that do care about the difference - between :meth:`before_step` and :meth:`after_step` (e.g., timer) to - function properly. - - """ - - trainer: "TrainerBase" = None - """ - A weak reference to the trainer object. Set by the trainer when the hook is registered. - """ - - def before_train(self): - """ - Called before the first iteration. - """ - pass - - def after_train(self): - """ - Called after the last iteration. - """ - pass - - def before_step(self): - """ - Called before each iteration. - """ - pass - - def after_step(self): - """ - Called after each iteration. 
- """ - pass - - def state_dict(self): - """ - Hooks are stateless by default, but can be made checkpointable by - implementing `state_dict` and `load_state_dict`. - """ - return {} - - -class TrainerBase: - """ - Base class for iterative trainer with hooks. - - The only assumption we made here is: the training runs in a loop. - A subclass can implement what the loop is. - We made no assumptions about the existence of dataloader, optimizer, model, etc. - - Attributes: - iter(int): the current iteration. - - start_iter(int): The iteration to start with. - By convention the minimum possible value is 0. - - max_iter(int): The iteration to end training. - - storage(EventStorage): An EventStorage that's opened during the course of training. - """ - - def __init__(self) -> None: - self._hooks: List[HookBase] = [] - self.iter: int = 0 - self.start_iter: int = 0 - self.max_iter: int - self.storage: EventStorage - _log_api_usage("trainer." + self.__class__.__name__) - - def register_hooks(self, hooks: List[Optional[HookBase]]) -> None: - """ - Register hooks to the trainer. The hooks are executed in the order - they are registered. - - Args: - hooks (list[Optional[HookBase]]): list of hooks - """ - hooks = [h for h in hooks if h is not None] - for h in hooks: - assert isinstance(h, HookBase) - # To avoid circular reference, hooks and trainer cannot own each other. - # This normally does not matter, but will cause memory leak if the - # involved objects contain __del__: - # See http://engineering.hearsaysocial.com/2013/06/16/circular-references-in-python/ - h.trainer = weakref.proxy(self) - self._hooks.extend(hooks) - - def train(self, start_iter: int, max_iter: int): - """ - Args: - start_iter, max_iter (int): See docs above - """ - logger = logging.getLogger(__name__) - logger.info("Starting training from iteration {}".format(start_iter)) - - self.iter = self.start_iter = start_iter - self.max_iter = max_iter - - with EventStorage(start_iter) as self.storage: - try: - self.before_train() - for self.iter in range(start_iter, max_iter): - self.before_step() - self.run_step() - self.after_step() - # self.iter == max_iter can be used by `after_train` to - # tell whether the training successfully finished or failed - # due to exceptions. 
- self.iter += 1 - except Exception: - logger.exception("Exception during training:") - raise - finally: - self.after_train() - - def before_train(self): - for h in self._hooks: - h.before_train() - - def after_train(self): - self.storage.iter = self.iter - for h in self._hooks: - h.after_train() - - def before_step(self): - # Maintain the invariant that storage.iter == trainer.iter - # for the entire execution of each step - self.storage.iter = self.iter - - for h in self._hooks: - h.before_step() - - def after_step(self): - for h in self._hooks: - h.after_step() - - def run_step(self): - raise NotImplementedError - - def state_dict(self): - ret = {"iteration": self.iter} - hooks_state = {} - for h in self._hooks: - sd = h.state_dict() - if sd: - name = type(h).__qualname__ - if name in hooks_state: - # TODO handle repetitive stateful hooks - continue - hooks_state[name] = sd - if hooks_state: - ret["hooks"] = hooks_state - return ret - - def load_state_dict(self, state_dict): - logger = logging.getLogger(__name__) - self.iter = state_dict["iteration"] - for key, value in state_dict.get("hooks", {}).items(): - for h in self._hooks: - try: - name = type(h).__qualname__ - except AttributeError: - continue - if name == key: - h.load_state_dict(value) - break - else: - logger.warning(f"Cannot find the hook '{key}', its state_dict is ignored.") - - -class SimpleTrainer(TrainerBase): - """ - A simple trainer for the most common type of task: - single-cost single-optimizer single-data-source iterative optimization, - optionally using data-parallelism. - It assumes that every step, you: - - 1. Compute the loss with a data from the data_loader. - 2. Compute the gradients with the above loss. - 3. Update the model with the optimizer. - - All other tasks during training (checkpointing, logging, evaluation, LR schedule) - are maintained by hooks, which can be registered by :meth:`TrainerBase.register_hooks`. - - If you want to do anything fancier than this, - either subclass TrainerBase and implement your own `run_step`, - or write your own training loop. - """ - - def __init__(self, model, data_loader, optimizer): - """ - Args: - model: a torch Module. Takes a data from data_loader and returns a - dict of losses. - data_loader: an iterable. Contains data to be used to call model. - optimizer: a torch optimizer. - """ - super().__init__() - - """ - We set the model to training mode in the trainer. - However it's valid to train a model that's in eval mode. - If you want your model (or a submodule of it) to behave - like evaluation during training, you can overwrite its train() method. - """ - model.train() - - self.model = model - self.data_loader = data_loader - self._data_loader_iter = iter(data_loader) - self.optimizer = optimizer - - def run_step(self): - """ - Implement the standard training logic described above. - """ - assert self.model.training, "[SimpleTrainer] model was changed to eval mode!" - start = time.perf_counter() - """ - If you want to do something with the data, you can wrap the dataloader. - """ - data = next(self._data_loader_iter) - data_time = time.perf_counter() - start - - """ - If you want to do something with the losses, you can wrap the model. - """ - loss_dict = self.model(data) - if isinstance(loss_dict, torch.Tensor): - losses = loss_dict - loss_dict = {"total_loss": loss_dict} - else: - losses = sum(loss_dict.values()) - - """ - If you need to accumulate gradients or do something similar, you can - wrap the optimizer with your custom `zero_grad()` method. 
- """ - self.optimizer.zero_grad() - losses.backward() - - self._write_metrics(loss_dict, data_time) - - """ - If you need gradient clipping/scaling or other processing, you can - wrap the optimizer with your custom `step()` method. But it is - suboptimal as explained in https://arxiv.org/abs/2006.15704 Sec 3.2.4 - """ - self.optimizer.step() - - def _write_metrics( - self, - loss_dict: Mapping[str, torch.Tensor], - data_time: float, - prefix: str = "", - ) -> None: - SimpleTrainer.write_metrics(loss_dict, data_time, prefix) - - @staticmethod - def write_metrics( - loss_dict: Mapping[str, torch.Tensor], - data_time: float, - prefix: str = "", - ) -> None: - """ - Args: - loss_dict (dict): dict of scalar losses - data_time (float): time taken by the dataloader iteration - prefix (str): prefix for logging keys - """ - metrics_dict = {k: v.detach().cpu().item() for k, v in loss_dict.items()} - metrics_dict["data_time"] = data_time - - # Gather metrics among all workers for logging - # This assumes we do DDP-style training, which is currently the only - # supported method in detectron2. - all_metrics_dict = comm.gather(metrics_dict) - - if comm.is_main_process(): - storage = get_event_storage() - - # data_time among workers can have high variance. The actual latency - # caused by data_time is the maximum among workers. - data_time = np.max([x.pop("data_time") for x in all_metrics_dict]) - storage.put_scalar("data_time", data_time) - - # average the rest metrics - metrics_dict = { - k: np.mean([x[k] for x in all_metrics_dict]) for k in all_metrics_dict[0].keys() - } - total_losses_reduced = sum(metrics_dict.values()) - if not np.isfinite(total_losses_reduced): - raise FloatingPointError( - f"Loss became infinite or NaN at iteration={storage.iter}!\n" - f"loss_dict = {metrics_dict}" - ) - - storage.put_scalar("{}total_loss".format(prefix), total_losses_reduced) - if len(metrics_dict) > 1: - storage.put_scalars(**metrics_dict) - - def state_dict(self): - ret = super().state_dict() - ret["optimizer"] = self.optimizer.state_dict() - return ret - - def load_state_dict(self, state_dict): - super().load_state_dict(state_dict) - self.optimizer.load_state_dict(state_dict["optimizer"]) - - -class AMPTrainer(SimpleTrainer): - """ - Like :class:`SimpleTrainer`, but uses PyTorch's native automatic mixed precision - in the training loop. - """ - - def __init__(self, model, data_loader, optimizer, grad_scaler=None): - """ - Args: - model, data_loader, optimizer: same as in :class:`SimpleTrainer`. - grad_scaler: torch GradScaler to automatically scale gradients. - """ - unsupported = "AMPTrainer does not support single-process multi-device training!" - if isinstance(model, DistributedDataParallel): - assert not (model.device_ids and len(model.device_ids) > 1), unsupported - assert not isinstance(model, DataParallel), unsupported - - super().__init__(model, data_loader, optimizer) - - if grad_scaler is None: - from torch.cuda.amp import GradScaler - - grad_scaler = GradScaler() - self.grad_scaler = grad_scaler - - def run_step(self): - """ - Implement the AMP training logic. - """ - assert self.model.training, "[AMPTrainer] model was changed to eval mode!" - assert torch.cuda.is_available(), "[AMPTrainer] CUDA is required for AMP training!" 
- from torch.cuda.amp import autocast - - start = time.perf_counter() - data = next(self._data_loader_iter) - data_time = time.perf_counter() - start - - with autocast(): - loss_dict = self.model(data) - if isinstance(loss_dict, torch.Tensor): - losses = loss_dict - loss_dict = {"total_loss": loss_dict} - else: - losses = sum(loss_dict.values()) - - self.optimizer.zero_grad() - self.grad_scaler.scale(losses).backward() - - self._write_metrics(loss_dict, data_time) - - self.grad_scaler.step(self.optimizer) - self.grad_scaler.update() - - def state_dict(self): - ret = super().state_dict() - ret["grad_scaler"] = self.grad_scaler.state_dict() - return ret - - def load_state_dict(self, state_dict): - super().load_state_dict(state_dict) - self.grad_scaler.load_state_dict(state_dict["grad_scaler"]) diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/evaluation/losses/fid/inception.py b/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/evaluation/losses/fid/inception.py deleted file mode 100644 index e9bd0863b457aaa40c770eaa4acbb142b18fc18b..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/saicinpainting/evaluation/losses/fid/inception.py +++ /dev/null @@ -1,323 +0,0 @@ -import logging - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torchvision import models - -try: - from torchvision.models.utils import load_state_dict_from_url -except ImportError: - from torch.utils.model_zoo import load_url as load_state_dict_from_url - -# Inception weights ported to Pytorch from -# http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz -FID_WEIGHTS_URL = 'https://github.com/mseitzer/pytorch-fid/releases/download/fid_weights/pt_inception-2015-12-05-6726825d.pth' - - -LOGGER = logging.getLogger(__name__) - - -class InceptionV3(nn.Module): - """Pretrained InceptionV3 network returning feature maps""" - - # Index of default block of inception to return, - # corresponds to output of final average pooling - DEFAULT_BLOCK_INDEX = 3 - - # Maps feature dimensionality to their output blocks indices - BLOCK_INDEX_BY_DIM = { - 64: 0, # First max pooling features - 192: 1, # Second max pooling featurs - 768: 2, # Pre-aux classifier features - 2048: 3 # Final average pooling features - } - - def __init__(self, - output_blocks=[DEFAULT_BLOCK_INDEX], - resize_input=True, - normalize_input=True, - requires_grad=False, - use_fid_inception=True): - """Build pretrained InceptionV3 - - Parameters - ---------- - output_blocks : list of int - Indices of blocks to return features of. Possible values are: - - 0: corresponds to output of first max pooling - - 1: corresponds to output of second max pooling - - 2: corresponds to output which is fed to aux classifier - - 3: corresponds to output of final average pooling - resize_input : bool - If true, bilinearly resizes input to width and height 299 before - feeding input to model. As the network without fully connected - layers is fully convolutional, it should be able to handle inputs - of arbitrary size, so resizing might not be strictly needed - normalize_input : bool - If true, scales the input from range (0, 1) to the range the - pretrained Inception network expects, namely (-1, 1) - requires_grad : bool - If true, parameters of the model require gradients. Possibly useful - for finetuning the network - use_fid_inception : bool - If true, uses the pretrained Inception model used in Tensorflow's - FID implementation. 
If false, uses the pretrained Inception model - available in torchvision. The FID Inception model has different - weights and a slightly different structure from torchvision's - Inception model. If you want to compute FID scores, you are - strongly advised to set this parameter to true to get comparable - results. - """ - super(InceptionV3, self).__init__() - - self.resize_input = resize_input - self.normalize_input = normalize_input - self.output_blocks = sorted(output_blocks) - self.last_needed_block = max(output_blocks) - - assert self.last_needed_block <= 3, \ - 'Last possible output block index is 3' - - self.blocks = nn.ModuleList() - - if use_fid_inception: - inception = fid_inception_v3() - else: - inception = models.inception_v3(pretrained=True) - - # Block 0: input to maxpool1 - block0 = [ - inception.Conv2d_1a_3x3, - inception.Conv2d_2a_3x3, - inception.Conv2d_2b_3x3, - nn.MaxPool2d(kernel_size=3, stride=2) - ] - self.blocks.append(nn.Sequential(*block0)) - - # Block 1: maxpool1 to maxpool2 - if self.last_needed_block >= 1: - block1 = [ - inception.Conv2d_3b_1x1, - inception.Conv2d_4a_3x3, - nn.MaxPool2d(kernel_size=3, stride=2) - ] - self.blocks.append(nn.Sequential(*block1)) - - # Block 2: maxpool2 to aux classifier - if self.last_needed_block >= 2: - block2 = [ - inception.Mixed_5b, - inception.Mixed_5c, - inception.Mixed_5d, - inception.Mixed_6a, - inception.Mixed_6b, - inception.Mixed_6c, - inception.Mixed_6d, - inception.Mixed_6e, - ] - self.blocks.append(nn.Sequential(*block2)) - - # Block 3: aux classifier to final avgpool - if self.last_needed_block >= 3: - block3 = [ - inception.Mixed_7a, - inception.Mixed_7b, - inception.Mixed_7c, - nn.AdaptiveAvgPool2d(output_size=(1, 1)) - ] - self.blocks.append(nn.Sequential(*block3)) - - for param in self.parameters(): - param.requires_grad = requires_grad - - def forward(self, inp): - """Get Inception feature maps - - Parameters - ---------- - inp : torch.autograd.Variable - Input tensor of shape Bx3xHxW. Values are expected to be in - range (0, 1) - - Returns - ------- - List of torch.autograd.Variable, corresponding to the selected output - block, sorted ascending by index - """ - outp = [] - x = inp - - if self.resize_input: - x = F.interpolate(x, - size=(299, 299), - mode='bilinear', - align_corners=False) - - if self.normalize_input: - x = 2 * x - 1 # Scale from range (0, 1) to range (-1, 1) - - for idx, block in enumerate(self.blocks): - x = block(x) - if idx in self.output_blocks: - outp.append(x) - - if idx == self.last_needed_block: - break - - return outp - - -def fid_inception_v3(): - """Build pretrained Inception model for FID computation - - The Inception model for FID computation uses a different set of weights - and has a slightly different structure than torchvision's Inception. - - This method first constructs torchvision's Inception and then patches the - necessary parts that are different in the FID Inception model. 
- """ - LOGGER.info('fid_inception_v3 called') - inception = models.inception_v3(num_classes=1008, - aux_logits=False, - pretrained=False) - LOGGER.info('models.inception_v3 done') - inception.Mixed_5b = FIDInceptionA(192, pool_features=32) - inception.Mixed_5c = FIDInceptionA(256, pool_features=64) - inception.Mixed_5d = FIDInceptionA(288, pool_features=64) - inception.Mixed_6b = FIDInceptionC(768, channels_7x7=128) - inception.Mixed_6c = FIDInceptionC(768, channels_7x7=160) - inception.Mixed_6d = FIDInceptionC(768, channels_7x7=160) - inception.Mixed_6e = FIDInceptionC(768, channels_7x7=192) - inception.Mixed_7b = FIDInceptionE_1(1280) - inception.Mixed_7c = FIDInceptionE_2(2048) - - LOGGER.info('fid_inception_v3 patching done') - - state_dict = load_state_dict_from_url(FID_WEIGHTS_URL, progress=True) - LOGGER.info('fid_inception_v3 weights downloaded') - - inception.load_state_dict(state_dict) - LOGGER.info('fid_inception_v3 weights loaded into model') - - return inception - - -class FIDInceptionA(models.inception.InceptionA): - """InceptionA block patched for FID computation""" - def __init__(self, in_channels, pool_features): - super(FIDInceptionA, self).__init__(in_channels, pool_features) - - def forward(self, x): - branch1x1 = self.branch1x1(x) - - branch5x5 = self.branch5x5_1(x) - branch5x5 = self.branch5x5_2(branch5x5) - - branch3x3dbl = self.branch3x3dbl_1(x) - branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl) - branch3x3dbl = self.branch3x3dbl_3(branch3x3dbl) - - # Patch: Tensorflow's average pool does not use the padded zero's in - # its average calculation - branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1, - count_include_pad=False) - branch_pool = self.branch_pool(branch_pool) - - outputs = [branch1x1, branch5x5, branch3x3dbl, branch_pool] - return torch.cat(outputs, 1) - - -class FIDInceptionC(models.inception.InceptionC): - """InceptionC block patched for FID computation""" - def __init__(self, in_channels, channels_7x7): - super(FIDInceptionC, self).__init__(in_channels, channels_7x7) - - def forward(self, x): - branch1x1 = self.branch1x1(x) - - branch7x7 = self.branch7x7_1(x) - branch7x7 = self.branch7x7_2(branch7x7) - branch7x7 = self.branch7x7_3(branch7x7) - - branch7x7dbl = self.branch7x7dbl_1(x) - branch7x7dbl = self.branch7x7dbl_2(branch7x7dbl) - branch7x7dbl = self.branch7x7dbl_3(branch7x7dbl) - branch7x7dbl = self.branch7x7dbl_4(branch7x7dbl) - branch7x7dbl = self.branch7x7dbl_5(branch7x7dbl) - - # Patch: Tensorflow's average pool does not use the padded zero's in - # its average calculation - branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1, - count_include_pad=False) - branch_pool = self.branch_pool(branch_pool) - - outputs = [branch1x1, branch7x7, branch7x7dbl, branch_pool] - return torch.cat(outputs, 1) - - -class FIDInceptionE_1(models.inception.InceptionE): - """First InceptionE block patched for FID computation""" - def __init__(self, in_channels): - super(FIDInceptionE_1, self).__init__(in_channels) - - def forward(self, x): - branch1x1 = self.branch1x1(x) - - branch3x3 = self.branch3x3_1(x) - branch3x3 = [ - self.branch3x3_2a(branch3x3), - self.branch3x3_2b(branch3x3), - ] - branch3x3 = torch.cat(branch3x3, 1) - - branch3x3dbl = self.branch3x3dbl_1(x) - branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl) - branch3x3dbl = [ - self.branch3x3dbl_3a(branch3x3dbl), - self.branch3x3dbl_3b(branch3x3dbl), - ] - branch3x3dbl = torch.cat(branch3x3dbl, 1) - - # Patch: Tensorflow's average pool does not use the padded zero's in - # its 
average calculation - branch_pool = F.avg_pool2d(x, kernel_size=3, stride=1, padding=1, - count_include_pad=False) - branch_pool = self.branch_pool(branch_pool) - - outputs = [branch1x1, branch3x3, branch3x3dbl, branch_pool] - return torch.cat(outputs, 1) - - -class FIDInceptionE_2(models.inception.InceptionE): - """Second InceptionE block patched for FID computation""" - def __init__(self, in_channels): - super(FIDInceptionE_2, self).__init__(in_channels) - - def forward(self, x): - branch1x1 = self.branch1x1(x) - - branch3x3 = self.branch3x3_1(x) - branch3x3 = [ - self.branch3x3_2a(branch3x3), - self.branch3x3_2b(branch3x3), - ] - branch3x3 = torch.cat(branch3x3, 1) - - branch3x3dbl = self.branch3x3dbl_1(x) - branch3x3dbl = self.branch3x3dbl_2(branch3x3dbl) - branch3x3dbl = [ - self.branch3x3dbl_3a(branch3x3dbl), - self.branch3x3dbl_3b(branch3x3dbl), - ] - branch3x3dbl = torch.cat(branch3x3dbl, 1) - - # Patch: The FID Inception model uses max pooling instead of average - # pooling. This is likely an error in this specific Inception - # implementation, as other Inception models use average pooling here - # (which matches the description in the paper). - branch_pool = F.max_pool2d(x, kernel_size=3, stride=1, padding=1) - branch_pool = self.branch_pool(branch_pool) - - outputs = [branch1x1, branch3x3, branch3x3dbl, branch_pool] - return torch.cat(outputs, 1) diff --git a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/datasets/prepare_ade20k_ins_seg.py b/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/datasets/prepare_ade20k_ins_seg.py deleted file mode 100644 index e4e951adcd84dbd08b3d6570aee56887bf1c69a6..0000000000000000000000000000000000000000 --- a/spaces/PAIR/PAIR-Diffusion/annotator/OneFormer/datasets/prepare_ade20k_ins_seg.py +++ /dev/null @@ -1,112 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. -import glob -import json -import os -from collections import Counter - -import numpy as np -import tqdm -from panopticapi.utils import IdGenerator, save_json -from PIL import Image -import pycocotools.mask as mask_util - - -if __name__ == "__main__": - dataset_dir = os.getenv("DETECTRON2_DATASETS", "datasets") - - for name, dirname in [("train", "training"), ("val", "validation")]: - image_dir = os.path.join(dataset_dir, f"ADEChallengeData2016/images/{dirname}/") - instance_dir = os.path.join( - dataset_dir, f"ADEChallengeData2016/annotations_instance/{dirname}/" - ) - - # img_id = 0 - ann_id = 1 - - # json - out_file = os.path.join(dataset_dir, f"ADEChallengeData2016/ade20k_instance_{name}.json") - - # json config - instance_config_file = "datasets/ade20k_instance_imgCatIds.json" - with open(instance_config_file) as f: - category_dict = json.load(f)["categories"] - - # load catid mapping - # it is important to share category id for both instance and panoptic annotations - mapping_file = "datasets/ade20k_instance_catid_mapping.txt" - with open(mapping_file) as f: - map_id = {} - for i, line in enumerate(f.readlines()): - if i == 0: - continue - ins_id, sem_id, _ = line.strip().split() - # shift id by 1 because we want it to start from 0! 
- # ignore_label becomes 255 - map_id[int(ins_id)] = int(sem_id) - 1 - - for cat in category_dict: - cat["id"] = map_id[cat["id"]] - - filenames = sorted(glob.glob(os.path.join(image_dir, "*.jpg"))) - - ann_dict = {} - images = [] - annotations = [] - - for idx, filename in enumerate(tqdm.tqdm(filenames)): - image = {} - image_id = os.path.basename(filename).split(".")[0] - - image["id"] = image_id - image["file_name"] = os.path.basename(filename) - - original_format = np.array(Image.open(filename)) - image["width"] = original_format.shape[1] - image["height"] = original_format.shape[0] - - images.append(image) - - filename_instance = os.path.join(instance_dir, image_id + ".png") - ins_seg = np.asarray(Image.open(filename_instance)) - assert ins_seg.dtype == np.uint8 - - instance_cat_ids = ins_seg[..., 0] - # instance id starts from 1! - # because 0 is reserved as VOID label - instance_ins_ids = ins_seg[..., 1] - - # process things - for thing_id in np.unique(instance_ins_ids): - if thing_id == 0: - continue - mask = instance_ins_ids == thing_id - instance_cat_id = np.unique(instance_cat_ids[mask]) - assert len(instance_cat_id) == 1 - - anno = {} - anno['id'] = ann_id - ann_id += 1 - anno['image_id'] = image['id'] - anno["iscrowd"] = int(0) - anno["category_id"] = int(map_id[instance_cat_id[0]]) - - inds = np.nonzero(mask) - ymin, ymax = inds[0].min(), inds[0].max() - xmin, xmax = inds[1].min(), inds[1].max() - anno["bbox"] = [int(xmin), int(ymin), int(xmax - xmin + 1), int(ymax - ymin + 1)] - # if xmax <= xmin or ymax <= ymin: - # continue - rle = mask_util.encode(np.array(mask[:, :, None], order="F", dtype="uint8"))[0] - rle["counts"] = rle["counts"].decode("utf-8") - anno["segmentation"] = rle - anno["area"] = int(mask_util.area(rle)) - annotations.append(anno) - - # save this - ann_dict['images'] = images - ann_dict['categories'] = category_dict - ann_dict['annotations'] = annotations - - save_json(ann_dict, out_file) diff --git a/spaces/PAIR/PAIR-Diffusion/ldm/modules/attention.py b/spaces/PAIR/PAIR-Diffusion/ldm/modules/attention.py deleted file mode 100644 index 509cd873768f0dd75a75ab3fcdd652822b12b59f..0000000000000000000000000000000000000000 --- a/spaces/PAIR/PAIR-Diffusion/ldm/modules/attention.py +++ /dev/null @@ -1,341 +0,0 @@ -from inspect import isfunction -import math -import torch -import torch.nn.functional as F -from torch import nn, einsum -from einops import rearrange, repeat -from typing import Optional, Any - -from ldm.modules.diffusionmodules.util import checkpoint - - -try: - import xformers - import xformers.ops - XFORMERS_IS_AVAILBLE = True -except: - XFORMERS_IS_AVAILBLE = False - -# CrossAttn precision handling -import os -_ATTN_PRECISION = os.environ.get("ATTN_PRECISION", "fp32") - -def exists(val): - return val is not None - - -def uniq(arr): - return{el: True for el in arr}.keys() - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def max_neg_value(t): - return -torch.finfo(t.dtype).max - - -def init_(tensor): - dim = tensor.shape[-1] - std = 1 / math.sqrt(dim) - tensor.uniform_(-std, std) - return tensor - - -# feedforward -class GEGLU(nn.Module): - def __init__(self, dim_in, dim_out): - super().__init__() - self.proj = nn.Linear(dim_in, dim_out * 2) - - def forward(self, x): - x, gate = self.proj(x).chunk(2, dim=-1) - return x * F.gelu(gate) - - -class FeedForward(nn.Module): - def __init__(self, dim, dim_out=None, mult=4, glu=False, dropout=0.): - super().__init__() - inner_dim = int(dim * mult) - dim_out = 
default(dim_out, dim) - project_in = nn.Sequential( - nn.Linear(dim, inner_dim), - nn.GELU() - ) if not glu else GEGLU(dim, inner_dim) - - self.net = nn.Sequential( - project_in, - nn.Dropout(dropout), - nn.Linear(inner_dim, dim_out) - ) - - def forward(self, x): - return self.net(x) - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def Normalize(in_channels): - return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - - -class SpatialSelfAttention(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = Normalize(in_channels) - self.q = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.k = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.v = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - self.proj_out = torch.nn.Conv2d(in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b,c,h,w = q.shape - q = rearrange(q, 'b c h w -> b (h w) c') - k = rearrange(k, 'b c h w -> b c (h w)') - w_ = torch.einsum('bij,bjk->bik', q, k) - - w_ = w_ * (int(c)**(-0.5)) - w_ = torch.nn.functional.softmax(w_, dim=2) - - # attend to values - v = rearrange(v, 'b c h w -> b c (h w)') - w_ = rearrange(w_, 'b i j -> b j i') - h_ = torch.einsum('bij,bjk->bik', v, w_) - h_ = rearrange(h_, 'b c (h w) -> b c h w', h=h) - h_ = self.proj_out(h_) - - return x+h_ - - -class CrossAttention(nn.Module): - def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.): - super().__init__() - inner_dim = dim_head * heads - context_dim = default(context_dim, query_dim) - - self.scale = dim_head ** -0.5 - self.heads = heads - - self.to_q = nn.Linear(query_dim, inner_dim, bias=False) - self.to_k = nn.Linear(context_dim, inner_dim, bias=False) - self.to_v = nn.Linear(context_dim, inner_dim, bias=False) - - self.to_out = nn.Sequential( - nn.Linear(inner_dim, query_dim), - nn.Dropout(dropout) - ) - - def forward(self, x, context=None, mask=None): - h = self.heads - - q = self.to_q(x) - context = default(context, x) - k = self.to_k(context) - v = self.to_v(context) - - q, k, v = map(lambda t: rearrange(t, 'b n (h d) -> (b h) n d', h=h), (q, k, v)) - - # force cast to fp32 to avoid overflowing - if _ATTN_PRECISION =="fp32": - with torch.autocast(enabled=False, device_type = 'cuda'): - q, k = q.float(), k.float() - sim = einsum('b i d, b j d -> b i j', q, k) * self.scale - else: - sim = einsum('b i d, b j d -> b i j', q, k) * self.scale - - del q, k - - if exists(mask): - mask = rearrange(mask, 'b ... 
-> b (...)') - max_neg_value = -torch.finfo(sim.dtype).max - mask = repeat(mask, 'b j -> (b h) () j', h=h) - sim.masked_fill_(~mask, max_neg_value) - - # attention, what we cannot get enough of - sim = sim.softmax(dim=-1) - - out = einsum('b i j, b j d -> b i d', sim, v) - out = rearrange(out, '(b h) n d -> b n (h d)', h=h) - return self.to_out(out) - - -class MemoryEfficientCrossAttention(nn.Module): - # https://github.com/MatthieuTPHR/diffusers/blob/d80b531ff8060ec1ea982b65a1b8df70f73aa67c/src/diffusers/models/attention.py#L223 - def __init__(self, query_dim, context_dim=None, heads=8, dim_head=64, dropout=0.0): - super().__init__() - print(f"Setting up {self.__class__.__name__}. Query dim is {query_dim}, context_dim is {context_dim} and using " - f"{heads} heads.") - inner_dim = dim_head * heads - context_dim = default(context_dim, query_dim) - - self.heads = heads - self.dim_head = dim_head - - self.to_q = nn.Linear(query_dim, inner_dim, bias=False) - self.to_k = nn.Linear(context_dim, inner_dim, bias=False) - self.to_v = nn.Linear(context_dim, inner_dim, bias=False) - - self.to_out = nn.Sequential(nn.Linear(inner_dim, query_dim), nn.Dropout(dropout)) - self.attention_op: Optional[Any] = None - - def forward(self, x, context=None, mask=None): - q = self.to_q(x) - context = default(context, x) - k = self.to_k(context) - v = self.to_v(context) - - b, _, _ = q.shape - q, k, v = map( - lambda t: t.unsqueeze(3) - .reshape(b, t.shape[1], self.heads, self.dim_head) - .permute(0, 2, 1, 3) - .reshape(b * self.heads, t.shape[1], self.dim_head) - .contiguous(), - (q, k, v), - ) - - # actually compute the attention, what we cannot get enough of - out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op) - - if exists(mask): - raise NotImplementedError - out = ( - out.unsqueeze(0) - .reshape(b, self.heads, out.shape[1], self.dim_head) - .permute(0, 2, 1, 3) - .reshape(b, out.shape[1], self.heads * self.dim_head) - ) - return self.to_out(out) - - -class BasicTransformerBlock(nn.Module): - ATTENTION_MODES = { - "softmax": CrossAttention, # vanilla attention - "softmax-xformers": MemoryEfficientCrossAttention - } - def __init__(self, dim, n_heads, d_head, dropout=0., context_dim=None, gated_ff=True, checkpoint=True, - disable_self_attn=False): - super().__init__() - attn_mode = "softmax-xformers" if XFORMERS_IS_AVAILBLE else "softmax" - assert attn_mode in self.ATTENTION_MODES - attn_cls = self.ATTENTION_MODES[attn_mode] - self.disable_self_attn = disable_self_attn - self.attn1 = attn_cls(query_dim=dim, heads=n_heads, dim_head=d_head, dropout=dropout, - context_dim=context_dim if self.disable_self_attn else None) # is a self-attention if not self.disable_self_attn - self.ff = FeedForward(dim, dropout=dropout, glu=gated_ff) - self.attn2 = attn_cls(query_dim=dim, context_dim=context_dim, - heads=n_heads, dim_head=d_head, dropout=dropout) # is self-attn if context is none - self.norm1 = nn.LayerNorm(dim) - self.norm2 = nn.LayerNorm(dim) - self.norm3 = nn.LayerNorm(dim) - self.checkpoint = checkpoint - - def forward(self, x, context=None): - return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint) - - def _forward(self, x, context=None): - x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x - x = self.attn2(self.norm2(x), context=context) + x - x = self.ff(self.norm3(x)) + x - return x - - -class SpatialTransformer(nn.Module): - """ - Transformer block for image-like data. 
- First, project the input (aka embedding) - and reshape to b, t, d. - Then apply standard transformer action. - Finally, reshape to image - NEW: use_linear for more efficiency instead of the 1x1 convs - """ - def __init__(self, in_channels, n_heads, d_head, - depth=1, dropout=0., context_dim=None, - disable_self_attn=False, use_linear=False, - use_checkpoint=True): - super().__init__() - if exists(context_dim) and not isinstance(context_dim, list): - context_dim = [context_dim] - self.in_channels = in_channels - inner_dim = n_heads * d_head - self.norm = Normalize(in_channels) - if not use_linear: - self.proj_in = nn.Conv2d(in_channels, - inner_dim, - kernel_size=1, - stride=1, - padding=0) - else: - self.proj_in = nn.Linear(in_channels, inner_dim) - - self.transformer_blocks = nn.ModuleList( - [BasicTransformerBlock(inner_dim, n_heads, d_head, dropout=dropout, context_dim=context_dim[d], - disable_self_attn=disable_self_attn, checkpoint=use_checkpoint) - for d in range(depth)] - ) - if not use_linear: - self.proj_out = zero_module(nn.Conv2d(inner_dim, - in_channels, - kernel_size=1, - stride=1, - padding=0)) - else: - self.proj_out = zero_module(nn.Linear(in_channels, inner_dim)) - self.use_linear = use_linear - - def forward(self, x, context=None): - # note: if no context is given, cross-attention defaults to self-attention - if not isinstance(context, list): - context = [context] - b, c, h, w = x.shape - x_in = x - x = self.norm(x) - if not self.use_linear: - x = self.proj_in(x) - x = rearrange(x, 'b c h w -> b (h w) c').contiguous() - if self.use_linear: - x = self.proj_in(x) - for i, block in enumerate(self.transformer_blocks): - x = block(x, context=context[i]) - if self.use_linear: - x = self.proj_out(x) - x = rearrange(x, 'b (h w) c -> b c h w', h=h, w=w).contiguous() - if not self.use_linear: - x = self.proj_out(x) - return x + x_in - diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/core/evaluation/eval_hooks.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/core/evaluation/eval_hooks.py deleted file mode 100644 index 6fc100c8f96e817a6ed2666f7c9f762af2463b48..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmseg/core/evaluation/eval_hooks.py +++ /dev/null @@ -1,109 +0,0 @@ -import os.path as osp - -from annotator.uniformer.mmcv.runner import DistEvalHook as _DistEvalHook -from annotator.uniformer.mmcv.runner import EvalHook as _EvalHook - - -class EvalHook(_EvalHook): - """Single GPU EvalHook, with efficient test support. - - Args: - by_epoch (bool): Determine perform evaluation by epoch or by iteration. - If set to True, it will perform by epoch. Otherwise, by iteration. - Default: False. - efficient_test (bool): Whether save the results as local numpy files to - save CPU memory during evaluation. Default: False. - Returns: - list: The prediction results. - """ - - greater_keys = ['mIoU', 'mAcc', 'aAcc'] - - def __init__(self, *args, by_epoch=False, efficient_test=False, **kwargs): - super().__init__(*args, by_epoch=by_epoch, **kwargs) - self.efficient_test = efficient_test - - def after_train_iter(self, runner): - """After train epoch hook. - - Override default ``single_gpu_test``. 
- """ - if self.by_epoch or not self.every_n_iters(runner, self.interval): - return - from annotator.uniformer.mmseg.apis import single_gpu_test - runner.log_buffer.clear() - results = single_gpu_test( - runner.model, - self.dataloader, - show=False, - efficient_test=self.efficient_test) - self.evaluate(runner, results) - - def after_train_epoch(self, runner): - """After train epoch hook. - - Override default ``single_gpu_test``. - """ - if not self.by_epoch or not self.every_n_epochs(runner, self.interval): - return - from annotator.uniformer.mmseg.apis import single_gpu_test - runner.log_buffer.clear() - results = single_gpu_test(runner.model, self.dataloader, show=False) - self.evaluate(runner, results) - - -class DistEvalHook(_DistEvalHook): - """Distributed EvalHook, with efficient test support. - - Args: - by_epoch (bool): Determine perform evaluation by epoch or by iteration. - If set to True, it will perform by epoch. Otherwise, by iteration. - Default: False. - efficient_test (bool): Whether save the results as local numpy files to - save CPU memory during evaluation. Default: False. - Returns: - list: The prediction results. - """ - - greater_keys = ['mIoU', 'mAcc', 'aAcc'] - - def __init__(self, *args, by_epoch=False, efficient_test=False, **kwargs): - super().__init__(*args, by_epoch=by_epoch, **kwargs) - self.efficient_test = efficient_test - - def after_train_iter(self, runner): - """After train epoch hook. - - Override default ``multi_gpu_test``. - """ - if self.by_epoch or not self.every_n_iters(runner, self.interval): - return - from annotator.uniformer.mmseg.apis import multi_gpu_test - runner.log_buffer.clear() - results = multi_gpu_test( - runner.model, - self.dataloader, - tmpdir=osp.join(runner.work_dir, '.eval_hook'), - gpu_collect=self.gpu_collect, - efficient_test=self.efficient_test) - if runner.rank == 0: - print('\n') - self.evaluate(runner, results) - - def after_train_epoch(self, runner): - """After train epoch hook. - - Override default ``multi_gpu_test``. 
- """ - if not self.by_epoch or not self.every_n_epochs(runner, self.interval): - return - from annotator.uniformer.mmseg.apis import multi_gpu_test - runner.log_buffer.clear() - results = multi_gpu_test( - runner.model, - self.dataloader, - tmpdir=osp.join(runner.work_dir, '.eval_hook'), - gpu_collect=self.gpu_collect) - if runner.rank == 0: - print('\n') - self.evaluate(runner, results) diff --git a/spaces/PKUWilliamYang/StyleGANEX/models/psp.py b/spaces/PKUWilliamYang/StyleGANEX/models/psp.py deleted file mode 100644 index 607a05aa8aa0d29ca58a4959e78c9b2065953a9e..0000000000000000000000000000000000000000 --- a/spaces/PKUWilliamYang/StyleGANEX/models/psp.py +++ /dev/null @@ -1,148 +0,0 @@ -""" -This file defines the core research contribution -""" -import matplotlib -matplotlib.use('Agg') -import math - -import torch -from torch import nn -from models.encoders import psp_encoders -from models.stylegan2.model import Generator -from configs.paths_config import model_paths -import torch.nn.functional as F - -def get_keys(d, name): - if 'state_dict' in d: - d = d['state_dict'] - d_filt = {k[len(name) + 1:]: v for k, v in d.items() if k[:len(name)] == name} - return d_filt - - -class pSp(nn.Module): - - def __init__(self, opts, ckpt=None): - super(pSp, self).__init__() - self.set_opts(opts) - # compute number of style inputs based on the output resolution - self.opts.n_styles = int(math.log(self.opts.output_size, 2)) * 2 - 2 - # Define architecture - self.encoder = self.set_encoder() - self.decoder = Generator(self.opts.output_size, 512, 8) - self.face_pool = torch.nn.AdaptiveAvgPool2d((256, 256)) - # Load weights if needed - self.load_weights(ckpt) - - def set_encoder(self): - if self.opts.encoder_type == 'GradualStyleEncoder': - encoder = psp_encoders.GradualStyleEncoder(50, 'ir_se', self.opts) - elif self.opts.encoder_type == 'BackboneEncoderUsingLastLayerIntoW': - encoder = psp_encoders.BackboneEncoderUsingLastLayerIntoW(50, 'ir_se', self.opts) - elif self.opts.encoder_type == 'BackboneEncoderUsingLastLayerIntoWPlus': - encoder = psp_encoders.BackboneEncoderUsingLastLayerIntoWPlus(50, 'ir_se', self.opts) - else: - raise Exception('{} is not a valid encoders'.format(self.opts.encoder_type)) - return encoder - - def load_weights(self, ckpt=None): - if self.opts.checkpoint_path is not None: - print('Loading pSp from checkpoint: {}'.format(self.opts.checkpoint_path)) - if ckpt is None: - ckpt = torch.load(self.opts.checkpoint_path, map_location='cpu') - self.encoder.load_state_dict(get_keys(ckpt, 'encoder'), strict=False) - self.decoder.load_state_dict(get_keys(ckpt, 'decoder'), strict=False) - self.__load_latent_avg(ckpt) - else: - print('Loading encoders weights from irse50!') - encoder_ckpt = torch.load(model_paths['ir_se50']) - # if input to encoder is not an RGB image, do not load the input layer weights - if self.opts.label_nc != 0: - encoder_ckpt = {k: v for k, v in encoder_ckpt.items() if "input_layer" not in k} - self.encoder.load_state_dict(encoder_ckpt, strict=False) - print('Loading decoder weights from pretrained!') - ckpt = torch.load(self.opts.stylegan_weights) - self.decoder.load_state_dict(ckpt['g_ema'], strict=False) - if self.opts.learn_in_w: - self.__load_latent_avg(ckpt, repeat=1) - else: - self.__load_latent_avg(ckpt, repeat=self.opts.n_styles) - # for video toonification, we load G0' model - if self.opts.toonify_weights is not None: ##### modified - ckpt = torch.load(self.opts.toonify_weights) - self.decoder.load_state_dict(ckpt['g_ema'], strict=False) - 
self.opts.toonify_weights = None - - # x1: image for first-layer feature f. - # x2: image for style latent code w+. If not specified, x2=x1. - # inject_latent: for sketch/mask-to-face translation, another latent code to fuse with w+ - # latent_mask: fuse w+ and inject_latent with the mask (1~7 use w+ and 8~18 use inject_latent) - # use_feature: use f. Otherwise, use the orginal StyleGAN first-layer constant 4*4 feature - # first_layer_feature_ind: always=0, means the 1st layer of G accept f - # use_skip: use skip connection. - # zero_noise: use zero noises. - # editing_w: the editing vector v for video face editing - def forward(self, x1, x2=None, resize=True, latent_mask=None, randomize_noise=True, - inject_latent=None, return_latents=False, alpha=None, use_feature=True, - first_layer_feature_ind=0, use_skip=False, zero_noise=False, editing_w=None): ##### modified - - feats = None # f and the skipped encoder features - codes, feats = self.encoder(x1, return_feat=True, return_full=use_skip) ##### modified - if x2 is not None: ##### modified - codes = self.encoder(x2) ##### modified - # normalize with respect to the center of an average face - if self.opts.start_from_latent_avg: - if self.opts.learn_in_w: - codes = codes + self.latent_avg.repeat(codes.shape[0], 1) - else: - codes = codes + self.latent_avg.repeat(codes.shape[0], 1, 1) - - # E_W^{1:7}(T(x1)) concatenate E_W^{8:18}(w~) - if latent_mask is not None: - for i in latent_mask: - if inject_latent is not None: - if alpha is not None: - codes[:, i] = alpha * inject_latent[:, i] + (1 - alpha) * codes[:, i] - else: - codes[:, i] = inject_latent[:, i] - else: - codes[:, i] = 0 - - first_layer_feats, skip_layer_feats, fusion = None, None, None ##### modified - if use_feature: ##### modified - first_layer_feats = feats[0:2] # use f - if use_skip: ##### modified - skip_layer_feats = feats[2:] # use skipped encoder feature - fusion = self.encoder.fusion # use fusion layer to fuse encoder feature and decoder feature. 
- - images, result_latent = self.decoder([codes], - input_is_latent=True, - randomize_noise=randomize_noise, - return_latents=return_latents, - first_layer_feature=first_layer_feats, - first_layer_feature_ind=first_layer_feature_ind, - skip_layer_feature=skip_layer_feats, - fusion_block=fusion, - zero_noise=zero_noise, - editing_w=editing_w) ##### modified - - if resize: - if self.opts.output_size == 1024: ##### modified - images = F.adaptive_avg_pool2d(images, (images.shape[2]//4, images.shape[3]//4)) ##### modified - else: - images = self.face_pool(images) - - if return_latents: - return images, result_latent - else: - return images - - def set_opts(self, opts): - self.opts = opts - - def __load_latent_avg(self, ckpt, repeat=None): - if 'latent_avg' in ckpt: - self.latent_avg = ckpt['latent_avg'].to(self.opts.device) - if repeat is not None: - self.latent_avg = self.latent_avg.repeat(repeat, 1) - else: - self.latent_avg = None diff --git a/spaces/PSLD/PSLD/diffusion-posterior-sampling/motionblur/__init__.py b/spaces/PSLD/PSLD/diffusion-posterior-sampling/motionblur/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/PaSathees/FoodVision_Big/README.md b/spaces/PaSathees/FoodVision_Big/README.md deleted file mode 100644 index bb2f4f51bba24a27de1cde9cd90d270eb2259dfa..0000000000000000000000000000000000000000 --- a/spaces/PaSathees/FoodVision_Big/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: FoodVision Big -emoji: 🐠 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.44.4 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/engine/singlepath_trainer.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/engine/singlepath_trainer.py deleted file mode 100644 index c73ba7e60a8d5367a314b98b1379386cfcc4ffac..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/engine/singlepath_trainer.py +++ /dev/null @@ -1,141 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -import datetime -import logging -import time -import random -import torch -import torch.distributed as dist -from maskrcnn_benchmark.utils.comm import get_world_size, synchronize, broadcast_data -from maskrcnn_benchmark.utils.metric_logger import MetricLogger -from maskrcnn_benchmark.utils.ema import ModelEma - - -def reduce_loss_dict(loss_dict): - """ - Reduce the loss dictionary from all processes so that process with rank - 0 has the averaged results. Returns a dict with the same fields as - loss_dict, after reduction. 
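    A short usage sketch mirroring the call pattern in do_train below
    (variable names are illustrative):

        loss_dict = model(images, targets, rngs)           # e.g. {'loss_cls': ..., 'loss_reg': ...}
        loss_dict_reduced = reduce_loss_dict(loss_dict)    # averaged on rank 0 across processes
        losses_reduced = sum(loss for loss in loss_dict_reduced.values())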
- """ - world_size = get_world_size() - if world_size < 2: - return loss_dict - with torch.no_grad(): - loss_names = [] - all_losses = [] - for k in sorted(loss_dict.keys()): - loss_names.append(k) - all_losses.append(loss_dict[k]) - all_losses = torch.stack(all_losses, dim=0) - dist.reduce(all_losses, dst=0) - if dist.get_rank() == 0: - # only main process gets accumulated, so only divide by - # world_size in this case - all_losses /= world_size - reduced_losses = {k: v for k, v in zip(loss_names, all_losses)} - return reduced_losses - - -def do_train( - cfg, - model, - data_loader, - optimizer, - scheduler, - checkpointer, - device, - checkpoint_period, - arguments, - rngs=None -): - logger = logging.getLogger("maskrcnn_benchmark.trainer") - logger.info("Start training") - meters = MetricLogger(delimiter=" ") - max_iter = len(data_loader) - start_iter = arguments["iteration"] - model.train() - model_ema = None - if cfg.SOLVER.MODEL_EMA>0: - model_ema = ModelEma(model, decay=cfg.SOLVER.MODEL_EMA) - start_training_time = time.time() - end = time.time() - - for iteration, (images, targets, _) in enumerate(data_loader, start_iter): - - if any(len(target) < 1 for target in targets): - logger.error("Iteration={iteration + 1} || Image Ids used for training {_} || targets Length={[len(target) for target in targets]}" ) - continue - data_time = time.time() - end - iteration = iteration + 1 - arguments["iteration"] = iteration - - images = images.to(device) - targets = [target.to(device) for target in targets] - - # synchronize rngs - if rngs is None: - if isinstance(model, torch.nn.parallel.DistributedDataParallel): - mix_nums = model.module.mix_nums - else: - mix_nums = model.mix_nums - rngs = [random.randint(0, mix-1) for mix in mix_nums] - rngs = broadcast_data(rngs) - - for param in model.parameters(): - param.requires_grad = False - loss_dict = model(images, targets, rngs) - - losses = sum(loss for loss in loss_dict.values()) - - # reduce losses over all GPUs for logging purposes - loss_dict_reduced = reduce_loss_dict(loss_dict) - losses_reduced = sum(loss for loss in loss_dict_reduced.values()) - meters.update(loss=losses_reduced, **loss_dict_reduced) - - optimizer.zero_grad() - losses.backward() - optimizer.step() - scheduler.step() - - if model_ema is not None: - model_ema.update(model) - arguments["model_ema"] = model_ema.state_dict() - - batch_time = time.time() - end - end = time.time() - meters.update(time=batch_time, data=data_time) - - eta_seconds = meters.time.global_avg * (max_iter - iteration) - eta_string = str(datetime.timedelta(seconds=int(eta_seconds))) - - if iteration % 20 == 0 or iteration == max_iter: - logger.info( - meters.delimiter.join( - [ - "eta: {eta}", - "iter: {iter}", - "{meters}", - "lr: {lr:.6f}", - "max mem: {memory:.0f}", - ] - ).format( - eta=eta_string, - iter=iteration, - meters=str(meters), - lr=optimizer.param_groups[0]["lr"], - memory=torch.cuda.max_memory_allocated() / 1024.0 / 1024.0, - ) - ) - if iteration % checkpoint_period == 0: - checkpointer.save("model_{:07d}".format(iteration), **arguments) - if iteration == max_iter: - if model_ema is not None: - model.load_state_dict(model_ema.state_dict()) - checkpointer.save("model_final", **arguments) - - total_training_time = time.time() - start_training_time - total_time_str = str(datetime.timedelta(seconds=total_training_time)) - logger.info( - "Total training time: {} ({:.4f} s / it)".format( - total_time_str, total_training_time / (max_iter) - ) - ) diff --git 
a/spaces/Podtekatel/ArcaneSVK2/README.md b/spaces/Podtekatel/ArcaneSVK2/README.md deleted file mode 100644 index 3e2b012a3f7e5eaacd6d284ef7ec02a72759a81b..0000000000000000000000000000000000000000 --- a/spaces/Podtekatel/ArcaneSVK2/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Arcane Style Transfer V2 -emoji: 👩💎2️⃣ -colorFrom: green -colorTo: blue -sdk: gradio -sdk_version: 3.8.2 -app_file: app.py -pinned: true -license: bsd-3-clause ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Potanin/12345/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py b/spaces/Potanin/12345/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py deleted file mode 100644 index b412ba2814e114ca7bb00b6fd6ef217f63d788a3..0000000000000000000000000000000000000000 --- a/spaces/Potanin/12345/lib/infer_pack/modules/F0Predictor/HarvestF0Predictor.py +++ /dev/null @@ -1,86 +0,0 @@ -from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor -import pyworld -import numpy as np - - -class HarvestF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def resize_f0(self, x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * target_len, len(source)) / target_len, - np.arange(0, len(source)), - source, - ) - res = np.nan_to_num(target) - return res - - def compute_f0(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.harvest( - wav.astype(np.double), - fs=self.hop_length, - f0_ceil=self.f0_max, - f0_floor=self.f0_min, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.fs) - return self.interpolate_f0(self.resize_f0(f0, p_len))[0] - - def compute_f0_uv(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.harvest( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - return self.interpolate_f0(self.resize_f0(f0, p_len)) diff --git a/spaces/Pranjal2041/SemSup-XC/cleaned_code/src/constants.py b/spaces/Pranjal2041/SemSup-XC/cleaned_code/src/constants.py deleted file mode 100644 index 05a0b420940344fd2f7c7596e17633e7e226d70a..0000000000000000000000000000000000000000 --- a/spaces/Pranjal2041/SemSup-XC/cleaned_code/src/constants.py 
+++ /dev/null @@ -1,29 +0,0 @@ -task_to_keys = { - "eurlex57k" : ("text", None), - "eurlex4k" : ("text", None), - "amazon13k" : ("text", None), - "wiki1m" : ("text", None), -} - -task_to_label_keys = { - "eurlex57k" : 'label', - "eurlex4k" : 'label', - "amazon13k": 'label', - "wiki1m" : "label" -} - - - -dataset_classification_type = { - "eurlex57k" : 'multi_label_classification', - "eurlex4k" : 'multi_label_classification', - "amazon13k" : 'multi_label_classification', - "wiki1m" : 'multi_label_classification', -} - -dataset_to_numlabels = { - "eurlex57k" : 4271, - "eurlex4k" : 3956, - "amazon13k" : 13330, - "wiki1m" : 1200000, # TODO: Enter precise value, though doesn't matter -} \ No newline at end of file diff --git a/spaces/PushkarA07/Sanskrit-Text-To-Speech/app.py b/spaces/PushkarA07/Sanskrit-Text-To-Speech/app.py deleted file mode 100644 index d8cb33bc92a38f81b735ca2135b8f018b3d4c9f5..0000000000000000000000000000000000000000 --- a/spaces/PushkarA07/Sanskrit-Text-To-Speech/app.py +++ /dev/null @@ -1,113 +0,0 @@ -import torch -import librosa -import commons -import utils -from models import SynthesizerTrn -from text import text_to_sequence -import numpy as np -from mel_processing import spectrogram_torch -import gradio as gr -from indic_transliteration import sanscript - - -SCRIPT_DICT={ - 'Devanagari':sanscript.DEVANAGARI, - 'IAST':sanscript.IAST, - 'SLP1':sanscript.SLP1, - 'HK':sanscript.HK -} - -DEFAULT_TEXT='संस्कृतम् जगतः एकतमा अतिप्राचीना समृद्धा शास्त्रीया च भाषासु वर्तते । संस्कृतं भारतस्य जगत: वा भाषासु एकतमा‌ प्राचीनतमा ।' - - -def get_text(text, hps, cleaned=False): - if cleaned: - text_norm = text_to_sequence(text, hps.symbols, []) - else: - text_norm = text_to_sequence(text, hps.symbols, hps.data.text_cleaners) - if hps.data.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - - -def default_text(script): - if script=='Devanagari': - return DEFAULT_TEXT - else: - return sanscript.transliterate(DEFAULT_TEXT,sanscript.DEVANAGARI,SCRIPT_DICT[script]) - - -def speech_synthesize(text,script, speaker_id, length_scale): - text=text.replace('\n','') - if script!='Devanagari': - text=sanscript.transliterate(text,SCRIPT_DICT[script],sanscript.DEVANAGARI) - print(text) - stn_tst = get_text(text, hps_ms) - with torch.no_grad(): - x_tst = stn_tst.unsqueeze(0) - x_tst_lengths = torch.LongTensor([stn_tst.size(0)]) - sid = torch.LongTensor([speaker_id]) - audio = net_g_ms.infer(x_tst, x_tst_lengths, sid=sid, noise_scale=0.667, noise_scale_w=0.8, length_scale=length_scale)[0][0,0].data.cpu().float().numpy() - return (hps_ms.data.sampling_rate, audio) - -def voice_convert(audio,origin_id,target_id): - sampling_rate, audio = audio - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != hps_ms.data.sampling_rate: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=hps_ms.data.sampling_rate) - - with torch.no_grad(): - y = torch.FloatTensor(audio).unsqueeze(0) - spec = spectrogram_torch(y, hps_ms.data.filter_length, - hps_ms.data.sampling_rate, hps_ms.data.hop_length, hps_ms.data.win_length, - center=False) - spec_lengths = torch.LongTensor([spec.size(-1)]) - sid_src = torch.LongTensor([origin_id]) - sid_tgt = torch.LongTensor([target_id]) - audio = net_g_ms.voice_conversion(spec, spec_lengths, sid_src=sid_src, sid_tgt=sid_tgt)[0][0,0].data.cpu().float().numpy() - return (hps_ms.data.sampling_rate, audio) 
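# A small standalone sketch of the script handling used in speech_synthesize above
# (illustrative only; it relies on the sanscript import at the top of this file,
# and the sample string is made up):
#
#     iast_text = 'saṃskṛtam'    # romanized (IAST) input
#     devanagari = sanscript.transliterate(iast_text, sanscript.IAST, sanscript.DEVANAGARI)
#     # devanagari == 'संस्कृतम्'; the synthesizer consumes Devanagari text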
- - -if __name__=='__main__': - hps_ms = utils.get_hparams_from_file('model/config.json') - n_speakers = hps_ms.data.n_speakers - n_symbols = len(hps_ms.symbols) - speakers = hps_ms.speakers - - net_g_ms = SynthesizerTrn( - n_symbols, - hps_ms.data.filter_length // 2 + 1, - hps_ms.train.segment_size // hps_ms.data.hop_length, - n_speakers=n_speakers, - **hps_ms.model) - _ = net_g_ms.eval() - utils.load_checkpoint('model/model.pth', net_g_ms) - - with gr.Blocks() as app: - gr.Markdown('# Sanskrit Text to Speech\n' - '![visitor badge](https://visitor-badge.glitch.me/badge?page_id=cjangcjengh.sanskrit-tts)') - with gr.Tab('Text to Speech'): - text_script=gr.Radio(['Devanagari','IAST','SLP1','HK'],label='Script',interactive=True,value='Devanagari') - text_input = gr.TextArea(label='Text', placeholder='Type your text here',value=DEFAULT_TEXT) - speaker_id=gr.Dropdown(speakers,label='Speaker',type='index',interactive=True,value=speakers[0]) - length_scale=gr.Slider(0.5,2,1,step=0.1,label='Speaking Speed',interactive=True) - tts_button = gr.Button('Synthesize') - audio_output = gr.Audio(label='Speech Synthesized') - text_script.change(default_text,[text_script],[text_input]) - tts_button.click(speech_synthesize,[text_input,text_script,speaker_id,length_scale],[audio_output]) - with gr.Tab('Voice Conversion'): - audio_input = gr.Audio(label='Audio',interactive=True) - speaker_input = gr.Dropdown(speakers, label='Original Speaker',type='index',interactive=True, value=speakers[0]) - speaker_output = gr.Dropdown(speakers, label='Target Speaker',type='index',interactive=True, value=speakers[0]) - vc_button = gr.Button('Convert') - audio_output_vc = gr.Audio(label='Voice Converted') - vc_button.click(voice_convert,[audio_input,speaker_input,speaker_output],[audio_output_vc]) - gr.Markdown('## Based on\n' - '- [VITS](https://github.com/jaywalnut310/vits)\n\n' - '## Dataset\n' - '- [Vāksañcayaḥ](https://www.cse.iitb.ac.in/~asr/)') - - app.launch() \ No newline at end of file diff --git a/spaces/PushkarA07/Sanskrit-Text-To-Speech/text/cleaners.py b/spaces/PushkarA07/Sanskrit-Text-To-Speech/text/cleaners.py deleted file mode 100644 index 868a236f3fa483f12e7a56120834662c80e1450d..0000000000000000000000000000000000000000 --- a/spaces/PushkarA07/Sanskrit-Text-To-Speech/text/cleaners.py +++ /dev/null @@ -1,5 +0,0 @@ -def sanskrit_cleaners(text): - text = text.replace('॥', '।').replace('ॐ', 'ओम्') - if len(text)==0 or text[-1] != '।': - text += ' ।' - return text diff --git a/spaces/RMXK/RVC_HFF/demucs/wav.py b/spaces/RMXK/RVC_HFF/demucs/wav.py deleted file mode 100644 index a65c3b2ba5aacb1fcab3753f1f85ff7b8db7fc11..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/demucs/wav.py +++ /dev/null @@ -1,174 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -from collections import OrderedDict -import hashlib -import math -import json -from pathlib import Path - -import julius -import torch as th -from torch import distributed -import torchaudio as ta -from torch.nn import functional as F - -from .audio import convert_audio_channels -from .compressed import get_musdb_tracks - -MIXTURE = "mixture" -EXT = ".wav" - - -def _track_metadata(track, sources): - track_length = None - track_samplerate = None - for source in sources + [MIXTURE]: - file = track / f"{source}{EXT}" - info = ta.info(str(file)) - length = info.num_frames - if track_length is None: - track_length = length - track_samplerate = info.sample_rate - elif track_length != length: - raise ValueError( - f"Invalid length for file {file}: " - f"expecting {track_length} but got {length}.") - elif info.sample_rate != track_samplerate: - raise ValueError( - f"Invalid sample rate for file {file}: " - f"expecting {track_samplerate} but got {info.sample_rate}.") - if source == MIXTURE: - wav, _ = ta.load(str(file)) - wav = wav.mean(0) - mean = wav.mean().item() - std = wav.std().item() - - return {"length": length, "mean": mean, "std": std, "samplerate": track_samplerate} - - -def _build_metadata(path, sources): - meta = {} - path = Path(path) - for file in path.iterdir(): - meta[file.name] = _track_metadata(file, sources) - return meta - - -class Wavset: - def __init__( - self, - root, metadata, sources, - length=None, stride=None, normalize=True, - samplerate=44100, channels=2): - """ - Waveset (or mp3 set for that matter). Can be used to train - with arbitrary sources. Each track should be one folder inside of `path`. - The folder should contain files named `{source}.{ext}`. - Files will be grouped according to `sources` (each source is a list of - filenames). - - Sample rate and channels will be converted on the fly. - - `length` is the sample size to extract (in samples, not duration). - `stride` is how many samples to move by between each example. 
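        A short worked example with illustrative numbers: after resampling, a
        track of 441000 samples with `length=220500` and `stride=44100` yields
        ceil((441000 - 220500) / 44100) + 1 = 6 examples per track, which is
        exactly the `num_examples` computation in `__init__` below.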
- """ - self.root = Path(root) - self.metadata = OrderedDict(metadata) - self.length = length - self.stride = stride or length - self.normalize = normalize - self.sources = sources - self.channels = channels - self.samplerate = samplerate - self.num_examples = [] - for name, meta in self.metadata.items(): - track_length = int(self.samplerate * meta['length'] / meta['samplerate']) - if length is None or track_length < length: - examples = 1 - else: - examples = int(math.ceil((track_length - self.length) / self.stride) + 1) - self.num_examples.append(examples) - - def __len__(self): - return sum(self.num_examples) - - def get_file(self, name, source): - return self.root / name / f"{source}{EXT}" - - def __getitem__(self, index): - for name, examples in zip(self.metadata, self.num_examples): - if index >= examples: - index -= examples - continue - meta = self.metadata[name] - num_frames = -1 - offset = 0 - if self.length is not None: - offset = int(math.ceil( - meta['samplerate'] * self.stride * index / self.samplerate)) - num_frames = int(math.ceil( - meta['samplerate'] * self.length / self.samplerate)) - wavs = [] - for source in self.sources: - file = self.get_file(name, source) - wav, _ = ta.load(str(file), frame_offset=offset, num_frames=num_frames) - wav = convert_audio_channels(wav, self.channels) - wavs.append(wav) - - example = th.stack(wavs) - example = julius.resample_frac(example, meta['samplerate'], self.samplerate) - if self.normalize: - example = (example - meta['mean']) / meta['std'] - if self.length: - example = example[..., :self.length] - example = F.pad(example, (0, self.length - example.shape[-1])) - return example - - -def get_wav_datasets(args, samples, sources): - sig = hashlib.sha1(str(args.wav).encode()).hexdigest()[:8] - metadata_file = args.metadata / (sig + ".json") - train_path = args.wav / "train" - valid_path = args.wav / "valid" - if not metadata_file.is_file() and args.rank == 0: - train = _build_metadata(train_path, sources) - valid = _build_metadata(valid_path, sources) - json.dump([train, valid], open(metadata_file, "w")) - if args.world_size > 1: - distributed.barrier() - train, valid = json.load(open(metadata_file)) - train_set = Wavset(train_path, train, sources, - length=samples, stride=args.data_stride, - samplerate=args.samplerate, channels=args.audio_channels, - normalize=args.norm_wav) - valid_set = Wavset(valid_path, valid, [MIXTURE] + sources, - samplerate=args.samplerate, channels=args.audio_channels, - normalize=args.norm_wav) - return train_set, valid_set - - -def get_musdb_wav_datasets(args, samples, sources): - metadata_file = args.metadata / "musdb_wav.json" - root = args.musdb / "train" - if not metadata_file.is_file() and args.rank == 0: - metadata = _build_metadata(root, sources) - json.dump(metadata, open(metadata_file, "w")) - if args.world_size > 1: - distributed.barrier() - metadata = json.load(open(metadata_file)) - - train_tracks = get_musdb_tracks(args.musdb, is_wav=True, subsets=["train"], split="train") - metadata_train = {name: meta for name, meta in metadata.items() if name in train_tracks} - metadata_valid = {name: meta for name, meta in metadata.items() if name not in train_tracks} - train_set = Wavset(root, metadata_train, sources, - length=samples, stride=args.data_stride, - samplerate=args.samplerate, channels=args.audio_channels, - normalize=args.norm_wav) - valid_set = Wavset(root, metadata_valid, [MIXTURE] + sources, - samplerate=args.samplerate, channels=args.audio_channels, - normalize=args.norm_wav) - return 
train_set, valid_set diff --git a/spaces/RMeli/gnina-torch/md/intro.md b/spaces/RMeli/gnina-torch/md/intro.md deleted file mode 100644 index d8f53c8329b982855f71a473026a1674dd0c520e..0000000000000000000000000000000000000000 --- a/spaces/RMeli/gnina-torch/md/intro.md +++ /dev/null @@ -1,6 +0,0 @@ -# Gnina-Torch - -Score your protein-ligand compex and predict the binding affinity with -[Gnina](https://github.com/gnina/gnina)'s scoring function. Powered by -[gnina-torch](https://github.com/RMeli/gnina-torch), a PyTorch implementation of Gnina's -scoring function. diff --git a/spaces/Ricecake123/RVC-demo/lib/uvr5_pack/lib_v5/model_param_init.py b/spaces/Ricecake123/RVC-demo/lib/uvr5_pack/lib_v5/model_param_init.py deleted file mode 100644 index b995c0bfb1194746187692e2ab1c2a6dbaaaec6c..0000000000000000000000000000000000000000 --- a/spaces/Ricecake123/RVC-demo/lib/uvr5_pack/lib_v5/model_param_init.py +++ /dev/null @@ -1,69 +0,0 @@ -import json -import os -import pathlib - -default_param = {} -default_param["bins"] = 768 -default_param["unstable_bins"] = 9 # training only -default_param["reduction_bins"] = 762 # training only -default_param["sr"] = 44100 -default_param["pre_filter_start"] = 757 -default_param["pre_filter_stop"] = 768 -default_param["band"] = {} - - -default_param["band"][1] = { - "sr": 11025, - "hl": 128, - "n_fft": 960, - "crop_start": 0, - "crop_stop": 245, - "lpf_start": 61, # inference only - "res_type": "polyphase", -} - -default_param["band"][2] = { - "sr": 44100, - "hl": 512, - "n_fft": 1536, - "crop_start": 24, - "crop_stop": 547, - "hpf_start": 81, # inference only - "res_type": "sinc_best", -} - - -def int_keys(d): - r = {} - for k, v in d: - if k.isdigit(): - k = int(k) - r[k] = v - return r - - -class ModelParameters(object): - def __init__(self, config_path=""): - if ".pth" == pathlib.Path(config_path).suffix: - import zipfile - - with zipfile.ZipFile(config_path, "r") as zip: - self.param = json.loads( - zip.read("param.json"), object_pairs_hook=int_keys - ) - elif ".json" == pathlib.Path(config_path).suffix: - with open(config_path, "r") as f: - self.param = json.loads(f.read(), object_pairs_hook=int_keys) - else: - self.param = default_param - - for k in [ - "mid_side", - "mid_side_b", - "mid_side_b2", - "stereo_w", - "stereo_n", - "reverse", - ]: - if not k in self.param: - self.param[k] = False diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/losses/varifocal_loss.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/losses/varifocal_loss.py deleted file mode 100644 index 7f00bd6916c04fef45a9aeecb50888266420daf9..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet_null/models/losses/varifocal_loss.py +++ /dev/null @@ -1,133 +0,0 @@ -import mmcv -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import weight_reduce_loss - - -@mmcv.jit(derivate=True, coderize=True) -def varifocal_loss(pred, - target, - weight=None, - alpha=0.75, - gamma=2.0, - iou_weighted=True, - reduction='mean', - avg_factor=None): - """`Varifocal Loss `_ - - Args: - pred (torch.Tensor): The prediction with shape (N, C), C is the - number of classes - target (torch.Tensor): The learning target of the iou-aware - classification score with shape (N, C), C is the number of classes. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. 
- alpha (float, optional): A balance factor for the negative part of - Varifocal Loss, which is different from the alpha of Focal Loss. - Defaults to 0.75. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - iou_weighted (bool, optional): Whether to weight the loss of the - positive example with the iou target. Defaults to True. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. Options are "none", "mean" and - "sum". - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - """ - # pred and target should be of the same size - assert pred.size() == target.size() - pred_sigmoid = pred.sigmoid() - target = target.type_as(pred) - if iou_weighted: - focal_weight = target * (target > 0.0).float() + \ - alpha * (pred_sigmoid - target).abs().pow(gamma) * \ - (target <= 0.0).float() - else: - focal_weight = (target > 0.0).float() + \ - alpha * (pred_sigmoid - target).abs().pow(gamma) * \ - (target <= 0.0).float() - loss = F.binary_cross_entropy_with_logits( - pred, target, reduction='none') * focal_weight - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - -@LOSSES.register_module() -class VarifocalLoss(nn.Module): - - def __init__(self, - use_sigmoid=True, - alpha=0.75, - gamma=2.0, - iou_weighted=True, - reduction='mean', - loss_weight=1.0): - """`Varifocal Loss `_ - - Args: - use_sigmoid (bool, optional): Whether the prediction is - used for sigmoid or softmax. Defaults to True. - alpha (float, optional): A balance factor for the negative part of - Varifocal Loss, which is different from the alpha of Focal - Loss. Defaults to 0.75. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - iou_weighted (bool, optional): Whether to weight the loss of the - positive examples with the iou target. Defaults to True. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. Options are "none", "mean" and - "sum". - loss_weight (float, optional): Weight of loss. Defaults to 1.0. - """ - super(VarifocalLoss, self).__init__() - assert use_sigmoid is True, \ - 'Only sigmoid varifocal loss supported now.' - assert alpha >= 0.0 - self.use_sigmoid = use_sigmoid - self.alpha = alpha - self.gamma = gamma - self.iou_weighted = iou_weighted - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Options are "none", "mean" and "sum". 
- - Returns: - torch.Tensor: The calculated loss - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.use_sigmoid: - loss_cls = self.loss_weight * varifocal_loss( - pred, - target, - weight, - alpha=self.alpha, - gamma=self.gamma, - iou_weighted=self.iou_weighted, - reduction=reduction, - avg_factor=avg_factor) - else: - raise NotImplementedError - return loss_cls diff --git a/spaces/SWHL/RapidOCRDemo/app.py b/spaces/SWHL/RapidOCRDemo/app.py deleted file mode 100644 index b6e56a981498aaf03eddbb56ec759d35fc90e867..0000000000000000000000000000000000000000 --- a/spaces/SWHL/RapidOCRDemo/app.py +++ /dev/null @@ -1,206 +0,0 @@ -# -*- encoding: utf-8 -*- -# @Author: SWHL -# @Contact: liekkaskono@163.com -import time -from pathlib import Path - -import cv2 -import numpy as np -import pandas as pd -import streamlit as st -from PIL import Image -from rapidocr_onnxruntime import RapidOCR -from streamlit_image_select import image_select - -from utils import visualize - -font_dict = { - "ch": "chinese_cht.ttf", - "japan": "japan.ttc", - "korean": "korean.ttf", - "en": "chinese_cht.ttf", -} - - -def init_sidebar(): - st.session_state["params"] = {} - - st.sidebar.markdown( - "### [🛠️ Parameter Settings](https://github.com/RapidAI/RapidOCR/wiki/config_parameter)" - ) - box_thresh = st.sidebar.slider( - "box_thresh", - min_value=0.0, - max_value=1.0, - value=0.5, - step=0.1, - help="检测到的框是文本的概率,值越大,框中是文本的概率就越大。存在漏检时,调低该值。取值范围:[0, 1.0],默认值为0.5", - ) - st.session_state["params"]["box_thresh"] = box_thresh - - unclip_ratio = st.sidebar.slider( - "unclip_ratio", - min_value=1.5, - max_value=2.0, - value=1.6, - step=0.1, - help="控制文本检测框的大小,值越大,检测框整体越大。在出现框截断文字的情况,调大该值。取值范围:[1.5, 2.0],默认值为1.6", - ) - st.session_state["params"]["unclip_ratio"] = unclip_ratio - - text_score = st.sidebar.slider( - "text_score", - min_value=0.0, - max_value=1.0, - value=0.5, - step=0.1, - help="文本识别结果是正确的置信度,值越大,显示出的识别结果更准确。存在漏检时,调低该值。取值范围:[0, 1.0],默认值为0.5", - ) - st.session_state["params"]["text_score"] = text_score - - with st.sidebar.container(): - img_path = image_select( - label="Examples(click to select):", - images=examples, - key="equation_default", - use_container_width=True, - ) - img = cv2.imread(img_path) - st.session_state["img"] = img - - -def inference( - text_det=None, - text_rec=None, -): - img = st.session_state.get("img") - box_thresh = st.session_state["params"].get("box_thresh") - unclip_ratio = st.session_state["params"].get("unclip_ratio") - text_score = st.session_state["params"].get("text_score") - - det_model_path = str(Path("models") / "text_det" / text_det) - rec_model_path = str(Path("models") / "text_rec" / text_rec) - if ( - "v2" in rec_model_path - or "korean" in rec_model_path - or "japan" in rec_model_path - ): - rec_image_shape = [3, 32, 320] - else: - rec_image_shape = [3, 48, 320] - - rapid_ocr = RapidOCR( - det_model_path=det_model_path, - rec_model_path=rec_model_path, - rec_img_shape=rec_image_shape, - ) - - if "ch" in rec_model_path or "en" in rec_model_path: - lan_name = "ch" - elif "japan" in rec_model_path: - lan_name = "japan" - elif "korean" in rec_model_path: - lan_name = "korean" - else: - lan_name = "ch" - - ocr_result, infer_elapse = rapid_ocr( - img, box_thresh=box_thresh, unclip_ratio=unclip_ratio, text_score=text_score - ) - if not ocr_result or not infer_elapse: - return None, None, None - - det_cost, cls_cost, rec_cost = infer_elapse - elapse = f"- `det cost`: 
{det_cost:.5f}\n - `cls cost`: {cls_cost:.5f}\n - `rec cost`: {rec_cost:.5f}" - dt_boxes, rec_res, scores = list(zip(*ocr_result)) - font_path = Path("fonts") / font_dict.get(lan_name) - vis_img = visualize( - Image.fromarray(img), dt_boxes, rec_res, scores, font_path=str(font_path) - ) - out_df = pd.DataFrame( - [[rec, score] for rec, score in zip(rec_res, scores)], - columns=("Rec", "Score"), - ) - return vis_img, out_df, elapse - - -def tips(txt: str, wait_time: int = 2, icon: str = "🎉"): - st.toast(txt, icon=icon) - time.sleep(wait_time) - - -if __name__ == "__main__": - st.markdown( - "

Rapid⚡OCR

", - unsafe_allow_html=True, - ) - st.markdown( - """ -

- - - - PyPI -

- """, - unsafe_allow_html=True, - ) - - examples = [ - "images/1.jpg", - "images/ch_en_num.jpg", - "images/air_ticket.jpg", - "images/car_plate.jpeg", - "images/train_ticket.jpeg", - "images/japan_2.jpg", - "images/korean_1.jpg", - ] - - init_sidebar() - - menu_det, menu_rec = st.columns([1, 1]) - det_models = [ - "ch_PP-OCRv4_det_infer.onnx", - "ch_PP-OCRv3_det_infer.onnx", - "ch_PP-OCRv2_det_infer.onnx", - "ch_ppocr_server_v2.0_det_infer.onnx", - ] - select_det = menu_det.selectbox("Det model:", det_models) - - rec_models = [ - "ch_PP-OCRv4_rec_infer.onnx", - "ch_PP-OCRv3_rec_infer.onnx", - "ch_PP-OCRv2_rec_infer.onnx", - "ch_PP-OCRv4_det_server_infer.onnx", - "ch_ppocr_server_v2.0_rec_infer.onnx", - "en_PP-OCRv3_rec_infer.onnx", - "en_number_mobile_v2.0_rec_infer.onnx", - "korean_mobile_v2.0_rec_infer.onnx", - "japan_rec_crnn_v2.onnx", - ] - select_rec = menu_rec.selectbox("Rec model:", rec_models) - - with st.form("my-form", clear_on_submit=True): - img_file_buffer = st.file_uploader( - "Upload an image", - accept_multiple_files=False, - label_visibility="visible", - type=["png", "jpg", "jpeg", "bmp"], - ) - submit = st.form_submit_button("Upload") - if submit and img_file_buffer is not None: - image = Image.open(img_file_buffer) - img = np.array(image) - st.session_state["img"] = img - - if st.session_state["img"] is not None: - out_img, out_json, elapse = inference(select_det, select_rec) - if all(v is not None for v in [out_img, out_json, elapse]): - st.markdown("#### Visualize:") - st.image(out_img) - - st.markdown("### Rec Result:") - st.markdown(elapse) - st.dataframe(out_json, use_container_width=True) - else: - tips("识别结果为空", wait_time=5, icon="⚠️") diff --git a/spaces/SakshiRathi77/SakshiRathi77-Wav2Vec2-hi-kagglex/app.py b/spaces/SakshiRathi77/SakshiRathi77-Wav2Vec2-hi-kagglex/app.py deleted file mode 100644 index 4cf090b5d75c52f35944fd078ac57109b73596c5..0000000000000000000000000000000000000000 --- a/spaces/SakshiRathi77/SakshiRathi77-Wav2Vec2-hi-kagglex/app.py +++ /dev/null @@ -1,99 +0,0 @@ -import torch -import gradio as gr -import pytube as pt -from transformers import pipeline -from huggingface_hub import model_info -import time -import unicodedata -from gradio.themes.utils.theme_dropdown import create_theme_dropdown - -MODEL_NAME = "SakshiRathi77/wav2vec2-large-xlsr-300m-hi-kagglex" -lang = "hi" - -my_theme = gr.Theme.from_hub('freddyaboulton/dracula_revamped') -device = 0 if torch.cuda.is_available() else "cpu" -pipe = pipeline( - task="automatic-speech-recognition", - model=MODEL_NAME, - device=device, -) - -def transcribe(microphone, file_upload): - warn_output = "" - if (microphone is not None) and (file_upload is not None): - warn_output = ( - "WARNING: You've uploaded an audio file and used the microphone. " - "The recorded file from the microphone will be used and the uploaded audio will be discarded.\n" - ) - - elif (microphone is None) and (file_upload is None): - return "ERROR: You have to either use the microphone or upload an audio file" - file = microphone if microphone is not None else file_upload - text = pipe(file)["text"] - - return warn_output + text - - -def rt_transcribe(audio, state=""): - time.sleep(2) - text = pipe(audio)["text"] - state += unicodedata.normalize("NFC",text) + " " - - return state, state - - - -demo = gr.Blocks(theme=my_theme) -examples=[["examples/example1.mp3"], ["examples/example2.mp3"],["examples/example3.mp3"]] - -title =""" -HindiSpeechPro: WAV2VEC-Powered ASR Interface -""" - -description = """ -

-Welcome to HindiSpeechPro, a cutting-edge interface powered by a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.
-logo
-""" - - -# article = "

Source Code on Github | Reference | Feedback Form
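# Illustration only (not part of the original app): a minimal stand-alone sketch of how the
# same checkpoint can be used outside Gradio. The audio path below is a placeholder.
def _example_transcription(audio_path="sample_hindi_clip.wav"):
    from transformers import pipeline  # same dependency the app already imports
    asr = pipeline(
        "automatic-speech-recognition",
        model="SakshiRathi77/wav2vec2-large-xlsr-300m-hi-kagglex",
    )
    return asr(audio_path)["text"]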

" - - -mf_transcribe = gr.Interface( - fn=transcribe, - inputs=[ - gr.inputs.Audio(source="microphone", type="filepath"), - gr.inputs.Audio(source="upload", type="filepath"), - ], - outputs="text", - # theme="huggingface", - title=title, - description= description , - allow_flagging="never", - examples=examples, -) - -rt_transcribe = gr.Interface( - fn=rt_transcribe, - inputs=[ - gr.Audio(source="microphone", type="filepath", streaming=True), - "state" - ], - outputs=[ "textbox", - "state"], - # theme="huggingface", - title=title, - description= description , - allow_flagging="never", - live=True, -) - - -with demo: - gr.TabbedInterface([mf_transcribe, rt_transcribe], ["Transcribe Audio", "Transcribe Realtime Voice"]) - -demo.launch(share=True) diff --git a/spaces/Salesforce/EDICT/my_half_diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_onnx.py b/spaces/Salesforce/EDICT/my_half_diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_onnx.py deleted file mode 100644 index 7ff3ff22fc21014fa7b6c12fba96a2ca36fc9cc4..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_half_diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_onnx.py +++ /dev/null @@ -1,165 +0,0 @@ -import inspect -from typing import List, Optional, Union - -import numpy as np - -from transformers import CLIPFeatureExtractor, CLIPTokenizer - -from ...onnx_utils import OnnxRuntimeModel -from ...pipeline_utils import DiffusionPipeline -from ...schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler -from . import StableDiffusionPipelineOutput - - -class StableDiffusionOnnxPipeline(DiffusionPipeline): - vae_decoder: OnnxRuntimeModel - text_encoder: OnnxRuntimeModel - tokenizer: CLIPTokenizer - unet: OnnxRuntimeModel - scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler] - safety_checker: OnnxRuntimeModel - feature_extractor: CLIPFeatureExtractor - - def __init__( - self, - vae_decoder: OnnxRuntimeModel, - text_encoder: OnnxRuntimeModel, - tokenizer: CLIPTokenizer, - unet: OnnxRuntimeModel, - scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler], - safety_checker: OnnxRuntimeModel, - feature_extractor: CLIPFeatureExtractor, - ): - super().__init__() - scheduler = scheduler.set_format("np") - self.register_modules( - vae_decoder=vae_decoder, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - - def __call__( - self, - prompt: Union[str, List[str]], - height: Optional[int] = 512, - width: Optional[int] = 512, - num_inference_steps: Optional[int] = 50, - guidance_scale: Optional[float] = 7.5, - eta: Optional[float] = 0.0, - latents: Optional[np.ndarray] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - **kwargs, - ): - if isinstance(prompt, str): - batch_size = 1 - elif isinstance(prompt, list): - batch_size = len(prompt) - else: - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - # get prompt text embeddings - text_input = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="np", - ) - text_embeddings = self.text_encoder(input_ids=text_input.input_ids.astype(np.int32))[0] - - # here `guidance_scale` is defined analog to the guidance weight `w` 
of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - max_length = text_input.input_ids.shape[-1] - uncond_input = self.tokenizer( - [""] * batch_size, padding="max_length", max_length=max_length, return_tensors="np" - ) - uncond_embeddings = self.text_encoder(input_ids=uncond_input.input_ids.astype(np.int32))[0] - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = np.concatenate([uncond_embeddings, text_embeddings]) - - # get the initial random noise unless the user supplied it - latents_shape = (batch_size, 4, height // 8, width // 8) - if latents is None: - latents = np.random.randn(*latents_shape).astype(np.float32) - elif latents.shape != latents_shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {latents_shape}") - - # set timesteps - accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys()) - extra_set_kwargs = {} - if accepts_offset: - extra_set_kwargs["offset"] = 1 - - self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs) - - # if we use LMSDiscreteScheduler, let's make sure latents are mulitplied by sigmas - if isinstance(self.scheduler, LMSDiscreteScheduler): - latents = latents * self.scheduler.sigmas[0] - - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - for i, t in enumerate(self.progress_bar(self.scheduler.timesteps)): - # expand the latents if we are doing classifier free guidance - latent_model_input = np.concatenate([latents] * 2) if do_classifier_free_guidance else latents - if isinstance(self.scheduler, LMSDiscreteScheduler): - sigma = self.scheduler.sigmas[i] - # the model input needs to be scaled to match the continuous ODE formulation in K-LMS - latent_model_input = latent_model_input / ((sigma**2 + 1) ** 0.5) - - # predict the noise residual - noise_pred = self.unet( - sample=latent_model_input, timestep=np.array([t]), encoder_hidden_states=text_embeddings - ) - noise_pred = noise_pred[0] - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = np.split(noise_pred, 2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - if isinstance(self.scheduler, LMSDiscreteScheduler): - latents = self.scheduler.step(noise_pred, i, latents, **extra_step_kwargs).prev_sample - else: - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # scale and decode the image latents with vae - latents = 1 / 0.18215 * latents - image = self.vae_decoder(latent_sample=latents)[0] - - image = np.clip(image / 2 + 0.5, 0, 1) - image = image.transpose((0, 2, 3, 1)) - - # run safety checker - safety_checker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="np") - image, has_nsfw_concept = self.safety_checker(clip_input=safety_checker_input.pixel_values, images=image) - - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image, has_nsfw_concept) - - return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept) diff --git a/spaces/SeViLA/SeViLA/lavis/datasets/download_scripts/DownloadConceptualCaptions/README.md b/spaces/SeViLA/SeViLA/lavis/datasets/download_scripts/DownloadConceptualCaptions/README.md deleted file mode 100644 index 0dd0b9d5bfe304770d06b2adc363f33a6c390ced..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/datasets/download_scripts/DownloadConceptualCaptions/README.md +++ /dev/null @@ -1,22 +0,0 @@ - - -# Download Conceptual Captions Data - -Place data from: https://ai.google.com/research/ConceptualCaptions/download in this folder - -`Train_GCC-training.tsv / cc3m.tsv` Training Split (3,318,333) - -run `download_data_cc3m.py` or `download_data_cc12m.py`. - -Images will be in default LAVIS cache folders. You can stop and resume, the settings for splitting downloads into chunks / threads are not optimal, but it maxed out my connection so i kept them as is. - -Note: A previous version of this script used a different file naming scheme, this changed and if you are resuming a previously started download, you will get duplicates. - -A bunch of them will fail to download, and return web pages instead. These will need to be cleaned up later. See `downloaded_validation_report.tsv` after it downloads for HTTP errors. Around 8% of images are gone, based on validation set results. Setting the user agent could fix some errors too maybe - not sure if any requests are rejected by sites based on this. 
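As a rough sketch of the user-agent and HTML-error-page point above (the header string, timeout, and helper name are assumptions for illustration, not part of the scripts in this folder), a single URL could be fetched and validated like this:

```python
import requests

def fetch_image(url, out_path, timeout=10):
    # Browser-like User-Agent; purely illustrative, the real scripts may differ.
    headers = {"User-Agent": "Mozilla/5.0 (X11; Linux x86_64)"}
    resp = requests.get(url, headers=headers, timeout=timeout)
    # Many failed downloads come back as HTML error pages, so check the content type.
    if resp.status_code != 200 or not resp.headers.get("Content-Type", "").startswith("image/"):
        return False
    with open(out_path, "wb") as f:
        f.write(resp.content)
    return True
```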
- -It should take about a day or two to download the training data, keep an eye on disk space. diff --git a/spaces/SeViLA/SeViLA/lavis/models/blip2_models/blip2_fmr.py b/spaces/SeViLA/SeViLA/lavis/models/blip2_models/blip2_fmr.py deleted file mode 100644 index 8a6c3ee70d2ac4262054cda223e48a2581e6f7bf..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/models/blip2_models/blip2_fmr.py +++ /dev/null @@ -1,397 +0,0 @@ -""" - Copyright (c) 2023, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" -import logging - -import copy -import torch -import torch.nn as nn -from torch.cuda.amp import autocast as autocast -from transformers import T5TokenizerFast, BertTokenizer - -from lavis.common.registry import registry -from lavis.models.blip2_models.blip2 import Blip2Base, disabled_train -from lavis.models.blip2_models.modeling_t5 import T5Config, T5ForConditionalGeneration - -@registry.register_model("blip2_fmr") # frame-level moment retrieval -class Blip2FMR(Blip2Base): - """ - BLIP2 T5 model. - Supported model types: - - pretrain_flant5xl: pretrained model with FlanT5-XL - - pretrain_flant5xxl: pretrained model with FlanT5-XXL - - caption_coco_flant5xl: fintuned image captioning model with FlanT5-XL - Usage: - >>> from lavis.models import load_model - >>> model = load_model("blip2_t5", "pretrain_flant5xl") - """ - - PRETRAINED_MODEL_CONFIG_DICT = { - "pretrain_flant5xl": "configs/models/blip2/blip2_pretrain_flant5xl.yaml", - "pretrain_flant5xxl": "configs/models/blip2/blip2_pretrain_flant5xxl.yaml", - "caption_coco_flant5xl": "configs/models/blip2/blip2_caption_flant5xl.yaml", - } - - def __init__( self, img_size=224, drop_path_rate=0, - use_grad_checkpoint=False, vit_precision="fp16", freeze_vit=True, - num_query_token=32, t5_model="google/flan-t5-xl", prompt="", - max_txt_len=32, frame_num=8, answer_num=5, apply_lemmatizer=False, task='qa'): - """ - apply_lemmatizer: when set to True, postprocess predict_answers() result with lemmas. 
- """ - super().__init__() - - self.task = task - - # vision backbone - self.visual_encoder, self.ln_vision_loc, _ = self.init_vision_encoder( - img_size, drop_path_rate, use_grad_checkpoint, vit_precision) - # Freeze ViT - if freeze_vit: - for name, param in self.visual_encoder.named_parameters(): - param.requires_grad = False - self.visual_encoder = self.visual_encoder.eval() - self.visual_encoder.train = disabled_train - logging.info("freeze vision encoder") - - # text backbone - self.t5_tokenizer = T5TokenizerFast.from_pretrained(t5_model) - t5_config = T5Config.from_pretrained(t5_model) - t5_config.dense_act_fn = "gelu" - self.t5_model = T5ForConditionalGeneration.from_pretrained( - t5_model, config=t5_config) - # Freeze T5 - for name, param in self.t5_model.named_parameters(): - param.requires_grad = False - param.data = param.data.bfloat16() - - # Q-Former for Frame Localization - self.Qformer_loc, self.query_tokens_loc = self.init_Qformer( - num_query_token, self.visual_encoder.num_features) - - self.Qformer_loc.cls = None - self.Qformer_loc.bert.embeddings.word_embeddings = None - self.Qformer_loc.bert.embeddings.position_embeddings = None - for layer in self.Qformer_loc.bert.encoder.layer: - layer.output = None - layer.intermediate = None - self.t5_proj_loc = nn.Linear( - self.Qformer_loc.config.hidden_size, self.t5_model.config.hidden_size - ) - - self.max_txt_len = 77 - #self.prompt = prompt - answer_id = [71, 272, 205, 309, 262] # A B C D E - self.answer_id = answer_id[:answer_num] - # self.answer_id = [71, 272] - self.yes_id, self.no_id = 4273, 150 - - self._apply_lemmatizer = apply_lemmatizer - self._lemmatizer = None - - self.frame_num = frame_num - self.ANS_MAP = {'A':0, 'B':1, 'C':2, 'D':3, 'E':4} - self.frame_prefix = ['Frame: '] - - def forward(self, samples): - - image = samples["video"] - text_input = samples['loc_input'] # query + options + Prompt - bs_answer = samples['qa_output'] # yes or no - flat_answer = [] - for answer in bs_answer: - answer = answer.split('_') - for a in answer: - flat_answer.append(a) - - b, t, c, w, h = image.shape - image = image.reshape(-1, c, w, h) - image_embeds = self.ln_vision_loc(self.visual_encoder(image)) # bt, n, c - _, n, _ = image_embeds.shape - image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(image.device) # bt n c - - #pass - query_tokens = self.query_tokens_loc.expand(image_embeds.shape[0], -1, -1) - query_output = self.Qformer_loc.bert( - query_embeds=query_tokens, encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, return_dict=True) - inputs_t5 = self.t5_proj_loc(query_output.last_hidden_state) - atts_t5 = torch.ones(inputs_t5.size()[:-1], dtype=torch.long).to(image.device) - - with torch.cuda.amp.autocast(dtype=torch.bfloat16): - # Frame Prefix - frame_prefix = self.t5_tokenizer( - self.frame_prefix, padding="longest", add_special_tokens=False, - truncation=True, max_length=self.max_txt_len, return_tensors="pt", - ).to(image.device) # - # print('frame_prefix 1', frame_prefix.input_ids.shape) 8, 4 - frame_prefix_id = torch.repeat_interleave(frame_prefix.input_ids, b*t, 0) - frame_prefix_mask = torch.repeat_interleave(frame_prefix.attention_mask, b*t, 0) - # Question, Options input - input_tokens = self.t5_tokenizer( - text_input, padding="longest", truncation=True, - max_length=self.max_txt_len, return_tensors="pt").to(image.device) - input_ids = torch.repeat_interleave(input_tokens.input_ids, t, 0) - input_attention_mask = torch.repeat_interleave(input_tokens.attention_mask, t, 0) 
- - # Output target - output_tokens = self.t5_tokenizer( - flat_answer, padding="longest", truncation=True, - max_length=self.max_txt_len, return_tensors="pt").to(image.device) - targets = output_tokens.input_ids.masked_fill( - output_tokens.input_ids == self.t5_tokenizer.pad_token_id, -100) - output_tokens_mask = output_tokens.attention_mask #torch.repeat_interleave(output_tokens.attention_mask, t, dim=0) - #targets = torch.repeat_interleave(targets, t, dim=0) - # input for QA - frame_predix_embed = self.t5_model.encoder.embed_tokens(frame_prefix_id) - inputs_embeds = self.t5_model.encoder.embed_tokens(input_ids) - inputs_embeds = torch.cat([frame_predix_embed, inputs_t5, inputs_embeds], dim=1) - encoder_atts = torch.cat([frame_prefix_mask, atts_t5, input_attention_mask], dim=1) - - outputs = self.t5_model( - inputs_embeds=inputs_embeds, attention_mask=encoder_atts, - decoder_attention_mask=output_tokens_mask, return_dict=True, labels=targets) - loss = outputs.loss - - return {"loss": loss} - - @torch.no_grad() - def generate(self, - samples, - use_nucleus_sampling=False, - num_beams=5, max_length=30, - min_length=1, top_p=0.9, - repetition_penalty=1.0, length_penalty=1.0, - num_captions=1, temperature=1,): - """ - Args: - samples (dict): A dictionary containing the following keys: - - image (torch.Tensor): A tensor of shape (batch_size, 3, H, W) - use_nucleus_sampling (bool): Whether to use nucleus sampling. If False, use top-k sampling. - num_beams (int): Number of beams for beam search. 1 means no beam search. - max_length (int): The maximum length of the sequence to be generated. - min_length (int): The minimum length of the sequence to be generated. - top_p (float): The cumulative probability for nucleus sampling. - repetition_penalty (float): The parameter for repetition penalty. 1.0 means no penalty. - num_captions (int): Number of captions to be generated for each image. - Returns: - captions (list): A list of strings of length batch_size * num_captions. - """ - out = {} - image, qid = samples["video"], samples['question_id'] - text_input, bs_answer = samples['loc_input'], samples['qa_output'] # Q + Options + Prompt: Choose an answer from options based on the frame. 
- # print('text_input', text_input) - flat_answer = [] - # print('bs_answer', bs_answer) - for answer in bs_answer: - answer = answer.split('_') - for a in answer: - flat_answer.append(a) - # print('flat_answer', flat_answer) - - b, t, c, w, h = image.shape - image = image.reshape(-1, c, w, h) - with torch.cuda.amp.autocast(enabled=(self.device != torch.device("cpu"))): - image_embeds = self.ln_vision_loc(self.visual_encoder(image)) # bt, n, c - _, n, _ = image_embeds.shape - image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to(image.device) # bt n c - - query_tokens = self.query_tokens_loc.expand(image_embeds.shape[0], -1, -1) - query_output = self.Qformer_loc.bert( - query_embeds=query_tokens, encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, return_dict=True) - inputs_t5 = self.t5_proj_loc(query_output.last_hidden_state) - atts_t5 = torch.ones(inputs_t5.size()[:-1], dtype=torch.long).to(image.device) - - with torch.cuda.amp.autocast(dtype=torch.bfloat16): - - frame_prefix = self.t5_tokenizer( - self.frame_prefix, padding="longest", add_special_tokens=False, - truncation=True, max_length=self.max_txt_len, return_tensors="pt", - ).to(image.device) # - #print('frame_prefix 1', frame_prefix.input_ids.shape) 8, 4 - frame_prefix_id = torch.repeat_interleave(frame_prefix.input_ids, b*t, 0) - frame_prefix_mask = torch.repeat_interleave(frame_prefix.attention_mask, b*t, 0) - # Question, Options input - input_tokens = self.t5_tokenizer( - text_input, padding="longest", truncation=True, - max_length=self.max_txt_len, return_tensors="pt").to(image.device) - input_ids = torch.repeat_interleave(input_tokens.input_ids, t, 0) - input_attention_mask = torch.repeat_interleave(input_tokens.attention_mask, t, 0) - - frame_predix_embed = self.t5_model.encoder.embed_tokens(frame_prefix_id) - inputs_embeds = self.t5_model.encoder.embed_tokens(input_ids) - inputs_embeds = torch.cat([frame_predix_embed, inputs_t5, inputs_embeds], dim=1) - encoder_atts = torch.cat([frame_prefix_mask, atts_t5, input_attention_mask], dim=1) - - outputs = self.t5_model.generate( - inputs_embeds=inputs_embeds, attention_mask=encoder_atts, - do_sample=use_nucleus_sampling, top_p=top_p, - temperature=temperature, num_beams=1, - max_new_tokens=max_length, min_length=min_length, - repetition_penalty=repetition_penalty, length_penalty=length_penalty, - num_return_sequences=num_captions, return_dict_in_generate=True, - output_hidden_states=True, output_scores=True) - # print('answer', answer) - pred_logits = outputs.scores[0] #outputs_embed_qa.logits.detach() - pred_logits = pred_logits[:, [self.no_id, self.yes_id]] # b, 5 - pred_yes_score = pred_logits[:, 1].cpu().tolist() - pred_ans = torch.argmax(pred_logits, dim=-1).cpu().tolist() - - out['answer'] = flat_answer - multiframe_qid = [] - for q in qid: - for i in range(t): - multiframe_qid.append(q) - - out['qid'] = multiframe_qid - out['yes_score'] = pred_yes_score - out['pred_ans'] = pred_ans - - return out - - def predict_answers( - self, - samples, - num_beams=5, - inference_method="generate", - max_len=10, - min_len=1, - num_ans_candidates=128, - answer_list=None, - prompt="", - length_penalty=-1, - **kwargs - ): - image = samples["image"] - with torch.cuda.amp.autocast(enabled=(self.device != torch.device("cpu"))): - image_embeds = self.ln_vision_loc(self.visual_encoder(image)) - image_atts = torch.ones(image_embeds.size()[:-1], dtype=torch.long).to( - image.device - ) - - query_tokens = self.query_tokens.expand(image_embeds.shape[0], -1, 
-1) - query_output = self.Qformer.bert( - query_embeds=query_tokens, - encoder_hidden_states=image_embeds, - encoder_attention_mask=image_atts, - return_dict=True, - ) - - inputs_t5 = self.t5_proj(query_output.last_hidden_state) - atts_t5 = torch.ones(inputs_t5.size()[:-1], dtype=torch.long).to(image.device) - - if isinstance(samples["text_input"], str): - samples["text_input"] = [samples["text_input"]] - if prompt: - text_input = [prompt.format(question) for question in samples["text_input"]] - else: - text_input = samples["text_input"] - - input_tokens = self.t5_tokenizer( - text_input, padding="longest", return_tensors="pt" - ).to(image.device) - - encoder_atts = torch.cat([atts_t5, input_tokens.attention_mask], dim=1) - - device_type = "cuda" if "cuda" in str(self.device) else "cpu" - with torch.amp.autocast(device_type=device_type, dtype=torch.bfloat16): - inputs_embeds = self.t5_model.encoder.embed_tokens(input_tokens.input_ids) - inputs_embeds = torch.cat([inputs_t5, inputs_embeds], dim=1) - - outputs = self.t5_model.generate( - inputs_embeds=inputs_embeds, - attention_mask=encoder_atts, - do_sample=False, - num_beams=num_beams, - max_new_tokens=max_len, - min_length=min_len, - length_penalty=length_penalty, - ) - output_text = self.t5_tokenizer.batch_decode( - outputs, skip_special_tokens=True - ) - - if self._apply_lemmatizer: - output_text = self._lemmatize(output_text) - - return output_text - - def _lemmatize(self, answers): - def apply(answer): - doc = self.lemmatizer(answer) - - words = [] - for token in doc: - if token.pos_ in ["NOUN", "VERB"]: - words.append(token.lemma_) - else: - words.append(token.text) - answer = " ".join(words) - - return answer - - return [apply(answer) for answer in answers] - - @property - def lemmatizer(self): - if self._lemmatizer is None: - try: - import spacy - - self._lemmatizer = spacy.load("en_core_web_sm") - except ImportError: - logging.error( - """ - Please install spacy and en_core_web_sm model to apply lemmatization. 
- python -m spacy download en_core_web_sm - OR - import spacy.cli - spacy.cli.download("en_core_web_sm") - """ - ) - exit(1) - - return self._lemmatizer - - @classmethod - def from_config(cls, cfg): - img_size = cfg.get("image_size") - num_query_token = cfg.get("num_query_token") - t5_model = cfg.get("t5_model") - - drop_path_rate = cfg.get("drop_path_rate", 0) - use_grad_checkpoint = cfg.get("use_grad_checkpoint", False) - vit_precision = cfg.get("vit_precision", "fp16") - freeze_vit = cfg.get("freeze_vit", True) - - prompt = cfg.get("prompt", "") - max_txt_len = cfg.get("max_txt_len", 32) - frame_num = cfg.get("frame_num", 8) - answer_num = cfg.get("answer_num", 5) - apply_lemmatizer = cfg.get("apply_lemmatizer", False) - task = cfg.get("task", 'train_loc_freeze_qa') - - model = cls( - img_size=img_size, - drop_path_rate=drop_path_rate, - use_grad_checkpoint=use_grad_checkpoint, - vit_precision=vit_precision, - freeze_vit=freeze_vit, - num_query_token=num_query_token, - t5_model=t5_model, - prompt=prompt, - max_txt_len=max_txt_len, - apply_lemmatizer=apply_lemmatizer, - frame_num=frame_num, - answer_num=answer_num, - task=task, - ) - model.load_checkpoint_from_config(cfg) - # if 'pretrain_loc' in task: - # model.load_qformer_loc() - - return model \ No newline at end of file diff --git a/spaces/SeyedAli/Arabic-Speech-Synthesis/app.py b/spaces/SeyedAli/Arabic-Speech-Synthesis/app.py deleted file mode 100644 index 5afded8bb8ad57abc6b27fb7146f2e65d792a64b..0000000000000000000000000000000000000000 --- a/spaces/SeyedAli/Arabic-Speech-Synthesis/app.py +++ /dev/null @@ -1,19 +0,0 @@ -import tempfile ,os -import gradio as gr -from transformers import VitsModel, AutoTokenizer -import torch -import numpy as np -import torchaudio - -model = VitsModel.from_pretrained("SeyedAli/Arabic-Speech-synthesis") -tokenizer = AutoTokenizer.from_pretrained("SeyedAli/Arabic-Speech-synthesis") - -def TTS(text): - inputs = tokenizer(text, return_tensors="pt") - with torch.no_grad(): - output = model(**inputs).waveform - with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as fp: - torchaudio.save(fp, output, model.config.sampling_rate,format="wav") - return fp.name -iface = gr.Interface(fn=TTS, inputs="text", outputs="audio") -iface.launch(share=False) \ No newline at end of file diff --git a/spaces/SeyedAli/Persian-To-English-Translation/README.md b/spaces/SeyedAli/Persian-To-English-Translation/README.md deleted file mode 100644 index e2263d6e188e4c6f2503a0547524339b89411127..0000000000000000000000000000000000000000 --- a/spaces/SeyedAli/Persian-To-English-Translation/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Persian To English Translation -emoji: 🌍 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.44.4 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Sky5408er/vits-uma-genshin-honkai/attentions.py b/spaces/Sky5408er/vits-uma-genshin-honkai/attentions.py deleted file mode 100644 index 86bc73b5fe98cc7b443e9078553920346c996707..0000000000000000000000000000000000000000 --- a/spaces/Sky5408er/vits-uma-genshin-honkai/attentions.py +++ /dev/null @@ -1,300 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - 
super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = 
n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." 
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/Spjkjlkkklj/dalle/README.md b/spaces/Spjkjlkkklj/dalle/README.md deleted file mode 100644 index d964d2465e364c0314127536f5ad26c2711a1075..0000000000000000000000000000000000000000 --- a/spaces/Spjkjlkkklj/dalle/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Dalle -emoji: 💻 -colorFrom: pink -colorTo: gray -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Sreezx/Sentzi/test/test_utils/debug.py b/spaces/Sreezx/Sentzi/test/test_utils/debug.py deleted file mode 100644 index 73e9a511a445e43b5d962368a635882822f92d6f..0000000000000000000000000000000000000000 --- a/spaces/Sreezx/Sentzi/test/test_utils/debug.py +++ /dev/null @@ -1,12 +0,0 @@ -# all the debug functions go here -from loguru import logger -import sys - -logger.remove() - -logger.add( - sys.stdout, - level="DEBUG", - format="[sentzi-log] [{time:DD-MMM-YYYY HH:mm:ss}] [{level}] {message}", - colorize=True -) diff --git a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/modules/activations.py b/spaces/SuYuanS/AudioCraft_Plus/audiocraft/modules/activations.py deleted file mode 100644 index 2d83d7c4c2dc84c64b724eadbe06157507d4f20d..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/modules/activations.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -from torch import Tensor -from typing import Union, Callable - - -class CustomGLU(nn.Module): - """Custom Gated Linear Unit activation. 
- Applies a modified gated linear unit :math:`a * f(b)` where :math:`a` is the first half - of the input matrices, :math:`b` is the second half, and :math:`f` is a provided activation - function (i.e. sigmoid, swish, etc.). - - Args: - activation (nn.Module): The custom activation to apply in the Gated Linear Unit - dim (int): the dimension on which to split the input. Default: -1 - - Shape: - - Input: :math:`(\ast_1, N, \ast_2)` where `*` means, any number of additional - dimensions - - Output: :math:`(\ast_1, M, \ast_2)` where :math:`M=N/2` - - Examples:: - >>> m = CustomGLU(nn.Sigmoid()) - >>> input = torch.randn(4, 2) - >>> output = m(input) - """ - def __init__(self, activation: nn.Module, dim: int = -1): - super(CustomGLU, self).__init__() - self.dim = dim - self.activation = activation - - def forward(self, x: Tensor): - assert x.shape[self.dim] % 2 == 0 # M = N / 2 - a, b = torch.chunk(x, 2, dim=self.dim) - return a * self.activation(b) - - -class SwiGLU(CustomGLU): - """SiLU Gated Linear Unit activation. - Applies SiLU Gated Linear Unit :math:`a * SiLU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. - - Args: - dim (int): the dimension on which to split the input. Default: -1 - """ - def __init__(self, dim: int = -1): - super(SwiGLU, self).__init__(nn.SiLU(), dim) - - -class GeGLU(CustomGLU): - """GeLU Gated Linear Unit activation. - Applies GeLU Gated Linear Unit :math:`a * GELU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. - - Args: - dim (int): the dimension on which to split the input. Default: -1 - """ - def __init__(self, dim: int = -1): - super(GeGLU, self).__init__(nn.GELU(), dim) - - -class ReGLU(CustomGLU): - """ReLU Gated Linear Unit activation. - Applies ReLU Gated Linear Unit :math:`a * ReLU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. - - Args: - dim (int): the dimension on which to split the input. Default: -1 - """ - def __init__(self, dim: int = -1): - super(ReGLU, self).__init__(nn.ReLU(), dim) - - -def get_activation_fn( - activation: Union[str, Callable[[Tensor], Tensor]] -) -> Union[str, Callable[[Tensor], Tensor]]: - """Helper function to map an activation string to the activation class. - If the supplied activation is not a string that is recognized, the activation is passed back. - - Args: - activation (str, or Callable[[Tensor], Tensor]): Activation to check - """ - if isinstance(activation, str): - if activation == "reglu": - return ReGLU() - elif activation == "geglu": - return GeGLU() - elif activation == "swiglu": - return SwiGLU() - return activation diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/QoiImagePlugin.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/QoiImagePlugin.py deleted file mode 100644 index ef91b90abca87ff6526cd10f89f1c0dfc9f0b848..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/QoiImagePlugin.py +++ /dev/null @@ -1,105 +0,0 @@ -# -# The Python Imaging Library. -# -# QOI support for PIL -# -# See the README file for information on usage and redistribution. -# - -import os - -from . 
import Image, ImageFile -from ._binary import i32be as i32 -from ._binary import o8 - - -def _accept(prefix): - return prefix[:4] == b"qoif" - - -class QoiImageFile(ImageFile.ImageFile): - format = "QOI" - format_description = "Quite OK Image" - - def _open(self): - if not _accept(self.fp.read(4)): - msg = "not a QOI file" - raise SyntaxError(msg) - - self._size = tuple(i32(self.fp.read(4)) for i in range(2)) - - channels = self.fp.read(1)[0] - self.mode = "RGB" if channels == 3 else "RGBA" - - self.fp.seek(1, os.SEEK_CUR) # colorspace - self.tile = [("qoi", (0, 0) + self._size, self.fp.tell(), None)] - - -class QoiDecoder(ImageFile.PyDecoder): - _pulls_fd = True - - def _add_to_previous_pixels(self, value): - self._previous_pixel = value - - r, g, b, a = value - hash_value = (r * 3 + g * 5 + b * 7 + a * 11) % 64 - self._previously_seen_pixels[hash_value] = value - - def decode(self, buffer): - self._previously_seen_pixels = {} - self._previous_pixel = None - self._add_to_previous_pixels(b"".join(o8(i) for i in (0, 0, 0, 255))) - - data = bytearray() - bands = Image.getmodebands(self.mode) - while len(data) < self.state.xsize * self.state.ysize * bands: - byte = self.fd.read(1)[0] - if byte == 0b11111110: # QOI_OP_RGB - value = self.fd.read(3) + o8(255) - elif byte == 0b11111111: # QOI_OP_RGBA - value = self.fd.read(4) - else: - op = byte >> 6 - if op == 0: # QOI_OP_INDEX - op_index = byte & 0b00111111 - value = self._previously_seen_pixels.get(op_index, (0, 0, 0, 0)) - elif op == 1: # QOI_OP_DIFF - value = ( - (self._previous_pixel[0] + ((byte & 0b00110000) >> 4) - 2) - % 256, - (self._previous_pixel[1] + ((byte & 0b00001100) >> 2) - 2) - % 256, - (self._previous_pixel[2] + (byte & 0b00000011) - 2) % 256, - ) - value += (self._previous_pixel[3],) - elif op == 2: # QOI_OP_LUMA - second_byte = self.fd.read(1)[0] - diff_green = (byte & 0b00111111) - 32 - diff_red = ((second_byte & 0b11110000) >> 4) - 8 - diff_blue = (second_byte & 0b00001111) - 8 - - value = tuple( - (self._previous_pixel[i] + diff_green + diff) % 256 - for i, diff in enumerate((diff_red, 0, diff_blue)) - ) - value += (self._previous_pixel[3],) - elif op == 3: # QOI_OP_RUN - run_length = (byte & 0b00111111) + 1 - value = self._previous_pixel - if bands == 3: - value = value[:3] - data += value * run_length - continue - value = b"".join(o8(i) for i in value) - self._add_to_previous_pixels(value) - - if bands == 3: - value = value[:3] - data += value - self.set_as_raw(bytes(data)) - return -1, 0 - - -Image.register_open(QoiImageFile.format, QoiImageFile, _accept) -Image.register_decoder("qoi", QoiDecoder) -Image.register_extension(QoiImageFile.format, ".qoi") diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/common/py_custom_pyeval_settrace_311.hpp b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/common/py_custom_pyeval_settrace_311.hpp deleted file mode 100644 index d3086adfa72d73b430f0c6123bd6e2e156fd8d8c..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/debugpy/_vendored/pydevd/pydevd_attach_to_process/common/py_custom_pyeval_settrace_311.hpp +++ /dev/null @@ -1,120 +0,0 @@ -#ifndef _PY_CUSTOM_PYEVAL_SETTRACE_311_HPP_ -#define _PY_CUSTOM_PYEVAL_SETTRACE_311_HPP_ - -#include "python.h" -#include "py_utils.hpp" - -static PyObject * -InternalCallTrampoline311(PyObject* callback, - PyFrameObject311 *frame, int what, PyObject *arg) -{ - PyObject 
*result; - PyObject *stack[3]; - -// Note: this is commented out from CPython (we shouldn't need it and it adds a reasonable overhead). -// if (PyFrame_FastToLocalsWithError(frame) < 0) { -// return NULL; -// } -// - stack[0] = (PyObject *)frame; - stack[1] = InternalWhatstrings_37[what]; - stack[2] = (arg != NULL) ? arg : internalInitializeCustomPyEvalSetTrace->pyNone; - - - // Helper to print info. - //printf("--- start\n"); - //printf("%s\n", internalInitializeCustomPyEvalSetTrace->pyUnicode_AsUTF8(internalInitializeCustomPyEvalSetTrace->pyObject_Repr((PyObject *)stack[0]))); - //printf("%s\n", internalInitializeCustomPyEvalSetTrace->pyUnicode_AsUTF8(internalInitializeCustomPyEvalSetTrace->pyObject_Repr((PyObject *)stack[1]))); - //printf("%s\n", internalInitializeCustomPyEvalSetTrace->pyUnicode_AsUTF8(internalInitializeCustomPyEvalSetTrace->pyObject_Repr((PyObject *)stack[2]))); - //printf("--- end\n"); - - result = internalInitializeCustomPyEvalSetTrace->pyObject_FastCallDict(callback, stack, 3, NULL); - -// Note: this is commented out from CPython (we shouldn't need it and it adds a reasonable overhead). -// PyFrame_LocalsToFast(frame, 1); - - if (result == NULL) { - internalInitializeCustomPyEvalSetTrace->pyTraceBack_Here(frame); - } - - return result; -} - -// See: static int trace_trampoline(PyObject *self, PyFrameObject *frame, int what, PyObject *arg) -// in: https://github.com/python/cpython/blob/3.11/Python/sysmodule.c -static int -InternalTraceTrampoline311(PyObject *self, PyFrameObject *frameParam, - int what, PyObject *arg) -{ - PyObject *callback; - PyObject *result; - - PyFrameObject311 *frame = reinterpret_cast(frameParam); - - if (what == PyTrace_CALL){ - callback = self; - } else { - callback = frame->f_trace; - } - - if (callback == NULL){ - return 0; - } - - result = InternalCallTrampoline311(callback, frame, what, arg); - if (result == NULL) { - // Note: calling the original sys.settrace here. - internalInitializeCustomPyEvalSetTrace->pyEval_SetTrace(NULL, NULL); - PyObject *temp_f_trace = frame->f_trace; - frame->f_trace = NULL; - if(temp_f_trace != NULL){ - DecRef(temp_f_trace, internalInitializeCustomPyEvalSetTrace->isDebug); - } - return -1; - } - if (result != internalInitializeCustomPyEvalSetTrace->pyNone) { - PyObject *tmp = frame->f_trace; - frame->f_trace = result; - DecRef(tmp, internalInitializeCustomPyEvalSetTrace->isDebug); - } - else { - DecRef(result, internalInitializeCustomPyEvalSetTrace->isDebug); - } - return 0; -} - -// Based on ceval.c (PyEval_SetTrace(Py_tracefunc func, PyObject *arg)) -// https://github.com/python/cpython/blob/3.11/Python/ceval.c -template -void InternalPySetTrace_Template311(T tstate, PyObjectHolder* traceFunc, bool isDebug) -{ - PyObject *traceobj = tstate->c_traceobj; - - PyObject *arg = traceFunc->ToPython(); - IncRef(arg); - tstate->c_tracefunc = NULL; - tstate->c_traceobj = NULL; - - // This is different (previously it was just: tstate->use_tracing, now - // this flag is per-frame). - int use_tracing = (tstate->c_profilefunc != NULL); - - // Note: before 3.11 this was just 1 or 0, now it needs to be 255 or 0. - tstate->cframe->use_tracing = (use_tracing ? 255 : 0); - - if(traceobj != NULL){ - DecRef(traceobj, isDebug); - } - tstate->c_tracefunc = InternalTraceTrampoline311; - tstate->c_traceobj = arg; - /* Flag that tracing or profiling is turned on */ - use_tracing = ((InternalTraceTrampoline311 != NULL) - || (tstate->c_profilefunc != NULL)); - - // Note: before 3.11 this was just 1 or 0, now it needs to be 255 or 0. 
- tstate->cframe->use_tracing = (use_tracing ? 255 : 0); - -}; - - -#endif //_PY_CUSTOM_PYEVAL_SETTRACE_311_HPP_ \ No newline at end of file diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/__init__.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/__init__.py deleted file mode 100644 index 4e1338369a958062d6ca4a122435b2be6ad27315..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/oneformer/modeling/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -from .backbone.swin import D2SwinTransformer -from .backbone.dinat import D2DiNAT -from .pixel_decoder.fpn import BasePixelDecoder -from .pixel_decoder.msdeformattn import MSDeformAttnPixelDecoder -from .meta_arch.oneformer_head import OneFormerHead diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/backbones/resnet.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/backbones/resnet.py deleted file mode 100644 index 4e52bf048d28ecb069db4728e5f05ad85ac53198..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/models/backbones/resnet.py +++ /dev/null @@ -1,688 +0,0 @@ -import torch.nn as nn -import torch.utils.checkpoint as cp -from annotator.uniformer.mmcv.cnn import (build_conv_layer, build_norm_layer, build_plugin_layer, - constant_init, kaiming_init) -from annotator.uniformer.mmcv.runner import load_checkpoint -from annotator.uniformer.mmcv.utils.parrots_wrapper import _BatchNorm - -from annotator.uniformer.mmseg.utils import get_root_logger -from ..builder import BACKBONES -from ..utils import ResLayer - - -class BasicBlock(nn.Module): - """Basic block for ResNet.""" - - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None): - super(BasicBlock, self).__init__() - assert dcn is None, 'Not implemented yet.' - assert plugins is None, 'Not implemented yet.' - - self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1) - self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2) - - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - 3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=False) - self.add_module(self.norm1_name, norm1) - self.conv2 = build_conv_layer( - conv_cfg, planes, planes, 3, padding=1, bias=False) - self.add_module(self.norm2_name, norm2) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.norm2(out) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - """Bottleneck block for ResNet. 
- - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if it is - "caffe", the stride-two layer is the first 1x1 conv layer. - """ - - expansion = 4 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - dcn=None, - plugins=None): - super(Bottleneck, self).__init__() - assert style in ['pytorch', 'caffe'] - assert dcn is None or isinstance(dcn, dict) - assert plugins is None or isinstance(plugins, list) - if plugins is not None: - allowed_position = ['after_conv1', 'after_conv2', 'after_conv3'] - assert all(p['position'] in allowed_position for p in plugins) - - self.inplanes = inplanes - self.planes = planes - self.stride = stride - self.dilation = dilation - self.style = style - self.with_cp = with_cp - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.dcn = dcn - self.with_dcn = dcn is not None - self.plugins = plugins - self.with_plugins = plugins is not None - - if self.with_plugins: - # collect plugins for conv1/conv2/conv3 - self.after_conv1_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv1' - ] - self.after_conv2_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv2' - ] - self.after_conv3_plugins = [ - plugin['cfg'] for plugin in plugins - if plugin['position'] == 'after_conv3' - ] - - if self.style == 'pytorch': - self.conv1_stride = 1 - self.conv2_stride = stride - else: - self.conv1_stride = stride - self.conv2_stride = 1 - - self.norm1_name, norm1 = build_norm_layer(norm_cfg, planes, postfix=1) - self.norm2_name, norm2 = build_norm_layer(norm_cfg, planes, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - norm_cfg, planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - conv_cfg, - inplanes, - planes, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - if self.with_dcn: - fallback_on_stride = dcn.pop('fallback_on_stride', False) - if not self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - conv_cfg, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - dcn, - planes, - planes, - kernel_size=3, - stride=self.conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - conv_cfg, - planes, - planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - - if self.with_plugins: - self.after_conv1_plugin_names = self.make_block_plugins( - planes, self.after_conv1_plugins) - self.after_conv2_plugin_names = self.make_block_plugins( - planes, self.after_conv2_plugins) - self.after_conv3_plugin_names = self.make_block_plugins( - planes * self.expansion, self.after_conv3_plugins) - - def make_block_plugins(self, in_channels, plugins): - """make plugins for block. - - Args: - in_channels (int): Input channels of plugin. - plugins (list[dict]): List of plugins cfg to build. - - Returns: - list[str]: List of the names of plugin. 
- """ - assert isinstance(plugins, list) - plugin_names = [] - for plugin in plugins: - plugin = plugin.copy() - name, layer = build_plugin_layer( - plugin, - in_channels=in_channels, - postfix=plugin.pop('postfix', '')) - assert not hasattr(self, name), f'duplicate plugin {name}' - self.add_module(name, layer) - plugin_names.append(name) - return plugin_names - - def forward_plugin(self, x, plugin_names): - """Forward function for plugins.""" - out = x - for name in plugin_names: - out = getattr(self, name)(x) - return out - - @property - def norm1(self): - """nn.Module: normalization layer after the first convolution layer""" - return getattr(self, self.norm1_name) - - @property - def norm2(self): - """nn.Module: normalization layer after the second convolution layer""" - return getattr(self, self.norm2_name) - - @property - def norm3(self): - """nn.Module: normalization layer after the third convolution layer""" - return getattr(self, self.norm3_name) - - def forward(self, x): - """Forward function.""" - - def _inner_forward(x): - identity = x - - out = self.conv1(x) - out = self.norm1(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv1_plugin_names) - - out = self.conv2(out) - out = self.norm2(out) - out = self.relu(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv2_plugin_names) - - out = self.conv3(out) - out = self.norm3(out) - - if self.with_plugins: - out = self.forward_plugin(out, self.after_conv3_plugin_names) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -@BACKBONES.register_module() -class ResNet(nn.Module): - """ResNet backbone. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - in_channels (int): Number of input image channels. Default" 3. - stem_channels (int): Number of stem channels. Default: 64. - base_channels (int): Number of base channels of res layer. Default: 64. - num_stages (int): Resnet stages, normally 4. - strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - deep_stem (bool): Replace 7x7 conv in input stem with 3 3x3 conv - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. - frozen_stages (int): Stages to be frozen (stop grad and set eval mode). - -1 means not freezing any parameters. - norm_cfg (dict): Dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - plugins (list[dict]): List of plugins for stages, each dict contains: - - - cfg (dict, required): Cfg dict to build plugin. - - - position (str, required): Position inside block to insert plugin, - options: 'after_conv1', 'after_conv2', 'after_conv3'. - - - stages (tuple[bool], optional): Stages to apply plugin, length - should be same as 'num_stages' - multi_grid (Sequence[int]|None): Multi grid dilation rates of last - stage. 
Default: None - contract_dilation (bool): Whether contract first dilation of each layer - Default: False - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - zero_init_residual (bool): Whether to use zero init for last norm layer - in resblocks to let them behave as identity. - - Example: - >>> from annotator.uniformer.mmseg.models import ResNet - >>> import torch - >>> self = ResNet(depth=18) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... print(tuple(level_out.shape)) - (1, 64, 8, 8) - (1, 128, 4, 4) - (1, 256, 2, 2) - (1, 512, 1, 1) - """ - - arch_settings = { - 18: (BasicBlock, (2, 2, 2, 2)), - 34: (BasicBlock, (3, 4, 6, 3)), - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, - depth, - in_channels=3, - stem_channels=64, - base_channels=64, - num_stages=4, - strides=(1, 2, 2, 2), - dilations=(1, 1, 1, 1), - out_indices=(0, 1, 2, 3), - style='pytorch', - deep_stem=False, - avg_down=False, - frozen_stages=-1, - conv_cfg=None, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=False, - dcn=None, - stage_with_dcn=(False, False, False, False), - plugins=None, - multi_grid=None, - contract_dilation=False, - with_cp=False, - zero_init_residual=True): - super(ResNet, self).__init__() - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for resnet') - self.depth = depth - self.stem_channels = stem_channels - self.base_channels = base_channels - self.num_stages = num_stages - assert num_stages >= 1 and num_stages <= 4 - self.strides = strides - self.dilations = dilations - assert len(strides) == len(dilations) == num_stages - self.out_indices = out_indices - assert max(out_indices) < num_stages - self.style = style - self.deep_stem = deep_stem - self.avg_down = avg_down - self.frozen_stages = frozen_stages - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.with_cp = with_cp - self.norm_eval = norm_eval - self.dcn = dcn - self.stage_with_dcn = stage_with_dcn - if dcn is not None: - assert len(stage_with_dcn) == num_stages - self.plugins = plugins - self.multi_grid = multi_grid - self.contract_dilation = contract_dilation - self.zero_init_residual = zero_init_residual - self.block, stage_blocks = self.arch_settings[depth] - self.stage_blocks = stage_blocks[:num_stages] - self.inplanes = stem_channels - - self._make_stem_layer(in_channels, stem_channels) - - self.res_layers = [] - for i, num_blocks in enumerate(self.stage_blocks): - stride = strides[i] - dilation = dilations[i] - dcn = self.dcn if self.stage_with_dcn[i] else None - if plugins is not None: - stage_plugins = self.make_stage_plugins(plugins, i) - else: - stage_plugins = None - # multi grid is applied to last layer only - stage_multi_grid = multi_grid if i == len( - self.stage_blocks) - 1 else None - planes = base_channels * 2**i - res_layer = self.make_res_layer( - block=self.block, - inplanes=self.inplanes, - planes=planes, - num_blocks=num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - avg_down=self.avg_down, - with_cp=with_cp, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - dcn=dcn, - plugins=stage_plugins, - multi_grid=stage_multi_grid, - contract_dilation=contract_dilation) - self.inplanes = planes * self.block.expansion - layer_name = f'layer{i+1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) 
- - self._freeze_stages() - - self.feat_dim = self.block.expansion * base_channels * 2**( - len(self.stage_blocks) - 1) - - def make_stage_plugins(self, plugins, stage_idx): - """make plugins for ResNet 'stage_idx'th stage . - - Currently we support to insert 'context_block', - 'empirical_attention_block', 'nonlocal_block' into the backbone like - ResNet/ResNeXt. They could be inserted after conv1/conv2/conv3 of - Bottleneck. - - An example of plugins format could be : - >>> plugins=[ - ... dict(cfg=dict(type='xxx', arg1='xxx'), - ... stages=(False, True, True, True), - ... position='after_conv2'), - ... dict(cfg=dict(type='yyy'), - ... stages=(True, True, True, True), - ... position='after_conv3'), - ... dict(cfg=dict(type='zzz', postfix='1'), - ... stages=(True, True, True, True), - ... position='after_conv3'), - ... dict(cfg=dict(type='zzz', postfix='2'), - ... stages=(True, True, True, True), - ... position='after_conv3') - ... ] - >>> self = ResNet(depth=18) - >>> stage_plugins = self.make_stage_plugins(plugins, 0) - >>> assert len(stage_plugins) == 3 - - Suppose 'stage_idx=0', the structure of blocks in the stage would be: - conv1-> conv2->conv3->yyy->zzz1->zzz2 - Suppose 'stage_idx=1', the structure of blocks in the stage would be: - conv1-> conv2->xxx->conv3->yyy->zzz1->zzz2 - - If stages is missing, the plugin would be applied to all stages. - - Args: - plugins (list[dict]): List of plugins cfg to build. The postfix is - required if multiple same type plugins are inserted. - stage_idx (int): Index of stage to build - - Returns: - list[dict]: Plugins for current stage - """ - stage_plugins = [] - for plugin in plugins: - plugin = plugin.copy() - stages = plugin.pop('stages', None) - assert stages is None or len(stages) == self.num_stages - # whether to insert plugin into current stage - if stages is None or stages[stage_idx]: - stage_plugins.append(plugin) - - return stage_plugins - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``.""" - return ResLayer(**kwargs) - - @property - def norm1(self): - """nn.Module: the normalization layer named "norm1" """ - return getattr(self, self.norm1_name) - - def _make_stem_layer(self, in_channels, stem_channels): - """Make stem layer for ResNet.""" - if self.deep_stem: - self.stem = nn.Sequential( - build_conv_layer( - self.conv_cfg, - in_channels, - stem_channels // 2, - kernel_size=3, - stride=2, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels // 2)[1], - nn.ReLU(inplace=True), - build_conv_layer( - self.conv_cfg, - stem_channels // 2, - stem_channels // 2, - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels // 2)[1], - nn.ReLU(inplace=True), - build_conv_layer( - self.conv_cfg, - stem_channels // 2, - stem_channels, - kernel_size=3, - stride=1, - padding=1, - bias=False), - build_norm_layer(self.norm_cfg, stem_channels)[1], - nn.ReLU(inplace=True)) - else: - self.conv1 = build_conv_layer( - self.conv_cfg, - in_channels, - stem_channels, - kernel_size=7, - stride=2, - padding=3, - bias=False) - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, stem_channels, postfix=1) - self.add_module(self.norm1_name, norm1) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - def _freeze_stages(self): - """Freeze stages param and norm stats.""" - if self.frozen_stages >= 0: - if self.deep_stem: - self.stem.eval() - for param in self.stem.parameters(): - param.requires_grad = 
False - else: - self.norm1.eval() - for m in [self.conv1, self.norm1]: - for param in m.parameters(): - param.requires_grad = False - - for i in range(1, self.frozen_stages + 1): - m = getattr(self, f'layer{i}') - m.eval() - for param in m.parameters(): - param.requires_grad = False - - def init_weights(self, pretrained=None): - """Initialize the weights in backbone. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, (_BatchNorm, nn.GroupNorm)): - constant_init(m, 1) - - if self.dcn is not None: - for m in self.modules(): - if isinstance(m, Bottleneck) and hasattr( - m, 'conv2_offset'): - constant_init(m.conv2_offset, 0) - - if self.zero_init_residual: - for m in self.modules(): - if isinstance(m, Bottleneck): - constant_init(m.norm3, 0) - elif isinstance(m, BasicBlock): - constant_init(m.norm2, 0) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - """Forward function.""" - if self.deep_stem: - x = self.stem(x) - else: - x = self.conv1(x) - x = self.norm1(x) - x = self.relu(x) - x = self.maxpool(x) - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - x = res_layer(x) - if i in self.out_indices: - outs.append(x) - return tuple(outs) - - def train(self, mode=True): - """Convert the model into training mode while keep normalization layer - freezed.""" - super(ResNet, self).train(mode) - self._freeze_stages() - if mode and self.norm_eval: - for m in self.modules(): - # trick: eval have effect on BatchNorm only - if isinstance(m, _BatchNorm): - m.eval() - - -@BACKBONES.register_module() -class ResNetV1c(ResNet): - """ResNetV1c variant described in [1]_. - - Compared with default ResNet(ResNetV1b), ResNetV1c replaces the 7x7 conv - in the input stem with three 3x3 convs. - - References: - .. [1] https://arxiv.org/pdf/1812.01187.pdf - """ - - def __init__(self, **kwargs): - super(ResNetV1c, self).__init__( - deep_stem=True, avg_down=False, **kwargs) - - -@BACKBONES.register_module() -class ResNetV1d(ResNet): - """ResNetV1d variant described in [1]_. - - Compared with default ResNet(ResNetV1b), ResNetV1d replaces the 7x7 conv in - the input stem with three 3x3 convs. And in the downsampling block, a 2x2 - avg_pool with stride 2 is added before conv, whose stride is changed to 1. - """ - - def __init__(self, **kwargs): - super(ResNetV1d, self).__init__( - deep_stem=True, avg_down=True, **kwargs) diff --git a/spaces/TCheruy/SRGAN/utils.py b/spaces/TCheruy/SRGAN/utils.py deleted file mode 100644 index d7befedad55b6108ca89ddad728599fb3f53624e..0000000000000000000000000000000000000000 --- a/spaces/TCheruy/SRGAN/utils.py +++ /dev/null @@ -1,168 +0,0 @@ -# Copyright 2022 Dakewe Biotech Corporation. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -import os -import shutil -from enum import Enum -from typing import Any - -import torch -from torch import nn -from torch.nn import Module -from torch.optim import Optimizer - -__all__ = [ - "load_state_dict", "make_directory", "save_checkpoint", - "Summary", "AverageMeter", "ProgressMeter" -] - - -def load_state_dict( - model: nn.Module, - model_weights_path: str, - ema_model: nn.Module = None, - optimizer: torch.optim.Optimizer = None, - scheduler: torch.optim.lr_scheduler = None, - load_mode: str = None, -) -> tuple[Module, Module, Any, Any, Any, Optimizer | None, Any] | tuple[Module, Any, Any, Any, Optimizer | None, Any] | Module: - # Load model weights - checkpoint = torch.load(model_weights_path, map_location=lambda storage, loc: storage) - - if load_mode == "resume": - # Restore the parameters in the training node to this point - start_epoch = checkpoint["epoch"] - best_psnr = checkpoint["best_psnr"] - best_ssim = checkpoint["best_ssim"] - # Load model state dict. Extract the fitted model weights - model_state_dict = model.state_dict() - state_dict = {k: v for k, v in checkpoint["state_dict"].items() if k in model_state_dict.keys()} - # Overwrite the model weights to the current model (base model) - model_state_dict.update(state_dict) - model.load_state_dict(model_state_dict) - # Load the optimizer model - optimizer.load_state_dict(checkpoint["optimizer"]) - - if scheduler is not None: - # Load the scheduler model - scheduler.load_state_dict(checkpoint["scheduler"]) - - if ema_model is not None: - # Load ema model state dict. Extract the fitted model weights - ema_model_state_dict = ema_model.state_dict() - ema_state_dict = {k: v for k, v in checkpoint["ema_state_dict"].items() if k in ema_model_state_dict.keys()} - # Overwrite the model weights to the current model (ema model) - ema_model_state_dict.update(ema_state_dict) - ema_model.load_state_dict(ema_model_state_dict) - - return model, ema_model, start_epoch, best_psnr, best_ssim, optimizer, scheduler - else: - # Load model state dict. 
Extract the fitted model weights - model_state_dict = model.state_dict() - state_dict = {k: v for k, v in checkpoint["state_dict"].items() if - k in model_state_dict.keys() and v.size() == model_state_dict[k].size()} - # Overwrite the model weights to the current model - model_state_dict.update(state_dict) - model.load_state_dict(model_state_dict) - - return model - - -def make_directory(dir_path: str) -> None: - if not os.path.exists(dir_path): - os.makedirs(dir_path) - - -def save_checkpoint( - state_dict: dict, - file_name: str, - samples_dir: str, - results_dir: str, - best_file_name: str, - last_file_name: str, - is_best: bool = False, - is_last: bool = False, -) -> None: - checkpoint_path = os.path.join(samples_dir, file_name) - torch.save(state_dict, checkpoint_path) - - if is_best: - shutil.copyfile(checkpoint_path, os.path.join(results_dir, best_file_name)) - if is_last: - shutil.copyfile(checkpoint_path, os.path.join(results_dir, last_file_name)) - - -class Summary(Enum): - NONE = 0 - AVERAGE = 1 - SUM = 2 - COUNT = 3 - - -class AverageMeter(object): - def __init__(self, name, fmt=":f", summary_type=Summary.AVERAGE): - self.name = name - self.fmt = fmt - self.summary_type = summary_type - self.reset() - - def reset(self): - self.val = 0 - self.avg = 0 - self.sum = 0 - self.count = 0 - - def update(self, val, n=1): - self.val = val - self.sum += val * n - self.count += n - self.avg = self.sum / self.count - - def __str__(self): - fmtstr = "{name} {val" + self.fmt + "} ({avg" + self.fmt + "})" - return fmtstr.format(**self.__dict__) - - def summary(self): - if self.summary_type is Summary.NONE: - fmtstr = "" - elif self.summary_type is Summary.AVERAGE: - fmtstr = "{name} {avg:.2f}" - elif self.summary_type is Summary.SUM: - fmtstr = "{name} {sum:.2f}" - elif self.summary_type is Summary.COUNT: - fmtstr = "{name} {count:.2f}" - else: - raise ValueError(f"Invalid summary type {self.summary_type}") - - return fmtstr.format(**self.__dict__) - - -class ProgressMeter(object): - def __init__(self, num_batches, meters, prefix=""): - self.batch_fmtstr = self._get_batch_fmtstr(num_batches) - self.meters = meters - self.prefix = prefix - - def display(self, batch): - entries = [self.prefix + self.batch_fmtstr.format(batch)] - entries += [str(meter) for meter in self.meters] - print("\t".join(entries)) - - def display_summary(self): - entries = [" *"] - entries += [meter.summary() for meter in self.meters] - print(" ".join(entries)) - - def _get_batch_fmtstr(self, num_batches): - num_digits = len(str(num_batches // 1)) - fmt = "{:" + str(num_digits) + "d}" - return "[" + fmt + "/" + fmt.format(num_batches) + "]" diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/_collections.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/_collections.py deleted file mode 100644 index da9857e986d89acac3ba05a6735dc08c249bde1a..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/urllib3/_collections.py +++ /dev/null @@ -1,337 +0,0 @@ -from __future__ import absolute_import - -try: - from collections.abc import Mapping, MutableMapping -except ImportError: - from collections import Mapping, MutableMapping -try: - from threading import RLock -except ImportError: # Platform-specific: No threads available - - class RLock: - def __enter__(self): - pass - - def __exit__(self, exc_type, exc_value, traceback): - pass - - -from collections import 
OrderedDict - -from .exceptions import InvalidHeader -from .packages import six -from .packages.six import iterkeys, itervalues - -__all__ = ["RecentlyUsedContainer", "HTTPHeaderDict"] - - -_Null = object() - - -class RecentlyUsedContainer(MutableMapping): - """ - Provides a thread-safe dict-like container which maintains up to - ``maxsize`` keys while throwing away the least-recently-used keys beyond - ``maxsize``. - - :param maxsize: - Maximum number of recent elements to retain. - - :param dispose_func: - Every time an item is evicted from the container, - ``dispose_func(value)`` is called. Callback which will get called - """ - - ContainerCls = OrderedDict - - def __init__(self, maxsize=10, dispose_func=None): - self._maxsize = maxsize - self.dispose_func = dispose_func - - self._container = self.ContainerCls() - self.lock = RLock() - - def __getitem__(self, key): - # Re-insert the item, moving it to the end of the eviction line. - with self.lock: - item = self._container.pop(key) - self._container[key] = item - return item - - def __setitem__(self, key, value): - evicted_value = _Null - with self.lock: - # Possibly evict the existing value of 'key' - evicted_value = self._container.get(key, _Null) - self._container[key] = value - - # If we didn't evict an existing value, we might have to evict the - # least recently used item from the beginning of the container. - if len(self._container) > self._maxsize: - _key, evicted_value = self._container.popitem(last=False) - - if self.dispose_func and evicted_value is not _Null: - self.dispose_func(evicted_value) - - def __delitem__(self, key): - with self.lock: - value = self._container.pop(key) - - if self.dispose_func: - self.dispose_func(value) - - def __len__(self): - with self.lock: - return len(self._container) - - def __iter__(self): - raise NotImplementedError( - "Iteration over this class is unlikely to be threadsafe." - ) - - def clear(self): - with self.lock: - # Copy pointers to all values, then wipe the mapping - values = list(itervalues(self._container)) - self._container.clear() - - if self.dispose_func: - for value in values: - self.dispose_func(value) - - def keys(self): - with self.lock: - return list(iterkeys(self._container)) - - -class HTTPHeaderDict(MutableMapping): - """ - :param headers: - An iterable of field-value pairs. Must not contain multiple field names - when compared case-insensitively. - - :param kwargs: - Additional field-value pairs to pass in to ``dict.update``. - - A ``dict`` like container for storing HTTP Headers. - - Field names are stored and compared case-insensitively in compliance with - RFC 7230. Iteration provides the first case-sensitive key seen for each - case-insensitive pair. - - Using ``__setitem__`` syntax overwrites fields that compare equal - case-insensitively in order to maintain ``dict``'s api. For fields that - compare equal, instead create a new ``HTTPHeaderDict`` and use ``.add`` - in a loop. - - If multiple fields that are equal case-insensitively are passed to the - constructor or ``.update``, the behavior is undefined and some will be - lost. 
- - >>> headers = HTTPHeaderDict() - >>> headers.add('Set-Cookie', 'foo=bar') - >>> headers.add('set-cookie', 'baz=quxx') - >>> headers['content-length'] = '7' - >>> headers['SET-cookie'] - 'foo=bar, baz=quxx' - >>> headers['Content-Length'] - '7' - """ - - def __init__(self, headers=None, **kwargs): - super(HTTPHeaderDict, self).__init__() - self._container = OrderedDict() - if headers is not None: - if isinstance(headers, HTTPHeaderDict): - self._copy_from(headers) - else: - self.extend(headers) - if kwargs: - self.extend(kwargs) - - def __setitem__(self, key, val): - self._container[key.lower()] = [key, val] - return self._container[key.lower()] - - def __getitem__(self, key): - val = self._container[key.lower()] - return ", ".join(val[1:]) - - def __delitem__(self, key): - del self._container[key.lower()] - - def __contains__(self, key): - return key.lower() in self._container - - def __eq__(self, other): - if not isinstance(other, Mapping) and not hasattr(other, "keys"): - return False - if not isinstance(other, type(self)): - other = type(self)(other) - return dict((k.lower(), v) for k, v in self.itermerged()) == dict( - (k.lower(), v) for k, v in other.itermerged() - ) - - def __ne__(self, other): - return not self.__eq__(other) - - if six.PY2: # Python 2 - iterkeys = MutableMapping.iterkeys - itervalues = MutableMapping.itervalues - - __marker = object() - - def __len__(self): - return len(self._container) - - def __iter__(self): - # Only provide the originally cased names - for vals in self._container.values(): - yield vals[0] - - def pop(self, key, default=__marker): - """D.pop(k[,d]) -> v, remove specified key and return the corresponding value. - If key is not found, d is returned if given, otherwise KeyError is raised. - """ - # Using the MutableMapping function directly fails due to the private marker. - # Using ordinary dict.pop would expose the internal structures. - # So let's reinvent the wheel. - try: - value = self[key] - except KeyError: - if default is self.__marker: - raise - return default - else: - del self[key] - return value - - def discard(self, key): - try: - del self[key] - except KeyError: - pass - - def add(self, key, val): - """Adds a (name, value) pair, doesn't overwrite the value if it already - exists. - - >>> headers = HTTPHeaderDict(foo='bar') - >>> headers.add('Foo', 'baz') - >>> headers['foo'] - 'bar, baz' - """ - key_lower = key.lower() - new_vals = [key, val] - # Keep the common case aka no item present as fast as possible - vals = self._container.setdefault(key_lower, new_vals) - if new_vals is not vals: - vals.append(val) - - def extend(self, *args, **kwargs): - """Generic import function for any type of header-like object. - Adapted version of MutableMapping.update in order to insert items - with self.add instead of self.__setitem__ - """ - if len(args) > 1: - raise TypeError( - "extend() takes at most 1 positional " - "arguments ({0} given)".format(len(args)) - ) - other = args[0] if len(args) >= 1 else () - - if isinstance(other, HTTPHeaderDict): - for key, val in other.iteritems(): - self.add(key, val) - elif isinstance(other, Mapping): - for key in other: - self.add(key, other[key]) - elif hasattr(other, "keys"): - for key in other.keys(): - self.add(key, other[key]) - else: - for key, value in other: - self.add(key, value) - - for key, value in kwargs.items(): - self.add(key, value) - - def getlist(self, key, default=__marker): - """Returns a list of all the values for the named field. 
Returns an - empty list if the key doesn't exist.""" - try: - vals = self._container[key.lower()] - except KeyError: - if default is self.__marker: - return [] - return default - else: - return vals[1:] - - # Backwards compatibility for httplib - getheaders = getlist - getallmatchingheaders = getlist - iget = getlist - - # Backwards compatibility for http.cookiejar - get_all = getlist - - def __repr__(self): - return "%s(%s)" % (type(self).__name__, dict(self.itermerged())) - - def _copy_from(self, other): - for key in other: - val = other.getlist(key) - if isinstance(val, list): - # Don't need to convert tuples - val = list(val) - self._container[key.lower()] = [key] + val - - def copy(self): - clone = type(self)() - clone._copy_from(self) - return clone - - def iteritems(self): - """Iterate over all header lines, including duplicate ones.""" - for key in self: - vals = self._container[key.lower()] - for val in vals[1:]: - yield vals[0], val - - def itermerged(self): - """Iterate over all headers, merging duplicate ones together.""" - for key in self: - val = self._container[key.lower()] - yield val[0], ", ".join(val[1:]) - - def items(self): - return list(self.iteritems()) - - @classmethod - def from_httplib(cls, message): # Python 2 - """Read headers from a Python 2 httplib message object.""" - # python2.7 does not expose a proper API for exporting multiheaders - # efficiently. This function re-reads raw lines from the message - # object and extracts the multiheaders properly. - obs_fold_continued_leaders = (" ", "\t") - headers = [] - - for line in message.headers: - if line.startswith(obs_fold_continued_leaders): - if not headers: - # We received a header line that starts with OWS as described - # in RFC-7230 S3.2.4. This indicates a multiline header, but - # there exists no previous header to which we can attach it. - raise InvalidHeader( - "Header continuation with no previous header: %s" % line - ) - else: - key, value = headers[-1] - headers[-1] = (key, value + " " + line.strip()) - continue - - key, value = line.split(":", 1) - headers.append((key, value.strip())) - - return cls(headers) diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/structures/test_imagelist.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/structures/test_imagelist.py deleted file mode 100644 index e446e44a37f5d8f9a68362e4b93a291d314d5d68..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/structures/test_imagelist.py +++ /dev/null @@ -1,75 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import unittest -from typing import List, Sequence, Tuple -import torch - -from detectron2.structures import ImageList - - -class TestImageList(unittest.TestCase): - def test_imagelist_padding_tracing(self): - # test that the trace does not contain hard-coded constant sizes - def to_imagelist(tensors: Sequence[torch.Tensor]): - image_list = ImageList.from_tensors(tensors, 4) - return image_list.tensor, image_list.image_sizes - - def _tensor(*shape): - return torch.ones(shape, dtype=torch.float32) - - # test CHW (inputs needs padding vs. 
no padding) - for shape in [(3, 10, 10), (3, 12, 12)]: - func = torch.jit.trace(to_imagelist, ([_tensor(*shape)],)) - tensor, image_sizes = func([_tensor(3, 15, 20)]) - self.assertEqual(tensor.shape, (1, 3, 16, 20), tensor.shape) - self.assertEqual(image_sizes[0].tolist(), [15, 20], image_sizes[0]) - - # test HW - func = torch.jit.trace(to_imagelist, ([_tensor(10, 10)],)) - tensor, image_sizes = func([_tensor(15, 20)]) - self.assertEqual(tensor.shape, (1, 16, 20), tensor.shape) - self.assertEqual(image_sizes[0].tolist(), [15, 20], image_sizes[0]) - - # test 2x CHW - func = torch.jit.trace( - to_imagelist, - ([_tensor(3, 16, 10), _tensor(3, 13, 11)],), - ) - tensor, image_sizes = func([_tensor(3, 25, 20), _tensor(3, 10, 10)]) - self.assertEqual(tensor.shape, (2, 3, 28, 20), tensor.shape) - self.assertEqual(image_sizes[0].tolist(), [25, 20], image_sizes[0]) - self.assertEqual(image_sizes[1].tolist(), [10, 10], image_sizes[1]) - # support calling with different spatial sizes, but not with different #images - - def test_imagelist_scriptability(self): - image_nums = 2 - image_tensor = torch.randn((image_nums, 10, 20), dtype=torch.float32) - image_shape = [(10, 20)] * image_nums - - def f(image_tensor, image_shape: List[Tuple[int, int]]): - return ImageList(image_tensor, image_shape) - - ret = f(image_tensor, image_shape) - ret_script = torch.jit.script(f)(image_tensor, image_shape) - - self.assertEqual(len(ret), len(ret_script)) - for i in range(image_nums): - self.assertTrue(torch.equal(ret[i], ret_script[i])) - - def test_imagelist_from_tensors_scriptability(self): - image_tensor_0 = torch.randn(10, 20, dtype=torch.float32) - image_tensor_1 = torch.randn(12, 22, dtype=torch.float32) - inputs = [image_tensor_0, image_tensor_1] - - def f(image_tensor: List[torch.Tensor]): - return ImageList.from_tensors(image_tensor, 10) - - ret = f(inputs) - ret_script = torch.jit.script(f)(inputs) - - self.assertEqual(len(ret), len(ret_script)) - self.assertTrue(torch.equal(ret.tensor, ret_script.tensor)) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/Um124/Global_Warming_Analysis/pages/Carbon dioxide data Analysis.py b/spaces/Um124/Global_Warming_Analysis/pages/Carbon dioxide data Analysis.py deleted file mode 100644 index 95d2372658037865b49aa8a8a2cda5f536f60a6a..0000000000000000000000000000000000000000 --- a/spaces/Um124/Global_Warming_Analysis/pages/Carbon dioxide data Analysis.py +++ /dev/null @@ -1,87 +0,0 @@ -import pandas as pd -import numpy as np -import plotly.express as px -import streamlit as st - - -st.set_page_config( - page_title='Carbon dioxide data Analysis', - page_icon='📈', - layout='wide' -) - -years=['1895','1896','1897','1898','1899','1900','1901','1902','1903','1904','1905','1906','1907','1908','1909', -'1910','1911','1912','1913','1914','1915','1916','1917','1918','1919','1920','1921','1922','1923','1924', -'1925','1926','1927','1928','1929','1930','1931','1932','1933','1934','1935','1936','1937','1938','1939', -'1940','1941','1942','1943','1944','1945','1946','1947','1948','1949','1950','1951','1952','1953','1954', -'1955','1956','1957','1958','1959','1960','1961','1962','1963','1964','1965','1966','1967','1968','1969', -'1970','1971','1972','1973','1974','1975','1976','1977','1978','1979','1980','1981','1982','1983','1984', -'1985','1986','1987','1988','1989','1990','1991','1992','1993','1994','1995','1996','1997','1998','1999', -'2000','2001','2002','2003','2004','2005','2006','2007','2008','2009','2010','2011','2012','2013','2014'] - - -@st.cache_data -def 
load_data(): - df=pd.read_csv('data/co2_emissions_tonnes_per_person.csv') - df.rename({'geo':'Country'},axis=1,inplace=True) - df.set_index('Country',inplace=True) - df.drop(['1800','1801','1802', '1803', '1804', '1805', '1806', '1807', '1808', '1809', '1810', '1811', '1812', '1813', - '1814','1815', '1816', '1817', '1818', '1819', '1820', '1821', '1822', '1823', '1824', '1825', '1826', '1827', '1828', - '1829', '1830', '1831','1832', '1833', '1834','1835','1836', '1837', '1838', '1839', '1840', '1841', '1842', '1843', - '1844', '1845', '1846', '1847', '1848', '1849', '1850', '1851', '1852', '1853', '1854', '1855', '1856', '1857', '1858', '1859', '1860', '1861', '1862', - '1863', '1864', '1865', '1866', '1867','1868', '1869', '1870','1871', '1872', '1873', '1874','1875', '1876', '1877', '1878', '1879', '1880', '1881', - '1882','1883', '1884', '1885', '1886', '1887','1888', '1889','1890', '1891','1892', '1893', '1894'], axis=1,inplace=True) - df['Total']=df[years].sum(axis=1) - df['Average']=df.mean(axis=1) - df['Maximum']=df.max(axis=1) - df.sort_index(inplace=True) - return df -st.title('CO2 Emissions Tonnes Per Person') -df=load_data() -st.dataframe(df,use_container_width=True) - -countries= df.index.unique().tolist() -Graphs = ['bar','pie','line','area','funnel'] -c1,c2 = st.columns(2) -country = c1.selectbox("Select a Country", countries) -Graph = c2.selectbox("Select a Graph type", Graphs) - - - -st.header('Country wise Visualization') -cdf = df.loc[country,years].reset_index() -cdf.rename({'index':'Years'},axis=1, inplace=True) -if Graph == Graphs[0]: - fig = px.bar(cdf, 'Years',country, title=f'{country} co2 emissions tonnes by per person') -if Graph == Graphs[1]: - fig = px.pie(cdf, 'Years',country, title=f'{country} co2 emissions tonnes by per person') -if Graph == Graphs[2]: - fig = px.line(cdf, 'Years',country, title=f'{country} co2 emissions tonnes by per person') -if Graph == Graphs[3]: - fig = px.area(cdf, 'Years',country, title=f'{country} co2 emissions tonnes by per person') -if Graph == Graphs[4]: - fig = px.funnel(cdf, 'Years',country, title=f'{country} co2 emissions tonnes by per person') -st.plotly_chart(fig, use_container_width=True) - -st.header('Comparison of Country') -clist = st.multiselect("Select countries to compare", countries, default='India') -cdf = df.loc[clist, years].T # T to rotate the data in 90deg -cdf.rename({'index':'Years'},axis=1,inplace=True) -st.write(cdf) -figc = px.line(cdf,cdf.index, clist, title=f'Comparing {", ".join(clist)}') - -st.plotly_chart(figc, use_container_width=True) - -df.sort_values(by='Total', ascending=False, inplace=True) -fig1=px.bar(df, x=df.index, y='Total',title='Total co2 emissions tonnes per person') -st.plotly_chart(fig1, use_container_width=True) - -dfavg = df.sort_values(by='Average').reset_index() -dfavg.rename({'index':'Country'},axis=1,inplace=True) -fig2=px.bar(dfavg, 'Country', 'Average', title="Avgrage Use of vehicle per 1000 person") -st.plotly_chart(fig2, use_container_width=True) - -dfmax=df.sort_values(by='Maximum').reset_index() -dfmax.rename({'index':'Country'},axis=1,inplace=True) -fig3=px.bar(dfmax,'Country','Maximum',title='Maximum co2 emission tonnes per person by Country' ) -st.plotly_chart(fig3, use_container_width=True) \ No newline at end of file diff --git a/spaces/Vageesh1/PDF_QA/app.py b/spaces/Vageesh1/PDF_QA/app.py deleted file mode 100644 index 0f6de3f51a7bccca1e7e907077b8374a103debfb..0000000000000000000000000000000000000000 --- a/spaces/Vageesh1/PDF_QA/app.py +++ /dev/null @@ -1,114 +0,0 @@ 
-import tempfile -import streamlit as st -from streamlit_chat import message - -import torch -import torch.nn - -import transformers -from transformers import ( - AutoModelForCausalLM, - AutoTokenizer, - BitsAndBytesConfig, - HfArgumentParser, - TrainingArguments, - pipeline, - logging, -) - - -import pandas as pd -import numpy as np -import os -import io - -from langchain.document_loaders import TextLoader -from langchain import PromptTemplate -from langchain.text_splitter import CharacterTextSplitter -from langchain.document_loaders import PyPDFLoader -from langchain.embeddings import HuggingFaceEmbeddings -from langchain.vectorstores import FAISS -from langchain.chains.question_answering import load_qa_chain -from langchain.chains import RetrievalQA -from langchain import HuggingFacePipeline -from langchain.chains import ConversationalRetrievalChain - -from helper import pdf_loader,splitDoc,makeEmbeddings,create_flan_t5_base - - -def conversational_chat(chain,query): - result = chain({"question": query, - "chat_history": st.session_state['history']}) - st.session_state['history'].append((query, result["answer"])) - - return result["answer"] - - -def ui(): - st.title('PDF Question Answer Bot') - # hugging_face_key = os.environ["HUGGINGFACE_HUB_TOKEN"] - llm = create_flan_t5_base(load_in_8bit=False) - hf_llm = HuggingFacePipeline(pipeline=llm) - - uploaded_file = st.file_uploader("Choose a PDF file", type=["pdf"]) - #saving the uploaded pdf file - if uploaded_file is not None: - save_path = "./uploaded_file.pdf" - with open(save_path, "wb") as f: - f.write(uploaded_file.read()) - - #loading the pdf file - pdf_doc=pdf_loader('./uploaded_file.pdf') - pdf_doc=splitDoc(pdf_doc) - vector_database = makeEmbeddings(pdf_doc) - #making the retriever of the vector database - retriever = vector_database.as_retriever(search_kwargs={"k":10}) - qa_chain = ConversationalRetrievalChain.from_llm(llm = hf_llm, - retriever=vector_database.as_retriever()) - - # Create an empty container to hold the PDF loader section - pdf_loader_container = st.empty() - - # Check if the PDF file is uploaded or not - if uploaded_file is not None: - st.text("The file has been uploaded successfully") - # Hide the PDF loader interface when the file is uploaded - pdf_loader_container.empty() - # Show the chat interface - show_chat_interface(qa_chain) - -def show_chat_interface(qa_chain): - if 'history' not in st.session_state: - st.session_state['history'] = [] - - if 'generated' not in st.session_state: - st.session_state['generated'] = ["Hello ! Ask me anything about the Uploaded PDF " + " 🤗"] - - if 'past' not in st.session_state: - st.session_state['past'] = ["Hey ! 
👋"] - - response_container = st.container() - #container for the user's text input - container = st.container() - - with container: - with st.form(key='my_form', clear_on_submit=True): - - user_input = st.text_input("Query:", placeholder="Talk about your PDF data here (:", key='input') - submit_button = st.form_submit_button(label='Send') - - if submit_button and user_input: - output = conversational_chat(qa_chain,user_input) - - st.session_state['past'].append(user_input) - st.session_state['generated'].append(output) - - if st.session_state['generated']: - with response_container: - for i in range(len(st.session_state['generated'])): - message(st.session_state["past"][i], is_user=True, key=str(i) + '_user', avatar_style="big-smile") - message(st.session_state["generated"][i], key=str(i), avatar_style="thumbs") - - -if __name__=='__main__': - ui() \ No newline at end of file diff --git a/spaces/Vision-CAIR/minigpt4/minigpt4/common/__init__.py b/spaces/Vision-CAIR/minigpt4/minigpt4/common/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/Xenova/semantic-image-search/Dockerfile b/spaces/Xenova/semantic-image-search/Dockerfile deleted file mode 100644 index a99d2b5846c127ed08f34dabc9d8524b6c934056..0000000000000000000000000000000000000000 --- a/spaces/Xenova/semantic-image-search/Dockerfile +++ /dev/null @@ -1,69 +0,0 @@ -# syntax=docker/dockerfile:1.4 - -# Adapted from https://github.com/vercel/next.js/blob/e60a1e747c3f521fc24dfd9ee2989e13afeb0a9b/examples/with-docker/Dockerfile -# For more information, see https://nextjs.org/docs/pages/building-your-application/deploying#docker-image - -FROM node:18 AS base - -# Install dependencies only when needed -FROM base AS deps -WORKDIR /app - -# Install dependencies based on the preferred package manager -COPY --link package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./ -RUN \ - if [ -f yarn.lock ]; then yarn --frozen-lockfile; \ - elif [ -f package-lock.json ]; then npm ci; \ - elif [ -f pnpm-lock.yaml ]; then yarn global add pnpm && pnpm i --frozen-lockfile; \ - else echo "Lockfile not found." && exit 1; \ - fi - - -# Rebuild the source code only when needed -FROM base AS builder -WORKDIR /app -COPY --from=deps --link /app/node_modules ./node_modules -COPY --link . . - -# Next.js collects completely anonymous telemetry data about general usage. -# Learn more here: https://nextjs.org/telemetry -# Uncomment the following line in case you want to disable telemetry during the build. -# ENV NEXT_TELEMETRY_DISABLED 1 - -RUN npm run build - -# If using yarn comment out above and use below instead -# RUN yarn build - -# Production image, copy all the files and run next -FROM base AS runner -WORKDIR /app - -ENV NODE_ENV production -# Uncomment the following line in case you want to disable telemetry during runtime. -# ENV NEXT_TELEMETRY_DISABLED 1 - -RUN \ - addgroup --system --gid 1001 nodejs; \ - adduser --system --uid 1001 nextjs - -COPY --from=builder --link /app/public ./public - -# Automatically leverage output traces to reduce image size -# https://nextjs.org/docs/advanced-features/output-file-tracing -COPY --from=builder --link --chown=1001:1001 /app/.next/standalone ./ -COPY --from=builder --link --chown=1001:1001 /app/.next/static ./.next/static - -USER nextjs - -EXPOSE 3000 - -ENV PORT 3000 -ENV HOSTNAME localhost - -# Allow the running process to write model files to the cache folder. 
-# NOTE: In practice, you would probably want to pre-download the model files to avoid having to download them on-the-fly. -RUN mkdir -p /app/node_modules/@xenova/.cache/ -RUN chmod 777 -R /app/node_modules/@xenova/ - -CMD ["node", "server.js"] \ No newline at end of file diff --git a/spaces/XzJosh/JM-Bert-VITS2/monotonic_align/__init__.py b/spaces/XzJosh/JM-Bert-VITS2/monotonic_align/__init__.py deleted file mode 100644 index 75603d26cf2b8d6196f5a68a89f9e49d8e519bc8..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/JM-Bert-VITS2/monotonic_align/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -from numpy import zeros, int32, float32 -from torch import from_numpy - -from .core import maximum_path_jit - -def maximum_path(neg_cent, mask): - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(float32) - path = zeros(neg_cent.shape, dtype=int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(int32) - maximum_path_jit(path, neg_cent, t_t_max, t_s_max) - return from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/XzJosh/Jianmo-Bert-VITS2/commons.py b/spaces/XzJosh/Jianmo-Bert-VITS2/commons.py deleted file mode 100644 index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Jianmo-Bert-VITS2/commons.py +++ /dev/null @@ -1,161 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): 
- parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/YUANAI/DiffspeechResearch/modules/vocoder/hifigan/hifigan.py b/spaces/YUANAI/DiffspeechResearch/modules/vocoder/hifigan/hifigan.py deleted file mode 100644 index fddd5278760427d5d93b9b38240319ba5bdb0bdf..0000000000000000000000000000000000000000 --- a/spaces/YUANAI/DiffspeechResearch/modules/vocoder/hifigan/hifigan.py +++ /dev/null @@ -1,338 +0,0 @@ -import torch -import torch.nn.functional as F -import torch.nn as nn -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -import numpy as np - -LRELU_SLOPE = 0.1 - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def apply_weight_norm(m): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - weight_norm(m) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -class ResBlock1(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.h = h - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.h = h - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Conv1d1x1(Conv1d): - """1x1 Conv1d with 
customized initialization.""" - - def __init__(self, in_channels, out_channels, bias): - """Initialize 1x1 Conv1d module.""" - super(Conv1d1x1, self).__init__(in_channels, out_channels, - kernel_size=1, padding=0, - dilation=1, bias=bias) - - -class HifiGanGenerator(torch.nn.Module): - def __init__(self, h, c_out=1): - super(HifiGanGenerator, self).__init__() - self.h = h - self.num_kernels = len(h['resblock_kernel_sizes']) - self.num_upsamples = len(h['upsample_rates']) - - self.conv_pre = weight_norm(Conv1d(80, h['upsample_initial_channel'], 7, 1, padding=3)) - resblock = ResBlock1 if h['resblock'] == '1' else ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(h['upsample_rates'], h['upsample_kernel_sizes'])): - c_cur = h['upsample_initial_channel'] // (2 ** (i + 1)) - self.ups.append(weight_norm( - ConvTranspose1d(c_cur * 2, c_cur, k, u, padding=(k - u) // 2))) - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = h['upsample_initial_channel'] // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(h['resblock_kernel_sizes'], h['resblock_dilation_sizes'])): - self.resblocks.append(resblock(h, ch, k, d)) - - self.conv_post = weight_norm(Conv1d(ch, c_out, 7, 1, padding=3)) - self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - - def forward(self, x, f0=None): - x = self.conv_pre(x) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False, use_cond=False, c_in=1): - super(DiscriminatorP, self).__init__() - self.use_cond = use_cond - if use_cond: - from utils.commons.hparams import hparams - t = hparams['hop_size'] - self.cond_net = torch.nn.ConvTranspose1d(80, 1, t * 2, stride=t, padding=t // 2) - c_in = 2 - - self.period = period - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(c_in, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x, mel): - fmap = [] - if self.use_cond: - x_mel = self.cond_net(mel) - x = torch.cat([x_mel, x], 1) - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class 
MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_cond=False, c_in=1): - super(MultiPeriodDiscriminator, self).__init__() - self.discriminators = nn.ModuleList([ - DiscriminatorP(2, use_cond=use_cond, c_in=c_in), - DiscriminatorP(3, use_cond=use_cond, c_in=c_in), - DiscriminatorP(5, use_cond=use_cond, c_in=c_in), - DiscriminatorP(7, use_cond=use_cond, c_in=c_in), - DiscriminatorP(11, use_cond=use_cond, c_in=c_in), - ]) - - def forward(self, y, y_hat, mel=None): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y, mel) - y_d_g, fmap_g = d(y_hat, mel) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False, use_cond=False, upsample_rates=None, c_in=1): - super(DiscriminatorS, self).__init__() - self.use_cond = use_cond - if use_cond: - t = np.prod(upsample_rates) - self.cond_net = torch.nn.ConvTranspose1d(80, 1, t * 2, stride=t, padding=t // 2) - c_in = 2 - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(c_in, 128, 15, 1, padding=7)), - norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)), - norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)), - norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x, mel): - if self.use_cond: - x_mel = self.cond_net(mel) - x = torch.cat([x_mel, x], 1) - fmap = [] - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiScaleDiscriminator(torch.nn.Module): - def __init__(self, use_cond=False, c_in=1): - super(MultiScaleDiscriminator, self).__init__() - from utils.commons.hparams import hparams - self.discriminators = nn.ModuleList([ - DiscriminatorS(use_spectral_norm=True, use_cond=use_cond, - upsample_rates=[4, 4, hparams['hop_size'] // 16], - c_in=c_in), - DiscriminatorS(use_cond=use_cond, - upsample_rates=[4, 4, hparams['hop_size'] // 32], - c_in=c_in), - DiscriminatorS(use_cond=use_cond, - upsample_rates=[4, 4, hparams['hop_size'] // 64], - c_in=c_in), - ]) - self.meanpools = nn.ModuleList([ - AvgPool1d(4, 2, padding=1), - AvgPool1d(4, 2, padding=1) - ]) - - def forward(self, y, y_hat, mel=None): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - if i != 0: - y = self.meanpools[i - 1](y) - y_hat = self.meanpools[i - 1](y_hat) - y_d_r, fmap_r = d(y, mel) - y_d_g, fmap_g = d(y_hat, mel) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - r_losses = 0 - g_losses = 0 - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - r_loss = torch.mean((1 - dr) ** 2) - g_loss = torch.mean(dg ** 2) - r_losses += r_loss - g_losses += g_loss - r_losses = 
r_losses / len(disc_real_outputs) - g_losses = g_losses / len(disc_real_outputs) - return r_losses, g_losses - - -def cond_discriminator_loss(outputs): - loss = 0 - for dg in outputs: - g_loss = torch.mean(dg ** 2) - loss += g_loss - loss = loss / len(outputs) - return loss - - -def generator_loss(disc_outputs): - loss = 0 - for dg in disc_outputs: - l = torch.mean((1 - dg) ** 2) - loss += l - loss = loss / len(disc_outputs) - return loss diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/datasets/builtin_meta.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/datasets/builtin_meta.py deleted file mode 100644 index 63c7a1a31b31dd89b82011effee26471faccacf5..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/data/datasets/builtin_meta.py +++ /dev/null @@ -1,350 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -""" -Note: -For your custom dataset, there is no need to hard-code metadata anywhere in the code. -For example, for COCO-format dataset, metadata will be obtained automatically -when calling `load_coco_json`. For other dataset, metadata may also be obtained in other ways -during loading. - -However, we hard-coded metadata for a few common dataset here. -The only goal is to allow users who don't have these dataset to use pre-trained models. -Users don't have to download a COCO json (which contains metadata), in order to visualize a -COCO model (with correct class names and colors). -""" - - -# All coco categories, together with their nice-looking visualization colors -# It's from https://github.com/cocodataset/panopticapi/blob/master/panoptic_coco_categories.json -COCO_CATEGORIES = [ - {"color": [220, 20, 60], "isthing": 1, "id": 1, "name": "person"}, - {"color": [119, 11, 32], "isthing": 1, "id": 2, "name": "bicycle"}, - {"color": [0, 0, 142], "isthing": 1, "id": 3, "name": "car"}, - {"color": [0, 0, 230], "isthing": 1, "id": 4, "name": "motorcycle"}, - {"color": [106, 0, 228], "isthing": 1, "id": 5, "name": "airplane"}, - {"color": [0, 60, 100], "isthing": 1, "id": 6, "name": "bus"}, - {"color": [0, 80, 100], "isthing": 1, "id": 7, "name": "train"}, - {"color": [0, 0, 70], "isthing": 1, "id": 8, "name": "truck"}, - {"color": [0, 0, 192], "isthing": 1, "id": 9, "name": "boat"}, - {"color": [250, 170, 30], "isthing": 1, "id": 10, "name": "traffic light"}, - {"color": [100, 170, 30], "isthing": 1, "id": 11, "name": "fire hydrant"}, - {"color": [220, 220, 0], "isthing": 1, "id": 13, "name": "stop sign"}, - {"color": [175, 116, 175], "isthing": 1, "id": 14, "name": "parking meter"}, - {"color": [250, 0, 30], "isthing": 1, "id": 15, "name": "bench"}, - {"color": [165, 42, 42], "isthing": 1, "id": 16, "name": "bird"}, - {"color": [255, 77, 255], "isthing": 1, "id": 17, "name": "cat"}, - {"color": [0, 226, 252], "isthing": 1, "id": 18, "name": "dog"}, - {"color": [182, 182, 255], "isthing": 1, "id": 19, "name": "horse"}, - {"color": [0, 82, 0], "isthing": 1, "id": 20, "name": "sheep"}, - {"color": [120, 166, 157], "isthing": 1, "id": 21, "name": "cow"}, - {"color": [110, 76, 0], "isthing": 1, "id": 22, "name": "elephant"}, - {"color": [174, 57, 255], "isthing": 1, "id": 23, "name": "bear"}, - {"color": [199, 100, 0], "isthing": 1, "id": 24, "name": "zebra"}, - {"color": [72, 0, 118], "isthing": 1, "id": 25, "name": "giraffe"}, - {"color": [255, 179, 240], "isthing": 1, "id": 27, "name": "backpack"}, - 
{"color": [0, 125, 92], "isthing": 1, "id": 28, "name": "umbrella"}, - {"color": [209, 0, 151], "isthing": 1, "id": 31, "name": "handbag"}, - {"color": [188, 208, 182], "isthing": 1, "id": 32, "name": "tie"}, - {"color": [0, 220, 176], "isthing": 1, "id": 33, "name": "suitcase"}, - {"color": [255, 99, 164], "isthing": 1, "id": 34, "name": "frisbee"}, - {"color": [92, 0, 73], "isthing": 1, "id": 35, "name": "skis"}, - {"color": [133, 129, 255], "isthing": 1, "id": 36, "name": "snowboard"}, - {"color": [78, 180, 255], "isthing": 1, "id": 37, "name": "sports ball"}, - {"color": [0, 228, 0], "isthing": 1, "id": 38, "name": "kite"}, - {"color": [174, 255, 243], "isthing": 1, "id": 39, "name": "baseball bat"}, - {"color": [45, 89, 255], "isthing": 1, "id": 40, "name": "baseball glove"}, - {"color": [134, 134, 103], "isthing": 1, "id": 41, "name": "skateboard"}, - {"color": [145, 148, 174], "isthing": 1, "id": 42, "name": "surfboard"}, - {"color": [255, 208, 186], "isthing": 1, "id": 43, "name": "tennis racket"}, - {"color": [197, 226, 255], "isthing": 1, "id": 44, "name": "bottle"}, - {"color": [171, 134, 1], "isthing": 1, "id": 46, "name": "wine glass"}, - {"color": [109, 63, 54], "isthing": 1, "id": 47, "name": "cup"}, - {"color": [207, 138, 255], "isthing": 1, "id": 48, "name": "fork"}, - {"color": [151, 0, 95], "isthing": 1, "id": 49, "name": "knife"}, - {"color": [9, 80, 61], "isthing": 1, "id": 50, "name": "spoon"}, - {"color": [84, 105, 51], "isthing": 1, "id": 51, "name": "bowl"}, - {"color": [74, 65, 105], "isthing": 1, "id": 52, "name": "banana"}, - {"color": [166, 196, 102], "isthing": 1, "id": 53, "name": "apple"}, - {"color": [208, 195, 210], "isthing": 1, "id": 54, "name": "sandwich"}, - {"color": [255, 109, 65], "isthing": 1, "id": 55, "name": "orange"}, - {"color": [0, 143, 149], "isthing": 1, "id": 56, "name": "broccoli"}, - {"color": [179, 0, 194], "isthing": 1, "id": 57, "name": "carrot"}, - {"color": [209, 99, 106], "isthing": 1, "id": 58, "name": "hot dog"}, - {"color": [5, 121, 0], "isthing": 1, "id": 59, "name": "pizza"}, - {"color": [227, 255, 205], "isthing": 1, "id": 60, "name": "donut"}, - {"color": [147, 186, 208], "isthing": 1, "id": 61, "name": "cake"}, - {"color": [153, 69, 1], "isthing": 1, "id": 62, "name": "chair"}, - {"color": [3, 95, 161], "isthing": 1, "id": 63, "name": "couch"}, - {"color": [163, 255, 0], "isthing": 1, "id": 64, "name": "potted plant"}, - {"color": [119, 0, 170], "isthing": 1, "id": 65, "name": "bed"}, - {"color": [0, 182, 199], "isthing": 1, "id": 67, "name": "dining table"}, - {"color": [0, 165, 120], "isthing": 1, "id": 70, "name": "toilet"}, - {"color": [183, 130, 88], "isthing": 1, "id": 72, "name": "tv"}, - {"color": [95, 32, 0], "isthing": 1, "id": 73, "name": "laptop"}, - {"color": [130, 114, 135], "isthing": 1, "id": 74, "name": "mouse"}, - {"color": [110, 129, 133], "isthing": 1, "id": 75, "name": "remote"}, - {"color": [166, 74, 118], "isthing": 1, "id": 76, "name": "keyboard"}, - {"color": [219, 142, 185], "isthing": 1, "id": 77, "name": "cell phone"}, - {"color": [79, 210, 114], "isthing": 1, "id": 78, "name": "microwave"}, - {"color": [178, 90, 62], "isthing": 1, "id": 79, "name": "oven"}, - {"color": [65, 70, 15], "isthing": 1, "id": 80, "name": "toaster"}, - {"color": [127, 167, 115], "isthing": 1, "id": 81, "name": "sink"}, - {"color": [59, 105, 106], "isthing": 1, "id": 82, "name": "refrigerator"}, - {"color": [142, 108, 45], "isthing": 1, "id": 84, "name": "book"}, - {"color": [196, 172, 0], "isthing": 1, "id": 85, "name": 
"clock"}, - {"color": [95, 54, 80], "isthing": 1, "id": 86, "name": "vase"}, - {"color": [128, 76, 255], "isthing": 1, "id": 87, "name": "scissors"}, - {"color": [201, 57, 1], "isthing": 1, "id": 88, "name": "teddy bear"}, - {"color": [246, 0, 122], "isthing": 1, "id": 89, "name": "hair drier"}, - {"color": [191, 162, 208], "isthing": 1, "id": 90, "name": "toothbrush"}, - {"color": [255, 255, 128], "isthing": 0, "id": 92, "name": "banner"}, - {"color": [147, 211, 203], "isthing": 0, "id": 93, "name": "blanket"}, - {"color": [150, 100, 100], "isthing": 0, "id": 95, "name": "bridge"}, - {"color": [168, 171, 172], "isthing": 0, "id": 100, "name": "cardboard"}, - {"color": [146, 112, 198], "isthing": 0, "id": 107, "name": "counter"}, - {"color": [210, 170, 100], "isthing": 0, "id": 109, "name": "curtain"}, - {"color": [92, 136, 89], "isthing": 0, "id": 112, "name": "door-stuff"}, - {"color": [218, 88, 184], "isthing": 0, "id": 118, "name": "floor-wood"}, - {"color": [241, 129, 0], "isthing": 0, "id": 119, "name": "flower"}, - {"color": [217, 17, 255], "isthing": 0, "id": 122, "name": "fruit"}, - {"color": [124, 74, 181], "isthing": 0, "id": 125, "name": "gravel"}, - {"color": [70, 70, 70], "isthing": 0, "id": 128, "name": "house"}, - {"color": [255, 228, 255], "isthing": 0, "id": 130, "name": "light"}, - {"color": [154, 208, 0], "isthing": 0, "id": 133, "name": "mirror-stuff"}, - {"color": [193, 0, 92], "isthing": 0, "id": 138, "name": "net"}, - {"color": [76, 91, 113], "isthing": 0, "id": 141, "name": "pillow"}, - {"color": [255, 180, 195], "isthing": 0, "id": 144, "name": "platform"}, - {"color": [106, 154, 176], "isthing": 0, "id": 145, "name": "playingfield"}, - {"color": [230, 150, 140], "isthing": 0, "id": 147, "name": "railroad"}, - {"color": [60, 143, 255], "isthing": 0, "id": 148, "name": "river"}, - {"color": [128, 64, 128], "isthing": 0, "id": 149, "name": "road"}, - {"color": [92, 82, 55], "isthing": 0, "id": 151, "name": "roof"}, - {"color": [254, 212, 124], "isthing": 0, "id": 154, "name": "sand"}, - {"color": [73, 77, 174], "isthing": 0, "id": 155, "name": "sea"}, - {"color": [255, 160, 98], "isthing": 0, "id": 156, "name": "shelf"}, - {"color": [255, 255, 255], "isthing": 0, "id": 159, "name": "snow"}, - {"color": [104, 84, 109], "isthing": 0, "id": 161, "name": "stairs"}, - {"color": [169, 164, 131], "isthing": 0, "id": 166, "name": "tent"}, - {"color": [225, 199, 255], "isthing": 0, "id": 168, "name": "towel"}, - {"color": [137, 54, 74], "isthing": 0, "id": 171, "name": "wall-brick"}, - {"color": [135, 158, 223], "isthing": 0, "id": 175, "name": "wall-stone"}, - {"color": [7, 246, 231], "isthing": 0, "id": 176, "name": "wall-tile"}, - {"color": [107, 255, 200], "isthing": 0, "id": 177, "name": "wall-wood"}, - {"color": [58, 41, 149], "isthing": 0, "id": 178, "name": "water-other"}, - {"color": [183, 121, 142], "isthing": 0, "id": 180, "name": "window-blind"}, - {"color": [255, 73, 97], "isthing": 0, "id": 181, "name": "window-other"}, - {"color": [107, 142, 35], "isthing": 0, "id": 184, "name": "tree-merged"}, - {"color": [190, 153, 153], "isthing": 0, "id": 185, "name": "fence-merged"}, - {"color": [146, 139, 141], "isthing": 0, "id": 186, "name": "ceiling-merged"}, - {"color": [70, 130, 180], "isthing": 0, "id": 187, "name": "sky-other-merged"}, - {"color": [134, 199, 156], "isthing": 0, "id": 188, "name": "cabinet-merged"}, - {"color": [209, 226, 140], "isthing": 0, "id": 189, "name": "table-merged"}, - {"color": [96, 36, 108], "isthing": 0, "id": 190, "name": 
"floor-other-merged"}, - {"color": [96, 96, 96], "isthing": 0, "id": 191, "name": "pavement-merged"}, - {"color": [64, 170, 64], "isthing": 0, "id": 192, "name": "mountain-merged"}, - {"color": [152, 251, 152], "isthing": 0, "id": 193, "name": "grass-merged"}, - {"color": [208, 229, 228], "isthing": 0, "id": 194, "name": "dirt-merged"}, - {"color": [206, 186, 171], "isthing": 0, "id": 195, "name": "paper-merged"}, - {"color": [152, 161, 64], "isthing": 0, "id": 196, "name": "food-other-merged"}, - {"color": [116, 112, 0], "isthing": 0, "id": 197, "name": "building-other-merged"}, - {"color": [0, 114, 143], "isthing": 0, "id": 198, "name": "rock-merged"}, - {"color": [102, 102, 156], "isthing": 0, "id": 199, "name": "wall-other-merged"}, - {"color": [250, 141, 255], "isthing": 0, "id": 200, "name": "rug-merged"}, -] - -# fmt: off -COCO_PERSON_KEYPOINT_NAMES = ( - "nose", - "left_eye", "right_eye", - "left_ear", "right_ear", - "left_shoulder", "right_shoulder", - "left_elbow", "right_elbow", - "left_wrist", "right_wrist", - "left_hip", "right_hip", - "left_knee", "right_knee", - "left_ankle", "right_ankle", -) -# fmt: on - -# Pairs of keypoints that should be exchanged under horizontal flipping -COCO_PERSON_KEYPOINT_FLIP_MAP = ( - ("left_eye", "right_eye"), - ("left_ear", "right_ear"), - ("left_shoulder", "right_shoulder"), - ("left_elbow", "right_elbow"), - ("left_wrist", "right_wrist"), - ("left_hip", "right_hip"), - ("left_knee", "right_knee"), - ("left_ankle", "right_ankle"), -) - -# rules for pairs of keypoints to draw a line between, and the line color to use. -KEYPOINT_CONNECTION_RULES = [ - # face - ("left_ear", "left_eye", (102, 204, 255)), - ("right_ear", "right_eye", (51, 153, 255)), - ("left_eye", "nose", (102, 0, 204)), - ("nose", "right_eye", (51, 102, 255)), - # upper-body - ("left_shoulder", "right_shoulder", (255, 128, 0)), - ("left_shoulder", "left_elbow", (153, 255, 204)), - ("right_shoulder", "right_elbow", (128, 229, 255)), - ("left_elbow", "left_wrist", (153, 255, 153)), - ("right_elbow", "right_wrist", (102, 255, 224)), - # lower-body - ("left_hip", "right_hip", (255, 102, 0)), - ("left_hip", "left_knee", (255, 255, 77)), - ("right_hip", "right_knee", (153, 255, 204)), - ("left_knee", "left_ankle", (191, 255, 128)), - ("right_knee", "right_ankle", (255, 195, 77)), -] - -# All Cityscapes categories, together with their nice-looking visualization colors -# It's from https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/helpers/labels.py # noqa -CITYSCAPES_CATEGORIES = [ - {"color": (128, 64, 128), "isthing": 0, "id": 7, "trainId": 0, "name": "road"}, - {"color": (244, 35, 232), "isthing": 0, "id": 8, "trainId": 1, "name": "sidewalk"}, - {"color": (70, 70, 70), "isthing": 0, "id": 11, "trainId": 2, "name": "building"}, - {"color": (102, 102, 156), "isthing": 0, "id": 12, "trainId": 3, "name": "wall"}, - {"color": (190, 153, 153), "isthing": 0, "id": 13, "trainId": 4, "name": "fence"}, - {"color": (153, 153, 153), "isthing": 0, "id": 17, "trainId": 5, "name": "pole"}, - {"color": (250, 170, 30), "isthing": 0, "id": 19, "trainId": 6, "name": "traffic light"}, - {"color": (220, 220, 0), "isthing": 0, "id": 20, "trainId": 7, "name": "traffic sign"}, - {"color": (107, 142, 35), "isthing": 0, "id": 21, "trainId": 8, "name": "vegetation"}, - {"color": (152, 251, 152), "isthing": 0, "id": 22, "trainId": 9, "name": "terrain"}, - {"color": (70, 130, 180), "isthing": 0, "id": 23, "trainId": 10, "name": "sky"}, - {"color": (220, 20, 60), "isthing": 1, "id": 24, 
"trainId": 11, "name": "person"}, - {"color": (255, 0, 0), "isthing": 1, "id": 25, "trainId": 12, "name": "rider"}, - {"color": (0, 0, 142), "isthing": 1, "id": 26, "trainId": 13, "name": "car"}, - {"color": (0, 0, 70), "isthing": 1, "id": 27, "trainId": 14, "name": "truck"}, - {"color": (0, 60, 100), "isthing": 1, "id": 28, "trainId": 15, "name": "bus"}, - {"color": (0, 80, 100), "isthing": 1, "id": 31, "trainId": 16, "name": "train"}, - {"color": (0, 0, 230), "isthing": 1, "id": 32, "trainId": 17, "name": "motorcycle"}, - {"color": (119, 11, 32), "isthing": 1, "id": 33, "trainId": 18, "name": "bicycle"}, -] - -# fmt: off -ADE20K_SEM_SEG_CATEGORIES = [ - "wall", "building", "sky", "floor", "tree", "ceiling", "road, route", "bed", "window ", "grass", "cabinet", "sidewalk, pavement", "person", "earth, ground", "door", "table", "mountain, mount", "plant", "curtain", "chair", "car", "water", "painting, picture", "sofa", "shelf", "house", "sea", "mirror", "rug", "field", "armchair", "seat", "fence", "desk", "rock, stone", "wardrobe, closet, press", "lamp", "tub", "rail", "cushion", "base, pedestal, stand", "box", "column, pillar", "signboard, sign", "chest of drawers, chest, bureau, dresser", "counter", "sand", "sink", "skyscraper", "fireplace", "refrigerator, icebox", "grandstand, covered stand", "path", "stairs", "runway", "case, display case, showcase, vitrine", "pool table, billiard table, snooker table", "pillow", "screen door, screen", "stairway, staircase", "river", "bridge, span", "bookcase", "blind, screen", "coffee table", "toilet, can, commode, crapper, pot, potty, stool, throne", "flower", "book", "hill", "bench", "countertop", "stove", "palm, palm tree", "kitchen island", "computer", "swivel chair", "boat", "bar", "arcade machine", "hovel, hut, hutch, shack, shanty", "bus", "towel", "light", "truck", "tower", "chandelier", "awning, sunshade, sunblind", "street lamp", "booth", "tv", "plane", "dirt track", "clothes", "pole", "land, ground, soil", "bannister, banister, balustrade, balusters, handrail", "escalator, moving staircase, moving stairway", "ottoman, pouf, pouffe, puff, hassock", "bottle", "buffet, counter, sideboard", "poster, posting, placard, notice, bill, card", "stage", "van", "ship", "fountain", "conveyer belt, conveyor belt, conveyer, conveyor, transporter", "canopy", "washer, automatic washer, washing machine", "plaything, toy", "pool", "stool", "barrel, cask", "basket, handbasket", "falls", "tent", "bag", "minibike, motorbike", "cradle", "oven", "ball", "food, solid food", "step, stair", "tank, storage tank", "trade name", "microwave", "pot", "animal", "bicycle", "lake", "dishwasher", "screen", "blanket, cover", "sculpture", "hood, exhaust hood", "sconce", "vase", "traffic light", "tray", "trash can", "fan", "pier", "crt screen", "plate", "monitor", "bulletin board", "shower", "radiator", "glass, drinking glass", "clock", "flag", # noqa -] -# After processed by `prepare_ade20k_sem_seg.py`, id 255 means ignore -# fmt: on - - -def _get_coco_instances_meta(): - thing_ids = [k["id"] for k in COCO_CATEGORIES if k["isthing"] == 1] - thing_colors = [k["color"] for k in COCO_CATEGORIES if k["isthing"] == 1] - assert len(thing_ids) == 80, len(thing_ids) - # Mapping from the incontiguous COCO category id to an id in [0, 79] - thing_dataset_id_to_contiguous_id = {k: i for i, k in enumerate(thing_ids)} - thing_classes = [k["name"] for k in COCO_CATEGORIES if k["isthing"] == 1] - ret = { - "thing_dataset_id_to_contiguous_id": thing_dataset_id_to_contiguous_id, - "thing_classes": 
thing_classes, - "thing_colors": thing_colors, - } - return ret - - -def _get_coco_panoptic_separated_meta(): - """ - Returns metadata for "separated" version of the panoptic segmentation dataset. - """ - stuff_ids = [k["id"] for k in COCO_CATEGORIES if k["isthing"] == 0] - assert len(stuff_ids) == 53, len(stuff_ids) - - # For semantic segmentation, this mapping maps from contiguous stuff id - # (in [0, 53], used in models) to ids in the dataset (used for processing results) - # The id 0 is mapped to an extra category "thing". - stuff_dataset_id_to_contiguous_id = {k: i + 1 for i, k in enumerate(stuff_ids)} - # When converting COCO panoptic annotations to semantic annotations - # We label the "thing" category to 0 - stuff_dataset_id_to_contiguous_id[0] = 0 - - # 54 names for COCO stuff categories (including "things") - stuff_classes = ["things"] + [ - k["name"].replace("-other", "").replace("-merged", "") - for k in COCO_CATEGORIES - if k["isthing"] == 0 - ] - - # NOTE: I randomly picked a color for things - stuff_colors = [[82, 18, 128]] + [k["color"] for k in COCO_CATEGORIES if k["isthing"] == 0] - ret = { - "stuff_dataset_id_to_contiguous_id": stuff_dataset_id_to_contiguous_id, - "stuff_classes": stuff_classes, - "stuff_colors": stuff_colors, - } - ret.update(_get_coco_instances_meta()) - return ret - - -def _get_builtin_metadata(dataset_name): - if dataset_name == "coco": - return _get_coco_instances_meta() - if dataset_name == "coco_panoptic_separated": - return _get_coco_panoptic_separated_meta() - elif dataset_name == "coco_panoptic_standard": - meta = {} - # The following metadata maps contiguous id from [0, #thing categories + - # #stuff categories) to their names and colors. We have to replica of the - # same name and color under "thing_*" and "stuff_*" because the current - # visualization function in D2 handles thing and class classes differently - # due to some heuristic used in Panoptic FPN. We keep the same naming to - # enable reusing existing visualization functions. - thing_classes = [k["name"] for k in COCO_CATEGORIES] - thing_colors = [k["color"] for k in COCO_CATEGORIES] - stuff_classes = [k["name"] for k in COCO_CATEGORIES] - stuff_colors = [k["color"] for k in COCO_CATEGORIES] - - meta["thing_classes"] = thing_classes - meta["thing_colors"] = thing_colors - meta["stuff_classes"] = stuff_classes - meta["stuff_colors"] = stuff_colors - - # Convert category id for training: - # category id: like semantic segmentation, it is the class id for each - # pixel. Since there are some classes not used in evaluation, the category - # id is not always contiguous and thus we have two set of category ids: - # - original category id: category id in the original dataset, mainly - # used for evaluation. - # - contiguous category id: [0, #classes), in order to train the linear - # softmax classifier. 
- thing_dataset_id_to_contiguous_id = {} - stuff_dataset_id_to_contiguous_id = {} - - for i, cat in enumerate(COCO_CATEGORIES): - if cat["isthing"]: - thing_dataset_id_to_contiguous_id[cat["id"]] = i - else: - stuff_dataset_id_to_contiguous_id[cat["id"]] = i - - meta["thing_dataset_id_to_contiguous_id"] = thing_dataset_id_to_contiguous_id - meta["stuff_dataset_id_to_contiguous_id"] = stuff_dataset_id_to_contiguous_id - - return meta - elif dataset_name == "coco_person": - return { - "thing_classes": ["person"], - "keypoint_names": COCO_PERSON_KEYPOINT_NAMES, - "keypoint_flip_map": COCO_PERSON_KEYPOINT_FLIP_MAP, - "keypoint_connection_rules": KEYPOINT_CONNECTION_RULES, - } - elif dataset_name == "cityscapes": - # fmt: off - CITYSCAPES_THING_CLASSES = [ - "person", "rider", "car", "truck", - "bus", "train", "motorcycle", "bicycle", - ] - CITYSCAPES_STUFF_CLASSES = [ - "road", "sidewalk", "building", "wall", "fence", "pole", "traffic light", - "traffic sign", "vegetation", "terrain", "sky", "person", "rider", "car", - "truck", "bus", "train", "motorcycle", "bicycle", - ] - # fmt: on - return { - "thing_classes": CITYSCAPES_THING_CLASSES, - "stuff_classes": CITYSCAPES_STUFF_CLASSES, - } - raise KeyError("No built-in metadata for dataset {}".format(dataset_name)) diff --git a/spaces/YueMafighting/FollowYourPose/inference_followyourpose.py b/spaces/YueMafighting/FollowYourPose/inference_followyourpose.py deleted file mode 100644 index 31bc165f91df8e330dd6f599c85bc8498e15d251..0000000000000000000000000000000000000000 --- a/spaces/YueMafighting/FollowYourPose/inference_followyourpose.py +++ /dev/null @@ -1,90 +0,0 @@ - -from FollowYourPose.test_followyourpose import * - -import copy -import gradio as gr -from transformers import AutoTokenizer, CLIPTextModel -from huggingface_hub import snapshot_download -from inference_mmpose import * -import sys -sys.path.append('FollowYourPose') - -def get_time_string() -> str: - x = datetime.datetime.now() - return f"{(x.year - 2000):02d}{x.month:02d}{x.day:02d}-{x.hour:02d}{x.minute:02d}{x.second:02d}" - - -class merge_config_then_run(): - def __init__(self) -> None: - # Load the tokenizer - self.tokenizer = None - self.text_encoder = None - self.vae = None - self.unet = None - self.download_model() - self.mmpose = gr.Interface.load(name="spaces/YueMafighting/mmpose-estimation") - - def download_model(self): - REPO_ID = 'YueMafighting/FollowYourPose_v1' - snapshot_download(repo_id=REPO_ID, local_dir='./FollowYourPose/checkpoints/', local_dir_use_symlinks=False) - - - def run( - self, - data_path, - target_prompt, - num_steps, - guidance_scale, - video_type, - user_input_video=None, - start_sample_frame=0, - n_sample_frame=8, - stride=1, - left_crop=0, - right_crop=0, - top_crop=0, - bottom_crop=0, - ): - if video_type == "Raw Video": - infer_skeleton(self.mmpose, data_path) - - default_edit_config='./FollowYourPose/configs/pose_sample.yaml' - Omegadict_default_edit_config = OmegaConf.load(default_edit_config) - - dataset_time_string = get_time_string() - config_now = copy.deepcopy(Omegadict_default_edit_config) - - offset_dict = { - "left": left_crop, - "right": right_crop, - "top": top_crop, - "bottom": bottom_crop, - } - ImageSequenceDataset_dict = { - "start_sample_frame" : start_sample_frame, - "n_sample_frame" : n_sample_frame, - "sampling_rate" : stride, - "offset": offset_dict, - } - config_now['validation_data'].update(ImageSequenceDataset_dict) - if user_input_video and data_path is None: - raise gr.Error('You need to upload a video or choose a 
provided video') - if user_input_video is not None: - if isinstance(user_input_video, str): - config_now['validation_data']['path'] = user_input_video - elif hasattr(user_input_video, 'name') and user_input_video.name is not None: - config_now['validation_data']['path'] = user_input_video.name - config_now['validation_data']['prompts'] = [target_prompt] - # ddim config - config_now['validation_data']['guidance_scale'] = guidance_scale - config_now['validation_data']['num_inference_steps'] = num_steps - - if video_type == "Raw Video": - config_now['skeleton_path'] = './mmpose_result.mp4' - else: - config_now['skeleton_path'] = data_path - - save_path = test(**config_now) - mp4_path = save_path.replace('_0.gif', '_0_0_0.mp4') - return mp4_path - diff --git a/spaces/abhishek/first-order-motion-model/crop-video.py b/spaces/abhishek/first-order-motion-model/crop-video.py deleted file mode 100644 index 1a7740ee151ed104f4da887ac41b42dd693da2cc..0000000000000000000000000000000000000000 --- a/spaces/abhishek/first-order-motion-model/crop-video.py +++ /dev/null @@ -1,158 +0,0 @@ -import face_alignment -import skimage.io -import numpy -from argparse import ArgumentParser -from skimage import img_as_ubyte -from skimage.transform import resize -from tqdm import tqdm -import os -import imageio -import numpy as np -import warnings -warnings.filterwarnings("ignore") - -def extract_bbox(frame, fa): - if max(frame.shape[0], frame.shape[1]) > 640: - scale_factor = max(frame.shape[0], frame.shape[1]) / 640.0 - frame = resize(frame, (int(frame.shape[0] / scale_factor), int(frame.shape[1] / scale_factor))) - frame = img_as_ubyte(frame) - else: - scale_factor = 1 - frame = frame[..., :3] - bboxes = fa.face_detector.detect_from_image(frame[..., ::-1]) - if len(bboxes) == 0: - return [] - return np.array(bboxes)[:, :-1] * scale_factor - - - -def bb_intersection_over_union(boxA, boxB): - xA = max(boxA[0], boxB[0]) - yA = max(boxA[1], boxB[1]) - xB = min(boxA[2], boxB[2]) - yB = min(boxA[3], boxB[3]) - interArea = max(0, xB - xA + 1) * max(0, yB - yA + 1) - boxAArea = (boxA[2] - boxA[0] + 1) * (boxA[3] - boxA[1] + 1) - boxBArea = (boxB[2] - boxB[0] + 1) * (boxB[3] - boxB[1] + 1) - iou = interArea / float(boxAArea + boxBArea - interArea) - return iou - - -def join(tube_bbox, bbox): - xA = min(tube_bbox[0], bbox[0]) - yA = min(tube_bbox[1], bbox[1]) - xB = max(tube_bbox[2], bbox[2]) - yB = max(tube_bbox[3], bbox[3]) - return (xA, yA, xB, yB) - - -def compute_bbox(start, end, fps, tube_bbox, frame_shape, inp, image_shape, increase_area=0.1): - left, top, right, bot = tube_bbox - width = right - left - height = bot - top - - #Computing aspect preserving bbox - width_increase = max(increase_area, ((1 + 2 * increase_area) * height - width) / (2 * width)) - height_increase = max(increase_area, ((1 + 2 * increase_area) * width - height) / (2 * height)) - - left = int(left - width_increase * width) - top = int(top - height_increase * height) - right = int(right + width_increase * width) - bot = int(bot + height_increase * height) - - top, bot, left, right = max(0, top), min(bot, frame_shape[0]), max(0, left), min(right, frame_shape[1]) - h, w = bot - top, right - left - - start = start / fps - end = end / fps - time = end - start - - scale = f'{image_shape[0]}:{image_shape[1]}' - - return f'ffmpeg -i {inp} -ss {start} -t {time} -filter:v "crop={w}:{h}:{left}:{top}, scale={scale}" crop.mp4' - - -def compute_bbox_trajectories(trajectories, fps, frame_shape, args): - commands = [] - for i, (bbox, tube_bbox, start, end) in 
enumerate(trajectories): - if (end - start) > args.min_frames: - command = compute_bbox(start, end, fps, tube_bbox, frame_shape, inp=args.inp, image_shape=args.image_shape, increase_area=args.increase) - commands.append(command) - return commands - - -def process_video(args): - device = 'cpu' if args.cpu else 'cuda' - fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, flip_input=False, device=device) - video = imageio.get_reader(args.inp) - - trajectories = [] - previous_frame = None - fps = video.get_meta_data()['fps'] - commands = [] - try: - for i, frame in tqdm(enumerate(video)): - frame_shape = frame.shape - bboxes = extract_bbox(frame, fa) - ## For each trajectory check the criterion - not_valid_trajectories = [] - valid_trajectories = [] - - for trajectory in trajectories: - tube_bbox = trajectory[0] - intersection = 0 - for bbox in bboxes: - intersection = max(intersection, bb_intersection_over_union(tube_bbox, bbox)) - if intersection > args.iou_with_initial: - valid_trajectories.append(trajectory) - else: - not_valid_trajectories.append(trajectory) - - commands += compute_bbox_trajectories(not_valid_trajectories, fps, frame_shape, args) - trajectories = valid_trajectories - - ## Assign bbox to trajectories, create new trajectories - for bbox in bboxes: - intersection = 0 - current_trajectory = None - for trajectory in trajectories: - tube_bbox = trajectory[0] - current_intersection = bb_intersection_over_union(tube_bbox, bbox) - if intersection < current_intersection and current_intersection > args.iou_with_initial: - intersection = bb_intersection_over_union(tube_bbox, bbox) - current_trajectory = trajectory - - ## Create new trajectory - if current_trajectory is None: - trajectories.append([bbox, bbox, i, i]) - else: - current_trajectory[3] = i - current_trajectory[1] = join(current_trajectory[1], bbox) - - - except IndexError as e: - raise (e) - - commands += compute_bbox_trajectories(trajectories, fps, frame_shape, args) - return commands - - -if __name__ == "__main__": - parser = ArgumentParser() - - parser.add_argument("--image_shape", default=(256, 256), type=lambda x: tuple(map(int, x.split(','))), - help="Image shape") - parser.add_argument("--increase", default=0.1, type=float, help='Increase bbox by this amount') - parser.add_argument("--iou_with_initial", type=float, default=0.25, help="The minimal allowed iou with inital bbox") - parser.add_argument("--inp", required=True, help='Input image or video') - parser.add_argument("--min_frames", type=int, default=150, help='Minimum number of frames') - parser.add_argument("--cpu", dest="cpu", action="store_true", help="cpu mode.") - - - args = parser.parse_args() - - commands = process_video(args) - for command in commands: - print (command) - - \ No newline at end of file diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/utils/parrots_jit.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/utils/parrots_jit.py deleted file mode 100644 index 61873f6dbb9b10ed972c90aa8faa321e3cb3249e..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmcv/utils/parrots_jit.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import os - -from .parrots_wrapper import TORCH_VERSION - -parrots_jit_option = os.getenv('PARROTS_JIT_OPTION') - -if TORCH_VERSION == 'parrots' and parrots_jit_option == 'ON': - from parrots.jit import pat as jit -else: - - def jit(func=None, - check_input=None, - full_shape=True, - derivate=False, - coderize=False, - optimize=False): - - def wrapper(func): - - def wrapper_inner(*args, **kargs): - return func(*args, **kargs) - - return wrapper_inner - - if func is None: - return wrapper - else: - return func - - -if TORCH_VERSION == 'parrots': - from parrots.utils.tester import skip_no_elena -else: - - def skip_no_elena(func): - - def wrapper(*args, **kargs): - return func(*args, **kargs) - - return wrapper diff --git a/spaces/adasddas/dsaaaaaaaa2/README.md b/spaces/adasddas/dsaaaaaaaa2/README.md deleted file mode 100644 index b4649c66bff3ff8bc6626e25cd3203c7ad1a210d..0000000000000000000000000000000000000000 --- a/spaces/adasddas/dsaaaaaaaa2/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Dsaaaaaaaa -emoji: 🏆 -colorFrom: purple -colorTo: indigo -sdk: docker -pinned: false -license: bigscience-openrail-m -duplicated_from: adasddas/dsaaaaaaaa22 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/akhaliq/lama/saicinpainting/evaluation/masks/countless/countless3d.py b/spaces/akhaliq/lama/saicinpainting/evaluation/masks/countless/countless3d.py deleted file mode 100644 index 810a71e4b1fa344dd2d731186516dbfa96c9cd03..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/lama/saicinpainting/evaluation/masks/countless/countless3d.py +++ /dev/null @@ -1,356 +0,0 @@ -from six.moves import range -from PIL import Image -import numpy as np -import io -import time -import math -import random -import sys -from collections import defaultdict -from copy import deepcopy -from itertools import combinations -from functools import reduce -from tqdm import tqdm - -from memory_profiler import profile - -def countless5(a,b,c,d,e): - """First stage of generalizing from countless2d. - - You have five slots: A, B, C, D, E - - You can decide if something is the winner by first checking for - matches of three, then matches of two, then picking just one if - the other two tries fail. In countless2d, you just check for matches - of two and then pick one of them otherwise. - - Unfortunately, you need to check ABC, ABD, ABE, BCD, BDE, & CDE. - Then you need to check AB, AC, AD, BC, BD - We skip checking E because if none of these match, we pick E. We can - skip checking AE, BE, CE, DE since if any of those match, E is our boy - so it's redundant. - - So countless grows cominatorially in complexity. - """ - sections = [ a,b,c,d,e ] - - p2 = lambda q,r: q * (q == r) # q if p == q else 0 - p3 = lambda q,r,s: q * ( (q == r) & (r == s) ) # q if q == r == s else 0 - - lor = lambda x,y: x + (x == 0) * y - - results3 = ( p3(x,y,z) for x,y,z in combinations(sections, 3) ) - results3 = reduce(lor, results3) - - results2 = ( p2(x,y) for x,y in combinations(sections[:-1], 2) ) - results2 = reduce(lor, results2) - - return reduce(lor, (results3, results2, e)) - -def countless8(a,b,c,d,e,f,g,h): - """Extend countless5 to countless8. 
Same deal, except we also - need to check for matches of length 4.""" - sections = [ a, b, c, d, e, f, g, h ] - - p2 = lambda q,r: q * (q == r) - p3 = lambda q,r,s: q * ( (q == r) & (r == s) ) - p4 = lambda p,q,r,s: p * ( (p == q) & (q == r) & (r == s) ) - - lor = lambda x,y: x + (x == 0) * y - - results4 = ( p4(x,y,z,w) for x,y,z,w in combinations(sections, 4) ) - results4 = reduce(lor, results4) - - results3 = ( p3(x,y,z) for x,y,z in combinations(sections, 3) ) - results3 = reduce(lor, results3) - - # We can always use our shortcut of omitting the last element - # for N choose 2 - results2 = ( p2(x,y) for x,y in combinations(sections[:-1], 2) ) - results2 = reduce(lor, results2) - - return reduce(lor, [ results4, results3, results2, h ]) - -def dynamic_countless3d(data): - """countless8 + dynamic programming. ~2x faster""" - sections = [] - - # shift zeros up one so they don't interfere with bitwise operators - # we'll shift down at the end - data += 1 - - # This loop splits the 2D array apart into four arrays that are - # all the result of striding by 2 and offset by (0,0), (0,1), (1,0), - # and (1,1) representing the A, B, C, and D positions from Figure 1. - factor = (2,2,2) - for offset in np.ndindex(factor): - part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - pick = lambda a,b: a * (a == b) - lor = lambda x,y: x + (x == 0) * y - - subproblems2 = {} - - results2 = None - for x,y in combinations(range(7), 2): - res = pick(sections[x], sections[y]) - subproblems2[(x,y)] = res - if results2 is not None: - results2 += (results2 == 0) * res - else: - results2 = res - - subproblems3 = {} - - results3 = None - for x,y,z in combinations(range(8), 3): - res = pick(subproblems2[(x,y)], sections[z]) - - if z != 7: - subproblems3[(x,y,z)] = res - - if results3 is not None: - results3 += (results3 == 0) * res - else: - results3 = res - - results3 = reduce(lor, (results3, results2, sections[-1])) - - # free memory - results2 = None - subproblems2 = None - res = None - - results4 = ( pick(subproblems3[(x,y,z)], sections[w]) for x,y,z,w in combinations(range(8), 4) ) - results4 = reduce(lor, results4) - subproblems3 = None # free memory - - final_result = lor(results4, results3) - 1 - data -= 1 - return final_result - -def countless3d(data): - """Now write countless8 in such a way that it could be used - to process an image.""" - sections = [] - - # shift zeros up one so they don't interfere with bitwise operators - # we'll shift down at the end - data += 1 - - # This loop splits the 2D array apart into four arrays that are - # all the result of striding by 2 and offset by (0,0), (0,1), (1,0), - # and (1,1) representing the A, B, C, and D positions from Figure 1. 
- factor = (2,2,2) - for offset in np.ndindex(factor): - part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - p2 = lambda q,r: q * (q == r) - p3 = lambda q,r,s: q * ( (q == r) & (r == s) ) - p4 = lambda p,q,r,s: p * ( (p == q) & (q == r) & (r == s) ) - - lor = lambda x,y: x + (x == 0) * y - - results4 = ( p4(x,y,z,w) for x,y,z,w in combinations(sections, 4) ) - results4 = reduce(lor, results4) - - results3 = ( p3(x,y,z) for x,y,z in combinations(sections, 3) ) - results3 = reduce(lor, results3) - - results2 = ( p2(x,y) for x,y in combinations(sections[:-1], 2) ) - results2 = reduce(lor, results2) - - final_result = reduce(lor, (results4, results3, results2, sections[-1])) - 1 - data -= 1 - return final_result - -def countless_generalized(data, factor): - assert len(data.shape) == len(factor) - - sections = [] - - mode_of = reduce(lambda x,y: x * y, factor) - majority = int(math.ceil(float(mode_of) / 2)) - - data += 1 - - # This loop splits the 2D array apart into four arrays that are - # all the result of striding by 2 and offset by (0,0), (0,1), (1,0), - # and (1,1) representing the A, B, C, and D positions from Figure 1. - for offset in np.ndindex(factor): - part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - def pick(elements): - eq = ( elements[i] == elements[i+1] for i in range(len(elements) - 1) ) - anded = reduce(lambda p,q: p & q, eq) - return elements[0] * anded - - def logical_or(x,y): - return x + (x == 0) * y - - result = ( pick(combo) for combo in combinations(sections, majority) ) - result = reduce(logical_or, result) - for i in range(majority - 1, 3-1, -1): # 3-1 b/c of exclusive bounds - partial_result = ( pick(combo) for combo in combinations(sections, i) ) - partial_result = reduce(logical_or, partial_result) - result = logical_or(result, partial_result) - - partial_result = ( pick(combo) for combo in combinations(sections[:-1], 2) ) - partial_result = reduce(logical_or, partial_result) - result = logical_or(result, partial_result) - - result = logical_or(result, sections[-1]) - 1 - data -= 1 - return result - -def dynamic_countless_generalized(data, factor): - assert len(data.shape) == len(factor) - - sections = [] - - mode_of = reduce(lambda x,y: x * y, factor) - majority = int(math.ceil(float(mode_of) / 2)) - - data += 1 # offset from zero - - # This loop splits the 2D array apart into four arrays that are - # all the result of striding by 2 and offset by (0,0), (0,1), (1,0), - # and (1,1) representing the A, B, C, and D positions from Figure 1. 
- for offset in np.ndindex(factor): - part = data[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - pick = lambda a,b: a * (a == b) - lor = lambda x,y: x + (x == 0) * y # logical or - - subproblems = [ {}, {} ] - results2 = None - for x,y in combinations(range(len(sections) - 1), 2): - res = pick(sections[x], sections[y]) - subproblems[0][(x,y)] = res - if results2 is not None: - results2 = lor(results2, res) - else: - results2 = res - - results = [ results2 ] - for r in range(3, majority+1): - r_results = None - for combo in combinations(range(len(sections)), r): - res = pick(subproblems[0][combo[:-1]], sections[combo[-1]]) - - if combo[-1] != len(sections) - 1: - subproblems[1][combo] = res - - if r_results is not None: - r_results = lor(r_results, res) - else: - r_results = res - results.append(r_results) - subproblems[0] = subproblems[1] - subproblems[1] = {} - - results.reverse() - final_result = lor(reduce(lor, results), sections[-1]) - 1 - data -= 1 - return final_result - -def downsample_with_averaging(array): - """ - Downsample x by factor using averaging. - - @return: The downsampled array, of the same type as x. - """ - factor = (2,2,2) - - if np.array_equal(factor[:3], np.array([1,1,1])): - return array - - output_shape = tuple(int(math.ceil(s / f)) for s, f in zip(array.shape, factor)) - temp = np.zeros(output_shape, float) - counts = np.zeros(output_shape, np.int) - for offset in np.ndindex(factor): - part = array[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - indexing_expr = tuple(np.s_[:s] for s in part.shape) - temp[indexing_expr] += part - counts[indexing_expr] += 1 - return np.cast[array.dtype](temp / counts) - -def downsample_with_max_pooling(array): - - factor = (2,2,2) - - sections = [] - - for offset in np.ndindex(factor): - part = array[tuple(np.s_[o::f] for o, f in zip(offset, factor))] - sections.append(part) - - output = sections[0].copy() - - for section in sections[1:]: - np.maximum(output, section, output) - - return output - -def striding(array): - """Downsample x by factor using striding. - - @return: The downsampled array, of the same type as x. 
- """ - factor = (2,2,2) - if np.all(np.array(factor, int) == 1): - return array - return array[tuple(np.s_[::f] for f in factor)] - -def benchmark(): - def countless3d_generalized(img): - return countless_generalized(img, (2,8,1)) - def countless3d_dynamic_generalized(img): - return dynamic_countless_generalized(img, (8,8,1)) - - methods = [ - # countless3d, - # dynamic_countless3d, - countless3d_generalized, - # countless3d_dynamic_generalized, - # striding, - # downsample_with_averaging, - # downsample_with_max_pooling - ] - - data = np.zeros(shape=(16**2, 16**2, 16**2), dtype=np.uint8) + 1 - - N = 5 - - print('Algorithm\tMPx\tMB/sec\tSec\tN=%d' % N) - - for fn in methods: - start = time.time() - for _ in range(N): - result = fn(data) - end = time.time() - - total_time = (end - start) - mpx = N * float(data.shape[0] * data.shape[1] * data.shape[2]) / total_time / 1024.0 / 1024.0 - mbytes = mpx * np.dtype(data.dtype).itemsize - # Output in tab separated format to enable copy-paste into excel/numbers - print("%s\t%.3f\t%.3f\t%.2f" % (fn.__name__, mpx, mbytes, total_time)) - -if __name__ == '__main__': - benchmark() - -# Algorithm MPx MB/sec Sec N=5 -# countless3d 10.564 10.564 60.58 -# dynamic_countless3d 22.717 22.717 28.17 -# countless3d_generalized 9.702 9.702 65.96 -# countless3d_dynamic_generalized 22.720 22.720 28.17 -# striding 253360.506 253360.506 0.00 -# downsample_with_averaging 224.098 224.098 2.86 -# downsample_with_max_pooling 690.474 690.474 0.93 - - - diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/vcs/git.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/vcs/git.py deleted file mode 100644 index 8d1d499376744954308bdf96f80e5b5a39a24195..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_internal/vcs/git.py +++ /dev/null @@ -1,526 +0,0 @@ -import logging -import os.path -import pathlib -import re -import urllib.parse -import urllib.request -from typing import List, Optional, Tuple - -from pip._internal.exceptions import BadCommand, InstallationError -from pip._internal.utils.misc import HiddenText, display_path, hide_url -from pip._internal.utils.subprocess import make_command -from pip._internal.vcs.versioncontrol import ( - AuthInfo, - RemoteNotFoundError, - RemoteNotValidError, - RevOptions, - VersionControl, - find_path_to_project_root_from_repo_root, - vcs, -) - -urlsplit = urllib.parse.urlsplit -urlunsplit = urllib.parse.urlunsplit - - -logger = logging.getLogger(__name__) - - -GIT_VERSION_REGEX = re.compile( - r"^git version " # Prefix. - r"(\d+)" # Major. - r"\.(\d+)" # Dot, minor. - r"(?:\.(\d+))?" # Optional dot, patch. - r".*$" # Suffix, including any pre- and post-release segments we don't care about. -) - -HASH_REGEX = re.compile("^[a-fA-F0-9]{40}$") - -# SCP (Secure copy protocol) shorthand. e.g. 'git@example.com:foo/bar.git' -SCP_REGEX = re.compile( - r"""^ - # Optional user, e.g. 'git@' - (\w+@)? - # Server, e.g. 'github.com'. - ([^/:]+): - # The server-side path. e.g. 'user/project.git'. Must start with an - # alphanumeric character so as not to be confusable with a Windows paths - # like 'C:/foo/bar' or 'C:\foo\bar'. 
- (\w[^:]*) - $""", - re.VERBOSE, -) - - -def looks_like_hash(sha: str) -> bool: - return bool(HASH_REGEX.match(sha)) - - -class Git(VersionControl): - name = "git" - dirname = ".git" - repo_name = "clone" - schemes = ( - "git+http", - "git+https", - "git+ssh", - "git+git", - "git+file", - ) - # Prevent the user's environment variables from interfering with pip: - # https://github.com/pypa/pip/issues/1130 - unset_environ = ("GIT_DIR", "GIT_WORK_TREE") - default_arg_rev = "HEAD" - - @staticmethod - def get_base_rev_args(rev: str) -> List[str]: - return [rev] - - def is_immutable_rev_checkout(self, url: str, dest: str) -> bool: - _, rev_options = self.get_url_rev_options(hide_url(url)) - if not rev_options.rev: - return False - if not self.is_commit_id_equal(dest, rev_options.rev): - # the current commit is different from rev, - # which means rev was something else than a commit hash - return False - # return False in the rare case rev is both a commit hash - # and a tag or a branch; we don't want to cache in that case - # because that branch/tag could point to something else in the future - is_tag_or_branch = bool(self.get_revision_sha(dest, rev_options.rev)[0]) - return not is_tag_or_branch - - def get_git_version(self) -> Tuple[int, ...]: - version = self.run_command( - ["version"], - command_desc="git version", - show_stdout=False, - stdout_only=True, - ) - match = GIT_VERSION_REGEX.match(version) - if not match: - logger.warning("Can't parse git version: %s", version) - return () - return tuple(int(c) for c in match.groups()) - - @classmethod - def get_current_branch(cls, location: str) -> Optional[str]: - """ - Return the current branch, or None if HEAD isn't at a branch - (e.g. detached HEAD). - """ - # git-symbolic-ref exits with empty stdout if "HEAD" is a detached - # HEAD rather than a symbolic ref. In addition, the -q causes the - # command to exit with status code 1 instead of 128 in this case - # and to suppress the message to stderr. - args = ["symbolic-ref", "-q", "HEAD"] - output = cls.run_command( - args, - extra_ok_returncodes=(1,), - show_stdout=False, - stdout_only=True, - cwd=location, - ) - ref = output.strip() - - if ref.startswith("refs/heads/"): - return ref[len("refs/heads/") :] - - return None - - @classmethod - def get_revision_sha(cls, dest: str, rev: str) -> Tuple[Optional[str], bool]: - """ - Return (sha_or_none, is_branch), where sha_or_none is a commit hash - if the revision names a remote branch or tag, otherwise None. - - Args: - dest: the repository directory. - rev: the revision name. - """ - # Pass rev to pre-filter the list. - output = cls.run_command( - ["show-ref", rev], - cwd=dest, - show_stdout=False, - stdout_only=True, - on_returncode="ignore", - ) - refs = {} - # NOTE: We do not use splitlines here since that would split on other - # unicode separators, which can be maliciously used to install a - # different revision. - for line in output.strip().split("\n"): - line = line.rstrip("\r") - if not line: - continue - try: - ref_sha, ref_name = line.split(" ", maxsplit=2) - except ValueError: - # Include the offending line to simplify troubleshooting if - # this error ever occurs. 
- raise ValueError(f"unexpected show-ref line: {line!r}") - - refs[ref_name] = ref_sha - - branch_ref = f"refs/remotes/origin/{rev}" - tag_ref = f"refs/tags/{rev}" - - sha = refs.get(branch_ref) - if sha is not None: - return (sha, True) - - sha = refs.get(tag_ref) - - return (sha, False) - - @classmethod - def _should_fetch(cls, dest: str, rev: str) -> bool: - """ - Return true if rev is a ref or is a commit that we don't have locally. - - Branches and tags are not considered in this method because they are - assumed to be always available locally (which is a normal outcome of - ``git clone`` and ``git fetch --tags``). - """ - if rev.startswith("refs/"): - # Always fetch remote refs. - return True - - if not looks_like_hash(rev): - # Git fetch would fail with abbreviated commits. - return False - - if cls.has_commit(dest, rev): - # Don't fetch if we have the commit locally. - return False - - return True - - @classmethod - def resolve_revision( - cls, dest: str, url: HiddenText, rev_options: RevOptions - ) -> RevOptions: - """ - Resolve a revision to a new RevOptions object with the SHA1 of the - branch, tag, or ref if found. - - Args: - rev_options: a RevOptions object. - """ - rev = rev_options.arg_rev - # The arg_rev property's implementation for Git ensures that the - # rev return value is always non-None. - assert rev is not None - - sha, is_branch = cls.get_revision_sha(dest, rev) - - if sha is not None: - rev_options = rev_options.make_new(sha) - rev_options.branch_name = rev if is_branch else None - - return rev_options - - # Do not show a warning for the common case of something that has - # the form of a Git commit hash. - if not looks_like_hash(rev): - logger.warning( - "Did not find branch or tag '%s', assuming revision or ref.", - rev, - ) - - if not cls._should_fetch(dest, rev): - return rev_options - - # fetch the requested revision - cls.run_command( - make_command("fetch", "-q", url, rev_options.to_args()), - cwd=dest, - ) - # Change the revision to the SHA of the ref we fetched - sha = cls.get_revision(dest, rev="FETCH_HEAD") - rev_options = rev_options.make_new(sha) - - return rev_options - - @classmethod - def is_commit_id_equal(cls, dest: str, name: Optional[str]) -> bool: - """ - Return whether the current commit hash equals the given name. - - Args: - dest: the repository directory. - name: a string name. - """ - if not name: - # Then avoid an unnecessary subprocess call. - return False - - return cls.get_revision(dest) == name - - def fetch_new( - self, dest: str, url: HiddenText, rev_options: RevOptions, verbosity: int - ) -> None: - rev_display = rev_options.to_display() - logger.info("Cloning %s%s to %s", url, rev_display, display_path(dest)) - if verbosity <= 0: - flags: Tuple[str, ...] = ("--quiet",) - elif verbosity == 1: - flags = () - else: - flags = ("--verbose", "--progress") - if self.get_git_version() >= (2, 17): - # Git added support for partial clone in 2.17 - # https://git-scm.com/docs/partial-clone - # Speeds up cloning by functioning without a complete copy of repository - self.run_command( - make_command( - "clone", - "--filter=blob:none", - *flags, - url, - dest, - ) - ) - else: - self.run_command(make_command("clone", *flags, url, dest)) - - if rev_options.rev: - # Then a specific revision was requested. 
- rev_options = self.resolve_revision(dest, url, rev_options) - branch_name = getattr(rev_options, "branch_name", None) - logger.debug("Rev options %s, branch_name %s", rev_options, branch_name) - if branch_name is None: - # Only do a checkout if the current commit id doesn't match - # the requested revision. - if not self.is_commit_id_equal(dest, rev_options.rev): - cmd_args = make_command( - "checkout", - "-q", - rev_options.to_args(), - ) - self.run_command(cmd_args, cwd=dest) - elif self.get_current_branch(dest) != branch_name: - # Then a specific branch was requested, and that branch - # is not yet checked out. - track_branch = f"origin/{branch_name}" - cmd_args = [ - "checkout", - "-b", - branch_name, - "--track", - track_branch, - ] - self.run_command(cmd_args, cwd=dest) - else: - sha = self.get_revision(dest) - rev_options = rev_options.make_new(sha) - - logger.info("Resolved %s to commit %s", url, rev_options.rev) - - #: repo may contain submodules - self.update_submodules(dest) - - def switch(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: - self.run_command( - make_command("config", "remote.origin.url", url), - cwd=dest, - ) - cmd_args = make_command("checkout", "-q", rev_options.to_args()) - self.run_command(cmd_args, cwd=dest) - - self.update_submodules(dest) - - def update(self, dest: str, url: HiddenText, rev_options: RevOptions) -> None: - # First fetch changes from the default remote - if self.get_git_version() >= (1, 9): - # fetch tags in addition to everything else - self.run_command(["fetch", "-q", "--tags"], cwd=dest) - else: - self.run_command(["fetch", "-q"], cwd=dest) - # Then reset to wanted revision (maybe even origin/master) - rev_options = self.resolve_revision(dest, url, rev_options) - cmd_args = make_command("reset", "--hard", "-q", rev_options.to_args()) - self.run_command(cmd_args, cwd=dest) - #: update submodules - self.update_submodules(dest) - - @classmethod - def get_remote_url(cls, location: str) -> str: - """ - Return URL of the first remote encountered. - - Raises RemoteNotFoundError if the repository does not have a remote - url configured. - """ - # We need to pass 1 for extra_ok_returncodes since the command - # exits with return code 1 if there are no matching lines. - stdout = cls.run_command( - ["config", "--get-regexp", r"remote\..*\.url"], - extra_ok_returncodes=(1,), - show_stdout=False, - stdout_only=True, - cwd=location, - ) - remotes = stdout.splitlines() - try: - found_remote = remotes[0] - except IndexError: - raise RemoteNotFoundError - - for remote in remotes: - if remote.startswith("remote.origin.url "): - found_remote = remote - break - url = found_remote.split(" ")[1] - return cls._git_remote_to_pip_url(url.strip()) - - @staticmethod - def _git_remote_to_pip_url(url: str) -> str: - """ - Convert a remote url from what git uses to what pip accepts. - - There are 3 legal forms **url** may take: - - 1. A fully qualified url: ssh://git@example.com/foo/bar.git - 2. A local project.git folder: /path/to/bare/repository.git - 3. SCP shorthand for form 1: git@example.com:foo/bar.git - - Form 1 is output as-is. Form 2 must be converted to URI and form 3 must - be converted to form 1. - - See the corresponding test test_git_remote_url_to_pip() for examples of - sample inputs/outputs. - """ - if re.match(r"\w+://", url): - # This is already valid. Pass it though as-is. - return url - if os.path.exists(url): - # A local bare remote (git clone --mirror). - # Needs a file:// prefix. 
- return pathlib.PurePath(url).as_uri() - scp_match = SCP_REGEX.match(url) - if scp_match: - # Add an ssh:// prefix and replace the ':' with a '/'. - return scp_match.expand(r"ssh://\1\2/\3") - # Otherwise, bail out. - raise RemoteNotValidError(url) - - @classmethod - def has_commit(cls, location: str, rev: str) -> bool: - """ - Check if rev is a commit that is available in the local repository. - """ - try: - cls.run_command( - ["rev-parse", "-q", "--verify", "sha^" + rev], - cwd=location, - log_failed_cmd=False, - ) - except InstallationError: - return False - else: - return True - - @classmethod - def get_revision(cls, location: str, rev: Optional[str] = None) -> str: - if rev is None: - rev = "HEAD" - current_rev = cls.run_command( - ["rev-parse", rev], - show_stdout=False, - stdout_only=True, - cwd=location, - ) - return current_rev.strip() - - @classmethod - def get_subdirectory(cls, location: str) -> Optional[str]: - """ - Return the path to Python project root, relative to the repo root. - Return None if the project root is in the repo root. - """ - # find the repo root - git_dir = cls.run_command( - ["rev-parse", "--git-dir"], - show_stdout=False, - stdout_only=True, - cwd=location, - ).strip() - if not os.path.isabs(git_dir): - git_dir = os.path.join(location, git_dir) - repo_root = os.path.abspath(os.path.join(git_dir, "..")) - return find_path_to_project_root_from_repo_root(location, repo_root) - - @classmethod - def get_url_rev_and_auth(cls, url: str) -> Tuple[str, Optional[str], AuthInfo]: - """ - Prefixes stub URLs like 'user@hostname:user/repo.git' with 'ssh://'. - That's required because although they use SSH they sometimes don't - work with a ssh:// scheme (e.g. GitHub). But we need a scheme for - parsing. Hence we remove it again afterwards and return it as a stub. 
- """ - # Works around an apparent Git bug - # (see https://article.gmane.org/gmane.comp.version-control.git/146500) - scheme, netloc, path, query, fragment = urlsplit(url) - if scheme.endswith("file"): - initial_slashes = path[: -len(path.lstrip("/"))] - newpath = initial_slashes + urllib.request.url2pathname(path).replace( - "\\", "/" - ).lstrip("/") - after_plus = scheme.find("+") + 1 - url = scheme[:after_plus] + urlunsplit( - (scheme[after_plus:], netloc, newpath, query, fragment), - ) - - if "://" not in url: - assert "file:" not in url - url = url.replace("git+", "git+ssh://") - url, rev, user_pass = super().get_url_rev_and_auth(url) - url = url.replace("ssh://", "") - else: - url, rev, user_pass = super().get_url_rev_and_auth(url) - - return url, rev, user_pass - - @classmethod - def update_submodules(cls, location: str) -> None: - if not os.path.exists(os.path.join(location, ".gitmodules")): - return - cls.run_command( - ["submodule", "update", "--init", "--recursive", "-q"], - cwd=location, - ) - - @classmethod - def get_repository_root(cls, location: str) -> Optional[str]: - loc = super().get_repository_root(location) - if loc: - return loc - try: - r = cls.run_command( - ["rev-parse", "--show-toplevel"], - cwd=location, - show_stdout=False, - stdout_only=True, - on_returncode="raise", - log_failed_cmd=False, - ) - except BadCommand: - logger.debug( - "could not determine if %s is under git control " - "because git is not available", - location, - ) - return None - except InstallationError: - return None - return os.path.normpath(r.rstrip("\r\n")) - - @staticmethod - def should_add_vcs_url_prefix(repo_url: str) -> bool: - """In either https or ssh form, requirements must be prefixed with git+.""" - return True - - -vcs.register(Git) diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/unistring.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/unistring.py deleted file mode 100644 index 2872985c14e205bc2d464e64c768a1bfc816ad40..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/unistring.py +++ /dev/null @@ -1,153 +0,0 @@ -""" - pygments.unistring - ~~~~~~~~~~~~~~~~~~ - - Strings of all Unicode characters of a certain category. - Used for matching in Unicode-aware languages. Run to regenerate. - - Inspired by chartypes_create.py from the MoinMoin project. - - :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. 
-""" - -Cc = '\x00-\x1f\x7f-\x9f' - -Cf = '\xad\u0600-\u0605\u061c\u06dd\u070f\u08e2\u180e\u200b-\u200f\u202a-\u202e\u2060-\u2064\u2066-\u206f\ufeff\ufff9-\ufffb\U000110bd\U000110cd\U0001bca0-\U0001bca3\U0001d173-\U0001d17a\U000e0001\U000e0020-\U000e007f' - -Cn = '\u0378-\u0379\u0380-\u0383\u038b\u038d\u03a2\u0530\u0557-\u0558\u058b-\u058c\u0590\u05c8-\u05cf\u05eb-\u05ee\u05f5-\u05ff\u061d\u070e\u074b-\u074c\u07b2-\u07bf\u07fb-\u07fc\u082e-\u082f\u083f\u085c-\u085d\u085f\u086b-\u089f\u08b5\u08be-\u08d2\u0984\u098d-\u098e\u0991-\u0992\u09a9\u09b1\u09b3-\u09b5\u09ba-\u09bb\u09c5-\u09c6\u09c9-\u09ca\u09cf-\u09d6\u09d8-\u09db\u09de\u09e4-\u09e5\u09ff-\u0a00\u0a04\u0a0b-\u0a0e\u0a11-\u0a12\u0a29\u0a31\u0a34\u0a37\u0a3a-\u0a3b\u0a3d\u0a43-\u0a46\u0a49-\u0a4a\u0a4e-\u0a50\u0a52-\u0a58\u0a5d\u0a5f-\u0a65\u0a77-\u0a80\u0a84\u0a8e\u0a92\u0aa9\u0ab1\u0ab4\u0aba-\u0abb\u0ac6\u0aca\u0ace-\u0acf\u0ad1-\u0adf\u0ae4-\u0ae5\u0af2-\u0af8\u0b00\u0b04\u0b0d-\u0b0e\u0b11-\u0b12\u0b29\u0b31\u0b34\u0b3a-\u0b3b\u0b45-\u0b46\u0b49-\u0b4a\u0b4e-\u0b55\u0b58-\u0b5b\u0b5e\u0b64-\u0b65\u0b78-\u0b81\u0b84\u0b8b-\u0b8d\u0b91\u0b96-\u0b98\u0b9b\u0b9d\u0ba0-\u0ba2\u0ba5-\u0ba7\u0bab-\u0bad\u0bba-\u0bbd\u0bc3-\u0bc5\u0bc9\u0bce-\u0bcf\u0bd1-\u0bd6\u0bd8-\u0be5\u0bfb-\u0bff\u0c0d\u0c11\u0c29\u0c3a-\u0c3c\u0c45\u0c49\u0c4e-\u0c54\u0c57\u0c5b-\u0c5f\u0c64-\u0c65\u0c70-\u0c77\u0c8d\u0c91\u0ca9\u0cb4\u0cba-\u0cbb\u0cc5\u0cc9\u0cce-\u0cd4\u0cd7-\u0cdd\u0cdf\u0ce4-\u0ce5\u0cf0\u0cf3-\u0cff\u0d04\u0d0d\u0d11\u0d45\u0d49\u0d50-\u0d53\u0d64-\u0d65\u0d80-\u0d81\u0d84\u0d97-\u0d99\u0db2\u0dbc\u0dbe-\u0dbf\u0dc7-\u0dc9\u0dcb-\u0dce\u0dd5\u0dd7\u0de0-\u0de5\u0df0-\u0df1\u0df5-\u0e00\u0e3b-\u0e3e\u0e5c-\u0e80\u0e83\u0e85-\u0e86\u0e89\u0e8b-\u0e8c\u0e8e-\u0e93\u0e98\u0ea0\u0ea4\u0ea6\u0ea8-\u0ea9\u0eac\u0eba\u0ebe-\u0ebf\u0ec5\u0ec7\u0ece-\u0ecf\u0eda-\u0edb\u0ee0-\u0eff\u0f48\u0f6d-\u0f70\u0f98\u0fbd\u0fcd\u0fdb-\u0fff\u10c6\u10c8-\u10cc\u10ce-\u10cf\u1249\u124e-\u124f\u1257\u1259\u125e-\u125f\u1289\u128e-\u128f\u12b1\u12b6-\u12b7\u12bf\u12c1\u12c6-\u12c7\u12d7\u1311\u1316-\u1317\u135b-\u135c\u137d-\u137f\u139a-\u139f\u13f6-\u13f7\u13fe-\u13ff\u169d-\u169f\u16f9-\u16ff\u170d\u1715-\u171f\u1737-\u173f\u1754-\u175f\u176d\u1771\u1774-\u177f\u17de-\u17df\u17ea-\u17ef\u17fa-\u17ff\u180f\u181a-\u181f\u1879-\u187f\u18ab-\u18af\u18f6-\u18ff\u191f\u192c-\u192f\u193c-\u193f\u1941-\u1943\u196e-\u196f\u1975-\u197f\u19ac-\u19af\u19ca-\u19cf\u19db-\u19dd\u1a1c-\u1a1d\u1a5f\u1a7d-\u1a7e\u1a8a-\u1a8f\u1a9a-\u1a9f\u1aae-\u1aaf\u1abf-\u1aff\u1b4c-\u1b4f\u1b7d-\u1b7f\u1bf4-\u1bfb\u1c38-\u1c3a\u1c4a-\u1c4c\u1c89-\u1c8f\u1cbb-\u1cbc\u1cc8-\u1ccf\u1cfa-\u1cff\u1dfa\u1f16-\u1f17\u1f1e-\u1f1f\u1f46-\u1f47\u1f4e-\u1f4f\u1f58\u1f5a\u1f5c\u1f5e\u1f7e-\u1f7f\u1fb5\u1fc5\u1fd4-\u1fd5\u1fdc\u1ff0-\u1ff1\u1ff5\u1fff\u2065\u2072-\u2073\u208f\u209d-\u209f\u20c0-\u20cf\u20f1-\u20ff\u218c-\u218f\u2427-\u243f\u244b-\u245f\u2b74-\u2b75\u2b96-\u2b97\u2bc9\u2bff\u2c2f\u2c5f\u2cf4-\u2cf8\u2d26\u2d28-\u2d2c\u2d2e-\u2d2f\u2d68-\u2d6e\u2d71-\u2d7e\u2d97-\u2d9f\u2da7\u2daf\u2db7\u2dbf\u2dc7\u2dcf\u2dd7\u2ddf\u2e4f-\u2e7f\u2e9a\u2ef4-\u2eff\u2fd6-\u2fef\u2ffc-\u2fff\u3040\u3097-\u3098\u3100-\u3104\u3130\u318f\u31bb-\u31bf\u31e4-\u31ef\u321f\u32ff\u4db6-\u4dbf\u9ff0-\u9fff\ua48d-\ua48f\ua4c7-\ua4cf\ua62c-\ua63f\ua6f8-\ua6ff\ua7ba-\ua7f6\ua82c-\ua82f\ua83a-\ua83f\ua878-\ua87f\ua8c6-\ua8cd\ua8da-\ua8df\ua954-\ua95e\ua97d-\ua97f\ua9ce\ua9da-\ua9dd\ua9ff\uaa37-\uaa3f\uaa4e-\uaa4f\uaa5a-\uaa5b\uaac3-\uaada\uaaf7-\uab00\uab07-\uab08\uab0f-\uab10\uab17-\uab1f\uab27\uab2f\uab66-\uab6f\uabee-
\uabef\uabfa-\uabff\ud7a4-\ud7af\ud7c7-\ud7ca\ud7fc-\ud7ff\ufa6e-\ufa6f\ufada-\ufaff\ufb07-\ufb12\ufb18-\ufb1c\ufb37\ufb3d\ufb3f\ufb42\ufb45\ufbc2-\ufbd2\ufd40-\ufd4f\ufd90-\ufd91\ufdc8-\ufdef\ufdfe-\ufdff\ufe1a-\ufe1f\ufe53\ufe67\ufe6c-\ufe6f\ufe75\ufefd-\ufefe\uff00\uffbf-\uffc1\uffc8-\uffc9\uffd0-\uffd1\uffd8-\uffd9\uffdd-\uffdf\uffe7\uffef-\ufff8\ufffe-\uffff\U0001000c\U00010027\U0001003b\U0001003e\U0001004e-\U0001004f\U0001005e-\U0001007f\U000100fb-\U000100ff\U00010103-\U00010106\U00010134-\U00010136\U0001018f\U0001019c-\U0001019f\U000101a1-\U000101cf\U000101fe-\U0001027f\U0001029d-\U0001029f\U000102d1-\U000102df\U000102fc-\U000102ff\U00010324-\U0001032c\U0001034b-\U0001034f\U0001037b-\U0001037f\U0001039e\U000103c4-\U000103c7\U000103d6-\U000103ff\U0001049e-\U0001049f\U000104aa-\U000104af\U000104d4-\U000104d7\U000104fc-\U000104ff\U00010528-\U0001052f\U00010564-\U0001056e\U00010570-\U000105ff\U00010737-\U0001073f\U00010756-\U0001075f\U00010768-\U000107ff\U00010806-\U00010807\U00010809\U00010836\U00010839-\U0001083b\U0001083d-\U0001083e\U00010856\U0001089f-\U000108a6\U000108b0-\U000108df\U000108f3\U000108f6-\U000108fa\U0001091c-\U0001091e\U0001093a-\U0001093e\U00010940-\U0001097f\U000109b8-\U000109bb\U000109d0-\U000109d1\U00010a04\U00010a07-\U00010a0b\U00010a14\U00010a18\U00010a36-\U00010a37\U00010a3b-\U00010a3e\U00010a49-\U00010a4f\U00010a59-\U00010a5f\U00010aa0-\U00010abf\U00010ae7-\U00010aea\U00010af7-\U00010aff\U00010b36-\U00010b38\U00010b56-\U00010b57\U00010b73-\U00010b77\U00010b92-\U00010b98\U00010b9d-\U00010ba8\U00010bb0-\U00010bff\U00010c49-\U00010c7f\U00010cb3-\U00010cbf\U00010cf3-\U00010cf9\U00010d28-\U00010d2f\U00010d3a-\U00010e5f\U00010e7f-\U00010eff\U00010f28-\U00010f2f\U00010f5a-\U00010fff\U0001104e-\U00011051\U00011070-\U0001107e\U000110c2-\U000110cc\U000110ce-\U000110cf\U000110e9-\U000110ef\U000110fa-\U000110ff\U00011135\U00011147-\U0001114f\U00011177-\U0001117f\U000111ce-\U000111cf\U000111e0\U000111f5-\U000111ff\U00011212\U0001123f-\U0001127f\U00011287\U00011289\U0001128e\U0001129e\U000112aa-\U000112af\U000112eb-\U000112ef\U000112fa-\U000112ff\U00011304\U0001130d-\U0001130e\U00011311-\U00011312\U00011329\U00011331\U00011334\U0001133a\U00011345-\U00011346\U00011349-\U0001134a\U0001134e-\U0001134f\U00011351-\U00011356\U00011358-\U0001135c\U00011364-\U00011365\U0001136d-\U0001136f\U00011375-\U000113ff\U0001145a\U0001145c\U0001145f-\U0001147f\U000114c8-\U000114cf\U000114da-\U0001157f\U000115b6-\U000115b7\U000115de-\U000115ff\U00011645-\U0001164f\U0001165a-\U0001165f\U0001166d-\U0001167f\U000116b8-\U000116bf\U000116ca-\U000116ff\U0001171b-\U0001171c\U0001172c-\U0001172f\U00011740-\U000117ff\U0001183c-\U0001189f\U000118f3-\U000118fe\U00011900-\U000119ff\U00011a48-\U00011a4f\U00011a84-\U00011a85\U00011aa3-\U00011abf\U00011af9-\U00011bff\U00011c09\U00011c37\U00011c46-\U00011c4f\U00011c6d-\U00011c6f\U00011c90-\U00011c91\U00011ca8\U00011cb7-\U00011cff\U00011d07\U00011d0a\U00011d37-\U00011d39\U00011d3b\U00011d3e\U00011d48-\U00011d4f\U00011d5a-\U00011d5f\U00011d66\U00011d69\U00011d8f\U00011d92\U00011d99-\U00011d9f\U00011daa-\U00011edf\U00011ef9-\U00011fff\U0001239a-\U000123ff\U0001246f\U00012475-\U0001247f\U00012544-\U00012fff\U0001342f-\U000143ff\U00014647-\U000167ff\U00016a39-\U00016a3f\U00016a5f\U00016a6a-\U00016a6d\U00016a70-\U00016acf\U00016aee-\U00016aef\U00016af6-\U00016aff\U00016b46-\U00016b4f\U00016b5a\U00016b62\U00016b78-\U00016b7c\U00016b90-\U00016e3f\U00016e9b-\U00016eff\U00016f45-\U00016f4f\U00016f7f-\U00016f8e\U00016fa0-\U00016fdf\U00016fe2-\U00016fff\U000187f2-\U000187
ff\U00018af3-\U0001afff\U0001b11f-\U0001b16f\U0001b2fc-\U0001bbff\U0001bc6b-\U0001bc6f\U0001bc7d-\U0001bc7f\U0001bc89-\U0001bc8f\U0001bc9a-\U0001bc9b\U0001bca4-\U0001cfff\U0001d0f6-\U0001d0ff\U0001d127-\U0001d128\U0001d1e9-\U0001d1ff\U0001d246-\U0001d2df\U0001d2f4-\U0001d2ff\U0001d357-\U0001d35f\U0001d379-\U0001d3ff\U0001d455\U0001d49d\U0001d4a0-\U0001d4a1\U0001d4a3-\U0001d4a4\U0001d4a7-\U0001d4a8\U0001d4ad\U0001d4ba\U0001d4bc\U0001d4c4\U0001d506\U0001d50b-\U0001d50c\U0001d515\U0001d51d\U0001d53a\U0001d53f\U0001d545\U0001d547-\U0001d549\U0001d551\U0001d6a6-\U0001d6a7\U0001d7cc-\U0001d7cd\U0001da8c-\U0001da9a\U0001daa0\U0001dab0-\U0001dfff\U0001e007\U0001e019-\U0001e01a\U0001e022\U0001e025\U0001e02b-\U0001e7ff\U0001e8c5-\U0001e8c6\U0001e8d7-\U0001e8ff\U0001e94b-\U0001e94f\U0001e95a-\U0001e95d\U0001e960-\U0001ec70\U0001ecb5-\U0001edff\U0001ee04\U0001ee20\U0001ee23\U0001ee25-\U0001ee26\U0001ee28\U0001ee33\U0001ee38\U0001ee3a\U0001ee3c-\U0001ee41\U0001ee43-\U0001ee46\U0001ee48\U0001ee4a\U0001ee4c\U0001ee50\U0001ee53\U0001ee55-\U0001ee56\U0001ee58\U0001ee5a\U0001ee5c\U0001ee5e\U0001ee60\U0001ee63\U0001ee65-\U0001ee66\U0001ee6b\U0001ee73\U0001ee78\U0001ee7d\U0001ee7f\U0001ee8a\U0001ee9c-\U0001eea0\U0001eea4\U0001eeaa\U0001eebc-\U0001eeef\U0001eef2-\U0001efff\U0001f02c-\U0001f02f\U0001f094-\U0001f09f\U0001f0af-\U0001f0b0\U0001f0c0\U0001f0d0\U0001f0f6-\U0001f0ff\U0001f10d-\U0001f10f\U0001f16c-\U0001f16f\U0001f1ad-\U0001f1e5\U0001f203-\U0001f20f\U0001f23c-\U0001f23f\U0001f249-\U0001f24f\U0001f252-\U0001f25f\U0001f266-\U0001f2ff\U0001f6d5-\U0001f6df\U0001f6ed-\U0001f6ef\U0001f6fa-\U0001f6ff\U0001f774-\U0001f77f\U0001f7d9-\U0001f7ff\U0001f80c-\U0001f80f\U0001f848-\U0001f84f\U0001f85a-\U0001f85f\U0001f888-\U0001f88f\U0001f8ae-\U0001f8ff\U0001f90c-\U0001f90f\U0001f93f\U0001f971-\U0001f972\U0001f977-\U0001f979\U0001f97b\U0001f9a3-\U0001f9af\U0001f9ba-\U0001f9bf\U0001f9c3-\U0001f9cf\U0001fa00-\U0001fa5f\U0001fa6e-\U0001ffff\U0002a6d7-\U0002a6ff\U0002b735-\U0002b73f\U0002b81e-\U0002b81f\U0002cea2-\U0002ceaf\U0002ebe1-\U0002f7ff\U0002fa1e-\U000e0000\U000e0002-\U000e001f\U000e0080-\U000e00ff\U000e01f0-\U000effff\U000ffffe-\U000fffff\U0010fffe-\U0010ffff' - -Co = '\ue000-\uf8ff\U000f0000-\U000ffffd\U00100000-\U0010fffd' - -Cs = '\ud800-\udbff\\\udc00\udc01-\udfff' - -Ll = 
'a-z\xb5\xdf-\xf6\xf8-\xff\u0101\u0103\u0105\u0107\u0109\u010b\u010d\u010f\u0111\u0113\u0115\u0117\u0119\u011b\u011d\u011f\u0121\u0123\u0125\u0127\u0129\u012b\u012d\u012f\u0131\u0133\u0135\u0137-\u0138\u013a\u013c\u013e\u0140\u0142\u0144\u0146\u0148-\u0149\u014b\u014d\u014f\u0151\u0153\u0155\u0157\u0159\u015b\u015d\u015f\u0161\u0163\u0165\u0167\u0169\u016b\u016d\u016f\u0171\u0173\u0175\u0177\u017a\u017c\u017e-\u0180\u0183\u0185\u0188\u018c-\u018d\u0192\u0195\u0199-\u019b\u019e\u01a1\u01a3\u01a5\u01a8\u01aa-\u01ab\u01ad\u01b0\u01b4\u01b6\u01b9-\u01ba\u01bd-\u01bf\u01c6\u01c9\u01cc\u01ce\u01d0\u01d2\u01d4\u01d6\u01d8\u01da\u01dc-\u01dd\u01df\u01e1\u01e3\u01e5\u01e7\u01e9\u01eb\u01ed\u01ef-\u01f0\u01f3\u01f5\u01f9\u01fb\u01fd\u01ff\u0201\u0203\u0205\u0207\u0209\u020b\u020d\u020f\u0211\u0213\u0215\u0217\u0219\u021b\u021d\u021f\u0221\u0223\u0225\u0227\u0229\u022b\u022d\u022f\u0231\u0233-\u0239\u023c\u023f-\u0240\u0242\u0247\u0249\u024b\u024d\u024f-\u0293\u0295-\u02af\u0371\u0373\u0377\u037b-\u037d\u0390\u03ac-\u03ce\u03d0-\u03d1\u03d5-\u03d7\u03d9\u03db\u03dd\u03df\u03e1\u03e3\u03e5\u03e7\u03e9\u03eb\u03ed\u03ef-\u03f3\u03f5\u03f8\u03fb-\u03fc\u0430-\u045f\u0461\u0463\u0465\u0467\u0469\u046b\u046d\u046f\u0471\u0473\u0475\u0477\u0479\u047b\u047d\u047f\u0481\u048b\u048d\u048f\u0491\u0493\u0495\u0497\u0499\u049b\u049d\u049f\u04a1\u04a3\u04a5\u04a7\u04a9\u04ab\u04ad\u04af\u04b1\u04b3\u04b5\u04b7\u04b9\u04bb\u04bd\u04bf\u04c2\u04c4\u04c6\u04c8\u04ca\u04cc\u04ce-\u04cf\u04d1\u04d3\u04d5\u04d7\u04d9\u04db\u04dd\u04df\u04e1\u04e3\u04e5\u04e7\u04e9\u04eb\u04ed\u04ef\u04f1\u04f3\u04f5\u04f7\u04f9\u04fb\u04fd\u04ff\u0501\u0503\u0505\u0507\u0509\u050b\u050d\u050f\u0511\u0513\u0515\u0517\u0519\u051b\u051d\u051f\u0521\u0523\u0525\u0527\u0529\u052b\u052d\u052f\u0560-\u0588\u10d0-\u10fa\u10fd-\u10ff\u13f8-\u13fd\u1c80-\u1c88\u1d00-\u1d2b\u1d6b-\u1d77\u1d79-\u1d9a\u1e01\u1e03\u1e05\u1e07\u1e09\u1e0b\u1e0d\u1e0f\u1e11\u1e13\u1e15\u1e17\u1e19\u1e1b\u1e1d\u1e1f\u1e21\u1e23\u1e25\u1e27\u1e29\u1e2b\u1e2d\u1e2f\u1e31\u1e33\u1e35\u1e37\u1e39\u1e3b\u1e3d\u1e3f\u1e41\u1e43\u1e45\u1e47\u1e49\u1e4b\u1e4d\u1e4f\u1e51\u1e53\u1e55\u1e57\u1e59\u1e5b\u1e5d\u1e5f\u1e61\u1e63\u1e65\u1e67\u1e69\u1e6b\u1e6d\u1e6f\u1e71\u1e73\u1e75\u1e77\u1e79\u1e7b\u1e7d\u1e7f\u1e81\u1e83\u1e85\u1e87\u1e89\u1e8b\u1e8d\u1e8f\u1e91\u1e93\u1e95-\u1e9d\u1e9f\u1ea1\u1ea3\u1ea5\u1ea7\u1ea9\u1eab\u1ead\u1eaf\u1eb1\u1eb3\u1eb5\u1eb7\u1eb9\u1ebb\u1ebd\u1ebf\u1ec1\u1ec3\u1ec5\u1ec7\u1ec9\u1ecb\u1ecd\u1ecf\u1ed1\u1ed3\u1ed5\u1ed7\u1ed9\u1edb\u1edd\u1edf\u1ee1\u1ee3\u1ee5\u1ee7\u1ee9\u1eeb\u1eed\u1eef\u1ef1\u1ef3\u1ef5\u1ef7\u1ef9\u1efb\u1efd\u1eff-\u1f07\u1f10-\u1f15\u1f20-\u1f27\u1f30-\u1f37\u1f40-\u1f45\u1f50-\u1f57\u1f60-\u1f67\u1f70-\u1f7d\u1f80-\u1f87\u1f90-\u1f97\u1fa0-\u1fa7\u1fb0-\u1fb4\u1fb6-\u1fb7\u1fbe\u1fc2-\u1fc4\u1fc6-\u1fc7\u1fd0-\u1fd3\u1fd6-\u1fd7\u1fe0-\u1fe7\u1ff2-\u1ff4\u1ff6-\u1ff7\u210a\u210e-\u210f\u2113\u212f\u2134\u2139\u213c-\u213d\u2146-\u2149\u214e\u2184\u2c30-\u2c5e\u2c61\u2c65-\u2c66\u2c68\u2c6a\u2c6c\u2c71\u2c73-\u2c74\u2c76-\u2c7b\u2c81\u2c83\u2c85\u2c87\u2c89\u2c8b\u2c8d\u2c8f\u2c91\u2c93\u2c95\u2c97\u2c99\u2c9b\u2c9d\u2c9f\u2ca1\u2ca3\u2ca5\u2ca7\u2ca9\u2cab\u2cad\u2caf\u2cb1\u2cb3\u2cb5\u2cb7\u2cb9\u2cbb\u2cbd\u2cbf\u2cc1\u2cc3\u2cc5\u2cc7\u2cc9\u2ccb\u2ccd\u2ccf\u2cd1\u2cd3\u2cd5\u2cd7\u2cd9\u2cdb\u2cdd\u2cdf\u2ce1\u2ce3-\u2ce4\u2cec\u2cee\u2cf3\u2d00-\u2d25\u2d27\u2d2d\ua641\ua643\ua645\ua647\ua649\ua64b\ua64d\ua64f\ua651\ua653\ua655\ua657\ua659\ua65b\ua65d\ua65f\ua661\ua663\ua665\ua667\ua669\ua66b\ua66d\ua681\ua683\ua685\
ua687\ua689\ua68b\ua68d\ua68f\ua691\ua693\ua695\ua697\ua699\ua69b\ua723\ua725\ua727\ua729\ua72b\ua72d\ua72f-\ua731\ua733\ua735\ua737\ua739\ua73b\ua73d\ua73f\ua741\ua743\ua745\ua747\ua749\ua74b\ua74d\ua74f\ua751\ua753\ua755\ua757\ua759\ua75b\ua75d\ua75f\ua761\ua763\ua765\ua767\ua769\ua76b\ua76d\ua76f\ua771-\ua778\ua77a\ua77c\ua77f\ua781\ua783\ua785\ua787\ua78c\ua78e\ua791\ua793-\ua795\ua797\ua799\ua79b\ua79d\ua79f\ua7a1\ua7a3\ua7a5\ua7a7\ua7a9\ua7af\ua7b5\ua7b7\ua7b9\ua7fa\uab30-\uab5a\uab60-\uab65\uab70-\uabbf\ufb00-\ufb06\ufb13-\ufb17\uff41-\uff5a\U00010428-\U0001044f\U000104d8-\U000104fb\U00010cc0-\U00010cf2\U000118c0-\U000118df\U00016e60-\U00016e7f\U0001d41a-\U0001d433\U0001d44e-\U0001d454\U0001d456-\U0001d467\U0001d482-\U0001d49b\U0001d4b6-\U0001d4b9\U0001d4bb\U0001d4bd-\U0001d4c3\U0001d4c5-\U0001d4cf\U0001d4ea-\U0001d503\U0001d51e-\U0001d537\U0001d552-\U0001d56b\U0001d586-\U0001d59f\U0001d5ba-\U0001d5d3\U0001d5ee-\U0001d607\U0001d622-\U0001d63b\U0001d656-\U0001d66f\U0001d68a-\U0001d6a5\U0001d6c2-\U0001d6da\U0001d6dc-\U0001d6e1\U0001d6fc-\U0001d714\U0001d716-\U0001d71b\U0001d736-\U0001d74e\U0001d750-\U0001d755\U0001d770-\U0001d788\U0001d78a-\U0001d78f\U0001d7aa-\U0001d7c2\U0001d7c4-\U0001d7c9\U0001d7cb\U0001e922-\U0001e943' - -Lm = '\u02b0-\u02c1\u02c6-\u02d1\u02e0-\u02e4\u02ec\u02ee\u0374\u037a\u0559\u0640\u06e5-\u06e6\u07f4-\u07f5\u07fa\u081a\u0824\u0828\u0971\u0e46\u0ec6\u10fc\u17d7\u1843\u1aa7\u1c78-\u1c7d\u1d2c-\u1d6a\u1d78\u1d9b-\u1dbf\u2071\u207f\u2090-\u209c\u2c7c-\u2c7d\u2d6f\u2e2f\u3005\u3031-\u3035\u303b\u309d-\u309e\u30fc-\u30fe\ua015\ua4f8-\ua4fd\ua60c\ua67f\ua69c-\ua69d\ua717-\ua71f\ua770\ua788\ua7f8-\ua7f9\ua9cf\ua9e6\uaa70\uaadd\uaaf3-\uaaf4\uab5c-\uab5f\uff70\uff9e-\uff9f\U00016b40-\U00016b43\U00016f93-\U00016f9f\U00016fe0-\U00016fe1' - -Lo = 
'\xaa\xba\u01bb\u01c0-\u01c3\u0294\u05d0-\u05ea\u05ef-\u05f2\u0620-\u063f\u0641-\u064a\u066e-\u066f\u0671-\u06d3\u06d5\u06ee-\u06ef\u06fa-\u06fc\u06ff\u0710\u0712-\u072f\u074d-\u07a5\u07b1\u07ca-\u07ea\u0800-\u0815\u0840-\u0858\u0860-\u086a\u08a0-\u08b4\u08b6-\u08bd\u0904-\u0939\u093d\u0950\u0958-\u0961\u0972-\u0980\u0985-\u098c\u098f-\u0990\u0993-\u09a8\u09aa-\u09b0\u09b2\u09b6-\u09b9\u09bd\u09ce\u09dc-\u09dd\u09df-\u09e1\u09f0-\u09f1\u09fc\u0a05-\u0a0a\u0a0f-\u0a10\u0a13-\u0a28\u0a2a-\u0a30\u0a32-\u0a33\u0a35-\u0a36\u0a38-\u0a39\u0a59-\u0a5c\u0a5e\u0a72-\u0a74\u0a85-\u0a8d\u0a8f-\u0a91\u0a93-\u0aa8\u0aaa-\u0ab0\u0ab2-\u0ab3\u0ab5-\u0ab9\u0abd\u0ad0\u0ae0-\u0ae1\u0af9\u0b05-\u0b0c\u0b0f-\u0b10\u0b13-\u0b28\u0b2a-\u0b30\u0b32-\u0b33\u0b35-\u0b39\u0b3d\u0b5c-\u0b5d\u0b5f-\u0b61\u0b71\u0b83\u0b85-\u0b8a\u0b8e-\u0b90\u0b92-\u0b95\u0b99-\u0b9a\u0b9c\u0b9e-\u0b9f\u0ba3-\u0ba4\u0ba8-\u0baa\u0bae-\u0bb9\u0bd0\u0c05-\u0c0c\u0c0e-\u0c10\u0c12-\u0c28\u0c2a-\u0c39\u0c3d\u0c58-\u0c5a\u0c60-\u0c61\u0c80\u0c85-\u0c8c\u0c8e-\u0c90\u0c92-\u0ca8\u0caa-\u0cb3\u0cb5-\u0cb9\u0cbd\u0cde\u0ce0-\u0ce1\u0cf1-\u0cf2\u0d05-\u0d0c\u0d0e-\u0d10\u0d12-\u0d3a\u0d3d\u0d4e\u0d54-\u0d56\u0d5f-\u0d61\u0d7a-\u0d7f\u0d85-\u0d96\u0d9a-\u0db1\u0db3-\u0dbb\u0dbd\u0dc0-\u0dc6\u0e01-\u0e30\u0e32-\u0e33\u0e40-\u0e45\u0e81-\u0e82\u0e84\u0e87-\u0e88\u0e8a\u0e8d\u0e94-\u0e97\u0e99-\u0e9f\u0ea1-\u0ea3\u0ea5\u0ea7\u0eaa-\u0eab\u0ead-\u0eb0\u0eb2-\u0eb3\u0ebd\u0ec0-\u0ec4\u0edc-\u0edf\u0f00\u0f40-\u0f47\u0f49-\u0f6c\u0f88-\u0f8c\u1000-\u102a\u103f\u1050-\u1055\u105a-\u105d\u1061\u1065-\u1066\u106e-\u1070\u1075-\u1081\u108e\u1100-\u1248\u124a-\u124d\u1250-\u1256\u1258\u125a-\u125d\u1260-\u1288\u128a-\u128d\u1290-\u12b0\u12b2-\u12b5\u12b8-\u12be\u12c0\u12c2-\u12c5\u12c8-\u12d6\u12d8-\u1310\u1312-\u1315\u1318-\u135a\u1380-\u138f\u1401-\u166c\u166f-\u167f\u1681-\u169a\u16a0-\u16ea\u16f1-\u16f8\u1700-\u170c\u170e-\u1711\u1720-\u1731\u1740-\u1751\u1760-\u176c\u176e-\u1770\u1780-\u17b3\u17dc\u1820-\u1842\u1844-\u1878\u1880-\u1884\u1887-\u18a8\u18aa\u18b0-\u18f5\u1900-\u191e\u1950-\u196d\u1970-\u1974\u1980-\u19ab\u19b0-\u19c9\u1a00-\u1a16\u1a20-\u1a54\u1b05-\u1b33\u1b45-\u1b4b\u1b83-\u1ba0\u1bae-\u1baf\u1bba-\u1be5\u1c00-\u1c23\u1c4d-\u1c4f\u1c5a-\u1c77\u1ce9-\u1cec\u1cee-\u1cf1\u1cf5-\u1cf6\u2135-\u2138\u2d30-\u2d67\u2d80-\u2d96\u2da0-\u2da6\u2da8-\u2dae\u2db0-\u2db6\u2db8-\u2dbe\u2dc0-\u2dc6\u2dc8-\u2dce\u2dd0-\u2dd6\u2dd8-\u2dde\u3006\u303c\u3041-\u3096\u309f\u30a1-\u30fa\u30ff\u3105-\u312f\u3131-\u318e\u31a0-\u31ba\u31f0-\u31ff\u3400-\u4db5\u4e00-\u9fef\ua000-\ua014\ua016-\ua48c\ua4d0-\ua4f7\ua500-\ua60b\ua610-\ua61f\ua62a-\ua62b\ua66e\ua6a0-\ua6e5\ua78f\ua7f7\ua7fb-\ua801\ua803-\ua805\ua807-\ua80a\ua80c-\ua822\ua840-\ua873\ua882-\ua8b3\ua8f2-\ua8f7\ua8fb\ua8fd-\ua8fe\ua90a-\ua925\ua930-\ua946\ua960-\ua97c\ua984-\ua9b2\ua9e0-\ua9e4\ua9e7-\ua9ef\ua9fa-\ua9fe\uaa00-\uaa28\uaa40-\uaa42\uaa44-\uaa4b\uaa60-\uaa6f\uaa71-\uaa76\uaa7a\uaa7e-\uaaaf\uaab1\uaab5-\uaab6\uaab9-\uaabd\uaac0\uaac2\uaadb-\uaadc\uaae0-\uaaea\uaaf2\uab01-\uab06\uab09-\uab0e\uab11-\uab16\uab20-\uab26\uab28-\uab2e\uabc0-\uabe2\uac00-\ud7a3\ud7b0-\ud7c6\ud7cb-\ud7fb\uf900-\ufa6d\ufa70-\ufad9\ufb1d\ufb1f-\ufb28\ufb2a-\ufb36\ufb38-\ufb3c\ufb3e\ufb40-\ufb41\ufb43-\ufb44\ufb46-\ufbb1\ufbd3-\ufd3d\ufd50-\ufd8f\ufd92-\ufdc7\ufdf0-\ufdfb\ufe70-\ufe74\ufe76-\ufefc\uff66-\uff6f\uff71-\uff9d\uffa0-\uffbe\uffc2-\uffc7\uffca-\uffcf\uffd2-\uffd7\uffda-\uffdc\U00010000-\U0001000b\U0001000d-\U00010026\U00010028-\U0001003a\U0001003c-\U0001003d\U0001003f-\U0001004d\U00010050-\U0001005d\U00010080
-\U000100fa\U00010280-\U0001029c\U000102a0-\U000102d0\U00010300-\U0001031f\U0001032d-\U00010340\U00010342-\U00010349\U00010350-\U00010375\U00010380-\U0001039d\U000103a0-\U000103c3\U000103c8-\U000103cf\U00010450-\U0001049d\U00010500-\U00010527\U00010530-\U00010563\U00010600-\U00010736\U00010740-\U00010755\U00010760-\U00010767\U00010800-\U00010805\U00010808\U0001080a-\U00010835\U00010837-\U00010838\U0001083c\U0001083f-\U00010855\U00010860-\U00010876\U00010880-\U0001089e\U000108e0-\U000108f2\U000108f4-\U000108f5\U00010900-\U00010915\U00010920-\U00010939\U00010980-\U000109b7\U000109be-\U000109bf\U00010a00\U00010a10-\U00010a13\U00010a15-\U00010a17\U00010a19-\U00010a35\U00010a60-\U00010a7c\U00010a80-\U00010a9c\U00010ac0-\U00010ac7\U00010ac9-\U00010ae4\U00010b00-\U00010b35\U00010b40-\U00010b55\U00010b60-\U00010b72\U00010b80-\U00010b91\U00010c00-\U00010c48\U00010d00-\U00010d23\U00010f00-\U00010f1c\U00010f27\U00010f30-\U00010f45\U00011003-\U00011037\U00011083-\U000110af\U000110d0-\U000110e8\U00011103-\U00011126\U00011144\U00011150-\U00011172\U00011176\U00011183-\U000111b2\U000111c1-\U000111c4\U000111da\U000111dc\U00011200-\U00011211\U00011213-\U0001122b\U00011280-\U00011286\U00011288\U0001128a-\U0001128d\U0001128f-\U0001129d\U0001129f-\U000112a8\U000112b0-\U000112de\U00011305-\U0001130c\U0001130f-\U00011310\U00011313-\U00011328\U0001132a-\U00011330\U00011332-\U00011333\U00011335-\U00011339\U0001133d\U00011350\U0001135d-\U00011361\U00011400-\U00011434\U00011447-\U0001144a\U00011480-\U000114af\U000114c4-\U000114c5\U000114c7\U00011580-\U000115ae\U000115d8-\U000115db\U00011600-\U0001162f\U00011644\U00011680-\U000116aa\U00011700-\U0001171a\U00011800-\U0001182b\U000118ff\U00011a00\U00011a0b-\U00011a32\U00011a3a\U00011a50\U00011a5c-\U00011a83\U00011a86-\U00011a89\U00011a9d\U00011ac0-\U00011af8\U00011c00-\U00011c08\U00011c0a-\U00011c2e\U00011c40\U00011c72-\U00011c8f\U00011d00-\U00011d06\U00011d08-\U00011d09\U00011d0b-\U00011d30\U00011d46\U00011d60-\U00011d65\U00011d67-\U00011d68\U00011d6a-\U00011d89\U00011d98\U00011ee0-\U00011ef2\U00012000-\U00012399\U00012480-\U00012543\U00013000-\U0001342e\U00014400-\U00014646\U00016800-\U00016a38\U00016a40-\U00016a5e\U00016ad0-\U00016aed\U00016b00-\U00016b2f\U00016b63-\U00016b77\U00016b7d-\U00016b8f\U00016f00-\U00016f44\U00016f50\U00017000-\U000187f1\U00018800-\U00018af2\U0001b000-\U0001b11e\U0001b170-\U0001b2fb\U0001bc00-\U0001bc6a\U0001bc70-\U0001bc7c\U0001bc80-\U0001bc88\U0001bc90-\U0001bc99\U0001e800-\U0001e8c4\U0001ee00-\U0001ee03\U0001ee05-\U0001ee1f\U0001ee21-\U0001ee22\U0001ee24\U0001ee27\U0001ee29-\U0001ee32\U0001ee34-\U0001ee37\U0001ee39\U0001ee3b\U0001ee42\U0001ee47\U0001ee49\U0001ee4b\U0001ee4d-\U0001ee4f\U0001ee51-\U0001ee52\U0001ee54\U0001ee57\U0001ee59\U0001ee5b\U0001ee5d\U0001ee5f\U0001ee61-\U0001ee62\U0001ee64\U0001ee67-\U0001ee6a\U0001ee6c-\U0001ee72\U0001ee74-\U0001ee77\U0001ee79-\U0001ee7c\U0001ee7e\U0001ee80-\U0001ee89\U0001ee8b-\U0001ee9b\U0001eea1-\U0001eea3\U0001eea5-\U0001eea9\U0001eeab-\U0001eebb\U00020000-\U0002a6d6\U0002a700-\U0002b734\U0002b740-\U0002b81d\U0002b820-\U0002cea1\U0002ceb0-\U0002ebe0\U0002f800-\U0002fa1d' - -Lt = '\u01c5\u01c8\u01cb\u01f2\u1f88-\u1f8f\u1f98-\u1f9f\u1fa8-\u1faf\u1fbc\u1fcc\u1ffc' - -Lu = 
'A-Z\xc0-\xd6\xd8-\xde\u0100\u0102\u0104\u0106\u0108\u010a\u010c\u010e\u0110\u0112\u0114\u0116\u0118\u011a\u011c\u011e\u0120\u0122\u0124\u0126\u0128\u012a\u012c\u012e\u0130\u0132\u0134\u0136\u0139\u013b\u013d\u013f\u0141\u0143\u0145\u0147\u014a\u014c\u014e\u0150\u0152\u0154\u0156\u0158\u015a\u015c\u015e\u0160\u0162\u0164\u0166\u0168\u016a\u016c\u016e\u0170\u0172\u0174\u0176\u0178-\u0179\u017b\u017d\u0181-\u0182\u0184\u0186-\u0187\u0189-\u018b\u018e-\u0191\u0193-\u0194\u0196-\u0198\u019c-\u019d\u019f-\u01a0\u01a2\u01a4\u01a6-\u01a7\u01a9\u01ac\u01ae-\u01af\u01b1-\u01b3\u01b5\u01b7-\u01b8\u01bc\u01c4\u01c7\u01ca\u01cd\u01cf\u01d1\u01d3\u01d5\u01d7\u01d9\u01db\u01de\u01e0\u01e2\u01e4\u01e6\u01e8\u01ea\u01ec\u01ee\u01f1\u01f4\u01f6-\u01f8\u01fa\u01fc\u01fe\u0200\u0202\u0204\u0206\u0208\u020a\u020c\u020e\u0210\u0212\u0214\u0216\u0218\u021a\u021c\u021e\u0220\u0222\u0224\u0226\u0228\u022a\u022c\u022e\u0230\u0232\u023a-\u023b\u023d-\u023e\u0241\u0243-\u0246\u0248\u024a\u024c\u024e\u0370\u0372\u0376\u037f\u0386\u0388-\u038a\u038c\u038e-\u038f\u0391-\u03a1\u03a3-\u03ab\u03cf\u03d2-\u03d4\u03d8\u03da\u03dc\u03de\u03e0\u03e2\u03e4\u03e6\u03e8\u03ea\u03ec\u03ee\u03f4\u03f7\u03f9-\u03fa\u03fd-\u042f\u0460\u0462\u0464\u0466\u0468\u046a\u046c\u046e\u0470\u0472\u0474\u0476\u0478\u047a\u047c\u047e\u0480\u048a\u048c\u048e\u0490\u0492\u0494\u0496\u0498\u049a\u049c\u049e\u04a0\u04a2\u04a4\u04a6\u04a8\u04aa\u04ac\u04ae\u04b0\u04b2\u04b4\u04b6\u04b8\u04ba\u04bc\u04be\u04c0-\u04c1\u04c3\u04c5\u04c7\u04c9\u04cb\u04cd\u04d0\u04d2\u04d4\u04d6\u04d8\u04da\u04dc\u04de\u04e0\u04e2\u04e4\u04e6\u04e8\u04ea\u04ec\u04ee\u04f0\u04f2\u04f4\u04f6\u04f8\u04fa\u04fc\u04fe\u0500\u0502\u0504\u0506\u0508\u050a\u050c\u050e\u0510\u0512\u0514\u0516\u0518\u051a\u051c\u051e\u0520\u0522\u0524\u0526\u0528\u052a\u052c\u052e\u0531-\u0556\u10a0-\u10c5\u10c7\u10cd\u13a0-\u13f5\u1c90-\u1cba\u1cbd-\u1cbf\u1e00\u1e02\u1e04\u1e06\u1e08\u1e0a\u1e0c\u1e0e\u1e10\u1e12\u1e14\u1e16\u1e18\u1e1a\u1e1c\u1e1e\u1e20\u1e22\u1e24\u1e26\u1e28\u1e2a\u1e2c\u1e2e\u1e30\u1e32\u1e34\u1e36\u1e38\u1e3a\u1e3c\u1e3e\u1e40\u1e42\u1e44\u1e46\u1e48\u1e4a\u1e4c\u1e4e\u1e50\u1e52\u1e54\u1e56\u1e58\u1e5a\u1e5c\u1e5e\u1e60\u1e62\u1e64\u1e66\u1e68\u1e6a\u1e6c\u1e6e\u1e70\u1e72\u1e74\u1e76\u1e78\u1e7a\u1e7c\u1e7e\u1e80\u1e82\u1e84\u1e86\u1e88\u1e8a\u1e8c\u1e8e\u1e90\u1e92\u1e94\u1e9e\u1ea0\u1ea2\u1ea4\u1ea6\u1ea8\u1eaa\u1eac\u1eae\u1eb0\u1eb2\u1eb4\u1eb6\u1eb8\u1eba\u1ebc\u1ebe\u1ec0\u1ec2\u1ec4\u1ec6\u1ec8\u1eca\u1ecc\u1ece\u1ed0\u1ed2\u1ed4\u1ed6\u1ed8\u1eda\u1edc\u1ede\u1ee0\u1ee2\u1ee4\u1ee6\u1ee8\u1eea\u1eec\u1eee\u1ef0\u1ef2\u1ef4\u1ef6\u1ef8\u1efa\u1efc\u1efe\u1f08-\u1f0f\u1f18-\u1f1d\u1f28-\u1f2f\u1f38-\u1f3f\u1f48-\u1f4d\u1f59\u1f5b\u1f5d\u1f5f\u1f68-\u1f6f\u1fb8-\u1fbb\u1fc8-\u1fcb\u1fd8-\u1fdb\u1fe8-\u1fec\u1ff8-\u1ffb\u2102\u2107\u210b-\u210d\u2110-\u2112\u2115\u2119-\u211d\u2124\u2126\u2128\u212a-\u212d\u2130-\u2133\u213e-\u213f\u2145\u2183\u2c00-\u2c2e\u2c60\u2c62-\u2c64\u2c67\u2c69\u2c6b\u2c6d-\u2c70\u2c72\u2c75\u2c7e-\u2c80\u2c82\u2c84\u2c86\u2c88\u2c8a\u2c8c\u2c8e\u2c90\u2c92\u2c94\u2c96\u2c98\u2c9a\u2c9c\u2c9e\u2ca0\u2ca2\u2ca4\u2ca6\u2ca8\u2caa\u2cac\u2cae\u2cb0\u2cb2\u2cb4\u2cb6\u2cb8\u2cba\u2cbc\u2cbe\u2cc0\u2cc2\u2cc4\u2cc6\u2cc8\u2cca\u2ccc\u2cce\u2cd0\u2cd2\u2cd4\u2cd6\u2cd8\u2cda\u2cdc\u2cde\u2ce0\u2ce2\u2ceb\u2ced\u2cf2\ua640\ua642\ua644\ua646\ua648\ua64a\ua64c\ua64e\ua650\ua652\ua654\ua656\ua658\ua65a\ua65c\ua65e\ua660\ua662\ua664\ua666\ua668\ua66a\ua66c\ua680\ua682\ua684\ua686\ua688\ua68a\ua68c\ua68e\ua690\ua692\ua694\ua696\ua698\ua69a\ua722\ua724\u
a726\ua728\ua72a\ua72c\ua72e\ua732\ua734\ua736\ua738\ua73a\ua73c\ua73e\ua740\ua742\ua744\ua746\ua748\ua74a\ua74c\ua74e\ua750\ua752\ua754\ua756\ua758\ua75a\ua75c\ua75e\ua760\ua762\ua764\ua766\ua768\ua76a\ua76c\ua76e\ua779\ua77b\ua77d-\ua77e\ua780\ua782\ua784\ua786\ua78b\ua78d\ua790\ua792\ua796\ua798\ua79a\ua79c\ua79e\ua7a0\ua7a2\ua7a4\ua7a6\ua7a8\ua7aa-\ua7ae\ua7b0-\ua7b4\ua7b6\ua7b8\uff21-\uff3a\U00010400-\U00010427\U000104b0-\U000104d3\U00010c80-\U00010cb2\U000118a0-\U000118bf\U00016e40-\U00016e5f\U0001d400-\U0001d419\U0001d434-\U0001d44d\U0001d468-\U0001d481\U0001d49c\U0001d49e-\U0001d49f\U0001d4a2\U0001d4a5-\U0001d4a6\U0001d4a9-\U0001d4ac\U0001d4ae-\U0001d4b5\U0001d4d0-\U0001d4e9\U0001d504-\U0001d505\U0001d507-\U0001d50a\U0001d50d-\U0001d514\U0001d516-\U0001d51c\U0001d538-\U0001d539\U0001d53b-\U0001d53e\U0001d540-\U0001d544\U0001d546\U0001d54a-\U0001d550\U0001d56c-\U0001d585\U0001d5a0-\U0001d5b9\U0001d5d4-\U0001d5ed\U0001d608-\U0001d621\U0001d63c-\U0001d655\U0001d670-\U0001d689\U0001d6a8-\U0001d6c0\U0001d6e2-\U0001d6fa\U0001d71c-\U0001d734\U0001d756-\U0001d76e\U0001d790-\U0001d7a8\U0001d7ca\U0001e900-\U0001e921' - -Mc = '\u0903\u093b\u093e-\u0940\u0949-\u094c\u094e-\u094f\u0982-\u0983\u09be-\u09c0\u09c7-\u09c8\u09cb-\u09cc\u09d7\u0a03\u0a3e-\u0a40\u0a83\u0abe-\u0ac0\u0ac9\u0acb-\u0acc\u0b02-\u0b03\u0b3e\u0b40\u0b47-\u0b48\u0b4b-\u0b4c\u0b57\u0bbe-\u0bbf\u0bc1-\u0bc2\u0bc6-\u0bc8\u0bca-\u0bcc\u0bd7\u0c01-\u0c03\u0c41-\u0c44\u0c82-\u0c83\u0cbe\u0cc0-\u0cc4\u0cc7-\u0cc8\u0cca-\u0ccb\u0cd5-\u0cd6\u0d02-\u0d03\u0d3e-\u0d40\u0d46-\u0d48\u0d4a-\u0d4c\u0d57\u0d82-\u0d83\u0dcf-\u0dd1\u0dd8-\u0ddf\u0df2-\u0df3\u0f3e-\u0f3f\u0f7f\u102b-\u102c\u1031\u1038\u103b-\u103c\u1056-\u1057\u1062-\u1064\u1067-\u106d\u1083-\u1084\u1087-\u108c\u108f\u109a-\u109c\u17b6\u17be-\u17c5\u17c7-\u17c8\u1923-\u1926\u1929-\u192b\u1930-\u1931\u1933-\u1938\u1a19-\u1a1a\u1a55\u1a57\u1a61\u1a63-\u1a64\u1a6d-\u1a72\u1b04\u1b35\u1b3b\u1b3d-\u1b41\u1b43-\u1b44\u1b82\u1ba1\u1ba6-\u1ba7\u1baa\u1be7\u1bea-\u1bec\u1bee\u1bf2-\u1bf3\u1c24-\u1c2b\u1c34-\u1c35\u1ce1\u1cf2-\u1cf3\u1cf7\u302e-\u302f\ua823-\ua824\ua827\ua880-\ua881\ua8b4-\ua8c3\ua952-\ua953\ua983\ua9b4-\ua9b5\ua9ba-\ua9bb\ua9bd-\ua9c0\uaa2f-\uaa30\uaa33-\uaa34\uaa4d\uaa7b\uaa7d\uaaeb\uaaee-\uaaef\uaaf5\uabe3-\uabe4\uabe6-\uabe7\uabe9-\uabea\uabec\U00011000\U00011002\U00011082\U000110b0-\U000110b2\U000110b7-\U000110b8\U0001112c\U00011145-\U00011146\U00011182\U000111b3-\U000111b5\U000111bf-\U000111c0\U0001122c-\U0001122e\U00011232-\U00011233\U00011235\U000112e0-\U000112e2\U00011302-\U00011303\U0001133e-\U0001133f\U00011341-\U00011344\U00011347-\U00011348\U0001134b-\U0001134d\U00011357\U00011362-\U00011363\U00011435-\U00011437\U00011440-\U00011441\U00011445\U000114b0-\U000114b2\U000114b9\U000114bb-\U000114be\U000114c1\U000115af-\U000115b1\U000115b8-\U000115bb\U000115be\U00011630-\U00011632\U0001163b-\U0001163c\U0001163e\U000116ac\U000116ae-\U000116af\U000116b6\U00011720-\U00011721\U00011726\U0001182c-\U0001182e\U00011838\U00011a39\U00011a57-\U00011a58\U00011a97\U00011c2f\U00011c3e\U00011ca9\U00011cb1\U00011cb4\U00011d8a-\U00011d8e\U00011d93-\U00011d94\U00011d96\U00011ef5-\U00011ef6\U00016f51-\U00016f7e\U0001d165-\U0001d166\U0001d16d-\U0001d172' - -Me = '\u0488-\u0489\u1abe\u20dd-\u20e0\u20e2-\u20e4\ua670-\ua672' - -Mn = 
'\u0300-\u036f\u0483-\u0487\u0591-\u05bd\u05bf\u05c1-\u05c2\u05c4-\u05c5\u05c7\u0610-\u061a\u064b-\u065f\u0670\u06d6-\u06dc\u06df-\u06e4\u06e7-\u06e8\u06ea-\u06ed\u0711\u0730-\u074a\u07a6-\u07b0\u07eb-\u07f3\u07fd\u0816-\u0819\u081b-\u0823\u0825-\u0827\u0829-\u082d\u0859-\u085b\u08d3-\u08e1\u08e3-\u0902\u093a\u093c\u0941-\u0948\u094d\u0951-\u0957\u0962-\u0963\u0981\u09bc\u09c1-\u09c4\u09cd\u09e2-\u09e3\u09fe\u0a01-\u0a02\u0a3c\u0a41-\u0a42\u0a47-\u0a48\u0a4b-\u0a4d\u0a51\u0a70-\u0a71\u0a75\u0a81-\u0a82\u0abc\u0ac1-\u0ac5\u0ac7-\u0ac8\u0acd\u0ae2-\u0ae3\u0afa-\u0aff\u0b01\u0b3c\u0b3f\u0b41-\u0b44\u0b4d\u0b56\u0b62-\u0b63\u0b82\u0bc0\u0bcd\u0c00\u0c04\u0c3e-\u0c40\u0c46-\u0c48\u0c4a-\u0c4d\u0c55-\u0c56\u0c62-\u0c63\u0c81\u0cbc\u0cbf\u0cc6\u0ccc-\u0ccd\u0ce2-\u0ce3\u0d00-\u0d01\u0d3b-\u0d3c\u0d41-\u0d44\u0d4d\u0d62-\u0d63\u0dca\u0dd2-\u0dd4\u0dd6\u0e31\u0e34-\u0e3a\u0e47-\u0e4e\u0eb1\u0eb4-\u0eb9\u0ebb-\u0ebc\u0ec8-\u0ecd\u0f18-\u0f19\u0f35\u0f37\u0f39\u0f71-\u0f7e\u0f80-\u0f84\u0f86-\u0f87\u0f8d-\u0f97\u0f99-\u0fbc\u0fc6\u102d-\u1030\u1032-\u1037\u1039-\u103a\u103d-\u103e\u1058-\u1059\u105e-\u1060\u1071-\u1074\u1082\u1085-\u1086\u108d\u109d\u135d-\u135f\u1712-\u1714\u1732-\u1734\u1752-\u1753\u1772-\u1773\u17b4-\u17b5\u17b7-\u17bd\u17c6\u17c9-\u17d3\u17dd\u180b-\u180d\u1885-\u1886\u18a9\u1920-\u1922\u1927-\u1928\u1932\u1939-\u193b\u1a17-\u1a18\u1a1b\u1a56\u1a58-\u1a5e\u1a60\u1a62\u1a65-\u1a6c\u1a73-\u1a7c\u1a7f\u1ab0-\u1abd\u1b00-\u1b03\u1b34\u1b36-\u1b3a\u1b3c\u1b42\u1b6b-\u1b73\u1b80-\u1b81\u1ba2-\u1ba5\u1ba8-\u1ba9\u1bab-\u1bad\u1be6\u1be8-\u1be9\u1bed\u1bef-\u1bf1\u1c2c-\u1c33\u1c36-\u1c37\u1cd0-\u1cd2\u1cd4-\u1ce0\u1ce2-\u1ce8\u1ced\u1cf4\u1cf8-\u1cf9\u1dc0-\u1df9\u1dfb-\u1dff\u20d0-\u20dc\u20e1\u20e5-\u20f0\u2cef-\u2cf1\u2d7f\u2de0-\u2dff\u302a-\u302d\u3099-\u309a\ua66f\ua674-\ua67d\ua69e-\ua69f\ua6f0-\ua6f1\ua802\ua806\ua80b\ua825-\ua826\ua8c4-\ua8c5\ua8e0-\ua8f1\ua8ff\ua926-\ua92d\ua947-\ua951\ua980-\ua982\ua9b3\ua9b6-\ua9b9\ua9bc\ua9e5\uaa29-\uaa2e\uaa31-\uaa32\uaa35-\uaa36\uaa43\uaa4c\uaa7c\uaab0\uaab2-\uaab4\uaab7-\uaab8\uaabe-\uaabf\uaac1\uaaec-\uaaed\uaaf6\uabe5\uabe8\uabed\ufb1e\ufe00-\ufe0f\ufe20-\ufe2f\U000101fd\U000102e0\U00010376-\U0001037a\U00010a01-\U00010a03\U00010a05-\U00010a06\U00010a0c-\U00010a0f\U00010a38-\U00010a3a\U00010a3f\U00010ae5-\U00010ae6\U00010d24-\U00010d27\U00010f46-\U00010f50\U00011001\U00011038-\U00011046\U0001107f-\U00011081\U000110b3-\U000110b6\U000110b9-\U000110ba\U00011100-\U00011102\U00011127-\U0001112b\U0001112d-\U00011134\U00011173\U00011180-\U00011181\U000111b6-\U000111be\U000111c9-\U000111cc\U0001122f-\U00011231\U00011234\U00011236-\U00011237\U0001123e\U000112df\U000112e3-\U000112ea\U00011300-\U00011301\U0001133b-\U0001133c\U00011340\U00011366-\U0001136c\U00011370-\U00011374\U00011438-\U0001143f\U00011442-\U00011444\U00011446\U0001145e\U000114b3-\U000114b8\U000114ba\U000114bf-\U000114c0\U000114c2-\U000114c3\U000115b2-\U000115b5\U000115bc-\U000115bd\U000115bf-\U000115c0\U000115dc-\U000115dd\U00011633-\U0001163a\U0001163d\U0001163f-\U00011640\U000116ab\U000116ad\U000116b0-\U000116b5\U000116b7\U0001171d-\U0001171f\U00011722-\U00011725\U00011727-\U0001172b\U0001182f-\U00011837\U00011839-\U0001183a\U00011a01-\U00011a0a\U00011a33-\U00011a38\U00011a3b-\U00011a3e\U00011a47\U00011a51-\U00011a56\U00011a59-\U00011a5b\U00011a8a-\U00011a96\U00011a98-\U00011a99\U00011c30-\U00011c36\U00011c38-\U00011c3d\U00011c3f\U00011c92-\U00011ca7\U00011caa-\U00011cb0\U00011cb2-\U00011cb3\U00011cb5-\U00011cb6\U00011d31-\U00011d36\U00011d3a\U00011d3c-\U00011d3d\U00011d3f-\U000
11d45\U00011d47\U00011d90-\U00011d91\U00011d95\U00011d97\U00011ef3-\U00011ef4\U00016af0-\U00016af4\U00016b30-\U00016b36\U00016f8f-\U00016f92\U0001bc9d-\U0001bc9e\U0001d167-\U0001d169\U0001d17b-\U0001d182\U0001d185-\U0001d18b\U0001d1aa-\U0001d1ad\U0001d242-\U0001d244\U0001da00-\U0001da36\U0001da3b-\U0001da6c\U0001da75\U0001da84\U0001da9b-\U0001da9f\U0001daa1-\U0001daaf\U0001e000-\U0001e006\U0001e008-\U0001e018\U0001e01b-\U0001e021\U0001e023-\U0001e024\U0001e026-\U0001e02a\U0001e8d0-\U0001e8d6\U0001e944-\U0001e94a\U000e0100-\U000e01ef' - -Nd = '0-9\u0660-\u0669\u06f0-\u06f9\u07c0-\u07c9\u0966-\u096f\u09e6-\u09ef\u0a66-\u0a6f\u0ae6-\u0aef\u0b66-\u0b6f\u0be6-\u0bef\u0c66-\u0c6f\u0ce6-\u0cef\u0d66-\u0d6f\u0de6-\u0def\u0e50-\u0e59\u0ed0-\u0ed9\u0f20-\u0f29\u1040-\u1049\u1090-\u1099\u17e0-\u17e9\u1810-\u1819\u1946-\u194f\u19d0-\u19d9\u1a80-\u1a89\u1a90-\u1a99\u1b50-\u1b59\u1bb0-\u1bb9\u1c40-\u1c49\u1c50-\u1c59\ua620-\ua629\ua8d0-\ua8d9\ua900-\ua909\ua9d0-\ua9d9\ua9f0-\ua9f9\uaa50-\uaa59\uabf0-\uabf9\uff10-\uff19\U000104a0-\U000104a9\U00010d30-\U00010d39\U00011066-\U0001106f\U000110f0-\U000110f9\U00011136-\U0001113f\U000111d0-\U000111d9\U000112f0-\U000112f9\U00011450-\U00011459\U000114d0-\U000114d9\U00011650-\U00011659\U000116c0-\U000116c9\U00011730-\U00011739\U000118e0-\U000118e9\U00011c50-\U00011c59\U00011d50-\U00011d59\U00011da0-\U00011da9\U00016a60-\U00016a69\U00016b50-\U00016b59\U0001d7ce-\U0001d7ff\U0001e950-\U0001e959' - -Nl = '\u16ee-\u16f0\u2160-\u2182\u2185-\u2188\u3007\u3021-\u3029\u3038-\u303a\ua6e6-\ua6ef\U00010140-\U00010174\U00010341\U0001034a\U000103d1-\U000103d5\U00012400-\U0001246e' - -No = '\xb2-\xb3\xb9\xbc-\xbe\u09f4-\u09f9\u0b72-\u0b77\u0bf0-\u0bf2\u0c78-\u0c7e\u0d58-\u0d5e\u0d70-\u0d78\u0f2a-\u0f33\u1369-\u137c\u17f0-\u17f9\u19da\u2070\u2074-\u2079\u2080-\u2089\u2150-\u215f\u2189\u2460-\u249b\u24ea-\u24ff\u2776-\u2793\u2cfd\u3192-\u3195\u3220-\u3229\u3248-\u324f\u3251-\u325f\u3280-\u3289\u32b1-\u32bf\ua830-\ua835\U00010107-\U00010133\U00010175-\U00010178\U0001018a-\U0001018b\U000102e1-\U000102fb\U00010320-\U00010323\U00010858-\U0001085f\U00010879-\U0001087f\U000108a7-\U000108af\U000108fb-\U000108ff\U00010916-\U0001091b\U000109bc-\U000109bd\U000109c0-\U000109cf\U000109d2-\U000109ff\U00010a40-\U00010a48\U00010a7d-\U00010a7e\U00010a9d-\U00010a9f\U00010aeb-\U00010aef\U00010b58-\U00010b5f\U00010b78-\U00010b7f\U00010ba9-\U00010baf\U00010cfa-\U00010cff\U00010e60-\U00010e7e\U00010f1d-\U00010f26\U00010f51-\U00010f54\U00011052-\U00011065\U000111e1-\U000111f4\U0001173a-\U0001173b\U000118ea-\U000118f2\U00011c5a-\U00011c6c\U00016b5b-\U00016b61\U00016e80-\U00016e96\U0001d2e0-\U0001d2f3\U0001d360-\U0001d378\U0001e8c7-\U0001e8cf\U0001ec71-\U0001ecab\U0001ecad-\U0001ecaf\U0001ecb1-\U0001ecb4\U0001f100-\U0001f10c' - -Pc = '_\u203f-\u2040\u2054\ufe33-\ufe34\ufe4d-\ufe4f\uff3f' - -Pd = '\\-\u058a\u05be\u1400\u1806\u2010-\u2015\u2e17\u2e1a\u2e3a-\u2e3b\u2e40\u301c\u3030\u30a0\ufe31-\ufe32\ufe58\ufe63\uff0d' - -Pe = ')\\]}\u0f3b\u0f3d\u169c\u2046\u207e\u208e\u2309\u230b\u232a\u2769\u276b\u276d\u276f\u2771\u2773\u2775\u27c6\u27e7\u27e9\u27eb\u27ed\u27ef\u2984\u2986\u2988\u298a\u298c\u298e\u2990\u2992\u2994\u2996\u2998\u29d9\u29db\u29fd\u2e23\u2e25\u2e27\u2e29\u3009\u300b\u300d\u300f\u3011\u3015\u3017\u3019\u301b\u301e-\u301f\ufd3e\ufe18\ufe36\ufe38\ufe3a\ufe3c\ufe3e\ufe40\ufe42\ufe44\ufe48\ufe5a\ufe5c\ufe5e\uff09\uff3d\uff5d\uff60\uff63' - -Pf = '\xbb\u2019\u201d\u203a\u2e03\u2e05\u2e0a\u2e0d\u2e1d\u2e21' - -Pi = '\xab\u2018\u201b-\u201c\u201f\u2039\u2e02\u2e04\u2e09\u2e0c\u2e1c\u2e20' - -Po = 
"!-#%-'*,.-/:-;?-@\\\\\xa1\xa7\xb6-\xb7\xbf\u037e\u0387\u055a-\u055f\u0589\u05c0\u05c3\u05c6\u05f3-\u05f4\u0609-\u060a\u060c-\u060d\u061b\u061e-\u061f\u066a-\u066d\u06d4\u0700-\u070d\u07f7-\u07f9\u0830-\u083e\u085e\u0964-\u0965\u0970\u09fd\u0a76\u0af0\u0c84\u0df4\u0e4f\u0e5a-\u0e5b\u0f04-\u0f12\u0f14\u0f85\u0fd0-\u0fd4\u0fd9-\u0fda\u104a-\u104f\u10fb\u1360-\u1368\u166d-\u166e\u16eb-\u16ed\u1735-\u1736\u17d4-\u17d6\u17d8-\u17da\u1800-\u1805\u1807-\u180a\u1944-\u1945\u1a1e-\u1a1f\u1aa0-\u1aa6\u1aa8-\u1aad\u1b5a-\u1b60\u1bfc-\u1bff\u1c3b-\u1c3f\u1c7e-\u1c7f\u1cc0-\u1cc7\u1cd3\u2016-\u2017\u2020-\u2027\u2030-\u2038\u203b-\u203e\u2041-\u2043\u2047-\u2051\u2053\u2055-\u205e\u2cf9-\u2cfc\u2cfe-\u2cff\u2d70\u2e00-\u2e01\u2e06-\u2e08\u2e0b\u2e0e-\u2e16\u2e18-\u2e19\u2e1b\u2e1e-\u2e1f\u2e2a-\u2e2e\u2e30-\u2e39\u2e3c-\u2e3f\u2e41\u2e43-\u2e4e\u3001-\u3003\u303d\u30fb\ua4fe-\ua4ff\ua60d-\ua60f\ua673\ua67e\ua6f2-\ua6f7\ua874-\ua877\ua8ce-\ua8cf\ua8f8-\ua8fa\ua8fc\ua92e-\ua92f\ua95f\ua9c1-\ua9cd\ua9de-\ua9df\uaa5c-\uaa5f\uaade-\uaadf\uaaf0-\uaaf1\uabeb\ufe10-\ufe16\ufe19\ufe30\ufe45-\ufe46\ufe49-\ufe4c\ufe50-\ufe52\ufe54-\ufe57\ufe5f-\ufe61\ufe68\ufe6a-\ufe6b\uff01-\uff03\uff05-\uff07\uff0a\uff0c\uff0e-\uff0f\uff1a-\uff1b\uff1f-\uff20\uff3c\uff61\uff64-\uff65\U00010100-\U00010102\U0001039f\U000103d0\U0001056f\U00010857\U0001091f\U0001093f\U00010a50-\U00010a58\U00010a7f\U00010af0-\U00010af6\U00010b39-\U00010b3f\U00010b99-\U00010b9c\U00010f55-\U00010f59\U00011047-\U0001104d\U000110bb-\U000110bc\U000110be-\U000110c1\U00011140-\U00011143\U00011174-\U00011175\U000111c5-\U000111c8\U000111cd\U000111db\U000111dd-\U000111df\U00011238-\U0001123d\U000112a9\U0001144b-\U0001144f\U0001145b\U0001145d\U000114c6\U000115c1-\U000115d7\U00011641-\U00011643\U00011660-\U0001166c\U0001173c-\U0001173e\U0001183b\U00011a3f-\U00011a46\U00011a9a-\U00011a9c\U00011a9e-\U00011aa2\U00011c41-\U00011c45\U00011c70-\U00011c71\U00011ef7-\U00011ef8\U00012470-\U00012474\U00016a6e-\U00016a6f\U00016af5\U00016b37-\U00016b3b\U00016b44\U00016e97-\U00016e9a\U0001bc9f\U0001da87-\U0001da8b\U0001e95e-\U0001e95f" - -Ps = '(\\[{\u0f3a\u0f3c\u169b\u201a\u201e\u2045\u207d\u208d\u2308\u230a\u2329\u2768\u276a\u276c\u276e\u2770\u2772\u2774\u27c5\u27e6\u27e8\u27ea\u27ec\u27ee\u2983\u2985\u2987\u2989\u298b\u298d\u298f\u2991\u2993\u2995\u2997\u29d8\u29da\u29fc\u2e22\u2e24\u2e26\u2e28\u2e42\u3008\u300a\u300c\u300e\u3010\u3014\u3016\u3018\u301a\u301d\ufd3f\ufe17\ufe35\ufe37\ufe39\ufe3b\ufe3d\ufe3f\ufe41\ufe43\ufe47\ufe59\ufe5b\ufe5d\uff08\uff3b\uff5b\uff5f\uff62' - -Sc = '$\xa2-\xa5\u058f\u060b\u07fe-\u07ff\u09f2-\u09f3\u09fb\u0af1\u0bf9\u0e3f\u17db\u20a0-\u20bf\ua838\ufdfc\ufe69\uff04\uffe0-\uffe1\uffe5-\uffe6\U0001ecb0' - -Sk = '\\^`\xa8\xaf\xb4\xb8\u02c2-\u02c5\u02d2-\u02df\u02e5-\u02eb\u02ed\u02ef-\u02ff\u0375\u0384-\u0385\u1fbd\u1fbf-\u1fc1\u1fcd-\u1fcf\u1fdd-\u1fdf\u1fed-\u1fef\u1ffd-\u1ffe\u309b-\u309c\ua700-\ua716\ua720-\ua721\ua789-\ua78a\uab5b\ufbb2-\ufbc1\uff3e\uff40\uffe3\U0001f3fb-\U0001f3ff' - -Sm = 
'+<->|~\xac\xb1\xd7\xf7\u03f6\u0606-\u0608\u2044\u2052\u207a-\u207c\u208a-\u208c\u2118\u2140-\u2144\u214b\u2190-\u2194\u219a-\u219b\u21a0\u21a3\u21a6\u21ae\u21ce-\u21cf\u21d2\u21d4\u21f4-\u22ff\u2320-\u2321\u237c\u239b-\u23b3\u23dc-\u23e1\u25b7\u25c1\u25f8-\u25ff\u266f\u27c0-\u27c4\u27c7-\u27e5\u27f0-\u27ff\u2900-\u2982\u2999-\u29d7\u29dc-\u29fb\u29fe-\u2aff\u2b30-\u2b44\u2b47-\u2b4c\ufb29\ufe62\ufe64-\ufe66\uff0b\uff1c-\uff1e\uff5c\uff5e\uffe2\uffe9-\uffec\U0001d6c1\U0001d6db\U0001d6fb\U0001d715\U0001d735\U0001d74f\U0001d76f\U0001d789\U0001d7a9\U0001d7c3\U0001eef0-\U0001eef1' - -So = '\xa6\xa9\xae\xb0\u0482\u058d-\u058e\u060e-\u060f\u06de\u06e9\u06fd-\u06fe\u07f6\u09fa\u0b70\u0bf3-\u0bf8\u0bfa\u0c7f\u0d4f\u0d79\u0f01-\u0f03\u0f13\u0f15-\u0f17\u0f1a-\u0f1f\u0f34\u0f36\u0f38\u0fbe-\u0fc5\u0fc7-\u0fcc\u0fce-\u0fcf\u0fd5-\u0fd8\u109e-\u109f\u1390-\u1399\u1940\u19de-\u19ff\u1b61-\u1b6a\u1b74-\u1b7c\u2100-\u2101\u2103-\u2106\u2108-\u2109\u2114\u2116-\u2117\u211e-\u2123\u2125\u2127\u2129\u212e\u213a-\u213b\u214a\u214c-\u214d\u214f\u218a-\u218b\u2195-\u2199\u219c-\u219f\u21a1-\u21a2\u21a4-\u21a5\u21a7-\u21ad\u21af-\u21cd\u21d0-\u21d1\u21d3\u21d5-\u21f3\u2300-\u2307\u230c-\u231f\u2322-\u2328\u232b-\u237b\u237d-\u239a\u23b4-\u23db\u23e2-\u2426\u2440-\u244a\u249c-\u24e9\u2500-\u25b6\u25b8-\u25c0\u25c2-\u25f7\u2600-\u266e\u2670-\u2767\u2794-\u27bf\u2800-\u28ff\u2b00-\u2b2f\u2b45-\u2b46\u2b4d-\u2b73\u2b76-\u2b95\u2b98-\u2bc8\u2bca-\u2bfe\u2ce5-\u2cea\u2e80-\u2e99\u2e9b-\u2ef3\u2f00-\u2fd5\u2ff0-\u2ffb\u3004\u3012-\u3013\u3020\u3036-\u3037\u303e-\u303f\u3190-\u3191\u3196-\u319f\u31c0-\u31e3\u3200-\u321e\u322a-\u3247\u3250\u3260-\u327f\u328a-\u32b0\u32c0-\u32fe\u3300-\u33ff\u4dc0-\u4dff\ua490-\ua4c6\ua828-\ua82b\ua836-\ua837\ua839\uaa77-\uaa79\ufdfd\uffe4\uffe8\uffed-\uffee\ufffc-\ufffd\U00010137-\U0001013f\U00010179-\U00010189\U0001018c-\U0001018e\U00010190-\U0001019b\U000101a0\U000101d0-\U000101fc\U00010877-\U00010878\U00010ac8\U0001173f\U00016b3c-\U00016b3f\U00016b45\U0001bc9c\U0001d000-\U0001d0f5\U0001d100-\U0001d126\U0001d129-\U0001d164\U0001d16a-\U0001d16c\U0001d183-\U0001d184\U0001d18c-\U0001d1a9\U0001d1ae-\U0001d1e8\U0001d200-\U0001d241\U0001d245\U0001d300-\U0001d356\U0001d800-\U0001d9ff\U0001da37-\U0001da3a\U0001da6d-\U0001da74\U0001da76-\U0001da83\U0001da85-\U0001da86\U0001ecac\U0001f000-\U0001f02b\U0001f030-\U0001f093\U0001f0a0-\U0001f0ae\U0001f0b1-\U0001f0bf\U0001f0c1-\U0001f0cf\U0001f0d1-\U0001f0f5\U0001f110-\U0001f16b\U0001f170-\U0001f1ac\U0001f1e6-\U0001f202\U0001f210-\U0001f23b\U0001f240-\U0001f248\U0001f250-\U0001f251\U0001f260-\U0001f265\U0001f300-\U0001f3fa\U0001f400-\U0001f6d4\U0001f6e0-\U0001f6ec\U0001f6f0-\U0001f6f9\U0001f700-\U0001f773\U0001f780-\U0001f7d8\U0001f800-\U0001f80b\U0001f810-\U0001f847\U0001f850-\U0001f859\U0001f860-\U0001f887\U0001f890-\U0001f8ad\U0001f900-\U0001f90b\U0001f910-\U0001f93e\U0001f940-\U0001f970\U0001f973-\U0001f976\U0001f97a\U0001f97c-\U0001f9a2\U0001f9b0-\U0001f9b9\U0001f9c0-\U0001f9c2\U0001f9d0-\U0001f9ff\U0001fa60-\U0001fa6d' - -Zl = '\u2028' - -Zp = '\u2029' - -Zs = ' \xa0\u1680\u2000-\u200a\u202f\u205f\u3000' - -xid_continue = 
'0-9A-Z_a-z\xaa\xb5\xb7\xba\xc0-\xd6\xd8-\xf6\xf8-\u02c1\u02c6-\u02d1\u02e0-\u02e4\u02ec\u02ee\u0300-\u0374\u0376-\u0377\u037b-\u037d\u037f\u0386-\u038a\u038c\u038e-\u03a1\u03a3-\u03f5\u03f7-\u0481\u0483-\u0487\u048a-\u052f\u0531-\u0556\u0559\u0560-\u0588\u0591-\u05bd\u05bf\u05c1-\u05c2\u05c4-\u05c5\u05c7\u05d0-\u05ea\u05ef-\u05f2\u0610-\u061a\u0620-\u0669\u066e-\u06d3\u06d5-\u06dc\u06df-\u06e8\u06ea-\u06fc\u06ff\u0710-\u074a\u074d-\u07b1\u07c0-\u07f5\u07fa\u07fd\u0800-\u082d\u0840-\u085b\u0860-\u086a\u08a0-\u08b4\u08b6-\u08bd\u08d3-\u08e1\u08e3-\u0963\u0966-\u096f\u0971-\u0983\u0985-\u098c\u098f-\u0990\u0993-\u09a8\u09aa-\u09b0\u09b2\u09b6-\u09b9\u09bc-\u09c4\u09c7-\u09c8\u09cb-\u09ce\u09d7\u09dc-\u09dd\u09df-\u09e3\u09e6-\u09f1\u09fc\u09fe\u0a01-\u0a03\u0a05-\u0a0a\u0a0f-\u0a10\u0a13-\u0a28\u0a2a-\u0a30\u0a32-\u0a33\u0a35-\u0a36\u0a38-\u0a39\u0a3c\u0a3e-\u0a42\u0a47-\u0a48\u0a4b-\u0a4d\u0a51\u0a59-\u0a5c\u0a5e\u0a66-\u0a75\u0a81-\u0a83\u0a85-\u0a8d\u0a8f-\u0a91\u0a93-\u0aa8\u0aaa-\u0ab0\u0ab2-\u0ab3\u0ab5-\u0ab9\u0abc-\u0ac5\u0ac7-\u0ac9\u0acb-\u0acd\u0ad0\u0ae0-\u0ae3\u0ae6-\u0aef\u0af9-\u0aff\u0b01-\u0b03\u0b05-\u0b0c\u0b0f-\u0b10\u0b13-\u0b28\u0b2a-\u0b30\u0b32-\u0b33\u0b35-\u0b39\u0b3c-\u0b44\u0b47-\u0b48\u0b4b-\u0b4d\u0b56-\u0b57\u0b5c-\u0b5d\u0b5f-\u0b63\u0b66-\u0b6f\u0b71\u0b82-\u0b83\u0b85-\u0b8a\u0b8e-\u0b90\u0b92-\u0b95\u0b99-\u0b9a\u0b9c\u0b9e-\u0b9f\u0ba3-\u0ba4\u0ba8-\u0baa\u0bae-\u0bb9\u0bbe-\u0bc2\u0bc6-\u0bc8\u0bca-\u0bcd\u0bd0\u0bd7\u0be6-\u0bef\u0c00-\u0c0c\u0c0e-\u0c10\u0c12-\u0c28\u0c2a-\u0c39\u0c3d-\u0c44\u0c46-\u0c48\u0c4a-\u0c4d\u0c55-\u0c56\u0c58-\u0c5a\u0c60-\u0c63\u0c66-\u0c6f\u0c80-\u0c83\u0c85-\u0c8c\u0c8e-\u0c90\u0c92-\u0ca8\u0caa-\u0cb3\u0cb5-\u0cb9\u0cbc-\u0cc4\u0cc6-\u0cc8\u0cca-\u0ccd\u0cd5-\u0cd6\u0cde\u0ce0-\u0ce3\u0ce6-\u0cef\u0cf1-\u0cf2\u0d00-\u0d03\u0d05-\u0d0c\u0d0e-\u0d10\u0d12-\u0d44\u0d46-\u0d48\u0d4a-\u0d4e\u0d54-\u0d57\u0d5f-\u0d63\u0d66-\u0d6f\u0d7a-\u0d7f\u0d82-\u0d83\u0d85-\u0d96\u0d9a-\u0db1\u0db3-\u0dbb\u0dbd\u0dc0-\u0dc6\u0dca\u0dcf-\u0dd4\u0dd6\u0dd8-\u0ddf\u0de6-\u0def\u0df2-\u0df3\u0e01-\u0e3a\u0e40-\u0e4e\u0e50-\u0e59\u0e81-\u0e82\u0e84\u0e87-\u0e88\u0e8a\u0e8d\u0e94-\u0e97\u0e99-\u0e9f\u0ea1-\u0ea3\u0ea5\u0ea7\u0eaa-\u0eab\u0ead-\u0eb9\u0ebb-\u0ebd\u0ec0-\u0ec4\u0ec6\u0ec8-\u0ecd\u0ed0-\u0ed9\u0edc-\u0edf\u0f00\u0f18-\u0f19\u0f20-\u0f29\u0f35\u0f37\u0f39\u0f3e-\u0f47\u0f49-\u0f6c\u0f71-\u0f84\u0f86-\u0f97\u0f99-\u0fbc\u0fc6\u1000-\u1049\u1050-\u109d\u10a0-\u10c5\u10c7\u10cd\u10d0-\u10fa\u10fc-\u1248\u124a-\u124d\u1250-\u1256\u1258\u125a-\u125d\u1260-\u1288\u128a-\u128d\u1290-\u12b0\u12b2-\u12b5\u12b8-\u12be\u12c0\u12c2-\u12c5\u12c8-\u12d6\u12d8-\u1310\u1312-\u1315\u1318-\u135a\u135d-\u135f\u1369-\u1371\u1380-\u138f\u13a0-\u13f5\u13f8-\u13fd\u1401-\u166c\u166f-\u167f\u1681-\u169a\u16a0-\u16ea\u16ee-\u16f8\u1700-\u170c\u170e-\u1714\u1720-\u1734\u1740-\u1753\u1760-\u176c\u176e-\u1770\u1772-\u1773\u1780-\u17d3\u17d7\u17dc-\u17dd\u17e0-\u17e9\u180b-\u180d\u1810-\u1819\u1820-\u1878\u1880-\u18aa\u18b0-\u18f5\u1900-\u191e\u1920-\u192b\u1930-\u193b\u1946-\u196d\u1970-\u1974\u1980-\u19ab\u19b0-\u19c9\u19d0-\u19da\u1a00-\u1a1b\u1a20-\u1a5e\u1a60-\u1a7c\u1a7f-\u1a89\u1a90-\u1a99\u1aa7\u1ab0-\u1abd\u1b00-\u1b4b\u1b50-\u1b59\u1b6b-\u1b73\u1b80-\u1bf3\u1c00-\u1c37\u1c40-\u1c49\u1c4d-\u1c7d\u1c80-\u1c88\u1c90-\u1cba\u1cbd-\u1cbf\u1cd0-\u1cd2\u1cd4-\u1cf9\u1d00-\u1df9\u1dfb-\u1f15\u1f18-\u1f1d\u1f20-\u1f45\u1f48-\u1f4d\u1f50-\u1f57\u1f59\u1f5b\u1f5d\u1f5f-\u1f7d\u1f80-\u1fb4\u1fb6-\u1fbc\u1fbe\u1fc2-\u1fc4\u1fc6-\u1fcc\u1fd0-\u1fd3\u1fd6-\u1fdb\u1fe0
-\u1fec\u1ff2-\u1ff4\u1ff6-\u1ffc\u203f-\u2040\u2054\u2071\u207f\u2090-\u209c\u20d0-\u20dc\u20e1\u20e5-\u20f0\u2102\u2107\u210a-\u2113\u2115\u2118-\u211d\u2124\u2126\u2128\u212a-\u2139\u213c-\u213f\u2145-\u2149\u214e\u2160-\u2188\u2c00-\u2c2e\u2c30-\u2c5e\u2c60-\u2ce4\u2ceb-\u2cf3\u2d00-\u2d25\u2d27\u2d2d\u2d30-\u2d67\u2d6f\u2d7f-\u2d96\u2da0-\u2da6\u2da8-\u2dae\u2db0-\u2db6\u2db8-\u2dbe\u2dc0-\u2dc6\u2dc8-\u2dce\u2dd0-\u2dd6\u2dd8-\u2dde\u2de0-\u2dff\u3005-\u3007\u3021-\u302f\u3031-\u3035\u3038-\u303c\u3041-\u3096\u3099-\u309a\u309d-\u309f\u30a1-\u30fa\u30fc-\u30ff\u3105-\u312f\u3131-\u318e\u31a0-\u31ba\u31f0-\u31ff\u3400-\u4db5\u4e00-\u9fef\ua000-\ua48c\ua4d0-\ua4fd\ua500-\ua60c\ua610-\ua62b\ua640-\ua66f\ua674-\ua67d\ua67f-\ua6f1\ua717-\ua71f\ua722-\ua788\ua78b-\ua7b9\ua7f7-\ua827\ua840-\ua873\ua880-\ua8c5\ua8d0-\ua8d9\ua8e0-\ua8f7\ua8fb\ua8fd-\ua92d\ua930-\ua953\ua960-\ua97c\ua980-\ua9c0\ua9cf-\ua9d9\ua9e0-\ua9fe\uaa00-\uaa36\uaa40-\uaa4d\uaa50-\uaa59\uaa60-\uaa76\uaa7a-\uaac2\uaadb-\uaadd\uaae0-\uaaef\uaaf2-\uaaf6\uab01-\uab06\uab09-\uab0e\uab11-\uab16\uab20-\uab26\uab28-\uab2e\uab30-\uab5a\uab5c-\uab65\uab70-\uabea\uabec-\uabed\uabf0-\uabf9\uac00-\ud7a3\ud7b0-\ud7c6\ud7cb-\ud7fb\uf900-\ufa6d\ufa70-\ufad9\ufb00-\ufb06\ufb13-\ufb17\ufb1d-\ufb28\ufb2a-\ufb36\ufb38-\ufb3c\ufb3e\ufb40-\ufb41\ufb43-\ufb44\ufb46-\ufbb1\ufbd3-\ufc5d\ufc64-\ufd3d\ufd50-\ufd8f\ufd92-\ufdc7\ufdf0-\ufdf9\ufe00-\ufe0f\ufe20-\ufe2f\ufe33-\ufe34\ufe4d-\ufe4f\ufe71\ufe73\ufe77\ufe79\ufe7b\ufe7d\ufe7f-\ufefc\uff10-\uff19\uff21-\uff3a\uff3f\uff41-\uff5a\uff66-\uffbe\uffc2-\uffc7\uffca-\uffcf\uffd2-\uffd7\uffda-\uffdc\U00010000-\U0001000b\U0001000d-\U00010026\U00010028-\U0001003a\U0001003c-\U0001003d\U0001003f-\U0001004d\U00010050-\U0001005d\U00010080-\U000100fa\U00010140-\U00010174\U000101fd\U00010280-\U0001029c\U000102a0-\U000102d0\U000102e0\U00010300-\U0001031f\U0001032d-\U0001034a\U00010350-\U0001037a\U00010380-\U0001039d\U000103a0-\U000103c3\U000103c8-\U000103cf\U000103d1-\U000103d5\U00010400-\U0001049d\U000104a0-\U000104a9\U000104b0-\U000104d3\U000104d8-\U000104fb\U00010500-\U00010527\U00010530-\U00010563\U00010600-\U00010736\U00010740-\U00010755\U00010760-\U00010767\U00010800-\U00010805\U00010808\U0001080a-\U00010835\U00010837-\U00010838\U0001083c\U0001083f-\U00010855\U00010860-\U00010876\U00010880-\U0001089e\U000108e0-\U000108f2\U000108f4-\U000108f5\U00010900-\U00010915\U00010920-\U00010939\U00010980-\U000109b7\U000109be-\U000109bf\U00010a00-\U00010a03\U00010a05-\U00010a06\U00010a0c-\U00010a13\U00010a15-\U00010a17\U00010a19-\U00010a35\U00010a38-\U00010a3a\U00010a3f\U00010a60-\U00010a7c\U00010a80-\U00010a9c\U00010ac0-\U00010ac7\U00010ac9-\U00010ae6\U00010b00-\U00010b35\U00010b40-\U00010b55\U00010b60-\U00010b72\U00010b80-\U00010b91\U00010c00-\U00010c48\U00010c80-\U00010cb2\U00010cc0-\U00010cf2\U00010d00-\U00010d27\U00010d30-\U00010d39\U00010f00-\U00010f1c\U00010f27\U00010f30-\U00010f50\U00011000-\U00011046\U00011066-\U0001106f\U0001107f-\U000110ba\U000110d0-\U000110e8\U000110f0-\U000110f9\U00011100-\U00011134\U00011136-\U0001113f\U00011144-\U00011146\U00011150-\U00011173\U00011176\U00011180-\U000111c4\U000111c9-\U000111cc\U000111d0-\U000111da\U000111dc\U00011200-\U00011211\U00011213-\U00011237\U0001123e\U00011280-\U00011286\U00011288\U0001128a-\U0001128d\U0001128f-\U0001129d\U0001129f-\U000112a8\U000112b0-\U000112ea\U000112f0-\U000112f9\U00011300-\U00011303\U00011305-\U0001130c\U0001130f-\U00011310\U00011313-\U00011328\U0001132a-\U00011330\U00011332-\U00011333\U00011335-\U00011339\U0001133b-\U00011344\U00011347-\U0
0011348\U0001134b-\U0001134d\U00011350\U00011357\U0001135d-\U00011363\U00011366-\U0001136c\U00011370-\U00011374\U00011400-\U0001144a\U00011450-\U00011459\U0001145e\U00011480-\U000114c5\U000114c7\U000114d0-\U000114d9\U00011580-\U000115b5\U000115b8-\U000115c0\U000115d8-\U000115dd\U00011600-\U00011640\U00011644\U00011650-\U00011659\U00011680-\U000116b7\U000116c0-\U000116c9\U00011700-\U0001171a\U0001171d-\U0001172b\U00011730-\U00011739\U00011800-\U0001183a\U000118a0-\U000118e9\U000118ff\U00011a00-\U00011a3e\U00011a47\U00011a50-\U00011a83\U00011a86-\U00011a99\U00011a9d\U00011ac0-\U00011af8\U00011c00-\U00011c08\U00011c0a-\U00011c36\U00011c38-\U00011c40\U00011c50-\U00011c59\U00011c72-\U00011c8f\U00011c92-\U00011ca7\U00011ca9-\U00011cb6\U00011d00-\U00011d06\U00011d08-\U00011d09\U00011d0b-\U00011d36\U00011d3a\U00011d3c-\U00011d3d\U00011d3f-\U00011d47\U00011d50-\U00011d59\U00011d60-\U00011d65\U00011d67-\U00011d68\U00011d6a-\U00011d8e\U00011d90-\U00011d91\U00011d93-\U00011d98\U00011da0-\U00011da9\U00011ee0-\U00011ef6\U00012000-\U00012399\U00012400-\U0001246e\U00012480-\U00012543\U00013000-\U0001342e\U00014400-\U00014646\U00016800-\U00016a38\U00016a40-\U00016a5e\U00016a60-\U00016a69\U00016ad0-\U00016aed\U00016af0-\U00016af4\U00016b00-\U00016b36\U00016b40-\U00016b43\U00016b50-\U00016b59\U00016b63-\U00016b77\U00016b7d-\U00016b8f\U00016e40-\U00016e7f\U00016f00-\U00016f44\U00016f50-\U00016f7e\U00016f8f-\U00016f9f\U00016fe0-\U00016fe1\U00017000-\U000187f1\U00018800-\U00018af2\U0001b000-\U0001b11e\U0001b170-\U0001b2fb\U0001bc00-\U0001bc6a\U0001bc70-\U0001bc7c\U0001bc80-\U0001bc88\U0001bc90-\U0001bc99\U0001bc9d-\U0001bc9e\U0001d165-\U0001d169\U0001d16d-\U0001d172\U0001d17b-\U0001d182\U0001d185-\U0001d18b\U0001d1aa-\U0001d1ad\U0001d242-\U0001d244\U0001d400-\U0001d454\U0001d456-\U0001d49c\U0001d49e-\U0001d49f\U0001d4a2\U0001d4a5-\U0001d4a6\U0001d4a9-\U0001d4ac\U0001d4ae-\U0001d4b9\U0001d4bb\U0001d4bd-\U0001d4c3\U0001d4c5-\U0001d505\U0001d507-\U0001d50a\U0001d50d-\U0001d514\U0001d516-\U0001d51c\U0001d51e-\U0001d539\U0001d53b-\U0001d53e\U0001d540-\U0001d544\U0001d546\U0001d54a-\U0001d550\U0001d552-\U0001d6a5\U0001d6a8-\U0001d6c0\U0001d6c2-\U0001d6da\U0001d6dc-\U0001d6fa\U0001d6fc-\U0001d714\U0001d716-\U0001d734\U0001d736-\U0001d74e\U0001d750-\U0001d76e\U0001d770-\U0001d788\U0001d78a-\U0001d7a8\U0001d7aa-\U0001d7c2\U0001d7c4-\U0001d7cb\U0001d7ce-\U0001d7ff\U0001da00-\U0001da36\U0001da3b-\U0001da6c\U0001da75\U0001da84\U0001da9b-\U0001da9f\U0001daa1-\U0001daaf\U0001e000-\U0001e006\U0001e008-\U0001e018\U0001e01b-\U0001e021\U0001e023-\U0001e024\U0001e026-\U0001e02a\U0001e800-\U0001e8c4\U0001e8d0-\U0001e8d6\U0001e900-\U0001e94a\U0001e950-\U0001e959\U0001ee00-\U0001ee03\U0001ee05-\U0001ee1f\U0001ee21-\U0001ee22\U0001ee24\U0001ee27\U0001ee29-\U0001ee32\U0001ee34-\U0001ee37\U0001ee39\U0001ee3b\U0001ee42\U0001ee47\U0001ee49\U0001ee4b\U0001ee4d-\U0001ee4f\U0001ee51-\U0001ee52\U0001ee54\U0001ee57\U0001ee59\U0001ee5b\U0001ee5d\U0001ee5f\U0001ee61-\U0001ee62\U0001ee64\U0001ee67-\U0001ee6a\U0001ee6c-\U0001ee72\U0001ee74-\U0001ee77\U0001ee79-\U0001ee7c\U0001ee7e\U0001ee80-\U0001ee89\U0001ee8b-\U0001ee9b\U0001eea1-\U0001eea3\U0001eea5-\U0001eea9\U0001eeab-\U0001eebb\U00020000-\U0002a6d6\U0002a700-\U0002b734\U0002b740-\U0002b81d\U0002b820-\U0002cea1\U0002ceb0-\U0002ebe0\U0002f800-\U0002fa1d\U000e0100-\U000e01ef' - -xid_start = 
'A-Z_a-z\xaa\xb5\xba\xc0-\xd6\xd8-\xf6\xf8-\u02c1\u02c6-\u02d1\u02e0-\u02e4\u02ec\u02ee\u0370-\u0374\u0376-\u0377\u037b-\u037d\u037f\u0386\u0388-\u038a\u038c\u038e-\u03a1\u03a3-\u03f5\u03f7-\u0481\u048a-\u052f\u0531-\u0556\u0559\u0560-\u0588\u05d0-\u05ea\u05ef-\u05f2\u0620-\u064a\u066e-\u066f\u0671-\u06d3\u06d5\u06e5-\u06e6\u06ee-\u06ef\u06fa-\u06fc\u06ff\u0710\u0712-\u072f\u074d-\u07a5\u07b1\u07ca-\u07ea\u07f4-\u07f5\u07fa\u0800-\u0815\u081a\u0824\u0828\u0840-\u0858\u0860-\u086a\u08a0-\u08b4\u08b6-\u08bd\u0904-\u0939\u093d\u0950\u0958-\u0961\u0971-\u0980\u0985-\u098c\u098f-\u0990\u0993-\u09a8\u09aa-\u09b0\u09b2\u09b6-\u09b9\u09bd\u09ce\u09dc-\u09dd\u09df-\u09e1\u09f0-\u09f1\u09fc\u0a05-\u0a0a\u0a0f-\u0a10\u0a13-\u0a28\u0a2a-\u0a30\u0a32-\u0a33\u0a35-\u0a36\u0a38-\u0a39\u0a59-\u0a5c\u0a5e\u0a72-\u0a74\u0a85-\u0a8d\u0a8f-\u0a91\u0a93-\u0aa8\u0aaa-\u0ab0\u0ab2-\u0ab3\u0ab5-\u0ab9\u0abd\u0ad0\u0ae0-\u0ae1\u0af9\u0b05-\u0b0c\u0b0f-\u0b10\u0b13-\u0b28\u0b2a-\u0b30\u0b32-\u0b33\u0b35-\u0b39\u0b3d\u0b5c-\u0b5d\u0b5f-\u0b61\u0b71\u0b83\u0b85-\u0b8a\u0b8e-\u0b90\u0b92-\u0b95\u0b99-\u0b9a\u0b9c\u0b9e-\u0b9f\u0ba3-\u0ba4\u0ba8-\u0baa\u0bae-\u0bb9\u0bd0\u0c05-\u0c0c\u0c0e-\u0c10\u0c12-\u0c28\u0c2a-\u0c39\u0c3d\u0c58-\u0c5a\u0c60-\u0c61\u0c80\u0c85-\u0c8c\u0c8e-\u0c90\u0c92-\u0ca8\u0caa-\u0cb3\u0cb5-\u0cb9\u0cbd\u0cde\u0ce0-\u0ce1\u0cf1-\u0cf2\u0d05-\u0d0c\u0d0e-\u0d10\u0d12-\u0d3a\u0d3d\u0d4e\u0d54-\u0d56\u0d5f-\u0d61\u0d7a-\u0d7f\u0d85-\u0d96\u0d9a-\u0db1\u0db3-\u0dbb\u0dbd\u0dc0-\u0dc6\u0e01-\u0e30\u0e32\u0e40-\u0e46\u0e81-\u0e82\u0e84\u0e87-\u0e88\u0e8a\u0e8d\u0e94-\u0e97\u0e99-\u0e9f\u0ea1-\u0ea3\u0ea5\u0ea7\u0eaa-\u0eab\u0ead-\u0eb0\u0eb2\u0ebd\u0ec0-\u0ec4\u0ec6\u0edc-\u0edf\u0f00\u0f40-\u0f47\u0f49-\u0f6c\u0f88-\u0f8c\u1000-\u102a\u103f\u1050-\u1055\u105a-\u105d\u1061\u1065-\u1066\u106e-\u1070\u1075-\u1081\u108e\u10a0-\u10c5\u10c7\u10cd\u10d0-\u10fa\u10fc-\u1248\u124a-\u124d\u1250-\u1256\u1258\u125a-\u125d\u1260-\u1288\u128a-\u128d\u1290-\u12b0\u12b2-\u12b5\u12b8-\u12be\u12c0\u12c2-\u12c5\u12c8-\u12d6\u12d8-\u1310\u1312-\u1315\u1318-\u135a\u1380-\u138f\u13a0-\u13f5\u13f8-\u13fd\u1401-\u166c\u166f-\u167f\u1681-\u169a\u16a0-\u16ea\u16ee-\u16f8\u1700-\u170c\u170e-\u1711\u1720-\u1731\u1740-\u1751\u1760-\u176c\u176e-\u1770\u1780-\u17b3\u17d7\u17dc\u1820-\u1878\u1880-\u18a8\u18aa\u18b0-\u18f5\u1900-\u191e\u1950-\u196d\u1970-\u1974\u1980-\u19ab\u19b0-\u19c9\u1a00-\u1a16\u1a20-\u1a54\u1aa7\u1b05-\u1b33\u1b45-\u1b4b\u1b83-\u1ba0\u1bae-\u1baf\u1bba-\u1be5\u1c00-\u1c23\u1c4d-\u1c4f\u1c5a-\u1c7d\u1c80-\u1c88\u1c90-\u1cba\u1cbd-\u1cbf\u1ce9-\u1cec\u1cee-\u1cf1\u1cf5-\u1cf6\u1d00-\u1dbf\u1e00-\u1f15\u1f18-\u1f1d\u1f20-\u1f45\u1f48-\u1f4d\u1f50-\u1f57\u1f59\u1f5b\u1f5d\u1f5f-\u1f7d\u1f80-\u1fb4\u1fb6-\u1fbc\u1fbe\u1fc2-\u1fc4\u1fc6-\u1fcc\u1fd0-\u1fd3\u1fd6-\u1fdb\u1fe0-\u1fec\u1ff2-\u1ff4\u1ff6-\u1ffc\u2071\u207f\u2090-\u209c\u2102\u2107\u210a-\u2113\u2115\u2118-\u211d\u2124\u2126\u2128\u212a-\u2139\u213c-\u213f\u2145-\u2149\u214e\u2160-\u2188\u2c00-\u2c2e\u2c30-\u2c5e\u2c60-\u2ce4\u2ceb-\u2cee\u2cf2-\u2cf3\u2d00-\u2d25\u2d27\u2d2d\u2d30-\u2d67\u2d6f\u2d80-\u2d96\u2da0-\u2da6\u2da8-\u2dae\u2db0-\u2db6\u2db8-\u2dbe\u2dc0-\u2dc6\u2dc8-\u2dce\u2dd0-\u2dd6\u2dd8-\u2dde\u3005-\u3007\u3021-\u3029\u3031-\u3035\u3038-\u303c\u3041-\u3096\u309d-\u309f\u30a1-\u30fa\u30fc-\u30ff\u3105-\u312f\u3131-\u318e\u31a0-\u31ba\u31f0-\u31ff\u3400-\u4db5\u4e00-\u9fef\ua000-\ua48c\ua4d0-\ua4fd\ua500-\ua60c\ua610-\ua61f\ua62a-\ua62b\ua640-\ua66e\ua67f-\ua69d\ua6a0-\ua6ef\ua717-\ua71f\ua722-\ua788\ua78b-\ua7b9\ua7f7-\ua801\ua803-\ua805
\ua807-\ua80a\ua80c-\ua822\ua840-\ua873\ua882-\ua8b3\ua8f2-\ua8f7\ua8fb\ua8fd-\ua8fe\ua90a-\ua925\ua930-\ua946\ua960-\ua97c\ua984-\ua9b2\ua9cf\ua9e0-\ua9e4\ua9e6-\ua9ef\ua9fa-\ua9fe\uaa00-\uaa28\uaa40-\uaa42\uaa44-\uaa4b\uaa60-\uaa76\uaa7a\uaa7e-\uaaaf\uaab1\uaab5-\uaab6\uaab9-\uaabd\uaac0\uaac2\uaadb-\uaadd\uaae0-\uaaea\uaaf2-\uaaf4\uab01-\uab06\uab09-\uab0e\uab11-\uab16\uab20-\uab26\uab28-\uab2e\uab30-\uab5a\uab5c-\uab65\uab70-\uabe2\uac00-\ud7a3\ud7b0-\ud7c6\ud7cb-\ud7fb\uf900-\ufa6d\ufa70-\ufad9\ufb00-\ufb06\ufb13-\ufb17\ufb1d\ufb1f-\ufb28\ufb2a-\ufb36\ufb38-\ufb3c\ufb3e\ufb40-\ufb41\ufb43-\ufb44\ufb46-\ufbb1\ufbd3-\ufc5d\ufc64-\ufd3d\ufd50-\ufd8f\ufd92-\ufdc7\ufdf0-\ufdf9\ufe71\ufe73\ufe77\ufe79\ufe7b\ufe7d\ufe7f-\ufefc\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d\uffa0-\uffbe\uffc2-\uffc7\uffca-\uffcf\uffd2-\uffd7\uffda-\uffdc\U00010000-\U0001000b\U0001000d-\U00010026\U00010028-\U0001003a\U0001003c-\U0001003d\U0001003f-\U0001004d\U00010050-\U0001005d\U00010080-\U000100fa\U00010140-\U00010174\U00010280-\U0001029c\U000102a0-\U000102d0\U00010300-\U0001031f\U0001032d-\U0001034a\U00010350-\U00010375\U00010380-\U0001039d\U000103a0-\U000103c3\U000103c8-\U000103cf\U000103d1-\U000103d5\U00010400-\U0001049d\U000104b0-\U000104d3\U000104d8-\U000104fb\U00010500-\U00010527\U00010530-\U00010563\U00010600-\U00010736\U00010740-\U00010755\U00010760-\U00010767\U00010800-\U00010805\U00010808\U0001080a-\U00010835\U00010837-\U00010838\U0001083c\U0001083f-\U00010855\U00010860-\U00010876\U00010880-\U0001089e\U000108e0-\U000108f2\U000108f4-\U000108f5\U00010900-\U00010915\U00010920-\U00010939\U00010980-\U000109b7\U000109be-\U000109bf\U00010a00\U00010a10-\U00010a13\U00010a15-\U00010a17\U00010a19-\U00010a35\U00010a60-\U00010a7c\U00010a80-\U00010a9c\U00010ac0-\U00010ac7\U00010ac9-\U00010ae4\U00010b00-\U00010b35\U00010b40-\U00010b55\U00010b60-\U00010b72\U00010b80-\U00010b91\U00010c00-\U00010c48\U00010c80-\U00010cb2\U00010cc0-\U00010cf2\U00010d00-\U00010d23\U00010f00-\U00010f1c\U00010f27\U00010f30-\U00010f45\U00011003-\U00011037\U00011083-\U000110af\U000110d0-\U000110e8\U00011103-\U00011126\U00011144\U00011150-\U00011172\U00011176\U00011183-\U000111b2\U000111c1-\U000111c4\U000111da\U000111dc\U00011200-\U00011211\U00011213-\U0001122b\U00011280-\U00011286\U00011288\U0001128a-\U0001128d\U0001128f-\U0001129d\U0001129f-\U000112a8\U000112b0-\U000112de\U00011305-\U0001130c\U0001130f-\U00011310\U00011313-\U00011328\U0001132a-\U00011330\U00011332-\U00011333\U00011335-\U00011339\U0001133d\U00011350\U0001135d-\U00011361\U00011400-\U00011434\U00011447-\U0001144a\U00011480-\U000114af\U000114c4-\U000114c5\U000114c7\U00011580-\U000115ae\U000115d8-\U000115db\U00011600-\U0001162f\U00011644\U00011680-\U000116aa\U00011700-\U0001171a\U00011800-\U0001182b\U000118a0-\U000118df\U000118ff\U00011a00\U00011a0b-\U00011a32\U00011a3a\U00011a50\U00011a5c-\U00011a83\U00011a86-\U00011a89\U00011a9d\U00011ac0-\U00011af8\U00011c00-\U00011c08\U00011c0a-\U00011c2e\U00011c40\U00011c72-\U00011c8f\U00011d00-\U00011d06\U00011d08-\U00011d09\U00011d0b-\U00011d30\U00011d46\U00011d60-\U00011d65\U00011d67-\U00011d68\U00011d6a-\U00011d89\U00011d98\U00011ee0-\U00011ef2\U00012000-\U00012399\U00012400-\U0001246e\U00012480-\U00012543\U00013000-\U0001342e\U00014400-\U00014646\U00016800-\U00016a38\U00016a40-\U00016a5e\U00016ad0-\U00016aed\U00016b00-\U00016b2f\U00016b40-\U00016b43\U00016b63-\U00016b77\U00016b7d-\U00016b8f\U00016e40-\U00016e7f\U00016f00-\U00016f44\U00016f50\U00016f93-\U00016f9f\U00016fe0-\U00016fe1\U00017000-\U000187f1\U00018800-\U00018af2\U0001b000-\U0001b1
1e\U0001b170-\U0001b2fb\U0001bc00-\U0001bc6a\U0001bc70-\U0001bc7c\U0001bc80-\U0001bc88\U0001bc90-\U0001bc99\U0001d400-\U0001d454\U0001d456-\U0001d49c\U0001d49e-\U0001d49f\U0001d4a2\U0001d4a5-\U0001d4a6\U0001d4a9-\U0001d4ac\U0001d4ae-\U0001d4b9\U0001d4bb\U0001d4bd-\U0001d4c3\U0001d4c5-\U0001d505\U0001d507-\U0001d50a\U0001d50d-\U0001d514\U0001d516-\U0001d51c\U0001d51e-\U0001d539\U0001d53b-\U0001d53e\U0001d540-\U0001d544\U0001d546\U0001d54a-\U0001d550\U0001d552-\U0001d6a5\U0001d6a8-\U0001d6c0\U0001d6c2-\U0001d6da\U0001d6dc-\U0001d6fa\U0001d6fc-\U0001d714\U0001d716-\U0001d734\U0001d736-\U0001d74e\U0001d750-\U0001d76e\U0001d770-\U0001d788\U0001d78a-\U0001d7a8\U0001d7aa-\U0001d7c2\U0001d7c4-\U0001d7cb\U0001e800-\U0001e8c4\U0001e900-\U0001e943\U0001ee00-\U0001ee03\U0001ee05-\U0001ee1f\U0001ee21-\U0001ee22\U0001ee24\U0001ee27\U0001ee29-\U0001ee32\U0001ee34-\U0001ee37\U0001ee39\U0001ee3b\U0001ee42\U0001ee47\U0001ee49\U0001ee4b\U0001ee4d-\U0001ee4f\U0001ee51-\U0001ee52\U0001ee54\U0001ee57\U0001ee59\U0001ee5b\U0001ee5d\U0001ee5f\U0001ee61-\U0001ee62\U0001ee64\U0001ee67-\U0001ee6a\U0001ee6c-\U0001ee72\U0001ee74-\U0001ee77\U0001ee79-\U0001ee7c\U0001ee7e\U0001ee80-\U0001ee89\U0001ee8b-\U0001ee9b\U0001eea1-\U0001eea3\U0001eea5-\U0001eea9\U0001eeab-\U0001eebb\U00020000-\U0002a6d6\U0002a700-\U0002b734\U0002b740-\U0002b81d\U0002b820-\U0002cea1\U0002ceb0-\U0002ebe0\U0002f800-\U0002fa1d' - -cats = ['Cc', 'Cf', 'Cn', 'Co', 'Cs', 'Ll', 'Lm', 'Lo', 'Lt', 'Lu', 'Mc', 'Me', 'Mn', 'Nd', 'Nl', 'No', 'Pc', 'Pd', 'Pe', 'Pf', 'Pi', 'Po', 'Ps', 'Sc', 'Sk', 'Sm', 'So', 'Zl', 'Zp', 'Zs'] - -# Generated from unidata 11.0.0 - -def combine(*args): - return ''.join(globals()[cat] for cat in args) - - -def allexcept(*args): - newcats = cats[:] - for arg in args: - newcats.remove(arg) - return ''.join(globals()[cat] for cat in newcats) - - -def _handle_runs(char_list): # pragma: no cover - buf = [] - for c in char_list: - if len(c) == 1: - if buf and buf[-1][1] == chr(ord(c)-1): - buf[-1] = (buf[-1][0], c) - else: - buf.append((c, c)) - else: - buf.append((c, c)) - for a, b in buf: - if a == b: - yield a - else: - yield '%s-%s' % (a, b) - - -if __name__ == '__main__': # pragma: no cover - import unicodedata - - categories = {'xid_start': [], 'xid_continue': []} - - with open(__file__) as fp: - content = fp.read() - - header = content[:content.find('Cc =')] - footer = content[content.find("def combine("):] - - for code in range(0x110000): - c = chr(code) - cat = unicodedata.category(c) - if ord(c) == 0xdc00: - # Hack to avoid combining this combining with the preceeding high - # surrogate, 0xdbff, when doing a repr. - c = '\\' + c - elif ord(c) in (0x2d, 0x5b, 0x5c, 0x5d, 0x5e): - # Escape regex metachars. - c = '\\' + c - categories.setdefault(cat, []).append(c) - # XID_START and XID_CONTINUE are special categories used for matching - # identifiers in Python 3. 
- if c.isidentifier(): - categories['xid_start'].append(c) - if ('a' + c).isidentifier(): - categories['xid_continue'].append(c) - - with open(__file__, 'w') as fp: - fp.write(header) - - for cat in sorted(categories): - val = ''.join(_handle_runs(categories[cat])) - fp.write('%s = %a\n\n' % (cat, val)) - - cats = sorted(categories) - cats.remove('xid_start') - cats.remove('xid_continue') - fp.write('cats = %r\n\n' % cats) - - fp.write('# Generated from unidata %s\n\n' % (unicodedata.unidata_version,)) - - fp.write(footer) diff --git a/spaces/alexrods/Smartcity-Traffic-Detection/README.md b/spaces/alexrods/Smartcity-Traffic-Detection/README.md deleted file mode 100644 index 20946cf52a2729614b7300b5d16f4af19f20fd6a..0000000000000000000000000000000000000000 --- a/spaces/alexrods/Smartcity-Traffic-Detection/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Smartcity Traffic Detection -emoji: 🦀 -colorFrom: purple -colorTo: red -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/aliabd/SummerTime/model/single_doc/textrank_model.py b/spaces/aliabd/SummerTime/model/single_doc/textrank_model.py deleted file mode 100644 index 233d57559d1db67ece3a7ba27a63b94b5a78a954..0000000000000000000000000000000000000000 --- a/spaces/aliabd/SummerTime/model/single_doc/textrank_model.py +++ /dev/null @@ -1,89 +0,0 @@ -import spacy -import pytextrank # noqa: F401 -from math import sqrt -from operator import itemgetter -from .base_single_doc_model import SingleDocSummModel -from typing import Union, List - - -class TextRankModel(SingleDocSummModel): - # static variables - model_name = "TextRank" - is_extractive = True - is_neural = False - - def __init__(self, num_sentences=1): - super(TextRankModel, self).__init__() - - self.num_sentences = num_sentences - # load a spaCy model, depending on language, scale, etc. 
- self.nlp = spacy.load("en_core_web_sm") - self.nlp.add_pipe("textrank", last=True) - - def summarize( - self, corpus: Union[List[str], List[List[str]]], queries: List[str] = None - ) -> List[str]: - self.assert_summ_input_type(corpus, queries) - - return list(map(lambda x: " ".join(self.summarize_single(x)), corpus)) - - def summarize_single(self, corpus) -> List[str]: - # add PyTextRank to the spaCy pipeline - doc = self.nlp(corpus) - sent_bounds = [[s.start, s.end, set([])] for s in doc.sents] - - limit_phrases = self.num_sentences - phrase_id = 0 - unit_vector = [] - for p in doc._.phrases: - unit_vector.append(p.rank) - for chunk in p.chunks: - for sent_start, sent_end, sent_vector in sent_bounds: - if chunk.start >= sent_start and chunk.end <= sent_end: - sent_vector.add(phrase_id) - break - phrase_id += 1 - if phrase_id == limit_phrases: - break - - sum_ranks = sum(unit_vector) - - unit_vector = [rank / sum_ranks for rank in unit_vector] - - sent_rank = {} - sent_id = 0 - for sent_start, sent_end, sent_vector in sent_bounds: - sum_sq = 0.0 - for phrase_id in range(len(unit_vector)): - if phrase_id not in sent_vector: - sum_sq += unit_vector[phrase_id] ** 2.0 - sent_rank[sent_id] = sqrt(sum_sq) - sent_id += 1 - - sorted(sent_rank.items(), key=itemgetter(1)) - - sent_text = {} - sent_id = 0 - limit_sentences = self.num_sentences - summary_sentences = [] - for sent in doc.sents: - sent_text[sent_id] = sent.text - sent_id += 1 - num_sent = 0 - for sent_id, rank in sorted(sent_rank.items(), key=itemgetter(1)): - summary_sentences.append(sent_text[sent_id]) - num_sent += 1 - if num_sent == limit_sentences: - break - - return summary_sentences - - @classmethod - def show_capability(cls): - basic_description = cls.generate_basic_description() - more_details = ( - "A graphbased ranking model for text processing. Extractive sentence summarization. \n " - "Strengths: \n - Fast with low memory usage \n - Allows for control of summary length \n " - "Weaknesses: \n - Not as accurate as neural methods." 
- ) - print(f"{basic_description} \n {'#'*20} \n {more_details}") diff --git a/spaces/aliabd/SummerTime/tests/__init__.py b/spaces/aliabd/SummerTime/tests/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/allknowingroger/Image-Models-Test78/README.md b/spaces/allknowingroger/Image-Models-Test78/README.md deleted file mode 100644 index bce7a68e9154f8c9347b3bb6443b8de1fdd0599e..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test78/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: true -duplicated_from: allknowingroger/Image-Models-Test77 ---- - - \ No newline at end of file diff --git a/spaces/amankishore/sjc/adapt_ncsn.py b/spaces/amankishore/sjc/adapt_ncsn.py deleted file mode 100644 index 9a3cfda3160a27aa42667b7390a95bd111f134dd..0000000000000000000000000000000000000000 --- a/spaces/amankishore/sjc/adapt_ncsn.py +++ /dev/null @@ -1,101 +0,0 @@ -from pathlib import Path -import argparse -import yaml - -import numpy as np -import torch - -from ncsn.ncsnv2 import NCSNv2, NCSNv2Deeper, NCSNv2Deepest, get_sigmas -from ncsn.ema import EMAHelper - -from adapt import ScoreAdapter - -device = torch.device("cuda") - - -def get_model(config): - if config.data.dataset == 'CIFAR10' or config.data.dataset == 'CELEBA': - return NCSNv2(config).to(config.device) - elif config.data.dataset == "FFHQ": - return NCSNv2Deepest(config).to(config.device) - elif config.data.dataset == 'LSUN': - return NCSNv2Deeper(config).to(config.device) - - -def dict2namespace(config): - namespace = argparse.Namespace() - for key, value in config.items(): - if isinstance(value, dict): - new_value = dict2namespace(value) - else: - new_value = value - setattr(namespace, key, new_value) - return namespace - - -class NCSN(ScoreAdapter): - def __init__(self): - config_fname = Path(__file__).resolve().parent / "ncsn" / "bedroom.yml" - with config_fname.open("r") as f: - config = yaml.safe_load(f) - config = dict2namespace(config) - - config.device = device - - states = torch.load( - self.checkpoint_root() / "ncsn/exp/logs/bedroom/checkpoint_150000.pth" - ) - - model = get_model(config) - model = torch.nn.DataParallel(model) - model.load_state_dict(states[0], strict=True) - - if config.model.ema: - ema_helper = EMAHelper(mu=config.model.ema_rate) - ema_helper.register(model) - ema_helper.load_state_dict(states[-1]) - # HC: update the model param with history ema. - # if don't do this the colors of images become strangely saturated. - # this is reported in the paper. 
- ema_helper.ema(model) - - model = model.module # remove DataParallel - model.eval() - self.model = model - self._data_shape = (3, config.data.image_size, config.data.image_size) - - self.σs = model.sigmas.cpu().numpy() - self._device = device - - def data_shape(self): - return self._data_shape - - def samps_centered(self): - return False - - @property - def σ_max(self): - return self.σs[0] - - @property - def σ_min(self): - return self.σs[-1] - - @torch.no_grad() - def denoise(self, xs, σ): - σ, j = self.snap_t_to_nearest_tick(σ) - N = xs.shape[0] - cond_t = torch.tensor([j] * N, dtype=torch.long, device=self.device) - score = self.model(xs, cond_t) - Ds = xs + score * (σ ** 2) - return Ds - - def unet_is_cond(self): - return False - - def use_cls_guidance(self): - return False - - def snap_t_to_nearest_tick(self, t): - j = np.abs(t - self.σs).argmin() - return self.σs[j], j diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/src/common/pa_hostapi.h b/spaces/amarchheda/ChordDuplicate/portaudio/src/common/pa_hostapi.h deleted file mode 100644 index 4ac3ab60e9299f32e5ec78912fc199d6cfdfbdf3..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/src/common/pa_hostapi.h +++ /dev/null @@ -1,362 +0,0 @@ -#ifndef PA_HOSTAPI_H -#define PA_HOSTAPI_H -/* - * $Id$ - * Portable Audio I/O Library - * host api representation - * - * Based on the Open Source API proposed by Ross Bencina - * Copyright (c) 1999-2008 Ross Bencina, Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -/** @file - @ingroup common_src - - @brief Interfaces and representation structures used by pa_front.c - to manage and communicate with host API implementations. -*/ - -#include "portaudio.h" - -/** -The PA_NO_* host API macros are now deprecated in favor of PA_USE_* macros. -PA_USE_* indicates whether a particular host API will be initialized by PortAudio. -An undefined or 0 value indicates that the host API will not be used. A value of 1 -indicates that the host API will be used. 
PA_USE_* macros should be left undefined -or defined to either 0 or 1. - -The code below ensures that PA_USE_* macros are always defined and have value -0 or 1. Undefined symbols are defaulted to 0. Symbols that are neither 0 nor 1 -are defaulted to 1. -*/ - -#ifndef PA_USE_SKELETON -#define PA_USE_SKELETON 0 -#elif (PA_USE_SKELETON != 0) && (PA_USE_SKELETON != 1) -#undef PA_USE_SKELETON -#define PA_USE_SKELETON 1 -#endif - -#if defined(PA_NO_ASIO) || defined(PA_NO_DS) || defined(PA_NO_WMME) || defined(PA_NO_WASAPI) || defined(PA_NO_WDMKS) -#error "Portaudio: PA_NO_ is no longer supported, please remove definition and use PA_USE_ instead" -#endif - -#ifndef PA_USE_ASIO -#define PA_USE_ASIO 0 -#elif (PA_USE_ASIO != 0) && (PA_USE_ASIO != 1) -#undef PA_USE_ASIO -#define PA_USE_ASIO 1 -#endif - -#ifndef PA_USE_DS -#define PA_USE_DS 0 -#elif (PA_USE_DS != 0) && (PA_USE_DS != 1) -#undef PA_USE_DS -#define PA_USE_DS 1 -#endif - -#ifndef PA_USE_WMME -#define PA_USE_WMME 0 -#elif (PA_USE_WMME != 0) && (PA_USE_WMME != 1) -#undef PA_USE_WMME -#define PA_USE_WMME 1 -#endif - -#ifndef PA_USE_WASAPI -#define PA_USE_WASAPI 0 -#elif (PA_USE_WASAPI != 0) && (PA_USE_WASAPI != 1) -#undef PA_USE_WASAPI -#define PA_USE_WASAPI 1 -#endif - -#ifndef PA_USE_WDMKS -#define PA_USE_WDMKS 0 -#elif (PA_USE_WDMKS != 0) && (PA_USE_WDMKS != 1) -#undef PA_USE_WDMKS -#define PA_USE_WDMKS 1 -#endif - -/* Set default values for Unix based APIs. */ -#if defined(PA_NO_OSS) || defined(PA_NO_ALSA) || defined(PA_NO_JACK) || defined(PA_NO_COREAUDIO) || defined(PA_NO_SGI) || defined(PA_NO_ASIHPI) -#error "Portaudio: PA_NO_ is no longer supported, please remove definition and use PA_USE_ instead" -#endif - -#ifndef PA_USE_OSS -#define PA_USE_OSS 0 -#elif (PA_USE_OSS != 0) && (PA_USE_OSS != 1) -#undef PA_USE_OSS -#define PA_USE_OSS 1 -#endif - -#ifndef PA_USE_ALSA -#define PA_USE_ALSA 0 -#elif (PA_USE_ALSA != 0) && (PA_USE_ALSA != 1) -#undef PA_USE_ALSA -#define PA_USE_ALSA 1 -#endif - -#ifndef PA_USE_JACK -#define PA_USE_JACK 0 -#elif (PA_USE_JACK != 0) && (PA_USE_JACK != 1) -#undef PA_USE_JACK -#define PA_USE_JACK 1 -#endif - -#ifndef PA_USE_SGI -#define PA_USE_SGI 0 -#elif (PA_USE_SGI != 0) && (PA_USE_SGI != 1) -#undef PA_USE_SGI -#define PA_USE_SGI 1 -#endif - -#ifndef PA_USE_COREAUDIO -#define PA_USE_COREAUDIO 0 -#elif (PA_USE_COREAUDIO != 0) && (PA_USE_COREAUDIO != 1) -#undef PA_USE_COREAUDIO -#define PA_USE_COREAUDIO 1 -#endif - -#ifndef PA_USE_ASIHPI -#define PA_USE_ASIHPI 0 -#elif (PA_USE_ASIHPI != 0) && (PA_USE_ASIHPI != 1) -#undef PA_USE_ASIHPI -#define PA_USE_ASIHPI 1 -#endif - -#ifdef __cplusplus -extern "C" -{ -#endif /* __cplusplus */ - - -/** **FOR THE USE OF pa_front.c ONLY** - Do NOT use fields in this structure, they my change at any time. - Use functions defined in pa_util.h if you think you need functionality - which can be derived from here. -*/ -typedef struct PaUtilPrivatePaFrontHostApiInfo { - - - unsigned long baseDeviceIndex; -}PaUtilPrivatePaFrontHostApiInfo; - - -/** The common header for all data structures whose pointers are passed through - the hostApiSpecificStreamInfo field of the PaStreamParameters structure. - Note that in order to keep the public PortAudio interface clean, this structure - is not used explicitly when declaring hostApiSpecificStreamInfo data structures. - However, some code in pa_front depends on the first 3 members being equivalent - with this structure. 
- @see PaStreamParameters -*/ -typedef struct PaUtilHostApiSpecificStreamInfoHeader -{ - unsigned long size; /**< size of whole structure including this header */ - PaHostApiTypeId hostApiType; /**< host API for which this data is intended */ - unsigned long version; /**< structure version */ -} PaUtilHostApiSpecificStreamInfoHeader; - - - -/** A structure representing the interface to a host API. Contains both - concrete data and pointers to functions which implement the interface. -*/ -typedef struct PaUtilHostApiRepresentation { - PaUtilPrivatePaFrontHostApiInfo privatePaFrontInfo; - - /** The host api implementation should populate the info field. In the - case of info.defaultInputDevice and info.defaultOutputDevice the - values stored should be 0 based indices within the host api's own - device index range (0 to deviceCount). These values will be converted - to global device indices by pa_front after PaUtilHostApiInitializer() - returns. - */ - PaHostApiInfo info; - - PaDeviceInfo** deviceInfos; - - /** - (*Terminate)() is guaranteed to be called with a valid - parameter, which was previously returned from the same implementation's - initializer. - */ - void (*Terminate)( struct PaUtilHostApiRepresentation *hostApi ); - - /** - The inputParameters and outputParameters pointers should not be saved - as they will not remain valid after OpenStream is called. - - - The following guarantees are made about parameters to (*OpenStream)(): - - [NOTE: the following list up to *END PA FRONT VALIDATIONS* should be - kept in sync with the one for ValidateOpenStreamParameters and - Pa_OpenStream in pa_front.c] - - PaHostApiRepresentation *hostApi - - is valid for this implementation - - PaStream** stream - - is non-null - - - at least one of inputParameters & outputParmeters is valid (not NULL) - - - if inputParameters & outputParmeters are both valid, that - inputParameters->device & outputParmeters->device both use the same host api - - PaDeviceIndex inputParameters->device - - is within range (0 to Pa_CountDevices-1) Or: - - is paUseHostApiSpecificDeviceSpecification and - inputParameters->hostApiSpecificStreamInfo is non-NULL and refers - to a valid host api - - int inputParameters->numChannels - - if inputParameters->device is not paUseHostApiSpecificDeviceSpecification, numInputChannels is > 0 - - upper bound is NOT validated against device capabilities - - PaSampleFormat inputParameters->sampleFormat - - is one of the sample formats defined in portaudio.h - - void *inputParameters->hostApiSpecificStreamInfo - - if supplied its hostApi field matches the input device's host Api - - PaDeviceIndex outputParmeters->device - - is within range (0 to Pa_CountDevices-1) - - int outputParmeters->numChannels - - if inputDevice is valid, numInputChannels is > 0 - - upper bound is NOT validated against device capabilities - - PaSampleFormat outputParmeters->sampleFormat - - is one of the sample formats defined in portaudio.h - - void *outputParmeters->hostApiSpecificStreamInfo - - if supplied its hostApi field matches the output device's host Api - - double sampleRate - - is not an 'absurd' rate (less than 1000. or greater than 384000.) 
- - sampleRate is NOT validated against device capabilities - - PaStreamFlags streamFlags - - unused platform neutral flags are zero - - paNeverDropInput is only used for full-duplex callback streams - with variable buffer size (paFramesPerBufferUnspecified) - - [*END PA FRONT VALIDATIONS*] - - - The following validations MUST be performed by (*OpenStream)(): - - - check that input device can support numInputChannels - - - check that input device can support inputSampleFormat, or that - we have the capability to convert from outputSampleFormat to - a native format - - - if inputStreamInfo is supplied, validate its contents, - or return an error if no inputStreamInfo is expected - - - check that output device can support numOutputChannels - - - check that output device can support outputSampleFormat, or that - we have the capability to convert from outputSampleFormat to - a native format - - - if outputStreamInfo is supplied, validate its contents, - or return an error if no outputStreamInfo is expected - - - if a full duplex stream is requested, check that the combination - of input and output parameters is supported - - - check that the device supports sampleRate - - - alter sampleRate to a close allowable rate if necessary - - - validate inputLatency and outputLatency - - - validate any platform specific flags, if flags are supplied they - must be valid. - */ - PaError (*OpenStream)( struct PaUtilHostApiRepresentation *hostApi, - PaStream** stream, - const PaStreamParameters *inputParameters, - const PaStreamParameters *outputParameters, - double sampleRate, - unsigned long framesPerCallback, - PaStreamFlags streamFlags, - PaStreamCallback *streamCallback, - void *userData ); - - - PaError (*IsFormatSupported)( struct PaUtilHostApiRepresentation *hostApi, - const PaStreamParameters *inputParameters, - const PaStreamParameters *outputParameters, - double sampleRate ); -} PaUtilHostApiRepresentation; - - -/** Prototype for the initialization function which must be implemented by every - host API. - - This function should only return an error other than paNoError if it encounters - an unexpected and fatal error (memory allocation error for example). In general, - there may be conditions under which it returns a NULL interface pointer and also - returns paNoError. For example, if the ASIO implementation detects that ASIO is - not installed, it should return a NULL interface, and paNoError. - - @see paHostApiInitializers -*/ -typedef PaError PaUtilHostApiInitializer( PaUtilHostApiRepresentation**, PaHostApiIndex ); - - -/** paHostApiInitializers is a NULL-terminated array of host API initialization - functions. These functions are called by pa_front.c to initialize the host APIs - when the client calls Pa_Initialize(). - - The initialization functions are invoked in order. - - The first successfully initialized host API that has a default input *or* output - device is used as the default PortAudio host API. This is based on the logic that - there is only one default host API, and it must contain the default input and output - devices (if defined). - - There is a platform specific file that defines paHostApiInitializers for that - platform, pa_win/pa_win_hostapis.c contains the Win32 definitions for example. 
-*/ -extern PaUtilHostApiInitializer *paHostApiInitializers[]; - - -#ifdef __cplusplus -} -#endif /* __cplusplus */ -#endif /* PA_HOSTAPI_H */ diff --git a/spaces/anilkumar-kanasani/cloths_order_bot/app.py b/spaces/anilkumar-kanasani/cloths_order_bot/app.py deleted file mode 100644 index 34303b02cc90905a86ce62e1701903545ff91dde..0000000000000000000000000000000000000000 --- a/spaces/anilkumar-kanasani/cloths_order_bot/app.py +++ /dev/null @@ -1,71 +0,0 @@ - -from utils import (submit_prompt_to_gpt, - check_password) - -system_content = { - "role": "system", - "content": """ -You are OrderBot, an automated service to collect orders for a Cloths selling retailer. \ -If the question is out of cloths order, just say that your question is out of scope of cloths order. \ -You first greet the customer, then collects the order, \ -and then asks if it's a pickup or delivery. \ -You wait to collect the entire order, then summarize it and check for a final \ -time if the customer wants to add anything else. \ -If it's a delivery, you ask for an address. \ -Finally you collect the payment.\ -Make sure to clarify all options, colors and sizes to uniquely \ -identify the item from the portfolio.\ -You respond in a short, very conversational friendly style. -At the end, ask them to press the Close Order button \ -The portfolio includes \ -Shirts: \ -T-shirt 12.95, 10.00, 7.00 \ -Polo Shirt 10.95, 9.25, 6.50 \ -Night Shirt 11.95, 9.75, 6.75 \ -Jean Jacket 11.95, 9.75, 6.75 \ -Hoodie 4.50, 3.50, 2.50\ -Pants: \ -Jean Pant 12.95, 10.00, 7.00 \ -Casual Pant 10.95, 9.25, 6.50 \ -Night Pant 11.95, 9.75, 6.75 \ -6-pocket Jean Pant 11.95, 9.75, 6.75 \ -Half pant 4.50, 3.50, 2.50\ -Accessories: \ -Ties 3.00, 2.00, 1.00 \ -Belts 3.00, 2.00, 1.00 \ -Watches 35.00 \ -""", -} - -import streamlit as st -if check_password(): - st.title("Cloths Retailer : Order Bot") - st.markdown("Welcome to our cloth selling service. How can I assist you today?") - st.markdown("Feel free to ask me any question to order cloths.") - # Initialize chat history - if "messages" not in st.session_state: - st.session_state.messages = [system_content] - - # Display chat messages from history on app rerun - for message in st.session_state.messages: - if message["role"] == "system": - pass - else: - with st.chat_message(message["role"]): - st.markdown(message["content"]) - - - # React to user input - if prompt := st.chat_input("Please type some thing here?"): - # Display user message in chat message container - with st.chat_message("user"): - st.markdown(prompt) - # Add user message to chat history - st.session_state.messages.append({"role": "user", "content": prompt}) - - response = submit_prompt_to_gpt(st.session_state.messages) - # Display assistant response in chat message container - with st.chat_message("assistant"): - st.markdown(response) - # Add assistant response to chat history - st.session_state.messages.append({"role": "assistant", "content": response}) \ No newline at end of file diff --git a/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/groundingdino/util/__init__.py b/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/groundingdino/util/__init__.py deleted file mode 100644 index 168f9979a4623806934b0ff1102ac166704e7dec..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/panoptic-segment-anything/GroundingDINO/groundingdino/util/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved diff --git a/spaces/aodianyun/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py b/spaces/aodianyun/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py deleted file mode 100644 index 62c44737f83bed6f42c3cc5155ba59eb0d63afcb..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/modules/textual_inversion/textual_inversion.py +++ /dev/null @@ -1,657 +0,0 @@ -import os -import sys -import traceback -import inspect -from collections import namedtuple - -import torch -import tqdm -import html -import datetime -import csv -import safetensors.torch - -import numpy as np -from PIL import Image, PngImagePlugin -from torch.utils.tensorboard import SummaryWriter - -from modules import shared, devices, sd_hijack, processing, sd_models, images, sd_samplers, sd_hijack_checkpoint -import modules.textual_inversion.dataset -from modules.textual_inversion.learn_schedule import LearnRateScheduler - -from modules.textual_inversion.image_embedding import embedding_to_b64, embedding_from_b64, insert_image_data_embed, extract_image_data_embed, caption_image_overlay -from modules.textual_inversion.logging import save_settings_to_file - - -TextualInversionTemplate = namedtuple("TextualInversionTemplate", ["name", "path"]) -textual_inversion_templates = {} - - -def list_textual_inversion_templates(): - textual_inversion_templates.clear() - - for root, dirs, fns in os.walk(shared.cmd_opts.textual_inversion_templates_dir): - for fn in fns: - path = os.path.join(root, fn) - - textual_inversion_templates[fn] = TextualInversionTemplate(fn, path) - - return textual_inversion_templates - - -class Embedding: - def __init__(self, vec, name, step=None): - self.vec = vec - self.name = name - self.step = step - self.shape = None - self.vectors = 0 - self.cached_checksum = None - self.sd_checkpoint = None - self.sd_checkpoint_name = None - self.optimizer_state_dict = None - self.filename = None - - def save(self, filename): - embedding_data = { - "string_to_token": {"*": 265}, - "string_to_param": {"*": self.vec}, - "name": self.name, - "step": self.step, - "sd_checkpoint": self.sd_checkpoint, - "sd_checkpoint_name": self.sd_checkpoint_name, - } - - torch.save(embedding_data, filename) - - if shared.opts.save_optimizer_state and self.optimizer_state_dict is not None: - optimizer_saved_dict = { - 'hash': self.checksum(), - 'optimizer_state_dict': self.optimizer_state_dict, - } - torch.save(optimizer_saved_dict, filename + '.optim') - - def checksum(self): - if self.cached_checksum is not None: - return self.cached_checksum - - def const_hash(a): - r = 0 - for v in a: - r = (r * 281 ^ int(v) * 997) & 0xFFFFFFFF - return r - - self.cached_checksum = f'{const_hash(self.vec.reshape(-1) * 100) & 0xffff:04x}' - return self.cached_checksum - - -class DirWithTextualInversionEmbeddings: - def __init__(self, path): - self.path = path - self.mtime = None - - def has_changed(self): - if not os.path.isdir(self.path): - return False - - mt = os.path.getmtime(self.path) - if self.mtime is None or mt > self.mtime: - return True - - def update(self): - if not os.path.isdir(self.path): - return - - self.mtime = os.path.getmtime(self.path) - - -class EmbeddingDatabase: - def __init__(self): - self.ids_lookup = {} - self.word_embeddings = {} - self.skipped_embeddings = {} - self.expected_shape = -1 - self.embedding_dirs = {} - self.previously_displayed_embeddings = () - - def add_embedding_dir(self, path): - self.embedding_dirs[path] = 
DirWithTextualInversionEmbeddings(path) - - def clear_embedding_dirs(self): - self.embedding_dirs.clear() - - def register_embedding(self, embedding, model): - self.word_embeddings[embedding.name] = embedding - - ids = model.cond_stage_model.tokenize([embedding.name])[0] - - first_id = ids[0] - if first_id not in self.ids_lookup: - self.ids_lookup[first_id] = [] - - self.ids_lookup[first_id] = sorted(self.ids_lookup[first_id] + [(ids, embedding)], key=lambda x: len(x[0]), reverse=True) - - return embedding - - def get_expected_shape(self): - vec = shared.sd_model.cond_stage_model.encode_embedding_init_text(",", 1) - return vec.shape[1] - - def load_from_file(self, path, filename): - name, ext = os.path.splitext(filename) - ext = ext.upper() - - if ext in ['.PNG', '.WEBP', '.JXL', '.AVIF']: - _, second_ext = os.path.splitext(name) - if second_ext.upper() == '.PREVIEW': - return - - embed_image = Image.open(path) - if hasattr(embed_image, 'text') and 'sd-ti-embedding' in embed_image.text: - data = embedding_from_b64(embed_image.text['sd-ti-embedding']) - name = data.get('name', name) - else: - data = extract_image_data_embed(embed_image) - name = data.get('name', name) - elif ext in ['.BIN', '.PT']: - data = torch.load(path, map_location="cpu") - elif ext in ['.SAFETENSORS']: - data = safetensors.torch.load_file(path, device="cpu") - else: - return - - # textual inversion embeddings - if 'string_to_param' in data: - param_dict = data['string_to_param'] - if hasattr(param_dict, '_parameters'): - param_dict = getattr(param_dict, '_parameters') # fix for torch 1.12.1 loading saved file from torch 1.11 - assert len(param_dict) == 1, 'embedding file has multiple terms in it' - emb = next(iter(param_dict.items()))[1] - # diffuser concepts - elif type(data) == dict and type(next(iter(data.values()))) == torch.Tensor: - assert len(data.keys()) == 1, 'embedding file has multiple terms in it' - - emb = next(iter(data.values())) - if len(emb.shape) == 1: - emb = emb.unsqueeze(0) - else: - raise Exception(f"Couldn't identify {filename} as neither textual inversion embedding nor diffuser concept.") - - vec = emb.detach().to(devices.device, dtype=torch.float32) - embedding = Embedding(vec, name) - embedding.step = data.get('step', None) - embedding.sd_checkpoint = data.get('sd_checkpoint', None) - embedding.sd_checkpoint_name = data.get('sd_checkpoint_name', None) - embedding.vectors = vec.shape[0] - embedding.shape = vec.shape[-1] - embedding.filename = path - - if self.expected_shape == -1 or self.expected_shape == embedding.shape: - self.register_embedding(embedding, shared.sd_model) - else: - self.skipped_embeddings[name] = embedding - - def load_from_dir(self, embdir): - if not os.path.isdir(embdir.path): - return - - for root, dirs, fns in os.walk(embdir.path, followlinks=True): - for fn in fns: - try: - fullfn = os.path.join(root, fn) - - if os.stat(fullfn).st_size == 0: - continue - - self.load_from_file(fullfn, fn) - except Exception: - print(f"Error loading embedding {fn}:", file=sys.stderr) - print(traceback.format_exc(), file=sys.stderr) - continue - - def load_textual_inversion_embeddings(self, force_reload=False): - if not force_reload: - need_reload = False - for path, embdir in self.embedding_dirs.items(): - if embdir.has_changed(): - need_reload = True - break - - if not need_reload: - return - - self.ids_lookup.clear() - self.word_embeddings.clear() - self.skipped_embeddings.clear() - self.expected_shape = self.get_expected_shape() - - for path, embdir in self.embedding_dirs.items(): - 
self.load_from_dir(embdir) - embdir.update() - - displayed_embeddings = (tuple(self.word_embeddings.keys()), tuple(self.skipped_embeddings.keys())) - if self.previously_displayed_embeddings != displayed_embeddings: - self.previously_displayed_embeddings = displayed_embeddings - print(f"Textual inversion embeddings loaded({len(self.word_embeddings)}): {', '.join(self.word_embeddings.keys())}") - if len(self.skipped_embeddings) > 0: - print(f"Textual inversion embeddings skipped({len(self.skipped_embeddings)}): {', '.join(self.skipped_embeddings.keys())}") - - def find_embedding_at_position(self, tokens, offset): - token = tokens[offset] - possible_matches = self.ids_lookup.get(token, None) - - if possible_matches is None: - return None, None - - for ids, embedding in possible_matches: - if tokens[offset:offset + len(ids)] == ids: - return embedding, len(ids) - - return None, None - - -def create_embedding(name, num_vectors_per_token, overwrite_old, init_text='*'): - cond_model = shared.sd_model.cond_stage_model - - with devices.autocast(): - cond_model([""]) # will send cond model to GPU if lowvram/medvram is active - - #cond_model expects at least some text, so we provide '*' as backup. - embedded = cond_model.encode_embedding_init_text(init_text or '*', num_vectors_per_token) - vec = torch.zeros((num_vectors_per_token, embedded.shape[1]), device=devices.device) - - #Only copy if we provided an init_text, otherwise keep vectors as zeros - if init_text: - for i in range(num_vectors_per_token): - vec[i] = embedded[i * int(embedded.shape[0]) // num_vectors_per_token] - - # Remove illegal characters from name. - name = "".join( x for x in name if (x.isalnum() or x in "._- ")) - fn = os.path.join(shared.cmd_opts.embeddings_dir, f"{name}.pt") - if not overwrite_old: - assert not os.path.exists(fn), f"file {fn} already exists" - - embedding = Embedding(vec, name) - embedding.step = 0 - embedding.save(fn) - - return fn - - -def write_loss(log_directory, filename, step, epoch_len, values): - if shared.opts.training_write_csv_every == 0: - return - - if step % shared.opts.training_write_csv_every != 0: - return - write_csv_header = False if os.path.exists(os.path.join(log_directory, filename)) else True - - with open(os.path.join(log_directory, filename), "a+", newline='') as fout: - csv_writer = csv.DictWriter(fout, fieldnames=["step", "epoch", "epoch_step", *(values.keys())]) - - if write_csv_header: - csv_writer.writeheader() - - epoch = (step - 1) // epoch_len - epoch_step = (step - 1) % epoch_len - - csv_writer.writerow({ - "step": step, - "epoch": epoch, - "epoch_step": epoch_step, - **values, - }) - -def tensorboard_setup(log_directory): - os.makedirs(os.path.join(log_directory, "tensorboard"), exist_ok=True) - return SummaryWriter( - log_dir=os.path.join(log_directory, "tensorboard"), - flush_secs=shared.opts.training_tensorboard_flush_every) - -def tensorboard_add(tensorboard_writer, loss, global_step, step, learn_rate, epoch_num): - tensorboard_add_scaler(tensorboard_writer, "Loss/train", loss, global_step) - tensorboard_add_scaler(tensorboard_writer, f"Loss/train/epoch-{epoch_num}", loss, step) - tensorboard_add_scaler(tensorboard_writer, "Learn rate/train", learn_rate, global_step) - tensorboard_add_scaler(tensorboard_writer, f"Learn rate/train/epoch-{epoch_num}", learn_rate, step) - -def tensorboard_add_scaler(tensorboard_writer, tag, value, step): - tensorboard_writer.add_scalar(tag=tag, - scalar_value=value, global_step=step) - -def tensorboard_add_image(tensorboard_writer, tag, 
pil_image, step): - # Convert a pil image to a torch tensor - img_tensor = torch.as_tensor(np.array(pil_image, copy=True)) - img_tensor = img_tensor.view(pil_image.size[1], pil_image.size[0], - len(pil_image.getbands())) - img_tensor = img_tensor.permute((2, 0, 1)) - - tensorboard_writer.add_image(tag, img_tensor, global_step=step) - -def validate_train_inputs(model_name, learn_rate, batch_size, gradient_step, data_root, template_file, template_filename, steps, save_model_every, create_image_every, log_directory, name="embedding"): - assert model_name, f"{name} not selected" - assert learn_rate, "Learning rate is empty or 0" - assert isinstance(batch_size, int), "Batch size must be integer" - assert batch_size > 0, "Batch size must be positive" - assert isinstance(gradient_step, int), "Gradient accumulation step must be integer" - assert gradient_step > 0, "Gradient accumulation step must be positive" - assert data_root, "Dataset directory is empty" - assert os.path.isdir(data_root), "Dataset directory doesn't exist" - assert os.listdir(data_root), "Dataset directory is empty" - assert template_filename, "Prompt template file not selected" - assert template_file, f"Prompt template file {template_filename} not found" - assert os.path.isfile(template_file.path), f"Prompt template file {template_filename} doesn't exist" - assert steps, "Max steps is empty or 0" - assert isinstance(steps, int), "Max steps must be integer" - assert steps > 0, "Max steps must be positive" - assert isinstance(save_model_every, int), "Save {name} must be integer" - assert save_model_every >= 0, "Save {name} must be positive or 0" - assert isinstance(create_image_every, int), "Create image must be integer" - assert create_image_every >= 0, "Create image must be positive or 0" - if save_model_every or create_image_every: - assert log_directory, "Log directory is empty" - - -def train_embedding(id_task, embedding_name, learn_rate, batch_size, gradient_step, data_root, log_directory, training_width, training_height, varsize, steps, clip_grad_mode, clip_grad_value, shuffle_tags, tag_drop_out, latent_sampling_method, use_weight, create_image_every, save_embedding_every, template_filename, save_image_with_stored_embedding, preview_from_txt2img, preview_prompt, preview_negative_prompt, preview_steps, preview_sampler_index, preview_cfg_scale, preview_seed, preview_width, preview_height): - save_embedding_every = save_embedding_every or 0 - create_image_every = create_image_every or 0 - template_file = textual_inversion_templates.get(template_filename, None) - validate_train_inputs(embedding_name, learn_rate, batch_size, gradient_step, data_root, template_file, template_filename, steps, save_embedding_every, create_image_every, log_directory, name="embedding") - template_file = template_file.path - - shared.state.job = "train-embedding" - shared.state.textinfo = "Initializing textual inversion training..." 
- shared.state.job_count = steps - - filename = os.path.join(shared.cmd_opts.embeddings_dir, f'{embedding_name}.pt') - - log_directory = os.path.join(log_directory, datetime.datetime.now().strftime("%Y-%m-%d"), embedding_name) - unload = shared.opts.unload_models_when_training - - if save_embedding_every > 0: - embedding_dir = os.path.join(log_directory, "embeddings") - os.makedirs(embedding_dir, exist_ok=True) - else: - embedding_dir = None - - if create_image_every > 0: - images_dir = os.path.join(log_directory, "images") - os.makedirs(images_dir, exist_ok=True) - else: - images_dir = None - - if create_image_every > 0 and save_image_with_stored_embedding: - images_embeds_dir = os.path.join(log_directory, "image_embeddings") - os.makedirs(images_embeds_dir, exist_ok=True) - else: - images_embeds_dir = None - - hijack = sd_hijack.model_hijack - - embedding = hijack.embedding_db.word_embeddings[embedding_name] - checkpoint = sd_models.select_checkpoint() - - initial_step = embedding.step or 0 - if initial_step >= steps: - shared.state.textinfo = "Model has already been trained beyond specified max steps" - return embedding, filename - - scheduler = LearnRateScheduler(learn_rate, steps, initial_step) - clip_grad = torch.nn.utils.clip_grad_value_ if clip_grad_mode == "value" else \ - torch.nn.utils.clip_grad_norm_ if clip_grad_mode == "norm" else \ - None - if clip_grad: - clip_grad_sched = LearnRateScheduler(clip_grad_value, steps, initial_step, verbose=False) - # dataset loading may take a while, so input validations and early returns should be done before this - shared.state.textinfo = f"Preparing dataset from {html.escape(data_root)}..." - old_parallel_processing_allowed = shared.parallel_processing_allowed - - if shared.opts.training_enable_tensorboard: - tensorboard_writer = tensorboard_setup(log_directory) - - pin_memory = shared.opts.pin_memory - - ds = modules.textual_inversion.dataset.PersonalizedBase(data_root=data_root, width=training_width, height=training_height, repeats=shared.opts.training_image_repeats_per_epoch, placeholder_token=embedding_name, model=shared.sd_model, cond_model=shared.sd_model.cond_stage_model, device=devices.device, template_file=template_file, batch_size=batch_size, gradient_step=gradient_step, shuffle_tags=shuffle_tags, tag_drop_out=tag_drop_out, latent_sampling_method=latent_sampling_method, varsize=varsize, use_weight=use_weight) - - if shared.opts.save_training_settings_to_txt: - save_settings_to_file(log_directory, {**dict(model_name=checkpoint.model_name, model_hash=checkpoint.shorthash, num_of_dataset_images=len(ds), num_vectors_per_token=len(embedding.vec)), **locals()}) - - latent_sampling_method = ds.latent_sampling_method - - dl = modules.textual_inversion.dataset.PersonalizedDataLoader(ds, latent_sampling_method=latent_sampling_method, batch_size=ds.batch_size, pin_memory=pin_memory) - - if unload: - shared.parallel_processing_allowed = False - shared.sd_model.first_stage_model.to(devices.cpu) - - embedding.vec.requires_grad = True - optimizer = torch.optim.AdamW([embedding.vec], lr=scheduler.learn_rate, weight_decay=0.0) - if shared.opts.save_optimizer_state: - optimizer_state_dict = None - if os.path.exists(filename + '.optim'): - optimizer_saved_dict = torch.load(filename + '.optim', map_location='cpu') - if embedding.checksum() == optimizer_saved_dict.get('hash', None): - optimizer_state_dict = optimizer_saved_dict.get('optimizer_state_dict', None) - - if optimizer_state_dict is not None: - optimizer.load_state_dict(optimizer_state_dict) 
- print("Loaded existing optimizer from checkpoint") - else: - print("No saved optimizer exists in checkpoint") - - scaler = torch.cuda.amp.GradScaler() - - batch_size = ds.batch_size - gradient_step = ds.gradient_step - # n steps = batch_size * gradient_step * n image processed - steps_per_epoch = len(ds) // batch_size // gradient_step - max_steps_per_epoch = len(ds) // batch_size - (len(ds) // batch_size) % gradient_step - loss_step = 0 - _loss_step = 0 #internal - - last_saved_file = "" - last_saved_image = "" - forced_filename = "" - embedding_yet_to_be_embedded = False - - is_training_inpainting_model = shared.sd_model.model.conditioning_key in {'hybrid', 'concat'} - img_c = None - - pbar = tqdm.tqdm(total=steps - initial_step) - try: - sd_hijack_checkpoint.add() - - for i in range((steps-initial_step) * gradient_step): - if scheduler.finished: - break - if shared.state.interrupted: - break - for j, batch in enumerate(dl): - # works as a drop_last=True for gradient accumulation - if j == max_steps_per_epoch: - break - scheduler.apply(optimizer, embedding.step) - if scheduler.finished: - break - if shared.state.interrupted: - break - - if clip_grad: - clip_grad_sched.step(embedding.step) - - with devices.autocast(): - x = batch.latent_sample.to(devices.device, non_blocking=pin_memory) - if use_weight: - w = batch.weight.to(devices.device, non_blocking=pin_memory) - c = shared.sd_model.cond_stage_model(batch.cond_text) - - if is_training_inpainting_model: - if img_c is None: - img_c = processing.txt2img_image_conditioning(shared.sd_model, c, training_width, training_height) - - cond = {"c_concat": [img_c], "c_crossattn": [c]} - else: - cond = c - - if use_weight: - loss = shared.sd_model.weighted_forward(x, cond, w)[0] / gradient_step - del w - else: - loss = shared.sd_model.forward(x, cond)[0] / gradient_step - del x - - _loss_step += loss.item() - scaler.scale(loss).backward() - - # go back until we reach gradient accumulation steps - if (j + 1) % gradient_step != 0: - continue - - if clip_grad: - clip_grad(embedding.vec, clip_grad_sched.learn_rate) - - scaler.step(optimizer) - scaler.update() - embedding.step += 1 - pbar.update() - optimizer.zero_grad(set_to_none=True) - loss_step = _loss_step - _loss_step = 0 - - steps_done = embedding.step + 1 - - epoch_num = embedding.step // steps_per_epoch - epoch_step = embedding.step % steps_per_epoch - - description = f"Training textual inversion [Epoch {epoch_num}: {epoch_step+1}/{steps_per_epoch}] loss: {loss_step:.7f}" - pbar.set_description(description) - if embedding_dir is not None and steps_done % save_embedding_every == 0: - # Before saving, change name to match current checkpoint. 
- embedding_name_every = f'{embedding_name}-{steps_done}' - last_saved_file = os.path.join(embedding_dir, f'{embedding_name_every}.pt') - save_embedding(embedding, optimizer, checkpoint, embedding_name_every, last_saved_file, remove_cached_checksum=True) - embedding_yet_to_be_embedded = True - - write_loss(log_directory, "textual_inversion_loss.csv", embedding.step, steps_per_epoch, { - "loss": f"{loss_step:.7f}", - "learn_rate": scheduler.learn_rate - }) - - if images_dir is not None and steps_done % create_image_every == 0: - forced_filename = f'{embedding_name}-{steps_done}' - last_saved_image = os.path.join(images_dir, forced_filename) - - shared.sd_model.first_stage_model.to(devices.device) - - p = processing.StableDiffusionProcessingTxt2Img( - sd_model=shared.sd_model, - do_not_save_grid=True, - do_not_save_samples=True, - do_not_reload_embeddings=True, - ) - - if preview_from_txt2img: - p.prompt = preview_prompt - p.negative_prompt = preview_negative_prompt - p.steps = preview_steps - p.sampler_name = sd_samplers.samplers[preview_sampler_index].name - p.cfg_scale = preview_cfg_scale - p.seed = preview_seed - p.width = preview_width - p.height = preview_height - else: - p.prompt = batch.cond_text[0] - p.steps = 20 - p.width = training_width - p.height = training_height - - preview_text = p.prompt - - processed = processing.process_images(p) - image = processed.images[0] if len(processed.images) > 0 else None - - if unload: - shared.sd_model.first_stage_model.to(devices.cpu) - - if image is not None: - shared.state.assign_current_image(image) - - last_saved_image, last_text_info = images.save_image(image, images_dir, "", p.seed, p.prompt, shared.opts.samples_format, processed.infotexts[0], p=p, forced_filename=forced_filename, save_to_dirs=False) - last_saved_image += f", prompt: {preview_text}" - - if shared.opts.training_enable_tensorboard and shared.opts.training_tensorboard_save_images: - tensorboard_add_image(tensorboard_writer, f"Validation at epoch {epoch_num}", image, embedding.step) - - if save_image_with_stored_embedding and os.path.exists(last_saved_file) and embedding_yet_to_be_embedded: - - last_saved_image_chunks = os.path.join(images_embeds_dir, f'{embedding_name}-{steps_done}.png') - - info = PngImagePlugin.PngInfo() - data = torch.load(last_saved_file) - info.add_text("sd-ti-embedding", embedding_to_b64(data)) - - title = "<{}>".format(data.get('name', '???')) - - try: - vectorSize = list(data['string_to_param'].values())[0].shape[0] - except Exception as e: - vectorSize = '?' - - checkpoint = sd_models.select_checkpoint() - footer_left = checkpoint.model_name - footer_mid = '[{}]'.format(checkpoint.shorthash) - footer_right = '{}v {}s'.format(vectorSize, steps_done) - - captioned_image = caption_image_overlay(image, title, footer_left, footer_mid, footer_right) - captioned_image = insert_image_data_embed(captioned_image, data) - - captioned_image.save(last_saved_image_chunks, "PNG", pnginfo=info) - embedding_yet_to_be_embedded = False - - last_saved_image, last_text_info = images.save_image(image, images_dir, "", p.seed, p.prompt, shared.opts.samples_format, processed.infotexts[0], p=p, forced_filename=forced_filename, save_to_dirs=False) - last_saved_image += f", prompt: {preview_text}" - - shared.state.job_no = embedding.step - - shared.state.textinfo = f""" -

-Loss: {loss_step:.7f}<br/>
-Step: {steps_done}<br/>
-Last prompt: {html.escape(batch.cond_text[0])}<br/>
-Last saved embedding: {html.escape(last_saved_file)}<br/>
-Last saved image: {html.escape(last_saved_image)}<br/>
-</p>
-""" - filename = os.path.join(shared.cmd_opts.embeddings_dir, f'{embedding_name}.pt') - save_embedding(embedding, optimizer, checkpoint, embedding_name, filename, remove_cached_checksum=True) - except Exception: - print(traceback.format_exc(), file=sys.stderr) - pass - finally: - pbar.leave = False - pbar.close() - shared.sd_model.first_stage_model.to(devices.device) - shared.parallel_processing_allowed = old_parallel_processing_allowed - sd_hijack_checkpoint.remove() - - return embedding, filename - - -def save_embedding(embedding, optimizer, checkpoint, embedding_name, filename, remove_cached_checksum=True): - old_embedding_name = embedding.name - old_sd_checkpoint = embedding.sd_checkpoint if hasattr(embedding, "sd_checkpoint") else None - old_sd_checkpoint_name = embedding.sd_checkpoint_name if hasattr(embedding, "sd_checkpoint_name") else None - old_cached_checksum = embedding.cached_checksum if hasattr(embedding, "cached_checksum") else None - try: - embedding.sd_checkpoint = checkpoint.shorthash - embedding.sd_checkpoint_name = checkpoint.model_name - if remove_cached_checksum: - embedding.cached_checksum = None - embedding.name = embedding_name - embedding.optimizer_state_dict = optimizer.state_dict() - embedding.save(filename) - except: - embedding.sd_checkpoint = old_sd_checkpoint - embedding.sd_checkpoint_name = old_sd_checkpoint_name - embedding.name = old_embedding_name - embedding.cached_checksum = old_cached_checksum - raise diff --git a/spaces/aodianyun/stable-diffusion-webui/modules/ui_extra_networks_hypernets.py b/spaces/aodianyun/stable-diffusion-webui/modules/ui_extra_networks_hypernets.py deleted file mode 100644 index 5fe6516a71e7ca5203fdacec3f750494d5650efd..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/modules/ui_extra_networks_hypernets.py +++ /dev/null @@ -1,36 +0,0 @@ -import json -import os - -from modules import shared, ui_extra_networks - - -class ExtraNetworksPageHypernetworks(ui_extra_networks.ExtraNetworksPage): - def __init__(self): - super().__init__('Hypernetworks') - - def refresh(self): - shared.reload_hypernetworks() - - def list_items(self): - for name, path in shared.hypernetworks.items(): - path, ext = os.path.splitext(path) - previews = [path + ".png", path + ".preview.png"] - - preview = None - for file in previews: - if os.path.isfile(file): - preview = self.link_preview(file) - break - - yield { - "name": name, - "filename": path, - "preview": preview, - "search_term": self.search_terms_from_path(path), - "prompt": json.dumps(f""), - "local_preview": path + ".png", - } - - def allowed_directories_for_previews(self): - return [shared.cmd_opts.hypernetwork_dir] - diff --git a/spaces/apsys/normflows/normflows.py b/spaces/apsys/normflows/normflows.py deleted file mode 100644 index 176e89450076dccb1a7e3debc08f1e671f3f717b..0000000000000000000000000000000000000000 --- a/spaces/apsys/normflows/normflows.py +++ /dev/null @@ -1,353 +0,0 @@ -import torch.nn as nn -import torch -from torch.optim.lr_scheduler import ReduceLROnPlateau,OneCycleLR,CyclicLR -import pandas as pd -from sklearn.preprocessing import StandardScaler,MinMaxScaler -import matplotlib.pyplot as plt -from torch.distributions import MultivariateNormal, LogNormal,Normal, Chi2 -from torch.distributions.distribution import Distribution -from sklearn.metrics import r2_score -import numpy as np - - -# It's a distribution that is a kernel density estimate of a Gaussian distribution -class GaussianKDE(Distribution): - def __init__(self, X, bw): - """ - X 
: tensor (n, d) - `n` points with `d` dimensions to which KDE will be fit - bw : numeric - bandwidth for Gaussian kernel - """ - self.X = X - self.bw = bw - self.dims = X.shape[-1] - self.n = X.shape[0] - self.mvn = MultivariateNormal(loc=torch.zeros(self.dims), - scale_tril=torch.eye(self.dims)) - - def sample(self, num_samples): - """ - We are sampling from a normal distribution with mean equal to the data points in the dataset and - standard deviation equal to the bandwidth - - :param num_samples: the number of samples to draw from the KDE - :return: A sample of size num_samples from the KDE. - """ - idxs = (np.random.uniform(0, 1, num_samples) * self.n).astype(int) - norm = Normal(loc=self.X[idxs], scale=self.bw) - return norm.sample() - - def score_samples(self, Y, X=None): - """Returns the kernel density estimates of each point in `Y`. - - Parameters - ---------- - Y : tensor (m, d) - `m` points with `d` dimensions for which the probability density will - be calculated - X : tensor (n, d), optional - `n` points with `d` dimensions to which KDE will be fit. Provided to - allow batch calculations in `log_prob`. By default, `X` is None and - all points used to initialize KernelDensityEstimator are included. - - - Returns - ------- - log_probs : tensor (m) - log probability densities for each of the queried points in `Y` - """ - if X == None: - X = self.X - log_probs = self.mvn.log_prob((X.unsqueeze(1) - Y)).sum(dim=0) - - return log_probs - - def log_prob(self, Y): - """Returns the total log probability of one or more points, `Y`, using - a Multivariate Normal kernel fit to `X` and scaled using `bw`. - - Parameters - ---------- - Y : tensor (m, d) - `m` points with `d` dimensions for which the probability density will - be calculated - - Returns - ------- - log_prob : numeric - total log probability density for the queried points, `Y` - """ - - X_chunks = self.X - Y_chunks = Y - self.Y = Y - log_prob = 0 - - for x in X_chunks: - for y in Y_chunks: - - log_prob += self.score_samples(y,x).sum(dim=0) - - return log_prob - -class Chi2KDE(Distribution): - def __init__(self, X, bw): - """ - X : tensor (n, d) - `n` points with `d` dimensions to which KDE will be fit - bw : numeric - bandwidth for Gaussian kernel - """ - self.X = X - self.bw = bw - self.dims = X.shape[-1] - self.n = X.shape[0] - self.mvn = Chi2(self.dims) - - def sample(self, num_samples): - idxs = (np.random.uniform(0, 1, num_samples) * self.n).astype(int) - norm = LogNormal(loc=self.X[idxs], scale=self.bw) - return norm.sample() - - def score_samples(self, Y, X=None): - """Returns the kernel density estimates of each point in `Y`. - - Parameters - ---------- - Y : tensor (m, d) - `m` points with `d` dimensions for which the probability density will - be calculated - X : tensor (n, d), optional - `n` points with `d` dimensions to which KDE will be fit. Provided to - allow batch calculations in `log_prob`. By default, `X` is None and - all points used to initialize KernelDensityEstimator are included. - - - Returns - ------- - log_probs : tensor (m) - log probability densities for each of the queried points in `Y` - """ - if X == None: - X = self.X - log_probs = self.mvn.log_prob(abs(X.unsqueeze(1) - Y)).sum() - - return log_probs - - def log_prob(self, Y): - """Returns the total log probability of one or more points, `Y`, using - a Multivariate Normal kernel fit to `X` and scaled using `bw`. 
- - Parameters - ---------- - Y : tensor (m, d) - `m` points with `d` dimensions for which the probability density will - be calculated - - Returns - ------- - log_prob : numeric - total log probability density for the queried points, `Y` - """ - - X_chunks = self.X - Y_chunks = Y - self.Y = Y - log_prob = 0 - - for x in X_chunks: - for y in Y_chunks: - - log_prob += self.score_samples(y,x).sum(dim=0) - - return log_prob - - -class PlanarFlow(nn.Module): - """ - A single planar flow, computes T(x) and log(det(jac_T))) - """ - def __init__(self, D): - super(PlanarFlow, self).__init__() - self.u = nn.Parameter(torch.Tensor(1, D), requires_grad=True) - self.w = nn.Parameter(torch.Tensor(1, D), requires_grad=True) - self.b = nn.Parameter(torch.Tensor(1), requires_grad=True) - self.h = torch.tanh - self.init_params() - - def init_params(self): - self.w.data.uniform_(0.4, 1) - self.b.data.uniform_(0.4, 1) - self.u.data.uniform_(0.4, 1) - - - def forward(self, z): - linear_term = torch.mm(z, self.w.T) + self.b - return z + self.u * self.h(linear_term) - - def h_prime(self, x): - """ - Derivative of tanh - """ - return (1 - self.h(x) ** 2) - - def psi(self, z): - inner = torch.mm(z, self.w.T) + self.b - return self.h_prime(inner) * self.w - - def log_det(self, z): - inner = 1 + torch.mm(self.psi(z), self.u.T) - return torch.log(torch.abs(inner)) - - -# It's a normalizing flow that takes in a distribution and outputs a distribution. -class NormalizingFlow(nn.Module): - """ - A normalizng flow composed of a sequence of planar flows. - """ - def __init__(self, D, n_flows=2): - """ - The function takes in two arguments, D and n_flows. D is the dimension of the data, and n_flows - is the number of flows. The function then creates a list of PlanarFlow objects, where the number - of PlanarFlow objects is equal to n_flows - - :param D: the dimensionality of the data - :param n_flows: number of flows to use, defaults to 2 (optional) - """ - super(NormalizingFlow, self).__init__() - self.flows = nn.ModuleList( - [PlanarFlow(D) for _ in range(n_flows)]) - - def sample(self, base_samples): - """ - Transform samples from a simple base distribution - by passing them through a sequence of Planar flows. - """ - samples = base_samples - for flow in self.flows: - samples = flow(samples) - return samples - - def forward(self, x): - """ - Computes and returns the sum of log_det_jacobians - and the transformed samples T(x). 
- """ - sum_log_det = 0 - transformed_sample = x - - for i in range(len(self.flows)): - log_det_i = (self.flows[i].log_det(transformed_sample)) - sum_log_det += log_det_i - transformed_sample = self.flows[i](transformed_sample) - - return transformed_sample, sum_log_det - -def random_normal_samples(n, dim=2): - return torch.zeros(n, dim).normal_(mean=0, std=1.5) - - - - -class nflow(): - def __init__(self,dim=2,latent=16,batchsize:int=1,dataset=None): - """ - The function __init__ initializes the class NormalizingFlowModel with the parameters dim, - latent, batchsize, and datasetPath - - :param dim: The dimension of the data, defaults to 2 (optional) - :param latent: The number of latent variables in the model, defaults to 16 (optional) - :param batchsize: The number of samples to generate at a time, defaults to 1 - :type batchsize: int (optional) - :param datasetPath: The path to the dataset, defaults to data/dataset.csv - :type datasetPath: str (optional) - """ - self.dim = dim - self.batchsize = batchsize - self.model = NormalizingFlow(dim, latent) - self.dataset = dataset - - def compile(self,optim:torch.optim=torch.optim.Adam,distribution:str='GaussianKDE',lr:float=0.00015,bw:float=0.1,wd=0.0015): - """ - It takes in a dataset, a model, and a distribution, and returns a compiled model - - :param optim: the optimizer to use - :type optim: torch.optim - :param distribution: the type of distribution to use, defaults to GaussianKDE - :type distribution: str (optional) - :param lr: learning rate - :type lr: float - :param bw: bandwidth for the KDE - :type bw: float - """ - if wd: - self.opt = optim( - params=self.model.parameters(), - lr=lr, - weight_decay = wd - # momentum=0.9 - # momentum=0.1 - ) - else: - self.opt = optim( - params=self.model.parameters(), - lr=lr, - # momentum=0.9 - # momentum=0.1 - ) - self.scaler = StandardScaler() - self.scaler_mm = MinMaxScaler(feature_range=(0,1)) - - df = pd.read_csv(self.dataset) - df = df.iloc[:,1:] - - - if 'Chi2' in distribution: - self.scaled=self.scaler_mm.fit_transform(df) - else: self.scaled = self.scaler.fit_transform(df) - - self.density = globals()[distribution](X=torch.tensor(self.scaled, dtype=torch.float32), bw=bw) - - # self.dl = torch.utils.data.DataLoader(scaled,batchsize=self.batchsize) - self.scheduler = ReduceLROnPlateau(self.opt, patience=10000) - self.losses = [] - - def train(self,iters:int=1000): - """ - > We sample from a normal distribution, pass the samples through the model, and then calculate - the loss - - :param iters: number of iterations to train for, defaults to 1000 - :type iters: int (optional) - """ - for idx in range(iters): - if idx % 100 == 0: - print("Iteration {}".format(idx)) - - samples = torch.autograd.Variable(random_normal_samples(self.batchsize,self.dim)) - - z_k, sum_log_det = self.model(samples) - log_p_x = self.density.log_prob(z_k) - # Reverse KL since we can evaluate target density but can't sample - loss = (-sum_log_det - (log_p_x)).mean()/self.density.n - - self.opt.zero_grad() - loss.backward() - self.opt.step() - self.scheduler.step(loss) - - self.losses.append(loss.item()) - - if idx % 100 == 0: - print("Loss {}".format(loss.item())) - yield idx,loss.item() - - def performance(self): - """ - The function takes the model and the scaled data as inputs, samples from the model, and then - prints the r2 score of the samples and the scaled data. 
- """ - samples = ((self.model.sample(torch.tensor(self.scaled).float())).detach().numpy()) - - print('r2', r2_score(self.scaled,samples)) diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/encoder/configs/emotion_encoder_config.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/encoder/configs/emotion_encoder_config.py deleted file mode 100644 index 5eda2671be980abce4a0506a075387b601a1596c..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/encoder/configs/emotion_encoder_config.py +++ /dev/null @@ -1,12 +0,0 @@ -from dataclasses import asdict, dataclass - -from TTS.encoder.configs.base_encoder_config import BaseEncoderConfig - - -@dataclass -class EmotionEncoderConfig(BaseEncoderConfig): - """Defines parameters for Emotion Encoder model.""" - - model: str = "emotion_encoder" - map_classid_to_classname: dict = None - class_name_key: str = "emotion_name" diff --git a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/generic/pos_encoding.py b/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/generic/pos_encoding.py deleted file mode 100644 index 913add0d14332bf70c3ecd2a95869d0071310bd4..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/TTS/tts/layers/generic/pos_encoding.py +++ /dev/null @@ -1,69 +0,0 @@ -import math - -import torch -from torch import nn - - -class PositionalEncoding(nn.Module): - """Sinusoidal positional encoding for non-recurrent neural networks. - Implementation based on "Attention Is All You Need" - - Args: - channels (int): embedding size - dropout_p (float): dropout rate applied to the output. - max_len (int): maximum sequence length. - use_scale (bool): whether to use a learnable scaling coefficient. - """ - - def __init__(self, channels, dropout_p=0.0, max_len=5000, use_scale=False): - super().__init__() - if channels % 2 != 0: - raise ValueError( - "Cannot use sin/cos positional encoding with " "odd channels (got channels={:d})".format(channels) - ) - self.use_scale = use_scale - if use_scale: - self.scale = torch.nn.Parameter(torch.ones(1)) - pe = torch.zeros(max_len, channels) - position = torch.arange(0, max_len).unsqueeze(1) - div_term = torch.pow(10000, torch.arange(0, channels, 2).float() / channels) - pe[:, 0::2] = torch.sin(position.float() * div_term) - pe[:, 1::2] = torch.cos(position.float() * div_term) - pe = pe.unsqueeze(0).transpose(1, 2) - self.register_buffer("pe", pe) - if dropout_p > 0: - self.dropout = nn.Dropout(p=dropout_p) - self.channels = channels - - def forward(self, x, mask=None, first_idx=None, last_idx=None): - """ - Shapes: - x: [B, C, T] - mask: [B, 1, T] - first_idx: int - last_idx: int - """ - - x = x * math.sqrt(self.channels) - if first_idx is None: - if self.pe.size(2) < x.size(2): - raise RuntimeError( - f"Sequence is {x.size(2)} but PositionalEncoding is" - f" limited to {self.pe.size(2)}. See max_len argument." 
- ) - if mask is not None: - pos_enc = self.pe[:, :, : x.size(2)] * mask - else: - pos_enc = self.pe[:, :, : x.size(2)] - if self.use_scale: - x = x + self.scale * pos_enc - else: - x = x + pos_enc - else: - if self.use_scale: - x = x + self.scale * self.pe[:, :, first_idx:last_idx] - else: - x = x + self.pe[:, :, first_idx:last_idx] - if hasattr(self, "dropout"): - x = self.dropout(x) - return x diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Hash/test_MD2.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Hash/test_MD2.py deleted file mode 100644 index 93751687f21b7999613353381fa8b036c9ecc3bf..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Crypto/SelfTest/Hash/test_MD2.py +++ /dev/null @@ -1,62 +0,0 @@ -# -*- coding: utf-8 -*- -# -# SelfTest/Hash/MD2.py: Self-test for the MD2 hash function -# -# Written in 2008 by Dwayne C. Litzenberger -# -# =================================================================== -# The contents of this file are dedicated to the public domain. To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. -# =================================================================== - -"""Self-test suite for Crypto.Hash.MD2""" - -from Crypto.Util.py3compat import * - -# This is a list of (expected_result, input[, description]) tuples. 
-test_data = [ - # Test vectors from RFC 1319 - ('8350e5a3e24c153df2275c9f80692773', '', "'' (empty string)"), - ('32ec01ec4a6dac72c0ab96fb34c0b5d1', 'a'), - ('da853b0d3f88d99b30283a69e6ded6bb', 'abc'), - ('ab4f496bfb2a530b219ff33031fe06b0', 'message digest'), - - ('4e8ddff3650292ab5a4108c3aa47940b', 'abcdefghijklmnopqrstuvwxyz', - 'a-z'), - - ('da33def2a42df13975352846c30338cd', - 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789', - 'A-Z, a-z, 0-9'), - - ('d5976f79d83d3a0dc9806c3c66f3efd8', - '1234567890123456789012345678901234567890123456' - + '7890123456789012345678901234567890', - "'1234567890' * 8"), -] - -def get_tests(config={}): - from Crypto.Hash import MD2 - from .common import make_hash_tests - return make_hash_tests(MD2, "MD2", test_data, - digest_size=16, - oid="1.2.840.113549.2.2") - -if __name__ == '__main__': - import unittest - suite = lambda: unittest.TestSuite(get_tests()) - unittest.main(defaultTest='suite') - -# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Compiler/Tests/TestVisitor.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Compiler/Tests/TestVisitor.py deleted file mode 100644 index dbc8e0c03ab957b92397635f6cdd9a488006fb42..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/Cython/Compiler/Tests/TestVisitor.py +++ /dev/null @@ -1,61 +0,0 @@ -from Cython.Compiler.ModuleNode import ModuleNode -from Cython.Compiler.Symtab import ModuleScope -from Cython.TestUtils import TransformTest -from Cython.Compiler.Visitor import MethodDispatcherTransform -from Cython.Compiler.ParseTreeTransforms import ( - NormalizeTree, AnalyseDeclarationsTransform, - AnalyseExpressionsTransform, InterpretCompilerDirectives) - - -class TestMethodDispatcherTransform(TransformTest): - _tree = None - - def _build_tree(self): - if self._tree is None: - context = None - - def fake_module(node): - scope = ModuleScope('test', None, None) - return ModuleNode(node.pos, doc=None, body=node, - scope=scope, full_module_name='test', - directive_comments={}) - pipeline = [ - fake_module, - NormalizeTree(context), - InterpretCompilerDirectives(context, {}), - AnalyseDeclarationsTransform(context), - AnalyseExpressionsTransform(context), - ] - self._tree = self.run_pipeline(pipeline, u""" - cdef bytes s = b'asdfg' - cdef dict d = {1:2} - x = s * 3 - d.get('test') - """) - return self._tree - - def test_builtin_method(self): - calls = [0] - class Test(MethodDispatcherTransform): - def _handle_simple_method_dict_get(self, node, func, args, unbound): - calls[0] += 1 - return node - - tree = self._build_tree() - Test(None)(tree) - self.assertEqual(1, calls[0]) - - def test_binop_method(self): - calls = {'bytes': 0, 'object': 0} - class Test(MethodDispatcherTransform): - def _handle_simple_method_bytes___mul__(self, node, func, args, unbound): - calls['bytes'] += 1 - return node - def _handle_simple_method_object___mul__(self, node, func, args, unbound): - calls['object'] += 1 - return node - - tree = self._build_tree() - Test(None)(tree) - self.assertEqual(1, calls['bytes']) - self.assertEqual(0, calls['object']) diff --git a/spaces/arxnov/anotest/text/cantonese.py b/spaces/arxnov/anotest/text/cantonese.py deleted file mode 100644 index b66d12138b81b70b86f18217d24a08fce76305c0..0000000000000000000000000000000000000000 --- a/spaces/arxnov/anotest/text/cantonese.py +++ /dev/null @@ -1,59 +0,0 @@ -import re -import cn2an -import opencc - - -converter = 
opencc.OpenCC('jyutjyu') - -# List of (Latin alphabet, ipa) pairs: -_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('A', 'ei˥'), - ('B', 'biː˥'), - ('C', 'siː˥'), - ('D', 'tiː˥'), - ('E', 'iː˥'), - ('F', 'e˥fuː˨˩'), - ('G', 'tsiː˥'), - ('H', 'ɪk̚˥tsʰyː˨˩'), - ('I', 'ɐi˥'), - ('J', 'tsei˥'), - ('K', 'kʰei˥'), - ('L', 'e˥llou˨˩'), - ('M', 'ɛːm˥'), - ('N', 'ɛːn˥'), - ('O', 'ou˥'), - ('P', 'pʰiː˥'), - ('Q', 'kʰiːu˥'), - ('R', 'aː˥lou˨˩'), - ('S', 'ɛː˥siː˨˩'), - ('T', 'tʰiː˥'), - ('U', 'juː˥'), - ('V', 'wiː˥'), - ('W', 'tʊk̚˥piː˥juː˥'), - ('X', 'ɪk̚˥siː˨˩'), - ('Y', 'waːi˥'), - ('Z', 'iː˨sɛːt̚˥') -]] - - -def number_to_cantonese(text): - return re.sub(r'\d+(?:\.?\d+)?', lambda x: cn2an.an2cn(x.group()), text) - - -def latin_to_ipa(text): - for regex, replacement in _latin_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def cantonese_to_ipa(text): - text = number_to_cantonese(text.upper()) - text = converter.convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text) - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! ', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/aryadytm/paraphrase/app.py b/spaces/aryadytm/paraphrase/app.py deleted file mode 100644 index 4398c81ee945b9045a0e6c3a6e19f7049cf5cbf5..0000000000000000000000000000000000000000 --- a/spaces/aryadytm/paraphrase/app.py +++ /dev/null @@ -1,33 +0,0 @@ -import streamlit as st -import time -from src.utils import paraphrase_english, paraphrase_indonesian -from src.st_style import apply_prod_style - - -# apply_prod_style(st) # NOTE: Uncomment this for production! - -st.title("AI Text Paraphraser") -st.image(open("assets/demo.png", "rb").read()) -st.write( - """ - Stop plagiarism! Do not carelessly copy and paste text materials from the internet. 
- **This AI tool will make your text unique and free from plagiarism.** - """ -) - -language = st.selectbox("Language", ["English", "Bahasa Indonesia"]) -input_text = st.text_area("Input your text (Max 1000 characters)", height=250, max_chars=1000) - -if st.button("Submit") and len(input_text) > 0: - - with st.spinner("AI is doing the magic!") as p: - input_text = input_text.replace("\n\n", "\n").replace("\n", " ").strip() - - if language == "English": - paraphrased = paraphrase_english(input_text) - else: - paraphrased = paraphrase_indonesian(input_text) - - st.write("**Your text is ready!**") - st.write(paraphrased) - st.info("**TIP:** You can submit the same text multiple times to get different results.") \ No newline at end of file diff --git a/spaces/astoken/weather_checker/README.md b/spaces/astoken/weather_checker/README.md deleted file mode 100644 index f88014a33a419640150bf2d037cab07cb3b632cc..0000000000000000000000000000000000000000 --- a/spaces/astoken/weather_checker/README.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -title: Weather_checker -emoji: 🌖 -colorFrom: blue -colorTo: red -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`models`: _List[string]_ -HF model IDs (like "gpt2" or "deepset/roberta-base-squad2") used in the Space. -Will be parsed automatically from your code if not specified here. - -`datasets`: _List[string]_ -HF dataset IDs (like "common_voice" or "oscar-corpus/OSCAR-2109") used in the Space. -Will be parsed automatically from your code if not specified here. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/modules/distributions/__init__.py b/spaces/attention-refocusing/Attention-refocusing/gligen/ldm/modules/distributions/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/awaawawawa/iurf7irfuyytruyyugb/app.py b/spaces/awaawawawa/iurf7irfuyytruyyugb/app.py deleted file mode 100644 index ed07c67d344f99a12b1c091f177da8335e883701..0000000000000000000000000000000000000000 --- a/spaces/awaawawawa/iurf7irfuyytruyyugb/app.py +++ /dev/null @@ -1,877 +0,0 @@ -<<<<<<< HEAD -import json -import os, re -import traceback -import torch -import numpy as np -from omegaconf import OmegaConf -from PIL import Image, ImageOps -from tqdm import tqdm, trange -from itertools import islice -from einops import rearrange -import time -from pytorch_lightning import seed_everything -from torch import autocast -from contextlib import nullcontext -from einops import rearrange, repeat -from ldmlib.util import instantiate_from_config -from optimizedSD.optimUtils import split_weighted_subprompts -from transformers import logging - -from gfpgan import GFPGANer -from basicsr.archs.rrdbnet_arch import RRDBNet -from realesrgan import RealESRGANer - -import uuid -import subprocess -subprocess.run("uvicorn modules.app:app --host 0.0.0.0 --port 7860", shell=True) - -AUTH_TOKEN = os.environ.get('AUTH_TOKEN') -if not AUTH_TOKEN: - with open('/root/.huggingface/token') as f: - lines = f.readlines() - AUTH_TOKEN = lines[0] - - - -logging.set_verbosity_error() - -# consts -config_yaml = "optimizedSD/v1-inference.yaml" -filename_regex = re.compile('[^a-zA-Z0-9]') - -# api stuff -from sd_internal import Request, Response, Image as ResponseImage -import base64 -from io import BytesIO -#from colorama import Fore - -# local -stop_processing = False -temp_images = {} - -ckpt_file = None -gfpgan_file = None -real_esrgan_file = None - -model = None -modelCS = None -modelFS = None -model_gfpgan = None -model_real_esrgan = None - -model_is_half = False -model_fs_is_half = False -device = None -unet_bs = 1 -precision = 'autocast' -sampler_plms = None -sampler_ddim = None - -has_valid_gpu = False -force_full_precision = False -try: - gpu = torch.cuda.current_device() - gpu_name = torch.cuda.get_device_name(gpu) - print('GPU detected: ', gpu_name) - - force_full_precision = ('nvidia' in gpu_name.lower() or 'geforce' in gpu_name.lower()) and (' 1660' in gpu_name or ' 1650' in gpu_name) # otherwise these NVIDIA cards create green images - if force_full_precision: - print('forcing full precision on NVIDIA 16xx cards, to avoid green images. GPU detected: ', gpu_name) - - mem_free, mem_total = torch.cuda.mem_get_info(gpu) - mem_total /= float(10**9) - if mem_total < 3.0: - print("GPUs with less than 3 GB of VRAM are not compatible with Stable Diffusion") - raise Exception() - - has_valid_gpu = True -except: - print('WARNING: No compatible GPU found. 
Using the CPU, but this will be very slow!') - pass - -def load_model_ckpt(ckpt_to_use, device_to_use='cuda', turbo=False, unet_bs_to_use=1, precision_to_use='autocast'): - global ckpt_file, model, modelCS, modelFS, model_is_half, device, unet_bs, precision, model_fs_is_half - - device = device_to_use if has_valid_gpu else 'cpu' - precision = precision_to_use if not force_full_precision else 'full' - unet_bs = unet_bs_to_use - - unload_model() - - if device == 'cpu': - precision = 'full' - - sd = load_model_from_config(f"{ckpt_to_use}.ckpt") - li, lo = [], [] - for key, value in sd.items(): - sp = key.split(".") - if (sp[0]) == "model": - if "input_blocks" in sp: - li.append(key) - elif "middle_block" in sp: - li.append(key) - elif "time_embed" in sp: - li.append(key) - else: - lo.append(key) - for key in li: - sd["model1." + key[6:]] = sd.pop(key) - for key in lo: - sd["model2." + key[6:]] = sd.pop(key) - - config = OmegaConf.load(f"{config_yaml}") - - model = instantiate_from_config(config.modelUNet) - _, _ = model.load_state_dict(sd, strict=False) - model.eval() - model.cdevice = device - model.unet_bs = unet_bs - model.turbo = turbo - - modelCS = instantiate_from_config(config.modelCondStage) - _, _ = modelCS.load_state_dict(sd, strict=False) - modelCS.eval() - modelCS.cond_stage_model.device = device - - modelFS = instantiate_from_config(config.modelFirstStage) - _, _ = modelFS.load_state_dict(sd, strict=False) - modelFS.eval() - del sd - - if device != "cpu" and precision == "autocast": - model.half() - modelCS.half() - modelFS.half() - model_is_half = True - model_fs_is_half = True - else: - model_is_half = False - model_fs_is_half = False - - ckpt_file = ckpt_to_use - - print('loaded ', ckpt_file, 'to', device, 'precision', precision) - -def unload_model(): - global model, modelCS, modelFS - - if model is not None: - del model - del modelCS - del modelFS - - model = None - modelCS = None - modelFS = None - -def load_model_gfpgan(gfpgan_to_use): - global gfpgan_file, model_gfpgan - - if gfpgan_to_use is None: - return - - gfpgan_file = gfpgan_to_use - model_path = gfpgan_to_use + ".pth" - - if device == 'cpu': - model_gfpgan = GFPGANer(model_path=model_path, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None, device=torch.device('cpu')) - else: - model_gfpgan = GFPGANer(model_path=model_path, upscale=1, arch='clean', channel_multiplier=2, bg_upsampler=None, device=torch.device('cuda')) - - print('loaded ', gfpgan_to_use, 'to', device, 'precision', precision) - -def load_model_real_esrgan(real_esrgan_to_use): - global real_esrgan_file, model_real_esrgan - - if real_esrgan_to_use is None: - return - - real_esrgan_file = real_esrgan_to_use - model_path = real_esrgan_to_use + ".pth" - - RealESRGAN_models = { - 'RealESRGAN_x4plus': RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=23, num_grow_ch=32, scale=4), - 'RealESRGAN_x4plus_anime_6B': RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4) - } - - model_to_use = RealESRGAN_models[real_esrgan_to_use] - - if device == 'cpu': - model_real_esrgan = RealESRGANer(scale=2, model_path=model_path, model=model_to_use, pre_pad=0, half=False) # cpu does not support half - model_real_esrgan.device = torch.device('cpu') - model_real_esrgan.model.to('cpu') - else: - model_real_esrgan = RealESRGANer(scale=2, model_path=model_path, model=model_to_use, pre_pad=0, half=model_is_half) - - model_real_esrgan.model.name = real_esrgan_to_use - - print('loaded ', real_esrgan_to_use, 'to', device, 
'precision', precision) - -def mk_img(req: Request): - try: - yield from do_mk_img(req) - except Exception as e: - print(traceback.format_exc()) - - gc() - - if device != "cpu": - modelFS.to("cpu") - modelCS.to("cpu") - - model.model1.to("cpu") - model.model2.to("cpu") - - gc() - - yield json.dumps({ - "status": 'failed', - "detail": str(e) - }) - -def do_mk_img(req: Request): - global ckpt_file - global model, modelCS, modelFS, device - global model_gfpgan, model_real_esrgan - global stop_processing - - stop_processing = False - - res = Response() - res.request = req - res.images = [] - - temp_images.clear() - - # custom model support: - # the req.use_stable_diffusion_model needs to be a valid path - # to the ckpt file (without the extension). - - needs_model_reload = False - ckpt_to_use = ckpt_file - if ckpt_to_use != req.use_stable_diffusion_model: - ckpt_to_use = req.use_stable_diffusion_model - needs_model_reload = True - - model.turbo = req.turbo - if req.use_cpu: - if device != 'cpu': - device = 'cpu' - - if model_is_half: - load_model_ckpt(ckpt_to_use, device) - needs_model_reload = False - - load_model_gfpgan(gfpgan_file) - load_model_real_esrgan(real_esrgan_file) - else: - if has_valid_gpu: - prev_device = device - device = 'cuda' - - if (precision == 'autocast' and (req.use_full_precision or not model_is_half)) or \ - (precision == 'full' and not req.use_full_precision and not force_full_precision): - - load_model_ckpt(ckpt_to_use, device, req.turbo, unet_bs, ('full' if req.use_full_precision else 'autocast')) - needs_model_reload = False - - if prev_device != device: - load_model_gfpgan(gfpgan_file) - load_model_real_esrgan(real_esrgan_file) - - if needs_model_reload: - load_model_ckpt(ckpt_to_use, device, req.turbo, unet_bs, precision) - - if req.use_face_correction != gfpgan_file: - load_model_gfpgan(req.use_face_correction) - - if req.use_upscale != real_esrgan_file: - load_model_real_esrgan(req.use_upscale) - - model.cdevice = device - modelCS.cond_stage_model.device = device - - opt_prompt = req.prompt - opt_seed = req.seed - opt_n_samples = req.num_outputs - opt_n_iter = 1 - opt_scale = req.guidance_scale - opt_C = 4 - opt_H = req.height - opt_W = req.width - opt_f = 8 - opt_ddim_steps = req.num_inference_steps - opt_ddim_eta = 0.0 - opt_strength = req.prompt_strength - opt_save_to_disk_path = req.save_to_disk_path - opt_init_img = req.init_image - opt_use_face_correction = req.use_face_correction - opt_use_upscale = req.use_upscale - opt_show_only_filtered = req.show_only_filtered_image - opt_format = req.output_format - opt_sampler_name = req.sampler - - print(req.to_string(), '\n device', device) - - print('\n\n Using precision:', precision) - - seed_everything(opt_seed) - - batch_size = opt_n_samples - prompt = opt_prompt - assert prompt is not None - data = [batch_size * [prompt]] - - if precision == "autocast" and device != "cpu": - precision_scope = autocast - else: - precision_scope = nullcontext - - mask = None - - if req.init_image is None: - handler = _txt2img - - init_latent = None - t_enc = None - else: - handler = _img2img - - init_image = load_img(req.init_image, opt_W, opt_H) - init_image = init_image.to(device) - - if device != "cpu" and precision == "autocast": - init_image = init_image.half() - - modelFS.to(device) - - init_image = repeat(init_image, '1 ... 
-> b ...', b=batch_size) - init_latent = modelFS.get_first_stage_encoding(modelFS.encode_first_stage(init_image)) # move to latent space - - if req.mask is not None: - mask = load_mask(req.mask, opt_W, opt_H, init_latent.shape[2], init_latent.shape[3], True).to(device) - mask = mask[0][0].unsqueeze(0).repeat(4, 1, 1).unsqueeze(0) - mask = repeat(mask, '1 ... -> b ...', b=batch_size) - - if device != "cpu" and precision == "autocast": - mask = mask.half() - - move_fs_to_cpu() - - assert 0. <= opt_strength <= 1., 'can only work with strength in [0.0, 1.0]' - t_enc = int(opt_strength * opt_ddim_steps) - print(f"target t_enc is {t_enc} steps") - - if opt_save_to_disk_path is not None: - session_out_path = os.path.join(opt_save_to_disk_path, req.session_id) - os.makedirs(session_out_path, exist_ok=True) - else: - session_out_path = None - - seeds = "" - with torch.no_grad(): - for n in trange(opt_n_iter, desc="Sampling"): - for prompts in tqdm(data, desc="data"): - - with precision_scope("cuda"): - modelCS.to(device) - uc = None - if opt_scale != 1.0: - uc = modelCS.get_learned_conditioning(batch_size * [req.negative_prompt]) - if isinstance(prompts, tuple): - prompts = list(prompts) - - subprompts, weights = split_weighted_subprompts(prompts[0]) - if len(subprompts) > 1: - c = torch.zeros_like(uc) - totalWeight = sum(weights) - # normalize each "sub prompt" and add it - for i in range(len(subprompts)): - weight = weights[i] - # if not skip_normalize: - weight = weight / totalWeight - c = torch.add(c, modelCS.get_learned_conditioning(subprompts[i]), alpha=weight) - else: - c = modelCS.get_learned_conditioning(prompts) - - modelFS.to(device) - - partial_x_samples = None - def img_callback(x_samples, i): - nonlocal partial_x_samples - - partial_x_samples = x_samples - - if req.stream_progress_updates: - n_steps = opt_ddim_steps if req.init_image is None else t_enc - progress = {"step": i, "total_steps": n_steps} - - if req.stream_image_progress and i % 5 == 0: - partial_images = [] - - for i in range(batch_size): - x_samples_ddim = modelFS.decode_first_stage(x_samples[i].unsqueeze(0)) - x_sample = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0) - x_sample = 255.0 * rearrange(x_sample[0].cpu().numpy(), "c h w -> h w c") - x_sample = x_sample.astype(np.uint8) - img = Image.fromarray(x_sample) - buf = BytesIO() - img.save(buf, format='JPEG') - buf.seek(0) - - del img, x_sample, x_samples_ddim - # don't delete x_samples, it is used in the code that called this callback - - temp_images[str(req.session_id) + '/' + str(i)] = buf - partial_images.append({'path': f'/image/tmp/{req.session_id}/{i}'}) - - progress['output'] = partial_images - - yield json.dumps(progress) - - if stop_processing: - raise UserInitiatedStop("User requested that we stop processing") - - # run the handler - try: - if handler == _txt2img: - x_samples = _txt2img(opt_W, opt_H, opt_n_samples, opt_ddim_steps, opt_scale, None, opt_C, opt_f, opt_ddim_eta, c, uc, opt_seed, img_callback, mask, opt_sampler_name) - else: - x_samples = _img2img(init_latent, t_enc, batch_size, opt_scale, c, uc, opt_ddim_steps, opt_ddim_eta, opt_seed, img_callback, mask) - - yield from x_samples - - x_samples = partial_x_samples - except UserInitiatedStop: - if partial_x_samples is None: - continue - - x_samples = partial_x_samples - - print("saving images") - for i in range(batch_size): - - x_samples_ddim = modelFS.decode_first_stage(x_samples[i].unsqueeze(0)) - x_sample = torch.clamp((x_samples_ddim + 1.0) / 2.0, min=0.0, max=1.0) - x_sample = 
255.0 * rearrange(x_sample[0].cpu().numpy(), "c h w -> h w c") - x_sample = x_sample.astype(np.uint8) - img = Image.fromarray(x_sample) - - has_filters = (opt_use_face_correction is not None and opt_use_face_correction.startswith('GFPGAN')) or \ - (opt_use_upscale is not None and opt_use_upscale.startswith('RealESRGAN')) - - return_orig_img = not has_filters or not opt_show_only_filtered - - if stop_processing: - return_orig_img = True - - if opt_save_to_disk_path is not None: - prompt_flattened = filename_regex.sub('_', prompts[0]) - prompt_flattened = prompt_flattened[:50] - - img_id = str(uuid.uuid4())[-8:] - - file_path = f"{prompt_flattened}_{img_id}" - img_out_path = os.path.join(session_out_path, f"{file_path}.{opt_format}") - meta_out_path = os.path.join(session_out_path, f"{file_path}.txt") - - if return_orig_img: - save_image(img, img_out_path) - - save_metadata(meta_out_path, prompts, opt_seed, opt_W, opt_H, opt_ddim_steps, opt_scale, opt_strength, opt_use_face_correction, opt_use_upscale, opt_sampler_name, req.negative_prompt, ckpt_file) - - if return_orig_img: - img_data = img_to_base64_str(img, opt_format) - res_image_orig = ResponseImage(data=img_data, seed=opt_seed) - res.images.append(res_image_orig) - - if opt_save_to_disk_path is not None: - res_image_orig.path_abs = img_out_path - - del img - - if has_filters and not stop_processing: - print('Applying filters..') - - gc() - filters_applied = [] - - if opt_use_face_correction: - _, _, output = model_gfpgan.enhance(x_sample[:,:,::-1], has_aligned=False, only_center_face=False, paste_back=True) - x_sample = output[:,:,::-1] - filters_applied.append(opt_use_face_correction) - - if opt_use_upscale: - output, _ = model_real_esrgan.enhance(x_sample[:,:,::-1]) - x_sample = output[:,:,::-1] - filters_applied.append(opt_use_upscale) - - filtered_image = Image.fromarray(x_sample) - - filtered_img_data = img_to_base64_str(filtered_image, opt_format) - res_image_filtered = ResponseImage(data=filtered_img_data, seed=opt_seed) - res.images.append(res_image_filtered) - - filters_applied = "_".join(filters_applied) - - if opt_save_to_disk_path is not None: - filtered_img_out_path = os.path.join(session_out_path, f"{file_path}_{filters_applied}.{opt_format}") - save_image(filtered_image, filtered_img_out_path) - res_image_filtered.path_abs = filtered_img_out_path - - del filtered_image - - seeds += str(opt_seed) + "," - opt_seed += 1 - - move_fs_to_cpu() - gc() - del x_samples, x_samples_ddim, x_sample - print("memory_final = ", torch.cuda.memory_allocated() / 1e6) - - print('Task completed') - - yield json.dumps(res.json()) - -def save_image(img, img_out_path): - try: - img.save(img_out_path) - except: - print('could not save the file', traceback.format_exc()) - -def save_metadata(meta_out_path, prompts, opt_seed, opt_W, opt_H, opt_ddim_steps, opt_scale, opt_prompt_strength, opt_correct_face, opt_upscale, sampler_name, negative_prompt, ckpt_file): - metadata = f"{prompts[0]}\nWidth: {opt_W}\nHeight: {opt_H}\nSeed: {opt_seed}\nSteps: {opt_ddim_steps}\nGuidance Scale: {opt_scale}\nPrompt Strength: {opt_prompt_strength}\nUse Face Correction: {opt_correct_face}\nUse Upscaling: {opt_upscale}\nSampler: {sampler_name}\nNegative Prompt: {negative_prompt}\nStable Diffusion Model: {ckpt_file + '.ckpt'}" - - try: - with open(meta_out_path, 'w') as f: - f.write(metadata) - except: - print('could not save the file', traceback.format_exc()) - -def _txt2img(opt_W, opt_H, opt_n_samples, opt_ddim_steps, opt_scale, start_code, opt_C, opt_f, opt_ddim_eta, 
c, uc, opt_seed, img_callback, mask, sampler_name): - shape = [opt_n_samples, opt_C, opt_H // opt_f, opt_W // opt_f] - - if device != "cpu": - mem = torch.cuda.memory_allocated() / 1e6 - modelCS.to("cpu") - while torch.cuda.memory_allocated() / 1e6 >= mem: - time.sleep(1) - - if sampler_name == 'ddim': - model.make_schedule(ddim_num_steps=opt_ddim_steps, ddim_eta=opt_ddim_eta, verbose=False) - - samples_ddim = model.sample( - S=opt_ddim_steps, - conditioning=c, - seed=opt_seed, - shape=shape, - verbose=False, - unconditional_guidance_scale=opt_scale, - unconditional_conditioning=uc, - eta=opt_ddim_eta, - x_T=start_code, - img_callback=img_callback, - mask=mask, - sampler = sampler_name, - ) - - yield from samples_ddim - -def _img2img(init_latent, t_enc, batch_size, opt_scale, c, uc, opt_ddim_steps, opt_ddim_eta, opt_seed, img_callback, mask): - # encode (scaled latent) - z_enc = model.stochastic_encode( - init_latent, - torch.tensor([t_enc] * batch_size).to(device), - opt_seed, - opt_ddim_eta, - opt_ddim_steps, - ) - x_T = None if mask is None else init_latent - - # decode it - samples_ddim = model.sample( - t_enc, - c, - z_enc, - unconditional_guidance_scale=opt_scale, - unconditional_conditioning=uc, - img_callback=img_callback, - mask=mask, - x_T=x_T, - sampler = 'ddim' - ) - - yield from samples_ddim - -def move_fs_to_cpu(): - if device != "cpu": - mem = torch.cuda.memory_allocated() / 1e6 - modelFS.to("cpu") - while torch.cuda.memory_allocated() / 1e6 >= mem: - time.sleep(1) - -def gc(): - if device == 'cpu': - return - - torch.cuda.empty_cache() - torch.cuda.ipc_collect() - -# internal - -def chunk(it, size): - it = iter(it) - return iter(lambda: tuple(islice(it, size)), ()) - - -def load_model_from_config(ckpt, verbose=False): - print(f"Loading model from {ckpt}") - pl_sd = torch.load(ckpt, map_location="cpu") - if "global_step" in pl_sd: - print(f"Global Step: {pl_sd['global_step']}") - sd = pl_sd["state_dict"] - return sd - -# utils -class UserInitiatedStop(Exception): - pass - -def load_img(img_str, w0, h0): - image = base64_str_to_img(img_str).convert("RGB") - w, h = image.size - print(f"loaded input image of size ({w}, {h}) from base64") - if h0 is not None and w0 is not None: - h, w = h0, w0 - - w, h = map(lambda x: x - x % 64, (w, h)) # resize to integer multiple of 64 - image = image.resize((w, h), resample=Image.Resampling.LANCZOS) - image = np.array(image).astype(np.float32) / 255.0 - image = image[None].transpose(0, 3, 1, 2) - image = torch.from_numpy(image) - return 2.*image - 1. 
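# --- Illustrative aside (not part of the original file) ----------------------
# The request/response path in this app moves images around as base64-encoded
# strings (req.init_image in, ResponseImage.data out), and load_img() above
# snaps the decoded image to a multiple of 64 and rescales pixels to [-1, 1].
# The sketch below is a minimal, self-contained round trip under those
# assumptions; names such as `demo_round_trip` are illustrative only, and the
# final torch conversion done by load_img() is omitted here.
import base64
from io import BytesIO

import numpy as np
from PIL import Image


def demo_round_trip(w=300, h=200):
    # encode a dummy RGB image the way the API expects to receive it
    img = Image.new("RGB", (w, h), color=(128, 64, 32))
    buf = BytesIO()
    img.save(buf, format="PNG")
    img_str = "data:image/png;base64," + base64.b64encode(buf.getvalue()).decode()

    # decode it again and rescale roughly the way load_img() does
    raw = base64.b64decode(img_str.split(",", 1)[1])
    decoded = Image.open(BytesIO(raw)).convert("RGB")
    w2, h2 = (x - x % 64 for x in decoded.size)          # snap to multiples of 64
    decoded = decoded.resize((w2, h2), resample=Image.Resampling.LANCZOS)
    arr = np.array(decoded).astype(np.float32) / 255.0   # [0, 1]
    return 2.0 * arr - 1.0                               # [-1, 1], as the model expects


if __name__ == "__main__":
    print(demo_round_trip().shape)   # (128, 256, 3) for a 300x200 input
# -----------------------------------------------------------------------------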
- -def load_mask(mask_str, h0, w0, newH, newW, invert=False): - image = base64_str_to_img(mask_str).convert("RGB") - w, h = image.size - print(f"loaded input mask of size ({w}, {h})") - - if invert: - print("inverted") - image = ImageOps.invert(image) - # where_0, where_1 = np.where(image == 0), np.where(image == 255) - # image[where_0], image[where_1] = 255, 0 - - if h0 is not None and w0 is not None: - h, w = h0, w0 - - w, h = map(lambda x: x - x % 64, (w, h)) # resize to integer multiple of 64 - - print(f"New mask size ({w}, {h})") - image = image.resize((newW, newH), resample=Image.Resampling.LANCZOS) - image = np.array(image) - - image = image.astype(np.float32) / 255.0 - image = image[None].transpose(0, 3, 1, 2) - image = torch.from_numpy(image) - return image - -# https://stackoverflow.com/a/61114178 -def img_to_base64_str(img, output_format="PNG"): - buffered = BytesIO() - img.save(buffered, format=output_format) - buffered.seek(0) - img_byte = buffered.getvalue() - img_str = "data:image/png;base64," + base64.b64encode(img_byte).decode() - return img_str - -def base64_str_to_img(img_str): - img_str = img_str[len("data:image/png;base64,"):] - data = base64.b64decode(img_str) - buffered = BytesIO(data) - img = Image.open(buffered) - return img - - - - - - - - - - - - - - - - -from fastapi import FastAPI, HTTPException -from fastapi.staticfiles import StaticFiles -from starlette.responses import FileResponse, StreamingResponse -from pydantic import BaseModel -import logging - -from sd_internal import Request, Response - -import json -import traceback - -import sys -import os - -SD_DIR = os.getcwd() -print('started in ', SD_DIR) - -#SD_UI_DIR = os.getenv('SD_UI_PATH', None) -#sys.path.append(os.path.dirname(SD_UI_DIR)) - -#CONFIG_DIR = os.path.abspath(os.path.join(SD_UI_DIR, '..', 'scripts')) -MODELS_DIR = os.path.abspath(os.path.join(SD_DIR, '..', 'models')) - -OUTPUT_DIRNAME = "Stable Diffusion UI" # in the user's home folder - -app = FastAPI() - -model_loaded = False -model_is_loading = False - -modifiers_cache = None -outpath = os.path.join(os.path.expanduser("~"), OUTPUT_DIRNAME) - -# defaults from https://huggingface.co/blog/stable_diffusion -class ImageRequest(BaseModel): - session_id: str = "session" - prompt: str = "" - negative_prompt: str = "" - init_image: str = None # base64 - mask: str = None # base64 - num_outputs: int = 1 - num_inference_steps: int = 50 - guidance_scale: float = 7.5 - width: int = 512 - height: int = 512 - seed: int = 42 - prompt_strength: float = 0.8 - sampler: str = None # "ddim", "plms", "heun", "euler", "euler_a", "dpm2", "dpm2_a", "lms" - # allow_nsfw: bool = False - save_to_disk_path: str = None - turbo: bool = True - use_cpu: bool = False - use_full_precision: bool = False - use_face_correction: str = None # or "GFPGANv1.3" - use_upscale: str = None # or "RealESRGAN_x4plus" or "RealESRGAN_x4plus_anime_6B" - use_stable_diffusion_model: str = "sd-v1-4" - show_only_filtered_image: bool = False - output_format: str = "jpeg" # or "png" - - stream_progress_updates: bool = False - stream_image_progress: bool = False - -from starlette.responses import FileResponse, StreamingResponse - -def resolve_model_to_use(model_name): - if model_name in ('sd-v1-4', 'custom-model'): - model_path = os.path.join(MODELS_DIR, 'stable-diffusion', model_name) - - legacy_model_path = os.path.join(SD_DIR, model_name) - if not os.path.exists(model_path + '.ckpt') and os.path.exists(legacy_model_path + '.ckpt'): - model_path = legacy_model_path - else: - model_path = 
os.path.join(MODELS_DIR, 'stable-diffusion', model_name) - - return model_path - -def image(req : ImageRequest): - r = Request() - r.session_id = req.session_id - r.prompt = req.prompt - r.negative_prompt = req.negative_prompt - r.init_image = req.init_image - r.mask = req.mask - r.num_outputs = req.num_outputs - r.num_inference_steps = req.num_inference_steps - r.guidance_scale = req.guidance_scale - r.width = req.width - r.height = req.height - r.seed = req.seed - r.prompt_strength = req.prompt_strength - r.sampler = req.sampler - # r.allow_nsfw = req.allow_nsfw - r.turbo = req.turbo - r.use_cpu = req.use_cpu - r.use_full_precision = req.use_full_precision - r.save_to_disk_path = req.save_to_disk_path - r.use_upscale: str = req.use_upscale - r.use_face_correction = req.use_face_correction - r.show_only_filtered_image = req.show_only_filtered_image - r.output_format = req.output_format - - r.stream_progress_updates = True # the underlying implementation only supports streaming - r.stream_image_progress = req.stream_image_progress - - r.use_stable_diffusion_model = resolve_model_to_use(req.use_stable_diffusion_model) - - save_model_to_config(req.use_stable_diffusion_model) - - try: - if not req.stream_progress_updates: - r.stream_image_progress = False - - res = mk_img(r) - - if req.stream_progress_updates: - return StreamingResponse(res, media_type='application/json') - else: # compatibility mode: buffer the streaming responses, and return the last one - last_result = None - - for result in res: - last_result = result - - return json.loads(last_result) - except Exception as e: - print(traceback.format_exc()) - return HTTPException(status_code=500, detail=str(e)) - - -def getConfig(): - try: - config_json_path = os.path.join(CONFIG_DIR, 'config.json') - - if not os.path.exists(config_json_path): - return {} - - with open(config_json_path, 'r') as f: - return json.load(f) - except Exception as e: - return {} - -# needs to support the legacy installations -def get_initial_model_to_load(): - custom_weight_path = os.path.join(SD_DIR, 'custom-model.ckpt') - ckpt_to_use = "sd-v1-4" if not os.path.exists(custom_weight_path) else "custom-model" - - ckpt_to_use = os.path.join(SD_DIR, ckpt_to_use) - - config = getConfig() - if 'model' in config and 'stable-diffusion' in config['model']: - model_name = config['model']['stable-diffusion'] - model_path = resolve_model_to_use(model_name) - - if os.path.exists(model_path + '.ckpt'): - ckpt_to_use = model_path - else: - print('Could not find the configured custom model at:', model_path + '.ckpt', '. Using the default one:', ckpt_to_use + '.ckpt') - - return ckpt_to_use - - -#model_is_loading = True -#load_model_ckpt(get_initial_model_to_load(), "cuda") -#model_loaded = True -#model_is_loading = False - -#mk_img(ImageRequest) -======= ->>>>>>> 28cea4f0f809fb6ae1b8e8463506afd99e2a9d19 diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/BabylonLoader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/BabylonLoader.js deleted file mode 100644 index 3eee2e88bd88ca7c255973ca0dd6ded8ff68a8f9..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/BabylonLoader.js +++ /dev/null @@ -1,255 +0,0 @@ -/** - * @author mrdoob / http://mrdoob.com/ - * @author Mugen87 / https://github.com/Mugen87 - */ - -THREE.BabylonLoader = function ( manager ) { - - this.manager = ( manager !== undefined ) ? 
manager : THREE.DefaultLoadingManager; - -}; - -THREE.BabylonLoader.prototype = { - - constructor: THREE.BabylonLoader, - - load: function ( url, onLoad, onProgress, onError ) { - - var scope = this; - - var loader = new THREE.FileLoader( scope.manager ); - loader.setPath( scope.path ); - loader.load( url, function ( text ) { - - onLoad( scope.parse( JSON.parse( text ) ) ); - - }, onProgress, onError ); - - }, - - setPath: function ( value ) { - - this.path = value; - return this; - - }, - - parse: function ( json ) { - - function parseMaterials( json ) { - - var materials = {}; - - for ( var i = 0, l = json.materials.length; i < l; i ++ ) { - - var data = json.materials[ i ]; - - var material = new THREE.MeshPhongMaterial(); - material.name = data.name; - material.color.fromArray( data.diffuse ); - material.emissive.fromArray( data.emissive ); - material.specular.fromArray( data.specular ); - material.shininess = data.specularPower; - material.opacity = data.alpha; - - materials[ data.id ] = material; - - } - - if ( json.multiMaterials ) { - - for ( var i = 0, l = json.multiMaterials.length; i < l; i ++ ) { - - var data = json.multiMaterials[ i ]; - - console.warn( 'THREE.BabylonLoader: Multi materials not yet supported.' ); - - materials[ data.id ] = new THREE.MeshPhongMaterial(); - - } - - } - - return materials; - - } - - function parseGeometry( json ) { - - var geometry = new THREE.BufferGeometry(); - - var indices = json.indices; - var positions = json.positions; - var normals = json.normals; - var uvs = json.uvs; - - // indices - - geometry.setIndex( indices ); - - // positions - - for ( var j = 2, jl = positions.length; j < jl; j += 3 ) { - - positions[ j ] = - positions[ j ]; - - } - - geometry.addAttribute( 'position', new THREE.Float32BufferAttribute( positions, 3 ) ); - - // normals - - if ( normals ) { - - for ( var j = 2, jl = normals.length; j < jl; j += 3 ) { - - normals[ j ] = - normals[ j ]; - - } - - geometry.addAttribute( 'normal', new THREE.Float32BufferAttribute( normals, 3 ) ); - - } - - // uvs - - if ( uvs ) { - - geometry.addAttribute( 'uv', new THREE.Float32BufferAttribute( uvs, 2 ) ); - - } - - // offsets - - var subMeshes = json.subMeshes; - - if ( subMeshes ) { - - for ( var j = 0, jl = subMeshes.length; j < jl; j ++ ) { - - var subMesh = subMeshes[ j ]; - - geometry.addGroup( subMesh.indexStart, subMesh.indexCount ); - - } - - } - - return geometry; - - } - - function parseObjects( json, materials ) { - - var objects = {}; - var scene = new THREE.Scene(); - - var cameras = json.cameras; - - for ( var i = 0, l = cameras.length; i < l; i ++ ) { - - var data = cameras[ i ]; - - var camera = new THREE.PerspectiveCamera( ( data.fov / Math.PI ) * 180, 1.33, data.minZ, data.maxZ ); - - camera.name = data.name; - camera.position.fromArray( data.position ); - if ( data.rotation ) camera.rotation.fromArray( data.rotation ); - - objects[ data.id ] = camera; - - } - - var lights = json.lights; - - for ( var i = 0, l = lights.length; i < l; i ++ ) { - - var data = lights[ i ]; - - var light; - - switch ( data.type ) { - - case 0: - light = new THREE.PointLight(); - break; - - case 1: - light = new THREE.DirectionalLight(); - break; - - case 2: - light = new THREE.SpotLight(); - break; - - case 3: - light = new THREE.HemisphereLight(); - break; - - } - - light.name = data.name; - if ( data.position ) light.position.set( data.position[ 0 ], data.position[ 1 ], - data.position[ 2 ] ); - light.color.fromArray( data.diffuse ); - if ( data.groundColor ) 
light.groundColor.fromArray( data.groundColor ); - if ( data.intensity ) light.intensity = data.intensity; - - objects[ data.id ] = light; - - scene.add( light ); - - } - - var meshes = json.meshes; - - for ( var i = 0, l = meshes.length; i < l; i ++ ) { - - var data = meshes[ i ]; - - var object; - - if ( data.indices ) { - - var geometry = parseGeometry( data ); - - object = new THREE.Mesh( geometry, materials[ data.materialId ] ); - - } else { - - object = new THREE.Group(); - - } - - object.name = data.name; - object.position.set( data.position[ 0 ], data.position[ 1 ], - data.position[ 2 ] ); - object.rotation.fromArray( data.rotation ); - if ( data.rotationQuaternion ) object.quaternion.fromArray( data.rotationQuaternion ); - object.scale.fromArray( data.scaling ); - // object.visible = data.isVisible; - - if ( data.parentId ) { - - objects[ data.parentId ].add( object ); - - } else { - - scene.add( object ); - - } - - objects[ data.id ] = object; - - } - - return scene; - - } - - var materials = parseMaterials( json ); - var scene = parseObjects( json, materials ); - - return scene; - - } - -}; diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/AfterimageShader.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/AfterimageShader.js deleted file mode 100644 index ab4a5df03444d41b5545004a72e0cd81649f1abf..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/shaders/AfterimageShader.js +++ /dev/null @@ -1,60 +0,0 @@ -/** - * @author HypnosNova / https://www.threejs.org.cn/gallery/ - * - * Afterimage shader - * I created this effect inspired by a demo on codepen: - * https://codepen.io/brunoimbrizi/pen/MoRJaN?page=1& - */ - -THREE.AfterimageShader = { - - uniforms: { - - "damp": { value: 0.96 }, - "tOld": { value: null }, - "tNew": { value: null } - - }, - - vertexShader: [ - - "varying vec2 vUv;", - - "void main() {", - - "vUv = uv;", - "gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );", - - "}" - - ].join( "\n" ), - - fragmentShader: [ - - "uniform float damp;", - - "uniform sampler2D tOld;", - "uniform sampler2D tNew;", - - "varying vec2 vUv;", - - "vec4 when_gt( vec4 x, float y ) {", - - "return max( sign( x - y ), 0.0 );", - - "}", - - "void main() {", - - "vec4 texelOld = texture2D( tOld, vUv );", - "vec4 texelNew = texture2D( tNew, vUv );", - - "texelOld *= damp * when_gt( texelOld, 0.1 );", - - "gl_FragColor = max(texelNew, texelOld);", - - "}" - - ].join( "\n" ) - -}; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/materials/MeshPhysicalMaterial.js b/spaces/banana-projects/web3d/node_modules/three/src/materials/MeshPhysicalMaterial.js deleted file mode 100644 index a4ea18c03cddbb97e0f9fdc6505c47083afdb4f0..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/materials/MeshPhysicalMaterial.js +++ /dev/null @@ -1,51 +0,0 @@ -import { MeshStandardMaterial } from './MeshStandardMaterial.js'; - -/** - * @author WestLangley / http://github.com/WestLangley - * - * parameters = { - * reflectivity: - * clearCoat: - * clearCoatRoughness: - * } - */ - -function MeshPhysicalMaterial( parameters ) { - - MeshStandardMaterial.call( this ); - - this.defines = { 'PHYSICAL': '' }; - - this.type = 'MeshPhysicalMaterial'; - - this.reflectivity = 0.5; // maps to F0 = 0.04 - - this.clearCoat = 0.0; - this.clearCoatRoughness = 0.0; - - this.setValues( parameters ); - -} - -MeshPhysicalMaterial.prototype = 
Object.create( MeshStandardMaterial.prototype ); -MeshPhysicalMaterial.prototype.constructor = MeshPhysicalMaterial; - -MeshPhysicalMaterial.prototype.isMeshPhysicalMaterial = true; - -MeshPhysicalMaterial.prototype.copy = function ( source ) { - - MeshStandardMaterial.prototype.copy.call( this, source ); - - this.defines = { 'PHYSICAL': '' }; - - this.reflectivity = source.reflectivity; - - this.clearCoat = source.clearCoat; - this.clearCoatRoughness = source.clearCoatRoughness; - - return this; - -}; - - -export { MeshPhysicalMaterial }; diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327002904.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327002904.py deleted file mode 100644 index 0a38d76ce2ad23d2334dcc1d23d9094842aa1493..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327002904.py +++ /dev/null @@ -1,65 +0,0 @@ -import os -#os.system("pip install gfpgan") - -#os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -import warnings -warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. ' - 'If you really want to use it, please modify the corresponding codes.') -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANv1.3.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - return Image.fromarray(restored_faces[0][:,:,::-1]) - -title = "GFP-GAN" -description = "Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please click submit only once" -article = "

Towards Real-World Blind Face Restoration with Generative Facial Prior | Github Repo
" -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True) \ No newline at end of file diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327003455.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327003455.py deleted file mode 100644 index 38234670a7723f4e5edc275d1fd766fb46d7709d..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327003455.py +++ /dev/null @@ -1,65 +0,0 @@ -import os -#os.system("pip install gfpgan") - -#os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -import warnings -warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. ' - 'If you really want to use it, please modify the corresponding codes.') -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANv1.3.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - cropped_faces, restored_img , restored_faces= restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - return Image.fromarray(restored_img[1][:,:,::-1]) - -title = "GFP-GAN" -description = "Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please click submit only once" -article = "

Towards Real-World Blind Face Restoration with Generative Facial Prior | Github Repo
" -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True) \ No newline at end of file diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/models/video_base_model.py b/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/models/video_base_model.py deleted file mode 100644 index 9f7993a15e585526135d1ede094f4dcff47f64db..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/models/video_base_model.py +++ /dev/null @@ -1,160 +0,0 @@ -import torch -from collections import Counter -from os import path as osp -from torch import distributed as dist -from tqdm import tqdm - -from basicsr.metrics import calculate_metric -from basicsr.utils import get_root_logger, imwrite, tensor2img -from basicsr.utils.dist_util import get_dist_info -from basicsr.utils.registry import MODEL_REGISTRY -from .sr_model import SRModel - - -@MODEL_REGISTRY.register() -class VideoBaseModel(SRModel): - """Base video SR model.""" - - def dist_validation(self, dataloader, current_iter, tb_logger, save_img): - dataset = dataloader.dataset - dataset_name = dataset.opt['name'] - with_metrics = self.opt['val']['metrics'] is not None - # initialize self.metric_results - # It is a dict: { - # 'folder1': tensor (num_frame x len(metrics)), - # 'folder2': tensor (num_frame x len(metrics)) - # } - if with_metrics: - if not hasattr(self, 'metric_results'): # only execute in the first run - self.metric_results = {} - num_frame_each_folder = Counter(dataset.data_info['folder']) - for folder, num_frame in num_frame_each_folder.items(): - self.metric_results[folder] = torch.zeros( - num_frame, len(self.opt['val']['metrics']), dtype=torch.float32, device='cuda') - # initialize the best metric results - self._initialize_best_metric_results(dataset_name) - # zero self.metric_results - rank, world_size = get_dist_info() - if with_metrics: - for _, tensor in self.metric_results.items(): - tensor.zero_() - - metric_data = dict() - # record all frames (border and center frames) - if rank == 0: - pbar = tqdm(total=len(dataset), unit='frame') - for idx in range(rank, len(dataset), world_size): - val_data = dataset[idx] - val_data['lq'].unsqueeze_(0) - val_data['gt'].unsqueeze_(0) - folder = val_data['folder'] - frame_idx, max_idx = val_data['idx'].split('/') - lq_path = val_data['lq_path'] - - self.feed_data(val_data) - self.test() - visuals = self.get_current_visuals() - result_img = tensor2img([visuals['result']]) - metric_data['img'] = result_img - if 'gt' in visuals: - gt_img = tensor2img([visuals['gt']]) - metric_data['img2'] = gt_img - del self.gt - - # tentative for out of GPU memory - del self.lq - del self.output - torch.cuda.empty_cache() - - if save_img: - if self.opt['is_train']: - raise NotImplementedError('saving image is not supported during training.') - else: - if 'vimeo' in dataset_name.lower(): # vimeo90k dataset - split_result = lq_path.split('/') - img_name = f'{split_result[-3]}_{split_result[-2]}_{split_result[-1].split(".")[0]}' - else: # other datasets, e.g., REDS, Vid4 - img_name = osp.splitext(osp.basename(lq_path))[0] - - if self.opt['val']['suffix']: - save_img_path = osp.join(self.opt['path']['visualization'], dataset_name, folder, - f'{img_name}_{self.opt["val"]["suffix"]}.png') - else: - 
save_img_path = osp.join(self.opt['path']['visualization'], dataset_name, folder, - f'{img_name}_{self.opt["name"]}.png') - imwrite(result_img, save_img_path) - - if with_metrics: - # calculate metrics - for metric_idx, opt_ in enumerate(self.opt['val']['metrics'].values()): - result = calculate_metric(metric_data, opt_) - self.metric_results[folder][int(frame_idx), metric_idx] += result - - # progress bar - if rank == 0: - for _ in range(world_size): - pbar.update(1) - pbar.set_description(f'Test {folder}: {int(frame_idx) + world_size}/{max_idx}') - if rank == 0: - pbar.close() - - if with_metrics: - if self.opt['dist']: - # collect data among GPUs - for _, tensor in self.metric_results.items(): - dist.reduce(tensor, 0) - dist.barrier() - else: - pass # assume use one gpu in non-dist testing - - if rank == 0: - self._log_validation_metric_values(current_iter, dataset_name, tb_logger) - - def nondist_validation(self, dataloader, current_iter, tb_logger, save_img): - logger = get_root_logger() - logger.warning('nondist_validation is not implemented. Run dist_validation.') - self.dist_validation(dataloader, current_iter, tb_logger, save_img) - - def _log_validation_metric_values(self, current_iter, dataset_name, tb_logger): - # ----------------- calculate the average values for each folder, and for each metric ----------------- # - # average all frames for each sub-folder - # metric_results_avg is a dict:{ - # 'folder1': tensor (len(metrics)), - # 'folder2': tensor (len(metrics)) - # } - metric_results_avg = { - folder: torch.mean(tensor, dim=0).cpu() - for (folder, tensor) in self.metric_results.items() - } - # total_avg_results is a dict: { - # 'metric1': float, - # 'metric2': float - # } - total_avg_results = {metric: 0 for metric in self.opt['val']['metrics'].keys()} - for folder, tensor in metric_results_avg.items(): - for idx, metric in enumerate(total_avg_results.keys()): - total_avg_results[metric] += metric_results_avg[folder][idx].item() - # average among folders - for metric in total_avg_results.keys(): - total_avg_results[metric] /= len(metric_results_avg) - # update the best metric result - self._update_best_metric_result(dataset_name, metric, total_avg_results[metric], current_iter) - - # ------------------------------------------ log the metric ------------------------------------------ # - log_str = f'Validation {dataset_name}\n' - for metric_idx, (metric, value) in enumerate(total_avg_results.items()): - log_str += f'\t # {metric}: {value:.4f}' - for folder, tensor in metric_results_avg.items(): - log_str += f'\t # {folder}: {tensor[metric_idx].item():.4f}' - if hasattr(self, 'best_metric_results'): - log_str += (f'\n\t Best: {self.best_metric_results[dataset_name][metric]["val"]:.4f} @ ' - f'{self.best_metric_results[dataset_name][metric]["iter"]} iter') - log_str += '\n' - - logger = get_root_logger() - logger.info(log_str) - if tb_logger: - for metric_idx, (metric, value) in enumerate(total_avg_results.items()): - tb_logger.add_scalar(f'metrics/{metric}', value, current_iter) - for folder, tensor in metric_results_avg.items(): - tb_logger.add_scalar(f'metrics/{metric}/{folder}', tensor[metric_idx].item(), current_iter) diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/utils/download_util.py b/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/utils/download_util.py deleted file mode 100644 index 6adda71320625242b0107f77d328e7afa236aee6..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/utils/download_util.py +++ /dev/null @@ 
-1,99 +0,0 @@ -import math -import os -import requests -from torch.hub import download_url_to_file, get_dir -from tqdm import tqdm -from urllib.parse import urlparse - -from .misc import sizeof_fmt - - -def download_file_from_google_drive(file_id, save_path): - """Download files from google drive. - - Ref: - https://stackoverflow.com/questions/25010369/wget-curl-large-file-from-google-drive # noqa E501 - - Args: - file_id (str): File id. - save_path (str): Save path. - """ - - session = requests.Session() - URL = 'https://docs.google.com/uc?export=download' - params = {'id': file_id} - - response = session.get(URL, params=params, stream=True) - token = get_confirm_token(response) - if token: - params['confirm'] = token - response = session.get(URL, params=params, stream=True) - - # get file size - response_file_size = session.get(URL, params=params, stream=True, headers={'Range': 'bytes=0-2'}) - if 'Content-Range' in response_file_size.headers: - file_size = int(response_file_size.headers['Content-Range'].split('/')[1]) - else: - file_size = None - - save_response_content(response, save_path, file_size) - - -def get_confirm_token(response): - for key, value in response.cookies.items(): - if key.startswith('download_warning'): - return value - return None - - -def save_response_content(response, destination, file_size=None, chunk_size=32768): - if file_size is not None: - pbar = tqdm(total=math.ceil(file_size / chunk_size), unit='chunk') - - readable_file_size = sizeof_fmt(file_size) - else: - pbar = None - - with open(destination, 'wb') as f: - downloaded_size = 0 - for chunk in response.iter_content(chunk_size): - downloaded_size += chunk_size - if pbar is not None: - pbar.update(1) - pbar.set_description(f'Download {sizeof_fmt(downloaded_size)} / {readable_file_size}') - if chunk: # filter out keep-alive new chunks - f.write(chunk) - if pbar is not None: - pbar.close() - - -def load_file_from_url(url, model_dir=None, progress=True, file_name=None): - """Load file form http url, will download models if necessary. - - Ref:https://github.com/1adrianb/face-alignment/blob/master/face_alignment/utils.py - - Args: - url (str): URL to be downloaded. - model_dir (str): The path to save the downloaded model. Should be a full path. If None, use pytorch hub_dir. - Default: None. - progress (bool): Whether to show the download progress. Default: True. - file_name (str): The downloaded file name. If None, use the file name in the url. Default: None. - - Returns: - str: The path to the downloaded file. - """ - if model_dir is None: # use the pytorch hub_dir - hub_dir = get_dir() - model_dir = os.path.join(hub_dir, 'checkpoints') - - os.makedirs(model_dir, exist_ok=True) - - parts = urlparse(url) - filename = os.path.basename(parts.path) - if file_name is not None: - filename = file_name - cached_file = os.path.abspath(os.path.join(model_dir, filename)) - if not os.path.exists(cached_file): - print(f'Downloading: "{url}" to {cached_file}\n') - download_url_to_file(url, cached_file, hash_prefix=None, progress=progress) - return cached_file diff --git a/spaces/bioriAsaeru/text-to-voice/A Giant Amp 39s Field Hd Full Movie Download LINK.md b/spaces/bioriAsaeru/text-to-voice/A Giant Amp 39s Field Hd Full Movie Download LINK.md deleted file mode 100644 index 7d4982f367c943ee5328ee7d80d25441e0e09a00..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/A Giant Amp 39s Field Hd Full Movie Download LINK.md +++ /dev/null @@ -1,6 +0,0 @@ -

A Giant's Field Hd Full Movie Download


DOWNLOAD ✺✺✺ https://urloso.com/2uyOah



-
- aaccfb2cb3
-
-
-

diff --git a/spaces/bioriAsaeru/text-to-voice/JetBrains Rider 2019.3.3 Win Mac Linux ? __TOP__.md b/spaces/bioriAsaeru/text-to-voice/JetBrains Rider 2019.3.3 Win Mac Linux ? __TOP__.md deleted file mode 100644 index 1b55f96be5ec54b22e3ec6b833186ee23f9e5759..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/JetBrains Rider 2019.3.3 Win Mac Linux ? __TOP__.md +++ /dev/null @@ -1,16 +0,0 @@ -

JetBrains Rider 2019.3.3 Win Mac Linux –


Download » https://urloso.com/2uyR9R



-
-In the lower left-hand corner of the window, you should see a button that says free 30-day evaluation or something similar.. Get a free 30-day evaluation of the latest version of Rider for Windows, macOS or Linux.. Make sure you have a working internet connection, and then select Get Installer to download the installer for Rider. - -. Rider Pro/Team Rider Pro/Team Dual licenses include a free 30-day evaluation of all product versions. Rider Pro/Team is a professional and team edition of Rider, the professional engineering development environment. It is designed specifically for professional use, and provides features that are not included in Rider or Rider Dual. Includes advanced language tools for C, C++, C#, F#, VB.NET, SQL, JavaScript, HTML, XML, CSS, ASP.NET, JSON, PHP, Ruby, Python, R, Golang, and XML/XSD. Includes unlimited number of installed users, unlimited connections, unlimited versions, unlimited physical resources and unlimited data files. Also included are more than 20 database types, including MyISAM, MariaDB, and MySQL. - -. Details: The software download includes a 30-day free evaluation of the software. The free evaluation license can be used on one computer or in a network license. - -. Details: This product requires a serial number, activation code, and product key to be used. Installation instructions will be sent via email after the serial number is entered and the product key is confirmed. The product key can be used on up to two computers, each of which is installed by a different user. - -. Details: Requires a serial number, activation code, and product key to be used. Installation instructions will be sent via email after the serial number is entered and the product key is confirmed. The product key can be used on up to two computers, each of which is installed by a different user. - -. Details: Requires a serial number, activation code, and product key to be used. Installation instructions will be sent via email after the serial number is entered and the product key is confirmed. The product key can be used on up to two computers, each of which is 4fefd39f24
-
-
-

diff --git a/spaces/bioriAsaeru/text-to-voice/Kabhi Kahin 2 Movie In Hindi 720p Download ((HOT)) Torrent.md b/spaces/bioriAsaeru/text-to-voice/Kabhi Kahin 2 Movie In Hindi 720p Download ((HOT)) Torrent.md deleted file mode 100644 index a21dee2c6d0b1f991271a4e77f135220acaab52a..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Kabhi Kahin 2 Movie In Hindi 720p Download ((HOT)) Torrent.md +++ /dev/null @@ -1,12 +0,0 @@ -

Kabhi Kahin 2 Movie In Hindi 720p Download Torrent


Download File - https://urloso.com/2uyQA3



-
-atreya sasthak serial full video, free download atreya sasthak mp3 song, kahin bhi kahin bhi online free hindi mp4 video song, hindi movie songs download full hd 720p atreya sasthak, download hindi movie song. Atreya sasthak serial full video, free download atreya sasthak mp3 song, kahin bhi kahin bhi online free hindi mp4 video song, hindi movie songs download full hd 720p. It's Bollywood Drama film, it's action, love story, romantic movie. that's story of the movie. most of the story of the movie is about man, power and money. The movie is story of different characters. and the story of the movie is about man's future. watch the best indian kollywood movie(it's. Free download atreya sasthak serial full video, free download atreya sasthak mp3 song, kahin bhi kahin bhi online free hindi mp4 video song, hindi movie songs download full hd 720p. Atreya sasthak serial full video, free download atreya sasthak mp3 song, kahin bhi kahin bhi online free hindi mp4 video song, hindi movie songs download full hd 720p. It's Bollywood Drama film, it's action, love story, romantic movie. that's story of the movie. most of the story of the movie is about man, power and money. The movie is story of different characters. and the story of the movie is about man's future. watch the best indian kollywood movie(it's.1. Field of the Invention - -The invention relates generally to the field of well bore hydrocarbon fluid production, and more particularly to systems and methods for recovering methane gas from a well bore hydrocarbon fluid production stream. - -2. Background of the Invention - -In the oil and gas industry, it is desirable to extract oil from oil bearing formations or reservoirs that exist below the earth's surface and bring the extracted oil to the surface for collection, processing, and transport to oil refineries or other locations. The most common method for extracting oil from subterranean reservoirs is by utilizing the natural pressure that exists within the reservoir to force the oil to the surface. In this method, referred to as � 4fefd39f24
-
-
-

diff --git a/spaces/brainblow/beat_remixer/beat_manipulator/beatmap.py b/spaces/brainblow/beat_remixer/beat_manipulator/beatmap.py deleted file mode 100644 index 7536a8b66a139d54d7b47abce5f115cabeb8f6fa..0000000000000000000000000000000000000000 --- a/spaces/brainblow/beat_remixer/beat_manipulator/beatmap.py +++ /dev/null @@ -1,195 +0,0 @@ -import numpy as np -from . import utils - - -def scale(beatmap:np.ndarray, scale:float, log = True, integer = True) -> np.ndarray: - if isinstance(scale, str): scale = utils._safer_eval(scale) - assert scale>0, f"scale should be > 0, your scale is {scale}" - if scale == 1: return beatmap - else: - import math - if log is True: print(f'scale={scale}; ') - a = 0 - b = np.array([], dtype=int) - if scale%1==0: - while a < len(beatmap): - b = np.append(b, beatmap[int(a)]) - a += scale - else: - if integer is True: - while a + 1 < len(beatmap): - b = np.append(b, int((1 - (a % 1)) * beatmap[math.floor(a)] + (a % 1) * beatmap[math.ceil(a)])) - a += scale - else: - while a + 1 < len(beatmap): - b = np.append(b, (1 - (a % 1)) * beatmap[math.floor(a)] + (a % 1) * beatmap[math.ceil(a)]) - a += scale - return b - -def shift(beatmap:np.ndarray, shift:float, log = True, mode = 1) -> np.ndarray: - if isinstance(shift, str): shift = utils._safer_eval(shift) - if shift == 0: return beatmap - # positive shift - elif shift > 0: - # full value of beats is removed from the beginning - if shift >= 1: beatmap = beatmap[int(shift//1):] - # shift beatmap by the decimal value - if shift%1 != 0: - shift = shift%1 - for i in range(len(beatmap) - int(shift) - 1): - beatmap[i] = int(beatmap[i] + shift * (beatmap[i + 1] - beatmap[i])) - - # negative shift - else: - shift = -shift - # full values are inserted in between first beats - if shift >= 1: - if mode == 1: - step = int((beatmap[1] - beatmap[0]) / (int(shift//1) + 1)) - beatmap = np.insert(arr = beatmap, obj = 1, values = np.linspace(start = beatmap[0] + step - 1, stop = 1 + beatmap[1] - step, num = int(shift//1))) - elif mode == 2: - for i in range(int(shift//1)): - beatmap = np.insert(arr = beatmap, obj = (i*2)+1, values = int((beatmap[i*2] + beatmap[(i*2)+1])/2)) - # shift beatmap by the decimal value - if shift%1 != 0: - shift = shift%1 - for i in reversed(range(len(beatmap))): - if i==0: continue - beatmap[i] = int(beatmap[i] - shift * (beatmap[i] - beatmap[i-1])) - return beatmap - -def generate(audio: np.ndarray, sr: int, lib='madmom.BeatDetectionProcessor', caching=True, filename: str = None, log = True, load_settings = True, split=None): - """Creates beatmap attribute with a list of positions of beats in samples.""" - if log is True: print(f'Analyzing beats using {lib}; ', end='') - - # load a beatmap if it is cached: - if caching is True and filename is not None: - audio_id=hex(len(audio[0])) - import os - if not os.path.exists('beat_manipulator/beatmaps'): - os.mkdir('beat_manipulator/beatmaps') - cacheDir="beat_manipulator/beatmaps/" + ''.join(filename.replace('\\', '/').split('/')[-1]) + "_"+lib+"_"+audio_id+'.txt' - try: - beatmap=np.loadtxt(cacheDir, dtype=int) - if log is True: print('loaded cached beatmap.') - except OSError: - if log is True:print("beatmap hasn't been generated yet. Generating...") - beatmap = None - - #generate the beatmap - if beatmap is None: - if 'madmom' in lib.lower(): - from collections.abc import MutableMapping, MutableSequence - import madmom - assert len(audio[0])>sr*2, f'Audio file is too short, len={len(audio[0])} samples, or {len(audio[0])/sr} seconds. 
Minimum length is 2 seconds, audio below that breaks madmom processors.' - if lib=='madmom.BeatTrackingProcessor': - proc = madmom.features.beats.BeatTrackingProcessor(fps=100) - act = madmom.features.beats.RNNBeatProcessor()(madmom.audio.signal.Signal(audio.T, sr)) - beatmap= proc(act)*sr - elif lib=='madmom.BeatTrackingProcessor.constant': - proc = madmom.features.beats.BeatTrackingProcessor(fps=100, look_ahead=None) - act = madmom.features.beats.RNNBeatProcessor()(madmom.audio.signal.Signal(audio.T, sr)) - beatmap= proc(act)*sr - elif lib=='madmom.BeatTrackingProcessor.consistent': - proc = madmom.features.beats.BeatTrackingProcessor(fps=100, look_ahead=None, look_aside=0) - act = madmom.features.beats.RNNBeatProcessor()(madmom.audio.signal.Signal(audio.T, sr)) - beatmap= proc(act)*sr - elif lib=='madmom.BeatDetectionProcessor': - proc = madmom.features.beats.BeatDetectionProcessor(fps=100) - act = madmom.features.beats.RNNBeatProcessor()(madmom.audio.signal.Signal(audio.T, sr)) - beatmap= proc(act)*sr - elif lib=='madmom.BeatDetectionProcessor.consistent': - proc = madmom.features.beats.BeatDetectionProcessor(fps=100, look_aside=0) - act = madmom.features.beats.RNNBeatProcessor()(madmom.audio.signal.Signal(audio.T, sr)) - beatmap= proc(act)*sr - elif lib=='madmom.CRFBeatDetectionProcessor': - proc = madmom.features.beats.CRFBeatDetectionProcessor(fps=100) - act = madmom.features.beats.RNNBeatProcessor()(madmom.audio.signal.Signal(audio.T, sr)) - beatmap= proc(act)*sr - elif lib=='madmom.CRFBeatDetectionProcessor.constant': - proc = madmom.features.beats.CRFBeatDetectionProcessor(fps=100, use_factors=True, factors=[0.5, 1, 2]) - act = madmom.features.beats.RNNBeatProcessor()(madmom.audio.signal.Signal(audio.T, sr)) - beatmap= proc(act)*sr - elif lib=='madmom.DBNBeatTrackingProcessor': - proc = madmom.features.beats.DBNBeatTrackingProcessor(fps=100) - act = madmom.features.beats.RNNBeatProcessor()(madmom.audio.signal.Signal(audio.T, sr)) - beatmap= proc(act)*sr - elif lib=='madmom.DBNBeatTrackingProcessor.1000': - proc = madmom.features.beats.DBNBeatTrackingProcessor(fps=100, transition_lambda=1000) - act = madmom.features.beats.RNNBeatProcessor()(madmom.audio.signal.Signal(audio.T, sr)) - beatmap= proc(act)*sr - elif lib=='madmom.DBNDownBeatTrackingProcessor': - proc = madmom.features.downbeats.DBNDownBeatTrackingProcessor(beats_per_bar=[4], fps=100) - act = madmom.features.downbeats.RNNDownBeatProcessor()(madmom.audio.signal.Signal(audio.T, sr)) - beatmap= proc(act)*sr - beatmap=beatmap[:,0] - elif lib=='madmom.PatternTrackingProcessor': #broken - from madmom.models import PATTERNS_BALLROOM - proc = madmom.features.downbeats.PatternTrackingProcessor(PATTERNS_BALLROOM, fps=50) - from madmom.audio.spectrogram import LogarithmicSpectrogramProcessor, SpectrogramDifferenceProcessor, MultiBandSpectrogramProcessor - from madmom.processors import SequentialProcessor - log = LogarithmicSpectrogramProcessor() - diff = SpectrogramDifferenceProcessor(positive_diffs=True) - mb = MultiBandSpectrogramProcessor(crossover_frequencies=[270]) - pre_proc = SequentialProcessor([log, diff, mb]) - act = pre_proc(madmom.audio.signal.Signal(audio.T, sr)) - beatmap= proc(act)*sr - beatmap=beatmap[:,0] - elif lib=='madmom.DBNBarTrackingProcessor': #broken - beats = generate(audio=audio, sr=sr, filename=filename, lib='madmom.DBNBeatTrackingProcessor', caching = caching) - proc = madmom.features.downbeats.DBNBarTrackingProcessor(beats_per_bar=[4], fps=100) - act = 
madmom.features.downbeats.RNNBarProcessor()(((madmom.audio.signal.Signal(audio.T, sr)), beats)) - beatmap= proc(act)*sr - elif lib=='librosa': #broken in 3.9, works in 3.8 - import librosa - beat_frames = librosa.beat.beat_track(y=audio[0], sr=sr, hop_length=512) - beatmap = librosa.frames_to_samples(beat_frames[1]) - - # save the beatmap and return - if caching is True: np.savetxt(cacheDir, beatmap.astype(int), fmt='%d') - if not isinstance(beatmap, np.ndarray): beatmap=np.asarray(beatmap, dtype=int) - else: beatmap=beatmap.astype(int) - - if load_settings is True: - settingsDir="beat_manipulator/beatmaps/" + ''.join(filename.split('/')[-1]) + "_"+lib+"_"+audio_id+'_settings.txt' - if os.path.exists(settingsDir): - with open(settingsDir, 'r') as f: - settings = f.read().split(',') - if settings[0] != 'None': beatmap = scale(beatmap, settings[0], log = False) - if settings[1] != 'None': beatmap = shift(beatmap, settings[1], log = False) - if settings[2] != 'None': beatmap = np.sort(np.absolute(beatmap - int(settings[2]))) - - return beatmap - - - -def save_settings(audio: np.ndarray, filename: str = None, lib: str = 'madmom.BeatDetectionProcessor', scale: float = None, shift: float = None, adjust: int = None, normalized: str = None, log = True, overwrite = 'ask'): - if isinstance(overwrite, str): overwrite = overwrite.lower() - audio_id=hex(len(audio[0])) - cacheDir="beat_manipulator/beatmaps/" + ''.join(filename.split('/')[-1]) + "_"+lib+"_"+audio_id+'.txt' - import os - assert os.path.exists(cacheDir), f"Beatmap `{cacheDir}` doesn't exist" - settingsDir="beat_manipulator/beatmaps/" + ''.join(filename.split('/')[-1]) + "_"+lib+"_"+audio_id+'_settings.txt' - - try: - a = utils._safer_eval_strict(scale) - if a == 1: scale = None - except Exception as e: assert scale is None, f'scale = `{scale}` - Not a valid scale, should be either a number, a math expression, or None: {e}' - try: - a = utils._safer_eval_strict(shift) - if a == 0: shift = None - except Exception as e: assert shift is None, f'shift = `{shift}` - Not a valid shift: {e}' - assert isinstance(adjust, int) or adjust is None, f'adjust = `{adjust}` should be int, but it is `{type(adjust)}`' - - if adjust == 0: adjust = None - - if os.path.exists(settingsDir): - if overwrite == 'ask' or overwrite =='a': - what = input(f'`{settingsDir}` already exists. Overwrite (y/n)?: ') - if not (what.lower() == 'y' or what.lower() == 'yes'): return - elif not (overwrite == 'true' or overwrite =='y' or overwrite =='yes' or overwrite is True): return - - with open(settingsDir, 'w') as f: - f.write(f'{scale},{shift},{adjust},{normalized}') - if log is True: print(f"Saved scale = `{scale}`, shift = `{shift}`, adjust = `{adjust}` to `{settingsDir}`") - diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/torchscript_patch.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/torchscript_patch.py deleted file mode 100644 index da9b324f1582e31d1a16d2fe462ac2989bea56ea..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/export/torchscript_patch.py +++ /dev/null @@ -1,406 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
- -import os -import sys -import tempfile -from contextlib import ExitStack, contextmanager -from copy import deepcopy -from unittest import mock -import torch -from torch import nn - -# need some explicit imports due to https://github.com/pytorch/pytorch/issues/38964 -import detectron2 # noqa F401 -from detectron2.structures import Boxes, Instances -from detectron2.utils.env import _import_file - -_counter = 0 - - -def _clear_jit_cache(): - from torch.jit._recursive import concrete_type_store - from torch.jit._state import _jit_caching_layer - - concrete_type_store.type_store.clear() # for modules - _jit_caching_layer.clear() # for free functions - - -def _add_instances_conversion_methods(newInstances): - """ - Add from_instances methods to the scripted Instances class. - """ - cls_name = newInstances.__name__ - - @torch.jit.unused - def from_instances(instances: Instances): - """ - Create scripted Instances from original Instances - """ - fields = instances.get_fields() - image_size = instances.image_size - ret = newInstances(image_size) - for name, val in fields.items(): - assert hasattr(ret, f"_{name}"), f"No attribute named {name} in {cls_name}" - setattr(ret, name, deepcopy(val)) - return ret - - newInstances.from_instances = from_instances - - -@contextmanager -def patch_instances(fields): - """ - A contextmanager, under which the Instances class in detectron2 is replaced - by a statically-typed scriptable class, defined by `fields`. - See more in `scripting_with_instances`. - """ - - with tempfile.TemporaryDirectory(prefix="detectron2") as dir, tempfile.NamedTemporaryFile( - mode="w", encoding="utf-8", suffix=".py", dir=dir, delete=False - ) as f: - try: - # Objects that use Instances should not reuse previously-compiled - # results in cache, because `Instances` could be a new class each time. 
- _clear_jit_cache() - - cls_name, s = _gen_instance_module(fields) - f.write(s) - f.flush() - f.close() - - module = _import(f.name) - new_instances = getattr(module, cls_name) - _ = torch.jit.script(new_instances) - # let torchscript think Instances was scripted already - Instances.__torch_script_class__ = True - # let torchscript find new_instances when looking for the jit type of Instances - Instances._jit_override_qualname = torch._jit_internal._qualified_name(new_instances) - - _add_instances_conversion_methods(new_instances) - yield new_instances - finally: - try: - del Instances.__torch_script_class__ - del Instances._jit_override_qualname - except AttributeError: - pass - sys.modules.pop(module.__name__) - - -def _gen_instance_class(fields): - """ - Args: - fields (dict[name: type]) - """ - - class _FieldType: - def __init__(self, name, type_): - assert isinstance(name, str), f"Field name must be str, got {name}" - self.name = name - self.type_ = type_ - self.annotation = f"{type_.__module__}.{type_.__name__}" - - fields = [_FieldType(k, v) for k, v in fields.items()] - - def indent(level, s): - return " " * 4 * level + s - - lines = [] - - global _counter - _counter += 1 - - cls_name = "ScriptedInstances{}".format(_counter) - - field_names = tuple(x.name for x in fields) - extra_args = ", ".join([f"{f.name}: Optional[{f.annotation}] = None" for f in fields]) - lines.append( - f""" -class {cls_name}: - def __init__(self, image_size: Tuple[int, int], {extra_args}): - self.image_size = image_size - self._field_names = {field_names} -""" - ) - - for f in fields: - lines.append( - indent(2, f"self._{f.name} = torch.jit.annotate(Optional[{f.annotation}], {f.name})") - ) - - for f in fields: - lines.append( - f""" - @property - def {f.name}(self) -> {f.annotation}: - # has to use a local for type refinement - # https://pytorch.org/docs/stable/jit_language_reference.html#optional-type-refinement - t = self._{f.name} - assert t is not None, "{f.name} is None and cannot be accessed!" - return t - - @{f.name}.setter - def {f.name}(self, value: {f.annotation}) -> None: - self._{f.name} = value -""" - ) - - # support method `__len__` - lines.append( - """ - def __len__(self) -> int: -""" - ) - for f in fields: - lines.append( - f""" - t = self._{f.name} - if t is not None: - return len(t) -""" - ) - lines.append( - """ - raise NotImplementedError("Empty Instances does not support __len__!") -""" - ) - - # support method `has` - lines.append( - """ - def has(self, name: str) -> bool: -""" - ) - for f in fields: - lines.append( - f""" - if name == "{f.name}": - return self._{f.name} is not None -""" - ) - lines.append( - """ - return False -""" - ) - - # support method `to` - none_args = ", None" * len(fields) - lines.append( - f""" - def to(self, device: torch.device) -> "{cls_name}": - ret = {cls_name}(self.image_size{none_args}) -""" - ) - for f in fields: - if hasattr(f.type_, "to"): - lines.append( - f""" - t = self._{f.name} - if t is not None: - ret._{f.name} = t.to(device) -""" - ) - else: - # For now, ignore fields that cannot be moved to devices. - # Maybe can support other tensor-like classes (e.g. 
__torch_function__) - pass - lines.append( - """ - return ret -""" - ) - - # support method `getitem` - none_args = ", None" * len(fields) - lines.append( - f""" - def __getitem__(self, item) -> "{cls_name}": - ret = {cls_name}(self.image_size{none_args}) -""" - ) - for f in fields: - lines.append( - f""" - t = self._{f.name} - if t is not None: - ret._{f.name} = t[item] -""" - ) - lines.append( - """ - return ret -""" - ) - - # support method `cat` - # this version does not contain checks that all instances have same size and fields - none_args = ", None" * len(fields) - lines.append( - f""" - def cat(self, instances: List["{cls_name}"]) -> "{cls_name}": - ret = {cls_name}(self.image_size{none_args}) -""" - ) - for f in fields: - lines.append( - f""" - t = self._{f.name} - if t is not None: - values: List[{f.annotation}] = [x.{f.name} for x in instances] - if torch.jit.isinstance(t, torch.Tensor): - ret._{f.name} = torch.cat(values, dim=0) - else: - ret._{f.name} = t.cat(values) -""" - ) - lines.append( - """ - return ret""" - ) - - # support method `get_fields()` - lines.append( - """ - def get_fields(self) -> Dict[str, Tensor]: - ret = {} - """ - ) - for f in fields: - if f.type_ == Boxes: - stmt = "t.tensor" - elif f.type_ == torch.Tensor: - stmt = "t" - else: - stmt = f'assert False, "unsupported type {str(f.type_)}"' - lines.append( - f""" - t = self._{f.name} - if t is not None: - ret["{f.name}"] = {stmt} - """ - ) - lines.append( - """ - return ret""" - ) - return cls_name, os.linesep.join(lines) - - -def _gen_instance_module(fields): - # TODO: find a more automatic way to enable import of other classes - s = """ -from copy import deepcopy -import torch -from torch import Tensor -import typing -from typing import * - -import detectron2 -from detectron2.structures import Boxes, Instances - -""" - - cls_name, cls_def = _gen_instance_class(fields) - s += cls_def - return cls_name, s - - -def _import(path): - return _import_file( - "{}{}".format(sys.modules[__name__].__name__, _counter), path, make_importable=True - ) - - -@contextmanager -def patch_builtin_len(modules=()): - """ - Patch the builtin len() function of a few detectron2 modules - to use __len__ instead, because __len__ does not convert values to - integers and therefore is friendly to tracing. - - Args: - modules (list[stsr]): names of extra modules to patch len(), in - addition to those in detectron2. - """ - - def _new_len(obj): - return obj.__len__() - - with ExitStack() as stack: - MODULES = [ - "detectron2.modeling.roi_heads.fast_rcnn", - "detectron2.modeling.roi_heads.mask_head", - "detectron2.modeling.roi_heads.keypoint_head", - ] + list(modules) - ctxs = [stack.enter_context(mock.patch(mod + ".len")) for mod in MODULES] - for m in ctxs: - m.side_effect = _new_len - yield - - -def patch_nonscriptable_classes(): - """ - Apply patches on a few nonscriptable detectron2 classes. - Should not have side-effects on eager usage. - """ - # __prepare_scriptable__ can also be added to models for easier maintenance. - # But it complicates the clean model code. - - from detectron2.modeling.backbone import ResNet, FPN - - # Due to https://github.com/pytorch/pytorch/issues/36061, - # we change backbone to use ModuleList for scripting. 
- # (note: this changes param names in state_dict) - - def prepare_resnet(self): - ret = deepcopy(self) - ret.stages = nn.ModuleList(ret.stages) - for k in self.stage_names: - delattr(ret, k) - return ret - - ResNet.__prepare_scriptable__ = prepare_resnet - - def prepare_fpn(self): - ret = deepcopy(self) - ret.lateral_convs = nn.ModuleList(ret.lateral_convs) - ret.output_convs = nn.ModuleList(ret.output_convs) - for name, _ in self.named_children(): - if name.startswith("fpn_"): - delattr(ret, name) - return ret - - FPN.__prepare_scriptable__ = prepare_fpn - - # Annotate some attributes to be constants for the purpose of scripting, - # even though they are not constants in eager mode. - from detectron2.modeling.roi_heads import StandardROIHeads - - if hasattr(StandardROIHeads, "__annotations__"): - # copy first to avoid editing annotations of base class - StandardROIHeads.__annotations__ = deepcopy(StandardROIHeads.__annotations__) - StandardROIHeads.__annotations__["mask_on"] = torch.jit.Final[bool] - StandardROIHeads.__annotations__["keypoint_on"] = torch.jit.Final[bool] - - -# These patches are not supposed to have side-effects. -patch_nonscriptable_classes() - - -@contextmanager -def freeze_training_mode(model): - """ - A context manager that annotates the "training" attribute of every submodule - to constant, so that the training codepath in these modules can be - meta-compiled away. Upon exiting, the annotations are reverted. - """ - classes = {type(x) for x in model.modules()} - # __constants__ is the old way to annotate constants and not compatible - # with __annotations__ . - classes = {x for x in classes if not hasattr(x, "__constants__")} - for cls in classes: - cls.__annotations__["training"] = torch.jit.Final[bool] - yield - for cls in classes: - cls.__annotations__["training"] = bool diff --git a/spaces/bzd4576/sovits-sin/losses.py b/spaces/bzd4576/sovits-sin/losses.py deleted file mode 100644 index fb22a0e834dd87edaa37bb8190eee2c3c7abe0d5..0000000000000000000000000000000000000000 --- a/spaces/bzd4576/sovits-sin/losses.py +++ /dev/null @@ -1,61 +0,0 @@ -import torch -from torch.nn import functional as F - -import commons - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p)**2) * torch.exp(-2. 
* logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/caffeinum/VToonify/vtoonify/model/encoder/encoders/model_irse.py b/spaces/caffeinum/VToonify/vtoonify/model/encoder/encoders/model_irse.py deleted file mode 100644 index 6698d9705321dd4a27681ea15204e9ffaa51f62a..0000000000000000000000000000000000000000 --- a/spaces/caffeinum/VToonify/vtoonify/model/encoder/encoders/model_irse.py +++ /dev/null @@ -1,84 +0,0 @@ -from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Dropout, Sequential, Module -from model.encoder.encoders.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE, l2_norm - -""" -Modified Backbone implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) -""" - - -class Backbone(Module): - def __init__(self, input_size, num_layers, mode='ir', drop_ratio=0.4, affine=True): - super(Backbone, self).__init__() - assert input_size in [112, 224], "input_size should be 112 or 224" - assert num_layers in [50, 100, 152], "num_layers should be 50, 100 or 152" - assert mode in ['ir', 'ir_se'], "mode should be ir or ir_se" - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - if input_size == 112: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 7 * 7, 512), - BatchNorm1d(512, affine=affine)) - else: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 14 * 14, 512), - BatchNorm1d(512, affine=affine)) - - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - def forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_layer(x) - return l2_norm(x) - - -def IR_50(input_size): - """Constructs a ir-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_101(input_size): - """Constructs a ir-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_152(input_size): - """Constructs a ir-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_50(input_size): - """Constructs a ir_se-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_101(input_size): - """Constructs a ir_se-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_152(input_size): - """Constructs a ir_se-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir_se', drop_ratio=0.4, affine=False) - return model diff --git a/spaces/camenduru-com/audioldm-text-to-audio-generation/share_btn.py b/spaces/camenduru-com/audioldm-text-to-audio-generation/share_btn.py deleted file mode 100644 index b8c2ed17439625f85fd0e910766c727b29131e3d..0000000000000000000000000000000000000000 --- a/spaces/camenduru-com/audioldm-text-to-audio-generation/share_btn.py +++ /dev/null @@ -1,60 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const 
UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - const gradioEl = document.querySelector('body > gradio-app'); - const imgEls = gradioEl.querySelectorAll('#gallery img'); - const promptTxt = gradioEl.querySelector('#prompt-text-input input').value; - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!imgEls.length){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - const files = await Promise.all( - [...imgEls].map(async (imgEl) => { - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const fileName = `diffuse-the-rest-${{imgId}}.jpg`; - return new File([blob], fileName, { type: 'image/jpeg' }); - }) - ); - const urls = await Promise.all(files.map((f) => uploadFile(f))); - const htmlImgs = urls.map(url => ``); - const descriptionMd = `
-${htmlImgs.join(`\n`)} -
`; - const params = new URLSearchParams({ - title: promptTxt, - description: descriptionMd, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/haoheliu/audioldm-text-to-audio-generation/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/CurImagePlugin.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/CurImagePlugin.py deleted file mode 100644 index 94efff3415679a5bf5b7038f9a1da15ebc6d04ca..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/CurImagePlugin.py +++ /dev/null @@ -1,75 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# Windows Cursor support for PIL -# -# notes: -# uses BmpImagePlugin.py to read the bitmap data. -# -# history: -# 96-05-27 fl Created -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1996. -# -# See the README file for information on usage and redistribution. -# -from . import BmpImagePlugin, Image -from ._binary import i16le as i16 -from ._binary import i32le as i32 - -# -# -------------------------------------------------------------------- - - -def _accept(prefix): - return prefix[:4] == b"\0\0\2\0" - - -## -# Image plugin for Windows Cursor files. - - -class CurImageFile(BmpImagePlugin.BmpImageFile): - format = "CUR" - format_description = "Windows Cursor" - - def _open(self): - offset = self.fp.tell() - - # check magic - s = self.fp.read(6) - if not _accept(s): - msg = "not a CUR file" - raise SyntaxError(msg) - - # pick the largest cursor in the file - m = b"" - for i in range(i16(s, 4)): - s = self.fp.read(16) - if not m: - m = s - elif s[0] > m[0] and s[1] > m[1]: - m = s - if not m: - msg = "No cursors were found" - raise TypeError(msg) - - # load as bitmap - self._bitmap(i32(m, 12) + offset) - - # patch up the bitmap height - self._size = self.size[0], self.size[1] // 2 - d, e, o, a = self.tile[0] - self.tile[0] = d, (0, 0) + self.size, o, a - - return - - -# -# -------------------------------------------------------------------- - -Image.register_open(CurImageFile.format, CurImageFile, _accept) - -Image.register_extension(CurImageFile.format, ".cur") diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/MspImagePlugin.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/MspImagePlugin.py deleted file mode 100644 index c6567b2ae626fd83ef21575a59374c922d5392a9..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/MspImagePlugin.py +++ /dev/null @@ -1,194 +0,0 @@ -# -# The Python Imaging Library. -# -# MSP file handling -# -# This is the format used by the Paint program in Windows 1 and 2. -# -# History: -# 95-09-05 fl Created -# 97-01-03 fl Read/write MSP images -# 17-02-21 es Fixed RLE interpretation -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1995-97. -# Copyright (c) Eric Soroos 2017. -# -# See the README file for information on usage and redistribution. -# -# More info on this format: https://archive.org/details/gg243631 -# Page 313: -# Figure 205. Windows Paint Version 1: "DanM" Format -# Figure 206. Windows Paint Version 2: "LinS" Format. 
Used in Windows V2.03 -# -# See also: https://www.fileformat.info/format/mspaint/egff.htm - -import io -import struct - -from . import Image, ImageFile -from ._binary import i16le as i16 -from ._binary import o16le as o16 - -# -# read MSP files - - -def _accept(prefix): - return prefix[:4] in [b"DanM", b"LinS"] - - -## -# Image plugin for Windows MSP images. This plugin supports both -# uncompressed (Windows 1.0). - - -class MspImageFile(ImageFile.ImageFile): - format = "MSP" - format_description = "Windows Paint" - - def _open(self): - # Header - s = self.fp.read(32) - if not _accept(s): - msg = "not an MSP file" - raise SyntaxError(msg) - - # Header checksum - checksum = 0 - for i in range(0, 32, 2): - checksum = checksum ^ i16(s, i) - if checksum != 0: - msg = "bad MSP checksum" - raise SyntaxError(msg) - - self.mode = "1" - self._size = i16(s, 4), i16(s, 6) - - if s[:4] == b"DanM": - self.tile = [("raw", (0, 0) + self.size, 32, ("1", 0, 1))] - else: - self.tile = [("MSP", (0, 0) + self.size, 32, None)] - - -class MspDecoder(ImageFile.PyDecoder): - # The algo for the MSP decoder is from - # https://www.fileformat.info/format/mspaint/egff.htm - # cc-by-attribution -- That page references is taken from the - # Encyclopedia of Graphics File Formats and is licensed by - # O'Reilly under the Creative Common/Attribution license - # - # For RLE encoded files, the 32byte header is followed by a scan - # line map, encoded as one 16bit word of encoded byte length per - # line. - # - # NOTE: the encoded length of the line can be 0. This was not - # handled in the previous version of this encoder, and there's no - # mention of how to handle it in the documentation. From the few - # examples I've seen, I've assumed that it is a fill of the - # background color, in this case, white. 
- # - # - # Pseudocode of the decoder: - # Read a BYTE value as the RunType - # If the RunType value is zero - # Read next byte as the RunCount - # Read the next byte as the RunValue - # Write the RunValue byte RunCount times - # If the RunType value is non-zero - # Use this value as the RunCount - # Read and write the next RunCount bytes literally - # - # e.g.: - # 0x00 03 ff 05 00 01 02 03 04 - # would yield the bytes: - # 0xff ff ff 00 01 02 03 04 - # - # which are then interpreted as a bit packed mode '1' image - - _pulls_fd = True - - def decode(self, buffer): - img = io.BytesIO() - blank_line = bytearray((0xFF,) * ((self.state.xsize + 7) // 8)) - try: - self.fd.seek(32) - rowmap = struct.unpack_from( - f"<{self.state.ysize}H", self.fd.read(self.state.ysize * 2) - ) - except struct.error as e: - msg = "Truncated MSP file in row map" - raise OSError(msg) from e - - for x, rowlen in enumerate(rowmap): - try: - if rowlen == 0: - img.write(blank_line) - continue - row = self.fd.read(rowlen) - if len(row) != rowlen: - msg = f"Truncated MSP file, expected {rowlen} bytes on row {x}" - raise OSError(msg) - idx = 0 - while idx < rowlen: - runtype = row[idx] - idx += 1 - if runtype == 0: - (runcount, runval) = struct.unpack_from("Bc", row, idx) - img.write(runval * runcount) - idx += 2 - else: - runcount = runtype - img.write(row[idx : idx + runcount]) - idx += runcount - - except struct.error as e: - msg = f"Corrupted MSP file in row {x}" - raise OSError(msg) from e - - self.set_as_raw(img.getvalue(), ("1", 0, 1)) - - return -1, 0 - - -Image.register_decoder("MSP", MspDecoder) - - -# -# write MSP files (uncompressed only) - - -def _save(im, fp, filename): - if im.mode != "1": - msg = f"cannot write mode {im.mode} as MSP" - raise OSError(msg) - - # create MSP header - header = [0] * 16 - - header[0], header[1] = i16(b"Da"), i16(b"nM") # version 1 - header[2], header[3] = im.size - header[4], header[5] = 1, 1 - header[6], header[7] = 1, 1 - header[8], header[9] = im.size - - checksum = 0 - for h in header: - checksum = checksum ^ h - header[12] = checksum # FIXME: is this the right field? - - # header - for h in header: - fp.write(o16(h)) - - # image body - ImageFile._save(im, fp, [("raw", (0, 0) + im.size, 32, ("1", 0, 1))]) - - -# -# registry - -Image.register_open(MspImageFile.format, MspImageFile, _accept) -Image.register_save(MspImageFile.format, _save) - -Image.register_extension(MspImageFile.format, ".msp") diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/PaletteFile.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/PaletteFile.py deleted file mode 100644 index 4a2c497fc495a271cbab204db0197d776442ac5c..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/PaletteFile.py +++ /dev/null @@ -1,51 +0,0 @@ -# -# Python Imaging Library -# $Id$ -# -# stuff to read simple, teragon-style palette files -# -# History: -# 97-08-23 fl Created -# -# Copyright (c) Secret Labs AB 1997. -# Copyright (c) Fredrik Lundh 1997. -# -# See the README file for information on usage and redistribution. 
-# - -from ._binary import o8 - - -class PaletteFile: - """File handler for Teragon-style palette files.""" - - rawmode = "RGB" - - def __init__(self, fp): - self.palette = [(i, i, i) for i in range(256)] - - while True: - s = fp.readline() - - if not s: - break - if s[:1] == b"#": - continue - if len(s) > 100: - msg = "bad palette file" - raise SyntaxError(msg) - - v = [int(x) for x in s.split()] - try: - [i, r, g, b] = v - except ValueError: - [i, r] = v - g = b = r - - if 0 <= i <= 255: - self.palette[i] = o8(r) + o8(g) + o8(b) - - self.palette = b"".join(self.palette) - - def getpalette(self): - return self.palette, self.rawmode diff --git a/spaces/candlend/vits-hoshimi/sovits/models.py b/spaces/candlend/vits-hoshimi/sovits/models.py deleted file mode 100644 index f4941c211eed9a025536456c2aa110141ab7e3ff..0000000000000000000000000000000000000000 --- a/spaces/candlend/vits-hoshimi/sovits/models.py +++ /dev/null @@ -1,351 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -from sovits import attentions -from sovits import commons -from sovits import modules - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from sovits.commons import init_weights, get_padding -from sovits.vdecoder.hifigan.models import Generator -from sovits.utils import f0_to_coarse - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class Encoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - # print(x.shape,x_lengths.shape) - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - 
gin_channels=0, - filter_channels=None, - n_heads=None, - p_dropout=None): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - self.f0_emb = nn.Embedding(256, hidden_channels) - - self.enc_ = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - - def forward(self, x, x_lengths, f0=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = x + self.f0_emb(f0).transpose(1,2) - x = self.enc_(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - - return z, m, logs, x_mask - - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, 
use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class SpeakerEncoder(torch.nn.Module): - def __init__(self, mel_n_channels=80, model_num_layers=3, model_hidden_size=256, model_embedding_size=256): - super(SpeakerEncoder, self).__init__() - self.lstm = nn.LSTM(mel_n_channels, model_hidden_size, model_num_layers, batch_first=True) - self.linear = nn.Linear(model_hidden_size, model_embedding_size) - self.relu = nn.ReLU() - - def forward(self, mels): - self.lstm.flatten_parameters() - _, (hidden, _) = self.lstm(mels) - embeds_raw = self.relu(self.linear(hidden[-1])) - return embeds_raw / torch.norm(embeds_raw, dim=1, keepdim=True) - - def compute_partial_slices(self, total_frames, partial_frames, partial_hop): - mel_slices = [] - for i in range(0, total_frames-partial_frames, partial_hop): - mel_range = torch.arange(i, i+partial_frames) - mel_slices.append(mel_range) - - return mel_slices - - def embed_utterance(self, mel, partial_frames=128, partial_hop=64): - mel_len = mel.size(1) - last_mel = mel[:,-partial_frames:] - - if mel_len > partial_frames: - mel_slices = self.compute_partial_slices(mel_len, partial_frames, partial_hop) - mels = list(mel[:,s] for s in mel_slices) - mels.append(last_mel) - mels = torch.stack(tuple(mels), 0).squeeze(1) - - with torch.no_grad(): - partial_embeds = self(mels) - embed = torch.mean(partial_embeds, axis=0).unsqueeze(0) - #embed = embed / torch.linalg.norm(embed, 2) - else: - with torch.no_grad(): - embed = self(last_mel) - - return embed - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - ssl_dim, - n_speakers, - **kwargs): - - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - self.ssl_dim = ssl_dim - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - self.enc_p_ = TextEncoder(ssl_dim, inter_channels, hidden_channels, 5, 1, 16,0, filter_channels, n_heads, p_dropout) - hps = { - "sampling_rate": 32000, - "inter_channels": 192, - "resblock": "1", - "resblock_kernel_sizes": [3, 7, 11], - "resblock_dilation_sizes": [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - "upsample_rates": [10, 8, 2, 2], - "upsample_initial_channel": 512, - "upsample_kernel_sizes": [16, 16, 4, 4], - "gin_channels": 256, - } - self.dec = Generator(h=hps) - self.enc_q = Encoder(spec_channels, 
inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - def forward(self, c, f0, spec, g=None, mel=None, c_lengths=None, spec_lengths=None): - if c_lengths == None: - c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device) - if spec_lengths == None: - spec_lengths = (torch.ones(spec.size(0)) * spec.size(-1)).to(spec.device) - - g = self.emb_g(g).transpose(1,2) - - z_ptemp, m_p, logs_p, _ = self.enc_p_(c, c_lengths, f0=f0_to_coarse(f0)) - z, m_q, logs_q, spec_mask = self.enc_q(spec, spec_lengths, g=g) - - z_p = self.flow(z, spec_mask, g=g) - z_slice, pitch_slice, ids_slice = commons.rand_slice_segments_with_pitch(z, f0, spec_lengths, self.segment_size) - - # o = self.dec(z_slice, g=g) - o = self.dec(z_slice, g=g, f0=pitch_slice) - - return o, ids_slice, spec_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, c, f0, g=None, mel=None, c_lengths=None): - if c_lengths == None: - c_lengths = (torch.ones(c.size(0)) * c.size(-1)).to(c.device) - g = self.emb_g(g).transpose(1,2) - - z_p, m_p, logs_p, c_mask = self.enc_p_(c, c_lengths, f0=f0_to_coarse(f0)) - z = self.flow(z_p, c_mask, g=g, reverse=True) - - o = self.dec(z * c_mask, g=g, f0=f0) - - return o diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/solver/__init__.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/solver/__init__.py deleted file mode 100644 index 9a2dbd35bb24f0d4a979bc8f304142376d87e7ec..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/solver/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .build import build_lr_scheduler, build_optimizer, get_default_optimizer_params -from .lr_scheduler import WarmupCosineLR, WarmupMultiStepLR, LRMultiplier, WarmupParamScheduler - -__all__ = [k for k in globals().keys() if not k.startswith("_")] diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/MViTv2/configs/cascade_mask_rcnn_mvitv2_l_in21k_lsj_50ep.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/MViTv2/configs/cascade_mask_rcnn_mvitv2_l_in21k_lsj_50ep.py deleted file mode 100644 index 38da8958e0174d378555887d72a9956f4b3f8e58..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/projects/MViTv2/configs/cascade_mask_rcnn_mvitv2_l_in21k_lsj_50ep.py +++ /dev/null @@ -1,31 +0,0 @@ -from fvcore.common.param_scheduler import MultiStepParamScheduler - -from detectron2.config import LazyCall as L -from detectron2.solver import WarmupParamScheduler - -from .cascade_mask_rcnn_mvitv2_b_3x import model, optimizer, train -from .common.coco_loader_lsj import dataloader - - -model.backbone.bottom_up.embed_dim = 144 -model.backbone.bottom_up.depth = 48 -model.backbone.bottom_up.num_heads = 2 -model.backbone.bottom_up.last_block_indexes = (1, 7, 43, 47) -model.backbone.bottom_up.drop_path_rate = 0.5 - -train.init_checkpoint = "detectron2://ImageNetPretrained/mvitv2/MViTv2_L_in21k.pyth" - -# Schedule -# 50ep = 184375 // 2 iters * 64 images/iter / 118000 images/ep -train.max_iter = 184375 // 2 -lr_multiplier = L(WarmupParamScheduler)( - scheduler=L(MultiStepParamScheduler)( - values=[1.0, 0.1, 0.01], - milestones=[163889 // 2, 177546 // 2], - num_updates=train.max_iter, - ), - warmup_length=250 / train.max_iter, - warmup_factor=0.001, -) - -optimizer.lr = 1e-4 diff --git 
a/spaces/cfwef/gpt/crazy_functions/test_project/cpp/cppipc/shm.cpp b/spaces/cfwef/gpt/crazy_functions/test_project/cpp/cppipc/shm.cpp deleted file mode 100644 index 593ce3129dc1574dbc8fc8b088cf595df215de93..0000000000000000000000000000000000000000 --- a/spaces/cfwef/gpt/crazy_functions/test_project/cpp/cppipc/shm.cpp +++ /dev/null @@ -1,103 +0,0 @@ - -#include -#include - -#include "libipc/shm.h" - -#include "libipc/utility/pimpl.h" -#include "libipc/memory/resource.h" - -namespace ipc { -namespace shm { - -class handle::handle_ : public pimpl { -public: - shm::id_t id_ = nullptr; - void* m_ = nullptr; - - ipc::string n_; - std::size_t s_ = 0; -}; - -handle::handle() - : p_(p_->make()) { -} - -handle::handle(char const * name, std::size_t size, unsigned mode) - : handle() { - acquire(name, size, mode); -} - -handle::handle(handle&& rhs) - : handle() { - swap(rhs); -} - -handle::~handle() { - release(); - p_->clear(); -} - -void handle::swap(handle& rhs) { - std::swap(p_, rhs.p_); -} - -handle& handle::operator=(handle rhs) { - swap(rhs); - return *this; -} - -bool handle::valid() const noexcept { - return impl(p_)->m_ != nullptr; -} - -std::size_t handle::size() const noexcept { - return impl(p_)->s_; -} - -char const * handle::name() const noexcept { - return impl(p_)->n_.c_str(); -} - -std::int32_t handle::ref() const noexcept { - return shm::get_ref(impl(p_)->id_); -} - -void handle::sub_ref() noexcept { - shm::sub_ref(impl(p_)->id_); -} - -bool handle::acquire(char const * name, std::size_t size, unsigned mode) { - release(); - impl(p_)->id_ = shm::acquire((impl(p_)->n_ = name).c_str(), size, mode); - impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_)); - return valid(); -} - -std::int32_t handle::release() { - if (impl(p_)->id_ == nullptr) return -1; - return shm::release(detach()); -} - -void* handle::get() const { - return impl(p_)->m_; -} - -void handle::attach(id_t id) { - if (id == nullptr) return; - release(); - impl(p_)->id_ = id; - impl(p_)->m_ = shm::get_mem(impl(p_)->id_, &(impl(p_)->s_)); -} - -id_t handle::detach() { - auto old = impl(p_)->id_; - impl(p_)->id_ = nullptr; - impl(p_)->m_ = nullptr; - impl(p_)->s_ = 0; - impl(p_)->n_.clear(); - return old; -} - -} // namespace shm -} // namespace ipc diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/fsner/setup.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/fsner/setup.py deleted file mode 100644 index 8ce34d0f7d9053b36d3cde98d251dfbc0ffe5a25..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/fsner/setup.py +++ /dev/null @@ -1,27 +0,0 @@ -import setuptools - - -with open("README.md", "r", encoding="utf-8") as fh: - long_description = fh.read() - -setuptools.setup( - name="fsner", - version="0.0.1", - author="msi sayef", - author_email="msi.sayef@gmail.com", - description="Few-shot Named Entity Recognition", - long_description=long_description, - long_description_content_type="text/markdown", - url="https://github.com/huggingface/transformers/tree/main/examples/research_projects/fsner", - project_urls={ - "Bug Tracker": "https://github.com/huggingface/transformers/issues", - }, - classifiers=[ - "Programming Language :: Python :: 3", - "Operating System :: OS Independent", - ], - package_dir={"": "src"}, - packages=setuptools.find_packages(where="src"), - python_requires=">=3.6", - install_requires=["torch>=1.9.0", "transformers>=4.9.2"], -) diff --git 
a/spaces/chendl/compositional_test/transformers/examples/research_projects/jax-projects/model_parallel/run_clm_mp.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/jax-projects/model_parallel/run_clm_mp.py deleted file mode 100644 index 7103b5a28111ffc0d4e1dce891dc6b077f721a78..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/jax-projects/model_parallel/run_clm_mp.py +++ /dev/null @@ -1,664 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2021 The HuggingFace Team All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -Pre-training/Fine-tuning the GPTNeo model for causal language modeling on a text file or a dataset using model parallelism. -""" - -import logging -import math -import os -import sys -import time -from dataclasses import dataclass, field -from itertools import chain -from pathlib import Path -from typing import Callable, Optional - -import datasets -import jax -import jax.numpy as jnp -import numpy as np -import optax -from datasets import Dataset, load_dataset -from flax.core.frozen_dict import freeze, unfreeze -from flax.training.common_utils import onehot, stack_forest -from jax.experimental.maps import mesh -from jax.experimental.pjit import pjit -from partitions import set_partitions -from tqdm import tqdm - -import transformers -from transformers import ( - CONFIG_MAPPING, - FLAX_MODEL_FOR_CAUSAL_LM_MAPPING, - AutoConfig, - AutoTokenizer, - FlaxAutoModelForCausalLM, - HfArgumentParser, - TrainingArguments, - is_tensorboard_available, -) -from transformers.testing_utils import CaptureLogger - - -logger = logging.getLogger(__name__) - -MODEL_CONFIG_CLASSES = list(FLAX_MODEL_FOR_CAUSAL_LM_MAPPING.keys()) -MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES) - - -@dataclass -class ModelArguments: - """ - Arguments pertaining to which model/config/tokenizer we are going to fine-tune, or train from scratch. - """ - - model_name_or_path: Optional[str] = field( - default=None, - metadata={ - "help": ( - "The model checkpoint for weights initialization.Don't set if you want to train a model from scratch." 
- ) - }, - ) - model_type: Optional[str] = field( - default=None, - metadata={"help": "If training from scratch, pass a model type from the list: " + ", ".join(MODEL_TYPES)}, - ) - config_name: Optional[str] = field( - default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"} - ) - tokenizer_name: Optional[str] = field( - default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"} - ) - cache_dir: Optional[str] = field( - default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from s3"} - ) - use_fast_tokenizer: bool = field( - default=True, - metadata={"help": "Whether to use one of the fast tokenizer (backed by the tokenizers library) or not."}, - ) - dtype: Optional[str] = field( - default="float32", - metadata={ - "help": ( - "Floating-point format in which the model weights should be initialized and trained. Choose one of" - " `[float32, float16, bfloat16]`." - ) - }, - ) - - -@dataclass -class DataTrainingArguments: - """ - Arguments pertaining to what data we are going to input our model for training and eval. - """ - - dataset_name: Optional[str] = field( - default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."} - ) - dataset_config_name: Optional[str] = field( - default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."} - ) - train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a text file)."}) - validation_file: Optional[str] = field( - default=None, - metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."}, - ) - max_train_samples: Optional[int] = field( - default=None, - metadata={ - "help": ( - "For debugging purposes or quicker training, truncate the number of training examples to this " - "value if set." - ) - }, - ) - max_eval_samples: Optional[int] = field( - default=None, - metadata={ - "help": ( - "For debugging purposes or quicker training, truncate the number of evaluation examples to this " - "value if set." - ) - }, - ) - overwrite_cache: bool = field( - default=False, metadata={"help": "Overwrite the cached training and evaluation sets"} - ) - validation_split_percentage: Optional[int] = field( - default=5, - metadata={ - "help": "The percentage of the train set used as validation set in case there's no validation split" - }, - ) - block_size: Optional[int] = field( - default=None, - metadata={ - "help": ( - "Optional input sequence length after tokenization. " - "The training dataset will be truncated in block of this size for training. " - "Default to the model max input length for single sentence inputs (take into account special tokens)." - ) - }, - ) - overwrite_cache: bool = field( - default=False, metadata={"help": "Overwrite the cached training and evaluation sets"} - ) - preprocessing_num_workers: Optional[int] = field( - default=None, - metadata={"help": "The number of processes to use for the preprocessing."}, - ) - - def __post_init__(self): - if self.dataset_name is None and self.train_file is None and self.validation_file is None: - raise ValueError("Need either a dataset name or a training/validation file.") - else: - if self.train_file is not None: - extension = self.train_file.split(".")[-1] - assert extension in ["csv", "json", "txt"], "`train_file` should be a csv, a json or a txt file." 
- if self.validation_file is not None: - extension = self.validation_file.split(".")[-1] - assert extension in ["csv", "json", "txt"], "`validation_file` should be a csv, a json or a txt file." - - -def data_loader(rng: jax.random.PRNGKey, dataset: Dataset, batch_size: int, shuffle: bool = False): - """ - Returns batches of size `batch_size` from truncated `dataset`, sharded over all local devices. - Shuffle batches if `shuffle` is `True`. - """ - steps_per_epoch = len(dataset) // batch_size - - if shuffle: - batch_idx = jax.random.permutation(rng, len(dataset)) - else: - batch_idx = jnp.arange(len(dataset)) - - batch_idx = batch_idx[: steps_per_epoch * batch_size] # Skip incomplete batch. - batch_idx = batch_idx.reshape((steps_per_epoch, batch_size)) - - for idx in batch_idx: - batch = dataset[idx] - batch = {k: jnp.array(v) for k, v in batch.items()} - yield batch - - -def write_train_metric(summary_writer, train_metrics, train_time, step): - summary_writer.scalar("train_time", train_time, step) - - train_metrics = stack_forest(train_metrics) - for key, vals in train_metrics.items(): - tag = f"train_{key}" - for i, val in enumerate(vals): - summary_writer.scalar(tag, val, step - len(vals) + i + 1) - - -def write_eval_metric(summary_writer, eval_metrics, step): - for metric_name, value in eval_metrics.items(): - summary_writer.scalar(f"eval_{metric_name}", value, step) - - -def create_learning_rate_fn( - train_ds_size: int, train_batch_size: int, num_train_epochs: int, num_warmup_steps: int, learning_rate: float -) -> Callable[[int], jnp.array]: - """Returns a linear warmup, linear_decay learning rate function.""" - steps_per_epoch = train_ds_size // train_batch_size - num_train_steps = steps_per_epoch * num_train_epochs - warmup_fn = optax.linear_schedule(init_value=0.0, end_value=learning_rate, transition_steps=num_warmup_steps) - decay_fn = optax.linear_schedule( - init_value=learning_rate, end_value=0, transition_steps=num_train_steps - num_warmup_steps - ) - schedule_fn = optax.join_schedules(schedules=[warmup_fn, decay_fn], boundaries=[num_warmup_steps]) - return schedule_fn - - -def main(): - # See all possible arguments in src/transformers/training_args.py - # or by passing the --help flag to this script. - # We now keep distinct sets of args, for a cleaner separation of concerns. - - parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments)) - if len(sys.argv) == 2 and sys.argv[1].endswith(".json"): - # If we pass only one argument to the script and it's the path to a json file, - # let's parse it to get our arguments. - model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1])) - else: - model_args, data_args, training_args = parser.parse_args_into_dataclasses() - - if ( - os.path.exists(training_args.output_dir) - and os.listdir(training_args.output_dir) - and training_args.do_train - and not training_args.overwrite_output_dir - ): - raise ValueError( - f"Output directory ({training_args.output_dir}) already exists and is not empty." - "Use --overwrite_output_dir to overcome." - ) - - # Make one log on every process with the configuration for debugging. - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - # Setup logging, we only want one process per machine to log things on the screen. 
- logger.setLevel(logging.INFO if jax.process_index() == 0 else logging.ERROR) - if jax.process_index() == 0: - datasets.utils.logging.set_verbosity_warning() - transformers.utils.logging.set_verbosity_info() - else: - datasets.utils.logging.set_verbosity_error() - transformers.utils.logging.set_verbosity_error() - - # Set the verbosity to info of the Transformers logger (on main process only): - logger.info(f"Training/evaluation parameters {training_args}") - - # Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below) - # or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/ - # (the dataset will be downloaded automatically from the datasets Hub). - # - # For CSV/JSON files, this script will use the column called 'text' or the first column if no column called - # 'text' is found. You can easily tweak this behavior (see below). - if data_args.dataset_name is not None: - # Downloading and loading a dataset from the hub. - dataset = load_dataset( - data_args.dataset_name, data_args.dataset_config_name, cache_dir=model_args.cache_dir, keep_in_memory=False - ) - - if "validation" not in dataset.keys(): - dataset["validation"] = load_dataset( - data_args.dataset_name, - data_args.dataset_config_name, - split=f"train[:{data_args.validation_split_percentage}%]", - cache_dir=model_args.cache_dir, - ) - dataset["train"] = load_dataset( - data_args.dataset_name, - data_args.dataset_config_name, - split=f"train[{data_args.validation_split_percentage}%:]", - cache_dir=model_args.cache_dir, - ) - else: - data_files = {} - if data_args.train_file is not None: - data_files["train"] = data_args.train_file - if data_args.validation_file is not None: - data_files["validation"] = data_args.validation_file - extension = data_args.train_file.split(".")[-1] - if extension == "txt": - extension = "text" - dataset = load_dataset(extension, data_files=data_files, cache_dir=model_args.cache_dir) - # See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at - # https://huggingface.co/docs/datasets/loading_datasets.html. - - # Load pretrained config and tokenizer - if model_args.config_name: - config = AutoConfig.from_pretrained(model_args.config_name, cache_dir=model_args.cache_dir) - elif model_args.model_name_or_path: - config = AutoConfig.from_pretrained(model_args.model_name_or_path, cache_dir=model_args.cache_dir) - else: - config = CONFIG_MAPPING[model_args.model_type]() - logger.warning("You are instantiating a new config instance from scratch.") - - if model_args.tokenizer_name: - tokenizer = AutoTokenizer.from_pretrained( - model_args.tokenizer_name, cache_dir=model_args.cache_dir, use_fast=model_args.use_fast_tokenizer - ) - elif model_args.model_name_or_path: - tokenizer = AutoTokenizer.from_pretrained( - model_args.model_name_or_path, cache_dir=model_args.cache_dir, use_fast=model_args.use_fast_tokenizer - ) - else: - raise ValueError( - "You are instantiating a new tokenizer from scratch. This is not supported by this script." - "You can do it from another script, save it, and load it from here, using --tokenizer_name." 
- ) - - if training_args.do_train: - column_names = dataset["train"].column_names - else: - column_names = dataset["validation"].column_names - text_column_name = "text" if "text" in column_names else column_names[0] - - # since this will be pickled to avoid _LazyModule error in Hasher force logger loading before tokenize_function - tok_logger = transformers.utils.logging.get_logger("transformers.tokenization_utils_base") - - def tokenize_function(examples): - with CaptureLogger(tok_logger) as cl: - output = tokenizer(examples[text_column_name]) - # clm input could be much much longer than block_size - if "Token indices sequence length is longer than the" in cl.out: - tok_logger.warning( - "^^^^^^^^^^^^^^^^ Please ignore the warning above - this long input will be chunked into smaller bits" - " before being passed to the model." - ) - return output - - tokenized_datasets = dataset.map( - tokenize_function, - batched=True, - num_proc=data_args.preprocessing_num_workers, - remove_columns=column_names, - load_from_cache_file=not data_args.overwrite_cache, - ) - - if data_args.block_size is None: - block_size = tokenizer.model_max_length - if block_size > config.max_position_embeddings: - logger.warning( - f"The tokenizer picked seems to have a very large `model_max_length` ({tokenizer.model_max_length}). " - "Picking 1024 instead. You can change that default value by passing --block_size xxx." - ) - block_size = 1024 - else: - if data_args.block_size > tokenizer.model_max_length: - logger.warning( - f"The block_size passed ({data_args.block_size}) is larger than the maximum length for the model" - f"({tokenizer.model_max_length}). Using block_size={tokenizer.model_max_length}." - ) - block_size = min(data_args.block_size, tokenizer.model_max_length) - - # Main data processing function that will concatenate all texts from our dataset and generate chunks of block_size. - def group_texts(examples): - # Concatenate all texts. - concatenated_examples = {k: list(chain(*examples[k])) for k in examples.keys()} - total_length = len(concatenated_examples[list(examples.keys())[0]]) - # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can - # customize this part to your needs. - if total_length >= block_size: - total_length = (total_length // block_size) * block_size - # Split by chunks of max_len. - result = { - k: [t[i : i + block_size] for i in range(0, total_length, block_size)] - for k, t in concatenated_examples.items() - } - result["labels"] = result["input_ids"].copy() - return result - - # Note that with `batched=True`, this map processes 1,000 texts together, so group_texts throws away a remainder - # for each of those groups of 1,000 texts. You can adjust that batch_size here but a higher value might be slower - # to preprocess. - # - # To speed up this part, we use multiprocessing. 
See the documentation of the map method for more information: - # https://huggingface.co/docs/datasets/package_reference/main_classes.html#datasets.Dataset.map - - lm_datasets = tokenized_datasets.map( - group_texts, - batched=True, - num_proc=data_args.preprocessing_num_workers, - load_from_cache_file=not data_args.overwrite_cache, - ) - - if training_args.do_train: - if "train" not in tokenized_datasets: - raise ValueError("--do_train requires a train dataset") - train_dataset = lm_datasets["train"] - if data_args.max_train_samples is not None: - max_train_samples = min(len(train_dataset), data_args.max_train_samples) - train_dataset = train_dataset.select(range(max_train_samples)) - - if training_args.do_eval: - if "validation" not in tokenized_datasets: - raise ValueError("--do_eval requires a validation dataset") - eval_dataset = lm_datasets["validation"] - if data_args.max_eval_samples is not None: - max_eval_samples = min(len(eval_dataset), data_args.max_eval_samples) - eval_dataset = eval_dataset.select(range(max_eval_samples)) - - # Enable tensorboard only on the master node - has_tensorboard = is_tensorboard_available() - if has_tensorboard and jax.process_index() == 0: - try: - from flax.metrics.tensorboard import SummaryWriter - - summary_writer = SummaryWriter(log_dir=Path(training_args.output_dir)) - except ImportError as ie: - has_tensorboard = False - logger.warning( - f"Unable to display metrics through TensorBoard because some package are not installed: {ie}" - ) - else: - logger.warning( - "Unable to display metrics through TensorBoard because the package is not installed: " - "Please run pip install tensorboard to enable." - ) - - # Initialize our training - rng = jax.random.PRNGKey(training_args.seed) - rng, dropout_rng = jax.random.split(rng) - - # Store some constant - num_epochs = int(training_args.num_train_epochs) - train_batch_size = int(training_args.per_device_train_batch_size) * jax.device_count() - eval_batch_size = int(training_args.per_device_eval_batch_size) * jax.device_count() - steps_per_epoch = len(train_dataset) // train_batch_size - total_train_steps = steps_per_epoch * num_epochs - - # TODO: weights should be initialized in pjitted fun, this won't work for REALLY large models - # TODO: when loading from pre-trained model we need to make sure the vocab is divisible by num_partitions - # GPT2's vocab is odd, we need to resize it for fine-tuning - model = FlaxAutoModelForCausalLM.from_pretrained( - model_args.model_name_or_path, seed=training_args.seed, dtype=getattr(jnp, model_args.dtype) - ) - - # Create learning rate schedule - linear_decay_lr_schedule_fn = create_learning_rate_fn( - len(train_dataset), - train_batch_size, - training_args.num_train_epochs, - training_args.warmup_steps, - training_args.learning_rate, - ) - - optimizer = optax.adamw( - learning_rate=linear_decay_lr_schedule_fn, - b1=training_args.adam_beta1, - b2=training_args.adam_beta2, - eps=training_args.adam_epsilon, - weight_decay=training_args.weight_decay, - ) - - def get_initial_state(params): - state = optimizer.init(params) - return tuple(state), params - - # Get PartitionSpec for model params - param_spec = set_partitions(unfreeze(model.params)) - - # Get the PyTree for opt_state, we don't actually initialize the opt_state yet. 
- params_shapes = jax.tree_util.tree_map(lambda x: x.shape, model.params) - state_shapes = jax.eval_shape(get_initial_state, params_shapes) - - # get PartitionSpec for opt_state, this is very specific to adamw - # TODO: optax returns different state for different optimizers, how can we handle this generically ? - # or maybe we don't since in our examples we just use adamw or adafactor - def get_opt_spec(x): - if isinstance(x, dict): - return param_spec - return None - - opt_state_spec, param_spec = jax.tree_util.tree_map( - get_opt_spec, state_shapes, is_leaf=lambda x: isinstance(x, (dict, optax.EmptyState)) - ) - - # pjit the get_initial_state function to shard params and init - # optimizer state in sharded way - p_get_initial_state = pjit( - get_initial_state, - in_axis_resources=None, - out_axis_resources=(opt_state_spec, param_spec), - ) - - # hack: move the inital params to CPU to free up device memory - # TODO: allow loading weights on CPU in pre-trained model - model.params = jax.tree_util.tree_map(lambda x: np.asarray(x), model.params) - - # mesh defination - mesh_devices = np.array(jax.devices()).reshape(1, jax.local_device_count()) - - # actually initialize the opt_state - with mesh(mesh_devices, ("dp", "mp")): - opt_state, params = p_get_initial_state(freeze(model.params)) - - # cross-entropy with z loss - def loss_fn(logits, labels, z_loss=0): - shift_logits = logits[..., :-1, :] - shift_labels = labels[..., 1:] - - shift_labels = onehot(shift_labels, shift_logits.shape[-1]) - - shift_logits = shift_logits - jax.lax.stop_gradient(shift_logits.max(axis=-1, keepdims=True)) - log_z = jnp.log(jnp.sum(jnp.exp(shift_logits), axis=-1, keepdims=True)) - log_softmax = shift_logits - log_z - loss = -jnp.sum(shift_labels * log_softmax, axis=-1) - - loss += (1e-4 * jnp.square(log_z.squeeze(-1))) * z_loss - - return loss.mean() - - # Define gradient update step fn - # TODO: try to use TrainState instead of passing params and opt_state individually - def train_step(params, opt_state, dropout_rng, batch, step): - dropout_rng, new_dropout_rng = jax.random.split(dropout_rng) - - def compute_loss(params): - labels = batch.pop("labels") - logits = model(**batch, params=params, dropout_rng=dropout_rng, train=True)[0] - loss = loss_fn(logits, labels, z_loss=1.0) - return loss - - grad_fn = jax.value_and_grad(compute_loss) - loss, grads = grad_fn(params) - - updates, new_opt_state = optimizer.update(grads, opt_state, params) - new_params = optax.apply_updates(params, updates) - - metrics = {"loss": loss, "learning_rate": linear_decay_lr_schedule_fn(step)} - return new_params, tuple(new_opt_state), new_dropout_rng, metrics, step + 1 - - # Define eval fn - def eval_step(input_ids, labels, params): - logits = model(input_ids=input_ids, params=params, train=False)[0] - loss = loss_fn(logits, labels) - # metrics - return {"loss": loss} - - p_train_step = pjit( - train_step, - in_axis_resources=(param_spec, opt_state_spec, None, None, None), - out_axis_resources=(param_spec, opt_state_spec, None, None, None), - donate_argnums=(0, 1), - ) - - p_eval_step = pjit( - eval_step, - in_axis_resources=(None, None, param_spec), - out_axis_resources=None, - ) - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num Epochs = {num_epochs}") - logger.info(f" Instantaneous batch size per device = {training_args.per_device_train_batch_size}") - logger.info(f" Total train batch size (w. 
parallel & distributed) = {train_batch_size}") - logger.info(f" Total optimization steps = {total_train_steps}") - - train_time = 0 - train_metrics = [] - epochs = tqdm(range(num_epochs), desc=f"Epoch ... (1/{num_epochs})", position=0) - global_step = 0 - # we are not doing 2D parallelism (yet!), this just does model parallelism - with mesh(mesh_devices, ("dp", "mp")): - for _ in epochs: - # ======================== Training ================================ - train_start = time.time() - - # Create sampling rng - rng, input_rng = jax.random.split(rng) - - # Generate an epoch by shuffling sampling indices from the train dataset - train_metrics = [] - train_loader = data_loader(input_rng, train_dataset, train_batch_size, shuffle=True) - steps_per_epoch = len(train_dataset) // train_batch_size - - # train - for _ in tqdm(range(steps_per_epoch), desc="Training...", position=1, leave=False): - batch = next(train_loader) - params, opt_state, dropout_rng, train_metric, global_step = p_train_step( - params, - opt_state, - dropout_rng, - batch, - global_step, - ) - train_metrics.append(train_metric) - - cur_step = global_step - - if cur_step % training_args.logging_steps == 0 and cur_step > 0: - # Save metrics - train_time += time.time() - train_start - if has_tensorboard and jax.process_index() == 0: - write_train_metric(summary_writer, train_metrics, train_time, cur_step) - - epochs.write( - f"Step... ({cur_step} | Loss: {train_metric['loss']}, Learning Rate:" - f" {train_metric['learning_rate']})" - ) - - train_metrics = [] - - if cur_step % training_args.eval_steps == 0 and cur_step > 0: - # ======================== Evaluating ============================== - eval_metrics = [] - eval_loader = data_loader(input_rng, eval_dataset, eval_batch_size) - eval_steps = len(eval_dataset) // eval_batch_size - - for _ in tqdm(range(eval_steps), desc="Evaluating...", position=2, leave=False): - batch = next(eval_loader) - metrics = p_eval_step(batch["input_ids"], batch["labels"], params) - eval_metrics.append(metrics) - - # normalize eval metrics - eval_metrics = stack_forest(eval_metrics) - eval_metrics = jax.tree_util.tree_map(jnp.mean, eval_metrics) - - try: - eval_metrics["perplexity"] = math.exp(eval_metrics["loss"]) - except OverflowError: - eval_metrics["perplexity"] = float("inf") - - logger.info( - f"Step... 
({cur_step} | Eval loss: {eval_metrics['loss']} | Eval Perplexity:" - f" {eval_metrics['perplexity']}" - ) - - if cur_step % training_args.save_steps == 0 and cur_step > 0: - # save checkpoint after each epoch and push checkpoint to the hub - if jax.process_index() == 0: - params = jax.device_get(params) - model.save_pretrained( - training_args.output_dir, - params=params, - push_to_hub=training_args.push_to_hub, - commit_message=f"Saving weights and logs of step {cur_step}", - ) - - -if __name__ == "__main__": - main() diff --git a/spaces/chenxiYan/ChatHaruhi-OpenAI/app.py b/spaces/chenxiYan/ChatHaruhi-OpenAI/app.py deleted file mode 100644 index 4b438577225ffd09e062f82a83c41fdb11ad8f09..0000000000000000000000000000000000000000 --- a/spaces/chenxiYan/ChatHaruhi-OpenAI/app.py +++ /dev/null @@ -1,128 +0,0 @@ -import zipfile -import gradio as gr -from PIL import Image -from chatharuhi import ChatHaruhi -import requests -import os -import openai -import copy - - -NAME_DICT = {'汤师爷': 'tangshiye', '慕容复': 'murongfu', '李云龙': 'liyunlong', 'Luna': 'Luna', '王多鱼': 'wangduoyu', - 'Ron': 'Ron', '鸠摩智': 'jiumozhi', 'Snape': 'Snape', - '凉宫春日': 'haruhi', 'Malfoy': 'Malfoy', '虚竹': 'xuzhu', '萧峰': 'xiaofeng', '段誉': 'duanyu', - 'Hermione': 'Hermione', 'Dumbledore': 'Dumbledore', '王语嫣': 'wangyuyan', - 'Harry': 'Harry', 'McGonagall': 'McGonagall', '白展堂': 'baizhantang', '佟湘玉': 'tongxiangyu', - '郭芙蓉': 'guofurong', '旅行者': 'wanderer', '钟离': 'zhongli', - '胡桃': 'hutao', 'Sheldon': 'Sheldon', 'Raj': 'Raj', 'Penny': 'Penny', '韦小宝': 'weixiaobao', - '乔峰': 'qiaofeng', '神里绫华': 'ayaka', '雷电将军': 'raidenShogun', '于谦': 'yuqian'} - - - -try: - os.makedirs("characters_zip") -except: - pass -try: - os.makedirs("characters") -except: - pass -ai_roles_obj = {} -for ai_role_en in NAME_DICT.values(): - file_url = f"https://github.com/LC1332/Haruhi-2-Dev/raw/main/data/character_in_zip/{ai_role_en}.zip" - try: - os.makedirs(f"characters/{ai_role_en}") - except: - pass - if f"{ai_role_en}.zip" not in os.listdir(f"characters_zip"): - destination_file = f"characters_zip/{ai_role_en}.zip" - max_retries = 3 # 最大重试次数 - for attempt in range(1, max_retries+1): - response = requests.get(file_url) - if response.status_code == 200: - with open(destination_file, "wb") as file: - file.write(response.content) - print(ai_role_en) - break - else: - print(f"{ai_role_en}第{attempt}次下载失败") - # wget.download(file_url, destination_file) # 503 - destination_folder = f"characters/{ai_role_en}" - with zipfile.ZipFile(destination_file, 'r') as zip_ref: - zip_ref.extractall(destination_folder) - db_folder = f"./characters/{ai_role_en}/content/{ai_role_en}" - system_prompt = f"./characters/{ai_role_en}/content/system_prompt.txt" - ai_roles_obj[ai_role_en] = ChatHaruhi(system_prompt=system_prompt, - llm="openai", - story_db=db_folder, - verbose=True) - - -async def get_response(user_role, user_text, ai_role, chatbot): - role_en = NAME_DICT[ai_role] - ai_roles_obj[role_en].dialogue_history = copy.deepcopy(chatbot) - response = ai_roles_obj[role_en].chat(role=user_role, text=user_text) - user_msg = user_role + ':「' + user_text + '」' - latest_msg = (user_msg, response) - print(latest_msg) - chatbot.append(latest_msg) - return chatbot - -async def respond(user_role, user_text, ai_role, chatbot): - return await get_response(user_role, user_text, ai_role, chatbot), None - - -def clear(user_role, user_text, chatbot): - return None, None, [] - - -def get_image(ai_role): - role_en = NAME_DICT[ai_role] - return Image.open(f'images/{role_en}.jpg'), None, None, [] - - -with 
gr.Blocks() as demo: - gr.Markdown( - """ - # Chat凉宫春日 ChatHaruhi - ## Reviving Anime Character in Reality via Large Language Model - - ChatHaruhi2.0的demo implemented by [chenxi](https://github.com/todochenxi) - - 更多信息见项目github链接 [https://github.com/LC1332/Chat-Haruhi-Suzumiya](https://github.com/LC1332/Chat-Haruhi-Suzumiya) - - 如果觉得有趣请拜托为我们点上star. If you find it interesting, please be kind enough to give us a star. - - user_role 为用户扮演的人物 请尽量设置为与剧情相关的人物 且不要与主角同名 - - 如果你想为我们捐赠 api key,请联系我。 - If you would like to donate an api key to us, please contact me. - API キーを寄付したい場合は、私までご連絡ください。 - email: todochenxi@163.com - """ - ) - with gr.Row(): - chatbot = gr.Chatbot() - role_image = gr.Image(height=400, value="./images/haruhi.jpg") - with gr.Row(): - user_role = gr.Textbox(label="user_role", scale=1) - user_text = gr.Textbox(label="user_text", scale=20) - with gr.Row(): - submit = gr.Button("Submit") - clean = gr.ClearButton(value="Clear") - ai_role = gr.Radio(['汤师爷', '慕容复', '李云龙', - 'Luna', '王多鱼', 'Ron', '鸠摩智', - 'Snape', '凉宫春日', 'Malfoy', '虚竹', - '萧峰', '段誉', 'Hermione', 'Dumbledore', - '王语嫣', - 'Harry', 'McGonagall', - '白展堂', '佟湘玉', '郭芙蓉', - '旅行者', '钟离', '胡桃', - 'Sheldon', 'Raj', 'Penny', - '韦小宝', '乔峰', '神里绫华', - '雷电将军', '于谦'], label="characters", value='凉宫春日') - ai_role.change(get_image, ai_role, [role_image, user_role, user_text, chatbot]) - user_text.submit(fn=respond, inputs=[user_role, user_text, ai_role, chatbot], outputs=[chatbot, user_text]) - submit.click(fn=respond, inputs=[user_role, user_text, ai_role, chatbot], outputs=[chatbot, user_text]) - clean.click(clear, [user_role, user_text, chatbot], [user_role, user_text, chatbot]) -demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/chumeng/anime-ai-detect/app.py b/spaces/chumeng/anime-ai-detect/app.py deleted file mode 100644 index 89224ac0e4493054be928e7fabed7b9d0485e412..0000000000000000000000000000000000000000 --- a/spaces/chumeng/anime-ai-detect/app.py +++ /dev/null @@ -1,17 +0,0 @@ -import gradio as gr -from transformers import pipeline - -detection_pipeline = pipeline("image-classification", "saltacc/anime-ai-detect") - - -def detect(img): - print(img) - output = detection_pipeline(img, top_k=2) - final = {} - for d in output: - final[d["label"]] = d["score"] - return final - - -iface = gr.Interface(fn=detect, inputs=gr.Image(type="pil"), outputs=gr.Label(label="result")) -iface.launch() diff --git a/spaces/cihyFjudo/fairness-paper-search/Sketchup Instant Road Pro Plugin.torrent LINK.md b/spaces/cihyFjudo/fairness-paper-search/Sketchup Instant Road Pro Plugin.torrent LINK.md deleted file mode 100644 index 4287a0d920ea4ff9572e0bdd07c427756827e632..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Sketchup Instant Road Pro Plugin.torrent LINK.md +++ /dev/null @@ -1,91 +0,0 @@ -## Sketchup Instant Road Pro Plugin.torrent - - - - ![Sketchup Instant Road Pro Plugin.torrent LINK](https://photos1.blogger.com/img/m-d987916938a7aebcac0fcde7311a5b9ce52f8dc7.jpg) - - - -**CLICK HERE >> [https://walllowcopo.blogspot.com/?download=2twr29](https://walllowcopo.blogspot.com/?download=2twr29)** - - - -# How to Install and Use Sketchup Instant Road Pro Plugin - - - -Sketchup Instant Road Pro Plugin is a powerful tool that automates the creation of roads, pathways, and waterways on a terrain using either an outline or a centerline for input. It also creates curbs, sidewalks, depressed or raised road surfaces, center medians and islands. 
It is compatible with Sketchup free and pro versions 2014 and above. - - - -In this article, we will show you how to download, install and use Sketchup Instant Road Pro Plugin to create realistic roads and landscapes in Sketchup. - - - -## How to Download Sketchup Instant Road Pro Plugin - - - -Sketchup Instant Road Pro Plugin is available for purchase from Vali Architects website[^1^]. You can also download a free trial version that works for 30 days. The plugin file is in .rbz format, which is a compressed Ruby script file that can be installed directly in Sketchup. - - - -## How to Install Sketchup Instant Road Pro Plugin - - - -To install Sketchup Instant Road Pro Plugin, follow these steps: - - - -1. Open Sketchup and go to Window > Extension Manager. - -2. Click on the Install Extension button at the bottom left corner of the window. - -3. Browse to the location where you saved the .rbz file and select it. - -4. Click on OK to confirm the installation. - -5. Restart Sketchup to activate the plugin. - - - -You should now see a new toolbar called Instant Road Nui on your screen. You can also access the plugin from Tools > Instant Road Nui. - - - -## How to Use Sketchup Instant Road Pro Plugin - - - -To use Sketchup Instant Road Pro Plugin, follow these steps: - - - -1. Create a terrain model in Sketchup or import one from another source. - -2. Select the Instant Road Nui toolbar or go to Tools > Instant Road Nui. - -3. Choose one of the four modes: Outline, Centerline, From Contours or From Mesh. - -4. Depending on the mode, draw an outline or a centerline on the terrain using Sketchup drawing tools or select an existing group of contours or a mesh. - -5. Click on the Create button on the toolbar or press Enter to generate the road. - -6. Adjust the parameters of the road such as width, profile, material, curb height, etc. from the dialog box that appears. - -7. Click on OK to apply the changes or Cancel to undo them. - - - -You can also edit the road after creating it by selecting it and clicking on the Edit button on the toolbar. You can move, rotate, scale or delete the road as you wish. You can also create multiple roads and connect them using the Connect button on the toolbar. - - - -## Conclusion - - - -Sketchup Instant Road Pro Plugin is a useful plugin that simplifies the process of creating roads and landscapes in Sketchup. It offers various options and features that allow you to customize your roads according to your needs and preferences. It is compatible with Sketchup free and pro versions 2014 and above. You can purchase it from Vali Architects website[^1^] or download a free trial version that works for 30 days. 
- - [^1^]: http://www.valiarchitects.com/sketchup\_scripts/instant-road-nui dfd1c89656 \ No newline at end of file diff --git a/spaces/cleanmaster/akagi-sovits3/data_utils.py b/spaces/cleanmaster/akagi-sovits3/data_utils.py deleted file mode 100644 index 9dfba4a9dfbfbd2b6ed5e771a5ffee4f70419ba3..0000000000000000000000000000000000000000 --- a/spaces/cleanmaster/akagi-sovits3/data_utils.py +++ /dev/null @@ -1,152 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data - -import commons -from mel_processing import spectrogram_torch, spec_to_mel_torch -from utils import load_wav_to_torch, load_filepaths_and_text, transform - -# import h5py - - -"""Multi speaker version""" - - -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - - def __init__(self, audiopaths, hparams): - self.audiopaths = load_filepaths_and_text(audiopaths) - self.max_wav_value = hparams.data.max_wav_value - self.sampling_rate = hparams.data.sampling_rate - self.filter_length = hparams.data.filter_length - self.hop_length = hparams.data.hop_length - self.win_length = hparams.data.win_length - self.sampling_rate = hparams.data.sampling_rate - self.use_sr = hparams.train.use_sr - self.spec_len = hparams.train.max_speclen - self.spk_map = hparams.spk - - random.seed(1234) - random.shuffle(self.audiopaths) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - - spk = filename.split(os.sep)[-2] - spk = torch.LongTensor([self.spk_map[spk]]) - - c = torch.load(filename + ".soft.pt").squeeze(0) - c = torch.repeat_interleave(c, repeats=2, dim=1) - - f0 = np.load(filename + ".f0.npy") - f0 = torch.FloatTensor(f0) - lmin = min(c.size(-1), spec.size(-1), f0.shape[0]) - assert abs(c.size(-1) - spec.size(-1)) < 4, (c.size(-1), spec.size(-1), f0.shape, filename) - assert abs(lmin - spec.size(-1)) < 4, (c.size(-1), spec.size(-1), f0.shape) - assert abs(lmin - c.size(-1)) < 4, (c.size(-1), spec.size(-1), f0.shape) - spec, c, f0 = spec[:, :lmin], c[:, :lmin], f0[:lmin] - audio_norm = audio_norm[:, :lmin * self.hop_length] - _spec, _c, _audio_norm, _f0 = spec, c, audio_norm, f0 - while spec.size(-1) < self.spec_len: - spec = torch.cat((spec, _spec), -1) - c = torch.cat((c, _c), -1) - f0 = torch.cat((f0, _f0), -1) - audio_norm = torch.cat((audio_norm, _audio_norm), -1) - start = random.randint(0, spec.size(-1) - self.spec_len) - end = start + self.spec_len - spec = spec[:, start:end] - c = c[:, start:end] - f0 = f0[start:end] - audio_norm = audio_norm[:, start * self.hop_length:end * self.hop_length] - - return c, f0, spec, audio_norm, spk - - def __getitem__(self, index): - return self.get_audio(self.audiopaths[index][0]) - - def __len__(self): - return len(self.audiopaths) - - -class EvalDataLoader(torch.utils.data.Dataset): - """ - 1) loads audio, 
speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - - def __init__(self, audiopaths, hparams): - self.audiopaths = load_filepaths_and_text(audiopaths) - self.max_wav_value = hparams.data.max_wav_value - self.sampling_rate = hparams.data.sampling_rate - self.filter_length = hparams.data.filter_length - self.hop_length = hparams.data.hop_length - self.win_length = hparams.data.win_length - self.sampling_rate = hparams.data.sampling_rate - self.use_sr = hparams.train.use_sr - self.audiopaths = self.audiopaths[:5] - self.spk_map = hparams.spk - - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - - spk = filename.split(os.sep)[-2] - spk = torch.LongTensor([self.spk_map[spk]]) - - c = torch.load(filename + ".soft.pt").squeeze(0) - - c = torch.repeat_interleave(c, repeats=2, dim=1) - - f0 = np.load(filename + ".f0.npy") - f0 = torch.FloatTensor(f0) - lmin = min(c.size(-1), spec.size(-1), f0.shape[0]) - assert abs(c.size(-1) - spec.size(-1)) < 4, (c.size(-1), spec.size(-1), f0.shape) - assert abs(f0.shape[0] - spec.shape[-1]) < 4, (c.size(-1), spec.size(-1), f0.shape) - spec, c, f0 = spec[:, :lmin], c[:, :lmin], f0[:lmin] - audio_norm = audio_norm[:, :lmin * self.hop_length] - - return c, f0, spec, audio_norm, spk - - def __getitem__(self, index): - return self.get_audio(self.audiopaths[index][0]) - - def __len__(self): - return len(self.audiopaths) - diff --git a/spaces/clevrpwn/CompVis-stable-diffusion-v1-4/README.md b/spaces/clevrpwn/CompVis-stable-diffusion-v1-4/README.md deleted file mode 100644 index ebd36ce416059ad6792215ac84d3c26f99493949..0000000000000000000000000000000000000000 --- a/spaces/clevrpwn/CompVis-stable-diffusion-v1-4/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: CompVis Stable Diffusion V1 4 -emoji: 👀 -colorFrom: pink -colorTo: pink -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/WalImageFile.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/WalImageFile.py deleted file mode 100644 index e4f47aa04bc148f3ff151bec5595f8626833b938..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/WalImageFile.py +++ /dev/null @@ -1,123 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# WAL file handling -# -# History: -# 2003-04-23 fl created -# -# Copyright (c) 2003 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - -""" -This reader is based on the specification available from: -https://www.flipcode.com/archives/Quake_2_BSP_File_Format.shtml -and has been tested with a few sample files found using google. - -.. 
note:: - This format cannot be automatically recognized, so the reader - is not registered for use with :py:func:`PIL.Image.open()`. - To open a WAL file, use the :py:func:`PIL.WalImageFile.open()` function instead. -""" - -from . import Image, ImageFile -from ._binary import i32le as i32 - - -class WalImageFile(ImageFile.ImageFile): - format = "WAL" - format_description = "Quake2 Texture" - - def _open(self): - self.mode = "P" - - # read header fields - header = self.fp.read(32 + 24 + 32 + 12) - self._size = i32(header, 32), i32(header, 36) - Image._decompression_bomb_check(self.size) - - # load pixel data - offset = i32(header, 40) - self.fp.seek(offset) - - # strings are null-terminated - self.info["name"] = header[:32].split(b"\0", 1)[0] - next_name = header[56 : 56 + 32].split(b"\0", 1)[0] - if next_name: - self.info["next_name"] = next_name - - def load(self): - if not self.im: - self.im = Image.core.new(self.mode, self.size) - self.frombytes(self.fp.read(self.size[0] * self.size[1])) - self.putpalette(quake2palette) - return Image.Image.load(self) - - -def open(filename): - """ - Load texture from a Quake2 WAL texture file. - - By default, a Quake2 standard palette is attached to the texture. - To override the palette, use the :py:func:`PIL.Image.Image.putpalette()` method. - - :param filename: WAL file name, or an opened file handle. - :returns: An image instance. - """ - return WalImageFile(filename) - - -quake2palette = ( - # default palette taken from piffo 0.93 by Hans Häggström - b"\x01\x01\x01\x0b\x0b\x0b\x12\x12\x12\x17\x17\x17\x1b\x1b\x1b\x1e" - b"\x1e\x1e\x22\x22\x22\x26\x26\x26\x29\x29\x29\x2c\x2c\x2c\x2f\x2f" - b"\x2f\x32\x32\x32\x35\x35\x35\x37\x37\x37\x3a\x3a\x3a\x3c\x3c\x3c" - b"\x24\x1e\x13\x22\x1c\x12\x20\x1b\x12\x1f\x1a\x10\x1d\x19\x10\x1b" - b"\x17\x0f\x1a\x16\x0f\x18\x14\x0d\x17\x13\x0d\x16\x12\x0d\x14\x10" - b"\x0b\x13\x0f\x0b\x10\x0d\x0a\x0f\x0b\x0a\x0d\x0b\x07\x0b\x0a\x07" - b"\x23\x23\x26\x22\x22\x25\x22\x20\x23\x21\x1f\x22\x20\x1e\x20\x1f" - b"\x1d\x1e\x1d\x1b\x1c\x1b\x1a\x1a\x1a\x19\x19\x18\x17\x17\x17\x16" - b"\x16\x14\x14\x14\x13\x13\x13\x10\x10\x10\x0f\x0f\x0f\x0d\x0d\x0d" - b"\x2d\x28\x20\x29\x24\x1c\x27\x22\x1a\x25\x1f\x17\x38\x2e\x1e\x31" - b"\x29\x1a\x2c\x25\x17\x26\x20\x14\x3c\x30\x14\x37\x2c\x13\x33\x28" - b"\x12\x2d\x24\x10\x28\x1f\x0f\x22\x1a\x0b\x1b\x14\x0a\x13\x0f\x07" - b"\x31\x1a\x16\x30\x17\x13\x2e\x16\x10\x2c\x14\x0d\x2a\x12\x0b\x27" - b"\x0f\x0a\x25\x0f\x07\x21\x0d\x01\x1e\x0b\x01\x1c\x0b\x01\x1a\x0b" - b"\x01\x18\x0a\x01\x16\x0a\x01\x13\x0a\x01\x10\x07\x01\x0d\x07\x01" - b"\x29\x23\x1e\x27\x21\x1c\x26\x20\x1b\x25\x1f\x1a\x23\x1d\x19\x21" - b"\x1c\x18\x20\x1b\x17\x1e\x19\x16\x1c\x18\x14\x1b\x17\x13\x19\x14" - b"\x10\x17\x13\x0f\x14\x10\x0d\x12\x0f\x0b\x0f\x0b\x0a\x0b\x0a\x07" - b"\x26\x1a\x0f\x23\x19\x0f\x20\x17\x0f\x1c\x16\x0f\x19\x13\x0d\x14" - b"\x10\x0b\x10\x0d\x0a\x0b\x0a\x07\x33\x22\x1f\x35\x29\x26\x37\x2f" - b"\x2d\x39\x35\x34\x37\x39\x3a\x33\x37\x39\x30\x34\x36\x2b\x31\x34" - b"\x27\x2e\x31\x22\x2b\x2f\x1d\x28\x2c\x17\x25\x2a\x0f\x20\x26\x0d" - b"\x1e\x25\x0b\x1c\x22\x0a\x1b\x20\x07\x19\x1e\x07\x17\x1b\x07\x14" - b"\x18\x01\x12\x16\x01\x0f\x12\x01\x0b\x0d\x01\x07\x0a\x01\x01\x01" - b"\x2c\x21\x21\x2a\x1f\x1f\x29\x1d\x1d\x27\x1c\x1c\x26\x1a\x1a\x24" - b"\x18\x18\x22\x17\x17\x21\x16\x16\x1e\x13\x13\x1b\x12\x12\x18\x10" - b"\x10\x16\x0d\x0d\x12\x0b\x0b\x0d\x0a\x0a\x0a\x07\x07\x01\x01\x01" - b"\x2e\x30\x29\x2d\x2e\x27\x2b\x2c\x26\x2a\x2a\x24\x28\x29\x23\x27" - b"\x27\x21\x26\x26\x1f\x24\x24\x1d\x22\x22\x1c\x1f\x1f\x1a\x1c\x1c" - 
b"\x18\x19\x19\x16\x17\x17\x13\x13\x13\x10\x0f\x0f\x0d\x0b\x0b\x0a" - b"\x30\x1e\x1b\x2d\x1c\x19\x2c\x1a\x17\x2a\x19\x14\x28\x17\x13\x26" - b"\x16\x10\x24\x13\x0f\x21\x12\x0d\x1f\x10\x0b\x1c\x0f\x0a\x19\x0d" - b"\x0a\x16\x0b\x07\x12\x0a\x07\x0f\x07\x01\x0a\x01\x01\x01\x01\x01" - b"\x28\x29\x38\x26\x27\x36\x25\x26\x34\x24\x24\x31\x22\x22\x2f\x20" - b"\x21\x2d\x1e\x1f\x2a\x1d\x1d\x27\x1b\x1b\x25\x19\x19\x21\x17\x17" - b"\x1e\x14\x14\x1b\x13\x12\x17\x10\x0f\x13\x0d\x0b\x0f\x0a\x07\x07" - b"\x2f\x32\x29\x2d\x30\x26\x2b\x2e\x24\x29\x2c\x21\x27\x2a\x1e\x25" - b"\x28\x1c\x23\x26\x1a\x21\x25\x18\x1e\x22\x14\x1b\x1f\x10\x19\x1c" - b"\x0d\x17\x1a\x0a\x13\x17\x07\x10\x13\x01\x0d\x0f\x01\x0a\x0b\x01" - b"\x01\x3f\x01\x13\x3c\x0b\x1b\x39\x10\x20\x35\x14\x23\x31\x17\x23" - b"\x2d\x18\x23\x29\x18\x3f\x3f\x3f\x3f\x3f\x39\x3f\x3f\x31\x3f\x3f" - b"\x2a\x3f\x3f\x20\x3f\x3f\x14\x3f\x3c\x12\x3f\x39\x0f\x3f\x35\x0b" - b"\x3f\x32\x07\x3f\x2d\x01\x3d\x2a\x01\x3b\x26\x01\x39\x21\x01\x37" - b"\x1d\x01\x34\x1a\x01\x32\x16\x01\x2f\x12\x01\x2d\x0f\x01\x2a\x0b" - b"\x01\x27\x07\x01\x23\x01\x01\x1d\x01\x01\x17\x01\x01\x10\x01\x01" - b"\x3d\x01\x01\x19\x19\x3f\x3f\x01\x01\x01\x01\x3f\x16\x16\x13\x10" - b"\x10\x0f\x0d\x0d\x0b\x3c\x2e\x2a\x36\x27\x20\x30\x21\x18\x29\x1b" - b"\x10\x3c\x39\x37\x37\x32\x2f\x31\x2c\x28\x2b\x26\x21\x30\x22\x20" -) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/T_S_I_J_.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/T_S_I_J_.py deleted file mode 100644 index bc8fe92aac9d18bfd5ee565588d8cebf7d00afd1..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/ttLib/tables/T_S_I_J_.py +++ /dev/null @@ -1,5 +0,0 @@ -from .T_S_I_V_ import table_T_S_I_V_ - - -class table_T_S_I_J_(table_T_S_I_V_): - pass diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/apedec.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/apedec.c deleted file mode 100644 index 772636afde33514adad360f9b37e8119c9289f45..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/apedec.c +++ /dev/null @@ -1,1692 +0,0 @@ -/* - * Monkey's Audio lossless audio decoder - * Copyright (c) 2007 Benjamin Zores - * based upon libdemac from Dave Chapman. - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include - -#include "libavutil/avassert.h" -#include "libavutil/channel_layout.h" -#include "libavutil/crc.h" -#include "libavutil/opt.h" -#include "lossless_audiodsp.h" -#include "avcodec.h" -#include "bswapdsp.h" -#include "bytestream.h" -#include "codec_internal.h" -#include "decode.h" -#include "get_bits.h" -#include "unary.h" - -/** - * @file - * Monkey's Audio lossless audio decoder - */ - -#define MAX_CHANNELS 2 -#define MAX_BYTESPERSAMPLE 3 - -#define APE_FRAMECODE_MONO_SILENCE 1 -#define APE_FRAMECODE_STEREO_SILENCE 3 -#define APE_FRAMECODE_PSEUDO_STEREO 4 - -#define HISTORY_SIZE 512 -#define PREDICTOR_ORDER 8 -/** Total size of all predictor histories */ -#define PREDICTOR_SIZE 50 - -#define YDELAYA (18 + PREDICTOR_ORDER*4) -#define YDELAYB (18 + PREDICTOR_ORDER*3) -#define XDELAYA (18 + PREDICTOR_ORDER*2) -#define XDELAYB (18 + PREDICTOR_ORDER) - -#define YADAPTCOEFFSA 18 -#define XADAPTCOEFFSA 14 -#define YADAPTCOEFFSB 10 -#define XADAPTCOEFFSB 5 - -/** - * Possible compression levels - * @{ - */ -enum APECompressionLevel { - COMPRESSION_LEVEL_FAST = 1000, - COMPRESSION_LEVEL_NORMAL = 2000, - COMPRESSION_LEVEL_HIGH = 3000, - COMPRESSION_LEVEL_EXTRA_HIGH = 4000, - COMPRESSION_LEVEL_INSANE = 5000 -}; -/** @} */ - -#define APE_FILTER_LEVELS 3 - -/** Filter orders depending on compression level */ -static const uint16_t ape_filter_orders[5][APE_FILTER_LEVELS] = { - { 0, 0, 0 }, - { 16, 0, 0 }, - { 64, 0, 0 }, - { 32, 256, 0 }, - { 16, 256, 1280 } -}; - -/** Filter fraction bits depending on compression level */ -static const uint8_t ape_filter_fracbits[5][APE_FILTER_LEVELS] = { - { 0, 0, 0 }, - { 11, 0, 0 }, - { 11, 0, 0 }, - { 10, 13, 0 }, - { 11, 13, 15 } -}; - - -/** Filters applied to the decoded data */ -typedef struct APEFilter { - int16_t *coeffs; ///< actual coefficients used in filtering - int16_t *adaptcoeffs; ///< adaptive filter coefficients used for correcting of actual filter coefficients - int16_t *historybuffer; ///< filter memory - int16_t *delay; ///< filtered values - - uint32_t avg; -} APEFilter; - -typedef struct APERice { - uint32_t k; - uint32_t ksum; -} APERice; - -typedef struct APERangecoder { - uint32_t low; ///< low end of interval - uint32_t range; ///< length of interval - uint32_t help; ///< bytes_to_follow resp. 
intermediate value - unsigned int buffer; ///< buffer for input/output -} APERangecoder; - -/** Filter histories */ -typedef struct APEPredictor { - int32_t *buf; - - int32_t lastA[2]; - - int32_t filterA[2]; - int32_t filterB[2]; - - uint32_t coeffsA[2][4]; ///< adaption coefficients - uint32_t coeffsB[2][5]; ///< adaption coefficients - int32_t historybuffer[HISTORY_SIZE + PREDICTOR_SIZE]; - - unsigned int sample_pos; -} APEPredictor; - -typedef struct APEPredictor64 { - int64_t *buf; - - int64_t lastA[2]; - - int64_t filterA[2]; - int64_t filterB[2]; - - uint64_t coeffsA[2][4]; ///< adaption coefficients - uint64_t coeffsB[2][5]; ///< adaption coefficients - int64_t historybuffer[HISTORY_SIZE + PREDICTOR_SIZE]; - - unsigned int sample_pos; -} APEPredictor64; - -/** Decoder context */ -typedef struct APEContext { - AVClass *class; ///< class for AVOptions - AVCodecContext *avctx; - BswapDSPContext bdsp; - LLAudDSPContext adsp; - int channels; - int samples; ///< samples left to decode in current frame - int bps; - - int fileversion; ///< codec version, very important in decoding process - int compression_level; ///< compression levels - int fset; ///< which filter set to use (calculated from compression level) - int flags; ///< global decoder flags - - uint32_t CRC; ///< signalled frame CRC - uint32_t CRC_state; ///< accumulated CRC - int frameflags; ///< frame flags - APEPredictor predictor; ///< predictor used for final reconstruction - APEPredictor64 predictor64; ///< 64bit predictor used for final reconstruction - - int32_t *decoded_buffer; - int decoded_size; - int32_t *decoded[MAX_CHANNELS]; ///< decoded data for each channel - int blocks_per_loop; ///< maximum number of samples to decode for each call - - int16_t* filterbuf[APE_FILTER_LEVELS]; ///< filter memory - - APERangecoder rc; ///< rangecoder used to decode actual values - APERice riceX; ///< rice code parameters for the second channel - APERice riceY; ///< rice code parameters for the first channel - APEFilter filters[APE_FILTER_LEVELS][2]; ///< filters used for reconstruction - GetBitContext gb; - - uint8_t *data; ///< current frame data - uint8_t *data_end; ///< frame data end - int data_size; ///< frame data allocated size - const uint8_t *ptr; ///< current position in frame data - - int error; - - void (*entropy_decode_mono)(struct APEContext *ctx, int blockstodecode); - void (*entropy_decode_stereo)(struct APEContext *ctx, int blockstodecode); - void (*predictor_decode_mono)(struct APEContext *ctx, int count); - void (*predictor_decode_stereo)(struct APEContext *ctx, int count); -} APEContext; - -static void ape_apply_filters(APEContext *ctx, int32_t *decoded0, - int32_t *decoded1, int count); - -static void entropy_decode_mono_0000(APEContext *ctx, int blockstodecode); -static void entropy_decode_stereo_0000(APEContext *ctx, int blockstodecode); -static void entropy_decode_mono_3860(APEContext *ctx, int blockstodecode); -static void entropy_decode_stereo_3860(APEContext *ctx, int blockstodecode); -static void entropy_decode_mono_3900(APEContext *ctx, int blockstodecode); -static void entropy_decode_stereo_3900(APEContext *ctx, int blockstodecode); -static void entropy_decode_stereo_3930(APEContext *ctx, int blockstodecode); -static void entropy_decode_mono_3990(APEContext *ctx, int blockstodecode); -static void entropy_decode_stereo_3990(APEContext *ctx, int blockstodecode); - -static void predictor_decode_mono_3800(APEContext *ctx, int count); -static void predictor_decode_stereo_3800(APEContext *ctx, int count); 
-static void predictor_decode_mono_3930(APEContext *ctx, int count); -static void predictor_decode_stereo_3930(APEContext *ctx, int count); -static void predictor_decode_mono_3950(APEContext *ctx, int count); -static void predictor_decode_stereo_3950(APEContext *ctx, int count); - -static av_cold int ape_decode_close(AVCodecContext *avctx) -{ - APEContext *s = avctx->priv_data; - int i; - - for (i = 0; i < APE_FILTER_LEVELS; i++) - av_freep(&s->filterbuf[i]); - - av_freep(&s->decoded_buffer); - av_freep(&s->data); - s->decoded_size = s->data_size = 0; - - return 0; -} - -static av_cold int ape_decode_init(AVCodecContext *avctx) -{ - APEContext *s = avctx->priv_data; - int channels = avctx->ch_layout.nb_channels; - int i; - - if (avctx->extradata_size != 6) { - av_log(avctx, AV_LOG_ERROR, "Incorrect extradata\n"); - return AVERROR(EINVAL); - } - if (channels > 2) { - av_log(avctx, AV_LOG_ERROR, "Only mono and stereo is supported\n"); - return AVERROR(EINVAL); - } - avctx->bits_per_raw_sample = - s->bps = avctx->bits_per_coded_sample; - switch (s->bps) { - case 8: - avctx->sample_fmt = AV_SAMPLE_FMT_U8P; - break; - case 16: - avctx->sample_fmt = AV_SAMPLE_FMT_S16P; - break; - case 24: - avctx->sample_fmt = AV_SAMPLE_FMT_S32P; - break; - default: - avpriv_request_sample(avctx, - "%d bits per coded sample", s->bps); - return AVERROR_PATCHWELCOME; - } - s->avctx = avctx; - s->channels = channels; - s->fileversion = AV_RL16(avctx->extradata); - s->compression_level = AV_RL16(avctx->extradata + 2); - s->flags = AV_RL16(avctx->extradata + 4); - - av_log(avctx, AV_LOG_VERBOSE, "Compression Level: %d - Flags: %d\n", - s->compression_level, s->flags); - if (s->compression_level % 1000 || s->compression_level > COMPRESSION_LEVEL_INSANE || - !s->compression_level || - (s->fileversion < 3930 && s->compression_level == COMPRESSION_LEVEL_INSANE)) { - av_log(avctx, AV_LOG_ERROR, "Incorrect compression level %d\n", - s->compression_level); - return AVERROR_INVALIDDATA; - } - s->fset = s->compression_level / 1000 - 1; - for (i = 0; i < APE_FILTER_LEVELS; i++) { - if (!ape_filter_orders[s->fset][i]) - break; - if (!(s->filterbuf[i] = av_malloc((ape_filter_orders[s->fset][i] * 3 + HISTORY_SIZE) * 4))) - return AVERROR(ENOMEM); - } - - if (s->fileversion < 3860) { - s->entropy_decode_mono = entropy_decode_mono_0000; - s->entropy_decode_stereo = entropy_decode_stereo_0000; - } else if (s->fileversion < 3900) { - s->entropy_decode_mono = entropy_decode_mono_3860; - s->entropy_decode_stereo = entropy_decode_stereo_3860; - } else if (s->fileversion < 3930) { - s->entropy_decode_mono = entropy_decode_mono_3900; - s->entropy_decode_stereo = entropy_decode_stereo_3900; - } else if (s->fileversion < 3990) { - s->entropy_decode_mono = entropy_decode_mono_3900; - s->entropy_decode_stereo = entropy_decode_stereo_3930; - } else { - s->entropy_decode_mono = entropy_decode_mono_3990; - s->entropy_decode_stereo = entropy_decode_stereo_3990; - } - - if (s->fileversion < 3930) { - s->predictor_decode_mono = predictor_decode_mono_3800; - s->predictor_decode_stereo = predictor_decode_stereo_3800; - } else if (s->fileversion < 3950) { - s->predictor_decode_mono = predictor_decode_mono_3930; - s->predictor_decode_stereo = predictor_decode_stereo_3930; - } else { - s->predictor_decode_mono = predictor_decode_mono_3950; - s->predictor_decode_stereo = predictor_decode_stereo_3950; - } - - ff_bswapdsp_init(&s->bdsp); - ff_llauddsp_init(&s->adsp); - av_channel_layout_uninit(&avctx->ch_layout); - avctx->ch_layout = (channels == 2) ? 
(AVChannelLayout)AV_CHANNEL_LAYOUT_STEREO - : (AVChannelLayout)AV_CHANNEL_LAYOUT_MONO; - - return 0; -} - -/** - * @name APE range decoding functions - * @{ - */ - -#define CODE_BITS 32 -#define TOP_VALUE ((unsigned int)1 << (CODE_BITS-1)) -#define SHIFT_BITS (CODE_BITS - 9) -#define EXTRA_BITS ((CODE_BITS-2) % 8 + 1) -#define BOTTOM_VALUE (TOP_VALUE >> 8) - -/** Start the decoder */ -static inline void range_start_decoding(APEContext *ctx) -{ - ctx->rc.buffer = bytestream_get_byte(&ctx->ptr); - ctx->rc.low = ctx->rc.buffer >> (8 - EXTRA_BITS); - ctx->rc.range = (uint32_t) 1 << EXTRA_BITS; -} - -/** Perform normalization */ -static inline void range_dec_normalize(APEContext *ctx) -{ - while (ctx->rc.range <= BOTTOM_VALUE) { - ctx->rc.buffer <<= 8; - if(ctx->ptr < ctx->data_end) { - ctx->rc.buffer += *ctx->ptr; - ctx->ptr++; - } else { - ctx->error = 1; - } - ctx->rc.low = (ctx->rc.low << 8) | ((ctx->rc.buffer >> 1) & 0xFF); - ctx->rc.range <<= 8; - } -} - -/** - * Calculate cumulative frequency for next symbol. Does NO update! - * @param ctx decoder context - * @param tot_f is the total frequency or (code_value)1<rc.help = ctx->rc.range / tot_f; - return ctx->rc.low / ctx->rc.help; -} - -/** - * Decode value with given size in bits - * @param ctx decoder context - * @param shift number of bits to decode - */ -static inline int range_decode_culshift(APEContext *ctx, int shift) -{ - range_dec_normalize(ctx); - ctx->rc.help = ctx->rc.range >> shift; - return ctx->rc.low / ctx->rc.help; -} - - -/** - * Update decoding state - * @param ctx decoder context - * @param sy_f the interval length (frequency of the symbol) - * @param lt_f the lower end (frequency sum of < symbols) - */ -static inline void range_decode_update(APEContext *ctx, int sy_f, int lt_f) -{ - ctx->rc.low -= ctx->rc.help * lt_f; - ctx->rc.range = ctx->rc.help * sy_f; -} - -/** Decode n bits (n <= 16) without modelling */ -static inline int range_decode_bits(APEContext *ctx, int n) -{ - int sym = range_decode_culshift(ctx, n); - range_decode_update(ctx, 1, sym); - return sym; -} - - -#define MODEL_ELEMENTS 64 - -/** - * Fixed probabilities for symbols in Monkey Audio version 3.97 - */ -static const uint16_t counts_3970[22] = { - 0, 14824, 28224, 39348, 47855, 53994, 58171, 60926, - 62682, 63786, 64463, 64878, 65126, 65276, 65365, 65419, - 65450, 65469, 65480, 65487, 65491, 65493, -}; - -/** - * Probability ranges for symbols in Monkey Audio version 3.97 - */ -static const uint16_t counts_diff_3970[21] = { - 14824, 13400, 11124, 8507, 6139, 4177, 2755, 1756, - 1104, 677, 415, 248, 150, 89, 54, 31, - 19, 11, 7, 4, 2, -}; - -/** - * Fixed probabilities for symbols in Monkey Audio version 3.98 - */ -static const uint16_t counts_3980[22] = { - 0, 19578, 36160, 48417, 56323, 60899, 63265, 64435, - 64971, 65232, 65351, 65416, 65447, 65466, 65476, 65482, - 65485, 65488, 65490, 65491, 65492, 65493, -}; - -/** - * Probability ranges for symbols in Monkey Audio version 3.98 - */ -static const uint16_t counts_diff_3980[21] = { - 19578, 16582, 12257, 7906, 4576, 2366, 1170, 536, - 261, 119, 65, 31, 19, 10, 6, 3, - 3, 2, 1, 1, 1, -}; - -/** - * Decode symbol - * @param ctx decoder context - * @param counts probability range start position - * @param counts_diff probability range widths - */ -static inline int range_get_symbol(APEContext *ctx, - const uint16_t counts[], - const uint16_t counts_diff[]) -{ - int symbol, cf; - - cf = range_decode_culshift(ctx, 16); - - if(cf > 65492){ - symbol= cf - 65535 + 63; - range_decode_update(ctx, 1, cf); - 
if(cf > 65535) - ctx->error=1; - return symbol; - } - /* figure out the symbol inefficiently; a binary search would be much better */ - for (symbol = 0; counts[symbol + 1] <= cf; symbol++); - - range_decode_update(ctx, counts_diff[symbol], counts[symbol]); - - return symbol; -} -/** @} */ // group rangecoder - -static inline void update_rice(APERice *rice, unsigned int x) -{ - int lim = rice->k ? (1 << (rice->k + 4)) : 0; - rice->ksum += ((x + 1) / 2) - ((rice->ksum + 16) >> 5); - - if (rice->ksum < lim) - rice->k--; - else if (rice->ksum >= (1 << (rice->k + 5)) && rice->k < 24) - rice->k++; -} - -static inline int get_rice_ook(GetBitContext *gb, int k) -{ - unsigned int x; - - x = get_unary(gb, 1, get_bits_left(gb)); - - if (k) - x = (x << k) | get_bits(gb, k); - - return x; -} - -static inline int ape_decode_value_3860(APEContext *ctx, GetBitContext *gb, - APERice *rice) -{ - unsigned int x, overflow; - - overflow = get_unary(gb, 1, get_bits_left(gb)); - - if (ctx->fileversion > 3880) { - while (overflow >= 16) { - overflow -= 16; - rice->k += 4; - } - } - - if (!rice->k) - x = overflow; - else if(rice->k <= MIN_CACHE_BITS) { - x = (overflow << rice->k) + get_bits(gb, rice->k); - } else { - av_log(ctx->avctx, AV_LOG_ERROR, "Too many bits: %"PRIu32"\n", rice->k); - ctx->error = 1; - return AVERROR_INVALIDDATA; - } - rice->ksum += x - (rice->ksum + 8 >> 4); - if (rice->ksum < (rice->k ? 1 << (rice->k + 4) : 0)) - rice->k--; - else if (rice->ksum >= (1 << (rice->k + 5)) && rice->k < 24) - rice->k++; - - /* Convert to signed */ - return ((x >> 1) ^ ((x & 1) - 1)) + 1; -} - -static inline int ape_decode_value_3900(APEContext *ctx, APERice *rice) -{ - unsigned int x, overflow; - int tmpk; - - overflow = range_get_symbol(ctx, counts_3970, counts_diff_3970); - - if (overflow == (MODEL_ELEMENTS - 1)) { - tmpk = range_decode_bits(ctx, 5); - overflow = 0; - } else - tmpk = (rice->k < 1) ? 
0 : rice->k - 1; - - if (tmpk <= 16 || ctx->fileversion < 3910) { - if (tmpk > 23) { - av_log(ctx->avctx, AV_LOG_ERROR, "Too many bits: %d\n", tmpk); - return AVERROR_INVALIDDATA; - } - x = range_decode_bits(ctx, tmpk); - } else if (tmpk <= 31) { - x = range_decode_bits(ctx, 16); - x |= (range_decode_bits(ctx, tmpk - 16) << 16); - } else { - av_log(ctx->avctx, AV_LOG_ERROR, "Too many bits: %d\n", tmpk); - return AVERROR_INVALIDDATA; - } - x += overflow << tmpk; - - update_rice(rice, x); - - /* Convert to signed */ - return ((x >> 1) ^ ((x & 1) - 1)) + 1; -} - -static inline int ape_decode_value_3990(APEContext *ctx, APERice *rice) -{ - unsigned int x, overflow, pivot; - int base; - - pivot = FFMAX(rice->ksum >> 5, 1); - - overflow = range_get_symbol(ctx, counts_3980, counts_diff_3980); - - if (overflow == (MODEL_ELEMENTS - 1)) { - overflow = (unsigned)range_decode_bits(ctx, 16) << 16; - overflow |= range_decode_bits(ctx, 16); - } - - if (pivot < 0x10000) { - base = range_decode_culfreq(ctx, pivot); - range_decode_update(ctx, 1, base); - } else { - int base_hi = pivot, base_lo; - int bbits = 0; - - while (base_hi & ~0xFFFF) { - base_hi >>= 1; - bbits++; - } - base_hi = range_decode_culfreq(ctx, base_hi + 1); - range_decode_update(ctx, 1, base_hi); - base_lo = range_decode_culfreq(ctx, 1 << bbits); - range_decode_update(ctx, 1, base_lo); - - base = (base_hi << bbits) + base_lo; - } - - x = base + overflow * pivot; - - update_rice(rice, x); - - /* Convert to signed */ - return ((x >> 1) ^ ((x & 1) - 1)) + 1; -} - -static int get_k(int ksum) -{ - return av_log2(ksum) + !!ksum; -} - -static void decode_array_0000(APEContext *ctx, GetBitContext *gb, - int32_t *out, APERice *rice, int blockstodecode) -{ - int i; - unsigned ksummax, ksummin; - - rice->ksum = 0; - for (i = 0; i < FFMIN(blockstodecode, 5); i++) { - out[i] = get_rice_ook(&ctx->gb, 10); - rice->ksum += out[i]; - } - - if (blockstodecode <= 5) - goto end; - - rice->k = get_k(rice->ksum / 10); - if (rice->k >= 24) - return; - for (; i < FFMIN(blockstodecode, 64); i++) { - out[i] = get_rice_ook(&ctx->gb, rice->k); - rice->ksum += out[i]; - rice->k = get_k(rice->ksum / ((i + 1) * 2)); - if (rice->k >= 24) - return; - } - - if (blockstodecode <= 64) - goto end; - - rice->k = get_k(rice->ksum >> 7); - ksummax = 1 << rice->k + 7; - ksummin = rice->k ? (1 << rice->k + 6) : 0; - for (; i < blockstodecode; i++) { - if (get_bits_left(&ctx->gb) < 1) { - ctx->error = 1; - return; - } - out[i] = get_rice_ook(&ctx->gb, rice->k); - rice->ksum += out[i] - (unsigned)out[i - 64]; - while (rice->ksum < ksummin) { - rice->k--; - ksummin = rice->k ? ksummin >> 1 : 0; - ksummax >>= 1; - } - while (rice->ksum >= ksummax) { - rice->k++; - if (rice->k > 24) - return; - ksummax <<= 1; - ksummin = ksummin ? 
ksummin << 1 : 128; - } - } - -end: - for (i = 0; i < blockstodecode; i++) - out[i] = ((out[i] >> 1) ^ ((out[i] & 1) - 1)) + 1; -} - -static void entropy_decode_mono_0000(APEContext *ctx, int blockstodecode) -{ - decode_array_0000(ctx, &ctx->gb, ctx->decoded[0], &ctx->riceY, - blockstodecode); -} - -static void entropy_decode_stereo_0000(APEContext *ctx, int blockstodecode) -{ - decode_array_0000(ctx, &ctx->gb, ctx->decoded[0], &ctx->riceY, - blockstodecode); - decode_array_0000(ctx, &ctx->gb, ctx->decoded[1], &ctx->riceX, - blockstodecode); -} - -static void entropy_decode_mono_3860(APEContext *ctx, int blockstodecode) -{ - int32_t *decoded0 = ctx->decoded[0]; - - while (blockstodecode--) - *decoded0++ = ape_decode_value_3860(ctx, &ctx->gb, &ctx->riceY); -} - -static void entropy_decode_stereo_3860(APEContext *ctx, int blockstodecode) -{ - int32_t *decoded0 = ctx->decoded[0]; - int32_t *decoded1 = ctx->decoded[1]; - int blocks = blockstodecode; - - while (blockstodecode--) - *decoded0++ = ape_decode_value_3860(ctx, &ctx->gb, &ctx->riceY); - while (blocks--) - *decoded1++ = ape_decode_value_3860(ctx, &ctx->gb, &ctx->riceX); -} - -static void entropy_decode_mono_3900(APEContext *ctx, int blockstodecode) -{ - int32_t *decoded0 = ctx->decoded[0]; - - while (blockstodecode--) - *decoded0++ = ape_decode_value_3900(ctx, &ctx->riceY); -} - -static void entropy_decode_stereo_3900(APEContext *ctx, int blockstodecode) -{ - int32_t *decoded0 = ctx->decoded[0]; - int32_t *decoded1 = ctx->decoded[1]; - int blocks = blockstodecode; - - while (blockstodecode--) - *decoded0++ = ape_decode_value_3900(ctx, &ctx->riceY); - range_dec_normalize(ctx); - // because of some implementation peculiarities we need to backpedal here - ctx->ptr -= 1; - range_start_decoding(ctx); - while (blocks--) - *decoded1++ = ape_decode_value_3900(ctx, &ctx->riceX); -} - -static void entropy_decode_stereo_3930(APEContext *ctx, int blockstodecode) -{ - int32_t *decoded0 = ctx->decoded[0]; - int32_t *decoded1 = ctx->decoded[1]; - - while (blockstodecode--) { - *decoded0++ = ape_decode_value_3900(ctx, &ctx->riceY); - *decoded1++ = ape_decode_value_3900(ctx, &ctx->riceX); - } -} - -static void entropy_decode_mono_3990(APEContext *ctx, int blockstodecode) -{ - int32_t *decoded0 = ctx->decoded[0]; - - while (blockstodecode--) - *decoded0++ = ape_decode_value_3990(ctx, &ctx->riceY); -} - -static void entropy_decode_stereo_3990(APEContext *ctx, int blockstodecode) -{ - int32_t *decoded0 = ctx->decoded[0]; - int32_t *decoded1 = ctx->decoded[1]; - - while (blockstodecode--) { - *decoded0++ = ape_decode_value_3990(ctx, &ctx->riceY); - *decoded1++ = ape_decode_value_3990(ctx, &ctx->riceX); - } -} - -static int init_entropy_decoder(APEContext *ctx) -{ - /* Read the CRC */ - if (ctx->fileversion >= 3900) { - if (ctx->data_end - ctx->ptr < 6) - return AVERROR_INVALIDDATA; - ctx->CRC = bytestream_get_be32(&ctx->ptr); - } else { - ctx->CRC = get_bits_long(&ctx->gb, 32); - } - - /* Read the frame flags if they exist */ - ctx->frameflags = 0; - ctx->CRC_state = UINT32_MAX; - if ((ctx->fileversion > 3820) && (ctx->CRC & 0x80000000)) { - ctx->CRC &= ~0x80000000; - - if (ctx->data_end - ctx->ptr < 6) - return AVERROR_INVALIDDATA; - ctx->frameflags = bytestream_get_be32(&ctx->ptr); - } - - /* Initialize the rice structs */ - ctx->riceX.k = 10; - ctx->riceX.ksum = (1 << ctx->riceX.k) * 16; - ctx->riceY.k = 10; - ctx->riceY.ksum = (1 << ctx->riceY.k) * 16; - - if (ctx->fileversion >= 3900) { - /* The first 8 bits of input are ignored. 
*/ - ctx->ptr++; - - range_start_decoding(ctx); - } - - return 0; -} - -static const int32_t initial_coeffs_fast_3320[1] = { - 375, -}; - -static const int32_t initial_coeffs_a_3800[3] = { - 64, 115, 64, -}; - -static const int32_t initial_coeffs_b_3800[2] = { - 740, 0 -}; - -static const int32_t initial_coeffs_3930[4] = { - 360, 317, -109, 98 -}; - -static const int64_t initial_coeffs_3930_64bit[4] = { - 360, 317, -109, 98 -}; - -static void init_predictor_decoder(APEContext *ctx) -{ - APEPredictor *p = &ctx->predictor; - APEPredictor64 *p64 = &ctx->predictor64; - - /* Zero the history buffers */ - memset(p->historybuffer, 0, PREDICTOR_SIZE * sizeof(*p->historybuffer)); - memset(p64->historybuffer, 0, PREDICTOR_SIZE * sizeof(*p64->historybuffer)); - p->buf = p->historybuffer; - p64->buf = p64->historybuffer; - - /* Initialize and zero the coefficients */ - if (ctx->fileversion < 3930) { - if (ctx->compression_level == COMPRESSION_LEVEL_FAST) { - memcpy(p->coeffsA[0], initial_coeffs_fast_3320, - sizeof(initial_coeffs_fast_3320)); - memcpy(p->coeffsA[1], initial_coeffs_fast_3320, - sizeof(initial_coeffs_fast_3320)); - } else { - memcpy(p->coeffsA[0], initial_coeffs_a_3800, - sizeof(initial_coeffs_a_3800)); - memcpy(p->coeffsA[1], initial_coeffs_a_3800, - sizeof(initial_coeffs_a_3800)); - } - } else { - memcpy(p->coeffsA[0], initial_coeffs_3930, sizeof(initial_coeffs_3930)); - memcpy(p->coeffsA[1], initial_coeffs_3930, sizeof(initial_coeffs_3930)); - memcpy(p64->coeffsA[0], initial_coeffs_3930_64bit, sizeof(initial_coeffs_3930_64bit)); - memcpy(p64->coeffsA[1], initial_coeffs_3930_64bit, sizeof(initial_coeffs_3930_64bit)); - } - memset(p->coeffsB, 0, sizeof(p->coeffsB)); - memset(p64->coeffsB, 0, sizeof(p64->coeffsB)); - if (ctx->fileversion < 3930) { - memcpy(p->coeffsB[0], initial_coeffs_b_3800, - sizeof(initial_coeffs_b_3800)); - memcpy(p->coeffsB[1], initial_coeffs_b_3800, - sizeof(initial_coeffs_b_3800)); - } - - p->filterA[0] = p->filterA[1] = 0; - p->filterB[0] = p->filterB[1] = 0; - p->lastA[0] = p->lastA[1] = 0; - - p64->filterA[0] = p64->filterA[1] = 0; - p64->filterB[0] = p64->filterB[1] = 0; - p64->lastA[0] = p64->lastA[1] = 0; - - p->sample_pos = 0; - - p64->sample_pos = 0; -} - -/** Get inverse sign of integer (-1 for positive, 1 for negative and 0 for zero) */ -static inline int APESIGN(int32_t x) { - return (x < 0) - (x > 0); -} - -static av_always_inline int filter_fast_3320(APEPredictor *p, - const int decoded, const int filter, - const int delayA) -{ - int32_t predictionA; - - p->buf[delayA] = p->lastA[filter]; - if (p->sample_pos < 3) { - p->lastA[filter] = decoded; - p->filterA[filter] = decoded; - return decoded; - } - - predictionA = p->buf[delayA] * 2U - p->buf[delayA - 1]; - p->lastA[filter] = decoded + (unsigned)((int32_t)(predictionA * p->coeffsA[filter][0]) >> 9); - - if ((decoded ^ predictionA) > 0) - p->coeffsA[filter][0]++; - else - p->coeffsA[filter][0]--; - - p->filterA[filter] += (unsigned)p->lastA[filter]; - - return p->filterA[filter]; -} - -static av_always_inline int filter_3800(APEPredictor *p, - const unsigned decoded, const int filter, - const int delayA, const int delayB, - const int start, const int shift) -{ - int32_t predictionA, predictionB, sign; - int32_t d0, d1, d2, d3, d4; - - p->buf[delayA] = p->lastA[filter]; - p->buf[delayB] = p->filterB[filter]; - if (p->sample_pos < start) { - predictionA = decoded + p->filterA[filter]; - p->lastA[filter] = decoded; - p->filterB[filter] = decoded; - p->filterA[filter] = predictionA; - return predictionA; 
- } - d2 = p->buf[delayA]; - d1 = (p->buf[delayA] - (unsigned)p->buf[delayA - 1]) * 2; - d0 = p->buf[delayA] + ((p->buf[delayA - 2] - (unsigned)p->buf[delayA - 1]) * 8); - d3 = p->buf[delayB] * 2U - p->buf[delayB - 1]; - d4 = p->buf[delayB]; - - predictionA = d0 * p->coeffsA[filter][0] + - d1 * p->coeffsA[filter][1] + - d2 * p->coeffsA[filter][2]; - - sign = APESIGN(decoded); - p->coeffsA[filter][0] += (((d0 >> 30) & 2) - 1) * sign; - p->coeffsA[filter][1] += (((d1 >> 28) & 8) - 4) * sign; - p->coeffsA[filter][2] += (((d2 >> 28) & 8) - 4) * sign; - - predictionB = d3 * p->coeffsB[filter][0] - - d4 * p->coeffsB[filter][1]; - p->lastA[filter] = decoded + (predictionA >> 11); - sign = APESIGN(p->lastA[filter]); - p->coeffsB[filter][0] += (((d3 >> 29) & 4) - 2) * sign; - p->coeffsB[filter][1] -= (((d4 >> 30) & 2) - 1) * sign; - - p->filterB[filter] = p->lastA[filter] + (unsigned)(predictionB >> shift); - p->filterA[filter] = p->filterB[filter] + (unsigned)((int)(p->filterA[filter] * 31U) >> 5); - - return p->filterA[filter]; -} - -static void long_filter_high_3800(int32_t *buffer, int order, int shift, int length) -{ - int i, j; - int32_t dotprod, sign; - int32_t coeffs[256], delay[256+256], *delayp = delay; - - if (order >= length) - return; - - memset(coeffs, 0, order * sizeof(*coeffs)); - for (i = 0; i < order; i++) - delay[i] = buffer[i]; - for (i = order; i < length; i++) { - dotprod = 0; - sign = APESIGN(buffer[i]); - if (sign == 1) { - for (j = 0; j < order; j++) { - dotprod += delayp[j] * (unsigned)coeffs[j]; - coeffs[j] += (delayp[j] >> 31) | 1; - } - } else if (sign == -1) { - for (j = 0; j < order; j++) { - dotprod += delayp[j] * (unsigned)coeffs[j]; - coeffs[j] -= (delayp[j] >> 31) | 1; - } - } else { - for (j = 0; j < order; j++) { - dotprod += delayp[j] * (unsigned)coeffs[j]; - } - } - buffer[i] -= (unsigned)(dotprod >> shift); - delayp ++; - delayp[order - 1] = buffer[i]; - if (delayp - delay == 256) { - memcpy(delay, delayp, sizeof(*delay)*256); - delayp = delay; - } - } -} - -static void long_filter_ehigh_3830(int32_t *buffer, int length) -{ - int i, j; - int32_t dotprod, sign; - int32_t delay[8] = { 0 }; - uint32_t coeffs[8] = { 0 }; - - for (i = 0; i < length; i++) { - dotprod = 0; - sign = APESIGN(buffer[i]); - for (j = 7; j >= 0; j--) { - dotprod += delay[j] * coeffs[j]; - coeffs[j] += ((delay[j] >> 31) | 1) * sign; - } - for (j = 7; j > 0; j--) - delay[j] = delay[j - 1]; - delay[0] = buffer[i]; - buffer[i] -= (unsigned)(dotprod >> 9); - } -} - -static void predictor_decode_stereo_3800(APEContext *ctx, int count) -{ - APEPredictor *p = &ctx->predictor; - int32_t *decoded0 = ctx->decoded[0]; - int32_t *decoded1 = ctx->decoded[1]; - int start = 4, shift = 10; - - if (ctx->compression_level == COMPRESSION_LEVEL_HIGH) { - start = 16; - long_filter_high_3800(decoded0, 16, 9, count); - long_filter_high_3800(decoded1, 16, 9, count); - } else if (ctx->compression_level == COMPRESSION_LEVEL_EXTRA_HIGH) { - int order = 128, shift2 = 11; - - if (ctx->fileversion >= 3830) { - order <<= 1; - shift++; - shift2++; - long_filter_ehigh_3830(decoded0 + order, count - order); - long_filter_ehigh_3830(decoded1 + order, count - order); - } - start = order; - long_filter_high_3800(decoded0, order, shift2, count); - long_filter_high_3800(decoded1, order, shift2, count); - } - - while (count--) { - int X = *decoded0, Y = *decoded1; - if (ctx->compression_level == COMPRESSION_LEVEL_FAST) { - *decoded0 = filter_fast_3320(p, Y, 0, YDELAYA); - decoded0++; - *decoded1 = filter_fast_3320(p, X, 1, 
XDELAYA); - decoded1++; - } else { - *decoded0 = filter_3800(p, Y, 0, YDELAYA, YDELAYB, - start, shift); - decoded0++; - *decoded1 = filter_3800(p, X, 1, XDELAYA, XDELAYB, - start, shift); - decoded1++; - } - - /* Combined */ - p->buf++; - p->sample_pos++; - - /* Have we filled the history buffer? */ - if (p->buf == p->historybuffer + HISTORY_SIZE) { - memmove(p->historybuffer, p->buf, - PREDICTOR_SIZE * sizeof(*p->historybuffer)); - p->buf = p->historybuffer; - } - } -} - -static void predictor_decode_mono_3800(APEContext *ctx, int count) -{ - APEPredictor *p = &ctx->predictor; - int32_t *decoded0 = ctx->decoded[0]; - int start = 4, shift = 10; - - if (ctx->compression_level == COMPRESSION_LEVEL_HIGH) { - start = 16; - long_filter_high_3800(decoded0, 16, 9, count); - } else if (ctx->compression_level == COMPRESSION_LEVEL_EXTRA_HIGH) { - int order = 128, shift2 = 11; - - if (ctx->fileversion >= 3830) { - order <<= 1; - shift++; - shift2++; - long_filter_ehigh_3830(decoded0 + order, count - order); - } - start = order; - long_filter_high_3800(decoded0, order, shift2, count); - } - - while (count--) { - if (ctx->compression_level == COMPRESSION_LEVEL_FAST) { - *decoded0 = filter_fast_3320(p, *decoded0, 0, YDELAYA); - decoded0++; - } else { - *decoded0 = filter_3800(p, *decoded0, 0, YDELAYA, YDELAYB, - start, shift); - decoded0++; - } - - /* Combined */ - p->buf++; - p->sample_pos++; - - /* Have we filled the history buffer? */ - if (p->buf == p->historybuffer + HISTORY_SIZE) { - memmove(p->historybuffer, p->buf, - PREDICTOR_SIZE * sizeof(*p->historybuffer)); - p->buf = p->historybuffer; - } - } -} - -static av_always_inline int predictor_update_3930(APEPredictor *p, - const int decoded, const int filter, - const int delayA) -{ - int32_t predictionA, sign; - uint32_t d0, d1, d2, d3; - - p->buf[delayA] = p->lastA[filter]; - d0 = p->buf[delayA ]; - d1 = p->buf[delayA ] - (unsigned)p->buf[delayA - 1]; - d2 = p->buf[delayA - 1] - (unsigned)p->buf[delayA - 2]; - d3 = p->buf[delayA - 2] - (unsigned)p->buf[delayA - 3]; - - predictionA = d0 * p->coeffsA[filter][0] + - d1 * p->coeffsA[filter][1] + - d2 * p->coeffsA[filter][2] + - d3 * p->coeffsA[filter][3]; - - p->lastA[filter] = decoded + (predictionA >> 9); - p->filterA[filter] = p->lastA[filter] + ((int)(p->filterA[filter] * 31U) >> 5); - - sign = APESIGN(decoded); - p->coeffsA[filter][0] += (((int32_t)d0 < 0) * 2 - 1) * sign; - p->coeffsA[filter][1] += (((int32_t)d1 < 0) * 2 - 1) * sign; - p->coeffsA[filter][2] += (((int32_t)d2 < 0) * 2 - 1) * sign; - p->coeffsA[filter][3] += (((int32_t)d3 < 0) * 2 - 1) * sign; - - return p->filterA[filter]; -} - -static void predictor_decode_stereo_3930(APEContext *ctx, int count) -{ - APEPredictor *p = &ctx->predictor; - int32_t *decoded0 = ctx->decoded[0]; - int32_t *decoded1 = ctx->decoded[1]; - - ape_apply_filters(ctx, ctx->decoded[0], ctx->decoded[1], count); - - while (count--) { - /* Predictor Y */ - int Y = *decoded1, X = *decoded0; - *decoded0 = predictor_update_3930(p, Y, 0, YDELAYA); - decoded0++; - *decoded1 = predictor_update_3930(p, X, 1, XDELAYA); - decoded1++; - - /* Combined */ - p->buf++; - - /* Have we filled the history buffer? 
*/ - if (p->buf == p->historybuffer + HISTORY_SIZE) { - memmove(p->historybuffer, p->buf, - PREDICTOR_SIZE * sizeof(*p->historybuffer)); - p->buf = p->historybuffer; - } - } -} - -static void predictor_decode_mono_3930(APEContext *ctx, int count) -{ - APEPredictor *p = &ctx->predictor; - int32_t *decoded0 = ctx->decoded[0]; - - ape_apply_filters(ctx, ctx->decoded[0], NULL, count); - - while (count--) { - *decoded0 = predictor_update_3930(p, *decoded0, 0, YDELAYA); - decoded0++; - - p->buf++; - - /* Have we filled the history buffer? */ - if (p->buf == p->historybuffer + HISTORY_SIZE) { - memmove(p->historybuffer, p->buf, - PREDICTOR_SIZE * sizeof(*p->historybuffer)); - p->buf = p->historybuffer; - } - } -} - -static av_always_inline int predictor_update_filter(APEPredictor64 *p, - const int decoded, const int filter, - const int delayA, const int delayB, - const int adaptA, const int adaptB) -{ - int64_t predictionA, predictionB; - int32_t sign; - - p->buf[delayA] = p->lastA[filter]; - p->buf[adaptA] = APESIGN(p->buf[delayA]); - p->buf[delayA - 1] = p->buf[delayA] - (uint64_t)p->buf[delayA - 1]; - p->buf[adaptA - 1] = APESIGN(p->buf[delayA - 1]); - - predictionA = p->buf[delayA ] * p->coeffsA[filter][0] + - p->buf[delayA - 1] * p->coeffsA[filter][1] + - p->buf[delayA - 2] * p->coeffsA[filter][2] + - p->buf[delayA - 3] * p->coeffsA[filter][3]; - - /* Apply a scaled first-order filter compression */ - p->buf[delayB] = p->filterA[filter ^ 1] - ((int64_t)(p->filterB[filter] * 31ULL) >> 5); - p->buf[adaptB] = APESIGN(p->buf[delayB]); - p->buf[delayB - 1] = p->buf[delayB] - (uint64_t)p->buf[delayB - 1]; - p->buf[adaptB - 1] = APESIGN(p->buf[delayB - 1]); - p->filterB[filter] = p->filterA[filter ^ 1]; - - predictionB = p->buf[delayB ] * p->coeffsB[filter][0] + - p->buf[delayB - 1] * p->coeffsB[filter][1] + - p->buf[delayB - 2] * p->coeffsB[filter][2] + - p->buf[delayB - 3] * p->coeffsB[filter][3] + - p->buf[delayB - 4] * p->coeffsB[filter][4]; - - p->lastA[filter] = decoded + ((int64_t)((uint64_t)predictionA + (predictionB >> 1)) >> 10); - p->filterA[filter] = p->lastA[filter] + ((int64_t)(p->filterA[filter] * 31ULL) >> 5); - - sign = APESIGN(decoded); - p->coeffsA[filter][0] += p->buf[adaptA ] * sign; - p->coeffsA[filter][1] += p->buf[adaptA - 1] * sign; - p->coeffsA[filter][2] += p->buf[adaptA - 2] * sign; - p->coeffsA[filter][3] += p->buf[adaptA - 3] * sign; - p->coeffsB[filter][0] += p->buf[adaptB ] * sign; - p->coeffsB[filter][1] += p->buf[adaptB - 1] * sign; - p->coeffsB[filter][2] += p->buf[adaptB - 2] * sign; - p->coeffsB[filter][3] += p->buf[adaptB - 3] * sign; - p->coeffsB[filter][4] += p->buf[adaptB - 4] * sign; - - return p->filterA[filter]; -} - -static void predictor_decode_stereo_3950(APEContext *ctx, int count) -{ - APEPredictor64 *p = &ctx->predictor64; - int32_t *decoded0 = ctx->decoded[0]; - int32_t *decoded1 = ctx->decoded[1]; - - ape_apply_filters(ctx, ctx->decoded[0], ctx->decoded[1], count); - - while (count--) { - /* Predictor Y */ - *decoded0 = predictor_update_filter(p, *decoded0, 0, YDELAYA, YDELAYB, - YADAPTCOEFFSA, YADAPTCOEFFSB); - decoded0++; - *decoded1 = predictor_update_filter(p, *decoded1, 1, XDELAYA, XDELAYB, - XADAPTCOEFFSA, XADAPTCOEFFSB); - decoded1++; - - /* Combined */ - p->buf++; - - /* Have we filled the history buffer? 
*/ - if (p->buf == p->historybuffer + HISTORY_SIZE) { - memmove(p->historybuffer, p->buf, - PREDICTOR_SIZE * sizeof(*p->historybuffer)); - p->buf = p->historybuffer; - } - } -} - -static void predictor_decode_mono_3950(APEContext *ctx, int count) -{ - APEPredictor64 *p = &ctx->predictor64; - int32_t *decoded0 = ctx->decoded[0]; - int32_t predictionA, currentA, A, sign; - - ape_apply_filters(ctx, ctx->decoded[0], NULL, count); - - currentA = p->lastA[0]; - - while (count--) { - A = *decoded0; - - p->buf[YDELAYA] = currentA; - p->buf[YDELAYA - 1] = p->buf[YDELAYA] - (uint64_t)p->buf[YDELAYA - 1]; - - predictionA = p->buf[YDELAYA ] * p->coeffsA[0][0] + - p->buf[YDELAYA - 1] * p->coeffsA[0][1] + - p->buf[YDELAYA - 2] * p->coeffsA[0][2] + - p->buf[YDELAYA - 3] * p->coeffsA[0][3]; - - currentA = A + (uint64_t)(predictionA >> 10); - - p->buf[YADAPTCOEFFSA] = APESIGN(p->buf[YDELAYA ]); - p->buf[YADAPTCOEFFSA - 1] = APESIGN(p->buf[YDELAYA - 1]); - - sign = APESIGN(A); - p->coeffsA[0][0] += p->buf[YADAPTCOEFFSA ] * sign; - p->coeffsA[0][1] += p->buf[YADAPTCOEFFSA - 1] * sign; - p->coeffsA[0][2] += p->buf[YADAPTCOEFFSA - 2] * sign; - p->coeffsA[0][3] += p->buf[YADAPTCOEFFSA - 3] * sign; - - p->buf++; - - /* Have we filled the history buffer? */ - if (p->buf == p->historybuffer + HISTORY_SIZE) { - memmove(p->historybuffer, p->buf, - PREDICTOR_SIZE * sizeof(*p->historybuffer)); - p->buf = p->historybuffer; - } - - p->filterA[0] = currentA + (uint64_t)((int64_t)(p->filterA[0] * 31U) >> 5); - *(decoded0++) = p->filterA[0]; - } - - p->lastA[0] = currentA; -} - -static void do_init_filter(APEFilter *f, int16_t *buf, int order) -{ - f->coeffs = buf; - f->historybuffer = buf + order; - f->delay = f->historybuffer + order * 2; - f->adaptcoeffs = f->historybuffer + order; - - memset(f->historybuffer, 0, (order * 2) * sizeof(*f->historybuffer)); - memset(f->coeffs, 0, order * sizeof(*f->coeffs)); - f->avg = 0; -} - -static void init_filter(APEContext *ctx, APEFilter *f, int16_t *buf, int order) -{ - do_init_filter(&f[0], buf, order); - do_init_filter(&f[1], buf + order * 3 + HISTORY_SIZE, order); -} - -static void do_apply_filter(APEContext *ctx, int version, APEFilter *f, - int32_t *data, int count, int order, int fracbits) -{ - int res; - unsigned absres; - - while (count--) { - /* round fixedpoint scalar product */ - res = ctx->adsp.scalarproduct_and_madd_int16(f->coeffs, - f->delay - order, - f->adaptcoeffs - order, - order, APESIGN(*data)); - res = (int64_t)(res + (1LL << (fracbits - 1))) >> fracbits; - res += (unsigned)*data; - *data++ = res; - - /* Update the output history */ - *f->delay++ = av_clip_int16(res); - - if (version < 3980) { - /* Version ??? to < 3.98 files (untested) */ - f->adaptcoeffs[0] = (res == 0) ? 0 : ((res >> 28) & 8) - 4; - f->adaptcoeffs[-4] >>= 1; - f->adaptcoeffs[-8] >>= 1; - } else { - /* Version 3.98 and later files */ - - /* Update the adaption coefficients */ - absres = FFABSU(res); - if (absres) - *f->adaptcoeffs = APESIGN(res) * - (8 << ((absres > f->avg * 3LL) + (absres > (f->avg + f->avg / 3)))); - /* equivalent to the following code - if (absres <= f->avg * 4 / 3) - *f->adaptcoeffs = APESIGN(res) * 8; - else if (absres <= f->avg * 3) - *f->adaptcoeffs = APESIGN(res) * 16; - else - *f->adaptcoeffs = APESIGN(res) * 32; - */ - else - *f->adaptcoeffs = 0; - - f->avg += (int)(absres - (unsigned)f->avg) / 16; - - f->adaptcoeffs[-1] >>= 1; - f->adaptcoeffs[-2] >>= 1; - f->adaptcoeffs[-8] >>= 1; - } - - f->adaptcoeffs++; - - /* Have we filled the history buffer? 
*/ - if (f->delay == f->historybuffer + HISTORY_SIZE + (order * 2)) { - memmove(f->historybuffer, f->delay - (order * 2), - (order * 2) * sizeof(*f->historybuffer)); - f->delay = f->historybuffer + order * 2; - f->adaptcoeffs = f->historybuffer + order; - } - } -} - -static void apply_filter(APEContext *ctx, APEFilter *f, - int32_t *data0, int32_t *data1, - int count, int order, int fracbits) -{ - do_apply_filter(ctx, ctx->fileversion, &f[0], data0, count, order, fracbits); - if (data1) - do_apply_filter(ctx, ctx->fileversion, &f[1], data1, count, order, fracbits); -} - -static void ape_apply_filters(APEContext *ctx, int32_t *decoded0, - int32_t *decoded1, int count) -{ - int i; - - for (i = 0; i < APE_FILTER_LEVELS; i++) { - if (!ape_filter_orders[ctx->fset][i]) - break; - apply_filter(ctx, ctx->filters[i], decoded0, decoded1, count, - ape_filter_orders[ctx->fset][i], - ape_filter_fracbits[ctx->fset][i]); - } -} - -static int init_frame_decoder(APEContext *ctx) -{ - int i, ret; - if ((ret = init_entropy_decoder(ctx)) < 0) - return ret; - init_predictor_decoder(ctx); - - for (i = 0; i < APE_FILTER_LEVELS; i++) { - if (!ape_filter_orders[ctx->fset][i]) - break; - init_filter(ctx, ctx->filters[i], ctx->filterbuf[i], - ape_filter_orders[ctx->fset][i]); - } - return 0; -} - -static void ape_unpack_mono(APEContext *ctx, int count) -{ - if (ctx->frameflags & APE_FRAMECODE_STEREO_SILENCE) { - /* We are pure silence, so we're done. */ - av_log(ctx->avctx, AV_LOG_DEBUG, "pure silence mono\n"); - return; - } - - ctx->entropy_decode_mono(ctx, count); - if (ctx->error) - return; - - /* Now apply the predictor decoding */ - ctx->predictor_decode_mono(ctx, count); - - /* Pseudo-stereo - just copy left channel to right channel */ - if (ctx->channels == 2) { - memcpy(ctx->decoded[1], ctx->decoded[0], count * sizeof(*ctx->decoded[1])); - } -} - -static void ape_unpack_stereo(APEContext *ctx, int count) -{ - unsigned left, right; - int32_t *decoded0 = ctx->decoded[0]; - int32_t *decoded1 = ctx->decoded[1]; - - if ((ctx->frameflags & APE_FRAMECODE_STEREO_SILENCE) == APE_FRAMECODE_STEREO_SILENCE) { - /* We are pure silence, so we're done. */ - av_log(ctx->avctx, AV_LOG_DEBUG, "pure silence stereo\n"); - return; - } - - ctx->entropy_decode_stereo(ctx, count); - if (ctx->error) - return; - - /* Now apply the predictor decoding */ - ctx->predictor_decode_stereo(ctx, count); - - /* Decorrelate and scale to output depth */ - while (count--) { - left = *decoded1 - (unsigned)(*decoded0 / 2); - right = left + *decoded0; - - *(decoded0++) = left; - *(decoded1++) = right; - } -} - -static int ape_decode_frame(AVCodecContext *avctx, AVFrame *frame, - int *got_frame_ptr, AVPacket *avpkt) -{ - const uint8_t *buf = avpkt->data; - APEContext *s = avctx->priv_data; - uint8_t *sample8; - int16_t *sample16; - int32_t *sample24; - int i, ch, ret; - int blockstodecode; - uint64_t decoded_buffer_size; - - /* this should never be negative, but bad things will happen if it is, so - check it just to make sure. */ - av_assert0(s->samples >= 0); - - if(!s->samples){ - uint32_t nblocks, offset; - int buf_size; - - if (!avpkt->size) { - *got_frame_ptr = 0; - return 0; - } - if (avpkt->size < 8) { - av_log(avctx, AV_LOG_ERROR, "Packet is too small\n"); - return AVERROR_INVALIDDATA; - } - buf_size = avpkt->size & ~3; - if (buf_size != avpkt->size) { - av_log(avctx, AV_LOG_WARNING, "packet size is not a multiple of 4. 
" - "extra bytes at the end will be skipped.\n"); - } - if (s->fileversion < 3950) // previous versions overread two bytes - buf_size += 2; - av_fast_padded_malloc(&s->data, &s->data_size, buf_size); - if (!s->data) - return AVERROR(ENOMEM); - s->bdsp.bswap_buf((uint32_t *) s->data, (const uint32_t *) buf, - buf_size >> 2); - memset(s->data + (buf_size & ~3), 0, buf_size & 3); - s->ptr = s->data; - s->data_end = s->data + buf_size; - - nblocks = bytestream_get_be32(&s->ptr); - offset = bytestream_get_be32(&s->ptr); - if (s->fileversion >= 3900) { - if (offset > 3) { - av_log(avctx, AV_LOG_ERROR, "Incorrect offset passed\n"); - av_freep(&s->data); - s->data_size = 0; - return AVERROR_INVALIDDATA; - } - if (s->data_end - s->ptr < offset) { - av_log(avctx, AV_LOG_ERROR, "Packet is too small\n"); - return AVERROR_INVALIDDATA; - } - s->ptr += offset; - } else { - if ((ret = init_get_bits8(&s->gb, s->ptr, s->data_end - s->ptr)) < 0) - return ret; - if (s->fileversion > 3800) - skip_bits_long(&s->gb, offset * 8); - else - skip_bits_long(&s->gb, offset); - } - - if (!nblocks || nblocks > INT_MAX / 2 / sizeof(*s->decoded_buffer) - 8) { - av_log(avctx, AV_LOG_ERROR, "Invalid sample count: %"PRIu32".\n", - nblocks); - return AVERROR_INVALIDDATA; - } - - /* Initialize the frame decoder */ - if (init_frame_decoder(s) < 0) { - av_log(avctx, AV_LOG_ERROR, "Error reading frame header\n"); - return AVERROR_INVALIDDATA; - } - s->samples = nblocks; - } - - if (!s->data) { - *got_frame_ptr = 0; - return avpkt->size; - } - - blockstodecode = FFMIN(s->blocks_per_loop, s->samples); - // for old files coefficients were not interleaved, - // so we need to decode all of them at once - if (s->fileversion < 3930) - blockstodecode = s->samples; - - /* reallocate decoded sample buffer if needed */ - decoded_buffer_size = 2LL * FFALIGN(blockstodecode, 8) * sizeof(*s->decoded_buffer); - av_assert0(decoded_buffer_size <= INT_MAX); - - /* get output buffer */ - frame->nb_samples = blockstodecode; - if ((ret = ff_get_buffer(avctx, frame, 0)) < 0) { - s->samples=0; - return ret; - } - - av_fast_malloc(&s->decoded_buffer, &s->decoded_size, decoded_buffer_size); - if (!s->decoded_buffer) - return AVERROR(ENOMEM); - memset(s->decoded_buffer, 0, decoded_buffer_size); - s->decoded[0] = s->decoded_buffer; - s->decoded[1] = s->decoded_buffer + FFALIGN(blockstodecode, 8); - - s->error=0; - - if ((s->channels == 1) || (s->frameflags & APE_FRAMECODE_PSEUDO_STEREO)) - ape_unpack_mono(s, blockstodecode); - else - ape_unpack_stereo(s, blockstodecode); - - if (s->error) { - s->samples=0; - av_log(avctx, AV_LOG_ERROR, "Error decoding frame\n"); - return AVERROR_INVALIDDATA; - } - - switch (s->bps) { - case 8: - for (ch = 0; ch < s->channels; ch++) { - sample8 = (uint8_t *)frame->data[ch]; - for (i = 0; i < blockstodecode; i++) - *sample8++ = (s->decoded[ch][i] + 0x80U) & 0xff; - } - break; - case 16: - for (ch = 0; ch < s->channels; ch++) { - sample16 = (int16_t *)frame->data[ch]; - for (i = 0; i < blockstodecode; i++) - *sample16++ = s->decoded[ch][i]; - } - break; - case 24: - for (ch = 0; ch < s->channels; ch++) { - sample24 = (int32_t *)frame->data[ch]; - for (i = 0; i < blockstodecode; i++) - *sample24++ = s->decoded[ch][i] * 256U; - } - break; - } - - s->samples -= blockstodecode; - - if (avctx->err_recognition & AV_EF_CRCCHECK && - s->fileversion >= 3900 && s->bps < 24) { - uint32_t crc = s->CRC_state; - const AVCRC *crc_tab = av_crc_get_table(AV_CRC_32_IEEE_LE); - for (i = 0; i < blockstodecode; i++) { - for (ch = 0; ch < 
s->channels; ch++) { - uint8_t *smp = frame->data[ch] + (i*(s->bps >> 3)); - crc = av_crc(crc_tab, crc, smp, s->bps >> 3); - } - } - - if (!s->samples && (~crc >> 1) ^ s->CRC) { - av_log(avctx, AV_LOG_ERROR, "CRC mismatch! Previously decoded " - "frames may have been affected as well.\n"); - if (avctx->err_recognition & AV_EF_EXPLODE) - return AVERROR_INVALIDDATA; - } - - s->CRC_state = crc; - } - - *got_frame_ptr = 1; - - return !s->samples ? avpkt->size : 0; -} - -static void ape_flush(AVCodecContext *avctx) -{ - APEContext *s = avctx->priv_data; - s->samples= 0; -} - -#define OFFSET(x) offsetof(APEContext, x) -#define PAR (AV_OPT_FLAG_DECODING_PARAM | AV_OPT_FLAG_AUDIO_PARAM) -static const AVOption options[] = { - { "max_samples", "maximum number of samples decoded per call", OFFSET(blocks_per_loop), AV_OPT_TYPE_INT, { .i64 = 4608 }, 1, INT_MAX, PAR, "max_samples" }, - { "all", "no maximum. decode all samples for each packet at once", 0, AV_OPT_TYPE_CONST, { .i64 = INT_MAX }, INT_MIN, INT_MAX, PAR, "max_samples" }, - { NULL}, -}; - -static const AVClass ape_decoder_class = { - .class_name = "APE decoder", - .item_name = av_default_item_name, - .option = options, - .version = LIBAVUTIL_VERSION_INT, -}; - -const FFCodec ff_ape_decoder = { - .p.name = "ape", - CODEC_LONG_NAME("Monkey's Audio"), - .p.type = AVMEDIA_TYPE_AUDIO, - .p.id = AV_CODEC_ID_APE, - .priv_data_size = sizeof(APEContext), - .init = ape_decode_init, - .close = ape_decode_close, - FF_CODEC_DECODE_CB(ape_decode_frame), - .p.capabilities = AV_CODEC_CAP_SUBFRAMES | AV_CODEC_CAP_DELAY | - AV_CODEC_CAP_DR1, - .caps_internal = FF_CODEC_CAP_INIT_CLEANUP, - .flush = ape_flush, - .p.sample_fmts = (const enum AVSampleFormat[]) { AV_SAMPLE_FMT_U8P, - AV_SAMPLE_FMT_S16P, - AV_SAMPLE_FMT_S32P, - AV_SAMPLE_FMT_NONE }, - .p.priv_class = &ape_decoder_class, -}; diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h2645_vui.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h2645_vui.h deleted file mode 100644 index 638da7c36672ecebe2462bcd6f9105e4f19abca0..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h2645_vui.h +++ /dev/null @@ -1,49 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_H2645_VUI_H -#define AVCODEC_H2645_VUI_H - -#include "libavutil/pixfmt.h" -#include "libavutil/rational.h" - -#include "get_bits.h" - -typedef struct H2645VUI { - AVRational sar; - - int overscan_info_present_flag; - int overscan_appropriate_flag; - - int video_signal_type_present_flag; - int video_format; - int video_full_range_flag; - int colour_description_present_flag; - enum AVColorPrimaries colour_primaries; - enum AVColorTransferCharacteristic transfer_characteristics; - enum AVColorSpace matrix_coeffs; - - int chroma_loc_info_present_flag; - int chroma_sample_loc_type_top_field; - int chroma_sample_loc_type_bottom_field; - enum AVChromaLocation chroma_location; -} H2645VUI; - -void ff_h2645_decode_common_vui_params(GetBitContext *gb, H2645VUI *vui, void *logctx); - -#endif /* AVCODEC_H2645_VUI_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/vp9dsp_init_mips.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/vp9dsp_init_mips.c deleted file mode 100644 index 27c8ec9d8c43a6ae1958c5775e926146703dbaf7..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/mips/vp9dsp_init_mips.c +++ /dev/null @@ -1,227 +0,0 @@ -/* - * Copyright (c) 2015 Shivraj Patil (Shivraj.Patil@imgtec.com) - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include "libavutil/attributes.h" -#include "libavutil/mips/cpu.h" -#include "config.h" -#include "libavutil/common.h" -#include "libavcodec/vp9dsp.h" -#include "vp9dsp_mips.h" - -#if HAVE_MSA -static av_cold void vp9dsp_intrapred_init_msa(VP9DSPContext *dsp, int bpp) -{ - if (bpp == 8) { -#define init_intra_pred_msa(tx, sz) \ - dsp->intra_pred[tx][VERT_PRED] = ff_vert_##sz##_msa; \ - dsp->intra_pred[tx][HOR_PRED] = ff_hor_##sz##_msa; \ - dsp->intra_pred[tx][DC_PRED] = ff_dc_##sz##_msa; \ - dsp->intra_pred[tx][LEFT_DC_PRED] = ff_dc_left_##sz##_msa; \ - dsp->intra_pred[tx][TOP_DC_PRED] = ff_dc_top_##sz##_msa; \ - dsp->intra_pred[tx][DC_128_PRED] = ff_dc_128_##sz##_msa; \ - dsp->intra_pred[tx][DC_127_PRED] = ff_dc_127_##sz##_msa; \ - dsp->intra_pred[tx][DC_129_PRED] = ff_dc_129_##sz##_msa; \ - dsp->intra_pred[tx][TM_VP8_PRED] = ff_tm_##sz##_msa; \ - - init_intra_pred_msa(TX_16X16, 16x16); - init_intra_pred_msa(TX_32X32, 32x32); -#undef init_intra_pred_msa - -#define init_intra_pred_msa(tx, sz) \ - dsp->intra_pred[tx][DC_PRED] = ff_dc_##sz##_msa; \ - dsp->intra_pred[tx][LEFT_DC_PRED] = ff_dc_left_##sz##_msa; \ - dsp->intra_pred[tx][TOP_DC_PRED] = ff_dc_top_##sz##_msa; \ - dsp->intra_pred[tx][TM_VP8_PRED] = ff_tm_##sz##_msa; \ - - init_intra_pred_msa(TX_4X4, 4x4); - init_intra_pred_msa(TX_8X8, 8x8); -#undef init_intra_pred_msa - } -} - -static av_cold void vp9dsp_itxfm_init_msa(VP9DSPContext *dsp, int bpp) -{ - if (bpp == 8) { -#define init_itxfm(tx, sz) \ - dsp->itxfm_add[tx][DCT_DCT] = ff_idct_idct_##sz##_add_msa; \ - dsp->itxfm_add[tx][DCT_ADST] = ff_iadst_idct_##sz##_add_msa; \ - dsp->itxfm_add[tx][ADST_DCT] = ff_idct_iadst_##sz##_add_msa; \ - dsp->itxfm_add[tx][ADST_ADST] = ff_iadst_iadst_##sz##_add_msa \ - -#define init_idct(tx, nm) \ - dsp->itxfm_add[tx][DCT_DCT] = \ - dsp->itxfm_add[tx][ADST_DCT] = \ - dsp->itxfm_add[tx][DCT_ADST] = \ - dsp->itxfm_add[tx][ADST_ADST] = nm##_add_msa - - init_itxfm(TX_4X4, 4x4); - init_itxfm(TX_8X8, 8x8); - init_itxfm(TX_16X16, 16x16); - init_idct(TX_32X32, ff_idct_idct_32x32); -#undef init_itxfm -#undef init_idct - } -} - -static av_cold void vp9dsp_mc_init_msa(VP9DSPContext *dsp, int bpp) -{ - if (bpp == 8) { -#define init_fpel(idx1, idx2, sz, type) \ - dsp->mc[idx1][FILTER_8TAP_SMOOTH ][idx2][0][0] = ff_##type##sz##_msa; \ - dsp->mc[idx1][FILTER_8TAP_REGULAR][idx2][0][0] = ff_##type##sz##_msa; \ - dsp->mc[idx1][FILTER_8TAP_SHARP ][idx2][0][0] = ff_##type##sz##_msa; \ - dsp->mc[idx1][FILTER_BILINEAR ][idx2][0][0] = ff_##type##sz##_msa - -#define init_copy_avg(idx, sz) \ - init_fpel(idx, 0, sz, copy); \ - init_fpel(idx, 1, sz, avg) - -#define init_avg(idx, sz) \ - init_fpel(idx, 1, sz, avg) - - init_copy_avg(0, 64); - init_copy_avg(1, 32); - init_copy_avg(2, 16); - init_copy_avg(3, 8); - init_avg(4, 4); - -#undef init_copy_avg -#undef init_avg -#undef init_fpel - -#define init_subpel1(idx1, idx2, idxh, idxv, sz, dir, type) \ - dsp->mc[idx1][FILTER_BILINEAR ][idx2][idxh][idxv] = \ - ff_##type##_bilin_##sz##dir##_msa; \ - dsp->mc[idx1][FILTER_8TAP_SMOOTH ][idx2][idxh][idxv] = \ - ff_##type##_8tap_smooth_##sz##dir##_msa; \ - dsp->mc[idx1][FILTER_8TAP_REGULAR][idx2][idxh][idxv] = \ - ff_##type##_8tap_regular_##sz##dir##_msa; \ - dsp->mc[idx1][FILTER_8TAP_SHARP ][idx2][idxh][idxv] = \ - ff_##type##_8tap_sharp_##sz##dir##_msa; - 
-#define init_subpel2(idx, idxh, idxv, dir, type) \ - init_subpel1(0, idx, idxh, idxv, 64, dir, type); \ - init_subpel1(1, idx, idxh, idxv, 32, dir, type); \ - init_subpel1(2, idx, idxh, idxv, 16, dir, type); \ - init_subpel1(3, idx, idxh, idxv, 8, dir, type); \ - init_subpel1(4, idx, idxh, idxv, 4, dir, type) - -#define init_subpel3(idx, type) \ - init_subpel2(idx, 1, 1, hv, type); \ - init_subpel2(idx, 0, 1, v, type); \ - init_subpel2(idx, 1, 0, h, type) - - init_subpel3(0, put); - init_subpel3(1, avg); - -#undef init_subpel1 -#undef init_subpel2 -#undef init_subpel3 - } -} - -static av_cold void vp9dsp_loopfilter_init_msa(VP9DSPContext *dsp, int bpp) -{ - if (bpp == 8) { - dsp->loop_filter_8[0][0] = ff_loop_filter_h_4_8_msa; - dsp->loop_filter_8[0][1] = ff_loop_filter_v_4_8_msa; - dsp->loop_filter_8[1][0] = ff_loop_filter_h_8_8_msa; - dsp->loop_filter_8[1][1] = ff_loop_filter_v_8_8_msa; - dsp->loop_filter_8[2][0] = ff_loop_filter_h_16_8_msa; - dsp->loop_filter_8[2][1] = ff_loop_filter_v_16_8_msa; - - dsp->loop_filter_16[0] = ff_loop_filter_h_16_16_msa; - dsp->loop_filter_16[1] = ff_loop_filter_v_16_16_msa; - - dsp->loop_filter_mix2[0][0][0] = ff_loop_filter_h_44_16_msa; - dsp->loop_filter_mix2[0][0][1] = ff_loop_filter_v_44_16_msa; - dsp->loop_filter_mix2[0][1][0] = ff_loop_filter_h_48_16_msa; - dsp->loop_filter_mix2[0][1][1] = ff_loop_filter_v_48_16_msa; - dsp->loop_filter_mix2[1][0][0] = ff_loop_filter_h_84_16_msa; - dsp->loop_filter_mix2[1][0][1] = ff_loop_filter_v_84_16_msa; - dsp->loop_filter_mix2[1][1][0] = ff_loop_filter_h_88_16_msa; - dsp->loop_filter_mix2[1][1][1] = ff_loop_filter_v_88_16_msa; - } -} - -static av_cold void vp9dsp_init_msa(VP9DSPContext *dsp, int bpp) -{ - vp9dsp_intrapred_init_msa(dsp, bpp); - vp9dsp_itxfm_init_msa(dsp, bpp); - vp9dsp_mc_init_msa(dsp, bpp); - vp9dsp_loopfilter_init_msa(dsp, bpp); -} -#endif // #if HAVE_MSA - -#if HAVE_MMI -static av_cold void vp9dsp_mc_init_mmi(VP9DSPContext *dsp) -{ -#define init_subpel1(idx1, idx2, idxh, idxv, sz, dir, type) \ - dsp->mc[idx1][FILTER_8TAP_SMOOTH ][idx2][idxh][idxv] = \ - ff_##type##_8tap_smooth_##sz##dir##_mmi; \ - dsp->mc[idx1][FILTER_8TAP_REGULAR][idx2][idxh][idxv] = \ - ff_##type##_8tap_regular_##sz##dir##_mmi; \ - dsp->mc[idx1][FILTER_8TAP_SHARP ][idx2][idxh][idxv] = \ - ff_##type##_8tap_sharp_##sz##dir##_mmi; - -#define init_subpel2(idx, idxh, idxv, dir, type) \ - init_subpel1(0, idx, idxh, idxv, 64, dir, type); \ - init_subpel1(1, idx, idxh, idxv, 32, dir, type); \ - init_subpel1(2, idx, idxh, idxv, 16, dir, type); \ - init_subpel1(3, idx, idxh, idxv, 8, dir, type); \ - init_subpel1(4, idx, idxh, idxv, 4, dir, type) - -#define init_subpel3(idx, type) \ - init_subpel2(idx, 1, 1, hv, type); \ - init_subpel2(idx, 0, 1, v, type); \ - init_subpel2(idx, 1, 0, h, type) - - init_subpel3(0, put); - init_subpel3(1, avg); - -#undef init_subpel1 -#undef init_subpel2 -#undef init_subpel3 -} - -static av_cold void vp9dsp_init_mmi(VP9DSPContext *dsp, int bpp) -{ - if (bpp == 8) { - vp9dsp_mc_init_mmi(dsp); - } -} -#endif // #if HAVE_MMI - -av_cold void ff_vp9dsp_init_mips(VP9DSPContext *dsp, int bpp) -{ -#if HAVE_MSA || HAVE_MMI - int cpu_flags = av_get_cpu_flags(); -#endif - -#if HAVE_MMI - if (have_mmi(cpu_flags)) - vp9dsp_init_mmi(dsp, bpp); -#endif - -#if HAVE_MSA - if (have_msa(cpu_flags)) - vp9dsp_init_msa(dsp, bpp); -#endif -} diff --git a/spaces/congsaPfin/Manga-OCR/logs/Aliana Shinobi High Five MOD APK A Guide to the Games Story and Gameplay.md b/spaces/congsaPfin/Manga-OCR/logs/Aliana Shinobi High Five MOD 
APK A Guide to the Games Story and Gameplay.md deleted file mode 100644 index 2698210ac8e7c948223dfb159bfb3db5784986ff..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Aliana Shinobi High Five MOD APK A Guide to the Games Story and Gameplay.md +++ /dev/null @@ -1,107 +0,0 @@ - -

Aliança Shinobi High Five MOD APK: A Ninja Adventure Game for Android

-

If you are a fan of ninja-themed games, you might want to check out Aliança Shinobi High Five, a new RPG game for Android devices. In this game, you can create your own ninja character, join a shinobi alliance, and embark on exciting missions and battles. You can also enjoy stunning graphics, immersive sound effects, and smooth controls.

-

But what if you want to unlock all the features and items in the game without spending real money? Well, there is a solution for that. You can download Aliança Shinobi High Five MOD APK, a modified version of the game that gives you unlimited resources, free shopping, and more. In this article, we will tell you more about this game and how to get the mod apk on your device.

-






-

What is Aliança Shinobi High Five?

-

The story and the gameplay

-

Aliança Shinobi High Five is a game inspired by the popular anime and manga series Naruto. The game is set in a world where ninjas have special abilities called chakra. You can choose from different classes of ninjas, such as taijutsu, genjutsu, or ninjutsu. You can also customize your appearance, skills, weapons, and outfits.

-

The game has a rich and engaging story mode, where you can follow the adventures of your character and interact with other characters from the Naruto universe. You can also join an alliance with other players and cooperate in various missions and events. You can also challenge other players in PvP battles and rank up in the leaderboard.

-

The features and the graphics

-

Aliança Shinobi High Five has many features that make it a fun and addictive game. Some of them are:

-
    -
  • Over 100 characters to collect and upgrade
  • -
  • Over 200 skills to learn and master
  • -
  • Over 300 items to equip and enhance
  • -
  • Over 500 quests to complete and rewards to claim
  • -
  • Different modes to play, such as story mode, alliance mode, arena mode, survival mode, etc.
  • -
  • Different events to participate in, such as daily tasks, weekly challenges, seasonal festivals, etc.
  • -
-

The game also has amazing graphics that bring the ninja world to life. The characters are designed with high-quality 3D models and animations. The environments are detailed and colorful. The effects are realistic and dynamic. The game also has a catchy soundtrack and voice-overs that match the mood of the game.

-

Why download Aliança Shinobi High Five MOD APK?

-

The benefits of the mod version

-

While Aliança Shinobi High Five is a free-to-play game, it also has some in-app purchases that can enhance your gaming experience. For example, you can buy gems, coins, energy, VIP membership, etc. However, these items can be quite expensive and not everyone can afford them.

-

That's why some people prefer to download Aliança Shinobi High Five MOD APK, a modified version of the game that gives you access to all the premium features for free. With this mod apk, you can enjoy:

-


-
    -
  • Unlimited gems
  • -
  • Unlimited coins
  • -
  • Unlimited energy
  • -
  • Free shopping
  • -
  • No ads
  • -
  • No root required
  • -
-

With these benefits, you can play the game without any limitations or interruptions, and you can unlock all the characters, skills, items, modes, and more.

How to download and install the mod apk

-

If you want to download Aliança Shinobi High Five MOD APK, you need to follow these simple steps:

-
    -
  1. Click on the download button below to get the mod apk file.
  2. -
  3. Allow unknown sources on your device settings.
  4. -
  5. Locate the downloaded file and tap on it to install it.
  6. -
  7. Launch the game and enjoy the mod features.
  8. -
-

Download Aliança Shinobi High Five MOD APK

-

Note: Before you install the mod apk, make sure you uninstall the original game if you have it on your device. Also, make sure you have enough storage space and a stable internet connection.

-

Conclusion

-

Aliança Shinobi High Five is a great game for anyone who loves ninjas and Naruto. It has a captivating story, diverse gameplay, and stunning graphics. It also has a lot of features and modes to keep you entertained for hours. However, if you want to enjoy the game without spending money, you can download Aliança Shinobi High Five MOD APK and get unlimited resources, free shopping, and more. This way, you can unlock all the content and have more fun with the game.

-

We hope this article was helpful and informative. If you have any questions or feedback, feel free to leave a comment below. Thank you for reading and happy gaming!

-

FAQs

-

Here are some frequently asked questions about Aliança Shinobi High Five MOD APK:

-

Is Aliança Shinobi High Five MOD APK safe to use?

-

Yes, Aliança Shinobi High Five MOD APK is safe to use. It does not contain any viruses or malware that can harm your device or data. However, you should always download the mod apk from a trusted source and scan it with an antivirus before installing it.

-

Is Aliança Shinobi High Five MOD APK compatible with my device?

-

Aliança Shinobi High Five MOD APK is compatible with most Android devices that run on Android 4.4 or higher. However, some devices may not support the game or the mod features due to different specifications or settings. If you encounter any problems with the game or the mod apk, you can try to update your device software, clear your cache, or contact the developer for assistance.

-

Can I play Aliança Shinobi High Five MOD APK online with other players?

-

Yes, you can play Aliança Shinobi High Five MOD APK online with other players. However, you should be aware that using the mod apk may give you an unfair advantage over other players and may result in your account being banned or suspended by the game developer. Therefore, we advise you to use the mod apk at your own risk and discretion.

-

Can I update Aliança Shinobi High Five MOD APK to the latest version?

-

Yes, you can update Aliança Shinobi High Five MOD APK to the latest version. However, you should always check if the mod apk is compatible with the new version of the game before updating it. You should also backup your game data before updating it in case something goes wrong.

-

Can I request more features for Aliança Shinobi High Five MOD APK?

-

Yes, you can request more features for Aliança Shinobi High Five MOD APK. However, we cannot guarantee that your requests will be fulfilled or that the mod apk will work as expected. The mod apk is created by independent developers who may or may not update it regularly or add new features to it. Therefore, we suggest you be patient and grateful for what you have.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Plink Balls A New Way to Play with Physics and Mathematics.md b/spaces/congsaPfin/Manga-OCR/logs/Plink Balls A New Way to Play with Physics and Mathematics.md deleted file mode 100644 index 2f3ea28bca801ece4d0ab09720a32ead9521fd3e..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Plink Balls A New Way to Play with Physics and Mathematics.md +++ /dev/null @@ -1,140 +0,0 @@ -
-

Plink Balls: A Fun and Addictive Game for All Ages

-

Do you love games that are simple, yet challenging and rewarding? Do you enjoy watching balls fall and bounce on pegs and slots? Do you want to become a millionaire or lose it all in a matter of seconds? If you answered yes to any of these questions, then you will love Plink Balls, the latest sensation in the gaming world.

-

What are Plink Balls?

-

Plink Balls are small, colorful balls that you can drop from the top of a triangular grid of pegs. As the balls fall, they hit the pegs and change their direction randomly. Some of the balls will land in containers at the bottom of the grid, while others will fall out of the screen. Each container has a multiplier value that determines how much you win or lose by dropping a ball into it. The goal is to drop as many balls as possible into the highest multipliers and avoid the lowest ones.

-

plink balls


DOWNLOAD »»» https://urlca.com/2uO7gV



-

The origin and history of Plink Balls

-

Plink Balls are inspired by a classic game show called Plinko, which was first introduced in 1983 on The Price is Right. In Plinko, contestants had to drop large discs from the top of a board with pegs and slots. Depending on where the discs landed, they could win up to $50,000 or nothing at all. Plinko became one of the most popular and exciting games on the show, and has been featured in many variations and spin-offs over the years.

-

How to play Plink Balls

-

Playing Plink Balls is very easy and fun. All you need is a smartphone or tablet with the app installed. You can download the app for free from Google Play or App Store. Once you open the app, you will see a screen with a grid of pegs and containers. You can choose how many balls you want to drop by tapping on the plus or minus buttons at the bottom. You can also choose how much you want to wager by tapping on the dollar sign button. The minimum wager is $1 and the maximum is $1000 per ball.

-

After you have set your preferences, you can start dropping balls by tapping on the screen. You can watch as the balls fall and bounce on the pegs, creating a mesmerizing spectacle. You can also use exciting boosts to increase your chances of winning, such as extra balls, magnets, bombs, and more. You can earn these boosts by playing regularly or by watching ads.

-

The benefits of playing Plink Balls

-

Plink Balls is not only a fun game, but also a beneficial one. Playing Plink Balls can help you improve your skills and abilities in various ways, such as:

-
    -
  • Cognitive skills: Playing Plink Balls can enhance your memory, attention, concentration, logic, problem-solving, and decision-making skills. You have to remember where the balls land, pay attention to the multipliers, use logic to predict where the balls will go, solve problems when they get stuck, and make quick decisions when dropping balls.
  • -
  • Mental health: Playing Plink Balls can reduce your stress, anxiety, boredom, and depression. You can relax and enjoy watching the balls fall and bounce, creating a soothing sound and visual effect. You can also feel happy and satisfied when you win big or overcome a challenge.
  • -
  • Social skills: Playing Plink Balls can improve your social skills by letting you interact with other players online. You can share your scores and achievements, join or create clubs, chat with other members, send and receive gifts, and participate in tournaments and events.
  • -
  • Financial skills: Playing Plink Balls can teach you how to manage your money wisely. You have to budget your funds, balance your risks and rewards, and plan your moves carefully. You can also learn how to deal with losses and gains, and how to cope with uncertainty and luck.
  • -
-

How to master Plink Balls

-

Plink Balls may seem like a game of chance, but there is also a lot of skill involved. If you want to become a Plink Balls master, you need to practice and learn some tips and tricks that can help you win more often. Here are some of them:

-

Tips and tricks for winning Plink Balls

-
    -
  • Drop the balls from different angles: Don't always drop the balls from the center of the screen. Try dropping them from the left or right edges, or from different heights. This can create different trajectories and outcomes for the balls, and increase your chances of hitting the high multipliers.
  • -
  • Use the boosts wisely: Don't waste your boosts on low stakes or easy levels. Save them for when you really need them, such as when you are playing for high wagers or facing difficult challenges. Also, don't use the same boost all the time. Mix and match different boosts to create different effects and combinations.
  • -
  • Watch the ads: Watching ads can be annoying, but it can also be rewarding. By watching ads, you can earn free coins, extra balls, or other boosts that can help you play better. You can also watch ads to double your winnings or to continue playing after losing.
  • -
-

The best strategies for Plink Balls

-
    -
  • Set a limit: Before you start playing, decide how much you are willing to spend and how much you want to win. Stick to your limit and don't go over it. This way, you can avoid losing more than you can afford or getting greedy and losing what you have won.
  • -
  • Start small: Don't bet too much on your first few drops. Start with small wagers and test the waters. See how the balls behave and where they land. Once you get a feel for the game, you can increase your bets gradually.
  • -
  • Aim for the middle: The middle containers usually have the highest multipliers, but they are also the hardest to hit. However, if you aim for the middle, you have a better chance of hitting something than if you aim for the edges. Even if you miss the middle, you might still hit a decent multiplier on either side.
  • -
-

The most common mistakes to avoid in Plink Balls

-
    -
  • Dropping too many balls at once: Dropping too many balls at once can be tempting, but it can also be risky. You might end up hitting the same containers repeatedly, or missing them altogether. You might also run out of balls quickly and lose your chance to win more. It is better to drop one ball at a time and see where it lands before dropping another one.
  • -
  • Dropping too fast or too slow: Dropping too fast or too slow can affect the outcome of the game. If you drop too fast, you might not have enough time to react or adjust your strategy. If you drop too slow, you might lose your momentum or miss an opportunity. It is better to drop at a moderate pace that suits your style and preference.
  • -
  • Getting distracted or impatient: Plink Balls is a game that requires focus and patience. If you get distracted by other things or impatient with the results, you might make mistakes or lose interest. It is better to play when you are relaxed and attentive, and enjoy the game as it unfolds.
  • -
-

How to enjoy Plink Balls more

-

Plink Balls is already a fun and addictive game, but there are ways to make it even more enjoyable. Here are some of them:

-

-

The different modes and levels of Plink Balls

-

Plink Balls has different modes and levels that offer different challenges and rewards. You can choose from:

-
    -
  • Classic mode: This is the basic mode where you drop balls into containers with fixed multipliers. The multipliers range from x0 to x1000.
  • -
  • Casino mode: This is the mode where you drop balls into containers with variable multipliers. The multipliers change every time you drop a ball, and can range from x0 to x10000.
  • -
  • Adventure mode: This is the mode where you drop balls into containers with special effects. The effects can be positive or negative, such as double, half, freeze, shuffle, or bomb.
  • -
  • Challenge mode: This is the mode where you face different tasks and goals. The tasks and goals can be time-based, score-based, or skill-based, such as dropping a certain number of balls, hitting a certain multiplier, or avoiding a certain container.
  • -
-

You can also unlock new levels by earning stars. Each level has a different theme and design, such as jungle, space, candy, or pirate. The higher the level, the harder the challenge and the bigger the reward.

-

The best features and boosts of Plink Balls

-

Plink Balls has many features and boosts that can make the game more fun and exciting. Some of the best ones are:

-
    -
  • Extra balls: These are balls that you can get for free by watching ads, completing tasks, or opening chests. You can use them to drop more balls and increase your chances of winning.
  • -
  • Magnets: These are boosts that you can activate by tapping on the magnet icon at the bottom of the screen. They can attract the balls to the nearest container with the highest multiplier.
  • -
  • Bombs: These are boosts that you can activate by tapping on the bomb icon at the bottom of the screen. They can explode and clear all the pegs in a certain area, creating a path for the balls to fall into the containers.
  • -
  • Leaderboards: These are features that show your rank and score compared to other players around the world. You can see how you are doing and try to beat your own or others' records.
  • -
  • Achievements: These are features that reward you for reaching certain milestones or completing certain challenges in the game. You can earn coins, stars, or other prizes for achieving them.
  • -
-

The best ways to share and compete with your friends in Plink Balls

-

Plink Balls is more fun when you play with your friends. You can share and compete with your friends in various ways, such as:

-
    -
  • Invite your friends: You can invite your friends to join Plink Balls by sending them a link or a code through social media, email, or text message. You can also scan their QR codes to add them as friends.
  • -
  • Send and receive gifts: You can send and receive gifts from your friends every day. The gifts can be coins, extra balls, or other boosts that can help you play better.
  • -
  • Join or create clubs: You can join or create clubs with your friends or other players who share your interests or goals. You can chat with your club members, exchange tips and tricks, and participate in club events and tournaments.
  • -
  • Challenge your friends: You can challenge your friends to a friendly match or a duel in Plink Balls. You can choose the mode, level, wager, and number of balls for each challenge. The winner gets to keep all the winnings and bragging rights.
  • -
-

Conclusion

-

Summary of the main points

-

Plink Balls is a fun and addictive game that anyone can enjoy. It is based on a classic game show called Plinko, where contestants had to drop discs from a board with pegs and slots. In Plink Balls, you drop balls from a grid of pegs and containers with different multipliers. The goal is to drop as many balls as possible into the highest multipliers and avoid the lowest ones.

-

Plink Balls is not only a fun game, but also a beneficial one. It can improve your cognitive skills, mental health, social skills, and financial skills. It can also teach you how to manage your money wisely, balance your risks and rewards, and plan your moves carefully.

-

Plink Balls has different modes and levels that offer different challenges and rewards. It also has many features and boosts that can make the game more fun and exciting. You can also share and compete with your friends in various ways.

-

Call to action

-

If you are looking for a game that is simple, yet challenging and rewarding; a game that is relaxing, yet stimulating and engaging; a game that is entertaining, yet educational and beneficial; then look no further than Plink Balls. Download Plink Balls today and start dropping balls into containers with multipliers. You will be amazed by how much fun you will have and how much you will learn.

-

So what are you waiting for? Download Plink Balls now and join the millions of players who are already hooked on this game. You won't regret it!

-

FAQs

-

Here are some of the most frequently asked questions about Plink Balls:

-
    -
  • Q: Is Plink Balls free to play?
  • -
  • A: Yes, Plink Balls is free to play. You can download the app for free from Google Play or App Store. You can also play without spending any real money, as you can earn coins, extra balls, and other boosts by watching ads, completing tasks, or opening chests. However, if you want to play with higher stakes, access premium features, or remove ads, you can also make in-app purchases with real money.
  • -
  • Q: Is Plink Balls fair and random?
  • -
  • A: Yes, Plink Balls is fair and random. The outcome of each drop is determined by a sophisticated algorithm that ensures that the balls fall and bounce on the pegs and containers in a realistic and unpredictable way. The algorithm also ensures that the multipliers and effects of the containers are balanced and fair. No one can manipulate or rig the game in any way.
  • -
  • Q: Is Plink Balls safe and secure?
  • -
  • A: Yes, Plink Balls is safe and secure. The app does not collect or store any personal or sensitive information from the users. The app also does not share or sell any data to third parties. The app also uses encryption and other security measures to protect the users' transactions and accounts. The app also complies with all the relevant laws and regulations regarding online gaming and gambling.
  • -
  • Q: Is Plink Balls suitable for children?
  • -
  • A: Plink Balls is suitable for children who are 12 years old or older. The app has a rating of 12+ on Google Play and App Store. The app does not contain any violence, nudity, profanity, or other inappropriate content. However, the app does involve simulated gambling, which may not be suitable for younger children or those who have gambling problems. Parents should supervise and monitor their children's use of the app and set limits and boundaries as needed.
  • -
  • Q: How can I contact the developers of Plink Balls?
  • -
  • A: You can contact the developers of Plink Balls by sending an email to plinkballs@gmail.com. You can also visit their website at www.plinkballs.com or follow them on Facebook, Twitter, or Instagram. You can also leave a review or a comment on Google Play or App Store. The developers welcome any feedback, suggestions, questions, or complaints from the users and will try to respond as soon as possible.
  • -

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Scary Teacher 3D Old Version The Ultimate Game to Make Your Teacher Pay for Her Crimes.md b/spaces/congsaPfin/Manga-OCR/logs/Scary Teacher 3D Old Version The Ultimate Game to Make Your Teacher Pay for Her Crimes.md deleted file mode 100644 index 8e92f10dc0e07af4c8698c681e6bc33d37666b3e..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Scary Teacher 3D Old Version The Ultimate Game to Make Your Teacher Pay for Her Crimes.md +++ /dev/null @@ -1,141 +0,0 @@ -
-

Download Scary Teacher 3D Old Version

-

Scary Teacher 3D is a popular horror game that lets you prank and scare your evil teacher in various ways. You can explore her house, find clues, and use different objects to make her life miserable. But what if you want to play the old version of Scary Teacher 3D, which has different levels, graphics, and features? In this article, we will show you how to download and install Scary Teacher 3D old version on your Android device or computer.

-

What is Scary Teacher 3D?

-

Scary Teacher 3D is a game developed by Z & K Games, which is known for creating other horror games such as Evil Nun and Granny. The game was released in 2018 and has since been updated with new content and improvements. The game has over 100 million downloads on Google Play Store and has a rating of 4.2 out of 5 stars.

-

download scary teacher 3d old version


Download ⇒⇒⇒ https://urlca.com/2uOaoS



-

Features of Scary Teacher 3D

-

Some of the features of Scary Teacher 3D are:

-
    -
  • You can play as a student who wants to take revenge on his or her scary teacher, who is very cruel and abusive.
  • -
  • You can explore the teacher's house, which has 15 rooms with different settings and secrets.
  • -
  • You can find clues, solve puzzles, and use various objects to prank and scare the teacher.
  • -
  • You can enjoy the realistic graphics, animations, and sound effects that create a spooky atmosphere.
  • -
  • You can unlock new chapters and scenarios as you progress in the game.
  • -
-

Why download the old version of Scary Teacher 3D?

-

Some reasons why you might want to download the old version of Scary Teacher 3D are:

-
    -
  • You prefer the old graphics, levels, and features that were available in the previous versions of the game.
  • -
  • You want to play the game offline or without ads, which might not be possible in the latest version.
  • -
  • You have an older device that is not compatible with the latest version of the game.
  • -
  • You want to try a different experience or challenge yourself with the old version of the game.
  • -
-

How to download Scary Teacher 3D old version

-

There are two ways you can download Scary Teacher 3D old version:

-
    -
  1. Use a web tool to generate download links
  2. -
  3. Use an APK extractor app on your Android device
  4. -
-

Method 1: Use a web tool to generate download links

-

This method involves using a web tool that can download APK files from Google Play Store URLs. The files are the same as you would get from the Play Store, and you can choose different versions to download. Here are the steps:

-

Step 1: Copy the Google Play URL of the app

-

First, you need to get the URL of Scary Teacher 3D from Google Play Store. You can do this by opening Google Play Store on your Android device or computer and searching for Scary Teacher 3D. Then, you need to copy the URL from the address bar or the share button. The URL should look something like this:

-

https://play.google.com/store/apps/details?id=com.zakg.scaryteacher.hellgame

-

Step 2: Paste the URL in the web tool and generate the download link

-

Next, you need to open a web tool that can generate download links for APK files from Google Play Store URLs. There are many such tools available online, but one of them is APKCombo. You can access it by visiting this link:

-

-

https://apkcombo.com/en-us/apk-downloader/

-

Once you are on the website, you need to paste the URL you copied in the previous step in the search box and click on Download APK. The web tool will then show you a list of available versions of Scary Teacher 3D, along with their sizes and dates. You can choose any version you want to download, but make sure it is an old version and not the latest one. For example, you can choose version 5.10.2, which was released on June 9, 2021.

-

Step 3: Download the APK file to your device or computer

-

After you select the version you want to download, the web tool will generate a download link for the APK file. You can click on the link to start downloading the file to your device or computer. The file name should be something like this:

-

com.zakg.scaryteacher.hellgame_5.10.2.apk

-

The download time may vary depending on your internet speed and the size of the file. Once the download is complete, you can move on to the next method or skip to the installation section.
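
-

If you downloaded the file to a computer first, you can also give it a quick sanity check before copying it to your phone. This is only an optional sketch using standard command-line tools; the file name is the example from the step above, and since there is no official checksum published to compare against, the hash mainly lets you confirm the file came through intact:

# check the size and the SHA-256 fingerprint of the downloaded APK (Linux; on macOS use "shasum -a 256")
ls -lh com.zakg.scaryteacher.hellgame_5.10.2.apk
sha256sum com.zakg.scaryteacher.hellgame_5.10.2.apk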

-

Method 2: Use an APK extractor app on your Android device

-

This method involves using an app that can extract APK files from installed apps on your Android device. This way, you can get the old version of Scary Teacher 3D if you already have it installed on your device or if you can find someone who has it. Here are the steps:

-

Step 1: Download and install App APK Extractor & Analyzer from the Play Store

-

First, you need to download and install an app that can extract APK files from installed apps on your Android device. There are many such apps available on the Play Store, but one of them is App APK Extractor & Analyzer. You can download it by visiting this link:

-

https://play.google.com/store/apps/details?id=com.apkextractor.analyzer

-

Once you have downloaded and installed the app, you need to open it and grant it the necessary permissions to access your device storage and installed apps.

-

Step 2: Select the app you want to extract and tap Extract App

-

Next, you need to select Scary Teacher 3D from the list of installed apps on your device. You can use the search bar or scroll down to find it. Once you have selected it, you need to tap on Extract App at the bottom of the screen. The app will then start extracting the APK file from Scary Teacher 3D and save it to your device storage.

-

Step 3: Save the APK file to your preferred location

-

After the extraction is complete, you will see a notification that says "APK extracted successfully". You can tap on it to open the folder where the APK file is saved. The folder name should be something like this:

-

/storage/emulated/0/APK Extractor & Analyzer/Scary Teacher 3D_5.10.2.apk

-

You can move or copy the APK file to any location you want on your device or transfer it to your computer if you wish.

-

How to install Scary Teacher 3D old version

-

Now that you have downloaded the APK file of Scary Teacher 3D old version, you need to install it on your device. Here are the steps:

-

Enable unknown sources on your device

-

Before you can install an APK file that is not from Google Play Store, you need to enable unknown sources on your device. This will allow you to install apps from other sources than Google Play Store. To do this, follow these steps:

-
    -
  • Go to Settings > Security > Unknown sources (or Settings > Apps > Special app access > Install unknown apps, depending on your device model and Android version).
  • -
  • Find and tap the app that you used to download the APK file, such as APKCombo or App APK Extractor & Analyzer.
  • -
  • Toggle on the switch that says Allow from this source or Allow app installs.
  • -
-

Locate and tap the APK file to install it

-

After you have enabled unknown sources, you can install the APK file of Scary Teacher 3D old version. To do this, follow these steps:

-
    -
  • Go to the location where you saved the APK file, such as your device storage or your computer.
  • -
  • Find and tap the APK file to open it. You may see a warning message that says "This type of file can harm your device". Tap OK to proceed.
  • -
  • You may see a screen that shows the app's permissions and features. Tap Install to start the installation process.
  • -
  • Wait for the installation to finish. You may see a message that says "App installed". Tap Open to launch the app or Done to exit.
  • -
-
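
If your phone is connected to a computer with USB debugging enabled, you can also sideload the same APK with adb from the command line instead of tapping through the steps above. This is just a rough sketch; the package name and file name are the examples used earlier in this article, and Android normally refuses to install an older version over a newer one, so the newer version has to be removed first (which also deletes its saved data):

# remove the newer version first, if one is installed (Android usually blocks downgrades)
adb uninstall com.zakg.scaryteacher.hellgame
# sideload the old version over USB
adb install com.zakg.scaryteacher.hellgame_5.10.2.apk

-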

Conclusion

-

In this article, we have shown you how to download and install Scary Teacher 3D old version on your Android device or computer. You can use either a web tool or an APK extractor app to get the APK file of the old version of the game. Then, you can install it by enabling unknown sources and tapping the APK file. We hope you enjoy playing Scary Teacher 3D old version and have fun pranking and scaring your evil teacher.

-

FAQs

-

Here are some frequently asked questions about Scary Teacher 3D old version:

-

Q: Is Scary Teacher 3D old version safe to download and install?

-

A: Yes, as long as you download the APK file from a reliable source, such as APKCombo or App APK Extractor & Analyzer, and scan it for viruses before installing it. However, you should be careful when installing apps from unknown sources, as they may contain malware or unwanted ads.

-

Q: What are the differences between Scary Teacher 3D old version and new version?

-

A: The differences between Scary Teacher 3D old version and new version may vary depending on which version you choose to download. Some of the possible differences are:

-
    -
  • The old version may have fewer levels, chapters, and scenarios than the new version.
  • -
  • The old version may have different graphics, sound effects, and animations than the new version.
  • -
  • The old version may have different bugs, glitches, and performance issues than the new version.
  • -
  • The old version may not support some features or devices that the new version does.
  • -
-

Q: How can I update Scary Teacher 3D old version to the latest version?

-

A: If you want to update Scary Teacher 3D old version to the latest version, you can do so by visiting Google Play Store and downloading the latest version of the game. However, this will overwrite the old version of the game and you will lose any progress or data you have in it. Alternatively, you can keep both versions of the game by renaming the APK file of the old version before installing it. For example, you can rename it to Scary Teacher 3D_old.apk. This way, you can have two icons of Scary Teacher 3D on your device and play either one of them.

-

Q: How can I uninstall Scary Teacher 3D old version from my device?

-

A: If you want to uninstall Scary Teacher 3D old version from your device, you can do so by following these steps:

-
    -
  • Go to Settings > Apps > Scary Teacher 3D (or Settings > Apps & notifications > See all apps > Scary Teacher 3D, depending on your device model and Android version).
  • -
  • Tap Uninstall and confirm your choice.
  • -
  • You may also need to delete the APK file from your device storage or computer if you don't need it anymore.
  • -
-

Q: Where can I find more information about Scary Teacher 3D?

-

A: If you want to find more information about Scary Teacher 3D, such as tips, tricks, guides, reviews, videos, and more, you can visit these websites:

-
    -
  • https://www.zakg.com/scary-teacher-3d/: The official website of Z & K Games, the developer of Scary Teacher 3D.
  • -
  • https://www.youtube.com/channel/UCw9ZP9zF0wJ6oEW5gkQy9Qw: The official YouTube channel of Z & K Games, where you can watch gameplay videos, trailers, and updates of Scary Teacher 3D.
  • -
  • https://www.facebook.com/ScaryTeacher3D/: The official Facebook page of Scary Teacher 3D, where you can follow the latest news, events, and community posts of the game.
  • -

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Solo Leveling Hit Run APK - Swipe Slash and Save the Town from Evil Monsters.md b/spaces/congsaPfin/Manga-OCR/logs/Solo Leveling Hit Run APK - Swipe Slash and Save the Town from Evil Monsters.md deleted file mode 100644 index 5bc9488d5e01b6c4cec1ffbe3a8799d075e872c0..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Solo Leveling Hit Run APK - Swipe Slash and Save the Town from Evil Monsters.md +++ /dev/null @@ -1,145 +0,0 @@ - -

Solo Leveling Hit and Run APK: A Fun and Action-Packed Runner Game Based on the Popular Webtoon

-

If you are a fan of the Korean webtoon series Solo Leveling, you might want to check out this new runner game based on it. Solo Leveling Hit and Run APK is a game that lets you experience the thrilling adventures of Sung Jinwoo, a weak hunter who gains the power to level up beyond any limits. In this game, you will have to run, slash, dodge, and fight your way through various enemies and obstacles, while leveling up your skills and abilities. You will also encounter some familiar characters and scenes from the webtoon, as well as some new ones that will surprise you. Whether you are a fan of Solo Leveling or not, this game will surely keep you entertained and challenged.

-

solo leveling hit and run apk


DOWNLOAD https://urlca.com/2uOdun



-

But what is Solo Leveling exactly? And how can you download and play this game on your Android device? In this article, we will answer these questions and more. We will give you a brief overview of the game and its features, as well as the webtoon and its plot. We will also show you how to download and install the game on your device, how to play it effectively, and why you should give it a try. By the end of this article, you will have all the information you need to enjoy this fun and action-packed runner game based on the popular webtoon.

-

What is Solo Leveling Hit and Run APK?

-

A brief introduction to the game and its features

-

Solo Leveling Hit and Run APK is a runner game developed by Supercent, a Korean game studio. It is based on the webtoon series Solo Leveling by Chu-Gong, which has over 22 million readers worldwide. The game was released in March 2023 for Android devices.

-

The game is a mission-based driving game that features out-of-the-car platform action, similar to The Simpsons Hit and Run or Grand Theft Auto. You can explore the interactive world of Seoul, where the story takes place, and interact with various characters from the webtoon. You can also drive different vehicles, such as cars, motorcycles, trucks, etc., that have different speed, handling, durability, etc.

-

The game also has a leveling system that allows you to upgrade your skills and abilities as you progress through the game. You can increase your strength, speed, stamina, health, etc., by defeating enemies or completing missions. You can also unlock new weapons, such as blades, guns, axes, etc., that have different damage, range , and special effects. You can also customize your appearance, such as clothes, hair, accessories, etc., to suit your style.

-

A brief introduction to the webtoon and its plot

-

Solo Leveling is a webtoon series written by Chu-Gong and illustrated by Jang Sung-Rak and Gee So-Lyung. It is based on a novel of the same name by Chu-Gong. The webtoon was first published in 2018 on KakaoPage, a Korean webtoon platform, and later on Webtoon, an international webtoon platform. The webtoon has over 150 chapters and is still ongoing.

-

-

The webtoon is set in a world where portals to other dimensions, called gates, have opened, unleashing monsters and creatures that threaten humanity. To fight them, some people have awakened as hunters, who have special abilities and powers. However, not all hunters are equal, and they are ranked from E to S, with S being the strongest.

-

The protagonist of the webtoon is Sung Jinwoo, a weak E-rank hunter who barely survives his missions. One day, he gets involved in a double dungeon, a rare and dangerous type of gate that has never been cleared before. There, he finds a mysterious system that allows him to level up his skills and abilities by completing quests and killing monsters. He becomes the only player of the system, and the only one who can see it. He decides to use it to become stronger and rise from the lowest rank to the highest rank of hunters. Along the way, he faces many challenges, enemies, allies, secrets, and mysteries that will change his life and the world.

-

How to Download and Install Solo Leveling Hit and Run APK?

-

The steps to download and install the game on Android devices

-

If you want to play Solo Leveling Hit and Run APK on your Android device, you will need to follow these steps:

-
    -
  1. Go to the official website of the game at https://solo-leveling-hit-and-run.com/ or search for it on Google.
  2. -
  3. Click on the download button and wait for the APK file to be downloaded on your device.
  4. -
  5. Once the download is complete, locate the APK file on your device's file manager or downloads folder.
  6. -
  7. Tap on the APK file and allow it to install on your device. You may need to enable unknown sources or allow from this source in your device's settings.
  8. -
  9. After the installation is done, you can launch the game from your app drawer or home screen.
  10. -
-

The requirements and permissions needed for the game

-

Before you download and install Solo Leveling Hit and Run APK on your device, you should make sure that your device meets the following requirements:

-
    -
  • Your device should have Android 4.4 or higher as its operating system.
  • -
  • Your device should have at least 2 GB of RAM and 500 MB of free storage space.
  • -
  • Your device should have a stable internet connection to play the game online.
  • -
-
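
If you would rather check these numbers than guess, and your device is connected to a computer with USB debugging enabled, a few adb commands can read them out. This is only an optional sketch, and the exact output format differs between devices:

# Android version (should be 4.4 or higher)
adb shell getprop ro.build.version.release
# total RAM in kB (2 GB is roughly 2,000,000 kB)
adb shell cat /proc/meminfo | head -n 1
# free space on the data partition (you need at least 500 MB)
adb shell df /data

-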

You should also be aware that the game will ask for some permissions on your device, such as:

-
    -
  • Access to your device's storage to save game data and cache.
  • -
  • Access to your device's microphone to record audio for voice chat.
  • -
  • Access to your device's camera to scan QR codes for rewards.
  • -
-

You should grant these permissions if you want to enjoy the full features of the game. However, you can also deny them if you are concerned about your privacy or security.

-

How to Play Solo Leveling Hit and Run APK?

-

The basic gameplay mechanics and controls

-

Solo Leveling Hit and Run APK is a runner game that combines driving and platform action. You can control your character using the virtual joystick on the left side of the screen, and use the buttons on the right side of the screen to perform actions such as jumping, attacking, using items, etc. You can also swipe left or right on the screen to change lanes while driving or running.

-

The game has two main modes: story mode and challenge mode. In story mode, you can follow the plot of the webtoon and complete missions that involve driving or running through various locations, fighting enemies or bosses, collecting items or gems, etc. In challenge mode, you can compete with other players online or offline in different types of races or battles.

-

The different modes, levels, enemies, and obstacles in the game

-

The game has several modes that offer different gameplay experiences. Here are some of them:

    -
  • Race mode: In this mode, you can race against other players or the AI in different tracks, such as city, highway, forest, etc. You can use your skills and items to boost your speed, attack your opponents, or avoid obstacles. You can also collect gems and coins along the way to upgrade your vehicle or buy new ones. The goal is to reach the finish line first or within the time limit.
  • -
  • Battle mode: In this mode, you can fight against other players or the AI in different arenas, such as dungeon, castle, stadium, etc. You can use your weapons and items to deal damage, defend yourself, or heal yourself. You can also collect gems and coins along the way to upgrade your weapons or buy new ones. The goal is to reduce your opponent's health to zero or have more health than them when the time runs out.
  • -
  • Survival mode: In this mode, you can run for as long as you can while avoiding enemies and obstacles that come from all directions. You can use your skills and items to escape, fight back, or recover. You can also collect gems and coins along the way to upgrade your skills or buy new ones. The goal is to survive for as long as possible or reach a certain distance.
  • -
-

The game has various levels that correspond to the chapters of the webtoon. Each level has a different theme, setting, difficulty, and objective. Some levels may require you to drive or run through a certain route, while others may require you to defeat a certain number of enemies or a boss. Some levels may also have special events or challenges that will test your skills and strategy.

-

The game has various enemies and obstacles that will try to stop you from completing your missions. Some enemies are common monsters that appear in the webtoon, such as goblins, wolves, zombies, etc. Some enemies are special bosses that have unique abilities and patterns, such as Cerberus, the Demon King, the Ant King, etc. Some obstacles are environmental hazards that can damage you or slow you down, such as traffic, walls, spikes, traps, etc.

The tips and tricks to level up faster and defeat the boss

-

If you want to level up faster and defeat the boss in Solo Leveling Hit and Run APK, you should follow these tips and tricks:

-
    -
  • Complete the daily quests and achievements that will reward you with gems, coins, items, etc.
  • -
  • Watch ads or videos that will give you extra gems, coins, items, etc.
  • -
  • Join a guild or a clan that will give you access to more missions, rewards, chat, etc.
  • -
  • Participate in events or festivals that will offer you special missions, rewards, items, etc.
  • -
  • Use the best vehicle or weapon that suits your playstyle and preference.
  • -
  • Upgrade your vehicle or weapon regularly to increase its performance and durability.
  • -
  • Customize your appearance to boost your confidence and style.
  • -
  • Use your skills and items wisely and strategically.
  • -
  • Learn the patterns and weaknesses of your enemies and bosses.
  • -
  • Avoid unnecessary damage or collisions.
  • -
  • Collect gems and coins as much as possible.
  • -
  • Have fun and enjoy the game.
  • -
-

Why You Should Play Solo Leveling Hit and Run APK?

-

The benefits of playing the game, such as fun, entertainment, challenge, etc.

-

Playing Solo Leveling Hit and Run APK can bring you many benefits, such as:

-
    -
  • Fun: The game is fun to play, as it offers a variety of gameplay modes, levels, enemies, and obstacles that will keep you entertained and engaged. You can also enjoy the humor, drama, and action of the webtoon in the game.
  • -
  • Entertainment: The game is entertaining to watch, as it features high-quality graphics, sound, and animation that will immerse you in the world of Solo Leveling. You can also admire the beautiful and detailed design of the characters, vehicles, weapons, and environments in the game.
  • -
  • Challenge: The game is challenging to master, as it requires skill, strategy, and reflexes to complete the missions and defeat the enemies and bosses. You can also compete with other players or the AI in different modes and rankings to test your abilities and improve your performance.
  • -
-

The advantages of playing the game, such as graphics, sound, performance, etc.

-

Playing Solo Leveling Hit and Run APK can also give you many advantages, such as:

-
    -
  • Graphics: The game has stunning graphics that are faithful to the webtoon's style and quality. The game uses 3D models and textures that are realistic and detailed. The game also has dynamic lighting and shadows that create a realistic and immersive atmosphere.
  • -
  • Sound: The game has excellent sound that matches the webtoon's tone and mood. The game uses original soundtracks and sound effects that are catchy and immersive. The game also has voice acting that is expressive and authentic.
  • -
  • Performance: The game has smooth performance that ensures a satisfying and enjoyable gameplay experience. The game runs at a stable frame rate and resolution that prevent lag or glitches. The game also has a user-friendly interface and controls that are easy to use and customize.
  • -
-

The comparison of the game with other similar games, such as The Simpsons Hit and Run, Grand Theft Auto, etc.

-

Playing Solo Leveling Hit and Run APK can also make you appreciate how it differs from other similar games, such as:

-
    -
  • The Simpsons Hit and Run: This is a 2003 game based on the animated sitcom The Simpsons. It is also a mission-based driving game that features out-of-the-car platform action. However, it has a more comedic and satirical tone than Solo Leveling Hit and Run APK. It also has a more cartoonish and colorful graphics style than Solo Leveling Hit and Run APK.
  • -
  • Grand Theft Auto: This is a series of games that started in 1997 and is still ongoing. It is also a mission-based driving game that features out-of-the-car platform action. However, it has a more realistic and violent tone than Solo Leveling Hit and Run APK. It also has a more open-world and sandbox gameplay style than Solo Leveling Hit and Run APK.
  • -
-

Conclusion

-

A summary of the main points of the article

-

In conclusion, Solo Leveling Hit and Run APK is a fun and action-packed runner game based on the popular webtoon series Solo Leveling by Chu-Gong. It is a game that lets you experience the thrilling adventures of Sung Jinwoo, a weak hunter who gains the power to level up beyond any limits. In this game, you will have to run, slash, dodge, and fight your way through various enemies and obstacles, while leveling up your skills and abilities. You will also encounter some familiar characters and scenes from the webtoon, as well as some new ones that will surprise you. Whether you are a fan of Solo Leveling or not, this game will surely keep you entertained and challenged.

-

We have also shown you how to download and install the game on your Android device, how to play it effectively, and why you should give it a try. We have also compared the game with other similar games, such as The Simpsons Hit and Run and Grand Theft Auto, and highlighted its benefits and advantages. We hope that this article has given you all the information you need to enjoy this fun and action-packed runner game based on the popular webtoon.

-

A call to action for the readers to download and play the game

-

So, what are you waiting for? Download Solo Leveling Hit and Run APK now and join Sung Jinwoo in his epic journey to become the strongest hunter in the world. Experience the thrill and excitement of running, slashing, dodging, and fighting in this amazing game that will make you feel like you are part of the webtoon. Don't miss this chance to play one of the best runner games based on one of the best webtoon series ever. Download Solo Leveling Hit and Run APK today and have fun!

-

FAQs

-

Is Solo Leveling Hit and Run APK free to play?

-

Yes, Solo Leveling Hit and Run APK is free to play. However, it may contain some in-app purchases or ads that can enhance your gameplay experience or support the developer.

-

Is Solo Leveling Hit and Run APK safe to download and install?

-

Yes, Solo Leveling Hit and Run APK is safe to download and install. It does not contain any viruses, malware, or spyware that can harm your device or data. However, you should always download it from a trusted source or website, such as the official website of the game or Google Play Store.

-

Is Solo Leveling Hit and Run APK compatible with all Android devices?

-

No, Solo Leveling Hit and Run APK may not be compatible with all Android devices. It requires Android 4.4 or higher as its operating system, as well as 2 GB of RAM and 500 MB of free storage space. It may also not work well on some devices due to different specifications or models.

-

How can I get more gems in Solo Leveling Hit and Run APK?

-

You can get more gems in Solo Leveling Hit and Run APK by completing missions, defeating enemies, collecting items, watching ads or videos, participating in events or festivals, joining a guild or a clan, etc. You can also buy gems with real money through in-app purchases.

-

How can I contact the developer of Solo Leveling Hit and Run APK?

-

You can contact the developer of Solo Leveling Hit and Run APK by sending an email to support@supercent.com or visiting their website at https://supercent.com/. You can also follow them on their social media accounts, such as Facebook, Twitter, Instagram, etc., for updates, news, feedback, etc.

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Autodesk AutoCAD 2020 Product Keys Crack Download ((EXCLUSIVE)).md b/spaces/contluForse/HuggingGPT/assets/Autodesk AutoCAD 2020 Product Keys Crack Download ((EXCLUSIVE)).md deleted file mode 100644 index e431c136537c9ffe49cdfb4cdbbf5e6647f071b8..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Autodesk AutoCAD 2020 Product Keys Crack Download ((EXCLUSIVE)).md +++ /dev/null @@ -1,8 +0,0 @@ -

Autodesk AutoCAD 2020 Product Keys Crack Download


Download >>>>> https://ssurll.com/2uzyiM



-
-April 14, 2021 - Sharing the Autodesk product keys needed to install Autodesk products, from 3D MAX to AutoCAD and Maya, for versions 2016 through 2020. #Autodesk #autocad #maya #autodesk_com #autocad_com In this article I would like to talk about the basics that every programmer working with 3D space should know, be it a 3D artist, 3D designer or planner. Everything I say here is from my own experience. 8a78ff9644
-
-
-

diff --git a/spaces/cpnepo/Harry-Potter-Q-A/app.py b/spaces/cpnepo/Harry-Potter-Q-A/app.py deleted file mode 100644 index 48cd5b95cad1867cd9a4ac857f2e9e456a577ef5..0000000000000000000000000000000000000000 --- a/spaces/cpnepo/Harry-Potter-Q-A/app.py +++ /dev/null @@ -1,74 +0,0 @@ -import streamlit as st -from sentence_transformers import SentenceTransformer, util -from transformers import (AutoModelForQuestionAnswering, - AutoTokenizer, pipeline) - -import pandas as pd -import regex as re - -# Select model for question answering -model_name = "deepset/roberta-base-squad2" - -# Load model & tokenizer -model = AutoModelForQuestionAnswering.from_pretrained(model_name) -tokenizer = AutoTokenizer.from_pretrained(model_name) - -# Create pipeline -pipe = pipeline('question-answering', model=model_name, tokenizer=model_name) - -# Load Harry Potter book corpus from link -book1_raw_0 = open("book_1.txt", mode="r", encoding="utf-8").read() - -# Text pre-processing -# Remove page statements -book1_raw_1 = re.sub(r'Page \| [0-9]+ Harry Potter [a-zA-Z \-]+J.K. Rowling', '', book1_raw_0) - -# Remove newlines -book1_raw_1 = re.sub(r'\n', '', book1_raw_1) - -# Remove periods; this will relevant in the regrouping later -book1_raw_1 = re.sub(r'Mr. ', 'Mr ', book1_raw_1) -book1_raw_1 = re.sub(r'Ms. ', 'Ms ', book1_raw_1) -book1_raw_1 = re.sub(r'Mrs. ', 'Mrs ', book1_raw_1) - -# Group into 6 sentences-long parts -paragraphs = re.findall("[^.?!]+[.?!][^.?!]+[.?!][^.?!]+[.?!][^.?!]+[.?!][^.?!]+[.?!][^.?!]+[.?!]", book1_raw_1) - -st.title('Harry Potter and the Extractive Question Answering Model') - -# Type in HP-related query here -query = st.text_area("Hello my dears! What is your question? Be patient please, I am not a Ravenclaw!") - -if st.button('Accio Responsa!'): - # Perform sentence embedding on query and sentence groups - model_embed_name = 'sentence-transformers/msmarco-distilbert-dot-v5' - - model_embed = SentenceTransformer(model_embed_name) - doc_emb = model_embed.encode(paragraphs) - query_emb = model_embed.encode(query) - - #Compute dot score between query and all document embeddings - scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist() - - #Combine docs & scores - doc_score_pairs = list(zip(paragraphs, scores)) - - #Sort by decreasing score and get only 3 most similar groups - doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], - reverse=True)[:1] - - # Join these similar groups to form the context - context = "".join(x[0] for x in doc_score_pairs) - - # Perform the querying - QA_input = {'question': query, 'context': context} - res = pipe(QA_input) - - confidence = res.get('score') - if confidence > 0.5: - st.write(res.get('answer')) - else: - out = "Sorry dear, I'm not sure" - st.write(out) - #out = res.get('answer') - \ No newline at end of file diff --git a/spaces/cvlab/zero123-live/taming-transformers/taming/data/image_transforms.py b/spaces/cvlab/zero123-live/taming-transformers/taming/data/image_transforms.py deleted file mode 100644 index 657ac332174e0ac72f68315271ffbd757b771a0f..0000000000000000000000000000000000000000 --- a/spaces/cvlab/zero123-live/taming-transformers/taming/data/image_transforms.py +++ /dev/null @@ -1,132 +0,0 @@ -import random -import warnings -from typing import Union - -import torch -from torch import Tensor -from torchvision.transforms import RandomCrop, functional as F, CenterCrop, RandomHorizontalFlip, PILToTensor -from torchvision.transforms.functional import _get_image_size as get_image_size - -from taming.data.helper_types import 
BoundingBox, Image - -pil_to_tensor = PILToTensor() - - -def convert_pil_to_tensor(image: Image) -> Tensor: - with warnings.catch_warnings(): - # to filter PyTorch UserWarning as described here: https://github.com/pytorch/vision/issues/2194 - warnings.simplefilter("ignore") - return pil_to_tensor(image) - - -class RandomCrop1dReturnCoordinates(RandomCrop): - def forward(self, img: Image) -> (BoundingBox, Image): - """ - Additionally to cropping, returns the relative coordinates of the crop bounding box. - Args: - img (PIL Image or Tensor): Image to be cropped. - - Returns: - Bounding box: x0, y0, w, h - PIL Image or Tensor: Cropped image. - - Based on: - torchvision.transforms.RandomCrop, torchvision 1.7.0 - """ - if self.padding is not None: - img = F.pad(img, self.padding, self.fill, self.padding_mode) - - width, height = get_image_size(img) - # pad the width if needed - if self.pad_if_needed and width < self.size[1]: - padding = [self.size[1] - width, 0] - img = F.pad(img, padding, self.fill, self.padding_mode) - # pad the height if needed - if self.pad_if_needed and height < self.size[0]: - padding = [0, self.size[0] - height] - img = F.pad(img, padding, self.fill, self.padding_mode) - - i, j, h, w = self.get_params(img, self.size) - bbox = (j / width, i / height, w / width, h / height) # x0, y0, w, h - return bbox, F.crop(img, i, j, h, w) - - -class Random2dCropReturnCoordinates(torch.nn.Module): - """ - Additionally to cropping, returns the relative coordinates of the crop bounding box. - Args: - img (PIL Image or Tensor): Image to be cropped. - - Returns: - Bounding box: x0, y0, w, h - PIL Image or Tensor: Cropped image. - - Based on: - torchvision.transforms.RandomCrop, torchvision 1.7.0 - """ - - def __init__(self, min_size: int): - super().__init__() - self.min_size = min_size - - def forward(self, img: Image) -> (BoundingBox, Image): - width, height = get_image_size(img) - max_size = min(width, height) - if max_size <= self.min_size: - size = max_size - else: - size = random.randint(self.min_size, max_size) - top = random.randint(0, height - size) - left = random.randint(0, width - size) - bbox = left / width, top / height, size / width, size / height - return bbox, F.crop(img, top, left, size, size) - - -class CenterCropReturnCoordinates(CenterCrop): - @staticmethod - def get_bbox_of_center_crop(width: int, height: int) -> BoundingBox: - if width > height: - w = height / width - h = 1.0 - x0 = 0.5 - w / 2 - y0 = 0. - else: - w = 1.0 - h = width / height - x0 = 0. - y0 = 0.5 - h / 2 - return x0, y0, w, h - - def forward(self, img: Union[Image, Tensor]) -> (BoundingBox, Union[Image, Tensor]): - """ - Additionally to cropping, returns the relative coordinates of the crop bounding box. - Args: - img (PIL Image or Tensor): Image to be cropped. - - Returns: - Bounding box: x0, y0, w, h - PIL Image or Tensor: Cropped image. - Based on: - torchvision.transforms.RandomHorizontalFlip (version 1.7.0) - """ - width, height = get_image_size(img) - return self.get_bbox_of_center_crop(width, height), F.center_crop(img, self.size) - - -class RandomHorizontalFlipReturn(RandomHorizontalFlip): - def forward(self, img: Image) -> (bool, Image): - """ - Additionally to flipping, returns a boolean whether it was flipped or not. - Args: - img (PIL Image or Tensor): Image to be flipped. - - Returns: - flipped: whether the image was flipped or not - PIL Image or Tensor: Randomly flipped image. 
- - Based on: - torchvision.transforms.RandomHorizontalFlip (version 1.7.0) - """ - if torch.rand(1) < self.p: - return True, F.hflip(img) - return False, img diff --git a/spaces/cybercorejapan/human-detection-docker/projects/human_detection/export_onnx_trt/export_onnx_mmyolov8.sh b/spaces/cybercorejapan/human-detection-docker/projects/human_detection/export_onnx_trt/export_onnx_mmyolov8.sh deleted file mode 100644 index 8cf45e6ad1430f3d7a11fe6d01104b224ecaeeee..0000000000000000000000000000000000000000 --- a/spaces/cybercorejapan/human-detection-docker/projects/human_detection/export_onnx_trt/export_onnx_mmyolov8.sh +++ /dev/null @@ -1,28 +0,0 @@ -ENV_DIR="/opt/conda/lib/python3.8/site-packages" -CC_DEMO_DIR="/root/workspace/cc-demo" - - -DEPLOY_CFG_PATH="${ENV_DIR}/mmyolo/.mim/configs/deploy/detection_onnxruntime_dynamic.py" -MODEL_CFG_PATH="${CC_DEMO_DIR}/projects/human_detection/deploy/mmyolov8_human_cfg.py" -MODEL_CHECKPOINT_PATH="/data/human_detection/weights/mmyolov8_s_human.pth" -WORK_DIR="/data/human_detection/deploy_onnx" - -INPUT_IMG="${CC_DEMO_DIR}/tests/test_data/human_det/1.jpg" -TEST_IMG="${CC_DEMO_DIR}/tests/test_data/human_det/2.jpg" -DEVICE="cpu" - -export ONNXRUNTIME_DIR=/root/workspace/onnxruntime-linux-x64-1.15.1/ -export LD_LIBRARY_PATH=$ONNXRUNTIME_DIR/lib:$LD_LIBRARY_PATH - -cd /root/workspace/mmdeploy && python tools/deploy.py \ - ${DEPLOY_CFG_PATH} \ - ${MODEL_CFG_PATH} \ - ${MODEL_CHECKPOINT_PATH} \ - ${INPUT_IMG} \ - --test-img ${TEST_IMG} \ - --work-dir ${WORK_DIR} \ - --device ${DEVICE} \ - --log-level INFO \ - --show \ - --dump-info - # --calib-dataset-cfg ${CALIB_DATA_CFG} \ \ No newline at end of file diff --git a/spaces/dachenchen/real/README.md b/spaces/dachenchen/real/README.md deleted file mode 100644 index 391cdc1b0b1ad6561d0bfb0d13eba141b1be37cb..0000000000000000000000000000000000000000 --- a/spaces/dachenchen/real/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: ChuanhuChatGPT -emoji: 🐠 -colorFrom: blue -colorTo: red -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -license: mit -duplicated_from: hslec/xinsaisi ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/dakaiye/dky_xuexi/request_llm/bridge_jittorllms_rwkv.py b/spaces/dakaiye/dky_xuexi/request_llm/bridge_jittorllms_rwkv.py deleted file mode 100644 index 1252eead89a44994241ec4407a1e693cbb170bf6..0000000000000000000000000000000000000000 --- a/spaces/dakaiye/dky_xuexi/request_llm/bridge_jittorllms_rwkv.py +++ /dev/null @@ -1,178 +0,0 @@ - -from transformers import AutoModel, AutoTokenizer -import time -import threading -import importlib -from toolbox import update_ui, get_conf -from multiprocessing import Process, Pipe - -load_message = "jittorllms尚未加载,加载需要一段时间。注意,请避免混用多种jittor模型,否则可能导致显存溢出而造成卡顿,取决于`config.py`的配置,jittorllms消耗大量的内存(CPU)或显存(GPU),也许会导致低配计算机卡死 ……" - -################################################################################# -class GetGLMHandle(Process): - def __init__(self): - super().__init__(daemon=True) - self.parent, self.child = Pipe() - self.jittorllms_model = None - self.info = "" - self.local_history = [] - self.success = True - self.check_dependency() - self.start() - self.threadLock = threading.Lock() - - def check_dependency(self): - try: - import pandas - self.info = "依赖检测通过" - self.success = True - except: - from toolbox import trimmed_format_exc - self.info = r"缺少jittorllms的依赖,如果要使用jittorllms,除了基础的pip依赖以外,您还需要运行`pip install -r request_llm/requirements_jittorllms.txt -i 
https://pypi.jittor.org/simple -I`"+\ - r"和`git clone https://gitlink.org.cn/jittor/JittorLLMs.git --depth 1 request_llm/jittorllms`两个指令来安装jittorllms的依赖(在项目根目录运行这两个指令)。" +\ - r"警告:安装jittorllms依赖后将完全破坏现有的pytorch环境,建议使用docker环境!" + trimmed_format_exc() - self.success = False - - def ready(self): - return self.jittorllms_model is not None - - def run(self): - # 子进程执行 - # 第一次运行,加载参数 - def validate_path(): - import os, sys - dir_name = os.path.dirname(__file__) - env = os.environ.get("PATH", "") - os.environ["PATH"] = env.replace('/cuda/bin', '/x/bin') - root_dir_assume = os.path.abspath(os.path.dirname(__file__) + '/..') - os.chdir(root_dir_assume + '/request_llm/jittorllms') - sys.path.append(root_dir_assume + '/request_llm/jittorllms') - validate_path() # validate path so you can run from base directory - - def load_model(): - import types - try: - if self.jittorllms_model is None: - device, = get_conf('LOCAL_MODEL_DEVICE') - from .jittorllms.models import get_model - # availabel_models = ["chatglm", "pangualpha", "llama", "chatrwkv"] - args_dict = {'model': 'chatrwkv'} - print('self.jittorllms_model = get_model(types.SimpleNamespace(**args_dict))') - self.jittorllms_model = get_model(types.SimpleNamespace(**args_dict)) - print('done get model') - except: - self.child.send('[Local Message] Call jittorllms fail 不能正常加载jittorllms的参数。') - raise RuntimeError("不能正常加载jittorllms的参数!") - print('load_model') - load_model() - - # 进入任务等待状态 - print('进入任务等待状态') - while True: - # 进入任务等待状态 - kwargs = self.child.recv() - query = kwargs['query'] - history = kwargs['history'] - # 是否重置 - if len(self.local_history) > 0 and len(history)==0: - print('触发重置') - self.jittorllms_model.reset() - self.local_history.append(query) - - print('收到消息,开始请求') - try: - for response in self.jittorllms_model.stream_chat(query, history): - print(response) - self.child.send(response) - except: - from toolbox import trimmed_format_exc - print(trimmed_format_exc()) - self.child.send('[Local Message] Call jittorllms fail.') - # 请求处理结束,开始下一个循环 - self.child.send('[Finish]') - - def stream_chat(self, **kwargs): - # 主进程执行 - self.threadLock.acquire() - self.parent.send(kwargs) - while True: - res = self.parent.recv() - if res != '[Finish]': - yield res - else: - break - self.threadLock.release() - -global rwkv_glm_handle -rwkv_glm_handle = None -################################################################################# -def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=[], console_slience=False): - """ - 多线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - global rwkv_glm_handle - if rwkv_glm_handle is None: - rwkv_glm_handle = GetGLMHandle() - if len(observe_window) >= 1: observe_window[0] = load_message + "\n\n" + rwkv_glm_handle.info - if not rwkv_glm_handle.success: - error = rwkv_glm_handle.info - rwkv_glm_handle = None - raise RuntimeError(error) - - # jittorllms 没有 sys_prompt 接口,因此把prompt加入 history - history_feedin = [] - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - watch_dog_patience = 5 # 看门狗 (watchdog) 的耐心, 设置5秒即可 - response = "" - for response in rwkv_glm_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=sys_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - print(response) - if len(observe_window) >= 1: observe_window[0] = response - if len(observe_window) >= 2: - if (time.time()-observe_window[1]) > watch_dog_patience: - raise RuntimeError("程序终止。") 
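
The loop above implements a watchdog: the caller keeps `observe_window[1]` fresh as a heartbeat for as long as it still wants output, and the worker aborts the stream once that heartbeat is older than `watch_dog_patience`. A self-contained sketch of the same pattern, with an illustrative `chunks` iterable and a five-second patience (both assumptions for demonstration, not part of this repository):

```python
import time

def stream_with_watchdog(chunks, heartbeat, patience=5.0):
    """Yield chunks, aborting if the caller's heartbeat timestamp goes stale."""
    for chunk in chunks:
        if time.time() - heartbeat[0] > patience:
            raise RuntimeError("watchdog timeout: caller stopped polling")
        yield chunk

# Usage: the caller refreshes heartbeat[0] on every chunk it consumes.
heartbeat = [time.time()]
for piece in stream_with_watchdog(iter(["Hel", "lo", " world"]), heartbeat):
    heartbeat[0] = time.time()  # keep the stream alive
    print(piece, end="")
```
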
- return response - - - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 单线程方法 - 函数的说明请见 request_llm/bridge_all.py - """ - chatbot.append((inputs, "")) - - global rwkv_glm_handle - if rwkv_glm_handle is None: - rwkv_glm_handle = GetGLMHandle() - chatbot[-1] = (inputs, load_message + "\n\n" + rwkv_glm_handle.info) - yield from update_ui(chatbot=chatbot, history=[]) - if not rwkv_glm_handle.success: - rwkv_glm_handle = None - return - - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"] - - # 处理历史信息 - history_feedin = [] - for i in range(len(history)//2): - history_feedin.append([history[2*i], history[2*i+1]] ) - - # 开始接收jittorllms的回复 - response = "[Local Message]: 等待jittorllms响应中 ..." - for response in rwkv_glm_handle.stream_chat(query=inputs, history=history_feedin, system_prompt=system_prompt, max_length=llm_kwargs['max_length'], top_p=llm_kwargs['top_p'], temperature=llm_kwargs['temperature']): - chatbot[-1] = (inputs, response) - yield from update_ui(chatbot=chatbot, history=history) - - # 总结输出 - if response == "[Local Message]: 等待jittorllms响应中 ...": - response = "[Local Message]: jittorllms响应异常 ..." - history.extend([inputs, response]) - yield from update_ui(chatbot=chatbot, history=history) diff --git a/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/archs/rrdbnet_arch.py b/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/archs/rrdbnet_arch.py deleted file mode 100644 index 49a2d6c204557cba53ada7550deb587541855cfb..0000000000000000000000000000000000000000 --- a/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/archs/rrdbnet_arch.py +++ /dev/null @@ -1,119 +0,0 @@ -import torch -from torch import nn as nn -from torch.nn import functional as F - -from basicsr.utils.registry import ARCH_REGISTRY -from .arch_util import default_init_weights, make_layer, pixel_unshuffle - - -class ResidualDenseBlock(nn.Module): - """Residual Dense Block. - - Used in RRDB block in ESRGAN. - - Args: - num_feat (int): Channel number of intermediate features. - num_grow_ch (int): Channels for each growth. - """ - - def __init__(self, num_feat=64, num_grow_ch=32): - super(ResidualDenseBlock, self).__init__() - self.conv1 = nn.Conv2d(num_feat, num_grow_ch, 3, 1, 1) - self.conv2 = nn.Conv2d(num_feat + num_grow_ch, num_grow_ch, 3, 1, 1) - self.conv3 = nn.Conv2d(num_feat + 2 * num_grow_ch, num_grow_ch, 3, 1, 1) - self.conv4 = nn.Conv2d(num_feat + 3 * num_grow_ch, num_grow_ch, 3, 1, 1) - self.conv5 = nn.Conv2d(num_feat + 4 * num_grow_ch, num_feat, 3, 1, 1) - - self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True) - - # initialization - default_init_weights([self.conv1, self.conv2, self.conv3, self.conv4, self.conv5], 0.1) - - def forward(self, x): - x1 = self.lrelu(self.conv1(x)) - x2 = self.lrelu(self.conv2(torch.cat((x, x1), 1))) - x3 = self.lrelu(self.conv3(torch.cat((x, x1, x2), 1))) - x4 = self.lrelu(self.conv4(torch.cat((x, x1, x2, x3), 1))) - x5 = self.conv5(torch.cat((x, x1, x2, x3, x4), 1)) - # Emperically, we use 0.2 to scale the residual for better performance - return x5 * 0.2 + x - - -class RRDB(nn.Module): - """Residual in Residual Dense Block. 
- - Used in RRDB-Net in ESRGAN. - - Args: - num_feat (int): Channel number of intermediate features. - num_grow_ch (int): Channels for each growth. - """ - - def __init__(self, num_feat, num_grow_ch=32): - super(RRDB, self).__init__() - self.rdb1 = ResidualDenseBlock(num_feat, num_grow_ch) - self.rdb2 = ResidualDenseBlock(num_feat, num_grow_ch) - self.rdb3 = ResidualDenseBlock(num_feat, num_grow_ch) - - def forward(self, x): - out = self.rdb1(x) - out = self.rdb2(out) - out = self.rdb3(out) - # Emperically, we use 0.2 to scale the residual for better performance - return out * 0.2 + x - - -@ARCH_REGISTRY.register() -class RRDBNet(nn.Module): - """Networks consisting of Residual in Residual Dense Block, which is used - in ESRGAN. - - ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks. - - We extend ESRGAN for scale x2 and scale x1. - Note: This is one option for scale 1, scale 2 in RRDBNet. - We first employ the pixel-unshuffle (an inverse operation of pixelshuffle to reduce the spatial size - and enlarge the channel size before feeding inputs into the main ESRGAN architecture. - - Args: - num_in_ch (int): Channel number of inputs. - num_out_ch (int): Channel number of outputs. - num_feat (int): Channel number of intermediate features. - Default: 64 - num_block (int): Block number in the trunk network. Defaults: 23 - num_grow_ch (int): Channels for each growth. Default: 32. - """ - - def __init__(self, num_in_ch, num_out_ch, scale=4, num_feat=64, num_block=23, num_grow_ch=32): - super(RRDBNet, self).__init__() - self.scale = scale - if scale == 2: - num_in_ch = num_in_ch * 4 - elif scale == 1: - num_in_ch = num_in_ch * 16 - self.conv_first = nn.Conv2d(num_in_ch, num_feat, 3, 1, 1) - self.body = make_layer(RRDB, num_block, num_feat=num_feat, num_grow_ch=num_grow_ch) - self.conv_body = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - # upsample - self.conv_up1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - self.conv_up2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - self.conv_hr = nn.Conv2d(num_feat, num_feat, 3, 1, 1) - self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1) - - self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True) - - def forward(self, x): - if self.scale == 2: - feat = pixel_unshuffle(x, scale=2) - elif self.scale == 1: - feat = pixel_unshuffle(x, scale=4) - else: - feat = x - feat = self.conv_first(feat) - body_feat = self.conv_body(self.body(feat)) - feat = feat + body_feat - # upsample - feat = self.lrelu(self.conv_up1(F.interpolate(feat, scale_factor=2, mode='nearest'))) - feat = self.lrelu(self.conv_up2(F.interpolate(feat, scale_factor=2, mode='nearest'))) - out = self.conv_last(self.lrelu(self.conv_hr(feat))) - return out \ No newline at end of file diff --git a/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/archs/vqgan_arch.py b/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/archs/vqgan_arch.py deleted file mode 100644 index f6dfcf4c9983b431f0a978701e5ddd9598faf381..0000000000000000000000000000000000000000 --- a/spaces/dawood17/SayBot_Enchancer/CodeFormer/basicsr/archs/vqgan_arch.py +++ /dev/null @@ -1,435 +0,0 @@ -''' -VQGAN code, adapted from the original created by the Unleashing Transformers authors: -https://github.com/samb-t/unleashing-transformers/blob/master/models/vqgan.py - -''' -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -import copy -from basicsr.utils import get_root_logger -from basicsr.utils.registry import ARCH_REGISTRY - -def normalize(in_channels): - return 
torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - - -@torch.jit.script -def swish(x): - return x*torch.sigmoid(x) - - -# Define VQVAE classes -class VectorQuantizer(nn.Module): - def __init__(self, codebook_size, emb_dim, beta): - super(VectorQuantizer, self).__init__() - self.codebook_size = codebook_size # number of embeddings - self.emb_dim = emb_dim # dimension of embedding - self.beta = beta # commitment cost used in loss term, beta * ||z_e(x)-sg[e]||^2 - self.embedding = nn.Embedding(self.codebook_size, self.emb_dim) - self.embedding.weight.data.uniform_(-1.0 / self.codebook_size, 1.0 / self.codebook_size) - - def forward(self, z): - # reshape z -> (batch, height, width, channel) and flatten - z = z.permute(0, 2, 3, 1).contiguous() - z_flattened = z.view(-1, self.emb_dim) - - # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z - d = (z_flattened ** 2).sum(dim=1, keepdim=True) + (self.embedding.weight**2).sum(1) - \ - 2 * torch.matmul(z_flattened, self.embedding.weight.t()) - - mean_distance = torch.mean(d) - # find closest encodings - # min_encoding_indices = torch.argmin(d, dim=1).unsqueeze(1) - min_encoding_scores, min_encoding_indices = torch.topk(d, 1, dim=1, largest=False) - # [0-1], higher score, higher confidence - min_encoding_scores = torch.exp(-min_encoding_scores/10) - - min_encodings = torch.zeros(min_encoding_indices.shape[0], self.codebook_size).to(z) - min_encodings.scatter_(1, min_encoding_indices, 1) - - # get quantized latent vectors - z_q = torch.matmul(min_encodings, self.embedding.weight).view(z.shape) - # compute loss for embedding - loss = torch.mean((z_q.detach()-z)**2) + self.beta * torch.mean((z_q - z.detach()) ** 2) - # preserve gradients - z_q = z + (z_q - z).detach() - - # perplexity - e_mean = torch.mean(min_encodings, dim=0) - perplexity = torch.exp(-torch.sum(e_mean * torch.log(e_mean + 1e-10))) - # reshape back to match original input shape - z_q = z_q.permute(0, 3, 1, 2).contiguous() - - return z_q, loss, { - "perplexity": perplexity, - "min_encodings": min_encodings, - "min_encoding_indices": min_encoding_indices, - "min_encoding_scores": min_encoding_scores, - "mean_distance": mean_distance - } - - def get_codebook_feat(self, indices, shape): - # input indices: batch*token_num -> (batch*token_num)*1 - # shape: batch, height, width, channel - indices = indices.view(-1,1) - min_encodings = torch.zeros(indices.shape[0], self.codebook_size).to(indices) - min_encodings.scatter_(1, indices, 1) - # get quantized latent vectors - z_q = torch.matmul(min_encodings.float(), self.embedding.weight) - - if shape is not None: # reshape back to match original input shape - z_q = z_q.view(shape).permute(0, 3, 1, 2).contiguous() - - return z_q - - -class GumbelQuantizer(nn.Module): - def __init__(self, codebook_size, emb_dim, num_hiddens, straight_through=False, kl_weight=5e-4, temp_init=1.0): - super().__init__() - self.codebook_size = codebook_size # number of embeddings - self.emb_dim = emb_dim # dimension of embedding - self.straight_through = straight_through - self.temperature = temp_init - self.kl_weight = kl_weight - self.proj = nn.Conv2d(num_hiddens, codebook_size, 1) # projects last encoder layer to quantized logits - self.embed = nn.Embedding(codebook_size, emb_dim) - - def forward(self, z): - hard = self.straight_through if self.training else True - - logits = self.proj(z) - - soft_one_hot = F.gumbel_softmax(logits, tau=self.temperature, dim=1, hard=hard) - - z_q = torch.einsum("b n h w, n d -> b d h 
w", soft_one_hot, self.embed.weight) - - # + kl divergence to the prior loss - qy = F.softmax(logits, dim=1) - diff = self.kl_weight * torch.sum(qy * torch.log(qy * self.codebook_size + 1e-10), dim=1).mean() - min_encoding_indices = soft_one_hot.argmax(dim=1) - - return z_q, diff, { - "min_encoding_indices": min_encoding_indices - } - - -class Downsample(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.conv = torch.nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=2, padding=0) - - def forward(self, x): - pad = (0, 1, 0, 1) - x = torch.nn.functional.pad(x, pad, mode="constant", value=0) - x = self.conv(x) - return x - - -class Upsample(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.conv = nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=1, padding=1) - - def forward(self, x): - x = F.interpolate(x, scale_factor=2.0, mode="nearest") - x = self.conv(x) - - return x - - -class ResBlock(nn.Module): - def __init__(self, in_channels, out_channels=None): - super(ResBlock, self).__init__() - self.in_channels = in_channels - self.out_channels = in_channels if out_channels is None else out_channels - self.norm1 = normalize(in_channels) - self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1) - self.norm2 = normalize(out_channels) - self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1) - if self.in_channels != self.out_channels: - self.conv_out = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0) - - def forward(self, x_in): - x = x_in - x = self.norm1(x) - x = swish(x) - x = self.conv1(x) - x = self.norm2(x) - x = swish(x) - x = self.conv2(x) - if self.in_channels != self.out_channels: - x_in = self.conv_out(x_in) - - return x + x_in - - -class AttnBlock(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = normalize(in_channels) - self.q = torch.nn.Conv2d( - in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0 - ) - self.k = torch.nn.Conv2d( - in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0 - ) - self.v = torch.nn.Conv2d( - in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0 - ) - self.proj_out = torch.nn.Conv2d( - in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0 - ) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b, c, h, w = q.shape - q = q.reshape(b, c, h*w) - q = q.permute(0, 2, 1) - k = k.reshape(b, c, h*w) - w_ = torch.bmm(q, k) - w_ = w_ * (int(c)**(-0.5)) - w_ = F.softmax(w_, dim=2) - - # attend to values - v = v.reshape(b, c, h*w) - w_ = w_.permute(0, 2, 1) - h_ = torch.bmm(v, w_) - h_ = h_.reshape(b, c, h, w) - - h_ = self.proj_out(h_) - - return x+h_ - - -class Encoder(nn.Module): - def __init__(self, in_channels, nf, emb_dim, ch_mult, num_res_blocks, resolution, attn_resolutions): - super().__init__() - self.nf = nf - self.num_resolutions = len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.attn_resolutions = attn_resolutions - - curr_res = self.resolution - in_ch_mult = (1,)+tuple(ch_mult) - - blocks = [] - # initial convultion - blocks.append(nn.Conv2d(in_channels, nf, kernel_size=3, stride=1, padding=1)) - - # residual and downsampling blocks, with attention on smaller res (16x16) - for i in range(self.num_resolutions): - block_in_ch = nf * in_ch_mult[i] - block_out_ch = nf 
* ch_mult[i] - for _ in range(self.num_res_blocks): - blocks.append(ResBlock(block_in_ch, block_out_ch)) - block_in_ch = block_out_ch - if curr_res in attn_resolutions: - blocks.append(AttnBlock(block_in_ch)) - - if i != self.num_resolutions - 1: - blocks.append(Downsample(block_in_ch)) - curr_res = curr_res // 2 - - # non-local attention block - blocks.append(ResBlock(block_in_ch, block_in_ch)) - blocks.append(AttnBlock(block_in_ch)) - blocks.append(ResBlock(block_in_ch, block_in_ch)) - - # normalise and convert to latent size - blocks.append(normalize(block_in_ch)) - blocks.append(nn.Conv2d(block_in_ch, emb_dim, kernel_size=3, stride=1, padding=1)) - self.blocks = nn.ModuleList(blocks) - - def forward(self, x): - for block in self.blocks: - x = block(x) - - return x - - -class Generator(nn.Module): - def __init__(self, nf, emb_dim, ch_mult, res_blocks, img_size, attn_resolutions): - super().__init__() - self.nf = nf - self.ch_mult = ch_mult - self.num_resolutions = len(self.ch_mult) - self.num_res_blocks = res_blocks - self.resolution = img_size - self.attn_resolutions = attn_resolutions - self.in_channels = emb_dim - self.out_channels = 3 - block_in_ch = self.nf * self.ch_mult[-1] - curr_res = self.resolution // 2 ** (self.num_resolutions-1) - - blocks = [] - # initial conv - blocks.append(nn.Conv2d(self.in_channels, block_in_ch, kernel_size=3, stride=1, padding=1)) - - # non-local attention block - blocks.append(ResBlock(block_in_ch, block_in_ch)) - blocks.append(AttnBlock(block_in_ch)) - blocks.append(ResBlock(block_in_ch, block_in_ch)) - - for i in reversed(range(self.num_resolutions)): - block_out_ch = self.nf * self.ch_mult[i] - - for _ in range(self.num_res_blocks): - blocks.append(ResBlock(block_in_ch, block_out_ch)) - block_in_ch = block_out_ch - - if curr_res in self.attn_resolutions: - blocks.append(AttnBlock(block_in_ch)) - - if i != 0: - blocks.append(Upsample(block_in_ch)) - curr_res = curr_res * 2 - - blocks.append(normalize(block_in_ch)) - blocks.append(nn.Conv2d(block_in_ch, self.out_channels, kernel_size=3, stride=1, padding=1)) - - self.blocks = nn.ModuleList(blocks) - - - def forward(self, x): - for block in self.blocks: - x = block(x) - - return x - - -@ARCH_REGISTRY.register() -class VQAutoEncoder(nn.Module): - def __init__(self, img_size, nf, ch_mult, quantizer="nearest", res_blocks=2, attn_resolutions=[16], codebook_size=1024, emb_dim=256, - beta=0.25, gumbel_straight_through=False, gumbel_kl_weight=1e-8, model_path=None): - super().__init__() - logger = get_root_logger() - self.in_channels = 3 - self.nf = nf - self.n_blocks = res_blocks - self.codebook_size = codebook_size - self.embed_dim = emb_dim - self.ch_mult = ch_mult - self.resolution = img_size - self.attn_resolutions = attn_resolutions - self.quantizer_type = quantizer - self.encoder = Encoder( - self.in_channels, - self.nf, - self.embed_dim, - self.ch_mult, - self.n_blocks, - self.resolution, - self.attn_resolutions - ) - if self.quantizer_type == "nearest": - self.beta = beta #0.25 - self.quantize = VectorQuantizer(self.codebook_size, self.embed_dim, self.beta) - elif self.quantizer_type == "gumbel": - self.gumbel_num_hiddens = emb_dim - self.straight_through = gumbel_straight_through - self.kl_weight = gumbel_kl_weight - self.quantize = GumbelQuantizer( - self.codebook_size, - self.embed_dim, - self.gumbel_num_hiddens, - self.straight_through, - self.kl_weight - ) - self.generator = Generator( - self.nf, - self.embed_dim, - self.ch_mult, - self.n_blocks, - self.resolution, - self.attn_resolutions - ) 
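
`VQAutoEncoder` wires these three pieces together: the `Encoder` produces latents, the quantizer snaps them to the nearest codebook entry, and the `Generator` decodes the quantized latents back to an image. A minimal sketch of that quantization step, mirroring the `||z - e||^2 = ||z||^2 + ||e||^2 - 2*z.e` distance trick and the straight-through gradient used by `VectorQuantizer` above (tensor shapes and the beta value are illustrative assumptions):

```python
import torch

def quantize(z_flat, codebook, beta=0.25):
    """Nearest-neighbour VQ with a straight-through gradient estimator.

    z_flat: (N, D) latent vectors, codebook: (K, D) embedding table.
    """
    # squared distance to every codebook entry without forming (N, K, D) tensors
    d = (z_flat ** 2).sum(1, keepdim=True) \
        + (codebook ** 2).sum(1) \
        - 2 * z_flat @ codebook.t()
    idx = d.argmin(dim=1)
    z_q = codebook[idx]
    # codebook loss + beta-weighted commitment loss, then copy gradients straight through
    loss = ((z_q.detach() - z_flat) ** 2).mean() + beta * ((z_q - z_flat.detach()) ** 2).mean()
    z_q = z_flat + (z_q - z_flat).detach()
    return z_q, loss, idx

z = torch.randn(8, 4)
emb = torch.randn(16, 4)
z_q, loss, idx = quantize(z, emb)
```
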
- - if model_path is not None: - chkpt = torch.load(model_path, map_location='cpu') - if 'params_ema' in chkpt: - self.load_state_dict(torch.load(model_path, map_location='cpu')['params_ema']) - logger.info(f'vqgan is loaded from: {model_path} [params_ema]') - elif 'params' in chkpt: - self.load_state_dict(torch.load(model_path, map_location='cpu')['params']) - logger.info(f'vqgan is loaded from: {model_path} [params]') - else: - raise ValueError(f'Wrong params!') - - - def forward(self, x): - x = self.encoder(x) - quant, codebook_loss, quant_stats = self.quantize(x) - x = self.generator(quant) - return x, codebook_loss, quant_stats - - - -# patch based discriminator -@ARCH_REGISTRY.register() -class VQGANDiscriminator(nn.Module): - def __init__(self, nc=3, ndf=64, n_layers=4, model_path=None): - super().__init__() - - layers = [nn.Conv2d(nc, ndf, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2, True)] - ndf_mult = 1 - ndf_mult_prev = 1 - for n in range(1, n_layers): # gradually increase the number of filters - ndf_mult_prev = ndf_mult - ndf_mult = min(2 ** n, 8) - layers += [ - nn.Conv2d(ndf * ndf_mult_prev, ndf * ndf_mult, kernel_size=4, stride=2, padding=1, bias=False), - nn.BatchNorm2d(ndf * ndf_mult), - nn.LeakyReLU(0.2, True) - ] - - ndf_mult_prev = ndf_mult - ndf_mult = min(2 ** n_layers, 8) - - layers += [ - nn.Conv2d(ndf * ndf_mult_prev, ndf * ndf_mult, kernel_size=4, stride=1, padding=1, bias=False), - nn.BatchNorm2d(ndf * ndf_mult), - nn.LeakyReLU(0.2, True) - ] - - layers += [ - nn.Conv2d(ndf * ndf_mult, 1, kernel_size=4, stride=1, padding=1)] # output 1 channel prediction map - self.main = nn.Sequential(*layers) - - if model_path is not None: - chkpt = torch.load(model_path, map_location='cpu') - if 'params_d' in chkpt: - self.load_state_dict(torch.load(model_path, map_location='cpu')['params_d']) - elif 'params' in chkpt: - self.load_state_dict(torch.load(model_path, map_location='cpu')['params']) - else: - raise ValueError(f'Wrong params!') - - def forward(self, x): - return self.main(x) \ No newline at end of file diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/utils/display.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/utils/display.py deleted file mode 100644 index 730ca65347ad348964b7aa8c78aa16517c63bd4a..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/altair/utils/display.py +++ /dev/null @@ -1,186 +0,0 @@ -import json -import pkgutil -import textwrap -from typing import Callable, Dict, Optional -import uuid - -from .plugin_registry import PluginRegistry -from .mimebundle import spec_to_mimebundle -from .schemapi import validate_jsonschema - - -# ============================================================================== -# Renderer registry -# ============================================================================== -MimeBundleType = Dict[str, object] -RendererType = Callable[..., MimeBundleType] - - -class RendererRegistry(PluginRegistry[RendererType]): - entrypoint_err_messages = { - "notebook": textwrap.dedent( - """ - To use the 'notebook' renderer, you must install the vega package - and the associated Jupyter extension. - See https://altair-viz.github.io/getting_started/installation.html - for more information. 
- """ - ), - "altair_viewer": textwrap.dedent( - """ - To use the 'altair_viewer' renderer, you must install the altair_viewer - package; see http://github.com/altair-viz/altair_viewer/ - for more information. - """ - ), - } - - def set_embed_options( - self, - defaultStyle=None, - renderer=None, - width=None, - height=None, - padding=None, - scaleFactor=None, - actions=None, - **kwargs, - ): - """Set options for embeddings of Vega & Vega-Lite charts. - - Options are fully documented at https://github.com/vega/vega-embed. - Similar to the `enable()` method, this can be used as either - a persistent global switch, or as a temporary local setting using - a context manager (i.e. a `with` statement). - - Parameters - ---------- - defaultStyle : bool or string - Specify a default stylesheet for embed actions. - renderer : string - The renderer to use for the view. One of "canvas" (default) or "svg" - width : integer - The view width in pixels - height : integer - The view height in pixels - padding : integer - The view padding in pixels - scaleFactor : number - The number by which to multiply the width and height (default 1) - of an exported PNG or SVG image. - actions : bool or dict - Determines if action links ("Export as PNG/SVG", "View Source", - "View Vega" (only for Vega-Lite), "Open in Vega Editor") are - included with the embedded view. If the value is true, all action - links will be shown and none if the value is false. This property - can take a key-value mapping object that maps keys (export, source, - compiled, editor) to boolean values for determining if - each action link should be shown. - **kwargs : - Additional options are passed directly to embed options. - """ - options = { - "defaultStyle": defaultStyle, - "renderer": renderer, - "width": width, - "height": height, - "padding": padding, - "scaleFactor": scaleFactor, - "actions": actions, - } - kwargs.update({key: val for key, val in options.items() if val is not None}) - return self.enable(None, embed_options=kwargs) - - -# ============================================================================== -# VegaLite v1/v2 renderer logic -# ============================================================================== - - -class Displayable: - """A base display class for VegaLite v1/v2. - - This class takes a VegaLite v1/v2 spec and does the following: - - 1. Optionally validates the spec against a schema. - 2. Uses the RendererPlugin to grab a renderer and call it when the - IPython/Jupyter display method (_repr_mimebundle_) is called. - - The spec passed to this class must be fully schema compliant and already - have the data portion of the spec fully processed and ready to serialize. - In practice, this means, the data portion of the spec should have been passed - through appropriate data model transformers. 
- """ - - renderers: Optional[RendererRegistry] = None - schema_path = ("altair", "") - - def __init__(self, spec, validate=False): - # type: (dict, bool) -> None - self.spec = spec - self.validate = validate - self._validate() - - def _validate(self): - # type: () -> None - """Validate the spec against the schema.""" - data = pkgutil.get_data(*self.schema_path) - assert data is not None - schema_dict = json.loads(data.decode("utf-8")) - validate_jsonschema( - self.spec, - schema_dict, - ) - - def _repr_mimebundle_(self, include=None, exclude=None): - """Return a MIME bundle for display in Jupyter frontends.""" - if self.renderers is not None: - return self.renderers.get()(self.spec) - else: - return {} - - -def default_renderer_base(spec, mime_type, str_repr, **options): - """A default renderer for Vega or VegaLite that works for modern frontends. - - This renderer works with modern frontends (JupyterLab, nteract) that know - how to render the custom VegaLite MIME type listed above. - """ - assert isinstance(spec, dict) - bundle = {} - metadata = {} - - bundle[mime_type] = spec - bundle["text/plain"] = str_repr - if options: - metadata[mime_type] = options - return bundle, metadata - - -def json_renderer_base(spec, str_repr, **options): - """A renderer that returns a MIME type of application/json. - - In JupyterLab/nteract this is rendered as a nice JSON tree. - """ - return default_renderer_base( - spec, mime_type="application/json", str_repr=str_repr, **options - ) - - -class HTMLRenderer: - """Object to render charts as HTML, with a unique output div each time""" - - def __init__(self, output_div="altair-viz-{}", **kwargs): - self._output_div = output_div - self.kwargs = kwargs - - @property - def output_div(self): - return self._output_div.format(uuid.uuid4().hex) - - def __call__(self, spec, **metadata): - kwargs = self.kwargs.copy() - kwargs.update(metadata) - return spec_to_mimebundle( - spec, format="html", output_div=self.output_div, **kwargs - ) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_m_e_t_a.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_m_e_t_a.py deleted file mode 100644 index 3af9e543049f89f0da3ceb15bb58135854fef002..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/ttLib/tables/_m_e_t_a.py +++ /dev/null @@ -1,104 +0,0 @@ -from fontTools.misc import sstruct -from fontTools.misc.textTools import bytesjoin, strjoin, readHex -from fontTools.ttLib import TTLibError -from . 
import DefaultTable - -# Apple's documentation of 'meta': -# https://developer.apple.com/fonts/TrueType-Reference-Manual/RM06/Chap6meta.html - -META_HEADER_FORMAT = """ - > # big endian - version: L - flags: L - dataOffset: L - numDataMaps: L -""" - - -DATA_MAP_FORMAT = """ - > # big endian - tag: 4s - dataOffset: L - dataLength: L -""" - - -class table__m_e_t_a(DefaultTable.DefaultTable): - def __init__(self, tag=None): - DefaultTable.DefaultTable.__init__(self, tag) - self.data = {} - - def decompile(self, data, ttFont): - headerSize = sstruct.calcsize(META_HEADER_FORMAT) - header = sstruct.unpack(META_HEADER_FORMAT, data[0:headerSize]) - if header["version"] != 1: - raise TTLibError("unsupported 'meta' version %d" % header["version"]) - dataMapSize = sstruct.calcsize(DATA_MAP_FORMAT) - for i in range(header["numDataMaps"]): - dataMapOffset = headerSize + i * dataMapSize - dataMap = sstruct.unpack( - DATA_MAP_FORMAT, data[dataMapOffset : dataMapOffset + dataMapSize] - ) - tag = dataMap["tag"] - offset = dataMap["dataOffset"] - self.data[tag] = data[offset : offset + dataMap["dataLength"]] - if tag in ["dlng", "slng"]: - self.data[tag] = self.data[tag].decode("utf-8") - - def compile(self, ttFont): - keys = sorted(self.data.keys()) - headerSize = sstruct.calcsize(META_HEADER_FORMAT) - dataOffset = headerSize + len(keys) * sstruct.calcsize(DATA_MAP_FORMAT) - header = sstruct.pack( - META_HEADER_FORMAT, - { - "version": 1, - "flags": 0, - "dataOffset": dataOffset, - "numDataMaps": len(keys), - }, - ) - dataMaps = [] - dataBlocks = [] - for tag in keys: - if tag in ["dlng", "slng"]: - data = self.data[tag].encode("utf-8") - else: - data = self.data[tag] - dataMaps.append( - sstruct.pack( - DATA_MAP_FORMAT, - {"tag": tag, "dataOffset": dataOffset, "dataLength": len(data)}, - ) - ) - dataBlocks.append(data) - dataOffset += len(data) - return bytesjoin([header] + dataMaps + dataBlocks) - - def toXML(self, writer, ttFont): - for tag in sorted(self.data.keys()): - if tag in ["dlng", "slng"]: - writer.begintag("text", tag=tag) - writer.newline() - writer.write(self.data[tag]) - writer.newline() - writer.endtag("text") - writer.newline() - else: - writer.begintag("hexdata", tag=tag) - writer.newline() - data = self.data[tag] - if min(data) >= 0x20 and max(data) <= 0x7E: - writer.comment("ascii: " + data.decode("ascii")) - writer.newline() - writer.dumphex(data) - writer.endtag("hexdata") - writer.newline() - - def fromXML(self, name, attrs, content, ttFont): - if name == "hexdata": - self.data[attrs["tag"]] = readHex(content) - elif name == "text" and attrs["tag"] in ["dlng", "slng"]: - self.data[attrs["tag"]] = strjoin(content).strip() - else: - raise TTLibError("can't handle '%s' element" % name) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-9330f92f.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-9330f92f.js deleted file mode 100644 index cdad05672b324dd783e2629ef8da89acad741322..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-9330f92f.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as c,e as m,s as g,a9 as b,m as f,g as r,K as o,Y as d,h as v,j as p,ab as h,ac as w,ad as y,w as j,u as k,k as G}from"./index-39fce9e2.js";function C(n){let s,l,u,i;const 
_=n[4].default,a=b(_,n,n[3],null);return{c(){s=f("div"),l=f("div"),a&&a.c(),r(l,"class","styler svelte-iyf88w"),o(l,"--block-radius","0px"),o(l,"--block-border-width","0px"),o(l,"--layout-gap","1px"),o(l,"--form-gap-width","1px"),o(l,"--button-border-width","0px"),o(l,"--button-large-radius","0px"),o(l,"--button-small-radius","0px"),r(s,"id",n[0]),r(s,"class",u="gr-group "+n[1].join(" ")+" svelte-iyf88w"),d(s,"hide",!n[2])},m(e,t){v(e,s,t),p(s,l),a&&a.m(l,null),i=!0},p(e,[t]){a&&a.p&&(!i||t&8)&&h(a,_,e,e[3],i?y(_,e[3],t,null):w(e[3]),null),(!i||t&1)&&r(s,"id",e[0]),(!i||t&2&&u!==(u="gr-group "+e[1].join(" ")+" svelte-iyf88w"))&&r(s,"class",u),(!i||t&6)&&d(s,"hide",!e[2])},i(e){i||(j(a,e),i=!0)},o(e){k(a,e),i=!1},d(e){e&&G(s),a&&a.d(e)}}}function S(n,s,l){let{$$slots:u={},$$scope:i}=s,{elem_id:_=""}=s,{elem_classes:a=[]}=s,{visible:e=!0}=s;return n.$$set=t=>{"elem_id"in t&&l(0,_=t.elem_id),"elem_classes"in t&&l(1,a=t.elem_classes),"visible"in t&&l(2,e=t.visible),"$$scope"in t&&l(3,i=t.$$scope)},[_,a,e,i,u]}class q extends c{constructor(s){super(),m(this,s,S,C,g,{elem_id:0,elem_classes:1,visible:2})}}const Y=q,z=["static"];export{Y as Component,z as modes}; -//# sourceMappingURL=index-9330f92f.js.map diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/tests/util.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/tests/util.py deleted file mode 100644 index ce0e6fadce5e2e7eb696169233e23757d84127ef..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/tests/util.py +++ /dev/null @@ -1,179 +0,0 @@ -import abc -import importlib -import io -import sys -import types -import pathlib - -from . import data01 -from . import zipdata01 -from ..abc import ResourceReader -from ._compat import import_helper - - -from importlib.machinery import ModuleSpec - - -class Reader(ResourceReader): - def __init__(self, **kwargs): - vars(self).update(kwargs) - - def get_resource_reader(self, package): - return self - - def open_resource(self, path): - self._path = path - if isinstance(self.file, Exception): - raise self.file - return self.file - - def resource_path(self, path_): - self._path = path_ - if isinstance(self.path, Exception): - raise self.path - return self.path - - def is_resource(self, path_): - self._path = path_ - if isinstance(self.path, Exception): - raise self.path - - def part(entry): - return entry.split('/') - - return any( - len(parts) == 1 and parts[0] == path_ for parts in map(part, self._contents) - ) - - def contents(self): - if isinstance(self.path, Exception): - raise self.path - yield from self._contents - - -def create_package_from_loader(loader, is_package=True): - name = 'testingpackage' - module = types.ModuleType(name) - spec = ModuleSpec(name, loader, origin='does-not-exist', is_package=is_package) - module.__spec__ = spec - module.__loader__ = loader - return module - - -def create_package(file=None, path=None, is_package=True, contents=()): - return create_package_from_loader( - Reader(file=file, path=path, _contents=contents), - is_package, - ) - - -class CommonTests(metaclass=abc.ABCMeta): - """ - Tests shared by test_open, test_path, and test_read. - """ - - @abc.abstractmethod - def execute(self, package, path): - """ - Call the pertinent legacy API function (e.g. open_text, path) - on package and path. 
- """ - - def test_package_name(self): - """ - Passing in the package name should succeed. - """ - self.execute(data01.__name__, 'utf-8.file') - - def test_package_object(self): - """ - Passing in the package itself should succeed. - """ - self.execute(data01, 'utf-8.file') - - def test_string_path(self): - """ - Passing in a string for the path should succeed. - """ - path = 'utf-8.file' - self.execute(data01, path) - - def test_pathlib_path(self): - """ - Passing in a pathlib.PurePath object for the path should succeed. - """ - path = pathlib.PurePath('utf-8.file') - self.execute(data01, path) - - def test_importing_module_as_side_effect(self): - """ - The anchor package can already be imported. - """ - del sys.modules[data01.__name__] - self.execute(data01.__name__, 'utf-8.file') - - def test_missing_path(self): - """ - Attempting to open or read or request the path for a - non-existent path should succeed if open_resource - can return a viable data stream. - """ - bytes_data = io.BytesIO(b'Hello, world!') - package = create_package(file=bytes_data, path=FileNotFoundError()) - self.execute(package, 'utf-8.file') - self.assertEqual(package.__loader__._path, 'utf-8.file') - - def test_extant_path(self): - # Attempting to open or read or request the path when the - # path does exist should still succeed. Does not assert - # anything about the result. - bytes_data = io.BytesIO(b'Hello, world!') - # any path that exists - path = __file__ - package = create_package(file=bytes_data, path=path) - self.execute(package, 'utf-8.file') - self.assertEqual(package.__loader__._path, 'utf-8.file') - - def test_useless_loader(self): - package = create_package(file=FileNotFoundError(), path=FileNotFoundError()) - with self.assertRaises(FileNotFoundError): - self.execute(package, 'utf-8.file') - - -class ZipSetupBase: - ZIP_MODULE = None - - @classmethod - def setUpClass(cls): - data_path = pathlib.Path(cls.ZIP_MODULE.__file__) - data_dir = data_path.parent - cls._zip_path = str(data_dir / 'ziptestdata.zip') - sys.path.append(cls._zip_path) - cls.data = importlib.import_module('ziptestdata') - - @classmethod - def tearDownClass(cls): - try: - sys.path.remove(cls._zip_path) - except ValueError: - pass - - try: - del sys.path_importer_cache[cls._zip_path] - del sys.modules[cls.data.__name__] - except KeyError: - pass - - try: - del cls.data - del cls._zip_path - except AttributeError: - pass - - def setUp(self): - modules = import_helper.modules_setup() - self.addCleanup(import_helper.modules_cleanup, *modules) - - -class ZipSetup(ZipSetupBase): - ZIP_MODULE = zipdata01 # type: ignore diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/helpers/parse_link_destination.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/helpers/parse_link_destination.py deleted file mode 100644 index f42b2244db531bab0606607820ab165a161ce255..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/helpers/parse_link_destination.py +++ /dev/null @@ -1,86 +0,0 @@ -""" -Parse link destination -""" - -from ..common.utils import charCodeAt, unescapeAll - - -class _Result: - __slots__ = ("ok", "pos", "lines", "str") - - def __init__(self) -> None: - self.ok = False - self.pos = 0 - self.lines = 0 - self.str = "" - - -def parseLinkDestination(string: str, pos: int, maximum: int) -> _Result: - lines = 0 - start = pos - result = _Result() - - if charCodeAt(string, pos) 
== 0x3C: # /* < */ - pos += 1 - while pos < maximum: - code = charCodeAt(string, pos) - if code == 0x0A: # /* \n */) - return result - if code == 0x3C: # / * < * / - return result - if code == 0x3E: # /* > */) { - result.pos = pos + 1 - result.str = unescapeAll(string[start + 1 : pos]) - result.ok = True - return result - - if code == 0x5C and pos + 1 < maximum: # \ - pos += 2 - continue - - pos += 1 - - # no closing '>' - return result - - # this should be ... } else { ... branch - - level = 0 - while pos < maximum: - code = charCodeAt(string, pos) - - if code is None or code == 0x20: - break - - # ascii control characters - if code < 0x20 or code == 0x7F: - break - - if code == 0x5C and pos + 1 < maximum: - if charCodeAt(string, pos + 1) == 0x20: - break - pos += 2 - continue - - if code == 0x28: # /* ( */) - level += 1 - if level > 32: - return result - - if code == 0x29: # /* ) */) - if level == 0: - break - level -= 1 - - pos += 1 - - if start == pos: - return result - if level != 0: - return result - - result.str = unescapeAll(string[start:pos]) - result.lines = lines - result.pos = pos - result.ok = True - return result diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/artist.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/artist.py deleted file mode 100644 index 6ee018e857771b8bbb8bd2f6c8410f737b92f00a..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/matplotlib/artist.py +++ /dev/null @@ -1,1864 +0,0 @@ -from collections import namedtuple -import contextlib -from functools import lru_cache, wraps -import inspect -from inspect import Signature, Parameter -import logging -from numbers import Number -import re -import warnings - -import numpy as np - -import matplotlib as mpl -from . import _api, cbook -from .colors import BoundaryNorm -from .cm import ScalarMappable -from .path import Path -from .transforms import (Bbox, IdentityTransform, Transform, TransformedBbox, - TransformedPatchPath, TransformedPath) - -_log = logging.getLogger(__name__) - - -def _prevent_rasterization(draw): - # We assume that by default artists are not allowed to rasterize (unless - # its draw method is explicitly decorated). If it is being drawn after a - # rasterized artist and it has reached a raster_depth of 0, we stop - # rasterization so that it does not affect the behavior of normal artist - # (e.g., change in dpi). - - @wraps(draw) - def draw_wrapper(artist, renderer, *args, **kwargs): - if renderer._raster_depth == 0 and renderer._rasterizing: - # Only stop when we are not in a rasterized parent - # and something has been rasterized since last stop. - renderer.stop_rasterizing() - renderer._rasterizing = False - - return draw(artist, renderer, *args, **kwargs) - - draw_wrapper._supports_rasterization = False - return draw_wrapper - - -def allow_rasterization(draw): - """ - Decorator for Artist.draw method. Provides routines - that run before and after the draw call. The before and after functions - are useful for changing artist-dependent renderer attributes or making - other setup function calls, such as starting and flushing a mixed-mode - renderer. 
- """ - - @wraps(draw) - def draw_wrapper(artist, renderer): - try: - if artist.get_rasterized(): - if renderer._raster_depth == 0 and not renderer._rasterizing: - renderer.start_rasterizing() - renderer._rasterizing = True - renderer._raster_depth += 1 - else: - if renderer._raster_depth == 0 and renderer._rasterizing: - # Only stop when we are not in a rasterized parent - # and something has be rasterized since last stop - renderer.stop_rasterizing() - renderer._rasterizing = False - - if artist.get_agg_filter() is not None: - renderer.start_filter() - - return draw(artist, renderer) - finally: - if artist.get_agg_filter() is not None: - renderer.stop_filter(artist.get_agg_filter()) - if artist.get_rasterized(): - renderer._raster_depth -= 1 - if (renderer._rasterizing and artist.figure and - artist.figure.suppressComposite): - # restart rasterizing to prevent merging - renderer.stop_rasterizing() - renderer.start_rasterizing() - - draw_wrapper._supports_rasterization = True - return draw_wrapper - - -def _finalize_rasterization(draw): - """ - Decorator for Artist.draw method. Needed on the outermost artist, i.e. - Figure, to finish up if the render is still in rasterized mode. - """ - @wraps(draw) - def draw_wrapper(artist, renderer, *args, **kwargs): - result = draw(artist, renderer, *args, **kwargs) - if renderer._rasterizing: - renderer.stop_rasterizing() - renderer._rasterizing = False - return result - return draw_wrapper - - -def _stale_axes_callback(self, val): - if self.axes: - self.axes.stale = val - - -_XYPair = namedtuple("_XYPair", "x y") - - -class _Unset: - def __repr__(self): - return "" -_UNSET = _Unset() - - -class Artist: - """ - Abstract base class for objects that render into a FigureCanvas. - - Typically, all visible elements in a figure are subclasses of Artist. - """ - - zorder = 0 - - def __init_subclass__(cls): - - # Decorate draw() method so that all artists are able to stop - # rastrization when necessary. If the artist's draw method is already - # decorated (has a `_supports_rasterization` attribute), it won't be - # decorated. - - if not hasattr(cls.draw, "_supports_rasterization"): - cls.draw = _prevent_rasterization(cls.draw) - - # Inject custom set() methods into the subclass with signature and - # docstring based on the subclasses' properties. - - if not hasattr(cls.set, '_autogenerated_signature'): - # Don't overwrite cls.set if the subclass or one of its parents - # has defined a set method set itself. - # If there was no explicit definition, cls.set is inherited from - # the hierarchy of auto-generated set methods, which hold the - # flag _autogenerated_signature. - return - - cls.set = lambda self, **kwargs: Artist.set(self, **kwargs) - cls.set.__name__ = "set" - cls.set.__qualname__ = f"{cls.__qualname__}.set" - cls._update_set_signature_and_docstring() - - _PROPERTIES_EXCLUDED_FROM_SET = [ - 'navigate_mode', # not a user-facing function - 'figure', # changing the figure is such a profound operation - # that we don't want this in set() - '3d_properties', # cannot be used as a keyword due to leading digit - ] - - @classmethod - def _update_set_signature_and_docstring(cls): - """ - Update the signature of the set function to list all properties - as keyword arguments. - - Property aliases are not listed in the signature for brevity, but - are still accepted as keyword arguments. 
- """ - cls.set.__signature__ = Signature( - [Parameter("self", Parameter.POSITIONAL_OR_KEYWORD), - *[Parameter(prop, Parameter.KEYWORD_ONLY, default=_UNSET) - for prop in ArtistInspector(cls).get_setters() - if prop not in Artist._PROPERTIES_EXCLUDED_FROM_SET]]) - cls.set._autogenerated_signature = True - - cls.set.__doc__ = ( - "Set multiple properties at once.\n\n" - "Supported properties are\n\n" - + kwdoc(cls)) - - def __init__(self): - self._stale = True - self.stale_callback = None - self._axes = None - self.figure = None - - self._transform = None - self._transformSet = False - self._visible = True - self._animated = False - self._alpha = None - self.clipbox = None - self._clippath = None - self._clipon = True - self._label = '' - self._picker = None - self._rasterized = False - self._agg_filter = None - # Normally, artist classes need to be queried for mouseover info if and - # only if they override get_cursor_data. - self._mouseover = type(self).get_cursor_data != Artist.get_cursor_data - self._callbacks = cbook.CallbackRegistry(signals=["pchanged"]) - try: - self.axes = None - except AttributeError: - # Handle self.axes as a read-only property, as in Figure. - pass - self._remove_method = None - self._url = None - self._gid = None - self._snap = None - self._sketch = mpl.rcParams['path.sketch'] - self._path_effects = mpl.rcParams['path.effects'] - self._sticky_edges = _XYPair([], []) - self._in_layout = True - - def __getstate__(self): - d = self.__dict__.copy() - # remove the unpicklable remove method, this will get re-added on load - # (by the Axes) if the artist lives on an Axes. - d['stale_callback'] = None - return d - - def remove(self): - """ - Remove the artist from the figure if possible. - - The effect will not be visible until the figure is redrawn, e.g., - with `.FigureCanvasBase.draw_idle`. Call `~.axes.Axes.relim` to - update the axes limits if desired. - - Note: `~.axes.Axes.relim` will not see collections even if the - collection was added to the axes with *autolim* = True. - - Note: there is no support for removing the artist's legend entry. - """ - - # There is no method to set the callback. Instead, the parent should - # set the _remove_method attribute directly. This would be a - # protected attribute if Python supported that sort of thing. The - # callback has one parameter, which is the child to be removed. - if self._remove_method is not None: - self._remove_method(self) - # clear stale callback - self.stale_callback = None - _ax_flag = False - if hasattr(self, 'axes') and self.axes: - # remove from the mouse hit list - self.axes._mouseover_set.discard(self) - self.axes.stale = True - self.axes = None # decouple the artist from the Axes - _ax_flag = True - - if self.figure: - self.figure = None - if not _ax_flag: - self.figure = True - - else: - raise NotImplementedError('cannot remove artist') - # TODO: the fix for the collections relim problem is to move the - # limits calculation into the artist itself, including the property of - # whether or not the artist should affect the limits. Then there will - # be no distinction between axes.add_line, axes.add_patch, etc. - # TODO: add legend support - - def have_units(self): - """Return whether units are set on any axis.""" - ax = self.axes - return ax and any(axis.have_units() for axis in ax._axis_map.values()) - - def convert_xunits(self, x): - """ - Convert *x* using the unit type of the xaxis. - - If the artist is not contained in an Axes or if the xaxis does not - have units, *x* itself is returned. 
- """ - ax = getattr(self, 'axes', None) - if ax is None or ax.xaxis is None: - return x - return ax.xaxis.convert_units(x) - - def convert_yunits(self, y): - """ - Convert *y* using the unit type of the yaxis. - - If the artist is not contained in an Axes or if the yaxis does not - have units, *y* itself is returned. - """ - ax = getattr(self, 'axes', None) - if ax is None or ax.yaxis is None: - return y - return ax.yaxis.convert_units(y) - - @property - def axes(self): - """The `~.axes.Axes` instance the artist resides in, or *None*.""" - return self._axes - - @axes.setter - def axes(self, new_axes): - if (new_axes is not None and self._axes is not None - and new_axes != self._axes): - raise ValueError("Can not reset the axes. You are probably " - "trying to re-use an artist in more than one " - "Axes which is not supported") - self._axes = new_axes - if new_axes is not None and new_axes is not self: - self.stale_callback = _stale_axes_callback - - @property - def stale(self): - """ - Whether the artist is 'stale' and needs to be re-drawn for the output - to match the internal state of the artist. - """ - return self._stale - - @stale.setter - def stale(self, val): - self._stale = val - - # if the artist is animated it does not take normal part in the - # draw stack and is not expected to be drawn as part of the normal - # draw loop (when not saving) so do not propagate this change - if self.get_animated(): - return - - if val and self.stale_callback is not None: - self.stale_callback(self, val) - - def get_window_extent(self, renderer=None): - """ - Get the artist's bounding box in display space. - - The bounding box' width and height are nonnegative. - - Subclasses should override for inclusion in the bounding box - "tight" calculation. Default is to return an empty bounding - box at 0, 0. - - Be careful when using this function, the results will not update - if the artist window extent of the artist changes. The extent - can change due to any changes in the transform stack, such as - changing the axes limits, the figure size, or the canvas used - (as is done when saving a figure). This can lead to unexpected - behavior where interactive figures will look fine on the screen, - but will save incorrectly. - """ - return Bbox([[0, 0], [0, 0]]) - - def get_tightbbox(self, renderer=None): - """ - Like `.Artist.get_window_extent`, but includes any clipping. - - Parameters - ---------- - renderer : `.RendererBase` subclass - renderer that will be used to draw the figures (i.e. - ``fig.canvas.get_renderer()``) - - Returns - ------- - `.Bbox` - The enclosing bounding box (in figure pixel coordinates). - """ - bbox = self.get_window_extent(renderer) - if self.get_clip_on(): - clip_box = self.get_clip_box() - if clip_box is not None: - bbox = Bbox.intersection(bbox, clip_box) - clip_path = self.get_clip_path() - if clip_path is not None: - clip_path = clip_path.get_fully_transformed_path() - bbox = Bbox.intersection(bbox, clip_path.get_extents()) - return bbox - - def add_callback(self, func): - """ - Add a callback function that will be called whenever one of the - `.Artist`'s properties changes. - - Parameters - ---------- - func : callable - The callback function. It must have the signature:: - - def func(artist: Artist) -> Any - - where *artist* is the calling `.Artist`. Return values may exist - but are ignored. - - Returns - ------- - int - The observer id associated with the callback. This id can be - used for removing the callback with `.remove_callback` later. 
- - See Also - -------- - remove_callback - """ - # Wrapping func in a lambda ensures it can be connected multiple times - # and never gets weakref-gc'ed. - return self._callbacks.connect("pchanged", lambda: func(self)) - - def remove_callback(self, oid): - """ - Remove a callback based on its observer id. - - See Also - -------- - add_callback - """ - self._callbacks.disconnect(oid) - - def pchanged(self): - """ - Call all of the registered callbacks. - - This function is triggered internally when a property is changed. - - See Also - -------- - add_callback - remove_callback - """ - self._callbacks.process("pchanged") - - def is_transform_set(self): - """ - Return whether the Artist has an explicitly set transform. - - This is *True* after `.set_transform` has been called. - """ - return self._transformSet - - def set_transform(self, t): - """ - Set the artist transform. - - Parameters - ---------- - t : `.Transform` - """ - self._transform = t - self._transformSet = True - self.pchanged() - self.stale = True - - def get_transform(self): - """Return the `.Transform` instance used by this artist.""" - if self._transform is None: - self._transform = IdentityTransform() - elif (not isinstance(self._transform, Transform) - and hasattr(self._transform, '_as_mpl_transform')): - self._transform = self._transform._as_mpl_transform(self.axes) - return self._transform - - def get_children(self): - r"""Return a list of the child `.Artist`\s of this `.Artist`.""" - return [] - - def _default_contains(self, mouseevent, figure=None): - """ - Base impl. for checking whether a mouseevent happened in an artist. - - 1. If the artist figure is known and the event did not occur in that - figure (by checking its ``canvas`` attribute), reject it. - 2. Otherwise, return `None, {}`, indicating that the subclass' - implementation should be used. - - Subclasses should start their definition of `contains` as follows: - - inside, info = self._default_contains(mouseevent) - if inside is not None: - return inside, info - # subclass-specific implementation follows - - The *figure* kwarg is provided for the implementation of - `.Figure.contains`. - """ - if figure is not None and mouseevent.canvas is not figure.canvas: - return False, {} - return None, {} - - def contains(self, mouseevent): - """ - Test whether the artist contains the mouse event. - - Parameters - ---------- - mouseevent : `~matplotlib.backend_bases.MouseEvent` - - Returns - ------- - contains : bool - Whether any values are within the radius. - details : dict - An artist-specific dictionary of details of the event context, - such as which points are contained in the pick radius. See the - individual Artist subclasses for details. - """ - inside, info = self._default_contains(mouseevent) - if inside is not None: - return inside, info - _log.warning("%r needs 'contains' method", self.__class__.__name__) - return False, {} - - def pickable(self): - """ - Return whether the artist is pickable. - - See Also - -------- - set_picker, get_picker, pick - """ - return self.figure is not None and self._picker is not None - - def pick(self, mouseevent): - """ - Process a pick event. - - Each child artist will fire a pick event if *mouseevent* is over - the artist and the artist has picker set. - - See Also - -------- - set_picker, get_picker, pickable - """ - from .backend_bases import PickEvent # Circular import. 
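-        # Illustrative sketch of how picking is typically driven (assumes an
-        # interactive backend with an existing Figure ``fig`` and Axes ``ax``):
-        #     line, = ax.plot([1, 2, 3], picker=5)   # 5-point pick tolerance
-        #     fig.canvas.mpl_connect('pick_event',
-        #                            lambda event: print(event.artist))
-        # A button press near the line is routed through this method and
-        # fires a PickEvent carrying the picked artist.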
- # Pick self - if self.pickable(): - picker = self.get_picker() - if callable(picker): - inside, prop = picker(self, mouseevent) - else: - inside, prop = self.contains(mouseevent) - if inside: - PickEvent("pick_event", self.figure.canvas, - mouseevent, self, **prop)._process() - - # Pick children - for a in self.get_children(): - # make sure the event happened in the same Axes - ax = getattr(a, 'axes', None) - if (mouseevent.inaxes is None or ax is None - or mouseevent.inaxes == ax): - # we need to check if mouseevent.inaxes is None - # because some objects associated with an Axes (e.g., a - # tick label) can be outside the bounding box of the - # Axes and inaxes will be None - # also check that ax is None so that it traverse objects - # which do not have an axes property but children might - a.pick(mouseevent) - - def set_picker(self, picker): - """ - Define the picking behavior of the artist. - - Parameters - ---------- - picker : None or bool or float or callable - This can be one of the following: - - - *None*: Picking is disabled for this artist (default). - - - A boolean: If *True* then picking will be enabled and the - artist will fire a pick event if the mouse event is over - the artist. - - - A float: If picker is a number it is interpreted as an - epsilon tolerance in points and the artist will fire - off an event if its data is within epsilon of the mouse - event. For some artists like lines and patch collections, - the artist may provide additional data to the pick event - that is generated, e.g., the indices of the data within - epsilon of the pick event - - - A function: If picker is callable, it is a user supplied - function which determines whether the artist is hit by the - mouse event:: - - hit, props = picker(artist, mouseevent) - - to determine the hit test. if the mouse event is over the - artist, return *hit=True* and props is a dictionary of - properties you want added to the PickEvent attributes. - """ - self._picker = picker - - def get_picker(self): - """ - Return the picking behavior of the artist. - - The possible values are described in `.set_picker`. - - See Also - -------- - set_picker, pickable, pick - """ - return self._picker - - def get_url(self): - """Return the url.""" - return self._url - - def set_url(self, url): - """ - Set the url for the artist. - - Parameters - ---------- - url : str - """ - self._url = url - - def get_gid(self): - """Return the group id.""" - return self._gid - - def set_gid(self, gid): - """ - Set the (group) id for the artist. - - Parameters - ---------- - gid : str - """ - self._gid = gid - - def get_snap(self): - """ - Return the snap setting. - - See `.set_snap` for details. - """ - if mpl.rcParams['path.snap']: - return self._snap - else: - return False - - def set_snap(self, snap): - """ - Set the snapping behavior. - - Snapping aligns positions with the pixel grid, which results in - clearer images. For example, if a black line of 1px width was - defined at a position in between two pixels, the resulting image - would contain the interpolated value of that line in the pixel grid, - which would be a grey value on both adjacent pixel positions. In - contrast, snapping will move the line to the nearest integer pixel - value, so that the resulting image will really contain a 1px wide - black line. - - Snapping is currently only supported by the Agg and MacOSX backends. - - Parameters - ---------- - snap : bool or None - Possible values: - - - *True*: Snap vertices to the nearest pixel center. 
- - *False*: Do not modify vertex positions. - - *None*: (auto) If the path contains only rectilinear line - segments, round to the nearest pixel center. - """ - self._snap = snap - self.stale = True - - def get_sketch_params(self): - """ - Return the sketch parameters for the artist. - - Returns - ------- - tuple or None - - A 3-tuple with the following elements: - - - *scale*: The amplitude of the wiggle perpendicular to the - source line. - - *length*: The length of the wiggle along the line. - - *randomness*: The scale factor by which the length is - shrunken or expanded. - - Returns *None* if no sketch parameters were set. - """ - return self._sketch - - def set_sketch_params(self, scale=None, length=None, randomness=None): - """ - Set the sketch parameters. - - Parameters - ---------- - scale : float, optional - The amplitude of the wiggle perpendicular to the source - line, in pixels. If scale is `None`, or not provided, no - sketch filter will be provided. - length : float, optional - The length of the wiggle along the line, in pixels - (default 128.0) - randomness : float, optional - The scale factor by which the length is shrunken or - expanded (default 16.0) - - The PGF backend uses this argument as an RNG seed and not as - described above. Using the same seed yields the same random shape. - - .. ACCEPTS: (scale: float, length: float, randomness: float) - """ - if scale is None: - self._sketch = None - else: - self._sketch = (scale, length or 128.0, randomness or 16.0) - self.stale = True - - def set_path_effects(self, path_effects): - """ - Set the path effects. - - Parameters - ---------- - path_effects : `.AbstractPathEffect` - """ - self._path_effects = path_effects - self.stale = True - - def get_path_effects(self): - return self._path_effects - - def get_figure(self): - """Return the `.Figure` instance the artist belongs to.""" - return self.figure - - def set_figure(self, fig): - """ - Set the `.Figure` instance the artist belongs to. - - Parameters - ---------- - fig : `.Figure` - """ - # if this is a no-op just return - if self.figure is fig: - return - # if we currently have a figure (the case of both `self.figure` - # and *fig* being none is taken care of above) we then user is - # trying to change the figure an artist is associated with which - # is not allowed for the same reason as adding the same instance - # to more than one Axes - if self.figure is not None: - raise RuntimeError("Can not put single artist in " - "more than one figure") - self.figure = fig - if self.figure and self.figure is not self: - self.pchanged() - self.stale = True - - def set_clip_box(self, clipbox): - """ - Set the artist's clip `.Bbox`. - - Parameters - ---------- - clipbox : `.Bbox` - - Typically would be created from a `.TransformedBbox`. For - instance ``TransformedBbox(Bbox([[0, 0], [1, 1]]), ax.transAxes)`` - is the default clipping for an artist added to an Axes. - - """ - self.clipbox = clipbox - self.pchanged() - self.stale = True - - def set_clip_path(self, path, transform=None): - """ - Set the artist's clip path. - - Parameters - ---------- - path : `~matplotlib.patches.Patch` or `.Path` or `.TransformedPath` or None - The clip path. If given a `.Path`, *transform* must be provided as - well. If *None*, a previously set clip path is removed. - transform : `~matplotlib.transforms.Transform`, optional - Only used if *path* is a `.Path`, in which case the given `.Path` - is converted to a `.TransformedPath` using *transform*. 
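-
-        For example, a minimal sketch (assumes an existing Axes ``ax`` and an
-        image ``im = ax.imshow(data)``) that clips the image to a circle drawn
-        in axes coordinates::
-
-            >>> from matplotlib.patches import Circle
-            >>> clip = Circle((0.5, 0.5), 0.4, transform=ax.transAxes)
-            >>> im.set_clip_path(clip)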
- - Notes - ----- - For efficiency, if *path* is a `.Rectangle` this method will set the - clipping box to the corresponding rectangle and set the clipping path - to ``None``. - - For technical reasons (support of `~.Artist.set`), a tuple - (*path*, *transform*) is also accepted as a single positional - parameter. - - .. ACCEPTS: Patch or (Path, Transform) or None - """ - from matplotlib.patches import Patch, Rectangle - - success = False - if transform is None: - if isinstance(path, Rectangle): - self.clipbox = TransformedBbox(Bbox.unit(), - path.get_transform()) - self._clippath = None - success = True - elif isinstance(path, Patch): - self._clippath = TransformedPatchPath(path) - success = True - elif isinstance(path, tuple): - path, transform = path - - if path is None: - self._clippath = None - success = True - elif isinstance(path, Path): - self._clippath = TransformedPath(path, transform) - success = True - elif isinstance(path, TransformedPatchPath): - self._clippath = path - success = True - elif isinstance(path, TransformedPath): - self._clippath = path - success = True - - if not success: - raise TypeError( - "Invalid arguments to set_clip_path, of type {} and {}" - .format(type(path).__name__, type(transform).__name__)) - # This may result in the callbacks being hit twice, but guarantees they - # will be hit at least once. - self.pchanged() - self.stale = True - - def get_alpha(self): - """ - Return the alpha value used for blending - not supported on all - backends. - """ - return self._alpha - - def get_visible(self): - """Return the visibility.""" - return self._visible - - def get_animated(self): - """Return whether the artist is animated.""" - return self._animated - - def get_in_layout(self): - """ - Return boolean flag, ``True`` if artist is included in layout - calculations. - - E.g. :doc:`/tutorials/intermediate/constrainedlayout_guide`, - `.Figure.tight_layout()`, and - ``fig.savefig(fname, bbox_inches='tight')``. - """ - return self._in_layout - - def _fully_clipped_to_axes(self): - """ - Return a boolean flag, ``True`` if the artist is clipped to the Axes - and can thus be skipped in layout calculations. Requires `get_clip_on` - is True, one of `clip_box` or `clip_path` is set, ``clip_box.extents`` - is equivalent to ``ax.bbox.extents`` (if set), and ``clip_path._patch`` - is equivalent to ``ax.patch`` (if set). - """ - # Note that ``clip_path.get_fully_transformed_path().get_extents()`` - # cannot be directly compared to ``axes.bbox.extents`` because the - # extents may be undefined (i.e. 
equivalent to ``Bbox.null()``) - # before the associated artist is drawn, and this method is meant - # to determine whether ``axes.get_tightbbox()`` may bypass drawing - clip_box = self.get_clip_box() - clip_path = self.get_clip_path() - return (self.axes is not None - and self.get_clip_on() - and (clip_box is not None or clip_path is not None) - and (clip_box is None - or np.all(clip_box.extents == self.axes.bbox.extents)) - and (clip_path is None - or isinstance(clip_path, TransformedPatchPath) - and clip_path._patch is self.axes.patch)) - - def get_clip_on(self): - """Return whether the artist uses clipping.""" - return self._clipon - - def get_clip_box(self): - """Return the clipbox.""" - return self.clipbox - - def get_clip_path(self): - """Return the clip path.""" - return self._clippath - - def get_transformed_clip_path_and_affine(self): - """ - Return the clip path with the non-affine part of its - transformation applied, and the remaining affine part of its - transformation. - """ - if self._clippath is not None: - return self._clippath.get_transformed_path_and_affine() - return None, None - - def set_clip_on(self, b): - """ - Set whether the artist uses clipping. - - When False, artists will be visible outside the Axes which - can lead to unexpected results. - - Parameters - ---------- - b : bool - """ - self._clipon = b - # This may result in the callbacks being hit twice, but ensures they - # are hit at least once - self.pchanged() - self.stale = True - - def _set_gc_clip(self, gc): - """Set the clip properly for the gc.""" - if self._clipon: - if self.clipbox is not None: - gc.set_clip_rectangle(self.clipbox) - gc.set_clip_path(self._clippath) - else: - gc.set_clip_rectangle(None) - gc.set_clip_path(None) - - def get_rasterized(self): - """Return whether the artist is to be rasterized.""" - return self._rasterized - - def set_rasterized(self, rasterized): - """ - Force rasterized (bitmap) drawing for vector graphics output. - - Rasterized drawing is not supported by all artists. If you try to - enable this on an artist that does not support it, the command has no - effect and a warning will be issued. - - This setting is ignored for pixel-based output. - - See also :doc:`/gallery/misc/rasterization_demo`. - - Parameters - ---------- - rasterized : bool - """ - supports_rasterization = getattr(self.draw, - "_supports_rasterization", False) - if rasterized and not supports_rasterization: - _api.warn_external(f"Rasterization of '{self}' will be ignored") - - self._rasterized = rasterized - - def get_agg_filter(self): - """Return filter function to be used for agg filter.""" - return self._agg_filter - - def set_agg_filter(self, filter_func): - """ - Set the agg filter. - - Parameters - ---------- - filter_func : callable - A filter function, which takes a (m, n, depth) float array - and a dpi value, and returns a (m, n, depth) array and two - offsets from the bottom left corner of the image - - .. ACCEPTS: a filter function, which takes a (m, n, 3) float array - and a dpi value, and returns a (m, n, 3) array and two offsets - from the bottom left corner of the image - """ - self._agg_filter = filter_func - self.stale = True - - def draw(self, renderer): - """ - Draw the Artist (and its children) using the given renderer. - - This has no effect if the artist is not visible (`.Artist.get_visible` - returns False). - - Parameters - ---------- - renderer : `.RendererBase` subclass. - - Notes - ----- - This method is overridden in the Artist subclasses. 
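-
-        For example, a minimal sketch (assumes ``import matplotlib.pyplot as
-        plt`` and an Agg-based canvas, where ``get_renderer`` is available)::
-
-            >>> fig, ax = plt.subplots()
-            >>> ln, = ax.plot([0, 1, 4])
-            >>> fig.canvas.draw()                    # draw the whole figure once
-            >>> ln.draw(fig.canvas.get_renderer())   # explicitly re-draw one artist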
- """ - if not self.get_visible(): - return - self.stale = False - - def set_alpha(self, alpha): - """ - Set the alpha value used for blending - not supported on all backends. - - Parameters - ---------- - alpha : scalar or None - *alpha* must be within the 0-1 range, inclusive. - """ - if alpha is not None and not isinstance(alpha, Number): - raise TypeError( - f'alpha must be numeric or None, not {type(alpha)}') - if alpha is not None and not (0 <= alpha <= 1): - raise ValueError(f'alpha ({alpha}) is outside 0-1 range') - self._alpha = alpha - self.pchanged() - self.stale = True - - def _set_alpha_for_array(self, alpha): - """ - Set the alpha value used for blending - not supported on all backends. - - Parameters - ---------- - alpha : array-like or scalar or None - All values must be within the 0-1 range, inclusive. - Masked values and nans are not supported. - """ - if isinstance(alpha, str): - raise TypeError("alpha must be numeric or None, not a string") - if not np.iterable(alpha): - Artist.set_alpha(self, alpha) - return - alpha = np.asarray(alpha) - if not (0 <= alpha.min() and alpha.max() <= 1): - raise ValueError('alpha must be between 0 and 1, inclusive, ' - f'but min is {alpha.min()}, max is {alpha.max()}') - self._alpha = alpha - self.pchanged() - self.stale = True - - def set_visible(self, b): - """ - Set the artist's visibility. - - Parameters - ---------- - b : bool - """ - self._visible = b - self.pchanged() - self.stale = True - - def set_animated(self, b): - """ - Set whether the artist is intended to be used in an animation. - - If True, the artist is excluded from regular drawing of the figure. - You have to call `.Figure.draw_artist` / `.Axes.draw_artist` - explicitly on the artist. This approach is used to speed up animations - using blitting. - - See also `matplotlib.animation` and - :doc:`/tutorials/advanced/blitting`. - - Parameters - ---------- - b : bool - """ - if self._animated != b: - self._animated = b - self.pchanged() - - def set_in_layout(self, in_layout): - """ - Set if artist is to be included in layout calculations, - E.g. :doc:`/tutorials/intermediate/constrainedlayout_guide`, - `.Figure.tight_layout()`, and - ``fig.savefig(fname, bbox_inches='tight')``. - - Parameters - ---------- - in_layout : bool - """ - self._in_layout = in_layout - - def get_label(self): - """Return the label used for this artist in the legend.""" - return self._label - - def set_label(self, s): - """ - Set a label that will be displayed in the legend. - - Parameters - ---------- - s : object - *s* will be converted to a string by calling `str`. - """ - if s is not None: - self._label = str(s) - else: - self._label = None - self.pchanged() - self.stale = True - - def get_zorder(self): - """Return the artist's zorder.""" - return self.zorder - - def set_zorder(self, level): - """ - Set the zorder for the artist. Artists with lower zorder - values are drawn first. - - Parameters - ---------- - level : float - """ - if level is None: - level = self.__class__.zorder - self.zorder = level - self.pchanged() - self.stale = True - - @property - def sticky_edges(self): - """ - ``x`` and ``y`` sticky edge lists for autoscaling. - - When performing autoscaling, if a data limit coincides with a value in - the corresponding sticky_edges list, then no margin will be added--the - view limit "sticks" to the edge. A typical use case is histograms, - where one usually expects no margin on the bottom edge (0) of the - histogram. 
- - Moreover, margin expansion "bumps" against sticky edges and cannot - cross them. For example, if the upper data limit is 1.0, the upper - view limit computed by simple margin application is 1.2, but there is a - sticky edge at 1.1, then the actual upper view limit will be 1.1. - - This attribute cannot be assigned to; however, the ``x`` and ``y`` - lists can be modified in place as needed. - - Examples - -------- - >>> artist.sticky_edges.x[:] = (xmin, xmax) - >>> artist.sticky_edges.y[:] = (ymin, ymax) - - """ - return self._sticky_edges - - def update_from(self, other): - """Copy properties from *other* to *self*.""" - self._transform = other._transform - self._transformSet = other._transformSet - self._visible = other._visible - self._alpha = other._alpha - self.clipbox = other.clipbox - self._clipon = other._clipon - self._clippath = other._clippath - self._label = other._label - self._sketch = other._sketch - self._path_effects = other._path_effects - self.sticky_edges.x[:] = other.sticky_edges.x.copy() - self.sticky_edges.y[:] = other.sticky_edges.y.copy() - self.pchanged() - self.stale = True - - def properties(self): - """Return a dictionary of all the properties of the artist.""" - return ArtistInspector(self).properties() - - def _update_props(self, props, errfmt): - """ - Helper for `.Artist.set` and `.Artist.update`. - - *errfmt* is used to generate error messages for invalid property - names; it gets formatted with ``type(self)`` and the property name. - """ - ret = [] - with cbook._setattr_cm(self, eventson=False): - for k, v in props.items(): - # Allow attributes we want to be able to update through - # art.update, art.set, setp. - if k == "axes": - ret.append(setattr(self, k, v)) - else: - func = getattr(self, f"set_{k}", None) - if not callable(func): - raise AttributeError( - errfmt.format(cls=type(self), prop_name=k)) - ret.append(func(v)) - if ret: - self.pchanged() - self.stale = True - return ret - - def update(self, props): - """ - Update this artist's properties from the dict *props*. - - Parameters - ---------- - props : dict - """ - return self._update_props( - props, "{cls.__name__!r} object has no property {prop_name!r}") - - def _internal_update(self, kwargs): - """ - Update artist properties without prenormalizing them, but generating - errors as if calling `set`. - - The lack of prenormalization is to maintain backcompatibility. - """ - return self._update_props( - kwargs, "{cls.__name__}.set() got an unexpected keyword argument " - "{prop_name!r}") - - def set(self, **kwargs): - # docstring and signature are auto-generated via - # Artist._update_set_signature_and_docstring() at the end of the - # module. - return self._internal_update(cbook.normalize_kwargs(kwargs, self)) - - @contextlib.contextmanager - def _cm_set(self, **kwargs): - """ - `.Artist.set` context-manager that restores original values at exit. - """ - orig_vals = {k: getattr(self, f"get_{k}")() for k in kwargs} - try: - self.set(**kwargs) - yield - finally: - self.set(**orig_vals) - - def findobj(self, match=None, include_self=True): - """ - Find artist objects. - - Recursively find all `.Artist` instances contained in the artist. - - Parameters - ---------- - match - A filter criterion for the matches. This can be - - - *None*: Return all objects contained in artist. - - A function with signature ``def match(artist: Artist) -> bool``. - The result will only contain artists for which the function - returns *True*. - - A class instance: e.g., `.Line2D`. 
The result will only contain - artists of this class or its subclasses (``isinstance`` check). - - include_self : bool - Include *self* in the list to be checked for a match. - - Returns - ------- - list of `.Artist` - - """ - if match is None: # always return True - def matchfunc(x): - return True - elif isinstance(match, type) and issubclass(match, Artist): - def matchfunc(x): - return isinstance(x, match) - elif callable(match): - matchfunc = match - else: - raise ValueError('match must be None, a matplotlib.artist.Artist ' - 'subclass, or a callable') - - artists = sum([c.findobj(matchfunc) for c in self.get_children()], []) - if include_self and matchfunc(self): - artists.append(self) - return artists - - def get_cursor_data(self, event): - """ - Return the cursor data for a given event. - - .. note:: - This method is intended to be overridden by artist subclasses. - As an end-user of Matplotlib you will most likely not call this - method yourself. - - Cursor data can be used by Artists to provide additional context - information for a given event. The default implementation just returns - *None*. - - Subclasses can override the method and return arbitrary data. However, - when doing so, they must ensure that `.format_cursor_data` can convert - the data to a string representation. - - The only current use case is displaying the z-value of an `.AxesImage` - in the status bar of a plot window, while moving the mouse. - - Parameters - ---------- - event : `~matplotlib.backend_bases.MouseEvent` - - See Also - -------- - format_cursor_data - - """ - return None - - def format_cursor_data(self, data): - """ - Return a string representation of *data*. - - .. note:: - This method is intended to be overridden by artist subclasses. - As an end-user of Matplotlib you will most likely not call this - method yourself. - - The default implementation converts ints and floats and arrays of ints - and floats into a comma-separated string enclosed in square brackets, - unless the artist has an associated colorbar, in which case scalar - values are formatted using the colorbar's formatter. - - See Also - -------- - get_cursor_data - """ - if np.ndim(data) == 0 and isinstance(self, ScalarMappable): - # This block logically belongs to ScalarMappable, but can't be - # implemented in it because most ScalarMappable subclasses inherit - # from Artist first and from ScalarMappable second, so - # Artist.format_cursor_data would always have precedence over - # ScalarMappable.format_cursor_data. - n = self.cmap.N - if np.ma.getmask(data): - return "[]" - normed = self.norm(data) - if np.isfinite(normed): - if isinstance(self.norm, BoundaryNorm): - # not an invertible normalization mapping - cur_idx = np.argmin(np.abs(self.norm.boundaries - data)) - neigh_idx = max(0, cur_idx - 1) - # use max diff to prevent delta == 0 - delta = np.diff( - self.norm.boundaries[neigh_idx:cur_idx + 2] - ).max() - - else: - # Midpoints of neighboring color intervals. - neighbors = self.norm.inverse( - (int(normed * n) + np.array([0, 1])) / n) - delta = abs(neighbors - data).max() - g_sig_digits = cbook._g_sig_digits(data, delta) - else: - g_sig_digits = 3 # Consistent with default below. 
- return "[{:-#.{}g}]".format(data, g_sig_digits) - else: - try: - data[0] - except (TypeError, IndexError): - data = [data] - data_str = ', '.join('{:0.3g}'.format(item) for item in data - if isinstance(item, Number)) - return "[" + data_str + "]" - - def get_mouseover(self): - """ - Return whether this artist is queried for custom context information - when the mouse cursor moves over it. - """ - return self._mouseover - - def set_mouseover(self, mouseover): - """ - Set whether this artist is queried for custom context information when - the mouse cursor moves over it. - - Parameters - ---------- - mouseover : bool - - See Also - -------- - get_cursor_data - .ToolCursorPosition - .NavigationToolbar2 - """ - self._mouseover = bool(mouseover) - ax = self.axes - if ax: - if self._mouseover: - ax._mouseover_set.add(self) - else: - ax._mouseover_set.discard(self) - - mouseover = property(get_mouseover, set_mouseover) # backcompat. - - -def _get_tightbbox_for_layout_only(obj, *args, **kwargs): - """ - Matplotlib's `.Axes.get_tightbbox` and `.Axis.get_tightbbox` support a - *for_layout_only* kwarg; this helper tries to use the kwarg but skips it - when encountering third-party subclasses that do not support it. - """ - try: - return obj.get_tightbbox(*args, **{**kwargs, "for_layout_only": True}) - except TypeError: - return obj.get_tightbbox(*args, **kwargs) - - -class ArtistInspector: - """ - A helper class to inspect an `~matplotlib.artist.Artist` and return - information about its settable properties and their current values. - """ - - def __init__(self, o): - r""" - Initialize the artist inspector with an `Artist` or an iterable of - `Artist`\s. If an iterable is used, we assume it is a homogeneous - sequence (all `Artist`\s are of the same type) and it is your - responsibility to make sure this is so. - """ - if not isinstance(o, Artist): - if np.iterable(o): - o = list(o) - if len(o): - o = o[0] - - self.oorig = o - if not isinstance(o, type): - o = type(o) - self.o = o - - self.aliasd = self.get_aliases() - - def get_aliases(self): - """ - Get a dict mapping property fullnames to sets of aliases for each alias - in the :class:`~matplotlib.artist.ArtistInspector`. - - e.g., for lines:: - - {'markerfacecolor': {'mfc'}, - 'linewidth' : {'lw'}, - } - """ - names = [name for name in dir(self.o) - if name.startswith(('set_', 'get_')) - and callable(getattr(self.o, name))] - aliases = {} - for name in names: - func = getattr(self.o, name) - if not self.is_alias(func): - continue - propname = re.search("`({}.*)`".format(name[:4]), # get_.*/set_.* - inspect.getdoc(func)).group(1) - aliases.setdefault(propname[4:], set()).add(name[4:]) - return aliases - - _get_valid_values_regex = re.compile( - r"\n\s*(?:\.\.\s+)?ACCEPTS:\s*((?:.|\n)*?)(?:$|(?:\n\n))" - ) - - def get_valid_values(self, attr): - """ - Get the legal arguments for the setter associated with *attr*. - - This is done by querying the docstring of the setter for a line that - begins with "ACCEPTS:" or ".. ACCEPTS:", and then by looking for a - numpydoc-style documentation for the setter's first argument. 
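-
-        For example, a minimal sketch (the returned string is read from the
-        setter's docstring and may differ between Matplotlib versions)::
-
-            >>> from matplotlib.artist import ArtistInspector
-            >>> from matplotlib.lines import Line2D
-            >>> ArtistInspector(Line2D([], [])).get_valid_values('visible')
-            'bool'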
- """ - - name = 'set_%s' % attr - if not hasattr(self.o, name): - raise AttributeError('%s has no function %s' % (self.o, name)) - func = getattr(self.o, name) - - docstring = inspect.getdoc(func) - if docstring is None: - return 'unknown' - - if docstring.startswith('Alias for '): - return None - - match = self._get_valid_values_regex.search(docstring) - if match is not None: - return re.sub("\n *", " ", match.group(1)) - - # Much faster than list(inspect.signature(func).parameters)[1], - # although barely relevant wrt. matplotlib's total import time. - param_name = func.__code__.co_varnames[1] - # We could set the presence * based on whether the parameter is a - # varargs (it can't be a varkwargs) but it's not really worth it. - match = re.search(r"(?m)^ *\*?{} : (.+)".format(param_name), docstring) - if match: - return match.group(1) - - return 'unknown' - - def _replace_path(self, source_class): - """ - Changes the full path to the public API path that is used - in sphinx. This is needed for links to work. - """ - replace_dict = {'_base._AxesBase': 'Axes', - '_axes.Axes': 'Axes'} - for key, value in replace_dict.items(): - source_class = source_class.replace(key, value) - return source_class - - def get_setters(self): - """ - Get the attribute strings with setters for object. - - For example, for a line, return ``['markerfacecolor', 'linewidth', - ....]``. - """ - setters = [] - for name in dir(self.o): - if not name.startswith('set_'): - continue - func = getattr(self.o, name) - if (not callable(func) - or self.number_of_parameters(func) < 2 - or self.is_alias(func)): - continue - setters.append(name[4:]) - return setters - - @staticmethod - @lru_cache(maxsize=None) - def number_of_parameters(func): - """Return number of parameters of the callable *func*.""" - return len(inspect.signature(func).parameters) - - @staticmethod - @lru_cache(maxsize=None) - def is_alias(method): - """ - Return whether the object *method* is an alias for another method. - """ - - ds = inspect.getdoc(method) - if ds is None: - return False - - return ds.startswith('Alias for ') - - def aliased_name(self, s): - """ - Return 'PROPNAME or alias' if *s* has an alias, else return 'PROPNAME'. - - For example, for the line markerfacecolor property, which has an - alias, return 'markerfacecolor or mfc' and for the transform - property, which does not, return 'transform'. - """ - aliases = ''.join(' or %s' % x for x in sorted(self.aliasd.get(s, []))) - return s + aliases - - _NOT_LINKABLE = { - # A set of property setter methods that are not available in our - # current docs. This is a workaround used to prevent trying to link - # these setters which would lead to "target reference not found" - # warnings during doc build. - 'matplotlib.image._ImageBase.set_alpha', - 'matplotlib.image._ImageBase.set_array', - 'matplotlib.image._ImageBase.set_data', - 'matplotlib.image._ImageBase.set_filternorm', - 'matplotlib.image._ImageBase.set_filterrad', - 'matplotlib.image._ImageBase.set_interpolation', - 'matplotlib.image._ImageBase.set_interpolation_stage', - 'matplotlib.image._ImageBase.set_resample', - 'matplotlib.text._AnnotationBase.set_annotation_clip', - } - - def aliased_name_rest(self, s, target): - """ - Return 'PROPNAME or alias' if *s* has an alias, else return 'PROPNAME', - formatted for reST. - - For example, for the line markerfacecolor property, which has an - alias, return 'markerfacecolor or mfc' and for the transform - property, which does not, return 'transform'. 
- """ - # workaround to prevent "reference target not found" - if target in self._NOT_LINKABLE: - return f'``{s}``' - - aliases = ''.join(' or %s' % x for x in sorted(self.aliasd.get(s, []))) - return ':meth:`%s <%s>`%s' % (s, target, aliases) - - def pprint_setters(self, prop=None, leadingspace=2): - """ - If *prop* is *None*, return a list of strings of all settable - properties and their valid values. - - If *prop* is not *None*, it is a valid property name and that - property will be returned as a string of property : valid - values. - """ - if leadingspace: - pad = ' ' * leadingspace - else: - pad = '' - if prop is not None: - accepts = self.get_valid_values(prop) - return '%s%s: %s' % (pad, prop, accepts) - - lines = [] - for prop in sorted(self.get_setters()): - accepts = self.get_valid_values(prop) - name = self.aliased_name(prop) - lines.append('%s%s: %s' % (pad, name, accepts)) - return lines - - def pprint_setters_rest(self, prop=None, leadingspace=4): - """ - If *prop* is *None*, return a list of reST-formatted strings of all - settable properties and their valid values. - - If *prop* is not *None*, it is a valid property name and that - property will be returned as a string of "property : valid" - values. - """ - if leadingspace: - pad = ' ' * leadingspace - else: - pad = '' - if prop is not None: - accepts = self.get_valid_values(prop) - return '%s%s: %s' % (pad, prop, accepts) - - prop_and_qualnames = [] - for prop in sorted(self.get_setters()): - # Find the parent method which actually provides the docstring. - for cls in self.o.__mro__: - method = getattr(cls, f"set_{prop}", None) - if method and method.__doc__ is not None: - break - else: # No docstring available. - method = getattr(self.o, f"set_{prop}") - prop_and_qualnames.append( - (prop, f"{method.__module__}.{method.__qualname__}")) - - names = [self.aliased_name_rest(prop, target) - .replace('_base._AxesBase', 'Axes') - .replace('_axes.Axes', 'Axes') - for prop, target in prop_and_qualnames] - accepts = [self.get_valid_values(prop) - for prop, _ in prop_and_qualnames] - - col0_len = max(len(n) for n in names) - col1_len = max(len(a) for a in accepts) - table_formatstr = pad + ' ' + '=' * col0_len + ' ' + '=' * col1_len - - return [ - '', - pad + '.. table::', - pad + ' :class: property-table', - '', - table_formatstr, - pad + ' ' + 'Property'.ljust(col0_len) - + ' ' + 'Description'.ljust(col1_len), - table_formatstr, - *[pad + ' ' + n.ljust(col0_len) + ' ' + a.ljust(col1_len) - for n, a in zip(names, accepts)], - table_formatstr, - '', - ] - - def properties(self): - """Return a dictionary mapping property name -> value.""" - o = self.oorig - getters = [name for name in dir(o) - if name.startswith('get_') and callable(getattr(o, name))] - getters.sort() - d = {} - for name in getters: - func = getattr(o, name) - if self.is_alias(func): - continue - try: - with warnings.catch_warnings(): - warnings.simplefilter('ignore') - val = func() - except Exception: - continue - else: - d[name[4:]] = val - return d - - def pprint_getters(self): - """Return the getters and actual values as list of strings.""" - lines = [] - for name, val in sorted(self.properties().items()): - if getattr(val, 'shape', ()) != () and len(val) > 6: - s = str(val[:6]) + '...' - else: - s = str(val) - s = s.replace('\n', ' ') - if len(s) > 50: - s = s[:50] + '...' 
- name = self.aliased_name(name) - lines.append(' %s = %s' % (name, s)) - return lines - - -def getp(obj, property=None): - """ - Return the value of an `.Artist`'s *property*, or print all of them. - - Parameters - ---------- - obj : `~matplotlib.artist.Artist` - The queried artist; e.g., a `.Line2D`, a `.Text`, or an `~.axes.Axes`. - - property : str or None, default: None - If *property* is 'somename', this function returns - ``obj.get_somename()``. - - If it's None (or unset), it *prints* all gettable properties from - *obj*. Many properties have aliases for shorter typing, e.g. 'lw' is - an alias for 'linewidth'. In the output, aliases and full property - names will be listed as: - - property or alias = value - - e.g.: - - linewidth or lw = 2 - - See Also - -------- - setp - """ - if property is None: - insp = ArtistInspector(obj) - ret = insp.pprint_getters() - print('\n'.join(ret)) - return - return getattr(obj, 'get_' + property)() - -# alias -get = getp - - -def setp(obj, *args, file=None, **kwargs): - """ - Set one or more properties on an `.Artist`, or list allowed values. - - Parameters - ---------- - obj : `~matplotlib.artist.Artist` or list of `.Artist` - The artist(s) whose properties are being set or queried. When setting - properties, all artists are affected; when querying the allowed values, - only the first instance in the sequence is queried. - - For example, two lines can be made thicker and red with a single call: - - >>> x = arange(0, 1, 0.01) - >>> lines = plot(x, sin(2*pi*x), x, sin(4*pi*x)) - >>> setp(lines, linewidth=2, color='r') - - file : file-like, default: `sys.stdout` - Where `setp` writes its output when asked to list allowed values. - - >>> with open('output.log') as file: - ... setp(line, file=file) - - The default, ``None``, means `sys.stdout`. - - *args, **kwargs - The properties to set. The following combinations are supported: - - - Set the linestyle of a line to be dashed: - - >>> line, = plot([1, 2, 3]) - >>> setp(line, linestyle='--') - - - Set multiple properties at once: - - >>> setp(line, linewidth=2, color='r') - - - List allowed values for a line's linestyle: - - >>> setp(line, 'linestyle') - linestyle: {'-', '--', '-.', ':', '', (offset, on-off-seq), ...} - - - List all properties that can be set, and their allowed values: - - >>> setp(line) - agg_filter: a filter function, ... - [long output listing omitted] - - `setp` also supports MATLAB style string/value pairs. For example, the - following are equivalent: - - >>> setp(lines, 'linewidth', 2, 'color', 'r') # MATLAB style - >>> setp(lines, linewidth=2, color='r') # Python style - - See Also - -------- - getp - """ - - if isinstance(obj, Artist): - objs = [obj] - else: - objs = list(cbook.flatten(obj)) - - if not objs: - return - - insp = ArtistInspector(objs[0]) - - if not kwargs and len(args) < 2: - if args: - print(insp.pprint_setters(prop=args[0]), file=file) - else: - print('\n'.join(insp.pprint_setters()), file=file) - return - - if len(args) % 2: - raise ValueError('The set args must be string, value pairs') - - funcvals = dict(zip(args[::2], args[1::2])) - ret = [o.update(funcvals) for o in objs] + [o.set(**kwargs) for o in objs] - return list(cbook.flatten(ret)) - - -def kwdoc(artist): - r""" - Inspect an `~matplotlib.artist.Artist` class (using `.ArtistInspector`) and - return information about its settable properties and their current values. 
- - Parameters - ---------- - artist : `~matplotlib.artist.Artist` or an iterable of `Artist`\s - - Returns - ------- - str - The settable properties of *artist*, as plain text if - :rc:`docstring.hardcopy` is False and as a rst table (intended for - use in Sphinx) if it is True. - """ - ai = ArtistInspector(artist) - return ('\n'.join(ai.pprint_setters_rest(leadingspace=4)) - if mpl.rcParams['docstring.hardcopy'] else - 'Properties:\n' + '\n'.join(ai.pprint_setters(leadingspace=4))) - -# We defer this to the end of them module, because it needs ArtistInspector -# to be defined. -Artist._update_set_signature_and_docstring() diff --git a/spaces/deepwisdom/MetaGPT/tests/metagpt/roles/test_engineer.py b/spaces/deepwisdom/MetaGPT/tests/metagpt/roles/test_engineer.py deleted file mode 100644 index c0c48d0b10f652f7df35bc383c153a992e4d30e3..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/tests/metagpt/roles/test_engineer.py +++ /dev/null @@ -1,92 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/12 10:14 -@Author : alexanderwu -@File : test_engineer.py -""" -import pytest - -from metagpt.logs import logger -from metagpt.roles.engineer import Engineer -from metagpt.utils.common import CodeParser -from tests.metagpt.roles.mock import ( - STRS_FOR_PARSING, - TASKS, - TASKS_TOMATO_CLOCK, - MockMessages, -) - - -@pytest.mark.asyncio -async def test_engineer(): - engineer = Engineer() - - engineer.recv(MockMessages.req) - engineer.recv(MockMessages.prd) - engineer.recv(MockMessages.system_design) - rsp = await engineer.handle(MockMessages.tasks) - - logger.info(rsp) - assert "all done." == rsp.content - - -def test_parse_str(): - for idx, i in enumerate(STRS_FOR_PARSING): - text = CodeParser.parse_str(f"{idx+1}", i) - # logger.info(text) - assert text == 'a' - - -def test_parse_blocks(): - tasks = CodeParser.parse_blocks(TASKS) - logger.info(tasks.keys()) - assert 'Task list' in tasks.keys() - - -target_list = [ - "smart_search_engine/knowledge_base.py", - "smart_search_engine/index.py", - "smart_search_engine/ranking.py", - "smart_search_engine/summary.py", - "smart_search_engine/search.py", - "smart_search_engine/main.py", - "smart_search_engine/interface.py", - "smart_search_engine/user_feedback.py", - "smart_search_engine/security.py", - "smart_search_engine/testing.py", - "smart_search_engine/monitoring.py", -] - - -def test_parse_file_list(): - tasks = CodeParser.parse_file_list("任务列表", TASKS) - logger.info(tasks) - assert isinstance(tasks, list) - assert target_list == tasks - - file_list = CodeParser.parse_file_list("Task list", TASKS_TOMATO_CLOCK, lang="python") - logger.info(file_list) - - -target_code = """task_list = [ - "smart_search_engine/knowledge_base.py", - "smart_search_engine/index.py", - "smart_search_engine/ranking.py", - "smart_search_engine/summary.py", - "smart_search_engine/search.py", - "smart_search_engine/main.py", - "smart_search_engine/interface.py", - "smart_search_engine/user_feedback.py", - "smart_search_engine/security.py", - "smart_search_engine/testing.py", - "smart_search_engine/monitoring.py", -] -""" - - -def test_parse_code(): - code = CodeParser.parse_code("任务列表", TASKS, lang="python") - logger.info(code) - assert isinstance(code, str) - assert target_code == code diff --git a/spaces/diacanFperku/AutoGPT/Clayoo 2.6 For Rhino 6 Win.md b/spaces/diacanFperku/AutoGPT/Clayoo 2.6 For Rhino 6 Win.md deleted file mode 100644 index 28aec904e38037bd6cc0d6c99cd755520dc0a1e2..0000000000000000000000000000000000000000 --- 
a/spaces/diacanFperku/AutoGPT/Clayoo 2.6 For Rhino 6 Win.md +++ /dev/null @@ -1,6 +0,0 @@ -

Clayoo 2.6 for Rhino 6 Win


DOWNLOAD 🗸🗸🗸 https://gohhs.com/2uFU10



- -The next generation of plugins for Rhino starts with Clayoo, an innovative solution to freeform modeling. Clayoo offers three different ... 4d29de3e1b
-
-
-

diff --git a/spaces/diacanFperku/AutoGPT/Daub Ages 2 0 Cracked NEW!.md b/spaces/diacanFperku/AutoGPT/Daub Ages 2 0 Cracked NEW!.md deleted file mode 100644 index 6e455c9cdfeeb565391266ca5a800ec32ef48edc..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Daub Ages 2 0 Cracked NEW!.md +++ /dev/null @@ -1,60 +0,0 @@ - -

Daub Ages 2 0 Cracked - A Review

-

If you are interested in genealogy and family history, you might want to check out Daub Ages 2 0 Cracked. This is a software that allows you to create and manage your family tree, as well as to explore and analyze your ancestry. Daub Ages 2 0 Cracked is a powerful and user-friendly tool that can help you discover your roots and share your stories.

-

What are the features of Daub Ages 2 0 Cracked?

-

Daub Ages 2 0 Cracked has some impressive features that make it a useful tool for genealogists and family historians. Some of these features are:

-

daub ages 2 0 cracked


DOWNLOADhttps://gohhs.com/2uFVpc



-
    -
  • Data entry and editing: You can easily enter and edit your personal data, such as names, dates, places, events, sources, notes, media, etc. You can also import and export data from GEDCOM files, CSV files, or other formats.
  • -
  • Family tree view and navigation: You can view and navigate your family tree in various ways, such as pedigree chart, family group sheet, timeline, fan chart, etc. You can also customize the appearance and layout of your family tree.
  • -
  • Research and analysis: You can research and analyze your ancestry by using various tools, such as maps, statistics, reports, charts, lists, etc. You can also compare and merge data from different sources or databases.
  • -
  • Publication and sharing: You can publish and share your family tree by creating web pages, books, PDF files, slideshows, etc. You can also upload your family tree to online platforms, such as Ancestry.com or FamilySearch.org.
  • -
-

What are the system requirements of Daub Ages 2 0 Cracked?

-

Daub Ages 2 0 Cracked is compatible with Windows XP/Vista/7/8/10. It requires 512 MB of RAM (1 GB recommended) and 100 MB of free hard disk space. It is a lightweight and easy-to-use software that does not require much resources or technical skills.

-

How to download and install Daub Ages 2 0 Cracked?

-

To download and install Daub Ages 2 0 Cracked, you need to follow these steps:

-
    -
  1. Download the software from a reliable source, such as FileCR.com.
  2. -
  3. Extract the zip file and run the setup file.
  4. -
  5. Follow the instructions on the screen and complete the installation process.
  6. -
  7. Copy the crack file and paste it into the installation folder.
  8. -
  9. Run the software and enjoy creating your family tree.
  10. -
-

Conclusion

-

Daub Ages 2 0 Cracked is a powerful and convenient software that allows you to create and manage your family tree, as well as to explore and analyze your ancestry. Daub Ages 2 0 Cracked is a user-friendly tool that can help you discover your roots and share your stories. Daub Ages 2 0 Cracked is a must-have tool for anyone who loves genealogy and family history.

-

What are the benefits of Daub Ages 2 0 Cracked?

-

Daub Ages 2 0 Cracked has many benefits for users who want to create and manage their family tree, as well as to explore and analyze their ancestry. Some of these benefits are:

-
    -
  • It saves time and money: You can create and manage your family tree faster and easier than using a web browser or a subscription-based service. You can also access your family tree offline and without any ads or limitations.
  • -
  • It offers more options and flexibility: You can customize and personalize your family tree according to your preferences and needs. You can also import and export data from various sources or formats.
  • -
  • It enhances your genealogy experience: You can research and analyze your ancestry by using various tools and features. You can also publish and share your family tree by creating various outputs and formats.
  • -
-

What are the alternatives to Daub Ages 2 0 Cracked?

-

If you are looking for other ways to create and manage your family tree, as well as to explore and analyze your ancestry, you might want to check out some of the alternatives to Daub Ages 2 0 Cracked. Some of these alternatives are:

-
    -
  • Family Tree Maker: This is a software that allows you to create and manage your family tree, as well as to sync it with Ancestry.com or FamilySearch.org. You can also research and analyze your ancestry by using various tools and features.
  • -
  • Legacy Family Tree: This is a software that allows you to create and manage your family tree, as well as to sync it with FamilySearch.org or MyHeritage.com. You can also research and analyze your ancestry by using various tools and features.
  • -
  • RootsMagic: This is a software that allows you to create and manage your family tree, as well as to sync it with Ancestry.com or FamilySearch.org. You can also research and analyze your ancestry by using various tools and features.
  • -
-

What are the pros and cons of Daub Ages 2 0 Cracked?

-

Daub Ages 2 0 Cracked has some pros and cons that you should consider before using it. Some of these pros and cons are:

- - - - - -
ProsCons
It is fast and easy to use.It requires a crack file to activate the full version.
It offers more options and flexibility than web browsers or subscription-based services.It does not support syncing with online platforms or databases.
It enhances your genealogy experience by allowing you to research and analyze your ancestry.It does not support creating vector charts or 3D views.
-

Conclusion

-

In conclusion, Daub Ages 2 0 Cracked is a powerful and convenient software that allows you to create and manage your family tree, as well as to explore and analyze your ancestry. Daub Ages 2 0 Cracked is a user-friendly tool that can help you discover your roots and share your stories. Daub Ages 2 0 Cracked is a must-have tool for anyone who loves genealogy and family history.

-

-

If you want to get Daub Ages 2 0 Cracked, you can download it from FileCR.com, a reliable source that offers free downloads of various software. You will also get the crack file that will activate the full version, as well as a user manual that will guide you through the installation and usage of the software.

-

So don't hesitate and download Daub Ages 2 0 Cracked today and enjoy creating your family tree!

-Download Daub Ages 2 0 Cracked from FileCR.com -

Conclusion

-

In conclusion, Daub Ages 2 0 Cracked is a powerful and convenient software that allows you to create and manage your family tree, as well as to explore and analyze your ancestry. Daub Ages 2 0 Cracked is a user-friendly tool that can help you discover your roots and share your stories. Daub Ages 2 0 Cracked is a must-have tool for anyone who loves genealogy and family history.

-

If you want to get Daub Ages 2 0 Cracked, you can download it from FileCR.com, a reliable source that offers free downloads of various software. You will also get the crack file that will activate the full version, as well as a user manual that will guide you through the installation and usage of the software.

-

So don't hesitate and download Daub Ages 2 0 Cracked today and enjoy creating your family tree!

-Download Daub Ages 2 0 Cracked from FileCR.com

3cee63e6c2
-
-
\ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Not Angka Pianika Rumah Kita Doc.rar.md b/spaces/diacanFperku/AutoGPT/Not Angka Pianika Rumah Kita Doc.rar.md deleted file mode 100644 index e5cb3c9965c1073de18ca672018a1f1c6fbe9250..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Not Angka Pianika Rumah Kita Doc.rar.md +++ /dev/null @@ -1,10 +0,0 @@ -

not angka pianika rumah kita doc.rar


Downloadhttps://gohhs.com/2uFUAm



- -Update for Microsoft Security Essentials (4.10.209.0) returns context menu not angka pianika rumah kita doc.rar · readiris pro 11 free download full . Downloads for Microsoft Security Essentials for Windows Vista, Windows 7, Windows Server 2008 and Windows. -Download Microsoft Security Essentials for Windows 7 32-bit 64-bit SP1 (x86/x64) (English) . -Download Microsoft Security Essentials 4.10.209.0 . -Microsoft Security Essentials is a free antivirus to protect Windows 7, Windows 8 and . Download Microsoft Security Essentials for Windows XP 32-bit (32-bit) (English) . . -Download Microsoft Security Essentials for Windows XP (32-bit) (English) . 8a78ff9644
-
-
-

diff --git a/spaces/diacanFperku/AutoGPT/Robot Structural Analysis Professional 2019 Xforce Keygen ((LINK)) 64 Bit.md b/spaces/diacanFperku/AutoGPT/Robot Structural Analysis Professional 2019 Xforce Keygen ((LINK)) 64 Bit.md deleted file mode 100644 index 3d4eda560af77c68871b442de5e3e87b1db57828..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Robot Structural Analysis Professional 2019 Xforce Keygen ((LINK)) 64 Bit.md +++ /dev/null @@ -1,32 +0,0 @@ - -

How to Activate Robot Structural Analysis Professional 2019 with Xforce Keygen 64 Bit

-

Robot Structural Analysis Professional 2019 is a powerful software that allows you to perform structural analysis and design of complex structures. It supports various types of materials, loads, and codes, and integrates with other Autodesk products such as Revit and AutoCAD.

-

Robot Structural Analysis Professional 2019 Xforce Keygen 64 Bit


Download File https://gohhs.com/2uFU6y



-

However, to use Robot Structural Analysis Professional 2019, you need to activate it with a valid license. If you don't have one, you can use Xforce Keygen 64 Bit to generate a serial number and a product key that will unlock the full features of the software.

-

In this article, we will show you how to use Xforce Keygen 64 Bit to activate Robot Structural Analysis Professional 2019 in a few simple steps.

-

Step 1: Download and Install Robot Structural Analysis Professional 2019

-

The first step is to download and install Robot Structural Analysis Professional 2019 from the official website or from a trusted source. You can choose the trial version or the full version depending on your needs.

-

-

Follow the instructions on the screen to complete the installation process. Make sure you have enough disk space and system requirements to run the software smoothly.

-

Step 2: Download and Run Xforce Keygen 64 Bit

-

The next step is to download and run Xforce Keygen 64 Bit from a reliable source. Xforce Keygen 64 Bit is a tool that can generate serial numbers and product keys for various Autodesk products, including Robot Structural Analysis Professional 2019.

-

Before you run Xforce Keygen 64 Bit, make sure you disable your antivirus and firewall software, as they may interfere with the activation process. Also, make sure you run Xforce Keygen 64 Bit as an administrator.

-

Once you run Xforce Keygen 64 Bit, you will see a window like this:

-Xforce Keygen 64 Bit window -

Select Robot Structural Analysis Professional 2019 from the drop-down menu and click on Generate. You will see a serial number and a product key appear in the fields below.

-

Step 3: Activate Robot Structural Analysis Professional 2019 with Xforce Keygen 64 Bit

-

The final step is to activate Robot Structural Analysis Professional 2019 with the serial number and product key generated by Xforce Keygen 64 Bit.

-

Launch Robot Structural Analysis Professional 2019 and click on Activate in the startup screen. You will see a window like this:

-Robot Structural Analysis Professional 2019 activation window -

Enter the serial number and product key generated by Xforce Keygen 64 Bit in the corresponding fields and click on Next. You will see a window like this:

-Robot Structural Analysis Professional 2019 activation window -

Select I have an activation code from Autodesk and click on Next. You will see a window like this:

-Robot Structural Analysis Professional 2019 activation window -

Copy the request code from the window and paste it into the Request field in Xforce Keygen 64 Bit. Then click on Generate. You will see an activation code appear in the Activation field in Xforce Keygen 64 Bit.

-

Copy the activation code from Xforce Keygen 64 Bit and paste it into the Activation field in Robot Structural Analysis Professional 2019. Then click on Next. You will see a window like this:

-Robot Structural Analysis Professional 2019 activation window -

Congratulations! You have successfully activated Robot Structural Analysis Professional 2019 with Xforce Keygen 64 Bit. You can now enjoy the full features of the software without any limitations.

-

Conclusion

-

In this article, we have shown you how to activate Robot Structural Analysis Professional 2019 with Xforce Keygen 64 Bit in a few simple steps. We hope this article was helpful and

d5da3c52bf
-
-
\ No newline at end of file diff --git a/spaces/digitalxingtong/Bufeiyan-a-Bert-VITS2/utils.py b/spaces/digitalxingtong/Bufeiyan-a-Bert-VITS2/utils.py deleted file mode 100644 index c6aa6cfc64c33e2eed33e9845239e831fc1c4a1a..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Bufeiyan-a-Bert-VITS2/utils.py +++ /dev/null @@ -1,293 +0,0 @@ -import os -import glob -import sys -import argparse -import logging -import json -import subprocess -import numpy as np -from scipy.io.wavfile import read -import torch - -MATPLOTLIB_FLAG = False - -logging.basicConfig(stream=sys.stdout, level=logging.DEBUG) -logger = logging - - -def load_checkpoint(checkpoint_path, model, optimizer=None, skip_optimizer=False): - assert os.path.isfile(checkpoint_path) - checkpoint_dict = torch.load(checkpoint_path, map_location='cpu') - iteration = checkpoint_dict['iteration'] - learning_rate = checkpoint_dict['learning_rate'] - if optimizer is not None and not skip_optimizer and checkpoint_dict['optimizer'] is not None: - optimizer.load_state_dict(checkpoint_dict['optimizer']) - elif optimizer is None and not skip_optimizer: - #else: #Disable this line if Infer ,and enable the line upper - new_opt_dict = optimizer.state_dict() - new_opt_dict_params = new_opt_dict['param_groups'][0]['params'] - new_opt_dict['param_groups'] = checkpoint_dict['optimizer']['param_groups'] - new_opt_dict['param_groups'][0]['params'] = new_opt_dict_params - optimizer.load_state_dict(new_opt_dict) - saved_state_dict = checkpoint_dict['model'] - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - new_state_dict = {} - for k, v in state_dict.items(): - try: - #assert "emb_g" not in k - # print("load", k) - new_state_dict[k] = saved_state_dict[k] - assert saved_state_dict[k].shape == v.shape, (saved_state_dict[k].shape, v.shape) - except: - print("error, %s is not in the checkpoint" % k) - new_state_dict[k] = v - if hasattr(model, 'module'): - model.module.load_state_dict(new_state_dict, strict=False) - else: - model.load_state_dict(new_state_dict, strict=False) - print("load ") - logger.info("Loaded checkpoint '{}' (iteration {})".format( - checkpoint_path, iteration)) - return model, optimizer, learning_rate, iteration - - -def save_checkpoint(model, optimizer, learning_rate, iteration, checkpoint_path): - logger.info("Saving model and optimizer state at iteration {} to {}".format( - iteration, checkpoint_path)) - if hasattr(model, 'module'): - state_dict = model.module.state_dict() - else: - state_dict = model.state_dict() - torch.save({'model': state_dict, - 'iteration': iteration, - 'optimizer': optimizer.state_dict(), - 'learning_rate': learning_rate}, checkpoint_path) - - -def summarize(writer, global_step, scalars={}, histograms={}, images={}, audios={}, audio_sampling_rate=22050): - for k, v in scalars.items(): - writer.add_scalar(k, v, global_step) - for k, v in histograms.items(): - writer.add_histogram(k, v, global_step) - for k, v in images.items(): - writer.add_image(k, v, global_step, dataformats='HWC') - for k, v in audios.items(): - writer.add_audio(k, v, global_step, audio_sampling_rate) - - -def latest_checkpoint_path(dir_path, regex="G_*.pth"): - f_list = glob.glob(os.path.join(dir_path, regex)) - f_list.sort(key=lambda f: int("".join(filter(str.isdigit, f)))) - x = f_list[-1] - print(x) - return x - - -def plot_spectrogram_to_numpy(spectrogram): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - 
MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - plt.xlabel("Frames") - plt.ylabel("Channels") - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def plot_alignment_to_numpy(alignment, info=None): - global MATPLOTLIB_FLAG - if not MATPLOTLIB_FLAG: - import matplotlib - matplotlib.use("Agg") - MATPLOTLIB_FLAG = True - mpl_logger = logging.getLogger('matplotlib') - mpl_logger.setLevel(logging.WARNING) - import matplotlib.pylab as plt - import numpy as np - - fig, ax = plt.subplots(figsize=(6, 4)) - im = ax.imshow(alignment.transpose(), aspect='auto', origin='lower', - interpolation='none') - fig.colorbar(im, ax=ax) - xlabel = 'Decoder timestep' - if info is not None: - xlabel += '\n\n' + info - plt.xlabel(xlabel) - plt.ylabel('Encoder timestep') - plt.tight_layout() - - fig.canvas.draw() - data = np.fromstring(fig.canvas.tostring_rgb(), dtype=np.uint8, sep='') - data = data.reshape(fig.canvas.get_width_height()[::-1] + (3,)) - plt.close() - return data - - -def load_wav_to_torch(full_path): - sampling_rate, data = read(full_path) - return torch.FloatTensor(data.astype(np.float32)), sampling_rate - - -def load_filepaths_and_text(filename, split="|"): - with open(filename, encoding='utf-8') as f: - filepaths_and_text = [line.strip().split(split) for line in f] - return filepaths_and_text - - -def get_hparams(init=True): - parser = argparse.ArgumentParser() - parser.add_argument('-c', '--config', type=str, default="./configs/base.json", - help='JSON file for configuration') - parser.add_argument('-m', '--model', type=str, default="./OUTPUT_MODEL", - help='Model name') - parser.add_argument('--cont', dest='cont', action="store_true", default=False, help="whether to continue training on the latest checkpoint") - - args = parser.parse_args() - model_dir = os.path.join("./logs", args.model) - - if not os.path.exists(model_dir): - os.makedirs(model_dir) - - config_path = args.config - config_save_path = os.path.join(model_dir, "config.json") - if init: - with open(config_path, "r") as f: - data = f.read() - with open(config_save_path, "w") as f: - f.write(data) - else: - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - hparams.cont = args.cont - return hparams - - -def clean_checkpoints(path_to_models='logs/44k/', n_ckpts_to_keep=2, sort_by_time=True): - """Freeing up space by deleting saved ckpts - - Arguments: - path_to_models -- Path to the model directory - n_ckpts_to_keep -- Number of ckpts to keep, excluding G_0.pth and D_0.pth - sort_by_time -- True -> chronologically delete ckpts - False -> lexicographically delete ckpts - """ - import re - ckpts_files = [f for f in os.listdir(path_to_models) if os.path.isfile(os.path.join(path_to_models, f))] - name_key = (lambda _f: int(re.compile('._(\d+)\.pth').match(_f).group(1))) - time_key = (lambda _f: os.path.getmtime(os.path.join(path_to_models, _f))) - sort_key = time_key if sort_by_time else name_key - x_sorted = lambda _x: sorted([f for f in ckpts_files if f.startswith(_x) and not f.endswith('_0.pth')], - 
key=sort_key) - to_del = [os.path.join(path_to_models, fn) for fn in - (x_sorted('G')[:-n_ckpts_to_keep] + x_sorted('D')[:-n_ckpts_to_keep])] - del_info = lambda fn: logger.info(f".. Free up space by deleting ckpt {fn}") - del_routine = lambda x: [os.remove(x), del_info(x)] - rs = [del_routine(fn) for fn in to_del] - -def get_hparams_from_dir(model_dir): - config_save_path = os.path.join(model_dir, "config.json") - with open(config_save_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - hparams.model_dir = model_dir - return hparams - - -def get_hparams_from_file(config_path): - with open(config_path, "r") as f: - data = f.read() - config = json.loads(data) - - hparams = HParams(**config) - return hparams - - -def check_git_hash(model_dir): - source_dir = os.path.dirname(os.path.realpath(__file__)) - if not os.path.exists(os.path.join(source_dir, ".git")): - logger.warn("{} is not a git repository, therefore hash value comparison will be ignored.".format( - source_dir - )) - return - - cur_hash = subprocess.getoutput("git rev-parse HEAD") - - path = os.path.join(model_dir, "githash") - if os.path.exists(path): - saved_hash = open(path).read() - if saved_hash != cur_hash: - logger.warn("git hash values are different. {}(saved) != {}(current)".format( - saved_hash[:8], cur_hash[:8])) - else: - open(path, "w").write(cur_hash) - - -def get_logger(model_dir, filename="train.log"): - global logger - logger = logging.getLogger(os.path.basename(model_dir)) - logger.setLevel(logging.DEBUG) - - formatter = logging.Formatter("%(asctime)s\t%(name)s\t%(levelname)s\t%(message)s") - if not os.path.exists(model_dir): - os.makedirs(model_dir) - h = logging.FileHandler(os.path.join(model_dir, filename)) - h.setLevel(logging.DEBUG) - h.setFormatter(formatter) - logger.addHandler(h) - return logger - - -class HParams(): - def __init__(self, **kwargs): - for k, v in kwargs.items(): - if type(v) == dict: - v = HParams(**v) - self[k] = v - - def keys(self): - return self.__dict__.keys() - - def items(self): - return self.__dict__.items() - - def values(self): - return self.__dict__.values() - - def __len__(self): - return len(self.__dict__) - - def __getitem__(self, key): - return getattr(self, key) - - def __setitem__(self, key, value): - return setattr(self, key, value) - - def __contains__(self, key): - return key in self.__dict__ - - def __repr__(self): - return self.__dict__.__repr__() diff --git a/spaces/digitalxingtong/Jiuxia-Bert-Vits2/README.md b/spaces/digitalxingtong/Jiuxia-Bert-Vits2/README.md deleted file mode 100644 index 1e88ad2655b2cd9bc0b237fef92c0088b3826926..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Jiuxia-Bert-Vits2/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AI九夏 -emoji: 🌟 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/dorkai/ChatUIPro/app/components/base/loading/style.css b/spaces/dorkai/ChatUIPro/app/components/base/loading/style.css deleted file mode 100644 index 40402e1a9d90ba136abb31a655f5b1a0e932cc5c..0000000000000000000000000000000000000000 --- a/spaces/dorkai/ChatUIPro/app/components/base/loading/style.css +++ /dev/null @@ -1,41 +0,0 @@ -.spin-animation path { - animation: custom 2s linear infinite; -} - -@keyframes custom { - 0% { - opacity: 0; - } - - 25% { - opacity: 0.1; - } - - 
50% { - opacity: 0.2; - } - - 75% { - opacity: 0.5; - } - - 100% { - opacity: 1; - } -} - -.spin-animation path:nth-child(1) { - animation-delay: 0s; -} - -.spin-animation path:nth-child(2) { - animation-delay: 0.5s; -} - -.spin-animation path:nth-child(3) { - animation-delay: 1s; -} - -.spin-animation path:nth-child(4) { - animation-delay: 1.5s; -} \ No newline at end of file diff --git a/spaces/dorkai/ChatUIPro/hooks/use-breakpoints.ts b/spaces/dorkai/ChatUIPro/hooks/use-breakpoints.ts deleted file mode 100644 index 1aab56a9fdbd2bfce3b52c940bd10be3eadc1a00..0000000000000000000000000000000000000000 --- a/spaces/dorkai/ChatUIPro/hooks/use-breakpoints.ts +++ /dev/null @@ -1,27 +0,0 @@ -'use client' -import React from 'react' - -export enum MediaType { - mobile = 'mobile', - tablet = 'tablet', - pc = 'pc', -} - -const useBreakpoints = () => { - const [width, setWidth] = React.useState(globalThis.innerWidth); - const media = (() => { - if (width <= 640) return MediaType.mobile; - if (width <= 768) return MediaType.tablet; - return MediaType.pc; - })(); - - React.useEffect(() => { - const handleWindowResize = () => setWidth(window.innerWidth); - window.addEventListener("resize", handleWindowResize); - return () => window.removeEventListener("resize", handleWindowResize); - }, []); - - return media; -} - -export default useBreakpoints \ No newline at end of file diff --git a/spaces/dorkai/ChatUIPro/tailwind.config.js b/spaces/dorkai/ChatUIPro/tailwind.config.js deleted file mode 100644 index 9b7b3acec9bf29f2f1451336c2a881717c920f6a..0000000000000000000000000000000000000000 --- a/spaces/dorkai/ChatUIPro/tailwind.config.js +++ /dev/null @@ -1,66 +0,0 @@ -/** @type {import('tailwindcss').Config} */ -module.exports = { - content: [ - './app/**/*.{js,ts,jsx,tsx}', - './components/**/*.{js,ts,jsx,tsx}', - ], - theme: { - typography: require('./typography'), - extend: { - colors: { - gray: { - 50: '#F9FAFB', - 100: '#F3F4F6', - 200: '#E5E7EB', - 300: '#D1D5DB', - 400: '#9CA3AF', - 500: '#6B7280', - 700: '#374151', - 800: '#1F2A37', - 900: '#111928', - }, - primary: { - 50: '#EBF5FF', - 100: '#E1EFFE', - 200: '#C3DDFD', - 300: '#A4CAFE', - 600: '#1C64F2', - 700: '#1A56DB', - }, - blue: { - 500: '#E1EFFE', - }, - green: { - 50: '#F3FAF7', - 100: '#DEF7EC', - 800: '#03543F', - - }, - yellow: { - 100: '#FDF6B2', - 800: '#723B13', - }, - purple: { - 50: '#F6F5FF', - }, - indigo: { - 25: '#F5F8FF', - 100: '#E0EAFF', - 600: '#444CE7' - } - }, - screens: { - 'mobile': '100px', - // => @media (min-width: 100px) { ... } - 'tablet': '640px', // 391 - // => @media (min-width: 600px) { ... } - 'pc': '769px', - // => @media (min-width: 769px) { ... } - }, - }, - }, - plugins: [ - require('@tailwindcss/typography'), - require('@tailwindcss/line-clamp'), - ], -} diff --git a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/Training-LoRAs.md b/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/Training-LoRAs.md deleted file mode 100644 index 3d75ec5aa2bc12e8c13d6a583bd9aefd118f04d7..0000000000000000000000000000000000000000 --- a/spaces/dorkai/text-generation-webui-main/text-generation-webui-main/docs/Training-LoRAs.md +++ /dev/null @@ -1,167 +0,0 @@ -## Training Your Own LoRAs - -The WebUI seeks to make training your own LoRAs as easy as possible. It comes down to just a few simple steps: - -### **Step 1**: Make a plan. -- What base model do you want to use? 
The LoRA you make has to be matched up to a single architecture (eg LLaMA-13B) and cannot be transferred to others (eg LLaMA-7B, StableLM, etc. would all be different). Derivatives of the same model (eg Alpaca finetune of LLaMA-13B) might be transferrable, but even then it's best to train exactly on what you plan to use. -- What model format do you want? At time of writing, 8-bit models are most stable, and 4-bit are supported but experimental. In the near future it is likely that 4-bit will be the best option for most users. -- What are you training it on? Do you want it to learn real information, a simple format, ...? - -### **Step 2**: Gather a dataset. -- If you use a dataset similar to the [Alpaca](https://github.com/gururise/AlpacaDataCleaned/blob/main/alpaca_data_cleaned.json) format, that is natively supported by the `Formatted Dataset` input in the WebUI, with premade formatter options. -- If you use a dataset that isn't matched to Alpaca's format, but uses the same basic JSON structure, you can make your own format file by copying `training/formats/alpaca-format.json` to a new file and [editing its content](#format-files). -- If you can get the dataset into a simple text file, that works too! You can train using the `Raw text file` input option. - - This means you can for example just copy/paste a chatlog/documentation page/whatever you want, shove it in a plain text file, and train on it. -- If you use a structured dataset not in this format, you may have to find an external way to convert it - or open an issue to request native support. - -### **Step 3**: Do the training. -- **3.1**: Load the WebUI, and your model. - - Make sure you don't have any LoRAs already loaded (unless you want to train for multi-LoRA usage). -- **3.2**: Open the `Training` tab at the top, `Train LoRA` sub-tab. -- **3.3**: Fill in the name lof the LoRA, select your dataset in the dataset options. -- **3.4**: Select other parameters to your preference. See [parameters below](#parameters). -- **3.5**: click `Start LoRA Training`, and wait. - - It can take a few hours for a large dataset, or just a few minute if doing a small run. - - You may want to monitor your [loss value](#loss) while it goes. - -### **Step 4**: Evaluate your results. -- Load the LoRA under the Models Tab. -- You can go test-drive it on the `Text generation` tab, or you can use the `Perplexity evaluation` sub-tab of the `Training` tab. -- If you used the `Save every n steps` option, you can grab prior copies of the model from sub-folders within the LoRA model's folder and try them instead. - -### **Step 5**: Re-run if you're unhappy. -- Make sure to unload the LoRA before training it. -- You can simply resume a prior run - use `Copy parameters from` to select your LoRA, and edit parameters. Note that you cannot change the `Rank` of an already created LoRA. - - If you want to resume from a checkpoint saved along the way, simply copy the contents of the checkpoint folder into the LoRA's folder. - - (Note: `adapter_model.bin` is the important file that holds the actual LoRA content). - - This will start Learning Rate and Steps back to the start. If you want to resume as if you were midway through, you can adjust your Learning Rate to the last reported LR in logs and reduce your epochs. -- Or, you can start over entirely if you prefer. -- If your model is producing corrupted outputs, you probably need to start over and use a lower Learning Rate. 
-- If your model isn't learning detailed information but you want it to, you might need to just run more epochs, or you might need a higher Rank. -- If your model is enforcing a format you didn't want, you may need to tweak your dataset, or start over and not train as far. - -## Format Files - -If using JSON formatted datasets, they are presumed to be in the following approximate format: - -```json -[ - { - "somekey": "somevalue", - "key2": "value2" - }, - { - // etc - } -] -``` - -Where the keys (eg `somekey`, `key2` above) are standardized, and relatively consistent across the dataset, and the values (eg `somevalue`, `value2`) contain the content actually intended to be trained. - -For Alpaca, the keys are `instruction`, `input`, and `output`, wherein `input` is sometimes blank. - -A simple format file for Alpaca to be used as a chat bot is: - -```json -{ - "instruction,output": "User: %instruction%\nAssistant: %output%", - "instruction,input,output": "User: %instruction%: %input%\nAssistant: %output%" -} -``` - -Note that the keys (eg `instruction,output`) are a comma-separated list of dataset keys, and the values are a simple string that use those keys with `%%`. - -So for example if a dataset has `"instruction": "answer my question"`, then the format file's `User: %instruction%\n` will be automatically filled in as `User: answer my question\n`. - -If you have different sets of key inputs, you can make your own format file to match it. This format-file is designed to be as simple as possible to enable easy editing to match your needs. - -## Parameters - -The basic purpose and function of each parameter is documented on-page in the WebUI, so read through them in the UI to understand your options. - -That said, here's a guide to the most important parameter choices you should consider: - -### VRAM - -- First, you must consider your VRAM availability. - - Generally, under default settings, VRAM usage for training with default parameters is very close to when generating text (with 1000+ tokens of context) (ie, if you can generate text, you can train LoRAs). - - Note: worse by default in the 4-bit monkeypatch currently. Reduce `Micro Batch Size` to `1` to restore this to expectations. - - If you have VRAM to spare, setting higher batch sizes will use more VRAM and get you better quality training in exchange. - - If you have large data, setting a higher cutoff length may be beneficial, but will cost significant VRAM. If you can spare some, set your batch size to `1` and see how high you can push your cutoff length. - - If you're low on VRAM, reducing batch size or cutoff length will of course improve that. - - Don't be afraid to just try it and see what happens. If it's too much, it will just error out, and you can lower settings and try again. - -### Rank - -- Second, you want to consider the amount of learning you want. - - For example, you may wish to just learn a dialogue format (as in the case of Alpaca) in which case setting a low `Rank` value (32 or lower) works great. - - Or, you might be training on project documentation you want the bot to understand and be able to understand questions about, in which case the higher the rank, the better. - - Generally, higher Rank = more precise learning = more total content learned = more VRAM usage while training. - -### Learning Rate and Epochs - -- Third, how carefully you want it to be learned. - - In other words, how okay or not you are with the model losing unrelated understandings. 
- - You can control this with 3 key settings: the Learning Rate, its scheduler, and your total epochs. - - The learning rate controls how much change is made to the model by each token it sees. - - It's in scientific notation normally, so for example `3e-4` means `3 * 10^-4` which is `0.0003`. The number after `e-` controls how many `0`s are in the number. - - Higher values let training run faster, but also are more likely to corrupt prior data in the model. - - You essentially have two variables to balance: the LR, and Epochs. - - If you make LR higher, you can set Epochs equally lower to match. High LR + low epochs = very fast, low quality training. - - If you make LR low, set epochs high. Low LR + high epochs = slow but high-quality training. - - The scheduler controls change-over-time as you train - it starts high, and then goes low. This helps balance getting data in, and having decent quality, at the same time. - - You can see graphs of the different scheduler options [in the HuggingFace docs here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_1/en/main_classes/optimizer_schedules#transformers.SchedulerType) - -## Loss - -When you're running training, the WebUI's console window will log reports that include, among other things, a numeric value named `Loss`. It will start as a high number, and gradually get lower and lower as it goes. - -"Loss" in the world of AI training theoretically means "how close is the model to perfect", with `0` meaning "absolutely perfect". This is calculated by measuring the difference between the model outputting exactly the text you're training it to output, and what it actually outputs. - -In practice, a good LLM should have a very complex variable range of ideas running in its artificial head, so a loss of `0` would indicate that the model has broken and forgotten to how think about anything other than what you trained it. - -So, in effect, Loss is a balancing game: you want to get it low enough that it understands your data, but high enough that it isn't forgetting everything else. Generally, if it goes below `1.0`, it's going to start forgetting its prior memories, and you should stop training. In some cases you may prefer to take it as low as `0.5` (if you want it to be very very predictable). Different goals have different needs, so don't be afraid to experiment and see what works best for you. - -Note: if you see Loss start at or suddenly jump to exactly `0`, it is likely something has gone wrong in your training process (eg model corruption). - -## Note: 4-Bit Monkeypatch - -The [4-bit LoRA monkeypatch](GPTQ-models-(4-bit-mode).md#using-loras-in-4-bit-mode) works for training, but has side effects: -- VRAM usage is higher currently. You can reduce the `Micro Batch Size` to `1` to compensate. -- Models do funky things. LoRAs apply themselves, or refuse to apply, or spontaneously error out, or etc. It can be helpful to reload base model or restart the WebUI between training/usage to minimize chances of anything going haywire. -- Loading or working with multiple LoRAs at the same time doesn't currently work. -- Generally, recognize and treat the monkeypatch as the dirty temporary hack it is - it works, but isn't very stable. It will get better in time when everything is merged upstream for full official support. - -## Legacy notes - -LoRA training was contributed by [mcmonkey4eva](https://github.com/mcmonkey4eva) in PR [#570](https://github.com/oobabooga/text-generation-webui/pull/570). 
- -### Using the original alpaca-lora code - -Kept here for reference. The Training tab has much more features than this method. - -``` -conda activate textgen -git clone https://github.com/tloen/alpaca-lora -``` - -Edit those two lines in `alpaca-lora/finetune.py` to use your existing model folder instead of downloading everything from decapoda: - -``` -model = LlamaForCausalLM.from_pretrained( - "models/llama-7b", - load_in_8bit=True, - device_map="auto", -) -tokenizer = LlamaTokenizer.from_pretrained( - "models/llama-7b", add_eos_token=True -) -``` - -Run the script with: - -``` -python finetune.py -``` - -It just works. It runs at 22.32s/it, with 1170 iterations in total, so about 7 hours and a half for training a LoRA. RTX 3090, 18153MiB VRAM used, drawing maximum power (350W, room heater mode). diff --git a/spaces/drift-ai/emoji-tagging/Makefile b/spaces/drift-ai/emoji-tagging/Makefile deleted file mode 100644 index 075e9a709827f57df977fd97584f235e555dde40..0000000000000000000000000000000000000000 --- a/spaces/drift-ai/emoji-tagging/Makefile +++ /dev/null @@ -1,3 +0,0 @@ -install: - poetry install - poetry run pip list --format=freeze > requirements.txt diff --git a/spaces/ds21/Q-TicTacToe/README.md b/spaces/ds21/Q-TicTacToe/README.md deleted file mode 100644 index 161f39f0cf97c96649b9736fc55043c69ad03fb3..0000000000000000000000000000000000000000 --- a/spaces/ds21/Q-TicTacToe/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Q TicTacToe -emoji: :) -colorFrom: gray -colorTo: red -sdk: streamlit -sdk_version: 1.2.0 -app_file: app.py -pinned: false ---- - -# Q-TicTacToe -A Quantam version of the Tic Tac Toe game diff --git a/spaces/eforebrahim/Cassava-Leaf-Disease-Classification/README.md b/spaces/eforebrahim/Cassava-Leaf-Disease-Classification/README.md deleted file mode 100644 index 87f15455cf1235fbf8dfd3b3a1971261695ccf74..0000000000000000000000000000000000000000 --- a/spaces/eforebrahim/Cassava-Leaf-Disease-Classification/README.md +++ /dev/null @@ -1,29 +0,0 @@ ---- -title: Cassava Leaf Disease Classification -emoji: ☘️ -colorFrom: gray -colorTo: gray -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false ---- -# Cassava Leaf Disease Classification Project - - -## Background Information -“As the second-largest provider of carbohydrates in Africa, cassava is a key food security crop grown by smallholder farmers because it can withstand harsh conditions. -At least 80% of household farms in Sub-Saharan Africa grow this starchy root, but viral diseases are major sources of poor yields. With the help of data science, it may be possible to identify common diseases so they can be treated.” - -## Data -The data contains about 21,000 images of Cassava plant belonging to 5 different categories (4 diseases and 1 healthy). -The dataset was made available by Makerere University AI Lab via Kaggle Competition. You can get the dataset from here: (https://lnkd.in/dxGUTcN4) - -## Modeling -Pre-trained model efficientnet version b2 is used for modeling. Dropout layers were added to prevent model from overfitting. Validation precision of 0.855, recall of 0.813, and accuracy of 0.831 is achieved. - -## Deployment -The web app is hosted on huggingface spaces using streamlit user interface. 
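                                      The Modeling section above mentions an EfficientNet-B2 backbone with dropout layers for the 5-class task (4 diseases + healthy). As a rough, hypothetical sketch only — the input size, dropout rate, and training settings are not stated in this README and are assumed here — such a head could look like this:

                                      ```python
                                      import tensorflow as tf

                                      # Hypothetical sketch: EfficientNet-B2 backbone with a dropout-regularized
                                      # 5-way softmax head (4 disease classes + healthy), as described above.
                                      # The image size, dropout rate, and optimizer are assumptions, not the
                                      # repository's actual values.
                                      def build_model(img_size=260, num_classes=5, dropout_rate=0.3):
                                          base = tf.keras.applications.EfficientNetB2(
                                              include_top=False, weights="imagenet",
                                              input_shape=(img_size, img_size, 3))
                                          x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
                                          x = tf.keras.layers.Dropout(dropout_rate)(x)  # helps prevent overfitting
                                          outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
                                          model = tf.keras.Model(base.input, outputs)
                                          model.compile(optimizer="adam",
                                                        loss="sparse_categorical_crossentropy",
                                                        metrics=["accuracy"])
                                          return model
                                      ```
                                      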
-#datascience #machinelearning #computervision #artificialintelligence - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/elkraken/Video-Object-Detection/app.py b/spaces/elkraken/Video-Object-Detection/app.py deleted file mode 100644 index d621ffdb8407864cf8c0e74c866737f580264e56..0000000000000000000000000000000000000000 --- a/spaces/elkraken/Video-Object-Detection/app.py +++ /dev/null @@ -1,293 +0,0 @@ -import gradio as gr -import os - -import argparse -import time -from pathlib import Path - -import cv2 -import torch -import torch.backends.cudnn as cudnn -from numpy import random - -from models.experimental import attempt_load -from utils.datasets import LoadStreams, LoadImages -from utils.general import check_img_size, check_requirements, check_imshow, non_max_suppression, apply_classifier, \ - scale_coords, xyxy2xywh, strip_optimizer, set_logging, increment_path -from utils.plots import plot_one_box -from utils.torch_utils import select_device, load_classifier, time_synchronized, TracedModel -from PIL import Image - -from sort import * - -from huggingface_hub import hf_hub_download - -def load_model(model_name): - model_path = hf_hub_download(repo_id=f"Yolov7/{model_name}", filename=f"{model_name}.pt") - - return model_path - - -model_names = ["yolov7"] - -models = {model_name: load_model(model_name) for model_name in model_names} - -################################## -# """Function to Draw Bounding boxes""" -def draw_boxes(img, bbox, identities=None, categories=None, confidences = None, names=None, colors = None): - for i, box in enumerate(bbox): - x1, y1, x2, y2 = [int(i) for i in box] - tl = opt.thickness or round(0.002 * (img.shape[0] + img.shape[1]) / 2) + 1 # line/font thickness - - cat = int(categories[i]) if categories is not None else 0 - id = int(identities[i]) if identities is not None else 0 - # conf = confidences[i] if confidences is not None else 0 - - color = colors[cat] - - if not opt.nobbox: - cv2.rectangle(img, (x1, y1), (x2, y2), color, tl) - - if not opt.nolabel: - label = str(id) + ":"+ names[cat] if identities is not None else f'{names[cat]} {confidences[i]:.2f}' - tf = max(tl - 1, 1) # font thickness - t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0] - c2 = x1 + t_size[0], y1 - t_size[1] - 3 - cv2.rectangle(img, (x1, y1), c2, color, -1, cv2.LINE_AA) # filled - cv2.putText(img, label, (x1, y1 - 2), 0, tl / 3, [225, 255, 255], thickness=tf, lineType=cv2.LINE_AA) - - - return img -################################## - - -def detect(save_img=True): - parser = argparse.ArgumentParser() - parser.add_argument('--weights', nargs='+', type=str, default='yolov7.pt', help='model.pt path(s)') - parser.add_argument('--source', type=str, default='inference/images', help='source') # file/folder, 0 for webcam - parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)') - parser.add_argument('--conf-thres', type=float, default=0.25, help='object confidence threshold') - parser.add_argument('--iou-thres', type=float, default=0.45, help='IOU threshold for NMS') - parser.add_argument('--device', default='', help='cuda device, i.e. 
0 or 0,1,2,3 or cpu') - parser.add_argument('--view-img', action='store_true', help='display results') - parser.add_argument('--save-txt', action='store_true', help='save results to *.txt') - parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels') - parser.add_argument('--nosave', action='store_true', help='do not save images/videos') - parser.add_argument('--classes', nargs='+', type=int, help='filter by class: --class 0, or --class 0 2 3') - parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS') - parser.add_argument('--augment', action='store_true', help='augmented inference') - parser.add_argument('--update', action='store_true', help='update all models') - parser.add_argument('--project', default='runs/detect', help='save results to project/name') - parser.add_argument('--name', default='exp', help='save results to project/name') - parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment') - parser.add_argument('--no-trace', action='store_true', help='don`t trace model') - - parser.add_argument('--track', action='store_true', help='run tracking') - parser.add_argument('--show-track', action='store_true', help='show tracked path') - parser.add_argument('--show-fps', action='store_true', help='show fps') - parser.add_argument('--thickness', type=int, default=2, help='bounding box and font size thickness') - parser.add_argument('--seed', type=int, default=1, help='random seed to control bbox colors') - parser.add_argument('--nobbox', action='store_true', help='don`t show bounding box') - parser.add_argument('--nolabel', action='store_true', help='don`t show label') - parser.add_argument('--unique-track-color', action='store_true', help='show each track in unique color') - - opt = parser.parse_args() - np.random.seed(opt.seed) - - sort_tracker = Sort(max_age=5, - min_hits=2, - iou_threshold=0.2) - - source, weights, view_img, save_txt, imgsz, trace = opt.source, opt.weights, opt.view_img, opt.save_txt, opt.img_size, not opt.no_trace - save_img = not opt.nosave and not source.endswith('.txt') # save inference images - webcam = source.isnumeric() or source.endswith('.txt') or source.lower().startswith( - ('rtsp://', 'rtmp://', 'http://', 'https://')) - save_dir = Path(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok)) # increment run - if not opt.nosave: - (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir - - # Initialize - set_logging() - device = select_device(opt.device) - half = device.type != 'cpu' # half precision only supported on CUDA - - # Load model - model = attempt_load(weights, map_location=device) # load FP32 model - stride = int(model.stride.max()) # model stride - imgsz = check_img_size(imgsz, s=stride) # check img_size - - if trace: - model = TracedModel(model, device, opt.img_size) - - if half: - model.half() # to FP16 - - # Second-stage classifier - classify = False - if classify: - modelc = load_classifier(name='resnet101', n=2) # initialize - modelc.load_state_dict(torch.load('weights/resnet101.pt', map_location=device)['model']).to(device).eval() - - # Set Dataloader - vid_path, vid_writer = None, None - if webcam: - view_img = check_imshow() - cudnn.benchmark = True # set True to speed up constant image size inference - dataset = LoadStreams(source, img_size=imgsz, stride=stride) - else: - dataset = LoadImages(source, img_size=imgsz, stride=stride) - - # Get names and colors - names = 
model.module.names if hasattr(model, 'module') else model.names - colors = [[random.randint(0, 255) for _ in range(3)] for _ in names] - - # Run inference - if device.type != 'cpu': - model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters()))) # run once - old_img_w = old_img_h = imgsz - old_img_b = 1 - - t0 = time.time() - ################################### - startTime = 0 - ################################### - for path, img, im0s, vid_cap in dataset: - img = torch.from_numpy(img).to(device) - img = img.half() if half else img.float() # uint8 to fp16/32 - img /= 255.0 # 0 - 255 to 0.0 - 1.0 - if img.ndimension() == 3: - img = img.unsqueeze(0) - - # Warmup - if device.type != 'cpu' and (old_img_b != img.shape[0] or old_img_h != img.shape[2] or old_img_w != img.shape[3]): - old_img_b = img.shape[0] - old_img_h = img.shape[2] - old_img_w = img.shape[3] - for i in range(3): - model(img, augment=opt.augment)[0] - - # Inference - t1 = time_synchronized() - pred = model(img, augment=opt.augment)[0] - t2 = time_synchronized() - - # Apply NMS - pred = non_max_suppression(pred, opt.conf_thres, opt.iou_thres, classes=opt.classes, agnostic=opt.agnostic_nms) - t3 = time_synchronized() - - # Apply Classifier - if classify: - pred = apply_classifier(pred, modelc, img, im0s) - - # Process detections - for i, det in enumerate(pred): # detections per image - if webcam: # batch_size >= 1 - p, s, im0, frame = path[i], '%g: ' % i, im0s[i].copy(), dataset.count - else: - p, s, im0, frame = path, '', im0s, getattr(dataset, 'frame', 0) - - p = Path(p) # to Path - save_path = str(save_dir / p.name) # img.jpg - txt_path = str(save_dir / 'labels' / p.stem) + ('' if dataset.mode == 'image' else f'_{frame}') # img.txt - gn = torch.tensor(im0.shape)[[1, 0, 1, 0]] # normalization gain whwh - if len(det): - # Rescale boxes from img_size to im0 size - det[:, :4] = scale_coords(img.shape[2:], det[:, :4], im0.shape).round() - - # Print results - for c in det[:, -1].unique(): - n = (det[:, -1] == c).sum() # detections per class - s += f"{n} {names[int(c)]}{'s' * (n > 1)}, " # add to string - - dets_to_sort = np.empty((0,6)) - # NOTE: We send in detected object class too - for x1,y1,x2,y2,conf,detclass in det.cpu().detach().numpy(): - dets_to_sort = np.vstack((dets_to_sort, - np.array([x1, y1, x2, y2, conf, detclass]))) - - - if opt.track: - - tracked_dets = sort_tracker.update(dets_to_sort, opt.unique_track_color) - tracks =sort_tracker.getTrackers() - - # draw boxes for visualization - if len(tracked_dets)>0: - bbox_xyxy = tracked_dets[:,:4] - identities = tracked_dets[:, 8] - categories = tracked_dets[:, 4] - confidences = None - - if opt.show_track: - #loop over tracks - for t, track in enumerate(tracks): - - track_color = colors[int(track.detclass)] if not opt.unique_track_color else sort_tracker.color_list[t] - - [cv2.line(im0, (int(track.centroidarr[i][0]), - int(track.centroidarr[i][1])), - (int(track.centroidarr[i+1][0]), - int(track.centroidarr[i+1][1])), - track_color, thickness=opt.thickness) - for i,_ in enumerate(track.centroidarr) - if i < len(track.centroidarr)-1 ] - else: - bbox_xyxy = dets_to_sort[:,:4] - identities = None - categories = dets_to_sort[:, 5] - confidences = dets_to_sort[:, 4] - - im0 = draw_boxes(im0, bbox_xyxy, identities, categories, confidences, names, colors) - - # Print time (inference + NMS) - print(f'{s}Done. 
({(1E3 * (t2 - t1)):.1f}ms) Inference, ({(1E3 * (t3 - t2)):.1f}ms) NMS') - - # Stream results - ###################################################### - if dataset.mode != 'image' and opt.show_fps: - currentTime = time.time() - - fps = 1/(currentTime - startTime) - startTime = currentTime - cv2.putText(im0, "FPS: " + str(int(fps)), (20, 70), cv2.FONT_HERSHEY_PLAIN, 2, (0,255,0),2) - - ####################################################### - if view_img: - cv2.imshow(str(p), im0) - cv2.waitKey(1) # 1 millisecond - - # Save results (image with detections) - if save_img: - if dataset.mode == 'image': - cv2.imwrite(save_path, im0) - print(f" The image with the result is saved in: {save_path}") - else: # 'video' or 'stream' - if vid_path != save_path: # new video - vid_path = save_path - if isinstance(vid_writer, cv2.VideoWriter): - vid_writer.release() # release previous video writer - if vid_cap: # video - fps = vid_cap.get(cv2.CAP_PROP_FPS) - w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH)) - h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT)) - else: # stream - fps, w, h = 30, im0.shape[1], im0.shape[0] - save_path += '.mp4' - vid_writer = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*'mp4v'), fps, (w, h)) - vid_writer.write(im0) - - if save_txt or save_img: - s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else '' - #print(f"Results saved to {save_dir}{s}") - - print(f'Done. ({time.time() - t0:.3f}s)') - return img - - - -desc = "demo for WongKinYiu/yolov7 Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors" -gr.Interface(detect, - inputs = [gr.Video(format="mp4")], - outputs = gr.Video(format="mp4"), - title="Yolov7",description=desc).launch() -# gr.Interface(detect,[gr.Image(type="pil"),gr.Dropdown(choices=model_names)], gr.Image(type="pil"),title="Yolov7",examples=[["horses.jpeg", "yolov7"]],description="demo for WongKinYiu/yolov7 Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors").launch() \ No newline at end of file diff --git a/spaces/erc/entity-referring-classifier/ercbcm/model_loader.py b/spaces/erc/entity-referring-classifier/ercbcm/model_loader.py deleted file mode 100644 index 9cafd0dfe0199ed4e9bee10127be8e01500293ce..0000000000000000000000000000000000000000 --- a/spaces/erc/entity-referring-classifier/ercbcm/model_loader.py +++ /dev/null @@ -1,8 +0,0 @@ -import torch - -def load(load_path, model, device): - if load_path == None: return - state_dict = torch.load(load_path, map_location=device) - model.load_state_dict(state_dict['model_state_dict']) - print('[LOAD] Model has been loaded successfully from \'{}\''.format(load_path)) - return state_dict['valid_loss'] \ No newline at end of file diff --git a/spaces/eson/tokenizer-arena/vocab/chinese_alpaca_lora_7b/README.md b/spaces/eson/tokenizer-arena/vocab/chinese_alpaca_lora_7b/README.md deleted file mode 100644 index 3215d800239ba6e89bd1f3a257983222fda3e996..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/chinese_alpaca_lora_7b/README.md +++ /dev/null @@ -1,4 +0,0 @@ - - -来自 chinese-alpaca-lora-7b-merge-hf - diff --git a/spaces/fatiXbelha/sd/Black Lives Matter MP3 Listen and Download the Anthem of a Movement.md b/spaces/fatiXbelha/sd/Black Lives Matter MP3 Listen and Download the Anthem of a Movement.md deleted file mode 100644 index 155c510a97a3ea9c8cf4ef34edd3350ee888a156..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Black Lives Matter MP3 Listen and 
Download the Anthem of a Movement.md +++ /dev/null @@ -1,150 +0,0 @@ -
-

Download Black Lives Matter MP3: How to Support the Movement Through Music

-

If you are looking for a way to show your solidarity with the Black Lives Matter movement, one of the easiest and most effective ways is to download and listen to music that supports the cause. Music is a powerful medium that can inspire, educate, and motivate people to take action against racism, injustice, and oppression. In this article, we will explain what Black Lives Matter is, why it matters, how music can help, and where you can find and download free Black Lives Matter MP3 songs.

-

download black lives matter mp3


Download File ✵✵✵ https://urllie.com/2uNz5d



-

What is Black Lives Matter and why is it important?

-

Black Lives Matter (BLM) is an international social movement that seeks to highlight racism, discrimination, and racial inequality experienced by black people. Its primary concerns are incidents of police brutality and racially motivated violence against black people. The name Black Lives Matter signals condemnation of the unjust killings of black people by police (black people are far more likely to be killed by police in the United States than white people) and the demand that society value the lives and humanity of black people as much as it values the lives and humanity of white people. BLM activists have held large and influential protests in cities across the United States as well as internationally. A decentralized grassroots movement, Black Lives Matter is led by activists in local chapters who organize their own campaigns and programs. The chapters are affiliated with the Black Lives Matter Global Network Foundation, a nonprofit civil rights organization that is active in the United States, Canada, and the United Kingdom.

-

The origin and goals of the movement

-

BLM was cofounded in 2013 as an online movement (using the hashtag #BlackLivesMatter on social media) by three black community organizers—Patrisse Khan-Cullors, Alicia Garza, and Opal Tometi. They formed BLM after George Zimmerman, a man of German and Peruvian descent, was acquitted on charges stemming from his fatal shooting of Trayvon Martin, an unarmed black teenager, in Sanford, Florida, in February 2012. Zimmerman, a neighbourhood-watch volunteer, had seen Martin walking in his neighbourhood and called the police because he thought Martin looked “suspicious.” Although Zimmerman was told not to do anything, he followed Martin, got into an argument with him, and shot and killed him. When law enforcement arrived, Zimmerman claimed that he had been assaulted by Martin and fired in self-defense. Zimmerman remained free for weeks, but, as the shooting gained national attention, demonstrations demanding his prosecution were held in cities across the United States.

-

Support for BLM grew following other police killings, including Eric Garner, who died in a chokehold, Michael Brown, who was killed by an officer who said he acted in self-defense, Tamir Rice, who was shot while playing with a toy gun, Breonna Taylor, who was shot in her own home during a botched raid, and George Floyd, an unarmed black man who was murdered by a police officer who knelt on his neck for nearly nine minutes. BLM also advocates for justice for other victims of racial violence, such as Ahmaud Arbery, who was chased and killed by three white men while jogging, and Elijah McClain, who died after being put in a chokehold by police and injected with a sedative by paramedics.

-

The goals of BLM are to end systemic racism, police brutality, and racial violence; to affirm the dignity and worth of black lives; to create a more inclusive and equitable society; and to empower black communities to achieve social, economic, and political justice. BLM also supports the rights and liberation of other marginalized groups, such as LGBTQ+ people, women, immigrants, and indigenous people.

-

The impact and challenges of the movement

-

BLM has had a significant impact on raising awareness and sparking dialogue about the issues of racism and police violence in the United States and around the world. BLM has also influenced policy changes at the local, state, and federal levels, such as banning chokeholds, requiring body cameras, establishing civilian oversight boards, and reallocating funds from police departments to social services. BLM has also inspired solidarity movements in other countries, such as the United Kingdom, France, Germany, Australia, Brazil, and Nigeria, where people have protested against their own forms of racial discrimination and oppression.

-

However, BLM also faces many challenges and criticisms from various sources. Some of these include:

-

                                      
-
    -
  • The lack of a clear leadership structure or agenda, which makes it difficult to coordinate actions and communicate demands.
  • -
  • The resistance and backlash from some segments of society, especially white supremacists, who view BLM as a threat to their privilege and power.
  • -
  • The misrepresentation and distortion of the movement by some media outlets and politicians, who portray BLM as violent, radical, or anti-police.
  • -
  • The co-optation and commodification of the movement by some corporations and celebrities, who use BLM as a marketing strategy or a token gesture without making meaningful changes or commitments.
  • -
-

How can music help spread the message of Black Lives Matter?

-

Music is one of the most effective ways to spread the message of Black Lives Matter because it can reach a large and diverse audience, convey emotions and stories that resonate with people, and inspire them to take action. Music is also a form of cultural expression that reflects the identity, history, and struggles of black people. Music can help educate people about the issues that BLM addresses, challenge stereotypes and prejudices, celebrate black excellence and resilience, and demand justice and accountability.

-

The power and influence of music as a form of protest and expression

-

Music has always been a vital part of social movements throughout history. Music can serve as a way of protesting against injustice, expressing dissent or dissatisfaction, raising awareness or consciousness, mobilizing or organizing people, creating solidarity or community, or offering hope or healing. Music can also influence public opinion, shape cultural norms, or challenge dominant narratives.

-

Some examples of how music has been used as a form of protest and expression include:

-
    -
  • The songs of the civil rights movement in the 1950s and 1960s, such as "We Shall Overcome," "Lift Every Voice and Sing," "A Change Is Gonna Come," and "Strange Fruit," which articulated the aspirations and grievances of black Americans fighting for equality and freedom.
  • -
  • The songs of the anti-war movement in the 1960s and 1970s, such as "Blowin' in the Wind," "Give Peace a Chance," "Fortunate Son," and "War," which criticized the US involvement in the Vietnam War and advocated for peace and justice.
  • -
  • The songs of the hip-hop movement in the 1980s and 1990s, such as "The Message," "Fight the Power," "Fuck tha Police," and "Changes," which exposed the realities and challenges of urban life for black youth, such as poverty, crime, violence, police brutality, and racism.
  • -
  • The songs of the global justice movement in the 1990s and 2000s, such as "Zombie," "They Don't Care About Us," "Where Is the Love?" and "American Idiot," which denounced the effects of globalization, neoliberalism, imperialism, and militarism on human rights and the environment.
  • -
-

The examples and benefits of using music as a tool for activism and education

-

Music can also be used as a tool for activism and education by creating songs that support the goals and values of BLM, by sharing or promoting songs that raise awareness about BLM, or by using songs as a way of teaching or learning about BLM. Music can also provide a platform for black artists to express their perspectives and experiences, to amplify their voices and messages, and to showcase their creativity and talent.

-

Some examples of how music can be used as a tool for activism and education include:

-
    -
  • Creating original songs that address the issues or themes of BLM, such as "I Can't Breathe" by H.E.R., "The Bigger Picture" by Lil Baby, "Black Parade" by Beyoncé, and "This Is America" by Childish Gambino, which have become anthems for the movement.
  • -
  • Sharing or promoting songs that support BLM on social media, playlists, podcasts, radio stations, or streaming services, such as Spotify's Black Lives Matter playlist, which features songs from various genres and eras that celebrate black culture and history.
  • -
  • Using songs as a way of teaching or learning about BLM in classrooms, workshops, seminars, or online courses, such as Harvard University's course on "The Art of Black Lives Matter," which explores how music and other forms of art have shaped the movement.
  • -
-

Where can you download free Black Lives Matter MP3 songs?

-

If you want to download free Black Lives Matter MP3 songs, there are many websites and platforms that offer a variety of options. However, not all of them are legal, safe, or ethical. Some of them may violate the intellectual property rights of the artists or expose your device to viruses or malware. Therefore, you need to be careful and selective when choosing where to download free music online.

-

The best websites and platforms to find and download free music that supports the movement

-

Some of the best websites and platforms to find and download free music that supports BLM are:

- - - - - -
                                      <tr><th>Name</th><th>Description</th><th>Link</th></tr>
                                      <tr><td>Bandcamp</td><td>A website that allows independent artists to sell their music directly to fans. Many artists offer some or all of their songs for free or for a pay-what-you-want price. Bandcamp also waives its revenue share on the first Friday of every month to support artists during the COVID-19 pandemic. Bandcamp has a section dedicated to BLM where you can find hundreds of albums and tracks that support the movement.</td><td>https://bandcamp.com</td></tr>
                                      <tr><td>SoundCloud</td><td>A website that allows anyone to upload, stream, and download music for free. SoundCloud has a large and diverse community of artists and listeners who share their music online. SoundCloud has a playlist called "Black Lives Matter: Sounds of Protest" that features songs from various genres and artists that express solidarity with BLM.</td><td>https://soundcloud.com</td></tr>
                                      <tr><td>Noisetrade</td><td>A website that allows artists to give away their music for free in exchange for fans' email addresses and postal codes. Noisetrade has a section called "Black Voices" that showcases albums and songs from black artists across different genres. Noisetrade also encourages fans to tip the artists or donate to causes they support.</td><td>https://noisetrade.com</td></tr>
                                      

The tips and precautions to follow when downloading free music online

-

While downloading free music online can be a great way to support BLM and enjoy some amazing tunes, you also need to be aware of some potential risks and problems. Here are some tips and precautions to follow when downloading free music online:

-
    -
  • Always check the source and the quality of the music before downloading. Make sure the website or platform is reputable, reliable, and secure. Avoid websites that look suspicious, have pop-up ads, or ask for personal information.
  • -
  • Always respect the rights and wishes of the artists. Do not download or share music that is not authorized or licensed by the artists. Do not use the music for commercial purposes or modify it without permission.
  • -
  • Always scan the files for viruses or malware before opening or playing them. Use a trusted antivirus software and update it regularly. Do not open or run any files that have strange extensions or names.
  • -
  • Always backup your music files and devices. Downloading free music online can sometimes cause errors, crashes, or corruption of your files or devices. Make sure you have a backup copy of your music and other important data in case something goes wrong.
  • -
-

Conclusion

-

Downloading free Black Lives Matter MP3 songs is a simple and fun way to show your support for the movement and to enjoy some awesome music. Music can help you learn more about the issues and challenges that BLM addresses, as well as celebrate the diversity and beauty of black culture and history. Music can also inspire you to take action and join the fight for justice and equality. However, you also need to be careful and responsible when downloading free music online, and respect the rights and wishes of the artists who create it.

-

We hope this article has given you some useful information and resources on how to download free Black Lives Matter MP3 songs. If you have any questions or comments, feel free to leave them below. And remember, black lives matter!

-

FAQs

-

What are some of the most popular Black Lives Matter songs?

-

There are many songs that have been created or used to support BLM, but some of the most popular ones include:

-
    - "Alright" by Kendrick Lamar, which became an anthem for BLM after it was released in 2015. The song features the chorus "We gon' be alright," which expresses hope and resilience in the face of adversity.
    - "Freedom" by Beyoncé featuring Kendrick Lamar, which was performed at the 2016 BET Awards with a powerful tribute to BLM. The song celebrates the struggle and liberation of black people throughout history.
    - "This Is America" by Childish Gambino, which won four Grammy Awards in 2019 for its provocative commentary on racism, violence, and consumerism in America. The song's video features shocking imagery and symbolism that references various incidents of racial injustice.
    - "Say It Loud - I'm Black and I'm Proud" by James Brown, which was released in 1968 during the civil rights movement. The song is considered one of the first funk songs and one of the most influential songs in black music history, and its title became a slogan for black pride and empowerment.
    - "Strange Fruit" by Billie Holiday, which was recorded in 1939 and is widely regarded as one of the first protest songs in American music history. The song exposes the horror of lynching, a form of racial terrorism that killed thousands of black people in the United States.
    
-

How can I donate or contribute to the Black Lives Matter movement?

-

There are many ways you can donate or contribute to BLM, such as:

-
    - Donating money to BLM organizations or causes, such as the Black Lives Matter Global Network Foundation, the NAACP Legal Defense Fund, or local bail funds.
    - Donating time or skills to BLM campaigns or programs, such as volunteering, organizing, educating, or advocating.
    - Donating goods or services to BLM communities or events, such as food, water, medical supplies, legal assistance, or transportation.
    
-

You can find more information on how to donate or contribute to BLM on their official website: https://blacklivesmatter.com/

-

How can I learn more about the history and issues of racism and police brutality?

-

There are many resources you can use to learn more about the history and issues of racism and police brutality, such as:

-
    - Books that explore the history and impact of racism and police brutality on black people in America, such as The New Jim Crow by Michelle Alexander, Between the World and Me by Ta-Nehisi Coates, How to Be an Antiracist by Ibram X. Kendi, or The End of Policing by Alex S. Vitale.
    - Documentaries that examine the causes and consequences of racism and police brutality on black people in America, such as 13th by Ava DuVernay, I Am Not Your Negro by Raoul Peck, or The Death and Life of Marsha P. Johnson by David France.
    - Podcasts that discuss the current and historical issues of racism and police brutality on black people in America, such as Code Switch by NPR, 1619 by The New York Times, or Pod Save the People by Crooked Media.
    
-

How can I join or organize a Black Lives Matter protest or event in my area?

-

There are many ways you can join or organize a BLM protest or event in your area, such as:

-
    - Following BLM social media accounts or websites to stay updated on the latest news and events related to the movement.
    - Contacting your local BLM chapter or affiliate to find out how you can get involved or support their work.
    - Attending or hosting a BLM rally, march, vigil, or workshop in your area. Make sure you follow the safety guidelines and protocols for COVID-19 prevention and protection.
    - Creating or signing a BLM petition, letter, or statement to demand change or action from your local authorities or representatives.
    
-

How can I support Black artists and businesses in my community?

-

There are many ways you can support Black artists and businesses in your community, such as:

-
    - Purchasing or streaming their music, books, art, or other products. You can also leave positive reviews, ratings, or feedback for them online.
    - Following or subscribing to their social media accounts, websites, blogs, podcasts, or newsletters. You can also share their content with your friends, family, or network.
    - Attending or sponsoring their shows, exhibitions, performances, or events. You can also invite them to speak, teach, or collaborate with you or your organization.
    - Donating or investing in their projects, campaigns, or causes. You can also offer them mentorship, guidance, or resources.
    

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/quantization/base.py b/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/quantization/base.py deleted file mode 100644 index 1b16c130d266fbd021d3fc29bb9f98c33dd3c588..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/SplitTrack2MusicGen/audiocraft/quantization/base.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Base class for all quantizers. -""" - -from dataclasses import dataclass, field -import typing as tp - -import torch -from torch import nn - - -@dataclass -class QuantizedResult: - x: torch.Tensor - codes: torch.Tensor - bandwidth: torch.Tensor # bandwidth in kb/s used, per batch item. - penalty: tp.Optional[torch.Tensor] = None - metrics: dict = field(default_factory=dict) - - -class BaseQuantizer(nn.Module): - """Base class for quantizers. - """ - - def forward(self, x: torch.Tensor, frame_rate: int) -> QuantizedResult: - """ - Given input tensor x, returns first the quantized (or approximately quantized) - representation along with quantized codes, bandwidth, and any penalty term for the loss. - Finally, this returns a dict of metrics to update logging etc. - Frame rate must be passed so that the bandwidth is properly computed. - """ - raise NotImplementedError() - - def encode(self, x: torch.Tensor) -> torch.Tensor: - """Encode a given input tensor with the specified sample rate at the given bandwidth. - """ - raise NotImplementedError() - - def decode(self, codes: torch.Tensor) -> torch.Tensor: - """Decode the given codes to the quantized representation. - """ - raise NotImplementedError() - - @property - def total_codebooks(self): - """Total number of codebooks. - """ - raise NotImplementedError() - - @property - def num_codebooks(self): - """Number of active codebooks. - """ - raise NotImplementedError() - - def set_num_codebooks(self, n: int): - """Set the number of active codebooks. - """ - raise NotImplementedError() - - -class DummyQuantizer(BaseQuantizer): - """Fake quantizer that actually does not perform any quantization. - """ - def __init__(self): - super().__init__() - - def forward(self, x: torch.Tensor, frame_rate: int): - q = x.unsqueeze(1) - return QuantizedResult(x, q, torch.tensor(q.numel() * 32 * frame_rate / 1000 / len(x)).to(x)) - - def encode(self, x: torch.Tensor) -> torch.Tensor: - """Encode a given input tensor with the specified sample rate at the given bandwidth. - In the case of the DummyQuantizer, the codes are actually identical - to the input and resulting quantized representation as no quantization is done. - """ - return x.unsqueeze(1) - - def decode(self, codes: torch.Tensor) -> torch.Tensor: - """Decode the given codes to the quantized representation. - In the case of the DummyQuantizer, the codes are actually identical - to the input and resulting quantized representation as no quantization is done. - """ - return codes.squeeze(1) - - @property - def total_codebooks(self): - """Total number of codebooks. - """ - return 1 - - @property - def num_codebooks(self): - """Total number of codebooks. - """ - return self.total_codebooks - - def set_num_codebooks(self, n: int): - """Set the number of active codebooks. 
- """ - raise AttributeError("Cannot override the number of codebooks for the dummy quantizer") diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/punycode.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/punycode.d.ts deleted file mode 100644 index 87ebbb90483aef0b987fb4c22d78031113fed576..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/@types/node/punycode.d.ts +++ /dev/null @@ -1,117 +0,0 @@ -/** - * **The version of the punycode module bundled in Node.js is being deprecated.**In a future major version of Node.js this module will be removed. Users - * currently depending on the `punycode` module should switch to using the - * userland-provided [Punycode.js](https://github.com/bestiejs/punycode.js) module instead. For punycode-based URL - * encoding, see `url.domainToASCII` or, more generally, the `WHATWG URL API`. - * - * The `punycode` module is a bundled version of the [Punycode.js](https://github.com/bestiejs/punycode.js) module. It - * can be accessed using: - * - * ```js - * const punycode = require('punycode'); - * ``` - * - * [Punycode](https://tools.ietf.org/html/rfc3492) is a character encoding scheme defined by RFC 3492 that is - * primarily intended for use in Internationalized Domain Names. Because host - * names in URLs are limited to ASCII characters only, Domain Names that contain - * non-ASCII characters must be converted into ASCII using the Punycode scheme. - * For instance, the Japanese character that translates into the English word,`'example'` is `'例'`. The Internationalized Domain Name, `'例.com'` (equivalent - * to `'example.com'`) is represented by Punycode as the ASCII string`'xn--fsq.com'`. - * - * The `punycode` module provides a simple implementation of the Punycode standard. - * - * The `punycode` module is a third-party dependency used by Node.js and - * made available to developers as a convenience. Fixes or other modifications to - * the module must be directed to the [Punycode.js](https://github.com/bestiejs/punycode.js) project. - * @deprecated Since v7.0.0 - Deprecated - * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/punycode.js) - */ -declare module 'punycode' { - /** - * The `punycode.decode()` method converts a [Punycode](https://tools.ietf.org/html/rfc3492) string of ASCII-only - * characters to the equivalent string of Unicode codepoints. - * - * ```js - * punycode.decode('maana-pta'); // 'mañana' - * punycode.decode('--dqo34k'); // '☃-⌘' - * ``` - * @since v0.5.1 - */ - function decode(string: string): string; - /** - * The `punycode.encode()` method converts a string of Unicode codepoints to a [Punycode](https://tools.ietf.org/html/rfc3492) string of ASCII-only characters. - * - * ```js - * punycode.encode('mañana'); // 'maana-pta' - * punycode.encode('☃-⌘'); // '--dqo34k' - * ``` - * @since v0.5.1 - */ - function encode(string: string): string; - /** - * The `punycode.toUnicode()` method converts a string representing a domain name - * containing [Punycode](https://tools.ietf.org/html/rfc3492) encoded characters into Unicode. Only the [Punycode](https://tools.ietf.org/html/rfc3492) encoded parts of the domain name are be - * converted. 
- * - * ```js - * // decode domain names - * punycode.toUnicode('xn--maana-pta.com'); // 'mañana.com' - * punycode.toUnicode('xn----dqo34k.com'); // '☃-⌘.com' - * punycode.toUnicode('example.com'); // 'example.com' - * ``` - * @since v0.6.1 - */ - function toUnicode(domain: string): string; - /** - * The `punycode.toASCII()` method converts a Unicode string representing an - * Internationalized Domain Name to [Punycode](https://tools.ietf.org/html/rfc3492). Only the non-ASCII parts of the - * domain name will be converted. Calling `punycode.toASCII()` on a string that - * already only contains ASCII characters will have no effect. - * - * ```js - * // encode domain names - * punycode.toASCII('mañana.com'); // 'xn--maana-pta.com' - * punycode.toASCII('☃-⌘.com'); // 'xn----dqo34k.com' - * punycode.toASCII('example.com'); // 'example.com' - * ``` - * @since v0.6.1 - */ - function toASCII(domain: string): string; - /** - * @deprecated since v7.0.0 - * The version of the punycode module bundled in Node.js is being deprecated. - * In a future major version of Node.js this module will be removed. - * Users currently depending on the punycode module should switch to using - * the userland-provided Punycode.js module instead. - */ - const ucs2: ucs2; - interface ucs2 { - /** - * @deprecated since v7.0.0 - * The version of the punycode module bundled in Node.js is being deprecated. - * In a future major version of Node.js this module will be removed. - * Users currently depending on the punycode module should switch to using - * the userland-provided Punycode.js module instead. - */ - decode(string: string): number[]; - /** - * @deprecated since v7.0.0 - * The version of the punycode module bundled in Node.js is being deprecated. - * In a future major version of Node.js this module will be removed. - * Users currently depending on the punycode module should switch to using - * the userland-provided Punycode.js module instead. - */ - encode(codePoints: ReadonlyArray): string; - } - /** - * @deprecated since v7.0.0 - * The version of the punycode module bundled in Node.js is being deprecated. - * In a future major version of Node.js this module will be removed. - * Users currently depending on the punycode module should switch to using - * the userland-provided Punycode.js module instead. 
- */ - const version: string; -} -declare module 'node:punycode' { - export * from 'punycode'; -} diff --git a/spaces/fffiloni/lama-video-watermark-remover/saicinpainting/evaluation/masks/countless/__init__.py b/spaces/fffiloni/lama-video-watermark-remover/saicinpainting/evaluation/masks/countless/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/floriankrempl/mtg_rules_bot/mtg/data_handler/spacy_utils.py b/spaces/floriankrempl/mtg_rules_bot/mtg/data_handler/spacy_utils.py deleted file mode 100644 index df35019fdd14687991aa6a7e8399e3249c06c771..0000000000000000000000000000000000000000 --- a/spaces/floriankrempl/mtg_rules_bot/mtg/data_handler/spacy_utils.py +++ /dev/null @@ -1,103 +0,0 @@ -# %% -import spacy -from spacy.language import Language -from spaczz.matcher import FuzzyMatcher -from spacy.tokens import Span, Doc -from spacy.pipeline.functions import merge_entities -from mtg.utils.logging import get_logger - -logger = get_logger(__name__) - -BLOCK_LIST = [ - "commander", - "flying", - "strategy", - "consider", - "will", - "vigilance", - "lifelink", - "remove", - "disrupt", - "deal damage", - "sacrifice", - "sacrificed", - "persist", - "battlefield", - "sorry", - "flash", -] - -Doc.set_extension("card_names", default=[]) - - -def load_spacy_model(cards: list[str]): - """loads new spacy model""" - # load model - nlp = spacy.blank("en") - matcher = FuzzyMatcher(nlp.vocab, fuzzy_func="quick", min_r1=93, min_r2=93) - - # set up matcher - print("setting up matcher...") - docs = nlp.pipe(cards) - for doc, card_name in zip(docs, cards): - card_docs = [doc] - if "," in card_name: - short_name = card_name.split(",")[0] - short_name_doc = nlp(short_name) - card_docs.append(short_name_doc) - if "//" in card_name: - both_sides = card_name.split("//") - side_docs = nlp.pipe(both_sides) - card_docs.extend(side_docs) - matcher.add(card_name, card_docs) - - @Language.component("card_name_matcher") - def matcher_component(doc): - matches = matcher(doc) - entities: list[Span] = [] - logger.info(f"matched {len(matches)} cards: {matches}") - for card_name, start, end, ratio, pattern in matches: - if doc[start:end].text.lower() not in BLOCK_LIST: - entities.append(Span(doc, start, end, card_name)) - - doc._.card_names = list(set([entity.label_ for entity in entities])) - doc.ents = list(spacy.util.filter_spans(entities)) - logger.info(f"added cards: {doc._.card_names}") - return doc - - nlp.add_pipe("card_name_matcher", last=True) - nlp.add_pipe("merge_entities", last=True) - return nlp - - -def match_cards(text, cards): - nlp = spacy.blank("en") - matcher = FuzzyMatcher(nlp.vocab, fuzzy_func="quick", min_r1=93, min_r2=93) - - # add cards to matcher - docs = nlp.pipe([card.name for card in cards]) - for doc, card in zip(docs, cards): - card_docs = [doc] - if "," in card.name: - short_name = card.name.split(",")[0] - short_name_doc = nlp(short_name) - card_docs.append(short_name_doc) - matcher.add(card.name, card_docs) - - # match cards - doc = nlp(text) - matches = matcher(doc) - entities: list[Span] = [] - logger.info(f"matched {len(matches)} cards: {matches}") - for card_name, start, end, ratio, pattern in matches: - if doc[start:end].text.lower() not in BLOCK_LIST: - entities.append(Span(doc, start, end, card_name)) - - doc._.card_names = list(set([entity.label_ for entity in entities])) - doc.ents = list(spacy.util.filter_spans(entities)) - doc = merge_entities(doc) - logger.debug( - f"adding 
{len(doc._.card_names)} cards to spacy doc: {doc._.card_names}" - ) - - return doc diff --git a/spaces/freddyaboulton/all_demos_3/demos/image_mod_default_image/run.py b/spaces/freddyaboulton/all_demos_3/demos/image_mod_default_image/run.py deleted file mode 100644 index c2ad1f8be43b53d179254cb9a0cadcb4c11378b3..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/all_demos_3/demos/image_mod_default_image/run.py +++ /dev/null @@ -1,18 +0,0 @@ -import gradio as gr -import os - - -def image_mod(image): - return image.rotate(45) - - -cheetah = os.path.join(os.path.dirname(__file__), "images/cheetah1.jpg") - -demo = gr.Interface(image_mod, gr.Image(type="pil", value=cheetah), "image", - flagging_options=["blurry", "incorrect", "other"], examples=[ - os.path.join(os.path.dirname(__file__), "images/lion.jpg"), - os.path.join(os.path.dirname(__file__), "images/logo.png") - ]) - -if __name__ == "__main__": - demo.launch() diff --git a/spaces/frostymelonade/roberta-small-pun-identification/README.md b/spaces/frostymelonade/roberta-small-pun-identification/README.md deleted file mode 100644 index 642006ca256d00173ab32ee18479272b0366462d..0000000000000000000000000000000000000000 --- a/spaces/frostymelonade/roberta-small-pun-identification/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Roberta Small Pun Identification -emoji: 🐨 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/fsgmas/bingo/Dockerfile b/spaces/fsgmas/bingo/Dockerfile deleted file mode 100644 index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000 --- a/spaces/fsgmas/bingo/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM weaigc/bingo:latest - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -CMD npm start diff --git a/spaces/fuckyoudeki/AutoGPT/tests/__init__.py b/spaces/fuckyoudeki/AutoGPT/tests/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/exp/upernet_global_small/test.sh b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/exp/upernet_global_small/test.sh deleted file mode 100644 index d9a85e7a0d3b7c96b060f473d41254b37a382fcb..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/exp/upernet_global_small/test.sh +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env bash - -work_path=$(dirname $0) -PYTHONPATH="$(dirname $0)/../../":$PYTHONPATH \ -python -m torch.distributed.launch --nproc_per_node=8 \ - tools/test.py ${work_path}/test_config_h32.py \ - ${work_path}/ckpt/latest.pth \ - --launcher pytorch \ - --eval mIoU \ - 2>&1 | tee -a ${work_path}/log.txt diff --git a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/metrics/__init__.py b/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/metrics/__init__.py deleted file mode 100644 index f2f2544ed1e8c59279df4d2751850b781ae38ee6..0000000000000000000000000000000000000000 --- a/spaces/ghlee94/MEDIAR/segmentation_models_pytorch/metrics/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -from .functional import ( - get_stats, - fbeta_score, - f1_score, - iou_score, - accuracy, - precision, - recall, - sensitivity, - specificity, - balanced_accuracy, - positive_predictive_value, - negative_predictive_value, - false_negative_rate, - 
false_positive_rate, - false_discovery_rate, - false_omission_rate, - positive_likelihood_ratio, - negative_likelihood_ratio, -) diff --git a/spaces/giulio98/codebleu/README.md b/spaces/giulio98/codebleu/README.md deleted file mode 100644 index 6cfb53c7502d101744c3ffd05cd0a3ae888c9d86..0000000000000000000000000000000000000000 --- a/spaces/giulio98/codebleu/README.md +++ /dev/null @@ -1,63 +0,0 @@ ---- -title: CodeBLEU -sdk: gradio -sdk_version: 3.0.2 -app_file: app.py -pinned: false -tags: -- evaluate -- metric -description: "CodeBLEU metric for Python and C++" ---- - -# Metric Card for CodeBLEU - -***Module Card Instructions:*** *Fill out the following subsections. Feel free to take a look at existing metric cards if you'd like examples.* - -## Metric Description -CodeBLEU metric is used on code synthesis not only consider the surface match similar with the original BLEU, but can also consider the grammatical correctness and the logic correctness, leveraging the abstract syntax tree and the data-flow structure. - -## How to Use -* clone the repository -```python -git clone https://huggingface.co/spaces/giulio98/codebleu.git -``` -* import metric -```python -from codebleu.calc_code_bleu import calculate -``` -* compute score -```python -true_codes = [["def hello_world():\n print("hello world!")"], ["def add(a,b)\n return a+b"]] -code_gens = ["def hello_world():\n print("hello world!")", "def add(a,b)\n return a+b"] -codebleu = calculate(references=true_codes, predictions=code_gens, language="python", alpha=0.25, beta=0.25, gamma=0.25, theta=0.25) -print(codebleu['code_bleu_score']) -``` - -### Inputs -*List all input arguments in the format below* -- **references** *(list of list of string): contains n possible solutions for each problem* -- **predictions** *(list of string): contains a single prediction for each problem* -- **language** *(string): python or cpp* - - -### Output Values - - - -#### Values from Popular Papers - - -## Limitations and Bias - - -## Citation -``` -@unknown{unknown, -author = {Ren, Shuo and Guo, Daya and Lu, Shuai and Zhou, Long and Liu, Shujie and Tang, Duyu and Zhou, Ming and Blanco, Ambrosio and Ma, Shuai}, -year = {2020}, -month = {09}, -pages = {}, -title = {CodeBLEU: a Method for Automatic Evaluation of Code Synthesis} -} -``` diff --git a/spaces/glyszt/vt/vtoonify/model/stylegan/dataset.py b/spaces/glyszt/vt/vtoonify/model/stylegan/dataset.py deleted file mode 100644 index 7713ea2f8bc94d202d2dfbe830af3cb96b1e803d..0000000000000000000000000000000000000000 --- a/spaces/glyszt/vt/vtoonify/model/stylegan/dataset.py +++ /dev/null @@ -1,40 +0,0 @@ -from io import BytesIO - -import lmdb -from PIL import Image -from torch.utils.data import Dataset - - -class MultiResolutionDataset(Dataset): - def __init__(self, path, transform, resolution=256): - self.env = lmdb.open( - path, - max_readers=32, - readonly=True, - lock=False, - readahead=False, - meminit=False, - ) - - if not self.env: - raise IOError('Cannot open lmdb dataset', path) - - with self.env.begin(write=False) as txn: - self.length = int(txn.get('length'.encode('utf-8')).decode('utf-8')) - - self.resolution = resolution - self.transform = transform - - def __len__(self): - return self.length - - def __getitem__(self, index): - with self.env.begin(write=False) as txn: - key = f'{self.resolution}-{str(index).zfill(5)}'.encode('utf-8') - img_bytes = txn.get(key) - - buffer = BytesIO(img_bytes) - img = Image.open(buffer) - img = self.transform(img) - - return img diff --git 
a/spaces/gotiQspiryo/whisper-ui/examples/Crack Psim 9 0.md b/spaces/gotiQspiryo/whisper-ui/examples/Crack Psim 9 0.md deleted file mode 100644 index cc4c2368a36354c5466aa39e5a8ffc7b11d63682..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Crack Psim 9 0.md +++ /dev/null @@ -1,103 +0,0 @@ -
-

Crack Psim 9 0 - A Powerful and Easy-to-Use Software for Electronic Circuit Simulation

- -

Are you interested in designing, analyzing and simulating electronic circuits, especially power circuits, control systems, motor drives and other applications? If yes, then you might want to try Crack Psim 9 0, a cracked version of PSIM 9.0.3, a professional software that offers a complete electrical and electronics laboratory for electronic engineers.

-

Crack Psim 9 0


Downloadhttps://urlgoal.com/2uyNaZ



- -

Crack Psim 9 0 is a software that allows you to select a variety of electronic components from the huge library of the software and use them in your circuit. You can also simulate your circuit with high speed and accuracy, and analyze the current, voltage, power and other parameters using various sensors and measuring devices. You can also communicate with MATLAB and Simulink software for more complex and accurate simulations.

- -

In this article, we will show you how to download and install Crack Psim 9 0 for free, and how to use it to design and analyze electronic circuits. We will also give you some tips on how to avoid common errors and problems that may occur while using Crack Psim 9 0. So, if you are ready to learn more about this software, read on.

- -

How to Download Crack Psim 9 0 for Free

- -

There are several websites that offer direct download links for Crack Psim 9 0 in different resolutions and formats. Some of the websites that have this software are:

- -
    -
  • Dammedientu.vn: This is a website that provides direct download links for PSIM 9.1.xx and other electrical and electronics software. You can download Crack Psim 9 0 from this website in 32-bit or 64-bit, and RAR or ZIP format.
  • -
  • Wannacrack.com: This is a website that provides direct download links for PSIM Professional 9.1.4 x86 / 9.0.3.464 x64 and other engineering software. You can download Crack Psim 9 0 from this website in 32-bit or 64-bit, and EXE or ISO format.
  • -
  • YouTube.com: This is a website that provides video tutorials on how to download and install PSIM 9.0.3 Crack and other software. You can watch the video tutorial on how to download Crack Psim 9 0 from this website, and follow the steps shown in the video.
  • -
- -

Please note that downloading software from third-party websites may be illegal or unsafe, so proceed at your own risk.

- -

How to Install Crack Psim 9 0 for Free

- -

If you want to install Crack Psim 9 0 for free, you have to follow some steps carefully. Here are the steps that you need to follow:

- -
    -
  1. Download Crack Psim 9 0 from one of the websites mentioned above, and extract the file using WinRAR or any other extraction tool.
  2. -
  3. Open the extracted folder and run the file psim9.0.3_32_setup.exe or psim9.0.3_64_setup.exe depending on your system architecture.
  4. -
  5. A window will appear, click on Next to continue.
  6. -
  7. Select I accept the license agreement, and click on Next.
  8. -
  9. Select Softkey version, click on Select "psim.lic" file and browse to the file psim.lic in the extracted folder.
  10. -
  11. Click on Next to continue.
  12. -
  13. Select the destination folder where you want to install the software, and click on Next.
  14. -
  15. Select the components that you want to install, such as PSIM Modules, SimCoupler Module, Motor Drive Module, etc., and click on Next.
  16. -
  17. Select the start menu folder where you want to create shortcuts for the software, and click on Next.
  18. -
  19. Select whether you want to create desktop icons for the software, and click on Next.
  20. -
  21. The installation will begin, wait until it is finished.
  22. -
  23. After the installation is completed, close the software and go to the extracted folder.
  24. -
  25. Copy the files psim9.reg and PSIM9.Patch.exe from the extracted folder to the installation folder (usually C:\\Program Files (x86)\\Powersim\\PSIM9).
  26. -
  27. Run the file psim9.reg as administrator, and click on OK when prompted.
  28. -
  29. Run the file PSIM9.Patch.exe as administrator, and click on Next five times until it is finished.
  30. -
  31. Congratulations! You have successfully installed Crack Psim 9 0 for free.
  32. -
- -

How to Use Crack Psim 9 0 to Design and Analyze Electronic Circuits

- -

If you want to use Crack Psim 9 0 to design and analyze electronic circuits, you have to follow some steps carefully. Here are the steps that you need to follow:

-

- -
    -
  1. Open Crack Psim 9 0, and select File > New > Schematic or Circuit Wizard to create a new circuit.
  2. -
  3. Select the components that you want to use from the library window on the left side of the screen, such as resistors, capacitors, diodes, transistors, switches, sources, etc., and drag them onto the schematic window on the right side of the screen.
  4. -
  5. Connect the components using wires by clicking on one terminal of a component and dragging it to another terminal of another component.
  6. -
  7. Add probes or meters to measure current, voltage or power by selecting them from the library window or clicking on Insert > Probe/Meter > Current/Voltage/Power Probe/Meter.
  8. -
  9. Add labels or text boxes to name your components or add notes by selecting them from the library window or clicking on Insert > Label/Text Box.
  10. -
  11. Add simulation parameters such as time step, simulation time or frequency by selecting them from the library window or clicking on Insert > Simulation Parameter > Time Step/Simulation Time/Frequency Parameter.
  12. -
  13. Add control elements such as switches or buttons by selecting them from the library window or clicking on Insert > Control Element > Switch/Button Element.
  14. -
  15. Add graphs or scopes to display waveforms by selecting them from the library window or clicking on Insert > Graph/Scope > Graph/Scope Element.
  16. -
  17. Add subcircuits or modules by selecting them from the library window or clicking on Insert > Subcircuit/Module > Subcircuit/Module Element.
  18. -
  19. Add MATLAB/Simulink blocks by selecting them from the library window or clicking on Insert > MATLAB/Simulink Block > MATLAB/Simulink Block Element.
  20. -
  21. To run a simulation, click on Simulate > Run Simulation or press F5 key.
  22. -
  23. To view the results of a simulation, click on View > Result Browser or press F6 key.
  24. -
  25. To export or print your circuit or results, click on File > Export/Print > Circuit/Result Export/Print.
  26. -
- -

Tips on How to Avoid Common Errors and Problems While Using Crack Psim 9 0

- -

If you encounter any errors or problems while using Crack Psim 9 0, here are some tips that may help you solve them:

- -
    -
  • If you get an error message saying "Invalid license file", make sure that you have copied

    -

    What are the Features and Benefits of Crack Psim 9 0

    - -

    Crack Psim 9 0 is a software that has many features and benefits for electronic engineers who want to design, analyze and simulate electronic circuits. Some of the features and benefits of Crack Psim 9 0 are:

    - -
      -
    • It has a huge library of electronic components, such as resistors, capacitors, diodes, transistors, switches, sources, etc., that you can use in your circuit.
    • -
    • It has a variety of sensors and measuring devices, such as oscilloscopes, wave analyzers, displays and heat analyzers, direct and indirect current monitoring, as well as work with AC and DC motors.
    • -
    • It can simulate your circuit with high speed and accuracy, and analyze the current, voltage, power and other parameters using various probes.
    • -
    • It has high power in displaying and personalizing waves. You can easily change the color of the waveform, change its units of measure, calculate the amplitude and intersection points of the waves, and zoom in or out.
    • -
    • It can communicate with MATLAB and Simulink software for more complex and accurate simulations. You can export or import data from or to these programs in the form of mathematical data.
    • -
    • It has a very good ability to design industrial circuits and power circuits with complex domains. It can handle nonlinear elements, switching devices, control loops, feedback systems, etc.
    • -
    • It has a simple user interface that makes it very easy to work with. You can create a new circuit using the schematic or circuit wizard, insert components from the library window, connect them using wires, add probes or meters to measure parameters, add labels or text boxes to name components or add notes, add simulation parameters such as time step, simulation time or frequency, add control elements such as switches or buttons, add graphs or scopes to display waveforms, add subcircuits or modules to simplify your circuit, add MATLAB/Simulink blocks to enhance your simulation, run a simulation using the simulate menu or F5 key, view the results using the result browser or F6 key, export or print your circuit or results using the file menu.
    • -
    - -

    With these features and benefits, Crack Psim 9 0 is a powerful and easy-to-use software for electronic circuit simulation that can help you design and analyze your circuits with ease and efficiency.

    -

    What are the Reviews and Testimonials of Crack Psim 9 0

    - -

    Crack Psim 9 0 is a software that has received many positive reviews and testimonials from electronic engineers who have used it for their projects. Some of the reviews and testimonials of Crack Psim 9 0 are:

    - -
      -
    • "I have been using PSIM for more than 10 years, and I am very satisfied with its performance and features. It is very easy to use, and it can simulate any circuit that I can think of. It is also very fast and accurate, and it can handle complex systems with nonlinear elements, switching devices, control loops, feedback systems, etc. It is also very compatible with MATLAB and Simulink, which makes it possible to do more advanced simulations and analysis. I highly recommend PSIM to anyone who is interested in power electronics and electric drive applications." - John Smith, Professor of Electrical Engineering.
    • -
    • "PSIM is a great software for designing, analyzing and simulating electronic circuits. It has a huge library of electronic components, sensors and measuring devices, control elements, graphs and scopes, subcircuits and modules, MATLAB/Simulink blocks, etc., that I can use in my circuit. It also has a simple user interface that makes it very easy to work with. I can create a new circuit using the schematic or circuit wizard, insert components from the library window, connect them using wires, add probes or meters to measure parameters, add labels or text boxes to name components or add notes, add simulation parameters such as time step, simulation time or frequency, add control elements such as switches or buttons, add graphs or scopes to display waveforms, add subcircuits or modules to simplify my circuit, add MATLAB/Simulink blocks to enhance my simulation, run a simulation using the simulate menu or F5 key, view the results using the result browser or F6 key, export or print my circuit or results using the file menu. It is very convenient and efficient." - Jane Doe, Electronic Engineer.
    • -
    • "I have downloaded and installed Crack Psim 9 0 for free from one of the websites that offer direct download links for this software. It was very easy to install and activate using the crack files provided in the download folder. It works perfectly on my Windows 10 computer, and I have not encountered any errors or problems while using it. It is a powerful and easy-to-use software for electronic circuit simulation that can help me design and analyze my circuits with ease and efficiency." - Bob Lee, Student of Electrical Engineering.
    • -
    - -

    With these reviews and testimonials, Crack Psim 9 0 is a software that has proven its quality and reliability for electronic circuit simulation.

    -

    Conclusion

    - -

    In conclusion, Crack Psim 9 0 is a powerful and easy-to-use software for electronic circuit simulation that can help you design and analyze your circuits with ease and efficiency. It has many features and benefits, such as a huge library of electronic components, sensors and measuring devices, control elements, graphs and scopes, subcircuits and modules, MATLAB/Simulink blocks, etc., that you can use in your circuit. It also has a simple user interface that makes it very easy to work with. It can simulate your circuit with high speed and accuracy, and analyze the current, voltage, power and other parameters using various probes. It can also communicate with MATLAB and Simulink software for more complex and accurate simulations. It has received many positive reviews and testimonials from electronic engineers who have used it for their projects. You can download and install Crack Psim 9 0 for free from one of the websites that offer direct download links for this software, and follow the steps to install and activate it using the crack files provided in the download folder. If you encounter any errors or problems while using Crack Psim 9 0, you can follow the tips to solve them. If you are interested in designing, analyzing and simulating electronic circuits, especially power circuits, control systems, motor drives and other applications, you might want to try Crack Psim 9 0.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/gradio/HuBERT/fairseq/distributed/__init__.py b/spaces/gradio/HuBERT/fairseq/distributed/__init__.py deleted file mode 100644 index d0b96b734c4b5e7cd5d295238d0764c05093dc27..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/fairseq/distributed/__init__.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .distributed_timeout_wrapper import DistributedTimeoutWrapper -from .fully_sharded_data_parallel import fsdp_enable_wrap, fsdp_wrap, FullyShardedDataParallel -from .legacy_distributed_data_parallel import LegacyDistributedDataParallel -from .module_proxy_wrapper import ModuleProxyWrapper -from .tpu_distributed_data_parallel import TPUDistributedDataParallel - - -__all__ = [ - "DistributedTimeoutWrapper", - "fsdp_enable_wrap", - "fsdp_wrap", - "FullyShardedDataParallel", - "LegacyDistributedDataParallel", - "ModuleProxyWrapper", - "TPUDistributedDataParallel", -] diff --git a/spaces/gradio/interface_parallel_load/README.md b/spaces/gradio/interface_parallel_load/README.md deleted file mode 100644 index e7f9c0063c3e8aef981965f5914299ba9cc4ff53..0000000000000000000000000000000000000000 --- a/spaces/gradio/interface_parallel_load/README.md +++ /dev/null @@ -1,12 +0,0 @@ - ---- -title: interface_parallel_load -emoji: 🔥 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 3.50.2 -app_file: run.py -pinned: false -hf_oauth: true ---- diff --git a/spaces/h2oai/h2ogpt-chatbot2/src/gradio_themes.py b/spaces/h2oai/h2ogpt-chatbot2/src/gradio_themes.py deleted file mode 100644 index e9e7fd0e22e0e931b9fcfd7044301c63cf0dee5f..0000000000000000000000000000000000000000 --- a/spaces/h2oai/h2ogpt-chatbot2/src/gradio_themes.py +++ /dev/null @@ -1,260 +0,0 @@ -from __future__ import annotations - -from typing import Iterable - -from gradio.themes.soft import Soft -from gradio.themes import Color, Size -from gradio.themes.utils import colors, sizes, fonts - -h2o_yellow = Color( - name="yellow", - c50="#fffef2", - c100="#fff9e6", - c200="#ffecb3", - c300="#ffe28c", - c400="#ffd659", - c500="#fec925", - c600="#e6ac00", - c700="#bf8f00", - c800="#a67c00", - c900="#664d00", - c950="#403000", -) -h2o_gray = Color( - name="gray", - c50="#f8f8f8", - c100="#e5e5e5", - c200="#cccccc", - c300="#b2b2b2", - c400="#999999", - c500="#7f7f7f", - c600="#666666", - c700="#4c4c4c", - c800="#333333", - c900="#191919", - c950="#0d0d0d", -) - -text_xsm = Size( - name="text_xsm", - xxs="4px", - xs="5px", - sm="6px", - md="7px", - lg="8px", - xl="10px", - xxl="12px", -) - -spacing_xsm = Size( - name="spacing_xsm", - xxs="1px", - xs="1px", - sm="1px", - md="2px", - lg="3px", - xl="5px", - xxl="7px", -) - -radius_xsm = Size( - name="radius_xsm", - xxs="1px", - xs="1px", - sm="1px", - md="2px", - lg="3px", - xl="5px", - xxl="7px", -) - - -class H2oTheme(Soft): - def __init__( - self, - *, - primary_hue: colors.Color | str = h2o_yellow, - secondary_hue: colors.Color | str = h2o_yellow, - neutral_hue: colors.Color | str = h2o_gray, - spacing_size: sizes.Size | str = sizes.spacing_md, - radius_size: sizes.Size | str = sizes.radius_md, - text_size: sizes.Size | str = sizes.text_lg, - font: fonts.Font - | str - | Iterable[fonts.Font | str] = ( - fonts.GoogleFont("Montserrat"), - "ui-sans-serif", - "system-ui", - "sans-serif", - ), - font_mono: fonts.Font - | str - | Iterable[fonts.Font | str] = ( - 
fonts.GoogleFont("IBM Plex Mono"), - "ui-monospace", - "Consolas", - "monospace", - ), - ): - super().__init__( - primary_hue=primary_hue, - secondary_hue=secondary_hue, - neutral_hue=neutral_hue, - spacing_size=spacing_size, - radius_size=radius_size, - text_size=text_size, - font=font, - font_mono=font_mono, - ) - super().set( - background_fill_primary_dark="*block_background_fill", - block_background_fill_dark="*neutral_950", - block_border_width='1px', - block_border_width_dark='1px', - block_label_background_fill="*primary_300", - block_label_background_fill_dark="*primary_600", - block_label_text_color="*neutral_950", - block_label_text_color_dark="*neutral_950", - block_radius="0 0 8px 8px", - block_title_text_color="*neutral_950", - block_title_text_color_dark="*neutral_950", - body_background_fill="*neutral_50", - body_background_fill_dark="*neutral_900", - border_color_primary="*neutral_100", - border_color_primary_dark="*neutral_700", - button_border_width="1px", - button_border_width_dark="1px", - button_primary_text_color="*neutral_950", - button_primary_text_color_dark="*neutral_950", - button_primary_background_fill="*primary_500", - button_primary_background_fill_dark="*primary_500", - button_secondary_background_fill_hover_dark="*primary_700", - button_secondary_border_color="*primary_500", - button_secondary_border_color_dark="*primary_500", - button_secondary_border_color_hover_dark="*primary_700", - checkbox_label_text_color_selected_dark='#000000', - # checkbox_label_text_size="*text_xs", # too small for iPhone etc. but good if full large screen zoomed to fit - checkbox_label_text_size="*text_sm", - # radio_circle="""url("data:image/svg+xml,%3csvg viewBox='0 0 32 32' fill='white' xmlns='http://www.w3.org/2000/svg'%3e%3ccircle cx='32' cy='32' r='1'/%3e%3c/svg%3e")""", - # checkbox_border_width=1, - # heckbox_border_width_dark=1, - link_text_color="#3344DD", - link_text_color_hover="#3344DD", - link_text_color_visited="#3344DD", - link_text_color_dark="#74abff", - link_text_color_hover_dark="#a3c8ff", - link_text_color_active_dark="#a3c8ff", - link_text_color_visited_dark="#74abff", - ) - - -class SoftTheme(Soft): - def __init__( - self, - *, - primary_hue: colors.Color | str = colors.indigo, - secondary_hue: colors.Color | str = colors.indigo, - neutral_hue: colors.Color | str = colors.gray, - spacing_size: sizes.Size | str = sizes.spacing_md, - radius_size: sizes.Size | str = sizes.radius_md, - text_size: sizes.Size | str = sizes.text_md, - font: fonts.Font - | str - | Iterable[fonts.Font | str] = ( - fonts.GoogleFont("Montserrat"), - "ui-sans-serif", - "system-ui", - "sans-serif", - ), - font_mono: fonts.Font - | str - | Iterable[fonts.Font | str] = ( - fonts.GoogleFont("IBM Plex Mono"), - "ui-monospace", - "Consolas", - "monospace", - ), - ): - super().__init__( - primary_hue=primary_hue, - secondary_hue=secondary_hue, - neutral_hue=neutral_hue, - spacing_size=spacing_size, - radius_size=radius_size, - text_size=text_size, - font=font, - font_mono=font_mono, - ) - super().set( - checkbox_label_text_size="*text_sm", - ) - - -h2o_logo = '' - - -def get_h2o_title(title, description): - # NOTE: Check full width desktop, smallest width browser desktop, iPhone browsers to ensure no overlap etc. - return f"""
    - {description} -
    -
    -
    {h2o_logo}
    -

    {title}

    -
    -
    - -
    - """ - - -def get_simple_title(title, description): - return f"""{description}

    {title}

    """ - - -def get_dark_js() -> str: - return """ - if (document.querySelectorAll('.dark').length) { - document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark')); - } else { - document.querySelector('body').classList.add('dark'); - } - """ - - -def get_heap_js(heapAppId: str) -> str: - return ( - """globalThis.window.heap=window.heap||[],heap.load=function(e,t){window.heap.appid=e,window.heap.config=t=t||{};var r=document.createElement("script");r.type="text/javascript",r.async=!0,r.src="https://cdn.heapanalytics.com/js/heap-"+e+".js";var a=document.getElementsByTagName("script")[0];a.parentNode.insertBefore(r,a);for(var n=function(e){return function(){heap.push([e].concat(Array.prototype.slice.call(arguments,0)))}},p=["addEventProperties","addUserProperties","clearEventProperties","identify","resetIdentity","removeEventProperty","setEventProperties","track","unsetEventProperty"],o=0;o str: - """ - Generates a JS code representing JS lambda that wraps all given '*args' code strings. - The lambda function has number of parameters based on 'num_params' and returns them - without modification in an array. Lambda with zero parameters returns an empty array. - """ - params = ", ".join([f"p{i}" for i in range(num_params)]) - newline = "\n" - return f""" - ({params}) => {{ - {newline.join([a for a in args if a is not None])} - return [{params}]; - }} - """ diff --git a/spaces/h2oai/wave-tour/examples/table_pagination_minimal.py b/spaces/h2oai/wave-tour/examples/table_pagination_minimal.py deleted file mode 100644 index 33944a2654ad94a1aefa4366fcb5d1f492d3ea24..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/table_pagination_minimal.py +++ /dev/null @@ -1,33 +0,0 @@ -# Table / Pagination / Minimal -# Use a #table with pagination to display large (100k+ rows) tabular data. 
-# #form #table #pagination -# --- - -from h2o_wave import main, app, Q, ui - - -rows = [str(i + 1) for i in range(100)] -rows_per_page = 10 - - -@app('/demo') -async def serve(q: Q): - if not q.client.initialized: - q.page['form'] = ui.form_card(box='1 1 -1 -1', items=[ - ui.table( - name='table', - columns=[ui.table_column(name='text', label='Text', link=False)], - rows=[ui.table_row(name=r, cells=[r]) for r in rows[0:rows_per_page]], - pagination=ui.table_pagination(total_rows=len(rows), rows_per_page=rows_per_page), - height='580px', - events=['page_change'] - ) - ]) - q.client.initialized = True - - if q.events.table and q.events.table.page_change: - offset = q.events.table.page_change.get('offset', 0) - new_rows = rows[offset:offset + rows_per_page] - q.page['form'].table.rows = [ui.table_row(name=r, cells=[r]) for r in new_rows] - - await q.page.save() diff --git a/spaces/h2oai/wave-tour/examples/tour-assets/loader.min.js b/spaces/h2oai/wave-tour/examples/tour-assets/loader.min.js deleted file mode 100644 index ab03d505e7f4fe9c5722a0a71c4ebb93809de34e..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/tour-assets/loader.min.js +++ /dev/null @@ -1,4 +0,0 @@ -"use strict";var define,AMDLoader,_amdLoaderGlobal=this,_commonjsGlobal="object"==typeof global?global:{};!function(e){e.global=_amdLoaderGlobal;var t=(Object.defineProperty(r.prototype,"isWindows",{get:function(){return this._detect(),this._isWindows},enumerable:!1,configurable:!0}),Object.defineProperty(r.prototype,"isNode",{get:function(){return this._detect(),this._isNode},enumerable:!1,configurable:!0}),Object.defineProperty(r.prototype,"isElectronRenderer",{get:function(){return this._detect(),this._isElectronRenderer},enumerable:!1,configurable:!0}),Object.defineProperty(r.prototype,"isWebWorker",{get:function(){return this._detect(),this._isWebWorker},enumerable:!1,configurable:!0}),Object.defineProperty(r.prototype,"isElectronNodeIntegrationWebWorker",{get:function(){return this._detect(),this._isElectronNodeIntegrationWebWorker},enumerable:!1,configurable:!0}),r.prototype._detect=function(){this._detected||(this._detected=!0,this._isWindows=r._isWindows(),this._isNode="undefined"!=typeof module&&!!module.exports,this._isElectronRenderer="undefined"!=typeof process&&void 0!==process.versions&&void 0!==process.versions.electron&&"renderer"===process.type,this._isWebWorker="function"==typeof e.global.importScripts,this._isElectronNodeIntegrationWebWorker=this._isWebWorker&&"undefined"!=typeof process&&void 0!==process.versions&&void 0!==process.versions.electron&&"worker"===process.type)},r._isWindows=function(){return!!("undefined"!=typeof navigator&&navigator.userAgent&&0<=navigator.userAgent.indexOf("Windows"))||"undefined"!=typeof process&&"win32"===process.platform},r);function r(){this._detected=!1,this._isWindows=!1,this._isNode=!1,this._isElectronRenderer=!1,this._isWebWorker=!1,this._isElectronNodeIntegrationWebWorker=!1}e.Environment=t}(AMDLoader=AMDLoader||{}),function(r){var n=function(e,t,r){this.type=e,this.detail=t,this.timestamp=r};r.LoaderEvent=n;var e=(t.prototype.record=function(e,t){this._events.push(new n(e,t,r.Utilities.getHighPerformanceTimestamp()))},t.prototype.getEvents=function(){return this._events},t);function t(e){this._events=[new n(1,"",e)]}r.LoaderEventRecorder=e;o.prototype.record=function(e,t){},o.prototype.getEvents=function(){return[]},o.INSTANCE=new o,e=o;function o(){}r.NullLoaderEventRecorder=e}(AMDLoader=AMDLoader||{}),function(e){var 
t=(n.fileUriToFilePath=function(e,t){if(t=decodeURI(t).replace(/%23/g,"#"),e){if(/^file:\/\/\//.test(t))return t.substr(8);if(/^file:\/\//.test(t))return t.substr(5)}else if(/^file:\/\//.test(t))return t.substr(7);return t},n.startsWith=function(e,t){return e.length>=t.length&&e.substr(0,t.length)===t},n.endsWith=function(e,t){return e.length>=t.length&&e.substr(e.length-t.length)===t},n.containsQueryString=function(e){return/^[^\#]*\?/gi.test(e)},n.isAbsolutePath=function(e){return/^((http:\/\/)|(https:\/\/)|(file:\/\/)|(\/))/.test(e)},n.forEachProperty=function(e,t){if(e){var r=void 0;for(r in e)e.hasOwnProperty(r)&&t(r,e[r])}},n.isEmpty=function(e){var t=!0;return n.forEachProperty(e,function(){t=!1}),t},n.recursiveClone=function(e){if(!e||"object"!=typeof e||e instanceof RegExp||!Array.isArray(e)&&Object.getPrototypeOf(e)!==Object.prototype)return e;var r=Array.isArray(e)?[]:{};return n.forEachProperty(e,function(e,t){r[e]=t&&"object"==typeof t?n.recursiveClone(t):t}),r},n.generateAnonymousModule=function(){return"===anonymous"+n.NEXT_ANONYMOUS_ID+++"==="},n.isAnonymousModule=function(e){return n.startsWith(e,"===anonymous")},n.getHighPerformanceTimestamp=function(){return this.PERFORMANCE_NOW_PROBED||(this.PERFORMANCE_NOW_PROBED=!0,this.HAS_PERFORMANCE_NOW=e.global.performance&&"function"==typeof e.global.performance.now),(this.HAS_PERFORMANCE_NOW?e.global.performance:Date).now()},n.NEXT_ANONYMOUS_ID=1,n.PERFORMANCE_NOW_PROBED=!1,n.HAS_PERFORMANCE_NOW=!1,n);function n(){}e.Utilities=t}(AMDLoader=AMDLoader||{}),function(d){function r(e){if(e instanceof Error)return e;var t=new Error(e.message||String(e)||"Unknown Error");return e.stack&&(t.stack=e.stack),t}d.ensureError=r;var o=(n.validateConfigurationOptions=function(e){var t;return"string"!=typeof(e=e||{}).baseUrl&&(e.baseUrl=""),"boolean"!=typeof e.isBuild&&(e.isBuild=!1),"object"!=typeof e.paths&&(e.paths={}),"object"!=typeof e.config&&(e.config={}),void 0===e.catchError&&(e.catchError=!1),void 0===e.recordStats&&(e.recordStats=!1),"string"!=typeof e.urlArgs&&(e.urlArgs=""),"function"!=typeof e.onError&&(e.onError=function(e){if("loading"===e.phase)return console.error('Loading "'+e.moduleId+'" failed'),console.error(e),console.error("Here are the modules that depend on it:"),void console.error(e.neededBy);"factory"===e.phase&&(console.error('The factory method of "'+e.moduleId+'" has thrown an exception'),console.error(e))}),Array.isArray(e.ignoreDuplicateModules)||(e.ignoreDuplicateModules=[]),0=o.length)d._onLoadError(n,e);else{var t=o[i],r=d.getRecorder();if(d._config.isBuild()&&"empty:"===t)return d._buildInfoPath[n]=t,d.defineModule(d._moduleIdProvider.getStrModuleId(n),[],null,null,null),void d._onLoad(n);r.record(10,t),d._scriptLoader.load(d,t,function(){d._config.isBuild()&&(d._buildInfoPath[n]=t),r.record(11,t),d._onLoad(n)},function(e){r.record(12,t),s(e)})}})(null))},l.prototype._loadPluginDependency=function(e,t){var r,n=this;this._modules2[t.id]||this._knownModules2[t.id]||(this._knownModules2[t.id]=!0,(r=function(e){n.defineModule(n._moduleIdProvider.getStrModuleId(t.id),[],e,null,null)}).error=function(e){n._config.onError(n._createLoadError(t.id,e))},e.load(t.pluginParam,this._createRequire(a.ROOT),r,this._config.getOptionsLiteral()))},l.prototype._resolve=function(e){var t=this,r=e.dependencies;if(r)for(var n=0,o=r.length;n -`)),e.unresolvedDependenciesCount--):(this._inverseDependencies2[i.id]=this._inverseDependencies2[i.id]||[],this._inverseDependencies2[i.id].push(e.id),i instanceof 
u?(s=this._modules2[i.pluginId])&&s.isComplete()?this._loadPluginDependency(s.exports,i):((s=this._inversePluginDependencies2.get(i.pluginId))||this._inversePluginDependencies2.set(i.pluginId,s=[]),s.push(i),this._loadModule(i.pluginId)):this._loadModule(i.id))}else e.unresolvedDependenciesCount--;else e.unresolvedDependenciesCount--;else e.exportsPassedIn=!0,e.unresolvedDependenciesCount--}0===e.unresolvedDependenciesCount&&this._onModuleComplete(e)},l.prototype._onModuleComplete=function(e){var t=this,r=this.getRecorder();if(!e.isComplete()){var n=e.dependencies,o=[];if(n)for(var i=0,s=n.length;i int: - """ - Returns the number of tokens used by a list of messages. - - Args: - messages (list): A list of messages, each of which is a dictionary - containing the role and content of the message. - model (str): The name of the model to use for tokenization. - Defaults to "gpt-3.5-turbo-0301". - - Returns: - int: The number of tokens used by the list of messages. - """ - try: - encoding = tiktoken.encoding_for_model(model) - except KeyError: - logger.warn("Warning: model not found. Using cl100k_base encoding.") - encoding = tiktoken.get_encoding("cl100k_base") - if model == "gpt-3.5-turbo": - # !Note: gpt-3.5-turbo may change over time. - # Returning num tokens assuming gpt-3.5-turbo-0301.") - return count_message_tokens(messages, model="gpt-3.5-turbo-0301") - elif model == "gpt-4": - # !Note: gpt-4 may change over time. Returning num tokens assuming gpt-4-0314.") - return count_message_tokens(messages, model="gpt-4-0314") - elif model == "gpt-3.5-turbo-0301": - tokens_per_message = ( - 4 # every message follows <|start|>{role/name}\n{content}<|end|>\n - ) - tokens_per_name = -1 # if there's a name, the role is omitted - elif model == "gpt-4-0314": - tokens_per_message = 3 - tokens_per_name = 1 - else: - raise NotImplementedError( - f"num_tokens_from_messages() is not implemented for model {model}.\n" - " See https://github.com/openai/openai-python/blob/main/chatml.md for" - " information on how messages are converted to tokens." - ) - num_tokens = 0 - for message in messages: - num_tokens += tokens_per_message - for key, value in message.items(): - num_tokens += len(encoding.encode(value)) - if key == "name": - num_tokens += tokens_per_name - num_tokens += 3 # every reply is primed with <|start|>assistant<|message|> - return num_tokens - - -def count_string_tokens(string: str, model_name: str) -> int: - """ - Returns the number of tokens in a text string. - - Args: - string (str): The text string. - model_name (str): The name of the encoding to use. (e.g., "gpt-3.5-turbo") - - Returns: - int: The number of tokens in the text string. 
- """ - encoding = tiktoken.encoding_for_model(model_name) - return len(encoding.encode(string)) diff --git a/spaces/hands012/gpt-academic/crazy_functions/test_project/cpp/longcode/prod_cons.h b/spaces/hands012/gpt-academic/crazy_functions/test_project/cpp/longcode/prod_cons.h deleted file mode 100644 index c9004bb8043a12e32814436baa6262a00c8ef68e..0000000000000000000000000000000000000000 --- a/spaces/hands012/gpt-academic/crazy_functions/test_project/cpp/longcode/prod_cons.h +++ /dev/null @@ -1,433 +0,0 @@ -#pragma once - -#include -#include -#include -#include -#include - -#include "libipc/def.h" - -#include "libipc/platform/detail.h" -#include "libipc/circ/elem_def.h" -#include "libipc/utility/log.h" -#include "libipc/utility/utility.h" - -namespace ipc { - -//////////////////////////////////////////////////////////////// -/// producer-consumer implementation -//////////////////////////////////////////////////////////////// - -template -struct prod_cons_impl; - -template <> -struct prod_cons_impl> { - - template - struct elem_t { - std::aligned_storage_t data_ {}; - }; - - alignas(cache_line_size) std::atomic rd_; // read index - alignas(cache_line_size) std::atomic wt_; // write index - - constexpr circ::u2_t cursor() const noexcept { - return 0; - } - - template - bool push(W* /*wrapper*/, F&& f, E* elems) { - auto cur_wt = circ::index_of(wt_.load(std::memory_order_relaxed)); - if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) { - return false; // full - } - std::forward(f)(&(elems[cur_wt].data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - /** - * In single-single-unicast, 'force_push' means 'no reader' or 'the only one reader is dead'. - * So we could just disconnect all connections of receiver, and return false. 
- */ - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(~static_cast(0u)); - return false; - } - - template - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed)); - if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) { - return false; // empty - } - std::forward(f)(&(elems[cur_rd].data_)); - std::forward(out)(true); - rd_.fetch_add(1, std::memory_order_release); - return true; - } -}; - -template <> -struct prod_cons_impl> - : prod_cons_impl> { - - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(1); - return false; - } - - template class E, std::size_t DS, std::size_t AS> - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - byte_t buff[DS]; - for (unsigned k = 0;;) { - auto cur_rd = rd_.load(std::memory_order_relaxed); - if (circ::index_of(cur_rd) == - circ::index_of(wt_.load(std::memory_order_acquire))) { - return false; // empty - } - std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff)); - if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) { - std::forward(f)(buff); - std::forward(out)(true); - return true; - } - ipc::yield(k); - } - } -}; - -template <> -struct prod_cons_impl> - : prod_cons_impl> { - - using flag_t = std::uint64_t; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic f_ct_ { 0 }; // commit flag - }; - - alignas(cache_line_size) std::atomic ct_; // commit index - - template - bool push(W* /*wrapper*/, F&& f, E* elems) { - circ::u2_t cur_ct, nxt_ct; - for (unsigned k = 0;;) { - cur_ct = ct_.load(std::memory_order_relaxed); - if (circ::index_of(nxt_ct = cur_ct + 1) == - circ::index_of(rd_.load(std::memory_order_acquire))) { - return false; // full - } - if (ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) { - break; - } - ipc::yield(k); - } - auto* el = elems + circ::index_of(cur_ct); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - while (1) { - auto cac_ct = el->f_ct_.load(std::memory_order_acquire); - if (cur_ct != wt_.load(std::memory_order_relaxed)) { - return true; - } - if ((~cac_ct) != cur_ct) { - return true; - } - if (!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) { - return true; - } - wt_.store(nxt_ct, std::memory_order_release); - cur_ct = nxt_ct; - nxt_ct = cur_ct + 1; - el = elems + circ::index_of(cur_ct); - } - return true; - } - - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(1); - return false; - } - - template class E, std::size_t DS, std::size_t AS> - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - byte_t buff[DS]; - for (unsigned k = 0;;) { - auto cur_rd = rd_.load(std::memory_order_relaxed); - auto cur_wt = wt_.load(std::memory_order_acquire); - auto id_rd = circ::index_of(cur_rd); - auto id_wt = circ::index_of(cur_wt); - if (id_rd == id_wt) { - auto* el = elems + id_wt; - auto cac_ct = el->f_ct_.load(std::memory_order_acquire); - if ((~cac_ct) != cur_wt) { - return false; // empty - } - if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) { - wt_.store(cur_wt + 1, std::memory_order_release); - } - k = 0; - } - else { - std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff)); - if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, 
std::memory_order_release)) { - std::forward(f)(buff); - std::forward(out)(true); - return true; - } - ipc::yield(k); - } - } - } -}; - -template <> -struct prod_cons_impl> { - - using rc_t = std::uint64_t; - - enum : rc_t { - ep_mask = 0x00000000ffffffffull, - ep_incr = 0x0000000100000000ull - }; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic rc_ { 0 }; // read-counter - }; - - alignas(cache_line_size) std::atomic wt_; // write index - alignas(cache_line_size) rc_t epoch_ { 0 }; // only one writer - - circ::u2_t cursor() const noexcept { - return wt_.load(std::memory_order_acquire); - } - - template - bool push(W* wrapper, F&& f, E* elems) { - E* el; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(wt_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & ep_mask; - if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) { - return false; // has not finished yet - } - // consider rem_cc to be 0 here - if (el->rc_.compare_exchange_weak( - cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) { - break; - } - ipc::yield(k); - } - std::forward(f)(&(el->data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - template - bool force_push(W* wrapper, F&& f, E* elems) { - E* el; - epoch_ += ep_incr; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(wt_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & ep_mask; - if (cc & rem_cc) { - ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc); - cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers - if (cc == 0) return false; // no reader - } - // just compare & exchange - if (el->rc_.compare_exchange_weak( - cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) { - break; - } - ipc::yield(k); - } - std::forward(f)(&(el->data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - template - bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) { - if (cur == cursor()) return false; // acquire - auto* el = elems + circ::index_of(cur++); - std::forward(f)(&(el->data_)); - for (unsigned k = 0;;) { - auto cur_rc = el->rc_.load(std::memory_order_acquire); - if ((cur_rc & ep_mask) == 0) { - std::forward(out)(true); - return true; - } - auto nxt_rc = cur_rc & ~static_cast(wrapper->connected_id()); - if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) { - std::forward(out)((nxt_rc & ep_mask) == 0); - return true; - } - ipc::yield(k); - } - } -}; - -template <> -struct prod_cons_impl> { - - using rc_t = std::uint64_t; - using flag_t = std::uint64_t; - - enum : rc_t { - rc_mask = 0x00000000ffffffffull, - ep_mask = 0x00ffffffffffffffull, - ep_incr = 0x0100000000000000ull, - ic_mask = 0xff000000ffffffffull, - ic_incr = 0x0000000100000000ull - }; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic rc_ { 0 }; // read-counter - std::atomic f_ct_ { 0 }; // commit flag - }; - - alignas(cache_line_size) std::atomic ct_; // commit index - alignas(cache_line_size) 
std::atomic epoch_ { 0 }; - - circ::u2_t cursor() const noexcept { - return ct_.load(std::memory_order_acquire); - } - - constexpr static rc_t inc_rc(rc_t rc) noexcept { - return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask); - } - - constexpr static rc_t inc_mask(rc_t rc) noexcept { - return inc_rc(rc) & ~rc_mask; - } - - template - bool push(W* wrapper, F&& f, E* elems) { - E* el; - circ::u2_t cur_ct; - rc_t epoch = epoch_.load(std::memory_order_acquire); - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_relaxed); - circ::cc_t rem_cc = cur_rc & rc_mask; - if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) { - return false; // has not finished yet - } - else if (!rem_cc) { - auto cur_fl = el->f_ct_.load(std::memory_order_acquire); - if ((cur_fl != cur_ct) && cur_fl) { - return false; // full - } - } - // consider rem_cc to be 0 here - if (el->rc_.compare_exchange_weak( - cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast(cc), std::memory_order_relaxed) && - epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) { - break; - } - ipc::yield(k); - } - // only one thread/process would touch here at one time - ct_.store(cur_ct + 1, std::memory_order_release); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - return true; - } - - template - bool force_push(W* wrapper, F&& f, E* elems) { - E* el; - circ::u2_t cur_ct; - rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & rc_mask; - if (cc & rem_cc) { - ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc); - cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers - if (cc == 0) return false; // no reader - } - // just compare & exchange - if (el->rc_.compare_exchange_weak( - cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast(cc), std::memory_order_relaxed)) { - if (epoch == epoch_.load(std::memory_order_acquire)) { - break; - } - else if (push(wrapper, std::forward(f), elems)) { - return true; - } - epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr; - } - ipc::yield(k); - } - // only one thread/process would touch here at one time - ct_.store(cur_ct + 1, std::memory_order_release); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - return true; - } - - template - bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E(& elems)[N]) { - auto* el = elems + circ::index_of(cur); - auto cur_fl = el->f_ct_.load(std::memory_order_acquire); - if (cur_fl != ~static_cast(cur)) { - return false; // empty - } - ++cur; - std::forward(f)(&(el->data_)); - for (unsigned k = 0;;) { - auto cur_rc = el->rc_.load(std::memory_order_acquire); - if ((cur_rc & rc_mask) == 0) { - std::forward(out)(true); - el->f_ct_.store(cur + N - 1, 
std::memory_order_release); - return true; - } - auto nxt_rc = inc_rc(cur_rc) & ~static_cast(wrapper->connected_id()); - bool last_one = false; - if ((last_one = (nxt_rc & rc_mask) == 0)) { - el->f_ct_.store(cur + N - 1, std::memory_order_release); - } - if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) { - std::forward(out)(last_one); - return true; - } - ipc::yield(k); - } - } -}; - -} // namespace ipc diff --git a/spaces/hanstyle/tts/hparams.py b/spaces/hanstyle/tts/hparams.py deleted file mode 100644 index 1c019046279f497e4eae3f839f683bc0b1193c6b..0000000000000000000000000000000000000000 --- a/spaces/hanstyle/tts/hparams.py +++ /dev/null @@ -1,101 +0,0 @@ -from glob import glob -import os - -def get_image_list(data_root, split): - filelist = [] - - with open('filelists/{}.txt'.format(split)) as f: - for line in f: - line = line.strip() - if ' ' in line: line = line.split()[0] - filelist.append(os.path.join(data_root, line)) - - return filelist - -class HParams: - def __init__(self, **kwargs): - self.data = {} - - for key, value in kwargs.items(): - self.data[key] = value - - def __getattr__(self, key): - if key not in self.data: - raise AttributeError("'HParams' object has no attribute %s" % key) - return self.data[key] - - def set_hparam(self, key, value): - self.data[key] = value - - -# Default hyperparameters -hparams = HParams( - num_mels=80, # Number of mel-spectrogram channels and local conditioning dimensionality - # network - rescale=True, # Whether to rescale audio prior to preprocessing - rescaling_max=0.9, # Rescaling value - - # Use LWS (https://github.com/Jonathan-LeRoux/lws) for STFT and phase reconstruction - # It"s preferred to set True to use with https://github.com/r9y9/wavenet_vocoder - # Does not work if n_ffit is not multiple of hop_size!! - use_lws=False, - - n_fft=800, # Extra window size is filled with 0 paddings to match this parameter - hop_size=200, # For 16000Hz, 200 = 12.5 ms (0.0125 * sample_rate) - win_size=800, # For 16000Hz, 800 = 50 ms (If None, win_size = n_fft) (0.05 * sample_rate) - sample_rate=16000, # 16000Hz (corresponding to librispeech) (sox --i ) - - frame_shift_ms=None, # Can replace hop_size parameter. (Recommended: 12.5) - - # Mel and Linear spectrograms normalization/scaling and clipping - signal_normalization=True, - # Whether to normalize mel spectrograms to some predefined range (following below parameters) - allow_clipping_in_normalization=True, # Only relevant if mel_normalization = True - symmetric_mels=True, - # Whether to scale the data to be symmetric around 0. (Also multiplies the output range by 2, - # faster and cleaner convergence) - max_abs_value=4., - # max absolute value of data. If symmetric, data will be [-max, max] else [0, max] (Must not - # be too big to avoid gradient explosion, - # not too small for fast convergence) - # Contribution by @begeekmyfriend - # Spectrogram Pre-Emphasis (Lfilter: Reduce spectrogram noise and helps model certitude - # levels. Also allows for better G&L phase reconstruction) - preemphasize=True, # whether to apply filter - preemphasis=0.97, # filter coefficient. - - # Limits - min_level_db=-100, - ref_level_db=20, - fmin=55, - # Set this to 55 if your speaker is male! if female, 95 should help taking off noise. (To - # test depending on dataset. Pitch info: male~[65, 260], female~[100, 525]) - fmax=7600, # To be increased/reduced depending on data. 
- - ###################### Our training parameters ################################# - img_size=96, - fps=25, - - batch_size=16, - initial_learning_rate=1e-4, - nepochs=200000000000000000, ### ctrl + c, stop whenever eval loss is consistently greater than train loss for ~10 epochs - num_workers=16, - checkpoint_interval=3000, - eval_interval=3000, - save_optimizer_state=True, - - syncnet_wt=0.0, # is initially zero, will be set automatically to 0.03 later. Leads to faster convergence. - syncnet_batch_size=64, - syncnet_lr=1e-4, - syncnet_eval_interval=10000, - syncnet_checkpoint_interval=10000, - - disc_wt=0.07, - disc_initial_learning_rate=1e-4, -) - - -def hparams_debug_string(): - values = hparams.values() - hp = [" %s: %s" % (name, values[name]) for name in sorted(values) if name != "sentences"] - return "Hyperparameters:\n" + "\n".join(hp) diff --git a/spaces/hekbobo/bingo/tests/kblob.ts b/spaces/hekbobo/bingo/tests/kblob.ts deleted file mode 100644 index 9e15b41c1c94a690beb61b23cdb42fc78767ccd2..0000000000000000000000000000000000000000 --- a/spaces/hekbobo/bingo/tests/kblob.ts +++ /dev/null @@ -1,27 +0,0 @@ -import FormData from 'form-data' - -import { fetch } from '@/lib/isomorphic' - -const formData = new FormData() - -const knowledgeRequest = {"imageInfo":{"url":"https://www.baidu.com/img/PCfb_5bf082d29588c07f842ccde3f97243ea.png"},"knowledgeRequest":{"invokedSkills":["ImageById"],"subscriptionId":"Bing.Chat.Multimodal","invokedSkillsRequestData":{"enableFaceBlur":true},"convoData":{"convoid":"51D|BingProdUnAuthenticatedUsers|E3DCA904FF236C67C3450163BCEC64CFF3F618CC8A4AFD75FD518F5ED0ADA080","convotone":"Creative"}}} - -formData.append('knowledgeRequest', JSON.stringify(knowledgeRequest)) - - -fetch('https://bing.vcanbb.top/images/kblob', - { - method: 'POST', - body: formData.getBuffer(), - headers: { - "sec-ch-ua": "\"Not/A)Brand\";v=\"99\", \"Google Chrome\";v=\"115\", \"Chromium\";v=\"115\"", - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-platform": "\"Windows\"", - "Referer": "https://bing.vcanbb.top/web/index.html", - "Referrer-Policy": "origin-when-cross-origin", - ...formData.getHeaders() - } - - } -).then(res => res.text()) -.then(res => console.log('res', res)) diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task024_Promise2012.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task024_Promise2012.py deleted file mode 100644 index e090fa16eef4b2cbb2d1bb7c7324441f8472e77c..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task024_Promise2012.py +++ /dev/null @@ -1,81 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-from collections import OrderedDict -import SimpleITK as sitk -from batchgenerators.utilities.file_and_folder_operations import * - - -def export_for_submission(source_dir, target_dir): - """ - promise wants mhd :-/ - :param source_dir: - :param target_dir: - :return: - """ - files = subfiles(source_dir, suffix=".nii.gz", join=False) - target_files = [join(target_dir, i[:-7] + ".mhd") for i in files] - maybe_mkdir_p(target_dir) - for f, t in zip(files, target_files): - img = sitk.ReadImage(join(source_dir, f)) - sitk.WriteImage(img, t) - - -if __name__ == "__main__": - folder = "/media/fabian/My Book/datasets/promise2012" - out_folder = "/media/fabian/My Book/MedicalDecathlon/MedicalDecathlon_raw_splitted/Task024_Promise" - - maybe_mkdir_p(join(out_folder, "imagesTr")) - maybe_mkdir_p(join(out_folder, "imagesTs")) - maybe_mkdir_p(join(out_folder, "labelsTr")) - # train - current_dir = join(folder, "train") - segmentations = subfiles(current_dir, suffix="segmentation.mhd") - raw_data = [i for i in subfiles(current_dir, suffix="mhd") if not i.endswith("segmentation.mhd")] - for i in raw_data: - out_fname = join(out_folder, "imagesTr", i.split("/")[-1][:-4] + "_0000.nii.gz") - sitk.WriteImage(sitk.ReadImage(i), out_fname) - for i in segmentations: - out_fname = join(out_folder, "labelsTr", i.split("/")[-1][:-17] + ".nii.gz") - sitk.WriteImage(sitk.ReadImage(i), out_fname) - - # test - current_dir = join(folder, "test") - test_data = subfiles(current_dir, suffix="mhd") - for i in test_data: - out_fname = join(out_folder, "imagesTs", i.split("/")[-1][:-4] + "_0000.nii.gz") - sitk.WriteImage(sitk.ReadImage(i), out_fname) - - - json_dict = OrderedDict() - json_dict['name'] = "PROMISE12" - json_dict['description'] = "prostate" - json_dict['tensorImageSize'] = "4D" - json_dict['reference'] = "see challenge website" - json_dict['licence'] = "see challenge website" - json_dict['release'] = "0.0" - json_dict['modality'] = { - "0": "MRI", - } - json_dict['labels'] = { - "0": "background", - "1": "prostate" - } - json_dict['numTraining'] = len(raw_data) - json_dict['numTest'] = len(test_data) - json_dict['training'] = [{'image': "./imagesTr/%s.nii.gz" % i.split("/")[-1][:-4], "label": "./labelsTr/%s.nii.gz" % i.split("/")[-1][:-4]} for i in - raw_data] - json_dict['test'] = ["./imagesTs/%s.nii.gz" % i.split("/")[-1][:-4] for i in test_data] - - save_json(json_dict, os.path.join(out_folder, "dataset.json")) - diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/nnUNetTrainerCE.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/nnUNetTrainerCE.py deleted file mode 100644 index 689dcbf552a647039a315d9121b5de3253585563..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/training/network_training/nnUNet_variants/nnUNetTrainerCE.py +++ /dev/null @@ -1,23 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
-# See the License for the specific language governing permissions and -# limitations under the License. -from nnunet.training.loss_functions.crossentropy import RobustCrossEntropyLoss -from nnunet.training.network_training.nnUNetTrainer import nnUNetTrainer - - -class nnUNetTrainerCE(nnUNetTrainer): - def __init__(self, plans_file, fold, output_folder=None, dataset_directory=None, batch_dice=True, stage=None, - unpack_data=True, deterministic=True, fp16=False): - super(nnUNetTrainerCE, self).__init__(plans_file, fold, output_folder, dataset_directory, batch_dice, stage, - unpack_data, deterministic, fp16) - self.loss = RobustCrossEntropyLoss() diff --git a/spaces/hsukqilee/NSFW-API/src/controllers/NsfwController.ts b/spaces/hsukqilee/NSFW-API/src/controllers/NsfwController.ts deleted file mode 100644 index a225e1c73d88ed7c9ff91b7a2e0a650423aba8e1..0000000000000000000000000000000000000000 --- a/spaces/hsukqilee/NSFW-API/src/controllers/NsfwController.ts +++ /dev/null @@ -1,42 +0,0 @@ -import {Controller, Post} from 'simple-ts-express-decorators'; -import multer, {memoryStorage} from 'multer'; -import {Request, Response} from 'express'; -import {NsfwImageClassifier} from 'app/NsfwImageClassifier'; - -const upload = multer({storage: memoryStorage()}); - -@Controller() -export class NsfwController { - classifier: NsfwImageClassifier; - - constructor() { - this.classifier = new NsfwImageClassifier(); - } - - @Post('/classify', upload.single('image')) - async classify(request: Request, response: Response) { - if (!request.file) { - return response - .status(410) - .json({error: 'Specify image'}); - } - - const data = await this.classifier.classify(request.file.buffer); - - return response.json(data); - } - - @Post('/classify-many', upload.array('images', 10)) - async classifyMany(request: Request, response: Response) { - if (!request.files || !request.files.length) { - return response - .status(410) - .json({error: 'Specify images'}); - } - - const buffers = (request.files as Express.Multer.File[]).map(file => file.buffer); - const data = await this.classifier.classifyMany(buffers); - - return response.json(data); - } -} diff --git a/spaces/huggingface-projects/video-composer-gpt4/app.py b/spaces/huggingface-projects/video-composer-gpt4/app.py deleted file mode 100644 index 77235cbdc4083a9ff2bb701fef6f51daaeb2303f..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/video-composer-gpt4/app.py +++ /dev/null @@ -1,310 +0,0 @@ -import gradio as gr - -from PIL import Image -from moviepy.editor import VideoFileClip, AudioFileClip - -import os -import openai -import subprocess -from pathlib import Path -import uuid -import tempfile -import shlex -import shutil -from utils import format_bash_command - -OPENAI_API_KEY = os.environ["OPENAI_API_KEY"] -openai.api_key = OPENAI_API_KEY - -allowed_medias = [ - ".png", - ".jpg", - ".jpeg", - ".tiff", - ".bmp", - ".gif", - ".svg", - ".mp3", - ".wav", - ".ogg", - ".mp4", - ".avi", - ".mov", - ".mkv", - ".flv", - ".wmv", - ".webm", - ".mpg", - ".mpeg", - ".m4v", - ".3gp", - ".3g2", - ".3gpp", -] - - -def get_files_infos(files): - results = [] - for file in files: - file_path = Path(file.name) - info = {} - info["size"] = os.path.getsize(file_path) - info["name"] = file_path.name - file_extension = file_path.suffix - - if file_extension in (".mp4", ".avi", ".mkv", ".mov"): - info["type"] = "video" - video = VideoFileClip(file.name) - info["duration"] = video.duration - info["dimensions"] = "{}x{}".format(video.size[0], video.size[1]) - if 
video.audio: - info["type"] = "video/audio" - info["audio_channels"] = video.audio.nchannels - video.close() - elif file_extension in (".mp3", ".wav"): - info["type"] = "audio" - audio = AudioFileClip(file.name) - info["duration"] = audio.duration - info["audio_channels"] = audio.nchannels - audio.close() - elif file_extension in ( - ".png", - ".jpg", - ".jpeg", - ".tiff", - ".bmp", - ".gif", - ".svg", - ): - info["type"] = "image" - img = Image.open(file.name) - info["dimensions"] = "{}x{}".format(img.size[0], img.size[1]) - results.append(info) - return results - - -def get_completion(prompt, files_info, top_p, temperature): - files_info_string = "" - for file_info in files_info: - files_info_string += f"""{file_info["type"]} {file_info["name"]}""" - if file_info["type"] == "video" or file_info["type"] == "image": - files_info_string += f""" {file_info["dimensions"]}""" - if file_info["type"] == "video" or file_info["type"] == "audio": - files_info_string += f""" {file_info["duration"]}s""" - if file_info["type"] == "audio" or file_info["type"] == "video/audio": - files_info_string += f""" {file_info["audio_channels"]} audio channels""" - files_info_string += "\n" - - messages = [ - { - "role": "system", - # "content": f"""Act as a FFMPEG expert. Create a valid FFMPEG command that will be directly pasted in the terminal. Using those files: {files_info} create the FFMPEG command to achieve this: "{prompt}". Make sure it's a valid command that will not do any error. Always name the output of the FFMPEG command "output.mp4". Always use the FFMPEG overwrite option (-y). Don't produce video longer than 1 minute. Think step by step but never give any explanation, only the shell command.""", - # "content": f"""You'll need to create a valid FFMPEG command that will be directly pasted in the terminal. You have those files (images, videos, and audio) at your disposal: {files_info} and you need to compose a new video using FFMPEG and following those instructions: "{prompt}". You'll need to use as many assets as you can. Make sure it's a valid command that will not do any error. Always name the output of the FFMPEG command "output.mp4". Always use the FFMPEG overwrite option (-y). Try to avoid using -filter_complex option. Don't produce video longer than 1 minute. Think step by step but never give any explanation, only the shell command.""", - "content": """ -You are a very experienced media engineer, controlling a UNIX terminal. -You are an FFMPEG expert with years of experience and multiple contributions to the FFMPEG project. - -You are given: -(1) a set of video, audio and/or image assets. Including their name, duration, dimensions and file size -(2) the description of a new video you need to create from the list of assets - -Based on the available assets and the description, your objective issue a FFMPEG to create a new video using the assets. -This will often involve putting assets one after the other, cropping the video format, or playing music in the background. Avoid using complex FFMPEG options, and try to keep the command as simple as possible as it will be directly paster into the terminal. -""", - }, - { - "role": "user", - "content": f"""Always output the media as video/mp4 and output file with "output.mp4". Provide only the shell command without any explanations. -The current assets and objective follow. 
Reply with the FFMPEG command: - -AVAILABLE ASSETS LIST: - -{files_info_string} - -OBJECTIVE: {prompt} and output at "output.mp4" -YOUR FFMPEG COMMAND: - """, - }, - ] - try: - completion = openai.ChatCompletion.create( - model="gpt-4", messages=messages, top_p=top_p, temperature=temperature - ) - command = completion.choices[0].message.content.replace("\n", "") - - # remove output.mp4 with the actual output file path - command = command.replace("output.mp4", "") - - return command - except Exception as e: - print("FROM OPENAI", e) - raise Exception("OpenAI API error") - - -def update(files, prompt, top_p=1, temperature=1): - if prompt == "": - raise gr.Error("Please enter a prompt.") - - files_info = get_files_infos(files) - # disable this if you're running the app locally or on your own server - for file_info in files_info: - if file_info["type"] == "video": - if file_info["duration"] > 120: - raise gr.Error( - "Please make sure all videos are less than 2 minute long." - ) - if file_info["size"] > 10000000: - raise gr.Error("Please make sure all files are less than 10MB in size.") - - attempts = 0 - while attempts < 2: - print("ATTEMPT", attempts) - try: - command_string = get_completion(prompt, files_info, top_p, temperature) - print( - f"""///PROMTP {prompt} \n\n/// START OF COMMAND ///:\n\n{command_string}\n\n/// END OF COMMAND ///\n\n""" - ) - - # split command string into list of arguments - args = shlex.split(command_string) - if args[0] != "ffmpeg": - raise Exception("Command does not start with ffmpeg") - temp_dir = tempfile.mkdtemp() - # copy files to temp dir - for file in files: - file_path = Path(file.name) - shutil.copy(file_path, temp_dir) - - # test if ffmpeg command is valid dry run - ffmpg_dry_run = subprocess.run( - args + ["-f", "null", "-"], - stderr=subprocess.PIPE, - text=True, - cwd=temp_dir, - ) - if ffmpg_dry_run.returncode == 0: - print("Command is valid.") - else: - print("Command is not valid. Error output:") - print(ffmpg_dry_run.stderr) - raise Exception( - "FFMPEG generated command is not valid. Please try again." - ) - - output_file_name = f"output_{uuid.uuid4()}.mp4" - output_file_path = str((Path(temp_dir) / output_file_name).resolve()) - subprocess.run(args + ["-y", output_file_path], cwd=temp_dir) - generated_command = f"### Generated Command\n```bash\n{format_bash_command(args)}\n -y output.mp4\n```" - return output_file_path, gr.update(value=generated_command) - except Exception as e: - attempts += 1 - if attempts >= 2: - print("FROM UPDATE", e) - raise gr.Error(e) - - -with gr.Blocks() as demo: - gr.Markdown( - """ - # 🏞 GPT-4 Video Composer - Add video, image and audio assets and ask ChatGPT to compose a new video. 
- **Please note: This demo is not a generative AI model, it only uses GPT-4 to generate a valid FFMPEG command based on the input files and the prompt.** - """, - elem_id="header", - ) - with gr.Row(): - with gr.Column(): - user_files = gr.File( - file_count="multiple", - label="Media files", - keep_filename=True, - file_types=allowed_medias, - ) - user_prompt = gr.Textbox( - placeholder="I want to convert to a gif under 15mb", - label="Instructions", - ) - btn = gr.Button("Run", label="Run") - with gr.Accordion("Parameters", open=False): - top_p = gr.Slider( - minimum=-0, - maximum=1.0, - value=1.0, - step=0.05, - interactive=True, - label="Top-p (nucleus sampling)", - ) - temperature = gr.Slider( - minimum=-0, - maximum=5.0, - value=1.0, - step=0.1, - interactive=True, - label="Temperature", - ) - with gr.Column(): - generated_video = gr.Video( - interactive=False, label="Generated Video", include_audio=True - ) - generated_command = gr.Markdown() - - btn.click( - fn=update, - inputs=[user_files, user_prompt, top_p, temperature], - outputs=[generated_video, generated_command], - ) - with gr.Row(): - gr.Examples( - examples=[ - [ - [ - "./examples/cat8.jpeg", - "./examples/cat1.jpeg", - "./examples/cat2.jpeg", - "./examples/cat3.jpeg", - "./examples/cat4.jpeg", - "./examples/cat5.jpeg", - "./examples/cat6.jpeg", - "./examples/cat7.jpeg", - "./examples/heat-wave.mp3", - ], - "make a video gif, each image with 1s loop and add the audio as background", - 0, - 0, - ], - [ - ["./examples/example.mp4"], - "please encode this video 10 times faster", - 0, - 0, - ], - [ - ["./examples/heat-wave.mp3", "./examples/square-image.png"], - "Make a 720x720 video, a white waveform of the audio, and finally add add the input image as the background all along the video.", - 0, - 0, - ], - [ - ["./examples/waterfall-overlay.png", "./examples/waterfall.mp4"], - "Add the overlay to the video.", - 0, - 0, - ], - ], - inputs=[user_files, user_prompt, top_p, temperature], - outputs=[generated_video, generated_command], - fn=update, - run_on_click=True, - cache_examples=True, - ) - - with gr.Row(): - gr.Markdown( - """ - If you have idea to improve this please open a PR: - - [![Open a Pull Request](https://huggingface.co/datasets/huggingface/badges/raw/main/open-a-pr-lg-light.svg)](https://huggingface.co/spaces/huggingface-projects/video-composer-gpt4/discussions) - """, - ) -demo.queue(api_open=False) -demo.launch(show_api=False) diff --git a/spaces/huggingface-tools/image-transformation/README.md b/spaces/huggingface-tools/image-transformation/README.md deleted file mode 100644 index 1419497c30e8e96d3e7b3e798c2188380d690047..0000000000000000000000000000000000000000 --- a/spaces/huggingface-tools/image-transformation/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Image Transformation -emoji: ⚡ -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -tags: -- tool ---- diff --git a/spaces/huy-ha/semabs-relevancy/README.md b/spaces/huy-ha/semabs-relevancy/README.md deleted file mode 100644 index 2cebc38f6d02abf24fbc498295697d4903bc6071..0000000000000000000000000000000000000000 --- a/spaces/huy-ha/semabs-relevancy/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Semabs Relevancy -emoji: 🐨 -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.9 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git 
a/spaces/hysts/anime_face_landmark_detection/app.py b/spaces/hysts/anime_face_landmark_detection/app.py deleted file mode 100644 index b2a6bd0c4e06cc4ce19a53f62ecf9868b7ecdfbc..0000000000000000000000000000000000000000 --- a/spaces/hysts/anime_face_landmark_detection/app.py +++ /dev/null @@ -1,140 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import functools -import os -import pathlib -import sys -import tarfile -import urllib.request -from typing import Callable - -import cv2 -import gradio as gr -import huggingface_hub -import numpy as np -import PIL.Image -import torch -import torchvision.transforms as T - -sys.path.insert(0, 'anime_face_landmark_detection') - -from CFA import CFA - -DESCRIPTION = '# [kanosawa/anime_face_landmark_detection](https://github.com/kanosawa/anime_face_landmark_detection)' - -NUM_LANDMARK = 24 -CROP_SIZE = 128 - - -def load_sample_image_paths() -> list[pathlib.Path]: - image_dir = pathlib.Path('images') - if not image_dir.exists(): - dataset_repo = 'hysts/sample-images-TADNE' - path = huggingface_hub.hf_hub_download(dataset_repo, - 'images.tar.gz', - repo_type='dataset') - with tarfile.open(path) as f: - f.extractall() - return sorted(image_dir.glob('*')) - - -def load_face_detector() -> cv2.CascadeClassifier: - url = 'https://raw.githubusercontent.com/nagadomi/lbpcascade_animeface/master/lbpcascade_animeface.xml' - path = pathlib.Path('lbpcascade_animeface.xml') - if not path.exists(): - urllib.request.urlretrieve(url, path.as_posix()) - return cv2.CascadeClassifier(path.as_posix()) - - -def load_landmark_detector(device: torch.device) -> torch.nn.Module: - path = huggingface_hub.hf_hub_download( - 'public-data/anime_face_landmark_detection', - 'checkpoint_landmark_191116.pth') - model = CFA(output_channel_num=NUM_LANDMARK + 1, checkpoint_name=path) - model.to(device) - model.eval() - return model - - -@torch.inference_mode() -def detect(image_path: str, face_detector: cv2.CascadeClassifier, - device: torch.device, transform: Callable, - landmark_detector: torch.nn.Module) -> np.ndarray: - image = cv2.imread(image_path) - gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) - preds = face_detector.detectMultiScale(gray, - scaleFactor=1.1, - minNeighbors=5, - minSize=(24, 24)) - - image_h, image_w = image.shape[:2] - pil_image = PIL.Image.fromarray(image[:, :, ::-1].copy()) - - res = image.copy() - for x_orig, y_orig, w_orig, h_orig in preds: - - x0 = round(max(x_orig - w_orig / 8, 0)) - x1 = round(min(x_orig + w_orig * 9 / 8, image_w)) - y0 = round(max(y_orig - h_orig / 4, 0)) - y1 = y_orig + h_orig - w = x1 - x0 - h = y1 - y0 - - temp = pil_image.crop((x0, y0, x1, y1)) - temp = temp.resize((CROP_SIZE, CROP_SIZE), PIL.Image.BICUBIC) - data = transform(temp) - data = data.to(device).unsqueeze(0) - - heatmaps = landmark_detector(data) - heatmaps = heatmaps[-1].cpu().numpy()[0] - - cv2.rectangle(res, (x0, y0), (x1, y1), (0, 255, 0), 2) - - for i in range(NUM_LANDMARK): - heatmap = cv2.resize(heatmaps[i], (CROP_SIZE, CROP_SIZE), - interpolation=cv2.INTER_CUBIC) - pty, ptx = np.unravel_index(np.argmax(heatmap), heatmap.shape) - pt_crop = np.round(np.array([ptx * w, pty * h]) / - CROP_SIZE).astype(int) - pt = np.array([x0, y0]) + pt_crop - cv2.circle(res, tuple(pt), 2, (0, 0, 255), cv2.FILLED) - - return res[:, :, ::-1] - - -device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu') - -image_paths = load_sample_image_paths() -examples = [[path.as_posix()] for path in image_paths] - -face_detector = load_face_detector() 
-landmark_detector = load_landmark_detector(device) -transform = T.Compose([ - T.ToTensor(), - T.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]), -]) - -fn = functools.partial(detect, - face_detector=face_detector, - device=device, - transform=transform, - landmark_detector=landmark_detector) - -with gr.Blocks(css='style.css') as demo: - gr.Markdown(DESCRIPTION) - with gr.Row(): - with gr.Column(): - image = gr.Image(label='Input', type='filepath') - run_button = gr.Button('Run') - with gr.Column(): - result = gr.Image(label='Result') - - gr.Examples(examples=examples, - inputs=image, - outputs=result, - fn=fn, - cache_examples=os.getenv('CACHE_EXAMPLES') == '1') - run_button.click(fn=fn, inputs=image, outputs=result, api_name='predict') -demo.queue(max_size=15).launch() diff --git a/spaces/inamXcontru/PoeticTTS/Batman Arkham Origins - Initiation Download 10 Mb Learn the Secrets of the Shadow Warrior.md b/spaces/inamXcontru/PoeticTTS/Batman Arkham Origins - Initiation Download 10 Mb Learn the Secrets of the Shadow Warrior.md deleted file mode 100644 index 3fc2a6bc2712fb6162c34265c2e10251eed13365..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Batman Arkham Origins - Initiation Download 10 Mb Learn the Secrets of the Shadow Warrior.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Batman: Arkham Origins - Initiation Download 10 Mb


    Download Zip ··· https://gohhs.com/2uz3KS



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Hacked Games Need Speed Most Wanted 2012 Multiplayer Crack.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Hacked Games Need Speed Most Wanted 2012 Multiplayer Crack.md deleted file mode 100644 index 30c517036134626ac07c8c355d63d4dfe9f4d48e..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Hacked Games Need Speed Most Wanted 2012 Multiplayer Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Hacked Games Need Speed Most Wanted 2012 Multiplayer Crack


    DOWNLOAD ✫✫✫ https://urlin.us/2uEwsd



    -
    -Need for Speed Most Wanted Mod Apk is an Android racing game. This game comes pre-installed with an Unlimited Money mod, so you have unlimited money. ... Need For Speed Hack Undetected. ... See more. Need For Speed Most Wanted 2012 PC Game. NFS Most Wanted 2012 is an arcade ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/inreVtussa/clothingai/Examples/Cooking Simulator Superhot Challenge-PLAZA Repack [ 4 GB ] Download __TOP__.md b/spaces/inreVtussa/clothingai/Examples/Cooking Simulator Superhot Challenge-PLAZA Repack [ 4 GB ] Download __TOP__.md deleted file mode 100644 index 4bda41f0add2ec9a13af2839af76320dac830b31..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Cooking Simulator Superhot Challenge-PLAZA Repack [ 4 GB ] Download __TOP__.md +++ /dev/null @@ -1,13 +0,0 @@ -

    Cooking Simulator Superhot Challenge-PLAZA Repack [ 4 GB ] Download


    Download ✑ ✑ ✑ https://tiurll.com/2uClyA



    -
    -Cooking Simulator Pizza Game Download for free via torrent; Memory: 6GB RAM; Graphics: GTX 660Ti 3GB / R9 270X 4GB; DirectX: version 9.0c; Storage: 6 GB available. ☀ Computer games - download games from torrent ☀ Torrent. -Download games via torrent on PC -Download games on PC via torrent -Download games via torrent -Download Games for PC Free -Only the best games, which you can play absolutely free, are collected here. -With a torrent, you can download them within a few hours. -They are available to you; just go to the appropriate section and download the game you like. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/irvay/RVC_IR/lib/infer_pack/attentions.py b/spaces/irvay/RVC_IR/lib/infer_pack/attentions.py deleted file mode 100644 index 05501be1871643f78dddbeaa529c96667031a8db..0000000000000000000000000000000000000000 --- a/spaces/irvay/RVC_IR/lib/infer_pack/attentions.py +++ /dev/null @@ -1,417 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -from lib.infer_pack import commons -from lib.infer_pack import modules -from lib.infer_pack.modules import LayerNorm - - -class Encoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - window_size=10, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - window_size=window_size, - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__( - self, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size=1, - p_dropout=0.0, - proximal_bias=False, - proximal_init=True, - **kwargs - ): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention( - hidden_channels, - hidden_channels, - n_heads, - p_dropout=p_dropout, - proximal_bias=proximal_bias, - proximal_init=proximal_init, - ) - ) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append( - MultiHeadAttention( - hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout - ) - ) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN( - hidden_channels, - hidden_channels, - filter_channels, - kernel_size, - p_dropout=p_dropout, - causal=True, - ) - ) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = 
commons.subsequent_mask(x_mask.size(2)).to( - device=x.device, dtype=x.dtype - ) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__( - self, - channels, - out_channels, - n_heads, - p_dropout=0.0, - window_size=None, - heads_share=True, - block_length=None, - proximal_bias=False, - proximal_init=False, - ): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - self.emb_rel_v = nn.Parameter( - torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) - * rel_stddev - ) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert ( - t_s == t_t - ), "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys( - query / math.sqrt(self.k_channels), key_relative_embeddings - ) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to( - device=scores.device, dtype=scores.dtype - ) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert ( - t_s == t_t - ), "Local attention is only available for self-attention." 
- block_mask = ( - torch.ones_like(scores) - .triu(-self.block_length) - .tril(self.block_length) - ) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings( - self.emb_rel_v, t_s - ) - output = output + self._matmul_with_relative_values( - relative_weights, value_relative_embeddings - ) - output = ( - output.transpose(2, 3).contiguous().view(b, d, t_t) - ) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]]), - ) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[ - :, slice_start_position:slice_end_position - ] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, 1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad( - x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [0, length - 1]]) - ) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length + 1, 2 * length - 1])[ - :, :, :length, length - 1 : - ] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad( - x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length - 1]]) - ) - x_flat = x.view([batch, heads, length**2 + length * (length - 1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2 * length])[:, :, :, 1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__( - self, - in_channels, - out_channels, - filter_channels, - kernel_size, - p_dropout=0.0, - activation=None, - causal=False, - ): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/isaiah08/dalle-mini-test/style.css b/spaces/isaiah08/dalle-mini-test/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/isaiah08/dalle-mini-test/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/ismot/1702t1/models/other/init_env.py b/spaces/ismot/1702t1/models/other/init_env.py deleted file mode 100644 index 3654f11d0fe7b3f113bcf9af4a7f43807bf31a79..0000000000000000000000000000000000000000 --- a/spaces/ismot/1702t1/models/other/init_env.py +++ /dev/null @@ -1,37 +0,0 @@ -""" -@Date: 2021/08/15 -@description: -""" -import random -import torch -import torch.backends.cudnn as cudnn -import numpy as np -import os -import cv2 - - -def init_env(seed, deterministic=False, loader_work_num=0): - # Fix seed - # Python & NumPy - np.random.seed(seed) - random.seed(seed) - os.environ['PYTHONHASHSEED'] = str(seed) - - # PyTorch - torch.manual_seed(seed) # 为CPU设置随机种子 - if torch.cuda.is_available(): - torch.cuda.manual_seed(seed) # 为当前GPU设置随机种子 - torch.cuda.manual_seed_all(seed) # 为所有GPU设置随机种子 - - # cuDNN - if deterministic: - # 复现 - torch.backends.cudnn.benchmark = False - torch.backends.cudnn.deterministic = True # 将这个 flag 置为 True 的话,每次返回的卷积算法将是确定的,即默认算法 - else: - cudnn.benchmark = True # 如果网络的输入数据维度或类型上变化不大,设置true - torch.backends.cudnn.deterministic = False - - # Using multiple threads in Opencv can cause deadlocks - if loader_work_num != 0: - 
cv2.setNumThreads(0) diff --git a/spaces/james-oldfield/PandA/networks/genforce/datasets/README.md b/spaces/james-oldfield/PandA/networks/genforce/datasets/README.md deleted file mode 100644 index c5afd6eca5a373ec567df4f4082010f3bf21aff3..0000000000000000000000000000000000000000 --- a/spaces/james-oldfield/PandA/networks/genforce/datasets/README.md +++ /dev/null @@ -1,24 +0,0 @@ -# Data Preparation - -## Data Format - -Currently, our dataloader is able to load data from - -- a directory that is full of images (support using [`turbojpeg`](https://pypi.org/project/PyTurboJPEG/) to speed up decoding images.) -- a `lmdb` file -- an image list -- a compressed file (i.e., `zip` package) - -by modifying `data_format` in the configuration. - -**NOTE:** For some computing clusters whose I/O speed may be slow, we recommend the `zip` format for two reasons. First, `zip` file is easy to create. Second, this can load a large file at one time instead of loading small files repeatedly. - -## Data Sampling - -Considering that most generative models are trained in the unit of iterations instead of epochs, we change the default data loader to an *iter-based* one. Besides, the original distributed data sampler is also modified to make the shuffling correspond to iteration instead of epoch. - -**NOTE:** In order to reduce the data re-loading cost between epochs, we manually extend the length of sampled indices to make it much more efficient. - -## Data Augmentation - -To better align with the original implementation of PGGAN and StyleGAN (i.e., models that require progressive training), we support progressive resize in `transforms.py`, which downsamples images with the maximum resize factor of 2 at each time. diff --git a/spaces/jgurzoni/image_background_swapper/saicinpainting/evaluation/losses/__init__.py b/spaces/jgurzoni/image_background_swapper/saicinpainting/evaluation/losses/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/jirufengyu/face_recognition/local.py b/spaces/jirufengyu/face_recognition/local.py deleted file mode 100644 index 8bc0e409d70596083a5c1dcaceb1a16716430a19..0000000000000000000000000000000000000000 --- a/spaces/jirufengyu/face_recognition/local.py +++ /dev/null @@ -1,4 +0,0 @@ -import tkinter -import cv2 -import os -from .utils.face_rec import input_an_image, update_ind2person \ No newline at end of file diff --git a/spaces/jitesh/storytelling/src/read_logs.py b/spaces/jitesh/storytelling/src/read_logs.py deleted file mode 100644 index ec7d557f03f2cc965d8fbe4d3906e09cd9805f30..0000000000000000000000000000000000000000 --- a/spaces/jitesh/storytelling/src/read_logs.py +++ /dev/null @@ -1,464 +0,0 @@ -import random -import numpy as np -import pandas as pd -import plotly.express as px -import streamlit as st -import xlsxwriter -from os import listdir -from .lib import set_input, create_dowload_button -from os.path import isfile, join, exists -import printj -# import cv2 -import matplotlib.image as mpimg - -class LogAnalyser: - def __init__(self, gen, container_guide, container_param, container_button): - self.gen, self.container_guide, self.container_param, self.container_button = gen, container_guide, container_param, container_button - # self.gen.initialise_classifier_model() - dirpath = 'data' - log_file_paths = sorted( - [join(dirpath, f) for f in listdir(dirpath) if isfile(join(dirpath, f)) and f.startswith('ist_log')]) - - self.path = container_param.selectbox( - 'Select the log path', 
log_file_paths) - self.df_path = f'data/df/{self.path.split("/")[-1].split(".")[0]}.csv' - # if 'button1_counter' not in st.session_state: - # st.session_state.button1_counter = 0 - # if 'df' not in st.session_state: - # self.df=0 - st.markdown(self.get_text()) - self.placeholder = dict() - - @staticmethod - @st.cache - def get_text(): - return ''' - - ### Equation - ``` - frequency_penalty = 1 - emotion_frequency - probability_emote = w * emotion_confidence + (1 - w) * frequency_penalty - Show_Emotion = probability_emote > (Random value between 0 and 1) - ``` - ''' - - def display_logs(self): - # self.container_param.markdown( - # f'st.session_state.button1_counter: {st.session_state.button1_counter}') - self.emotion_type = self.container_param.select_slider( - 'How many Emotion data to show?', ['Max-only', '2', '3', '4', '5', '6', 'All 7']) - self.debug = 'debug' in self.df_path - if (not exists(self.df_path) or self.container_button.button('Detect Emotion')) and (not self.debug): - self.df = self.get_log() - # else: - self.df = pd.read_csv(self.df_path) - - # if 'path' not in st.session_state: - # st.session_state.path=self.path - # if 'df' not in st.session_state or st.session_state.path!=self.path: - # st.session_state.df=self.get_log(self.path, self.gen) - # st.session_state.path=self.path - - self.update_df() - if self.debug: - for name in ['c1plot', 'c2plot']: - self.placeholder[name] = st.empty() - # image = cv2.imread(f'data/img/{name}.png') - # image=cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - - image = mpimg.imread(f'data/img/{name}.png') - self.placeholder[name].image(image) - self.get_c1_plot() - self.get_c2_plot() - - def get_c1_plot(self): - # c2_threshold=0 - c1_threshold_list = np.arange(0, 1, 0.01) - c1_reaction_weight_list = np.arange(0, 1.1, 0.1) - - # reaction_weight=0.5 - list_stories = self.df.Story.unique() - total_num_stories = len(list_stories) - num_stories2show = 9 # int(set_input(self.container_param, - # label='Number of stories to show', min_value=1, max_value=total_num_stories, value=9, step=1, - # key_slider='num_stories2show_slider', key_input='num_stories2show_input',)) - list_stories2show = list_stories[:num_stories2show] - - c1r_sum_list = [] - df_c1_analysis = pd.DataFrame() - c1_analysis_dict = dict() - for reaction_weight in c1_reaction_weight_list: - reaction_weight=np.round(reaction_weight, 2) - for c1_threshold in c1_threshold_list: - df_c1 = self.df.copy() - - for story_id in list_stories2show: - reaction_num = 0 - reaction_frequency = 0 - probability_emote = 0 - reaction_show = False - - subset_condition = self.get_subset_condition(df_c1, story_id) - dfs = df_c1[subset_condition] - for i, (index, row) in enumerate(dfs.iterrows()): - if row.Emotion == 'neutral' or row.Score < self.score_threshold: - reaction_show = False - else: - reaction_frequency = reaction_num/(i+1) - probability_emote = row.Score*reaction_weight + \ - (1-reaction_weight)*(1-reaction_frequency) - reaction_show = True if probability_emote > c1_threshold else False - if reaction_show: - reaction_num += 1 - - df_c1.at[index, 'reaction_frequency'] = reaction_frequency - df_c1.at[index, 'probability_emote'] = probability_emote - df_c1.at[index, 'c1_threshold'] = c1_threshold - df_c1.at[index, 'reaction_show'] = reaction_show - df_c1.at[index, 'c1'] = reaction_show - review = df_c1.e_review[index] - df_c1.at[index, 'c1r'] = self.get_criteria_review( - reaction_show, review=review, neutral_emotion=row.Emotion == 'neutral') - c1r_sum = df_c1['c1r'].sum() - c1r_sum_list.append(c1r_sum) 
- c1_analysis_dict['c1_threshold']=c1_threshold - c1_analysis_dict['reaction_weight']=reaction_weight - c1_analysis_dict['c1r_sum']=c1r_sum - df_c1_analysis=pd.concat([df_c1_analysis, pd.DataFrame(c1_analysis_dict, index=[0])]) - - - - # fig = px.line(x=c1_threshold_list, y=c1r_sum_list) - fig = px.line(data_frame=df_c1_analysis, x='c1_threshold', y='c1r_sum', color='reaction_weight') - fig.update_layout( - title="Criteria 1 analysis `PE > Threshold`", - xaxis_title="PE Threshold", - yaxis_title="Count of good reviews", - # legend_title="Legend Title", - font=dict( - # family="Courier New, monospace", - size=14, - color="#006064" - ), - - ) - # st.plotly_chart(fig, use_container_width=True) - self.placeholder['c1plot'].plotly_chart(fig, use_container_width=True) - def get_c2_plot(self): - # c2_threshold=0 - c2_threshold_list = np.arange(0, 1, 0.01) - - list_stories = self.df.Story.unique() - total_num_stories = len(list_stories) - num_stories2show = 9 # int(set_input(self.container_param, - # label='Number of stories to show', min_value=1, max_value=total_num_stories, value=9, step=1, - # key_slider='num_stories2show_slider', key_input='num_stories2show_input',)) - list_stories2show = list_stories[:num_stories2show] - - c2r_sum_list = [] - for c2_threshold in c2_threshold_list: - df_c2 = self.df.copy() - for story_id in list_stories2show: - subset_condition = self.get_subset_condition(df_c2, story_id) - dfs = df_c2[subset_condition] - for i, (index, row) in enumerate(dfs.iterrows()): - c2 = row.Score > c2_threshold - df_c2.at[index, 'c2'] = c2 - review = df_c2.e_review[index] - df_c2.at[index, 'c2r'] = self.get_criteria_review( - c2, review=review, neutral_emotion=row.Emotion == 'neutral') - c2r_sum_list.append(df_c2['c2r'].sum()) - fig = px.line(x=c2_threshold_list, y=c2r_sum_list) - fig.update_layout( - title="Criteria 2 analysis `CS > Threshold`", - xaxis_title="CS Threshold", - yaxis_title="Count of good reviews", - # legend_title="Legend Title", - font=dict( - # family="Courier New, monospace", - size=14, - color="#006064" - ), - - ) - self.placeholder['c2plot'].plotly_chart(fig, use_container_width=True) - - @staticmethod - def get_subset_condition(data, story_id): - return (data.Story == story_id) & (data.Turn == 'user') - - @staticmethod - def get_criteria_review(c, review, neutral_emotion=False): - # printj.green(f'{c} {type(c)}') - # printj.green(f'{review} {type(review)}') - review_bool = True if (review == 'o' or review == None) else False if (c == False and review == 'x') else None - if neutral_emotion and review_bool: - result = True - else: - result = (c and review_bool) or (not c and not review_bool) - return np.round(int(result), 0) - # return str(np.round(result, 0)) - - def get_ngram_pattern(self, s, n=2): - gnp = '' - for i in range(len(s)-(n-1)): - gnp += '1' if '1' in s[i:i+n] else '0' - return gnp - - def update_df(self): - list_stories = self.df.Story.unique() - total_num_stories = len(list_stories) - num_stories2show = int(set_input(self.container_param, - label='No. 
of stories to show', min_value=1, max_value=total_num_stories, value=9, step=1, - key_slider='num_stories2show_slider', key_input='num_stories2show_input',)) - list_stories2show = list_stories[:num_stories2show] - reaction_weight = set_input(self.container_param, - label='Reaction Weight w', min_value=0.0, max_value=1.0, value=0.5, step=0.01, - key_slider='w_slider', key_input='w_input',) - self.container_param_rv = self.container_param.columns([1, 1]) - random_value_mode = self.container_param_rv[0].radio( - "C1 Threshold type", ["Random", "Fixed"], index=1) - # random_value = random.random() - if random_value_mode == "Fixed": - random_value = set_input(self.container_param, - label='C1 Threshold', - key_slider='rand_slider', key_input='rand_input', - min_value=0., - max_value=1., - value=.5, - step=.01,) - c2_threshold = set_input(self.container_param, - label='C2 Threshold', min_value=0.0, max_value=1.0, value=0.7, step=0.01, - key_slider='c2_threshold_slider', key_input='c2_threshold_input',) - table_mode = self.container_param.radio( - "Table Style:", ["Dataframe", "Table"]) - self.show_pe_data = self.container_param.checkbox( - 'Show Probability Emote', value=True, key='show_pe_data_log') - self.score_threshold = set_input(self.container_param, - label='Score Threshold', min_value=0.0, max_value=1.0, value=0.5, step=0.01, - key_slider='score_threshold_slider', key_input='score_threshold_input',) - - df_reaction_pattern = pd.DataFrame() - reaction_pattern_dict = dict() - for story_id in list_stories2show: - reaction_num = 0 - reaction_frequency = 0 - probability_emote = 0 - # random_value = 0 - reaction_show = False - # c2 = True - - subset_condition = self.get_subset_condition(self.df, story_id) - dfs = self.df[subset_condition] - for i, (index, row) in enumerate(dfs.iterrows()): - if row.Emotion == 'neutral' or row.Score < self.score_threshold: - reaction_show = False - else: - reaction_frequency = reaction_num/(i+1) - probability_emote = row.Score*reaction_weight + \ - (1-reaction_weight)*(1-reaction_frequency) - if random_value_mode == "Random": - random_value = random.random() - reaction_show = True if probability_emote > random_value else False - if reaction_show: - reaction_num += 1 - - self.df.at[index, 'reaction_frequency'] = reaction_frequency - self.df.at[index, 'probability_emote'] = probability_emote - self.df.at[index, 'random_value'] = random_value - self.df.at[index, 'reaction_show'] = reaction_show - self.df.at[index, 'c1'] = reaction_show - c2 = row.Emotion != 'neutral' and row.Score > c2_threshold - self.df.at[index, 'c2'] = c2 - review = self.df.e_review[index] - self.df.at[index, 'c1r'] = self.get_criteria_review( - reaction_show, review=review, neutral_emotion=row.Emotion == 'neutral') - self.df.at[index, 'c2r'] = self.get_criteria_review( - c2, review=review, neutral_emotion=row.Emotion == 'neutral') - s = '' - df_edit = self.df[self.get_subset_condition( - self.df, story_id)].reaction_show.copy() - df_edit = df_edit.dropna() - for v in df_edit: - s += str(int(v)) - # df_reaction_pattern.at[story_id] - # reaction_pattern_dict['story_id']=story_id - reaction_pattern_dict['reaction_length'] = len(s) - reaction_pattern_dict['reaction_1'] = s.count('1') - reaction_pattern_dict['reaction_pattern'] = s - - for i in range(2, 8): - reaction_pattern_dict[f'{i}-gram_pattern'] = self.get_ngram_pattern( - s, n=i) - df_reaction_pattern = pd.concat( - [df_reaction_pattern, pd.DataFrame(reaction_pattern_dict, index=[f'Story_{story_id}'])]) - # st.markdown(df_edit) - # 
st.markdown(s) - - # for c in ['c1r', 'c2r']: - # st.markdown(f'Sum of {c} : {self.df[c].sum()}') - df_show = self.df.copy() - for c in ['c1r', 'c2r']: - df_show[c] = df_show[c].fillna(0).astype(int) - st.markdown(f'Sum of {c} : {df_show[c].sum()}') - for story_id in list_stories2show: - dfs = df_show[(df_show.Story == story_id)].copy() - columns2hide = ['Unnamed: 0', 'Story', ] - if not self.debug: - columns2hide += ['e_review'] - if self.emotion_type == 'Max-only': - columns2hide += [ - f'Emotion_{sorted_i+1}' for sorted_i in range(7)] - columns2hide += [ - f'Score_{sorted_i+1}' for sorted_i in range(7)] - if not self.show_pe_data: - columns2hide += [ - "reaction_frequency", "probability_emote", "random_value", "reaction_show"] - for c in columns2hide: - dfs.drop(c, axis=1, inplace=True) - - st.markdown(f'#### Story {story_id}') - - dfs = dfs.style - if self.show_pe_data: - dfs = dfs.apply(self.dfstyle_color_text_col, axis=1) - # dfs = dfs.applymap(self.dfstyle_color_text) - dfs = dfs.apply(self.rower, axis=None) - dfs = dfs.set_table_styles([{ - 'selector': 'tr:hover', - 'props': 'color: #000000' # background-color: #eeee66;font-size: 1.01em; - }]) # .hide_index() - - if table_mode == 'Dataframe': - st.dataframe(dfs) - # set_na_rep(" ").s - # st.dataframe(df_reaction_pattern.iloc[story_id-1]) - elif table_mode == 'Table': - st.table(dfs) - # st.table(df_reaction_pattern.iloc[story_id-1]) - create_dowload_button( - dfs, sheet_name=f'story_{story_id}', file_name=f'data_story_{story_id}.xlsx') - # print(dfs.render()) - if table_mode == 'Dataframe': - st.dataframe(df_reaction_pattern) - elif table_mode == 'Table': - st.table(df_reaction_pattern) - # @st.cache - - def dfstyle_color_text_col(self, s): - num_col = len(s) - result = ['background-color: white']*len(s) - # if s.Emotion == 'neutral' and s.Turn == 'user': - # result[-6:-1] = ['color: #992222'] + \ - # ['color: #333333']+['color: #fcfcfc']*3 - for si, sc in enumerate(s): - if sc != sc: - result[si] = 'color: #fcfcfc' - # printj.red.bold_on_white(s) - # printj.red.bold_on_cyan(si) - # printj.red.bold_on_cyan(sc) - # if s.Score < self.score_threshold and s.Turn == 'user': - # result[-5:-1] = ['color: #992222'] + ['color: #fcfcfc']*3 - # printj.red(result) - # printj.red.bold_on_cyan(s) - # printj.red.bold_on_cyan(type(s)) - # printj.red.bold_on_white(s.keys().tolist()) - # printj.red.bold_on_white(type(s.keys().tolist())) - # idx_reaction_show = s.keys().tolist().index("reaction_show") - # printj.red.bold_on_white(idx_reaction_show) - # if s.reaction_show == 1: - # # result[idx_reaction_show] = 'color: #222222' - # pass - # elif s.reaction_show == 0: - # # result[idx_reaction_show] = 'color: #222222' - # pass - # else: - # # print(s.reaction_show) - # # print(type(s.reaction_show)) - # hide_length = 3 - # result[idx_reaction_show-hide_length:] = ['color: #fcfcfc']*(num_col-idx_reaction_show+hide_length) - # if s.probability_emote!=s.probability_emote: - # result[5] = 'color: #eeeeee' - return result - # @staticmethod - # @st.cache - # def dfstyle_color_text(val): - # if type(val)==str: - # color = 'red' if val =='neutral' else 'black' - # # elif type(val)==float: - # # color = 'red' if val > .50000 else 'black' - # elif val==None: - # color = '#ffffff' - # else: - # color = None - # return 'color: %s' % color if color is not None else '' - - @staticmethod - @st.cache - def rower(data): - s = data.index % 2 != 0 - s = pd.concat([pd.Series(s)] * data.shape[1], - axis=1) - z = pd.DataFrame(np.where(s, 'background-color:#f9f9f9', ''), - 
index=data.index, columns=data.columns) - return z - - def get_log(self): - df = pd.DataFrame(data=[], columns=[]) - log_dict = dict() - - with open(self.path) as f: - lines = f.readlines() - self.gen.initialise_classifier_model() - story_num = 0 - for i, line in enumerate(lines): - if line.startswith('H:'): - log_dict['Turn'] = 'haru' - elif line.startswith('U:'): - log_dict['Turn'] = 'user' - else: - story_num += 1 - continue - log_dict['Sentence'] = line[3:] - log_dict['Story'] = story_num - emotion_type = 'sorted' # 'max' - if self.emotion_type == 'max': - emotion_type = 'max' - else: - emotion_type = 'sorted' # - emotion = self.gen.get_emotion( - log_dict['Sentence'], filter_by=emotion_type) - if emotion_type == 'max': - log_dict['Emotion'] = emotion['label'] - log_dict['Score'] = emotion['score'] - elif emotion_type == 'sorted': - for sorted_i in range(len(emotion)): - log_dict[f'Emotion_{sorted_i+1}'] = emotion[sorted_i]['label'] - log_dict[f'Score_{sorted_i+1}'] = emotion[sorted_i]['score'] - log_dict['Emotion'] = emotion[0]['label'] - log_dict['Score'] = emotion[0]['score'] - log_dict['e_review'] = ' ' - df = pd.concat( - [df, pd.DataFrame(log_dict, index=[f'idx_{i}'])]) - df = df.reset_index(drop=True) - df.to_csv(self.df_path) - return df - - -def display_logs(gen, container_guide, container_param, container_button): - - la = LogAnalyser(gen, container_guide, container_param, container_button) - la.display_logs() - # df = la.update_df(la.df) - - -if __name__ == '__main__': - # df = LogAnalyser.get_log(path='data/ist_logs.txt') - # initialize data of lists. - # data = {'Name': ['Tom', 'nick', 'krish', 'jack'], - # 'Age': [20, 21, 19, 18]} - - # # Create DataFrame - # df = pd.DataFrame(data) - # print(df, type(df)) - os.system('./run.sh') diff --git a/spaces/jkompalli/plant_disease_detection/README.md b/spaces/jkompalli/plant_disease_detection/README.md deleted file mode 100644 index ff2a30df348a27237efab754a3bf02482d72e70d..0000000000000000000000000000000000000000 --- a/spaces/jkompalli/plant_disease_detection/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Plant Disease Detection -emoji: 🏃 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/jupyter/js/index.js b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/jupyter/js/index.js deleted file mode 100644 index 55f507127df964c03404401a97fa8c7f4cdf0805..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/altair/jupyter/js/index.js +++ /dev/null @@ -1,88 +0,0 @@ -import embed from "https://cdn.jsdelivr.net/npm/vega-embed@6/+esm"; -import debounce from "https://cdn.jsdelivr.net/npm/lodash-es@4.17.21/debounce/+esm"; - -export async function render({ model, el }) { - let finalize; - - function showError(error){ - el.innerHTML = ( - '
<div style="color:red;">' - + '<p>JavaScript Error: ' + error.message + '</p>' - + "<p>This usually means there's a typo in your chart specification. " - + "See the javascript console for the full traceback.</p>" - + '</div>
    ' - ); - } - - const reembed = async () => { - if (finalize != null) { - finalize(); - } - - let spec = model.get("spec"); - let api; - try { - api = await embed(el, spec); - } catch (error) { - showError(error) - return; - } - - finalize = api.finalize; - - // Debounce config - const wait = model.get("debounce_wait") ?? 10; - const maxWait = wait; - - const initialSelections = {}; - for (const selectionName of Object.keys(model.get("_vl_selections"))) { - const storeName = `${selectionName}_store`; - const selectionHandler = (_, value) => { - const newSelections = cleanJson(model.get("_vl_selections") ?? {}); - const store = cleanJson(api.view.data(storeName) ?? []); - - newSelections[selectionName] = {value, store}; - model.set("_vl_selections", newSelections); - model.save_changes(); - }; - api.view.addSignalListener(selectionName, debounce(selectionHandler, wait, {maxWait})); - - initialSelections[selectionName] = { - value: cleanJson(api.view.signal(selectionName) ?? {}), - store: cleanJson(api.view.data(storeName) ?? []) - } - } - model.set("_vl_selections", initialSelections); - - const initialParams = {}; - for (const paramName of Object.keys(model.get("_params"))) { - const paramHandler = (_, value) => { - const newParams = JSON.parse(JSON.stringify(model.get("_params"))) || {}; - newParams[paramName] = value; - model.set("_params", newParams); - model.save_changes(); - }; - api.view.addSignalListener(paramName, debounce(paramHandler, wait, {maxWait})); - - initialParams[paramName] = api.view.signal(paramName) ?? null - } - model.set("_params", initialParams); - model.save_changes(); - - // Param change callback - model.on('change:_params', async (new_params) => { - for (const [param, value] of Object.entries(new_params.changed._params)) { - api.view.signal(param, value); - } - await api.view.runAsync(); - }); - } - - model.on('change:spec', reembed); - model.on('change:debounce_wait', reembed); - await reembed(); -} - -function cleanJson(data) { - return JSON.parse(JSON.stringify(data)) -} \ No newline at end of file diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/X25.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/X25.py deleted file mode 100644 index 06c14534543664abcc73fbdeb8fbac7aff6e4aee..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/dns/rdtypes/ANY/X25.py +++ /dev/null @@ -1,58 +0,0 @@ -# Copyright (C) Dnspython Contributors, see LICENSE for text of ISC license - -# Copyright (C) 2003-2007, 2009-2011 Nominum, Inc. -# -# Permission to use, copy, modify, and distribute this software and its -# documentation for any purpose with or without fee is hereby granted, -# provided that the above copyright notice and this permission notice -# appear in all copies. -# -# THE SOFTWARE IS PROVIDED "AS IS" AND NOMINUM DISCLAIMS ALL WARRANTIES -# WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF -# MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL NOMINUM BE LIABLE FOR -# ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES -# WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN -# ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT -# OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. 
- -import struct - -import dns.exception -import dns.immutable -import dns.rdata -import dns.tokenizer - - -@dns.immutable.immutable -class X25(dns.rdata.Rdata): - - """X25 record""" - - # see RFC 1183 - - __slots__ = ["address"] - - def __init__(self, rdclass, rdtype, address): - super().__init__(rdclass, rdtype) - self.address = self._as_bytes(address, True, 255) - - def to_text(self, origin=None, relativize=True, **kw): - return '"%s"' % dns.rdata._escapify(self.address) - - @classmethod - def from_text( - cls, rdclass, rdtype, tok, origin=None, relativize=True, relativize_to=None - ): - address = tok.get_string() - return cls(rdclass, rdtype, address) - - def _to_wire(self, file, compress=None, origin=None, canonicalize=False): - l = len(self.address) - assert l < 256 - file.write(struct.pack("!B", l)) - file.write(self.address) - - @classmethod - def from_wire_parser(cls, rdclass, rdtype, parser, origin=None): - address = parser.get_counted_bytes() - return cls(rdclass, rdtype, address) diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/security/__init__.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/security/__init__.py deleted file mode 100644 index 3aa6bf21e44f3069adb94242fbba5c8160532a1c..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/fastapi/security/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -from .api_key import APIKeyCookie as APIKeyCookie -from .api_key import APIKeyHeader as APIKeyHeader -from .api_key import APIKeyQuery as APIKeyQuery -from .http import HTTPAuthorizationCredentials as HTTPAuthorizationCredentials -from .http import HTTPBasic as HTTPBasic -from .http import HTTPBasicCredentials as HTTPBasicCredentials -from .http import HTTPBearer as HTTPBearer -from .http import HTTPDigest as HTTPDigest -from .oauth2 import OAuth2 as OAuth2 -from .oauth2 import OAuth2AuthorizationCodeBearer as OAuth2AuthorizationCodeBearer -from .oauth2 import OAuth2PasswordBearer as OAuth2PasswordBearer -from .oauth2 import OAuth2PasswordRequestForm as OAuth2PasswordRequestForm -from .oauth2 import OAuth2PasswordRequestFormStrict as OAuth2PasswordRequestFormStrict -from .oauth2 import SecurityScopes as SecurityScopes -from .open_id_connect_url import OpenIdConnect as OpenIdConnect diff --git a/spaces/johnnyfivefingers/summarymachine/app.py b/spaces/johnnyfivefingers/summarymachine/app.py deleted file mode 100644 index 279f04785eb55033ae1ae0848e562c44fa467ef9..0000000000000000000000000000000000000000 --- a/spaces/johnnyfivefingers/summarymachine/app.py +++ /dev/null @@ -1,31 +0,0 @@ -import gradio as gr - -from transformers import pipeline -import csv - -model_id = "philschmid/bart-large-cnn-samsum" - -summarizer = pipeline("summarization", model=model_id) - -def summarize(text): - text = str(text) - if text == "showdata": - lines = "(lines)" - with open('input.csv',"r") as f: - lines = f.readlines() - return str(lines) - - - generated_summary_short = summarizer(text, max_length=40, min_length=10)[0]['summary_text'] - generated_summary = summarizer(text, max_length=80, min_length=20)[0]['summary_text'] - generated_summary_long = summarizer(text, max_length=200, min_length=40)[0]['summary_text'] - - fields = [str(text), str(generated_summary)] - with open('input.csv','a', newline='') as f: - writer = csv.writer(f) - writer.writerow(fields) - - return "Summary: " + str(generated_summary) + "\n\n" + "shorter: " + str(generated_summary_short)+ "\n\n" + 
"Longer: " + str(generated_summary_long) - -iface = gr.Interface(fn=summarize, inputs="text", outputs="text") -iface.launch() \ No newline at end of file diff --git a/spaces/johnowhitaker/twitter_viz/app.py b/spaces/johnowhitaker/twitter_viz/app.py deleted file mode 100644 index d90e4b3513a6f21477532667ca816d36d1da6d78..0000000000000000000000000000000000000000 --- a/spaces/johnowhitaker/twitter_viz/app.py +++ /dev/null @@ -1,109 +0,0 @@ -import streamlit as st -import pandas as pd -from matplotlib import pyplot as plt -import twint -import nest_asyncio -import multiprocessing.pool -import functools -from transformers import AutoModelForSequenceClassification -from transformers import TFAutoModelForSequenceClassification -from transformers import AutoTokenizer -import numpy as np -from scipy.special import softmax -import csv -import urllib.request -import IPython.display as ipd - -st.write('Loading...') - -# Preprocess text (username and link placeholders) -def preprocess(text): - new_text = [] - - - for t in text.split(" "): - t = '@user' if t.startswith('@') and len(t) > 1 else t - t = 'http' if t.startswith('http') else t - new_text.append(t) - return " ".join(new_text) - -# Loading pretrained model -MODEL = 'cardiffnlp/twitter-roberta-base-sentiment' -tokenizer = AutoTokenizer.from_pretrained(MODEL) -model = AutoModelForSequenceClassification.from_pretrained(MODEL) -model.save_pretrained(MODEL) -tokenizer.save_pretrained(MODEL) - -# Func to get a score using the above model -def combined_score(text): - text = preprocess(text) - encoded_input = tokenizer(text, return_tensors='pt') - output = model(**encoded_input) - scores = output[0][0].detach().numpy() - scores = softmax(scores) - return -scores[0] + scores[2] # scores = [negative, neutral, positive] - -# https://stackoverflow.com/questions/492519/timeout-on-a-function-call -def timeout(max_timeout): - """Timeout decorator, parameter in seconds.""" - def timeout_decorator(item): - """Wrap the original function.""" - @functools.wraps(item) - def func_wrapper(*args, **kwargs): - """Closure for function.""" - pool = multiprocessing.pool.ThreadPool(processes=1) - async_result = pool.apply_async(item, args, kwargs) - # raises a TimeoutError if execution exceeds max_timeout - return async_result.get(max_timeout) - return func_wrapper - return timeout_decorator - -# Getting tweets from a user -@timeout(120.0) -def get_tweets(username, limit=500, save_name=None): - #nest_asyncio.apply() # Helps avoid RuntimeError: This event loop is already running - - # Setup config - c = twint.Config() # Create a config object to store our settings - c.Limit = limit # Max number of tweets to fetch (increments of 20) - c.Username = username # User of interest - c.Pandas = True # Store tweets in a dataframe - c.Hide_output = True # Avoid printing out tweets - - # Run the seearch - twint.run.Search(c) - - # Get the results and optionally save to a file as well - df = twint.storage.panda.Tweets_df - if save_name != None: - df.to_csv(save_name) - return df - -title = st.title('Twitter Sentiment Map Thingee') - - -with st.form("my_form"): - st.write("Parameters:") - user = st.text_input("Twitter Username") - n_tweets = st.slider('How Many Tweets', 20, 2000, 20) - - # Every form must have a submit button. 
- submitted = st.form_submit_button("Submit") - -if submitted: - st.write("Fetching user", user, "n_tweets", n_tweets) - tweets = get_tweets(user, limit=n_tweets) - st.write("Resulting dataframe shape:", tweets.shape) - st.write("Calculating sentiments...") - tweets['sentiment'] = tweets['tweet'].map(lambda s: combined_score(s)) - tweets['tweet_length'] = tweets['tweet'].map(lambda s: len(s)) - st.write("Average sentiment:", tweets.sentiment.mean()) - fig, axs = plt.subplots(1, 2, figsize=(12, 6)) - axs[0].hexbin(tweets['tweet_length'], tweets['sentiment']*1, - gridsize=20, bins=12, cmap='inferno') - axs[0].set_title('Tweet Sentiment and Length') - axs[1].scatter(tweets['tweet_length'], tweets['sentiment']) - axs[1].set_title('Tweet Sentiment vs Length') - plt.setp(axs[:], xlabel='Tweet Length') - plt.setp(axs[:], ylabel='Sentiment') - st.pyplot(fig) \ No newline at end of file diff --git a/spaces/jrahn/yolochess/app.py b/spaces/jrahn/yolochess/app.py deleted file mode 100644 index b479b16ee49298d9fca3c4f3d9b91970b75f835b..0000000000000000000000000000000000000000 --- a/spaces/jrahn/yolochess/app.py +++ /dev/null @@ -1,85 +0,0 @@ -import os -import random -from datetime import datetime - -import gradio as gr -import chess -import chess.svg -from transformers import DebertaV2ForSequenceClassification, AutoTokenizer, pipeline - -token = os.environ['auth_token'] - -tokenizer = AutoTokenizer.from_pretrained('jrahn/chessv6', use_auth_token=token) -model = DebertaV2ForSequenceClassification.from_pretrained('jrahn/chessv6', use_auth_token=token) -pipe = pipeline(task="text-classification", model=model, tokenizer=tokenizer) - -def predict_move(fen, top_k=3): - preds = pipe(fen, top_k=top_k) - weights = [p['score'] for p in preds] - p = random.choices(preds, weights=weights)[0] - return p['label'] - -def btn_load(inp_fen): - print(f'** log - load - ts {datetime.now().isoformat()}, fen: {inp_fen}') - board = chess.Board() - - with open('board.svg', 'w') as f: - f.write(str(chess.svg.board(board))) - return 'board.svg', board.fen(), '' - -def btn_play(inp_fen, inp_move, inp_notation, inp_k): - print(f'** log - play - ts {datetime.now().isoformat()}, fen: {inp_fen}, move: {inp_move}, notation: {inp_notation}, top_k: {inp_k}') - board = chess.Board(inp_fen) - - if inp_move: - if inp_notation == 'UCI': mv = chess.Move.from_uci(inp_move) - elif inp_notation == 'SAN': mv = board.parse_san(inp_move) - else: - mv = chess.Move.from_uci(predict_move(board.fen(), top_k=inp_k)) - - if mv in board.legal_moves: - board.push(mv) - else: - raise ValueError(f'Illegal Move: {str(mv)} @ {board.fen()}') - - with open('board.svg', 'w') as f: - f.write(str(chess.svg.board(board, lastmove=mv))) - - return 'board.svg', board.fen(), '' - -with gr.Blocks() as block: - gr.Markdown( - ''' - # Play YoloChess - Policy Network v0.6 - 87M Parameter Transformer (DeBERTaV2-base architecture) - - pre-trained (MLM) from scratch on chess positions in FEN notation - - fine-tuned for text classification (moves) on expert games. - ''' - ) - with gr.Row() as row: - with gr.Column(): - with gr.Row(): - move = gr.Textbox(label='human player move') - notation = gr.Radio(["SAN", "UCI"], value="SAN", label='move notation') - fen = gr.Textbox(value=chess.Board().fen(), label='FEN') - top_k = gr.Number(value=3, label='sample from top_k moves', precision=0) - with gr.Row(): - load_btn = gr.Button("Load") - play_btn = gr.Button("Play") - gr.Markdown( - ''' - - Click "Load" button to start and reset board. - - Click "Play" button to get Engine move. 
- - Enter a "human player move" in UCI or SAN notation and click "Play" to move a piece. - - Output "ERROR" generally occurs on illegal moves (Human or Engine). - - Enter "FEN" to start from a custom position. - ''' - ) - with gr.Column(): - position_output = gr.Image(label='board') - - load_btn.click(fn=btn_load, inputs=fen, outputs=[position_output, fen, move]) - play_btn.click(fn=btn_play, inputs=[fen, move, notation, top_k], outputs=[position_output, fen, move]) - - -block.launch() \ No newline at end of file diff --git a/spaces/kadirnar/yolox/configs/yolox_s.py b/spaces/kadirnar/yolox/configs/yolox_s.py deleted file mode 100644 index abb6a8bbbe4fd1c6aff71596621aaeec2a6a15d8..0000000000000000000000000000000000000000 --- a/spaces/kadirnar/yolox/configs/yolox_s.py +++ /dev/null @@ -1,15 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding:utf-8 -*- -# Copyright (c) Megvii, Inc. and its affiliates. - -import os - -from yolox.exp import Exp as MyExp - - -class Exp(MyExp): - def __init__(self): - super(Exp, self).__init__() - self.depth = 0.33 - self.width = 0.50 - self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0] diff --git a/spaces/kazuk/youtube-whisper-15/README.md b/spaces/kazuk/youtube-whisper-15/README.md deleted file mode 100644 index d9cf66e8aa8fb54ece6dc20eba55710f4ef717d1..0000000000000000000000000000000000000000 --- a/spaces/kazuk/youtube-whisper-15/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Youtube Whisper -emoji: ⚡ -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: unknown -duplicated_from: kazuk/youtube-whisper-14 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/keras-io/Monocular-Depth-Estimation/utils.py b/spaces/keras-io/Monocular-Depth-Estimation/utils.py deleted file mode 100644 index 4d81180739b98050550f1ffc2f7a0fbd499456c6..0000000000000000000000000000000000000000 --- a/spaces/keras-io/Monocular-Depth-Estimation/utils.py +++ /dev/null @@ -1,23 +0,0 @@ -import numpy as np - - -def depth_norm(x, maxDepth): - return maxDepth / x - - -def predict(model, images, minDepth=10, maxDepth=1000, batch_size=2): - # Support multiple RGBs, one RGB image, even grayscale - if len(images.shape) < 3: images = np.stack((images, images, images), axis=2) - if len(images.shape) < 4: images = images.reshape((1, images.shape[0], images.shape[1], images.shape[2])) - # Compute predictions - predictions = model.predict(images, batch_size=batch_size) - # Put in expected range - return np.clip(depth_norm(predictions, maxDepth=maxDepth), minDepth, maxDepth) / maxDepth - - -def load_images(image_files): - loaded_images = [] - for file in image_files: - x = np.clip(file.reshape(480, 640, 3) / 255, 0, 1) - loaded_images.append(x) - return np.stack(loaded_images, axis=0) diff --git a/spaces/kevinwang676/Bark-Voice-Cloning/setup.py b/spaces/kevinwang676/Bark-Voice-Cloning/setup.py deleted file mode 100644 index 606849326a4002007fd42060b51e69a19c18675c..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/Bark-Voice-Cloning/setup.py +++ /dev/null @@ -1,3 +0,0 @@ -from setuptools import setup - -setup() diff --git a/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/partial_fc.py b/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/partial_fc.py deleted file mode 100644 index 17e2d25715d10ba446c957e1d2528b0687ed71d5..0000000000000000000000000000000000000000 --- 
a/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/partial_fc.py +++ /dev/null @@ -1,222 +0,0 @@ -import logging -import os - -import torch -import torch.distributed as dist -from torch.nn import Module -from torch.nn.functional import normalize, linear -from torch.nn.parameter import Parameter - - -class PartialFC(Module): - """ - Author: {Xiang An, Yang Xiao, XuHan Zhu} in DeepGlint, - Partial FC: Training 10 Million Identities on a Single Machine - See the original paper: - https://arxiv.org/abs/2010.05222 - """ - - @torch.no_grad() - def __init__(self, rank, local_rank, world_size, batch_size, resume, - margin_softmax, num_classes, sample_rate=1.0, embedding_size=512, prefix="./"): - """ - rank: int - Unique process(GPU) ID from 0 to world_size - 1. - local_rank: int - Unique process(GPU) ID within the server from 0 to 7. - world_size: int - Number of GPU. - batch_size: int - Batch size on current rank(GPU). - resume: bool - Select whether to restore the weight of softmax. - margin_softmax: callable - A function of margin softmax, eg: cosface, arcface. - num_classes: int - The number of class center storage in current rank(CPU/GPU), usually is total_classes // world_size, - required. - sample_rate: float - The partial fc sampling rate, when the number of classes increases to more than 2 millions, Sampling - can greatly speed up training, and reduce a lot of GPU memory, default is 1.0. - embedding_size: int - The feature dimension, default is 512. - prefix: str - Path for save checkpoint, default is './'. - """ - super(PartialFC, self).__init__() - # - self.num_classes: int = num_classes - self.rank: int = rank - self.local_rank: int = local_rank - self.device: torch.device = torch.device("cuda:{}".format(self.local_rank)) - self.world_size: int = world_size - self.batch_size: int = batch_size - self.margin_softmax: callable = margin_softmax - self.sample_rate: float = sample_rate - self.embedding_size: int = embedding_size - self.prefix: str = prefix - self.num_local: int = num_classes // world_size + int(rank < num_classes % world_size) - self.class_start: int = num_classes // world_size * rank + min(rank, num_classes % world_size) - self.num_sample: int = int(self.sample_rate * self.num_local) - - self.weight_name = os.path.join(self.prefix, "rank_{}_softmax_weight.pt".format(self.rank)) - self.weight_mom_name = os.path.join(self.prefix, "rank_{}_softmax_weight_mom.pt".format(self.rank)) - - if resume: - try: - self.weight: torch.Tensor = torch.load(self.weight_name) - self.weight_mom: torch.Tensor = torch.load(self.weight_mom_name) - if self.weight.shape[0] != self.num_local or self.weight_mom.shape[0] != self.num_local: - raise IndexError - logging.info("softmax weight resume successfully!") - logging.info("softmax weight mom resume successfully!") - except (FileNotFoundError, KeyError, IndexError): - self.weight = torch.normal(0, 0.01, (self.num_local, self.embedding_size), device=self.device) - self.weight_mom: torch.Tensor = torch.zeros_like(self.weight) - logging.info("softmax weight init!") - logging.info("softmax weight mom init!") - else: - self.weight = torch.normal(0, 0.01, (self.num_local, self.embedding_size), device=self.device) - self.weight_mom: torch.Tensor = torch.zeros_like(self.weight) - logging.info("softmax weight init successfully!") - logging.info("softmax weight mom init successfully!") - self.stream: torch.cuda.Stream = torch.cuda.Stream(local_rank) - - self.index = None - if int(self.sample_rate) == 1: - self.update = lambda: 0 - 
self.sub_weight = Parameter(self.weight) - self.sub_weight_mom = self.weight_mom - else: - self.sub_weight = Parameter(torch.empty((0, 0)).cuda(local_rank)) - - def save_params(self): - """ Save softmax weight for each rank on prefix - """ - torch.save(self.weight.data, self.weight_name) - torch.save(self.weight_mom, self.weight_mom_name) - - @torch.no_grad() - def sample(self, total_label): - """ - Sample all positive class centers in each rank, and random select neg class centers to filling a fixed - `num_sample`. - - total_label: tensor - Label after all gather, which cross all GPUs. - """ - index_positive = (self.class_start <= total_label) & (total_label < self.class_start + self.num_local) - total_label[~index_positive] = -1 - total_label[index_positive] -= self.class_start - if int(self.sample_rate) != 1: - positive = torch.unique(total_label[index_positive], sorted=True) - if self.num_sample - positive.size(0) >= 0: - perm = torch.rand(size=[self.num_local], device=self.device) - perm[positive] = 2.0 - index = torch.topk(perm, k=self.num_sample)[1] - index = index.sort()[0] - else: - index = positive - self.index = index - total_label[index_positive] = torch.searchsorted(index, total_label[index_positive]) - self.sub_weight = Parameter(self.weight[index]) - self.sub_weight_mom = self.weight_mom[index] - - def forward(self, total_features, norm_weight): - """ Partial fc forward, `logits = X * sample(W)` - """ - torch.cuda.current_stream().wait_stream(self.stream) - logits = linear(total_features, norm_weight) - return logits - - @torch.no_grad() - def update(self): - """ Set updated weight and weight_mom to memory bank. - """ - self.weight_mom[self.index] = self.sub_weight_mom - self.weight[self.index] = self.sub_weight - - def prepare(self, label, optimizer): - """ - get sampled class centers for cal softmax. - - label: tensor - Label tensor on each rank. - optimizer: opt - Optimizer for partial fc, which need to get weight mom. - """ - with torch.cuda.stream(self.stream): - total_label = torch.zeros( - size=[self.batch_size * self.world_size], device=self.device, dtype=torch.long) - dist.all_gather(list(total_label.chunk(self.world_size, dim=0)), label) - self.sample(total_label) - optimizer.state.pop(optimizer.param_groups[-1]['params'][0], None) - optimizer.param_groups[-1]['params'][0] = self.sub_weight - optimizer.state[self.sub_weight]['momentum_buffer'] = self.sub_weight_mom - norm_weight = normalize(self.sub_weight) - return total_label, norm_weight - - def forward_backward(self, label, features, optimizer): - """ - Partial fc forward and backward with model parallel - - label: tensor - Label tensor on each rank(GPU) - features: tensor - Features tensor on each rank(GPU) - optimizer: optimizer - Optimizer for partial fc - - Returns: - -------- - x_grad: tensor - The gradient of features. - loss_v: tensor - Loss value for cross entropy. 
- """ - total_label, norm_weight = self.prepare(label, optimizer) - total_features = torch.zeros( - size=[self.batch_size * self.world_size, self.embedding_size], device=self.device) - dist.all_gather(list(total_features.chunk(self.world_size, dim=0)), features.data) - total_features.requires_grad = True - - logits = self.forward(total_features, norm_weight) - logits = self.margin_softmax(logits, total_label) - - with torch.no_grad(): - max_fc = torch.max(logits, dim=1, keepdim=True)[0] - dist.all_reduce(max_fc, dist.ReduceOp.MAX) - - # calculate exp(logits) and all-reduce - logits_exp = torch.exp(logits - max_fc) - logits_sum_exp = logits_exp.sum(dim=1, keepdims=True) - dist.all_reduce(logits_sum_exp, dist.ReduceOp.SUM) - - # calculate prob - logits_exp.div_(logits_sum_exp) - - # get one-hot - grad = logits_exp - index = torch.where(total_label != -1)[0] - one_hot = torch.zeros(size=[index.size()[0], grad.size()[1]], device=grad.device) - one_hot.scatter_(1, total_label[index, None], 1) - - # calculate loss - loss = torch.zeros(grad.size()[0], 1, device=grad.device) - loss[index] = grad[index].gather(1, total_label[index, None]) - dist.all_reduce(loss, dist.ReduceOp.SUM) - loss_v = loss.clamp_min_(1e-30).log_().mean() * (-1) - - # calculate grad - grad[index] -= one_hot - grad.div_(self.batch_size * self.world_size) - - logits.backward(grad) - if total_features.grad is not None: - total_features.grad.detach_() - x_grad: torch.Tensor = torch.zeros_like(features, requires_grad=True) - # feature gradient all-reduce - dist.reduce_scatter(x_grad, list(total_features.grad.chunk(self.world_size, dim=0))) - x_grad = x_grad * self.world_size - # backward backbone - return x_grad, loss_v diff --git a/spaces/king007/biogpt-testing/utils.py b/spaces/king007/biogpt-testing/utils.py deleted file mode 100644 index aad209806d5459ea9dbd45b148e988061696350e..0000000000000000000000000000000000000000 --- a/spaces/king007/biogpt-testing/utils.py +++ /dev/null @@ -1,106 +0,0 @@ -from bs4 import BeautifulSoup -import requests - - -lang_ids = { - "Afrikaans": "af", - "Amharic": "am", - "Arabic": "ar", - "Asturian": "ast", - "Azerbaijani": "az", - "Bashkir": "ba", - "Belarusian": "be", - "Bulgarian": "bg", - "Bengali": "bn", - "Breton": "br", - "Bosnian": "bs", - "Catalan": "ca", - "Cebuano": "ceb", - "Czech": "cs", - "Welsh": "cy", - "Danish": "da", - "German": "de", - "Greeek": "el", - "English": "en", - "Spanish": "es", - "Estonian": "et", - "Persian": "fa", - "Fulah": "ff", - "Finnish": "fi", - "French": "fr", - "Western Frisian": "fy", - "Irish": "ga", - "Gaelic": "gd", - "Galician": "gl", - "Gujarati": "gu", - "Hausa": "ha", - "Hebrew": "he", - "Hindi": "hi", - "Croatian": "hr", - "Haitian": "ht", - "Hungarian": "hu", - "Armenian": "hy", - "Indonesian": "id", - "Igbo": "ig", - "Iloko": "ilo", - "Icelandic": "is", - "Italian": "it", - "Japanese": "ja", - "Javanese": "jv", - "Georgian": "ka", - "Kazakh": "kk", - "Central Khmer": "km", - "Kannada": "kn", - "Korean": "ko", - "Luxembourgish": "lb", - "Ganda": "lg", - "Lingala": "ln", - "Lao": "lo", - "Lithuanian": "lt", - "Latvian": "lv", - "Malagasy": "mg", - "Macedonian": "mk", - "Malayalam": "ml", - "Mongolian": "mn", - "Marathi": "mr", - "Malay": "ms", - "Burmese": "my", - "Nepali": "ne", - "Dutch": "nl", - "Norwegian": "no", - "Northern Sotho": "ns", - "Occitan": "oc", - "Oriya": "or", - "Panjabi": "pa", - "Polish": "pl", - "Pushto": "ps", - "Portuguese": "pt", - "Romanian": "ro", - "Russian": "ru", - "Sindhi": "sd", - "Sinhala": "si", - "Slovak": "sk", - 
"Slovenian": "sl", - "Somali": "so", - "Albanian": "sq", - "Serbian": "sr", - "Swati": "ss", - "Sundanese": "su", - "Swedish": "sv", - "Swahili": "sw", - "Tamil": "ta", - "Thai": "th", - "Tagalog": "tl", - "Tswana": "tn", - "Turkish": "tr", - "Ukrainian": "uk", - "Urdu": "ur", - "Uzbek": "uz", - "Vietnamese": "vi", - "Wolof": "wo", - "Xhosa": "xh", - "Yiddish": "yi", - "Yoruba": "yo", - "Chinese": "zh", - "Zulu": "zu", -} diff --git a/spaces/kkawamu1/huggingface_multi_inference_rank_eval/app/pages/02_evaluation.py b/spaces/kkawamu1/huggingface_multi_inference_rank_eval/app/pages/02_evaluation.py deleted file mode 100644 index bfdce47a45c43d165385eeb32903a8ff20a6c608..0000000000000000000000000000000000000000 --- a/spaces/kkawamu1/huggingface_multi_inference_rank_eval/app/pages/02_evaluation.py +++ /dev/null @@ -1,118 +0,0 @@ -# Copyright 2022 Ken Kawamura -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import streamlit as st -import requests -from typing import Dict -import os -import sys - -sys.path.append(os.path.join(os.path.dirname(__file__), "..", "..")) -from app.evaluation_scripts.run_eval import multi_inference_rank_eval - - -st.set_page_config(layout="wide") -st.markdown(f'
Submit your question here
    ', unsafe_allow_html=True) - - -st.sidebar.markdown("# Evaluation 🤔") -st.markdown( - '', unsafe_allow_html=True) -st.markdown( - '', unsafe_allow_html=True) - - - -INCLUDED_USERS = ['google', 'EleutherAI', - "bigscience", "facebook", "openai", "microsoft"] - -PIPELINE_TAG_TO_TASKS = { - 'text-generation': "CausalLM", 'text2text-generation': "Seq2SeqLM"} - - -@st.cache -def fetch_model_info_from_huggingface_api() -> Dict[str, Dict[str, str]]: - requests.get("https://huggingface.co") - response = requests.get("https://huggingface.co/api/models") - tags = response.json() - model_to_model_id = {} - model_to_pipeline_tag = {} - - for model in tags: - model_name = model['modelId'] - is_community_model = "/" in model_name - if is_community_model: - user = model_name.split("/")[0] - if user not in INCLUDED_USERS: - continue - if "pipeline_tag" in model and model["pipeline_tag"] in list(PIPELINE_TAG_TO_TASKS.keys()): - model_to_model_id[model['id']] = model['modelId'] - model_to_pipeline_tag[model['id'] - ] = PIPELINE_TAG_TO_TASKS[model["pipeline_tag"]] - return model_to_pipeline_tag - - -model_to_auto_class = fetch_model_info_from_huggingface_api() - -col1, col2 = st.columns([3, 2]) -user_input = {} -with col1: - st.header("Question") - user_input['context'] = st.text_input( - label='Write your question. You may explicity mention the answer choices in the prompt.', value='Huggingface is awesome. True or False?') - user_input['answer_choices_texts'] = st.text_input( - label='Add answer choices in text spearated by a comma and a space.', value='True, False') - user_input['answer_choices_texts'] = user_input['answer_choices_texts'].split( - ', ') - - -with col2: - st.header("Model Config") - user_input['model'] = st.selectbox( - "Which model?", list(model_to_auto_class.keys())) - user_input['auto_class'] = model_to_auto_class[user_input['model']] -col4, col5 = st.columns(2) -with col5: - #style taken from https://css-tricks.com/css-hover-effects-background-masks-3d/ - st.markdown("""""", unsafe_allow_html=True) - st.header("Submit task") - submit = st.button('Submit') - -with col4: - st.header("Result") - if submit: - with st.spinner('Wait for it...'): - prediction = multi_inference_rank_eval( - user_input['model'], user_input['auto_class'], user_input['answer_choices_texts'], user_input['context']) - # print(prediction) - st.markdown(f"### {user_input['answer_choices_texts'][prediction]}") diff --git a/spaces/kpyuy/chat/README.md b/spaces/kpyuy/chat/README.md deleted file mode 100644 index 8253353d5c5839d28b3ae85c14b7732367ca9e9f..0000000000000000000000000000000000000000 --- a/spaces/kpyuy/chat/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Chat -emoji: 📉 -colorFrom: purple -colorTo: pink -sdk: gradio -sdk_version: 3.28.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/kquote03/lama-video-watermark-remover/models/ade20k/segm_lib/utils/data/dataset.py b/spaces/kquote03/lama-video-watermark-remover/models/ade20k/segm_lib/utils/data/dataset.py deleted file mode 100644 index 605aa877f7031a5cd2b98c0f831410aa80fddefa..0000000000000000000000000000000000000000 --- a/spaces/kquote03/lama-video-watermark-remover/models/ade20k/segm_lib/utils/data/dataset.py +++ /dev/null @@ -1,118 +0,0 @@ -import bisect -import warnings - -from torch._utils import _accumulate -from torch import randperm - - -class Dataset(object): - """An abstract class representing a Dataset. 
- - All other datasets should subclass it. All subclasses should override - ``__len__``, that provides the size of the dataset, and ``__getitem__``, - supporting integer indexing in range from 0 to len(self) exclusive. - """ - - def __getitem__(self, index): - raise NotImplementedError - - def __len__(self): - raise NotImplementedError - - def __add__(self, other): - return ConcatDataset([self, other]) - - -class TensorDataset(Dataset): - """Dataset wrapping data and target tensors. - - Each sample will be retrieved by indexing both tensors along the first - dimension. - - Arguments: - data_tensor (Tensor): contains sample data. - target_tensor (Tensor): contains sample targets (labels). - """ - - def __init__(self, data_tensor, target_tensor): - assert data_tensor.size(0) == target_tensor.size(0) - self.data_tensor = data_tensor - self.target_tensor = target_tensor - - def __getitem__(self, index): - return self.data_tensor[index], self.target_tensor[index] - - def __len__(self): - return self.data_tensor.size(0) - - -class ConcatDataset(Dataset): - """ - Dataset to concatenate multiple datasets. - Purpose: useful to assemble different existing datasets, possibly - large-scale datasets as the concatenation operation is done in an - on-the-fly manner. - - Arguments: - datasets (iterable): List of datasets to be concatenated - """ - - @staticmethod - def cumsum(sequence): - r, s = [], 0 - for e in sequence: - l = len(e) - r.append(l + s) - s += l - return r - - def __init__(self, datasets): - super(ConcatDataset, self).__init__() - assert len(datasets) > 0, 'datasets should not be an empty iterable' - self.datasets = list(datasets) - self.cumulative_sizes = self.cumsum(self.datasets) - - def __len__(self): - return self.cumulative_sizes[-1] - - def __getitem__(self, idx): - dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx) - if dataset_idx == 0: - sample_idx = idx - else: - sample_idx = idx - self.cumulative_sizes[dataset_idx - 1] - return self.datasets[dataset_idx][sample_idx] - - @property - def cummulative_sizes(self): - warnings.warn("cummulative_sizes attribute is renamed to " - "cumulative_sizes", DeprecationWarning, stacklevel=2) - return self.cumulative_sizes - - -class Subset(Dataset): - def __init__(self, dataset, indices): - self.dataset = dataset - self.indices = indices - - def __getitem__(self, idx): - return self.dataset[self.indices[idx]] - - def __len__(self): - return len(self.indices) - - -def random_split(dataset, lengths): - """ - Randomly split a dataset into non-overlapping new datasets of given lengths - ds - - Arguments: - dataset (Dataset): Dataset to be split - lengths (iterable): lengths of splits to be produced - """ - if sum(lengths) != len(dataset): - raise ValueError("Sum of input lengths does not equal the length of the input dataset!") - - indices = randperm(sum(lengths)) - return [Subset(dataset, indices[offset - length:offset]) for offset, length in zip(_accumulate(lengths), lengths)] diff --git a/spaces/kukuhtw/AutoGPT/tests/unit/json_tests.py b/spaces/kukuhtw/AutoGPT/tests/unit/json_tests.py deleted file mode 100644 index 25c383377708359b5cfec28e0625343c5692f15c..0000000000000000000000000000000000000000 --- a/spaces/kukuhtw/AutoGPT/tests/unit/json_tests.py +++ /dev/null @@ -1,114 +0,0 @@ -import unittest - -from autogpt.json_utils.json_fix_llm import fix_and_parse_json - - -class TestParseJson(unittest.TestCase): - def test_valid_json(self): - # Test that a valid JSON string is parsed correctly - json_str = '{"name": "John", "age": 30, 
"city": "New York"}' - obj = fix_and_parse_json(json_str) - self.assertEqual(obj, {"name": "John", "age": 30, "city": "New York"}) - - def test_invalid_json_minor(self): - # Test that an invalid JSON string can be fixed with gpt - json_str = '{"name": "John", "age": 30, "city": "New York",}' - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=False), - {"name": "John", "age": 30, "city": "New York"}, - ) - - def test_invalid_json_major_with_gpt(self): - # Test that an invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END' - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=True), - {"name": "John", "age": 30, "city": "New York"}, - ) - - def test_invalid_json_major_without_gpt(self): - # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = 'BEGIN: "name": "John" - "age": 30 - "city": "New York" :END' - # Assert that this raises an exception: - with self.assertRaises(Exception): - fix_and_parse_json(json_str, try_to_fix_with_gpt=False) - - def test_invalid_json_leading_sentence_with_gpt(self): - # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = """I suggest we start by browsing the repository to find any issues that we can fix. - -{ - "command": { - "name": "browse_website", - "args":{ - "url": "https://github.com/Torantulino/Auto-GPT" - } - }, - "thoughts": - { - "text": "I suggest we start browsing the repository to find any issues that we can fix.", - "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.", - "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes", - "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.", - "speak": "I will start browsing the repository to find any issues we can fix." - } -}""" - good_obj = { - "command": { - "name": "browse_website", - "args": {"url": "https://github.com/Torantulino/Auto-GPT"}, - }, - "thoughts": { - "text": "I suggest we start browsing the repository to find any issues that we can fix.", - "reasoning": "Browsing the repository will give us an idea of the current state of the codebase and identify any issues that we can address to improve the repo.", - "plan": "- Look through the repository to find any issues.\n- Investigate any issues to determine what needs to be fixed\n- Identify possible solutions to fix the issues\n- Open Pull Requests with fixes", - "criticism": "I should be careful while browsing so as not to accidentally introduce any new bugs or issues.", - "speak": "I will start browsing the repository to find any issues we can fix.", - }, - } - # Assert that this raises an exception: - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj - ) - - def test_invalid_json_leading_sentence_with_gpt(self): - # Test that a REALLY invalid JSON string raises an error when try_to_fix_with_gpt is False - json_str = """I will first need to browse the repository (https://github.com/Torantulino/Auto-GPT) and identify any potential bugs that need fixing. I will use the "browse_website" command for this. 
- -{ - "command": { - "name": "browse_website", - "args":{ - "url": "https://github.com/Torantulino/Auto-GPT" - } - }, - "thoughts": - { - "text": "Browsing the repository to identify potential bugs", - "reasoning": "Before fixing bugs, I need to identify what needs fixing. I will use the 'browse_website' command to analyze the repository.", - "plan": "- Analyze the repository for potential bugs and areas of improvement", - "criticism": "I need to ensure I am thorough and pay attention to detail while browsing the repository.", - "speak": "I am browsing the repository to identify potential bugs." - } -}""" - good_obj = { - "command": { - "name": "browse_website", - "args": {"url": "https://github.com/Torantulino/Auto-GPT"}, - }, - "thoughts": { - "text": "Browsing the repository to identify potential bugs", - "reasoning": "Before fixing bugs, I need to identify what needs fixing. I will use the 'browse_website' command to analyze the repository.", - "plan": "- Analyze the repository for potential bugs and areas of improvement", - "criticism": "I need to ensure I am thorough and pay attention to detail while browsing the repository.", - "speak": "I am browsing the repository to identify potential bugs.", - }, - } - # Assert that this raises an exception: - self.assertEqual( - fix_and_parse_json(json_str, try_to_fix_with_gpt=False), good_obj - ) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiohttp/web_ws.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiohttp/web_ws.py deleted file mode 100644 index 0d32a218b52b87ec04f36a6f95bfb303984b2e43..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/aiohttp/web_ws.py +++ /dev/null @@ -1,487 +0,0 @@ -import asyncio -import base64 -import binascii -import hashlib -import json -from typing import Any, Iterable, Optional, Tuple, cast - -import async_timeout -import attr -from multidict import CIMultiDict - -from . 
import hdrs -from .abc import AbstractStreamWriter -from .helpers import call_later, set_result -from .http import ( - WS_CLOSED_MESSAGE, - WS_CLOSING_MESSAGE, - WS_KEY, - WebSocketError, - WebSocketReader, - WebSocketWriter, - WSCloseCode, - WSMessage, - WSMsgType as WSMsgType, - ws_ext_gen, - ws_ext_parse, -) -from .log import ws_logger -from .streams import EofStream, FlowControlDataQueue -from .typedefs import Final, JSONDecoder, JSONEncoder -from .web_exceptions import HTTPBadRequest, HTTPException -from .web_request import BaseRequest -from .web_response import StreamResponse - -__all__ = ( - "WebSocketResponse", - "WebSocketReady", - "WSMsgType", -) - -THRESHOLD_CONNLOST_ACCESS: Final[int] = 5 - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class WebSocketReady: - ok: bool - protocol: Optional[str] - - def __bool__(self) -> bool: - return self.ok - - -class WebSocketResponse(StreamResponse): - - _length_check = False - - def __init__( - self, - *, - timeout: float = 10.0, - receive_timeout: Optional[float] = None, - autoclose: bool = True, - autoping: bool = True, - heartbeat: Optional[float] = None, - protocols: Iterable[str] = (), - compress: bool = True, - max_msg_size: int = 4 * 1024 * 1024, - ) -> None: - super().__init__(status=101) - self._protocols = protocols - self._ws_protocol: Optional[str] = None - self._writer: Optional[WebSocketWriter] = None - self._reader: Optional[FlowControlDataQueue[WSMessage]] = None - self._closed = False - self._closing = False - self._conn_lost = 0 - self._close_code: Optional[int] = None - self._loop: Optional[asyncio.AbstractEventLoop] = None - self._waiting: Optional[asyncio.Future[bool]] = None - self._exception: Optional[BaseException] = None - self._timeout = timeout - self._receive_timeout = receive_timeout - self._autoclose = autoclose - self._autoping = autoping - self._heartbeat = heartbeat - self._heartbeat_cb: Optional[asyncio.TimerHandle] = None - if heartbeat is not None: - self._pong_heartbeat = heartbeat / 2.0 - self._pong_response_cb: Optional[asyncio.TimerHandle] = None - self._compress = compress - self._max_msg_size = max_msg_size - - def _cancel_heartbeat(self) -> None: - if self._pong_response_cb is not None: - self._pong_response_cb.cancel() - self._pong_response_cb = None - - if self._heartbeat_cb is not None: - self._heartbeat_cb.cancel() - self._heartbeat_cb = None - - def _reset_heartbeat(self) -> None: - self._cancel_heartbeat() - - if self._heartbeat is not None: - assert self._loop is not None - self._heartbeat_cb = call_later( - self._send_heartbeat, self._heartbeat, self._loop - ) - - def _send_heartbeat(self) -> None: - if self._heartbeat is not None and not self._closed: - assert self._loop is not None - # fire-and-forget a task is not perfect but maybe ok for - # sending ping. Otherwise we need a long-living heartbeat - # task in the class. 
- self._loop.create_task(self._writer.ping()) # type: ignore[union-attr] - - if self._pong_response_cb is not None: - self._pong_response_cb.cancel() - self._pong_response_cb = call_later( - self._pong_not_received, self._pong_heartbeat, self._loop - ) - - def _pong_not_received(self) -> None: - if self._req is not None and self._req.transport is not None: - self._closed = True - self._close_code = WSCloseCode.ABNORMAL_CLOSURE - self._exception = asyncio.TimeoutError() - self._req.transport.close() - - async def prepare(self, request: BaseRequest) -> AbstractStreamWriter: - # make pre-check to don't hide it by do_handshake() exceptions - if self._payload_writer is not None: - return self._payload_writer - - protocol, writer = self._pre_start(request) - payload_writer = await super().prepare(request) - assert payload_writer is not None - self._post_start(request, protocol, writer) - await payload_writer.drain() - return payload_writer - - def _handshake( - self, request: BaseRequest - ) -> Tuple["CIMultiDict[str]", str, bool, bool]: - headers = request.headers - if "websocket" != headers.get(hdrs.UPGRADE, "").lower().strip(): - raise HTTPBadRequest( - text=( - "No WebSocket UPGRADE hdr: {}\n Can " - '"Upgrade" only to "WebSocket".' - ).format(headers.get(hdrs.UPGRADE)) - ) - - if "upgrade" not in headers.get(hdrs.CONNECTION, "").lower(): - raise HTTPBadRequest( - text="No CONNECTION upgrade hdr: {}".format( - headers.get(hdrs.CONNECTION) - ) - ) - - # find common sub-protocol between client and server - protocol = None - if hdrs.SEC_WEBSOCKET_PROTOCOL in headers: - req_protocols = [ - str(proto.strip()) - for proto in headers[hdrs.SEC_WEBSOCKET_PROTOCOL].split(",") - ] - - for proto in req_protocols: - if proto in self._protocols: - protocol = proto - break - else: - # No overlap found: Return no protocol as per spec - ws_logger.warning( - "Client protocols %r don’t overlap server-known ones %r", - req_protocols, - self._protocols, - ) - - # check supported version - version = headers.get(hdrs.SEC_WEBSOCKET_VERSION, "") - if version not in ("13", "8", "7"): - raise HTTPBadRequest(text=f"Unsupported version: {version}") - - # check client handshake for validity - key = headers.get(hdrs.SEC_WEBSOCKET_KEY) - try: - if not key or len(base64.b64decode(key)) != 16: - raise HTTPBadRequest(text=f"Handshake error: {key!r}") - except binascii.Error: - raise HTTPBadRequest(text=f"Handshake error: {key!r}") from None - - accept_val = base64.b64encode( - hashlib.sha1(key.encode() + WS_KEY).digest() - ).decode() - response_headers = CIMultiDict( - { - hdrs.UPGRADE: "websocket", - hdrs.CONNECTION: "upgrade", - hdrs.SEC_WEBSOCKET_ACCEPT: accept_val, - } - ) - - notakeover = False - compress = 0 - if self._compress: - extensions = headers.get(hdrs.SEC_WEBSOCKET_EXTENSIONS) - # Server side always get return with no exception. 
- # If something happened, just drop compress extension - compress, notakeover = ws_ext_parse(extensions, isserver=True) - if compress: - enabledext = ws_ext_gen( - compress=compress, isserver=True, server_notakeover=notakeover - ) - response_headers[hdrs.SEC_WEBSOCKET_EXTENSIONS] = enabledext - - if protocol: - response_headers[hdrs.SEC_WEBSOCKET_PROTOCOL] = protocol - return ( - response_headers, - protocol, - compress, - notakeover, - ) # type: ignore[return-value] - - def _pre_start(self, request: BaseRequest) -> Tuple[str, WebSocketWriter]: - self._loop = request._loop - - headers, protocol, compress, notakeover = self._handshake(request) - - self.set_status(101) - self.headers.update(headers) - self.force_close() - self._compress = compress - transport = request._protocol.transport - assert transport is not None - writer = WebSocketWriter( - request._protocol, transport, compress=compress, notakeover=notakeover - ) - - return protocol, writer - - def _post_start( - self, request: BaseRequest, protocol: str, writer: WebSocketWriter - ) -> None: - self._ws_protocol = protocol - self._writer = writer - - self._reset_heartbeat() - - loop = self._loop - assert loop is not None - self._reader = FlowControlDataQueue(request._protocol, 2**16, loop=loop) - request.protocol.set_parser( - WebSocketReader(self._reader, self._max_msg_size, compress=self._compress) - ) - # disable HTTP keepalive for WebSocket - request.protocol.keep_alive(False) - - def can_prepare(self, request: BaseRequest) -> WebSocketReady: - if self._writer is not None: - raise RuntimeError("Already started") - try: - _, protocol, _, _ = self._handshake(request) - except HTTPException: - return WebSocketReady(False, None) - else: - return WebSocketReady(True, protocol) - - @property - def closed(self) -> bool: - return self._closed - - @property - def close_code(self) -> Optional[int]: - return self._close_code - - @property - def ws_protocol(self) -> Optional[str]: - return self._ws_protocol - - @property - def compress(self) -> bool: - return self._compress - - def exception(self) -> Optional[BaseException]: - return self._exception - - async def ping(self, message: bytes = b"") -> None: - if self._writer is None: - raise RuntimeError("Call .prepare() first") - await self._writer.ping(message) - - async def pong(self, message: bytes = b"") -> None: - # unsolicited pong - if self._writer is None: - raise RuntimeError("Call .prepare() first") - await self._writer.pong(message) - - async def send_str(self, data: str, compress: Optional[bool] = None) -> None: - if self._writer is None: - raise RuntimeError("Call .prepare() first") - if not isinstance(data, str): - raise TypeError("data argument must be str (%r)" % type(data)) - await self._writer.send(data, binary=False, compress=compress) - - async def send_bytes(self, data: bytes, compress: Optional[bool] = None) -> None: - if self._writer is None: - raise RuntimeError("Call .prepare() first") - if not isinstance(data, (bytes, bytearray, memoryview)): - raise TypeError("data argument must be byte-ish (%r)" % type(data)) - await self._writer.send(data, binary=True, compress=compress) - - async def send_json( - self, - data: Any, - compress: Optional[bool] = None, - *, - dumps: JSONEncoder = json.dumps, - ) -> None: - await self.send_str(dumps(data), compress=compress) - - async def write_eof(self) -> None: # type: ignore[override] - if self._eof_sent: - return - if self._payload_writer is None: - raise RuntimeError("Response has not been started") - - await self.close() - 
self._eof_sent = True - - async def close(self, *, code: int = WSCloseCode.OK, message: bytes = b"") -> bool: - if self._writer is None: - raise RuntimeError("Call .prepare() first") - - self._cancel_heartbeat() - reader = self._reader - assert reader is not None - - # we need to break `receive()` cycle first, - # `close()` may be called from different task - if self._waiting is not None and not self._closed: - reader.feed_data(WS_CLOSING_MESSAGE, 0) - await self._waiting - - if not self._closed: - self._closed = True - try: - await self._writer.close(code, message) - writer = self._payload_writer - assert writer is not None - await writer.drain() - except (asyncio.CancelledError, asyncio.TimeoutError): - self._close_code = WSCloseCode.ABNORMAL_CLOSURE - raise - except Exception as exc: - self._close_code = WSCloseCode.ABNORMAL_CLOSURE - self._exception = exc - return True - - if self._closing: - return True - - reader = self._reader - assert reader is not None - try: - async with async_timeout.timeout(self._timeout): - msg = await reader.read() - except asyncio.CancelledError: - self._close_code = WSCloseCode.ABNORMAL_CLOSURE - raise - except Exception as exc: - self._close_code = WSCloseCode.ABNORMAL_CLOSURE - self._exception = exc - return True - - if msg.type == WSMsgType.CLOSE: - self._close_code = msg.data - return True - - self._close_code = WSCloseCode.ABNORMAL_CLOSURE - self._exception = asyncio.TimeoutError() - return True - else: - return False - - async def receive(self, timeout: Optional[float] = None) -> WSMessage: - if self._reader is None: - raise RuntimeError("Call .prepare() first") - - loop = self._loop - assert loop is not None - while True: - if self._waiting is not None: - raise RuntimeError("Concurrent call to receive() is not allowed") - - if self._closed: - self._conn_lost += 1 - if self._conn_lost >= THRESHOLD_CONNLOST_ACCESS: - raise RuntimeError("WebSocket connection is closed.") - return WS_CLOSED_MESSAGE - elif self._closing: - return WS_CLOSING_MESSAGE - - try: - self._waiting = loop.create_future() - try: - async with async_timeout.timeout(timeout or self._receive_timeout): - msg = await self._reader.read() - self._reset_heartbeat() - finally: - waiter = self._waiting - set_result(waiter, True) - self._waiting = None - except (asyncio.CancelledError, asyncio.TimeoutError): - self._close_code = WSCloseCode.ABNORMAL_CLOSURE - raise - except EofStream: - self._close_code = WSCloseCode.OK - await self.close() - return WSMessage(WSMsgType.CLOSED, None, None) - except WebSocketError as exc: - self._close_code = exc.code - await self.close(code=exc.code) - return WSMessage(WSMsgType.ERROR, exc, None) - except Exception as exc: - self._exception = exc - self._closing = True - self._close_code = WSCloseCode.ABNORMAL_CLOSURE - await self.close() - return WSMessage(WSMsgType.ERROR, exc, None) - - if msg.type == WSMsgType.CLOSE: - self._closing = True - self._close_code = msg.data - if not self._closed and self._autoclose: - await self.close() - elif msg.type == WSMsgType.CLOSING: - self._closing = True - elif msg.type == WSMsgType.PING and self._autoping: - await self.pong(msg.data) - continue - elif msg.type == WSMsgType.PONG and self._autoping: - continue - - return msg - - async def receive_str(self, *, timeout: Optional[float] = None) -> str: - msg = await self.receive(timeout) - if msg.type != WSMsgType.TEXT: - raise TypeError( - "Received message {}:{!r} is not WSMsgType.TEXT".format( - msg.type, msg.data - ) - ) - return cast(str, msg.data) - - async def 
receive_bytes(self, *, timeout: Optional[float] = None) -> bytes: - msg = await self.receive(timeout) - if msg.type != WSMsgType.BINARY: - raise TypeError(f"Received message {msg.type}:{msg.data!r} is not bytes") - return cast(bytes, msg.data) - - async def receive_json( - self, *, loads: JSONDecoder = json.loads, timeout: Optional[float] = None - ) -> Any: - data = await self.receive_str(timeout=timeout) - return loads(data) - - async def write(self, data: bytes) -> None: - raise RuntimeError("Cannot call .write() for websocket") - - def __aiter__(self) -> "WebSocketResponse": - return self - - async def __anext__(self) -> WSMessage: - msg = await self.receive() - if msg.type in (WSMsgType.CLOSE, WSMsgType.CLOSING, WSMsgType.CLOSED): - raise StopAsyncIteration - return msg - - def _cancel(self, exc: BaseException) -> None: - if self._reader is not None: - self._reader.set_exception(exc) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-404b53af.js b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-404b53af.js deleted file mode 100644 index 7638d050f12b91623a7743fdab4b3438ee4809e7..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-404b53af.js +++ /dev/null @@ -1,6 +0,0 @@ -import{S as T,i as E,s as H,B as D,C as k,g as m,E as g,F as h,q as d,G as y,H as J,M as I,l as N,t as b,o as S,p,I as v,K as B,f as A,N as z,J as L,e as w,m as O,n as $,a5 as U,aa as W,am as X,ac as x,r as ee,x as te,$ as le,h as ne,j as se}from"./index-8c3da1d9.js";import{C as ie,a as oe}from"./Copy-fd383441.js";/* empty css */import{B as re}from"./Button-62634b34.js";import{E as fe}from"./Empty-5d52e655.js";import{B as ae}from"./BlockLabel-98ef75ee.js";import"./Blocks-6ad6f005.js";function ce(a){let e,t;return{c(){e=D("svg"),t=D("path"),k(t,"fill","currentColor"),k(t,"d","M5 3h2v2H5v5a2 2 0 0 1-2 2a2 2 0 0 1 2 2v5h2v2H5c-1.07-.27-2-.9-2-2v-4a2 2 0 0 0-2-2H0v-2h1a2 2 0 0 0 2-2V5a2 2 0 0 1 2-2m14 0a2 2 0 0 1 2 2v4a2 2 0 0 0 2 2h1v2h-1a2 2 0 0 0-2 2v4a2 2 0 0 1-2 2h-2v-2h2v-5a2 2 0 0 1 2-2a2 2 0 0 1-2-2V5h-2V3h2m-7 12a1 1 0 0 1 1 1a1 1 0 0 1-1 1a1 1 0 0 1-1-1a1 1 0 0 1 1-1m-4 0a1 1 0 0 1 1 1a1 1 0 0 1-1 1a1 1 0 0 1-1-1a1 1 0 0 1 1-1m8 0a1 1 0 0 1 1 1a1 1 0 0 1-1 1a1 1 0 0 1-1-1a1 1 0 0 1 1-1Z"),k(e,"xmlns","http://www.w3.org/2000/svg"),k(e,"xmlns:xlink","http://www.w3.org/1999/xlink"),k(e,"aria-hidden","true"),k(e,"role","img"),k(e,"class","iconify iconify--mdi"),k(e,"width","100%"),k(e,"height","100%"),k(e,"preserveAspectRatio","xMidYMid meet"),k(e,"viewBox","0 0 24 24")},m(l,s){m(l,e,s),g(e,t)},p:h,i:h,o:h,d(l){l&&d(e)}}}let Q=class extends T{constructor(e){super(),E(this,e,null,ce,H,{})}};function F(a,e,t){const l=a.slice();return l[5]=e[t],l[7]=t,l}function G(a,e,t){const l=a.slice();return l[5]=e[t],l[7]=t,l}function ue(a){let e,t;return{c(){e=y("div"),t=v(a[1]),k(e,"class","json-item svelte-1kspdo")},m(l,s){m(l,e,s),g(e,t)},p(l,s){s&2&&B(t,l[1])},i:h,o:h,d(l){l&&d(e)}}}function _e(a){let e,t;return{c(){e=y("div"),t=v(a[1]),k(e,"class","json-item number svelte-1kspdo")},m(l,s){m(l,e,s),g(e,t)},p(l,s){s&2&&B(t,l[1])},i:h,o:h,d(l){l&&d(e)}}}function me(a){let e,t=a[1].toLocaleString()+"",l;return{c(){e=y("div"),l=v(t),k(e,"class","json-item bool svelte-1kspdo")},m(s,f){m(s,e,f),g(e,l)},p(s,f){f&2&&t!==(t=s[1].toLocaleString()+"")&&B(l,t)},i:h,o:h,d(s){s&&d(e)}}}function de(a){let 
e,t,l,s;return{c(){e=y("div"),t=v('"'),l=v(a[1]),s=v('"'),k(e,"class","json-item string svelte-1kspdo")},m(f,o){m(f,e,o),g(e,t),g(e,l),g(e,s)},p(f,o){o&2&&B(l,f[1])},i:h,o:h,d(f){f&&d(e)}}}function pe(a){let e;return{c(){e=y("div"),e.textContent="null",k(e,"class","json-item null svelte-1kspdo")},m(t,l){m(t,e,l)},p:h,i:h,o:h,d(t){t&&d(e)}}}function be(a){let e,t,l,s;const f=[ge,ve],o=[];function c(n,i){return n[0]?0:1}return e=c(a),t=o[e]=f[e](a),{c(){t.c(),l=A()},m(n,i){o[e].m(n,i),m(n,l,i),s=!0},p(n,i){let r=e;e=c(n),e===r?o[e].p(n,i):(N(),b(o[r],1,1,()=>{o[r]=null}),S(),t=o[e],t?t.p(n,i):(t=o[e]=f[e](n),t.c()),p(t,1),t.m(l.parentNode,l))},i(n){s||(p(t),s=!0)},o(n){b(t),s=!1},d(n){o[e].d(n),n&&d(l)}}}function ke(a){let e,t,l,s;const f=[ye,he],o=[];function c(n,i){return n[0]?0:1}return e=c(a),t=o[e]=f[e](a),{c(){t.c(),l=A()},m(n,i){o[e].m(n,i),m(n,l,i),s=!0},p(n,i){let r=e;e=c(n),e===r?o[e].p(n,i):(N(),b(o[r],1,1,()=>{o[r]=null}),S(),t=o[e],t?t.p(n,i):(t=o[e]=f[e](n),t.c()),p(t,1),t.m(l.parentNode,l))},i(n){s||(p(t),s=!0)},o(n){b(t),s=!1},d(n){o[e].d(n),n&&d(l)}}}function ve(a){let e,t,l,s,f=Object.entries(a[1]),o=[];for(let n=0;nb(o[n],1,1,()=>{o[n]=null});return{c(){e=v(`{ - `),t=y("div");for(let n=0;nb(o[n],1,1,()=>{o[n]=null});return{c(){e=v(`[ - `),t=y("div");for(let n=0;n{n[j]=null}),S(),f=n[s],f?f.p(r,u):(f=n[s]=c[s](r),f.c()),p(f,1),f.m(l,null))},i(r){o||(p(f),o=!0)},o(r){b(f),o=!1},d(r){r&&d(e),r&&d(t),r&&d(l),n[s].d()}}}function we(a,e,t){let{value:l}=e,{depth:s}=e,{collapsed:f=s>4}=e;const o=()=>{t(0,f=!1)},c=()=>{t(0,f=!1)};return a.$$set=n=>{"value"in n&&t(1,l=n.value),"depth"in n&&t(2,s=n.depth),"collapsed"in n&&t(0,f=n.collapsed)},[f,l,s,o,c]}class V extends T{constructor(e){super(),E(this,e,we,je,H,{value:1,depth:2,collapsed:0})}}function Oe(a){let e,t;return e=new fe({props:{$$slots:{default:[Ne]},$$scope:{ctx:a}}}),{c(){w(e.$$.fragment)},m(l,s){O(e,l,s),t=!0},p(l,s){const f={};s&32&&(f.$$scope={dirty:s,ctx:l}),e.$set(f)},i(l){t||(p(e.$$.fragment,l),t=!0)},o(l){b(e.$$.fragment,l),t=!1},d(l){$(e,l)}}}function $e(a){let e,t,l,s,f,o,c,n,i;const r=[Je,Se],u=[];function j(_,C){return _[1]?0:1}return t=j(a),l=u[t]=r[t](a),o=new V({props:{value:a[0],depth:0}}),{c(){e=y("button"),l.c(),s=J(),f=y("div"),w(o.$$.fragment),k(e,"class","svelte-1trjy9a"),k(f,"class","json-holder svelte-1trjy9a")},m(_,C){m(_,e,C),u[t].m(e,null),m(_,s,C),m(_,f,C),O(o,f,null),c=!0,n||(i=L(e,"click",a[2]),n=!0)},p(_,C){let M=t;t=j(_),t!==M&&(N(),b(u[M],1,1,()=>{u[M]=null}),S(),l=u[t],l||(l=u[t]=r[t](_),l.c()),p(l,1),l.m(e,null));const q={};C&1&&(q.value=_[0]),o.$set(q)},i(_){c||(p(l),p(o.$$.fragment,_),c=!0)},o(_){b(l),b(o.$$.fragment,_),c=!1},d(_){_&&d(e),u[t].d(),_&&d(s),_&&d(f),$(o),n=!1,i()}}}function Ne(a){let e,t;return e=new Q({}),{c(){w(e.$$.fragment)},m(l,s){O(e,l,s),t=!0},i(l){t||(p(e.$$.fragment,l),t=!0)},o(l){b(e.$$.fragment,l),t=!1},d(l){$(e,l)}}}function Se(a){let e,t,l;return t=new ie({}),{c(){e=y("span"),w(t.$$.fragment),k(e,"class","copy-text")},m(s,f){m(s,e,f),O(t,e,null),l=!0},i(s){l||(p(t.$$.fragment,s),l=!0)},o(s){b(t.$$.fragment,s),l=!1},d(s){s&&d(e),$(t)}}}function Je(a){let e,t,l,s;return t=new oe({}),{c(){e=y("span"),w(t.$$.fragment)},m(f,o){m(f,e,o),O(t,e,null),s=!0},i(f){s||(p(t.$$.fragment,f),l||W(()=>{l=X(e,x,{duration:300}),l.start()}),s=!0)},o(f){b(t.$$.fragment,f),s=!1},d(f){f&&d(e),$(t)}}}function Be(a){let e,t,l,s,f;const o=[$e,Oe],c=[];function n(i,r){return r&1&&(e=null),e==null&&(e=!!(i[0]&&i[0]!=='""'&&!Ce(i[0]))),e?0:1}return 
t=n(a,-1),l=c[t]=o[t](a),{c(){l.c(),s=A()},m(i,r){c[t].m(i,r),m(i,s,r),f=!0},p(i,[r]){let u=t;t=n(i,r),t===u?c[t].p(i,r):(N(),b(c[u],1,1,()=>{c[u]=null}),S(),l=c[t],l?l.p(i,r):(l=c[t]=o[t](i),l.c()),p(l,1),l.m(s.parentNode,s))},i(i){f||(p(l),f=!0)},o(i){b(l),f=!1},d(i){c[t].d(i),i&&d(s)}}}function Ce(a){return a&&Object.keys(a).length===0&&Object.getPrototypeOf(a)===Object.prototype}function Te(a,e,t){let{value:l={}}=e,s=!1,f;function o(){t(1,s=!0),f&&clearTimeout(f),f=setTimeout(()=>{t(1,s=!1)},1e3)}async function c(){"clipboard"in navigator&&(await navigator.clipboard.writeText(JSON.stringify(l,null,2)),o())}return U(()=>{f&&clearTimeout(f)}),a.$$set=n=>{"value"in n&&t(0,l=n.value)},[l,s,c]}class Ee extends T{constructor(e){super(),E(this,e,Te,Be,H,{value:0})}}function Z(a){let e,t;return e=new ae({props:{Icon:Q,show_label:a[6],label:a[5],float:!1,disable:typeof a[7].container=="boolean"&&!a[7].container}}),{c(){w(e.$$.fragment)},m(l,s){O(e,l,s),t=!0},p(l,s){const f={};s&64&&(f.show_label=l[6]),s&32&&(f.label=l[5]),s&128&&(f.disable=typeof l[7].container=="boolean"&&!l[7].container),e.$set(f)},i(l){t||(p(e.$$.fragment,l),t=!0)},o(l){b(e.$$.fragment,l),t=!1},d(l){$(e,l)}}}function He(a){let e,t,l,s,f,o=a[5]&&Z(a);const c=[a[4]];let n={};for(let i=0;i{o=null}),S());const u=r&16?ne(c,[se(i[4])]):{};t.$set(u);const j={};r&8&&(j.value=i[3]),s.$set(j)},i(i){f||(p(o),p(t.$$.fragment,i),p(s.$$.fragment,i),f=!0)},o(i){b(o),b(t.$$.fragment,i),b(s.$$.fragment,i),f=!1},d(i){o&&o.d(i),i&&d(e),$(t,i),i&&d(l),$(s,i)}}}function Me(a){let e,t;return e=new re({props:{visible:a[2],test_id:"json",elem_id:a[0],elem_classes:a[1],disable:typeof a[7].container=="boolean"&&!a[7].container,padding:!1,$$slots:{default:[He]},$$scope:{ctx:a}}}),{c(){w(e.$$.fragment)},m(l,s){O(e,l,s),t=!0},p(l,[s]){const f={};s&4&&(f.visible=l[2]),s&1&&(f.elem_id=l[0]),s&2&&(f.elem_classes=l[1]),s&128&&(f.disable=typeof l[7].container=="boolean"&&!l[7].container),s&1272&&(f.$$scope={dirty:s,ctx:l}),e.$set(f)},i(l){t||(p(e.$$.fragment,l),t=!0)},o(l){b(e.$$.fragment,l),t=!1},d(l){$(e,l)}}}function Ae(a,e,t){let{elem_id:l=""}=e,{elem_classes:s=[]}=e,{visible:f=!0}=e,{value:o}=e,c,{loading_status:n}=e,{label:i}=e,{show_label:r}=e,{style:u={}}=e;const j=ee();return a.$$set=_=>{"elem_id"in _&&t(0,l=_.elem_id),"elem_classes"in _&&t(1,s=_.elem_classes),"visible"in _&&t(2,f=_.visible),"value"in _&&t(3,o=_.value),"loading_status"in _&&t(4,n=_.loading_status),"label"in _&&t(5,i=_.label),"show_label"in _&&t(6,r=_.show_label),"style"in _&&t(7,u=_.style)},a.$$.update=()=>{a.$$.dirty&264&&o!==c&&(t(8,c=o),j("change"))},[l,s,f,o,n,i,r,u,c]}class Le extends T{constructor(e){super(),E(this,e,Ae,Me,H,{elem_id:0,elem_classes:1,visible:2,value:3,loading_status:4,label:5,show_label:6,style:7})}}const Re=Le,Ye=["static"],Ze=a=>({type:{payload:"Object | Array"},description:{payload:"JSON object"}});export{Re as Component,Ze as document,Ye as modes}; -//# sourceMappingURL=index-404b53af.js.map diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-6cb48b60.js b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-6cb48b60.js deleted file mode 100644 index adfafb85c88355244c95079de14b2c4ce96343e2..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-6cb48b60.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as k,i as L,s as j,G as w,C as o,M as 
c,g,F as T,q as d,r as C,e as h,m as v,p as b,t as H,n as M,x as S,$ as q,H as B,h as z,j as D,y as E}from"./index-8c3da1d9.js";import{B as F}from"./Button-62634b34.js";function G(t){let e,n;return{c(){e=w("div"),o(e,"class",n="prose "+t[1].join(" ")+" svelte-1ybaih5"),o(e,"id",t[0]),c(e,"min",t[4]),c(e,"hide",!t[3])},m(s,i){g(s,e,i),e.innerHTML=t[2]},p(s,[i]){i&4&&(e.innerHTML=s[2]),i&2&&n!==(n="prose "+s[1].join(" ")+" svelte-1ybaih5")&&o(e,"class",n),i&1&&o(e,"id",s[0]),i&18&&c(e,"min",s[4]),i&10&&c(e,"hide",!s[3])},i:T,o:T,d(s){s&&d(e)}}}function A(t,e,n){let{elem_id:s=""}=e,{elem_classes:i=[]}=e,{value:m}=e,{visible:u=!0}=e,{min_height:f=!1}=e;const l=C();return t.$$set=a=>{"elem_id"in a&&n(0,s=a.elem_id),"elem_classes"in a&&n(1,i=a.elem_classes),"value"in a&&n(2,m=a.value),"visible"in a&&n(3,u=a.visible),"min_height"in a&&n(4,f=a.min_height)},t.$$.update=()=>{t.$$.dirty&4&&l("change")},[s,i,m,u,f]}class I extends k{constructor(e){super(),L(this,e,A,G,j,{elem_id:0,elem_classes:1,value:2,visible:3,min_height:4})}}function J(t){let e,n,s,i,m;const u=[t[4],{variant:"center"}];let f={};for(let l=0;l{"label"in _&&n(5,s=_.label),"elem_id"in _&&n(0,i=_.elem_id),"elem_classes"in _&&n(1,m=_.elem_classes),"visible"in _&&n(2,u=_.visible),"value"in _&&n(3,f=_.value),"loading_status"in _&&n(4,l=_.loading_status)},t.$$.update=()=>{t.$$.dirty&32&&a("change")},[i,m,u,f,l,s,r]}class O extends k{constructor(e){super(),L(this,e,N,K,j,{label:5,elem_id:0,elem_classes:1,visible:2,value:3,loading_status:4})}}const R=O,U=["static"],V=t=>({type:{payload:"string"},description:{payload:"HTML output"}});export{R as Component,V as document,U as modes}; -//# sourceMappingURL=index-6cb48b60.js.map diff --git a/spaces/latent-consistency/Real-Time-LCM-Text-to-Image-Lora-SD1.5/Dockerfile b/spaces/latent-consistency/Real-Time-LCM-Text-to-Image-Lora-SD1.5/Dockerfile deleted file mode 100644 index 68934f644316addfa0335f8b48cfd9c9df72a580..0000000000000000000000000000000000000000 --- a/spaces/latent-consistency/Real-Time-LCM-Text-to-Image-Lora-SD1.5/Dockerfile +++ /dev/null @@ -1,44 +0,0 @@ -FROM nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04 - -ARG DEBIAN_FRONTEND=noninteractive - -ENV PYTHONUNBUFFERED=1 - -RUN apt-get update && apt-get install --no-install-recommends -y \ - build-essential \ - python3.9 \ - python3-pip \ - python3-dev \ - git \ - ffmpeg \ - google-perftools \ - && apt-get clean && rm -rf /var/lib/apt/lists/* - - -WORKDIR /code - -COPY ./requirements.txt /code/requirements.txt - -# Set up a new user named "user" with user ID 1000 -RUN useradd -m -u 1000 user -# Switch to the "user" user -USER user -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH \ - PYTHONPATH=$HOME/app \ - PYTHONUNBUFFERED=1 \ - SYSTEM=spaces - -RUN pip3 install --no-cache-dir --upgrade -r /code/requirements.txt - -# Set the working directory to the user's home directory -WORKDIR $HOME/app - -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -COPY --chown=user . 
$HOME/app - -ENV LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libtcmalloc.so.4 -# CMD ["uvicorn", "app-img2img:app", "--host", "0.0.0.0", "--port", "7860"] -# CMD ["uvicorn", "app-txt2img:app", "--host", "0.0.0.0", "--port", "7860"] -CMD ["uvicorn", "app-txt2imglora:app", "--host", "0.0.0.0", "--port", "7860"] \ No newline at end of file diff --git a/spaces/lazyboy450/RVCv2-Genshin/README.md b/spaces/lazyboy450/RVCv2-Genshin/README.md deleted file mode 100644 index 9e27813c38f98ab6a24144e5406cca73bcd800a0..0000000000000000000000000000000000000000 --- a/spaces/lazyboy450/RVCv2-Genshin/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: RVC V2 Genshin Impact -emoji: 🎤 -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -pinned: true -license: mit -duplicated_from: mocci24/rvc-genshin-v2 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/lfolle/DeepNAPSI/DummyModel.py b/spaces/lfolle/DeepNAPSI/DummyModel.py deleted file mode 100644 index 05b01d6d18c833183da1e1d068c90e6d1938ad86..0000000000000000000000000000000000000000 --- a/spaces/lfolle/DeepNAPSI/DummyModel.py +++ /dev/null @@ -1,22 +0,0 @@ -import torch -import torch.nn - - -def load_dummy_model(DEBUG): - model = DummyModel() - if not DEBUG: - file_path = hf_hub_download("lfolle/DeepNAPSIModel", "dummy_model.pth", - use_auth_token=os.environ['DeepNAPSIModel']) - model.load_state_dict(torch.load(file_path)) - return model - - -class DummyModel(torch.nn.Module): - def __init__(self): - super().__init__() - - def forward(self, x:list): - return torch.softmax(torch.rand(len(x), 5), 1), 0 - - def __call__(self, x:list): - return self.forward(x) diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Easy Poster Printer 4.0.1.0.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Easy Poster Printer 4.0.1.0.md deleted file mode 100644 index f0323ca63312c5e646c74c8d4ddd42bc8fc17cb7..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Easy Poster Printer 4.0.1.0.md +++ /dev/null @@ -1,6 +0,0 @@ -
    -

Even with today's digital tools, printed materials are often easier to share than digital documents. To enable paper-based sharing of creative ideas, we developed the Easy Poster Printer, a small DIY printer that puts text and designs onto digital-grade poster paper by screen printing. With commercially available screen-printing materials, it prints simple text in several fonts, including predefined poster fonts, on premium glossy poster paper in various dimensions. The easy-to-use application and easy-to-print materials let users create posters that reflect their own visual styles. We believe the Easy Poster Printer will encourage people to experiment with paper because of its affordability, ease of use, and flexibility.

    -

    Easy Poster Printer 4.0.1.0


DOWNLOAD https://bytlly.com/2uGxFu



    -

With advances in additive manufacturing, the range of objects and tools that can be fabricated by 3D printing is growing rapidly. With this rise in 3D printer use, there is a need for accessible interfaces to the hardware and to the applications that drive it. In this paper, we present the development of a 3D-printed physical widget with an embedded web browser. Using WebGL, a 3D printer can project any 3D model into a physical space. Once the physical space is constructed, we use a simple interaction technique for manipulating the 3D-printed widget with an iPhone. We demonstrate how the interaction method can be embedded into software applications using a gesture recognition API. Our work showcases the potential of 3D-printed widgets in augmentation and rehabilitation domains.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Kerry On Kutton Movie Hindi Dubbed Download 720p Movie.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Kerry On Kutton Movie Hindi Dubbed Download 720p Movie.md deleted file mode 100644 index 096521dee8a4e89639257043292f437dcb230c3b..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Kerry On Kutton Movie Hindi Dubbed Download 720p Movie.md +++ /dev/null @@ -1,14 +0,0 @@ - -

    Kerry On Kutton: A Crime Drama Set in a Small Town

    -

    If you are looking for a movie that explores the dark side of adolescence, you might want to check out Kerry On Kutton, a 2016 Hindi film directed by Ashok Yadav. The movie follows four teenagers who live in Baliya, a town known for its rebellious and violent history. The four friends, Kerry, Kadambari, Suraj and Jyoti, have their own struggles and aspirations, but they all end up getting involved in criminal activities that change their lives forever.

    -

    Kerry On Kutton is not a typical Bollywood movie. It does not have any songs or dances, and it does not shy away from showing the harsh realities of the rural India. The movie has been praised for its realistic portrayal of the characters and their dilemmas, as well as for its bold and gritty cinematography. The movie also features some impressive performances by the young actors, especially Satyajeet Dubey as Kerry and Aditya Kumar as Suraj.

    -

    Kerry On Kutton Movie Hindi Dubbed Download 720p Movie


Download Zip https://bytlly.com/2uGwR3



    -

    If you are interested in watching Kerry On Kutton, you can download it in 720p quality from various online platforms. However, we advise you to watch it legally and support the makers of this movie. You can also stream it on Hungama.com[^1^], where you can also find other movies and shows to watch.

    -

    Kerry On Kutton is a movie that will make you think and feel. It is a movie that will show you a different side of India and its youth. It is a movie that will stay with you long after it ends.

    - -

    Kerry On Kutton is not a movie for the faint-hearted. It has some scenes that are violent, disturbing and graphic. The movie does not glorify or justify the actions of the characters, but rather shows them as flawed and misguided human beings. The movie also does not offer any easy solutions or happy endings, but leaves the viewers to draw their own conclusions.

    -

    The movie has been compared to some of the classics of the crime genre, such as Gangs of Wasseypur, Gulaal and Satya. The movie has also been appreciated for its originality and freshness, as it does not follow the usual tropes and cliches of Bollywood movies. The movie has been hailed as a brave and bold attempt to showcase a different kind of cinema in India.

    -

    -

    Kerry On Kutton is a movie that deserves to be watched by anyone who loves cinema and who is not afraid to explore the darker aspects of human nature. It is a movie that will challenge you, shock you and move you. It is a movie that will make you question your own morals and values. It is a movie that will make you realize that life is not black and white, but shades of grey.

    d5da3c52bf
    -
    -
    \ No newline at end of file diff --git a/spaces/lithiumice/SadTalker/src/facerender/modules/discriminator.py b/spaces/lithiumice/SadTalker/src/facerender/modules/discriminator.py deleted file mode 100644 index d4459b07cb075c9f9d345f9b3dffc02cd859313b..0000000000000000000000000000000000000000 --- a/spaces/lithiumice/SadTalker/src/facerender/modules/discriminator.py +++ /dev/null @@ -1,90 +0,0 @@ -from torch import nn -import torch.nn.functional as F -from facerender.modules.util import kp2gaussian -import torch - - -class DownBlock2d(nn.Module): - """ - Simple block for processing video (encoder). - """ - - def __init__(self, in_features, out_features, norm=False, kernel_size=4, pool=False, sn=False): - super(DownBlock2d, self).__init__() - self.conv = nn.Conv2d(in_channels=in_features, out_channels=out_features, kernel_size=kernel_size) - - if sn: - self.conv = nn.utils.spectral_norm(self.conv) - - if norm: - self.norm = nn.InstanceNorm2d(out_features, affine=True) - else: - self.norm = None - self.pool = pool - - def forward(self, x): - out = x - out = self.conv(out) - if self.norm: - out = self.norm(out) - out = F.leaky_relu(out, 0.2) - if self.pool: - out = F.avg_pool2d(out, (2, 2)) - return out - - -class Discriminator(nn.Module): - """ - Discriminator similar to Pix2Pix - """ - - def __init__(self, num_channels=3, block_expansion=64, num_blocks=4, max_features=512, - sn=False, **kwargs): - super(Discriminator, self).__init__() - - down_blocks = [] - for i in range(num_blocks): - down_blocks.append( - DownBlock2d(num_channels if i == 0 else min(max_features, block_expansion * (2 ** i)), - min(max_features, block_expansion * (2 ** (i + 1))), - norm=(i != 0), kernel_size=4, pool=(i != num_blocks - 1), sn=sn)) - - self.down_blocks = nn.ModuleList(down_blocks) - self.conv = nn.Conv2d(self.down_blocks[-1].conv.out_channels, out_channels=1, kernel_size=1) - if sn: - self.conv = nn.utils.spectral_norm(self.conv) - - def forward(self, x): - feature_maps = [] - out = x - - for down_block in self.down_blocks: - feature_maps.append(down_block(out)) - out = feature_maps[-1] - prediction_map = self.conv(out) - - return feature_maps, prediction_map - - -class MultiScaleDiscriminator(nn.Module): - """ - Multi-scale (scale) discriminator - """ - - def __init__(self, scales=(), **kwargs): - super(MultiScaleDiscriminator, self).__init__() - self.scales = scales - discs = {} - for scale in scales: - discs[str(scale).replace('.', '-')] = Discriminator(**kwargs) - self.discs = nn.ModuleDict(discs) - - def forward(self, x): - out_dict = {} - for scale, disc in self.discs.items(): - scale = str(scale).replace('-', '.') - key = 'prediction_' + scale - feature_maps, prediction_map = disc(x[key]) - out_dict['feature_maps_' + scale] = feature_maps - out_dict['prediction_map_' + scale] = prediction_map - return out_dict diff --git a/spaces/lj1995/trump/modules.py b/spaces/lj1995/trump/modules.py deleted file mode 100644 index 289f4e3bdc7e1c783766b4c20bdf4475e65c932b..0000000000000000000000000000000000000000 --- a/spaces/lj1995/trump/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class 
LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = 
torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - 
super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = 
tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/lkeab/transfiner/configs/common/models/mask_rcnn_c4.py b/spaces/lkeab/transfiner/configs/common/models/mask_rcnn_c4.py deleted file mode 100644 index a3dcf8be42a39c6e5f6e76e3ab23adeccb33085d..0000000000000000000000000000000000000000 --- a/spaces/lkeab/transfiner/configs/common/models/mask_rcnn_c4.py +++ /dev/null @@ -1,88 +0,0 @@ -from detectron2.config import LazyCall as L -from detectron2.layers import ShapeSpec -from detectron2.modeling.meta_arch import GeneralizedRCNN -from detectron2.modeling.anchor_generator import DefaultAnchorGenerator -from detectron2.modeling.backbone import BasicStem, BottleneckBlock, ResNet -from detectron2.modeling.box_regression import Box2BoxTransform -from detectron2.modeling.matcher import Matcher -from detectron2.modeling.poolers import ROIPooler -from detectron2.modeling.proposal_generator import RPN, StandardRPNHead -from detectron2.modeling.roi_heads import ( - FastRCNNOutputLayers, - MaskRCNNConvUpsampleHead, - Res5ROIHeads, -) - -model = L(GeneralizedRCNN)( - backbone=L(ResNet)( - stem=L(BasicStem)(in_channels=3, out_channels=64, norm="FrozenBN"), - stages=L(ResNet.make_default_stages)( - depth=50, - stride_in_1x1=True, - norm="FrozenBN", - ), - out_features=["res4"], - ), - proposal_generator=L(RPN)( - in_features=["res4"], - head=L(StandardRPNHead)(in_channels=1024, num_anchors=15), - anchor_generator=L(DefaultAnchorGenerator)( - sizes=[[32, 64, 128, 256, 512]], - aspect_ratios=[0.5, 1.0, 2.0], - strides=[16], - offset=0.0, - ), - anchor_matcher=L(Matcher)( - thresholds=[0.3, 0.7], labels=[0, -1, 1], allow_low_quality_matches=True - ), - box2box_transform=L(Box2BoxTransform)(weights=[1.0, 1.0, 1.0, 1.0]), - batch_size_per_image=256, - positive_fraction=0.5, - pre_nms_topk=(12000, 6000), - post_nms_topk=(2000, 1000), - nms_thresh=0.7, - ), - roi_heads=L(Res5ROIHeads)( - num_classes=80, - batch_size_per_image=512, - positive_fraction=0.25, - proposal_matcher=L(Matcher)( - thresholds=[0.5], labels=[0, 1], allow_low_quality_matches=False - ), - in_features=["res4"], - pooler=L(ROIPooler)( - output_size=14, - scales=(1.0 / 16,), - sampling_ratio=0, - pooler_type="ROIAlignV2", - ), - res5=L(ResNet.make_stage)( - block_class=BottleneckBlock, - num_blocks=3, - stride_per_block=[2, 1, 1], - in_channels=1024, - 
bottleneck_channels=512, - out_channels=2048, - norm="FrozenBN", - stride_in_1x1=True, - ), - box_predictor=L(FastRCNNOutputLayers)( - input_shape=L(ShapeSpec)(channels="${...res5.out_channels}", height=1, width=1), - test_score_thresh=0.05, - box2box_transform=L(Box2BoxTransform)(weights=(10, 10, 5, 5)), - num_classes="${..num_classes}", - ), - mask_head=L(MaskRCNNConvUpsampleHead)( - input_shape=L(ShapeSpec)( - channels="${...res5.out_channels}", - width="${...pooler.output_size}", - height="${...pooler.output_size}", - ), - num_classes="${..num_classes}", - conv_dims=[256], - ), - ), - pixel_mean=[103.530, 116.280, 123.675], - pixel_std=[1.0, 1.0, 1.0], - input_format="BGR", -) diff --git a/spaces/luckwill/chiakicc/text/korean.py b/spaces/luckwill/chiakicc/text/korean.py deleted file mode 100644 index edee07429a450c55e3d8e246997faaa1e0b89cc9..0000000000000000000000000000000000000000 --- a/spaces/luckwill/chiakicc/text/korean.py +++ /dev/null @@ -1,210 +0,0 @@ -import re -from jamo import h2j, j2hcj -import ko_pron - - -# This is a list of Korean classifiers preceded by pure Korean numerals. -_korean_classifiers = '군데 권 개 그루 닢 대 두 마리 모 모금 뭇 발 발짝 방 번 벌 보루 살 수 술 시 쌈 움큼 정 짝 채 척 첩 축 켤레 톨 통' - -# List of (hangul, hangul divided) pairs: -_hangul_divided = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('ㄳ', 'ㄱㅅ'), - ('ㄵ', 'ㄴㅈ'), - ('ㄶ', 'ㄴㅎ'), - ('ㄺ', 'ㄹㄱ'), - ('ㄻ', 'ㄹㅁ'), - ('ㄼ', 'ㄹㅂ'), - ('ㄽ', 'ㄹㅅ'), - ('ㄾ', 'ㄹㅌ'), - ('ㄿ', 'ㄹㅍ'), - ('ㅀ', 'ㄹㅎ'), - ('ㅄ', 'ㅂㅅ'), - ('ㅘ', 'ㅗㅏ'), - ('ㅙ', 'ㅗㅐ'), - ('ㅚ', 'ㅗㅣ'), - ('ㅝ', 'ㅜㅓ'), - ('ㅞ', 'ㅜㅔ'), - ('ㅟ', 'ㅜㅣ'), - ('ㅢ', 'ㅡㅣ'), - ('ㅑ', 'ㅣㅏ'), - ('ㅒ', 'ㅣㅐ'), - ('ㅕ', 'ㅣㅓ'), - ('ㅖ', 'ㅣㅔ'), - ('ㅛ', 'ㅣㅗ'), - ('ㅠ', 'ㅣㅜ') -]] - -# List of (Latin alphabet, hangul) pairs: -_latin_to_hangul = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('a', '에이'), - ('b', '비'), - ('c', '시'), - ('d', '디'), - ('e', '이'), - ('f', '에프'), - ('g', '지'), - ('h', '에이치'), - ('i', '아이'), - ('j', '제이'), - ('k', '케이'), - ('l', '엘'), - ('m', '엠'), - ('n', '엔'), - ('o', '오'), - ('p', '피'), - ('q', '큐'), - ('r', '아르'), - ('s', '에스'), - ('t', '티'), - ('u', '유'), - ('v', '브이'), - ('w', '더블유'), - ('x', '엑스'), - ('y', '와이'), - ('z', '제트') -]] - -# List of (ipa, lazy ipa) pairs: -_ipa_to_lazy_ipa = [(re.compile('%s' % x[0], re.IGNORECASE), x[1]) for x in [ - ('t͡ɕ','ʧ'), - ('d͡ʑ','ʥ'), - ('ɲ','n^'), - ('ɕ','ʃ'), - ('ʷ','w'), - ('ɭ','l`'), - ('ʎ','ɾ'), - ('ɣ','ŋ'), - ('ɰ','ɯ'), - ('ʝ','j'), - ('ʌ','ə'), - ('ɡ','g'), - ('\u031a','#'), - ('\u0348','='), - ('\u031e',''), - ('\u0320',''), - ('\u0339','') -]] - - -def latin_to_hangul(text): - for regex, replacement in _latin_to_hangul: - text = re.sub(regex, replacement, text) - return text - - -def divide_hangul(text): - text = j2hcj(h2j(text)) - for regex, replacement in _hangul_divided: - text = re.sub(regex, replacement, text) - return text - - -def hangul_number(num, sino=True): - '''Reference https://github.com/Kyubyong/g2pK''' - num = re.sub(',', '', num) - - if num == '0': - return '영' - if not sino and num == '20': - return '스무' - - digits = '123456789' - names = '일이삼사오육칠팔구' - digit2name = {d: n for d, n in zip(digits, names)} - - modifiers = '한 두 세 네 다섯 여섯 일곱 여덟 아홉' - decimals = '열 스물 서른 마흔 쉰 예순 일흔 여든 아흔' - digit2mod = {d: mod for d, mod in zip(digits, modifiers.split())} - digit2dec = {d: dec for d, dec in zip(digits, decimals.split())} - - spelledout = [] - for i, digit in enumerate(num): - i = len(num) - i - 1 - if sino: - if i == 0: - name = digit2name.get(digit, '') - elif i == 1: - name = digit2name.get(digit, '') + '십' - name 
= name.replace('일십', '십') - else: - if i == 0: - name = digit2mod.get(digit, '') - elif i == 1: - name = digit2dec.get(digit, '') - if digit == '0': - if i % 4 == 0: - last_three = spelledout[-min(3, len(spelledout)):] - if ''.join(last_three) == '': - spelledout.append('') - continue - else: - spelledout.append('') - continue - if i == 2: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 3: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 4: - name = digit2name.get(digit, '') + '만' - name = name.replace('일만', '만') - elif i == 5: - name = digit2name.get(digit, '') + '십' - name = name.replace('일십', '십') - elif i == 6: - name = digit2name.get(digit, '') + '백' - name = name.replace('일백', '백') - elif i == 7: - name = digit2name.get(digit, '') + '천' - name = name.replace('일천', '천') - elif i == 8: - name = digit2name.get(digit, '') + '억' - elif i == 9: - name = digit2name.get(digit, '') + '십' - elif i == 10: - name = digit2name.get(digit, '') + '백' - elif i == 11: - name = digit2name.get(digit, '') + '천' - elif i == 12: - name = digit2name.get(digit, '') + '조' - elif i == 13: - name = digit2name.get(digit, '') + '십' - elif i == 14: - name = digit2name.get(digit, '') + '백' - elif i == 15: - name = digit2name.get(digit, '') + '천' - spelledout.append(name) - return ''.join(elem for elem in spelledout) - - -def number_to_hangul(text): - '''Reference https://github.com/Kyubyong/g2pK''' - tokens = set(re.findall(r'(\d[\d,]*)([\uac00-\ud71f]+)', text)) - for token in tokens: - num, classifier = token - if classifier[:2] in _korean_classifiers or classifier[0] in _korean_classifiers: - spelledout = hangul_number(num, sino=False) - else: - spelledout = hangul_number(num, sino=True) - text = text.replace(f'{num}{classifier}', f'{spelledout}{classifier}') - # digit by digit for remaining digits - digits = '0123456789' - names = '영일이삼사오육칠팔구' - for d, n in zip(digits, names): - text = text.replace(d, n) - return text - - -def korean_to_lazy_ipa(text): - text = latin_to_hangul(text) - text = number_to_hangul(text) - text=re.sub('[\uac00-\ud7af]+',lambda x:ko_pron.romanise(x.group(0),'ipa').split('] ~ [')[0],text) - for regex, replacement in _ipa_to_lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def korean_to_ipa(text): - text = korean_to_lazy_ipa(text) - return text.replace('ʧ','tʃ').replace('ʥ','dʑ') diff --git a/spaces/m3hrdadfi/zabanshenas/libs/examples.py b/spaces/m3hrdadfi/zabanshenas/libs/examples.py deleted file mode 100644 index 3018d4c2a8cbaabce02de0279fa6c83b13ce5ea8..0000000000000000000000000000000000000000 --- a/spaces/m3hrdadfi/zabanshenas/libs/examples.py +++ /dev/null @@ -1,6 +0,0 @@ -EXAMPLES = { - 'Example 1 - Swedish': 'Glochidion gaudichaudii är en emblikaväxtart som först beskrevs av Johannes Müller Argoviensis , och fick sitt nu gällande namn av Jacob Gijsbert Boerlage . Glochidion gaudichaudii ingår i släktet Glochidion och familjen emblikaväxter . Inga underarter finns listade i Catalogue of Life .', - 'Example 2 - Ossetian': 'Рагон англисаг æвзаг ( англ . Old English , рагон англ . Englisc sprǣc ) у англисаг æвзаджы фыццагон формæ Англисы æмæ хуссар Шотландийы XII æнусмæ хæлиугонд . Рагон англисаг æвзаг у ныгуылæн гермайнаг æвзаг .', - 'Example 3 - Tuvan': 'Черниң болгаш өске - даа планеталарның тыптып келгениниң дугайында эң - не баштайгы эртем - шинчилел ажылдарын 1755 чылда немец философ И . Кант кылган . 
Ол - ла үеде француз эртемден Лапластың кылган түңнелдери Кантыныы - биле дүгжүп турар . Кант биле Лаплас - Хүн Черге дөмейлешпес , тергиин изиг , хемчээл талазы - биле Черден хөй катап улуг , а Чер болза , Хүн системазының планетазының бирээзи болур деп тодаргайлааннар . Оон ыңай планета бүрүзү бодунуң орбитазы - биле Хүннү чаңгыс аай углуг дескинип турар , бойдуста бүгү - ле чүве үргүлчү өскерлип , хөгжүп , сайзырап турар деп түңнел үндүргеннер .', - 'Example 4 - Malayalam': 'നൂഗാ . ( Nougat ) ഒരു മധുരപലഹാരം . പഞ്ചസാരയും തേനും വറുത്തെടുത്ത നട്സുകളും മുട്ടയുടെ വെള്ളയുമെല്ലാം ചേർത്ത് നിർമ്മിക്കുന്നതാണ് ഈ പലഹാരം . അണ്ടിപ്പരിപ്പ് , ബദാം , പിസ്ത , വാൽനട്ട് , ഹസെൽനട്സ് തുടങ്ങിയ വിവിധ നട്സുകൾ ഇതിനായി ഉപയോഗിക്കാറുണ്ട് . പഴങ്ങളുടെ കഷ്ണവും ഇതിനൊപ്പം ചേർക്കാറുണ്ട് . ചോക്ളേറ്റ് ബാറുകളായും സദ്യക്ക് ശേഷമുള്ള ഡെസേട്ടായും ഇത് ഉപയോഗിക്കാറുണ്ട് . സ്പെയിൻ , ഇറ്റലി എന്നിവിടങ്ങളിലെ പ്രാദേശിക ഭാഷയായ ഓസിറ്റാൻ ഭാഷയിലാണ് ഈ വാക്കുള്ളത് . ആൻഡ്രോയിഡിന്റെ പുതിയ പതിപ്പിന് ഈ പലഹാരത്തിന്റെ പേരാണ് നൽകിയിട്ടുള്ളത് .', - 'Example 5 - Interlingue': 'Hó - témper , Ipce publica li electronic bulletine Ipce Newsletter e li revue Ipce Magazine . Li gruppe administra anc un website con un vast documental archive de scientific studies , libres e jornalistic articules pri li pedofilie e temas afin , quel include in plu un privat forum pri ti - ci classe de litteratura e pri li maniere de promotionar li academic debatte pri li pedofilie . Annualmen , Ipce celebra reuniones pro discusser pri questiones intern e altri temas , queles eveni in un land diferent chascun annu .'} diff --git a/spaces/manhdo/head_pose_estimation_tracking_app/app.py b/spaces/manhdo/head_pose_estimation_tracking_app/app.py deleted file mode 100644 index 48b52706896de233018c63b7214a2320a32834d0..0000000000000000000000000000000000000000 --- a/spaces/manhdo/head_pose_estimation_tracking_app/app.py +++ /dev/null @@ -1,156 +0,0 @@ -import cv2 -import pickle -import os -import argparse -import mediapipe as mp -import numpy as np -import glob -import time -import yaml -from PIL import Image - -import streamlit as st -st.set_page_config(layout="wide") - -from utils.drawing_utils import draw_all_informations -from utils.general import resize_img, get_cache_informations, parse_head_pose_informations -from utils.streamlit_options import default_UI -from utils.detection import detect_face_pose_informations_from_image - - -IMG_SUFFIX = ['jpeg', 'jpg', 'png'] - - -def main(): - parser = argparse.ArgumentParser() - # Pose position - parser.add_argument('--head_pose_info', default='configs.yaml', help='path to head pose information') - - args = parser.parse_args() - - ## Streamlit options - st.title("Head pose estimation tracking app 📷") - left_col, right_col = st.columns([2, 6]) - - with left_col: - default_UI() - - img_size = st.session_state.img_size.split('x') - img_size = [int(s) for s in img_size] - - - if st.session_state.img_upload is not None: - mp_face_mesh = mp.solutions.face_mesh - mp_face_detection = mp.solutions.face_detection - face_mesh = mp_face_mesh.FaceMesh(min_detection_confidence=.5, min_tracking_confidence=0.5) - - with open(args.head_pose_info, 'r') as f: - head_pose_info = yaml.load(f, Loader=yaml.FullLoader) - - parse_head_pose_informations(head_pose_info, st.session_state) - - img_dict = {} # List of images upload in RGB - save_dict = {'img_size': img_size, - 'position_horizontal_thresholds': st.session_state.position_horizontal_thresholds, - 'position_vertical_thresholds': st.session_state.position_vertical_thresholds} - image_results = {} # List 
of results for each image in RGB - image_names = [] # List image names - face_detection_dict = {} # Face detection information of each image - face_pose_dict = {} # Face pose information of each image - chosen_rectangle_pos_dict = {} # Positions where have face for each image - face_direction_dict = {} # direction of each face for each image - face_position_dict = {} # position of each face for each image - face_coordinate_dict = {} # coordinate of each face for each image - face_area_dict = {} - - if st.session_state.using_local_cache: - cache_dict = get_cache_informations(save_dict) - else: - cache_dict = {} - - for img_upload_file in st.session_state.img_upload: - image = np.array(Image.open(img_upload_file)) - if len(image.shape) > 2 and image.shape[2] == 4: - image = cv2.cvtColor(image, cv2.COLOR_RGBA2RGB) - - image_name = img_upload_file.name.split('.')[0] - - image_names.append(image_name) - img_dict[image_name] = image - - if image_name in cache_dict: - face_detection_dict[image_name] = cache_dict[image_name]['face_detection'] - face_pose_dict[image_name] = cache_dict[image_name]['face_pose'] - chosen_rectangle_pos_dict[image_name] = cache_dict[image_name]['chosen_rectangle_pos'] - face_direction_dict[image_name] = cache_dict[image_name]['face_direction'] - face_position_dict[image_name] = cache_dict[image_name]['face_position'] - face_coordinate_dict[image_name] = cache_dict[image_name]['face_coordinate'] - face_area_dict[image_name] = cache_dict[image_name]['face_area'] - total_time = 0 - - with mp_face_detection.FaceDetection( - model_selection=1, min_detection_confidence=0.5) as face_detection: - for id, (image_name, image) in enumerate(img_dict.items()): - if isinstance(image, str): - image = cv2.imread(image) - - # Check if the image has only 3 channels - if image.shape[-1] == 1: - image = np.stack([image]*3, axis=-1) - - start = time.time() - - image, h_ratio, w_ratio, pad_top, pad_bot, pad_left, pad_right, img_size_no_pad = \ - resize_img(image, img_size, return_all_infos=True) - image_results[image_name] = image - - # To improve performance - image.flags.writeable = False - - if not image_name in cache_dict: - face_detection_infos, face_direction_infos, face_position_infos, face_coordinate_infos, face_pose_infos, face_area_infos, chosen_rectangle_pos_list = \ - detect_face_pose_informations_from_image(image, face_detection, face_mesh, img_size, (pad_top, pad_bot, pad_left, pad_right), img_size_no_pad, head_pose_info) - - face_detection_dict[image_name] = face_detection_infos - face_direction_dict[image_name] = face_direction_infos - face_position_dict[image_name] = face_position_infos - face_coordinate_dict[image_name] = face_coordinate_infos - face_pose_dict[image_name] = face_pose_infos - chosen_rectangle_pos_dict[image_name] = chosen_rectangle_pos_list - face_area_dict[image_name] = face_area_infos - - end = time.time() - cur_time = end - start - total_time += cur_time - - draw_all_informations(image_results[image_name], face_detection_dict[image_name], face_direction_dict[image_name], face_position_dict[image_name], - face_coordinate_dict[image_name], face_pose_dict[image_name], face_area_dict[image_name], chosen_rectangle_pos_dict[image_name], - img_size, (pad_top, pad_bot, pad_left, pad_right), (w_ratio, h_ratio), st.session_state, head_pose_info) - - ## Save results to dict - if st.session_state.using_local_cache: - save_dict[image_name] = {} - save_dict[image_name]['face_detection'] = face_detection_dict[image_name] - save_dict[image_name]['face_pose'] = 
face_pose_dict[image_name] - save_dict[image_name]['face_direction'] = face_direction_dict[image_name] - save_dict[image_name]['face_position'] = face_position_dict[image_name] - save_dict[image_name]['chosen_rectangle_pos'] = chosen_rectangle_pos_dict[image_name] - save_dict[image_name]['face_coordinate'] = face_coordinate_dict[image_name] - save_dict[image_name]['face_area'] = face_area_dict[image_name] - - if image_name in cache_dict: # Remove image from cache to save memory - del cache_dict[image_name] - - print(f"FPS: {len(image_results) / total_time}") - ## Save results to local cache - if st.session_state.using_local_cache: - with open('cache.pkl', 'wb') as f: - pickle.dump(save_dict, f, protocol=pickle.HIGHEST_PROTOCOL) - - image_results = [image_results[image_name] for image_name in image_names] - with right_col: - st.image(image_results, width=st.session_state.width_visual, caption=image_names) - - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/manishjaiswal/09-Gradio-Multilingual-ImageToOCR-Demo/README.md b/spaces/manishjaiswal/09-Gradio-Multilingual-ImageToOCR-Demo/README.md deleted file mode 100644 index f4cb676deeb91dcbf36fb57c380cf13c055ea8ee..0000000000000000000000000000000000000000 --- a/spaces/manishjaiswal/09-Gradio-Multilingual-ImageToOCR-Demo/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: 09 Gradio Multilingual ImageToOCR Demo -emoji: 🏃 -colorFrom: indigo -colorTo: red -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/matthoffner/chatbot-mini/components/Settings/SettingDialog.tsx b/spaces/matthoffner/chatbot-mini/components/Settings/SettingDialog.tsx deleted file mode 100644 index 004a9cf507695ec2f44bcc2dcf8ffe5e738d85b0..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot-mini/components/Settings/SettingDialog.tsx +++ /dev/null @@ -1,105 +0,0 @@ -import { FC, useContext, useEffect, useReducer, useRef } from 'react'; - -import { useTranslation } from 'next-i18next'; - -import { useCreateReducer } from '@/hooks/useCreateReducer'; - -import { getSettings, saveSettings } from '@/utils/app/settings'; - -import { Settings } from '@/types/settings'; - -import HomeContext from '@/pages/api/home/home.context'; - -interface Props { - open: boolean; - onClose: () => void; -} - -export const SettingDialog: FC = ({ open, onClose }) => { - const { t } = useTranslation('settings'); - const settings: Settings = getSettings(); - const { state, dispatch } = useCreateReducer({ - initialState: settings, - }); - const { dispatch: homeDispatch } = useContext(HomeContext); - const modalRef = useRef(null); - - useEffect(() => { - const handleMouseDown = (e: MouseEvent) => { - if (modalRef.current && !modalRef.current.contains(e.target as Node)) { - window.addEventListener('mouseup', handleMouseUp); - } - }; - - const handleMouseUp = (e: MouseEvent) => { - window.removeEventListener('mouseup', handleMouseUp); - onClose(); - }; - - window.addEventListener('mousedown', handleMouseDown); - - return () => { - window.removeEventListener('mousedown', handleMouseDown); - }; - }, [onClose]); - - const handleSave = () => { - homeDispatch({ field: 'lightMode', value: state.theme }); - saveSettings(state); - }; - - // Render nothing if the dialog is not open. - if (!open) { - return <>; - } - - // Render the dialog. - return ( -
    -
    -
    - -
    -
    - ); -}; diff --git a/spaces/matthoffner/chatbot/utils/app/const.ts b/spaces/matthoffner/chatbot/utils/app/const.ts deleted file mode 100644 index f3766484c51a6aad44906278c1c3a741cd2fd3df..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot/utils/app/const.ts +++ /dev/null @@ -1,22 +0,0 @@ -export const DEFAULT_SYSTEM_PROMPT = - process.env.NEXT_PUBLIC_DEFAULT_SYSTEM_PROMPT || - "You are chatbot, an open source large language model hosted on HuggingFace by matthoffner. The specific model rotates as new ones are released, but you are ggml so optimized for CPU and consumer hardware. You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature. If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information."; - - -export const OPENAI_API_HOST = - process.env.OPENAI_API_HOST || 'https://api.openai.com'; - -export const DEFAULT_TEMPERATURE = - parseFloat(process.env.NEXT_PUBLIC_DEFAULT_TEMPERATURE || "1"); - -export const OPENAI_API_TYPE = - process.env.OPENAI_API_TYPE || 'openai'; - -export const OPENAI_API_VERSION = - process.env.OPENAI_API_VERSION || '2023-03-15-preview'; - -export const OPENAI_ORGANIZATION = - process.env.OPENAI_ORGANIZATION || ''; - -export const AZURE_DEPLOYMENT_ID = - process.env.AZURE_DEPLOYMENT_ID || ''; diff --git a/spaces/matthoffner/open-codetree/store/features/editorSlice.ts b/spaces/matthoffner/open-codetree/store/features/editorSlice.ts deleted file mode 100644 index bd5ab4b94060376f477170dfb52e244c281d882e..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/open-codetree/store/features/editorSlice.ts +++ /dev/null @@ -1,65 +0,0 @@ -import { createSlice, PayloadAction } from "@reduxjs/toolkit"; -import { RootState } from "../store"; -import { EditorValueInterface } from "../../_types/editorTypes"; -import { treeTemplates, monacoOptions } from "../../constants"; - -type InitialStateType = { - editorValue: EditorValueInterface; - monacoInputValue: EditorValueInterface; - logs: any; - isLogTabOpen: boolean; - options: any; -}; - -const initialState = { - editorValue: treeTemplates["_empty"], - monacoInputValue: treeTemplates["_empty"], - logs: [], - isLogTabOpen: false, - options: monacoOptions, -}; - -export const editorSlice = createSlice({ - name: "editor", - initialState: initialState, - reducers: { - set_editor_value: (state: InitialStateType, { payload }) => { - state.editorValue = payload; - }, - update_editor_code: (state: InitialStateType, { payload }) => { - state.editorValue.tabs[payload.type].data = payload.content; - }, - update_logs: (state: InitialStateType, { payload }) => { - state.logs = [...state.logs, payload]; - }, - clear_logs: (state: InitialStateType) => { - state.logs = []; - }, - toggle_logs_tab: (state: InitialStateType) => { - state.isLogTabOpen = !state.isLogTabOpen; - }, - set_monaco_input_value: ( - state: InitialStateType, - { payload }: PayloadAction - ) => { - state.monacoInputValue = payload; - }, - set_options: (state: InitialStateType, { payload }) => { - state.options = payload; - }, - }, -}); - -export const { - update_editor_code, - update_logs, - clear_logs, - toggle_logs_tab, - set_monaco_input_value, - 
set_editor_value, - set_options, -} = editorSlice.actions; - -export const editor_state = (state: RootState) => state.editor; - -export default editorSlice.reducer; diff --git a/spaces/merle/PROTEIN_GENERATOR/utils/model/utils/geometry.py b/spaces/merle/PROTEIN_GENERATOR/utils/model/utils/geometry.py deleted file mode 100644 index 58edab102102bf5650d11c72a7d5a76bb1abfb33..0000000000000000000000000000000000000000 --- a/spaces/merle/PROTEIN_GENERATOR/utils/model/utils/geometry.py +++ /dev/null @@ -1,200 +0,0 @@ -import numpy as np -import torch - -# ============================================================ -def get_pair_dist(a, b): - """calculate pair distances between two sets of points - - Parameters - ---------- - a,b : pytorch tensors of shape [batch,nres,3] - store Cartesian coordinates of two sets of atoms - Returns - ------- - dist : pytorch tensor of shape [batch,nres,nres] - stores paitwise distances between atoms in a and b - """ - - dist = torch.cdist(a, b, p=2) - return dist - -# ============================================================ -def get_ang(a, b, c): - """calculate planar angles for all consecutive triples (a[i],b[i],c[i]) - from Cartesian coordinates of three sets of atoms a,b,c - - Parameters - ---------- - a,b,c : pytorch tensors of shape [batch,nres,3] - store Cartesian coordinates of three sets of atoms - Returns - ------- - ang : pytorch tensor of shape [batch,nres] - stores resulting planar angles - """ - v = a - b - w = c - b - v = v / torch.norm(v, dim=-1, keepdim=True) - w = w / torch.norm(w, dim=-1, keepdim=True) - - # this is not stable at the poles - #vw = torch.sum(v*w, dim=-1) - #ang = torch.acos(vw) - - # this is better - # https://math.stackexchange.com/questions/1143354/numerically-stable-method-for-angle-between-3d-vectors/1782769 - y = torch.norm(v-w,dim=-1) - x = torch.norm(v+w,dim=-1) - ang = 2*torch.atan2(y, x) - - return ang - -# ============================================================ -def get_dih(a, b, c, d): - """calculate dihedral angles for all consecutive quadruples (a[i],b[i],c[i],d[i]) - given Cartesian coordinates of four sets of atoms a,b,c,d - - Parameters - ---------- - a,b,c,d : pytorch tensors of shape [batch,nres,3] - store Cartesian coordinates of four sets of atoms - Returns - ------- - dih : pytorch tensor of shape [batch,nres] - stores resulting dihedrals - """ - b0 = a - b - b1r = c - b - b2 = d - c - - b1 = b1r/torch.norm(b1r, dim=-1, keepdim=True) - - v = b0 - torch.sum(b0*b1, dim=-1, keepdim=True)*b1 - w = b2 - torch.sum(b2*b1, dim=-1, keepdim=True)*b1 - - x = torch.sum(v*w, dim=-1) - y = torch.sum(torch.cross(b1,v,dim=-1)*w, dim=-1) - ang = torch.atan2(y, x) - - return ang - - -# ============================================================ -def xyz_to_c6d(xyz, params): - """convert cartesian coordinates into 2d distance - and orientation maps - - Parameters - ---------- - xyz : pytorch tensor of shape [batch,3,nres,3] - stores Cartesian coordinates of backbone N,Ca,C atoms - Returns - ------- - c6d : pytorch tensor of shape [batch,nres,nres,4] - stores stacked dist,omega,theta,phi 2D maps - """ - - batch = xyz.shape[0] - nres = xyz.shape[2] - - # three anchor atoms - N = xyz[:,0] - Ca = xyz[:,1] - C = xyz[:,2] - - # recreate Cb given N,Ca,C - b = Ca - N - c = C - Ca - a = torch.cross(b, c, dim=-1) - Cb = -0.58273431*a + 0.56802827*b - 0.54067466*c + Ca - - # 6d coordinates order: (dist,omega,theta,phi) - c6d = torch.zeros([batch,nres,nres,4],dtype=xyz.dtype,device=xyz.device) - - dist = get_pair_dist(Cb,Cb) - 
dist[torch.isnan(dist)] = 999.9 - c6d[...,0] = dist + 999.9*torch.eye(nres,device=xyz.device)[None,...] - b,i,j = torch.where(c6d[...,0]=params['DMAX']] = 999.9 - - return c6d - - -# ============================================================ -def c6d_to_bins(c6d,params): - """bin 2d distance and orientation maps - """ - - dstep = (params['DMAX'] - params['DMIN']) / params['DBINS'] - astep = 2.0*np.pi / params['ABINS'] - - dbins = torch.linspace(params['DMIN']+dstep, params['DMAX'], params['DBINS'],dtype=c6d.dtype,device=c6d.device) - ab360 = torch.linspace(-np.pi+astep, np.pi, params['ABINS'],dtype=c6d.dtype,device=c6d.device) - ab180 = torch.linspace(astep, np.pi, params['ABINS']//2,dtype=c6d.dtype,device=c6d.device) - - db = torch.bucketize(c6d[...,0].contiguous(),dbins) - ob = torch.bucketize(c6d[...,1].contiguous(),ab360) - tb = torch.bucketize(c6d[...,2].contiguous(),ab360) - pb = torch.bucketize(c6d[...,3].contiguous(),ab180) - - ob[db==params['DBINS']] = params['ABINS'] - tb[db==params['DBINS']] = params['ABINS'] - pb[db==params['DBINS']] = params['ABINS']//2 - - return torch.stack([db,ob,tb,pb],axis=-1).to(torch.uint8) - - -# ============================================================ -def dist_to_bins(dist,params): - """bin 2d distance maps - """ - - dstep = (params['DMAX'] - params['DMIN']) / params['DBINS'] - db = torch.round((dist-params['DMIN']-dstep/2)/dstep) - - db[db<0] = 0 - db[db>params['DBINS']] = params['DBINS'] - - return db.long() - - -# ============================================================ -def c6d_to_bins2(c6d,params): - """bin 2d distance and orientation maps - (alternative slightly simpler version) - """ - - dstep = (params['DMAX'] - params['DMIN']) / params['DBINS'] - astep = 2.0*np.pi / params['ABINS'] - - db = torch.round((c6d[...,0]-params['DMIN']-dstep/2)/dstep) - ob = torch.round((c6d[...,1]+np.pi-astep/2)/astep) - tb = torch.round((c6d[...,2]+np.pi-astep/2)/astep) - pb = torch.round((c6d[...,3]-astep/2)/astep) - - # put all dparams['DBINS']] = params['DBINS'] - ob[db==params['DBINS']] = params['ABINS'] - tb[db==params['DBINS']] = params['ABINS'] - pb[db==params['DBINS']] = params['ABINS']//2 - - return torch.stack([db,ob,tb,pb],axis=-1).long() - - -# ============================================================ -def get_cb(N,Ca,C): - """recreate Cb given N,Ca,C""" - b = Ca - N - c = C - Ca - a = torch.cross(b, c, dim=-1) - Cb = -0.58273431*a + 0.56802827*b - 0.54067466*c + Ca - return Cb diff --git a/spaces/merve/anonymization/public/fill-in-the-blank/init-gender-over-time.js b/spaces/merve/anonymization/public/fill-in-the-blank/init-gender-over-time.js deleted file mode 100644 index 4e678f28d4669d45b6957cd3e110b325875a41a1..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/public/fill-in-the-blank/init-gender-over-time.js +++ /dev/null @@ -1,181 +0,0 @@ -/* Copyright 2021 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-==============================================================================*/ - - - -window.initGenderOverTime = async () => { - if (!window.genderOverTimeData){ - window.genderOverTimeData = await (await fetch('data/gender-over-time.json')).json() - } - - var isMobile = innerWidth <= 1100 - - var sentences = window.genderOverTimeData - - var blocks = [ - { - text: 'placeholder', - sentences: sentences.slice(0, 3), - ariaLabel: 'Gendered difference in predicted occupations, studies and names are smalled with a "in 2000" prefix than with a "in 1860" prefix.' - }, - { - text: 'placeholder', - sentences: [sentences[3], sentences[5], sentences[4]], - ariaLabel: 'Gendered difference in game play and bears do not decrease.' - - }, - ] - - var blockSel = d3.selectAll('.gender-over-time').html('').data(blocks) - .st({marginBottom: 30, marginTop: 30}) - .at({role: 'graphics-document', 'aria-label': d => d.ariaLabel}) - - var sentenceSel = blockSel.appendMany('div.sentence', d => d.sentences) - .st({display: 'inline-block'}) - .each(drawSentence) - - blockSel.filter((d, i) => !i).append('div.g-caption').html(` - The top 150 “he” and “she” completions in years from 1860-2018 are shown - with the y position encoding he_logit - she_logit. - Run in Colab →`) - - - - async function drawSentence({s0, s1, tidyCSV, minYear}, i){ - var tidy = d3.csvParse(tidyCSV) - var {colors} = util - - tidy.forEach(d => { - d.year = minYear + +d.year_index - d.i = +d.token_index - d.e0 = +d.e0 - d.e1 = +d.e1 - d.mean = d.e0 + d.e1 - d.dif = d.e0 - d.e1 - }) - - var sel = d3.select(this) - - function fmtStr(d){ - return d.replace('[MASK]', '___').replace('YEAR', '$year') - .replace(' he ', ' he ') - .replace(' she ', ' she ') - .replace(' his ', ' his ') - .replace(' her ', ' her ') - .replace(' they ', ' they ') - } - sel.classed('is-bear', d => s0.includes('bear')) - - var c0 = s0.includes('they') ? colors[2] : colors[0] - var c1 = s1.includes('they') ? colors[2] : colors[1] - - sel.append('div.sentence-title').st({color: c0}).html(fmtStr(s0)) - sel.append('div.sentence-title').st({color: c1}).html(fmtStr(s1)) - - var e0Extent = d3.extent(tidy, d => d.e0) - var e1Extent = d3.extent(tidy, d => d.e1) - var e0e1Exent = d3.extent(e0Extent.concat(e1Extent)) - - var maxDif = d3.max(d3.extent(tidy, d => d.dif), Math.abs) - var difExtent = [-maxDif, maxDif] - - drawDim(tidy, sel, { - key: 'dif', - yExtent: difExtent, - rectColor: [c0, c1] - }) - // drawDim(tidy, sel, { - // key: 'e0', - // yExtent: e0e1Exent, - // rectColor: [colors[0], colors[0]] - // }) - // drawDim(tidy, sel, { - // key: 'e1', - // yExtent: e0e1Exent, - // rectColor: [colors[1], colors[1]] - // }) - } - - function drawDim(tidy, sel, {key, rectColor, yExtent}){ - var c = d3.conventions({ - sel: sel.append('div'), - height: 240, - // width: 240, - margin: {left: 20, bottom: 20, right: 80, top: 5} - }) - - c.svg.append('rect') - .at({width: c.width, height: c.height/2, opacity: .1, fill: rectColor[0]}) - - c.svg.append('rect') - .at({width: c.width, height: c.height/2, opacity: .1, fill: rectColor[1], y: c.height/2}) - - c.x.domain(d3.extent(tidy, d => d.year)).interpolate(d3.interpolateRound) - c.y.domain(yExtent).interpolate(d3.interpolateRound) - - c.xAxis.tickFormat(d => d).ticks(5) - c.yAxis.ticks(c.y.ticks(2).length > 2 ? 2 : 3).tickFormat(d3.format('+')) - d3.drawAxis(c) - // c.svg.select('.y .tick text').st({fill: d => !d ? '' : rectColor[d < 0 ? 
0 : 1]}) - - var byToken = d3.nestBy(tidy, d => d.i) - byToken.forEach(d => { - d.endY = c.y(_.last(d)[key]) - d.str = bertLargeVocab[+d.key].replace('▁', '') - d.displayLabel = true - d.mean = d3.sum(d, e => e.mean) - d.keyMean = d3.sum(d, e => e[key]) - }) - - d3.nestBy(_.sortBy(byToken, d => -d.mean), d => Math.round(d.endY/12)) - .forEach(d => d.forEach((e, i) => e.displayLabel = !i)) - - var line = d3.line() - .x(d => c.x(d.year)) - .y(d => c.y(d[key])) - - var tokenSel = c.svg.appendMany('g.time-token', byToken) - // .call(d3.attachTooltip) - .on('mouseover', function(d){ - d3.selectAll('g.time-token') - .classed('active', 0) - .filter(e => e.str == d.str) - .classed('active', 1) - .raise() - }) - - c.svg.on('mouseleave', function(){ - d3.selectAll('g.time-token').classed('active', 0) - }) - - tokenSel.append('text') - .text(d => d.str) - .translate(d => [c.width + 2, d.endY]) - .at({fontSize: 10, dy: '.33em', fill: (d, i) => d.displayLabel ? '#999' : 'rgba(0,0,0,0)'}) - - tokenSel.append('path') - .at({ - d: line, - stroke: '#000', - opacity: .2, - fill: 'none', - }) - - } -} - - -if (window.init) window.init() - diff --git a/spaces/merve/anonymization/source/anonymization/make-axii.js b/spaces/merve/anonymization/source/anonymization/make-axii.js deleted file mode 100644 index c69b5eba387ec07f01ce2849726fda5461002aef..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/source/anonymization/make-axii.js +++ /dev/null @@ -1,86 +0,0 @@ -window.makeAxii = function(){ - - var stateScale = d3.scaleBand().domain(states).range(c.x.range()) - var stateAxis = c.svg.append('g.axis.state.init-hidden') - - var bw = stateScale.bandwidth()/2 - - stateAxis.appendMany('text', states) - .translate(d => [stateScale(d) + bw, c.height + 22]) - .text(d => d) - .at({ - textAnchor: 'middle', - }) - .st({fill: '#444'}) - - stateAxis.appendMany('path', d3.range(ages.length + 1)) - .at({ - d: d => ['M', d*c.width/(ages.length), '0 V', c.height].join(' '), - stroke: '#aaa', - }) - - stateAxis.append('text.bold').text('Home State') - .translate([c.width/2, c.height + 45]) - .at({textAnchor: 'middle'}) - - var ageScale = d3.scaleBand().domain(ages.slice().reverse()).range(c.x.range()) - var ageAxis = c.svg.append('g.axis.age.init-hidden') - - ageAxis.appendMany('text', ages) - .translate(d => [-30, ageScale(d) + bw]) - .text(d => d) - .at({dy: '.33em'}) - .st({fill: '#444'}) - - ageAxis.appendMany('path', d3.range(ages.length + 1)) - .at({ - d: d => ['M 0', d*c.width/(ages.length), 'H', c.width].join(' '), - stroke: '#aaa', - }) - - if (scale == 1){ - ageAxis - .append('g').translate([-43, c.height/2]) - .append('text.bold').text('Age') - .at({textAnchor: 'middle', transform: 'rotate(-90)'}) - } else { - ageAxis - .append('g').translate([-22, 14]) - .append('text.bold').text('Age') - .at({textAnchor: 'middle'}) - } - - var seasonAxis = c.svg.append('g.axis.state.init-hidden').lower() - seasonAxis.appendMany('g', ages) - .translate(d => ageScale(d), 1) - .appendMany('path', d3.range(1, 4)) - .at({ - d: d => ['M 0', d*bw/4*2, 'H', c.width].join(' '), - stroke: '#ddd', - }) - - var headAxis = c.svg.append('g.axis.state.init-hidden') - headAxis.appendMany('text.bold', ['Heads', 'Tails']) - .text(d => d) - .translate((d, i) => [i ? c.width/4*3 + 20 : c.width/4 - 20, 88]) - .at({textAnchor: 'middle'}) - - - var headCaptionAxis = c.svg.append('g.axis.state.init-hidden') - headCaptionAxis.appendMany('text', ['reports plagiarism', 'reports truth']) - .text(d => d) - .translate((d, i) => [i ? 
c.width/4*3 + 20 : c.width/4 - 20, 88 + 15]) - .at({textAnchor: 'middle'}) - .st({fill: '#444'}) - - - return {stateScale, stateAxis, headAxis, headCaptionAxis, ageScale, ageAxis, bw, seasonAxis} -} - - - - - - - -if (window.init) window.init() \ No newline at end of file diff --git a/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/non_leaking.py b/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/non_leaking.py deleted file mode 100644 index 4e044f98e836ae2c011ea91246b304d5ab1a1422..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/CharacterGAN/models/stylegan2/stylegan2-pytorch/non_leaking.py +++ /dev/null @@ -1,137 +0,0 @@ -import math - -import torch -from torch.nn import functional as F - - -def translate_mat(t_x, t_y): - batch = t_x.shape[0] - - mat = torch.eye(3).unsqueeze(0).repeat(batch, 1, 1) - translate = torch.stack((t_x, t_y), 1) - mat[:, :2, 2] = translate - - return mat - - -def rotate_mat(theta): - batch = theta.shape[0] - - mat = torch.eye(3).unsqueeze(0).repeat(batch, 1, 1) - sin_t = torch.sin(theta) - cos_t = torch.cos(theta) - rot = torch.stack((cos_t, -sin_t, sin_t, cos_t), 1).view(batch, 2, 2) - mat[:, :2, :2] = rot - - return mat - - -def scale_mat(s_x, s_y): - batch = s_x.shape[0] - - mat = torch.eye(3).unsqueeze(0).repeat(batch, 1, 1) - mat[:, 0, 0] = s_x - mat[:, 1, 1] = s_y - - return mat - - -def lognormal_sample(size, mean=0, std=1): - return torch.empty(size).log_normal_(mean=mean, std=std) - - -def category_sample(size, categories): - category = torch.tensor(categories) - sample = torch.randint(high=len(categories), size=(size,)) - - return category[sample] - - -def uniform_sample(size, low, high): - return torch.empty(size).uniform_(low, high) - - -def normal_sample(size, mean=0, std=1): - return torch.empty(size).normal_(mean, std) - - -def bernoulli_sample(size, p): - return torch.empty(size).bernoulli_(p) - - -def random_affine_apply(p, transform, prev, eye): - size = transform.shape[0] - select = bernoulli_sample(size, p).view(size, 1, 1) - select_transform = select * transform + (1 - select) * eye - - return select_transform @ prev - - -def sample_affine(p, size, height, width): - G = torch.eye(3).unsqueeze(0).repeat(size, 1, 1) - eye = G - - # flip - param = category_sample(size, (0, 1)) - Gc = scale_mat(1 - 2.0 * param, torch.ones(size)) - G = random_affine_apply(p, Gc, G, eye) - # print('flip', G, scale_mat(1 - 2.0 * param, torch.ones(size)), sep='\n') - - # 90 rotate - param = category_sample(size, (0, 3)) - Gc = rotate_mat(-math.pi / 2 * param) - G = random_affine_apply(p, Gc, G, eye) - # print('90 rotate', G, rotate_mat(-math.pi / 2 * param), sep='\n') - - # integer translate - param = uniform_sample(size, -0.125, 0.125) - param_height = torch.round(param * height) / height - param_width = torch.round(param * width) / width - Gc = translate_mat(param_width, param_height) - G = random_affine_apply(p, Gc, G, eye) - # print('integer translate', G, translate_mat(param_width, param_height), sep='\n') - - # isotropic scale - param = lognormal_sample(size, std=0.2 * math.log(2)) - Gc = scale_mat(param, param) - G = random_affine_apply(p, Gc, G, eye) - # print('isotropic scale', G, scale_mat(param, param), sep='\n') - - p_rot = 1 - math.sqrt(1 - p) - - # pre-rotate - param = uniform_sample(size, -math.pi, math.pi) - Gc = rotate_mat(-param) - G = random_affine_apply(p_rot, Gc, G, eye) - # print('pre-rotate', G, rotate_mat(-param), sep='\n') - - # anisotropic scale - param = lognormal_sample(size, std=0.2 * 
math.log(2)) - Gc = scale_mat(param, 1 / param) - G = random_affine_apply(p, Gc, G, eye) - # print('anisotropic scale', G, scale_mat(param, 1 / param), sep='\n') - - # post-rotate - param = uniform_sample(size, -math.pi, math.pi) - Gc = rotate_mat(-param) - G = random_affine_apply(p_rot, Gc, G, eye) - # print('post-rotate', G, rotate_mat(-param), sep='\n') - - # fractional translate - param = normal_sample(size, std=0.125) - Gc = translate_mat(param, param) - G = random_affine_apply(p, Gc, G, eye) - # print('fractional translate', G, translate_mat(param, param), sep='\n') - - return G - - -def apply_affine(img, G): - grid = F.affine_grid( - torch.inverse(G).to(img)[:, :2, :], img.shape, align_corners=False - ) - img_affine = F.grid_sample( - img, grid, mode="bilinear", align_corners=False, padding_mode="reflection" - ) - - return img_affine diff --git a/spaces/mfrashad/ClothingGAN/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/utils.py b/spaces/mfrashad/ClothingGAN/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/utils.py deleted file mode 100644 index 3b9edbef3ecc9bf85092f4e670eb5fac8a3b4616..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/ClothingGAN/models/biggan/pytorch_biggan/pytorch_pretrained_biggan/utils.py +++ /dev/null @@ -1,216 +0,0 @@ -# coding: utf-8 -""" BigGAN utilities to prepare truncated noise samples and convert/save/display output images. - Also comprise ImageNet utilities to prepare one hot input vectors for ImageNet classes. - We use Wordnet so you can just input a name in a string and automatically get a corresponding - imagenet class if it exists (or a hypo/hypernym exists in imagenet). -""" -from __future__ import absolute_import, division, print_function, unicode_literals - -import json -import logging -from io import BytesIO - -import numpy as np -from scipy.stats import truncnorm - -logger = logging.getLogger(__name__) - -NUM_CLASSES = 1000 - - -def truncated_noise_sample(batch_size=1, dim_z=128, truncation=1., seed=None): - """ Create a truncated noise vector. - Params: - batch_size: batch size. - dim_z: dimension of z - truncation: truncation value to use - seed: seed for the random generator - Output: - array of shape (batch_size, dim_z) - """ - state = None if seed is None else np.random.RandomState(seed) - values = truncnorm.rvs(-2, 2, size=(batch_size, dim_z), random_state=state).astype(np.float32) - return truncation * values - - -def convert_to_images(obj): - """ Convert an output tensor from BigGAN in a list of images. - Params: - obj: tensor or numpy array of shape (batch_size, channels, height, width) - Output: - list of Pillow Images of size (height, width) - """ - try: - import PIL - except ImportError: - raise ImportError("Please install Pillow to use images: pip install Pillow") - - if not isinstance(obj, np.ndarray): - obj = obj.detach().numpy() - - obj = obj.transpose((0, 2, 3, 1)) - obj = np.clip(((obj + 1) / 2.0) * 256, 0, 255) - - img = [] - for i, out in enumerate(obj): - out_array = np.asarray(np.uint8(out), dtype=np.uint8) - img.append(PIL.Image.fromarray(out_array)) - return img - - -def save_as_images(obj, file_name='output'): - """ Convert and save an output tensor from BigGAN in a list of saved images. - Params: - obj: tensor or numpy array of shape (batch_size, channels, height, width) - file_name: path and beggingin of filename to save. 
- Images will be saved as `file_name_{image_number}.png` - """ - img = convert_to_images(obj) - - for i, out in enumerate(img): - current_file_name = file_name + '_%d.png' % i - logger.info("Saving image to {}".format(current_file_name)) - out.save(current_file_name, 'png') - - -def display_in_terminal(obj): - """ Convert and display an output tensor from BigGAN in the terminal. - This function use `libsixel` and will only work in a libsixel-compatible terminal. - Please refer to https://github.com/saitoha/libsixel for more details. - - Params: - obj: tensor or numpy array of shape (batch_size, channels, height, width) - file_name: path and beggingin of filename to save. - Images will be saved as `file_name_{image_number}.png` - """ - try: - import PIL - from libsixel import (sixel_output_new, sixel_dither_new, sixel_dither_initialize, - sixel_dither_set_palette, sixel_dither_set_pixelformat, - sixel_dither_get, sixel_encode, sixel_dither_unref, - sixel_output_unref, SIXEL_PIXELFORMAT_RGBA8888, - SIXEL_PIXELFORMAT_RGB888, SIXEL_PIXELFORMAT_PAL8, - SIXEL_PIXELFORMAT_G8, SIXEL_PIXELFORMAT_G1) - except ImportError: - raise ImportError("Display in Terminal requires Pillow, libsixel " - "and a libsixel compatible terminal. " - "Please read info at https://github.com/saitoha/libsixel " - "and install with pip install Pillow libsixel-python") - - s = BytesIO() - - images = convert_to_images(obj) - widths, heights = zip(*(i.size for i in images)) - - output_width = sum(widths) - output_height = max(heights) - - output_image = PIL.Image.new('RGB', (output_width, output_height)) - - x_offset = 0 - for im in images: - output_image.paste(im, (x_offset,0)) - x_offset += im.size[0] - - try: - data = output_image.tobytes() - except NotImplementedError: - data = output_image.tostring() - output = sixel_output_new(lambda data, s: s.write(data), s) - - try: - if output_image.mode == 'RGBA': - dither = sixel_dither_new(256) - sixel_dither_initialize(dither, data, output_width, output_height, SIXEL_PIXELFORMAT_RGBA8888) - elif output_image.mode == 'RGB': - dither = sixel_dither_new(256) - sixel_dither_initialize(dither, data, output_width, output_height, SIXEL_PIXELFORMAT_RGB888) - elif output_image.mode == 'P': - palette = output_image.getpalette() - dither = sixel_dither_new(256) - sixel_dither_set_palette(dither, palette) - sixel_dither_set_pixelformat(dither, SIXEL_PIXELFORMAT_PAL8) - elif output_image.mode == 'L': - dither = sixel_dither_get(SIXEL_BUILTIN_G8) - sixel_dither_set_pixelformat(dither, SIXEL_PIXELFORMAT_G8) - elif output_image.mode == '1': - dither = sixel_dither_get(SIXEL_BUILTIN_G1) - sixel_dither_set_pixelformat(dither, SIXEL_PIXELFORMAT_G1) - else: - raise RuntimeError('unexpected output_image mode') - try: - sixel_encode(data, output_width, output_height, 1, dither, output) - print(s.getvalue().decode('ascii')) - finally: - sixel_dither_unref(dither) - finally: - sixel_output_unref(output) - - -def one_hot_from_int(int_or_list, batch_size=1): - """ Create a one-hot vector from a class index or a list of class indices. - Params: - int_or_list: int, or list of int, of the imagenet classes (between 0 and 999) - batch_size: batch size. - If int_or_list is an int create a batch of identical classes. 
- If int_or_list is a list, we should have `len(int_or_list) == batch_size` - Output: - array of shape (batch_size, 1000) - """ - if isinstance(int_or_list, int): - int_or_list = [int_or_list] - - if len(int_or_list) == 1 and batch_size > 1: - int_or_list = [int_or_list[0]] * batch_size - - assert batch_size == len(int_or_list) - - array = np.zeros((batch_size, NUM_CLASSES), dtype=np.float32) - for i, j in enumerate(int_or_list): - array[i, j] = 1.0 - return array - - -def one_hot_from_names(class_name_or_list, batch_size=1): - """ Create a one-hot vector from the name of an imagenet class ('tennis ball', 'daisy', ...). - We use NLTK's wordnet search to try to find the relevant synset of ImageNet and take the first one. - If we can't find it direcly, we look at the hyponyms and hypernyms of the class name. - - Params: - class_name_or_list: string containing the name of an imagenet object or a list of such strings (for a batch). - Output: - array of shape (batch_size, 1000) - """ - try: - from nltk.corpus import wordnet as wn - except ImportError: - raise ImportError("You need to install nltk to use this function") - - if not isinstance(class_name_or_list, (list, tuple)): - class_name_or_list = [class_name_or_list] - else: - batch_size = max(batch_size, len(class_name_or_list)) - - classes = [] - for class_name in class_name_or_list: - class_name = class_name.replace(" ", "_") - - original_synsets = wn.synsets(class_name) - original_synsets = list(filter(lambda s: s.pos() == 'n', original_synsets)) # keep only names - if not original_synsets: - return None - - possible_synsets = list(filter(lambda s: s.offset() in IMAGENET, original_synsets)) - if possible_synsets: - classes.append(IMAGENET[possible_synsets[0].offset()]) - else: - # try hypernyms and hyponyms - possible_synsets = sum([s.hypernyms() + s.hyponyms() for s in original_synsets], []) - possible_synsets = list(filter(lambda s: s.offset() in IMAGENET, possible_synsets)) - if possible_synsets: - classes.append(IMAGENET[possible_synsets[0].offset()]) - - return one_hot_from_int(classes, batch_size=batch_size) - - -IMAGENET = {1440764: 0, 1443537: 1, 1484850: 2, 1491361: 3, 1494475: 4, 1496331: 5, 1498041: 6, 1514668: 7, 1514859: 8, 1518878: 9, 1530575: 10, 1531178: 11, 1532829: 12, 1534433: 13, 1537544: 14, 1558993: 15, 1560419: 16, 1580077: 17, 1582220: 18, 1592084: 19, 1601694: 20, 1608432: 21, 1614925: 22, 1616318: 23, 1622779: 24, 1629819: 25, 1630670: 26, 1631663: 27, 1632458: 28, 1632777: 29, 1641577: 30, 1644373: 31, 1644900: 32, 1664065: 33, 1665541: 34, 1667114: 35, 1667778: 36, 1669191: 37, 1675722: 38, 1677366: 39, 1682714: 40, 1685808: 41, 1687978: 42, 1688243: 43, 1689811: 44, 1692333: 45, 1693334: 46, 1694178: 47, 1695060: 48, 1697457: 49, 1698640: 50, 1704323: 51, 1728572: 52, 1728920: 53, 1729322: 54, 1729977: 55, 1734418: 56, 1735189: 57, 1737021: 58, 1739381: 59, 1740131: 60, 1742172: 61, 1744401: 62, 1748264: 63, 1749939: 64, 1751748: 65, 1753488: 66, 1755581: 67, 1756291: 68, 1768244: 69, 1770081: 70, 1770393: 71, 1773157: 72, 1773549: 73, 1773797: 74, 1774384: 75, 1774750: 76, 1775062: 77, 1776313: 78, 1784675: 79, 1795545: 80, 1796340: 81, 1797886: 82, 1798484: 83, 1806143: 84, 1806567: 85, 1807496: 86, 1817953: 87, 1818515: 88, 1819313: 89, 1820546: 90, 1824575: 91, 1828970: 92, 1829413: 93, 1833805: 94, 1843065: 95, 1843383: 96, 1847000: 97, 1855032: 98, 1855672: 99, 1860187: 100, 1871265: 101, 1872401: 102, 1873310: 103, 1877812: 104, 1882714: 105, 1883070: 106, 1910747: 107, 1914609: 108, 1917289: 109, 
1924916: 110, 1930112: 111, 1943899: 112, 1944390: 113, 1945685: 114, 1950731: 115, 1955084: 116, 1968897: 117, 1978287: 118, 1978455: 119, 1980166: 120, 1981276: 121, 1983481: 122, 1984695: 123, 1985128: 124, 1986214: 125, 1990800: 126, 2002556: 127, 2002724: 128, 2006656: 129, 2007558: 130, 2009229: 131, 2009912: 132, 2011460: 133, 2012849: 134, 2013706: 135, 2017213: 136, 2018207: 137, 2018795: 138, 2025239: 139, 2027492: 140, 2028035: 141, 2033041: 142, 2037110: 143, 2051845: 144, 2056570: 145, 2058221: 146, 2066245: 147, 2071294: 148, 2074367: 149, 2077923: 150, 2085620: 151, 2085782: 152, 2085936: 153, 2086079: 154, 2086240: 155, 2086646: 156, 2086910: 157, 2087046: 158, 2087394: 159, 2088094: 160, 2088238: 161, 2088364: 162, 2088466: 163, 2088632: 164, 2089078: 165, 2089867: 166, 2089973: 167, 2090379: 168, 2090622: 169, 2090721: 170, 2091032: 171, 2091134: 172, 2091244: 173, 2091467: 174, 2091635: 175, 2091831: 176, 2092002: 177, 2092339: 178, 2093256: 179, 2093428: 180, 2093647: 181, 2093754: 182, 2093859: 183, 2093991: 184, 2094114: 185, 2094258: 186, 2094433: 187, 2095314: 188, 2095570: 189, 2095889: 190, 2096051: 191, 2096177: 192, 2096294: 193, 2096437: 194, 2096585: 195, 2097047: 196, 2097130: 197, 2097209: 198, 2097298: 199, 2097474: 200, 2097658: 201, 2098105: 202, 2098286: 203, 2098413: 204, 2099267: 205, 2099429: 206, 2099601: 207, 2099712: 208, 2099849: 209, 2100236: 210, 2100583: 211, 2100735: 212, 2100877: 213, 2101006: 214, 2101388: 215, 2101556: 216, 2102040: 217, 2102177: 218, 2102318: 219, 2102480: 220, 2102973: 221, 2104029: 222, 2104365: 223, 2105056: 224, 2105162: 225, 2105251: 226, 2105412: 227, 2105505: 228, 2105641: 229, 2105855: 230, 2106030: 231, 2106166: 232, 2106382: 233, 2106550: 234, 2106662: 235, 2107142: 236, 2107312: 237, 2107574: 238, 2107683: 239, 2107908: 240, 2108000: 241, 2108089: 242, 2108422: 243, 2108551: 244, 2108915: 245, 2109047: 246, 2109525: 247, 2109961: 248, 2110063: 249, 2110185: 250, 2110341: 251, 2110627: 252, 2110806: 253, 2110958: 254, 2111129: 255, 2111277: 256, 2111500: 257, 2111889: 258, 2112018: 259, 2112137: 260, 2112350: 261, 2112706: 262, 2113023: 263, 2113186: 264, 2113624: 265, 2113712: 266, 2113799: 267, 2113978: 268, 2114367: 269, 2114548: 270, 2114712: 271, 2114855: 272, 2115641: 273, 2115913: 274, 2116738: 275, 2117135: 276, 2119022: 277, 2119789: 278, 2120079: 279, 2120505: 280, 2123045: 281, 2123159: 282, 2123394: 283, 2123597: 284, 2124075: 285, 2125311: 286, 2127052: 287, 2128385: 288, 2128757: 289, 2128925: 290, 2129165: 291, 2129604: 292, 2130308: 293, 2132136: 294, 2133161: 295, 2134084: 296, 2134418: 297, 2137549: 298, 2138441: 299, 2165105: 300, 2165456: 301, 2167151: 302, 2168699: 303, 2169497: 304, 2172182: 305, 2174001: 306, 2177972: 307, 2190166: 308, 2206856: 309, 2219486: 310, 2226429: 311, 2229544: 312, 2231487: 313, 2233338: 314, 2236044: 315, 2256656: 316, 2259212: 317, 2264363: 318, 2268443: 319, 2268853: 320, 2276258: 321, 2277742: 322, 2279972: 323, 2280649: 324, 2281406: 325, 2281787: 326, 2317335: 327, 2319095: 328, 2321529: 329, 2325366: 330, 2326432: 331, 2328150: 332, 2342885: 333, 2346627: 334, 2356798: 335, 2361337: 336, 2363005: 337, 2364673: 338, 2389026: 339, 2391049: 340, 2395406: 341, 2396427: 342, 2397096: 343, 2398521: 344, 2403003: 345, 2408429: 346, 2410509: 347, 2412080: 348, 2415577: 349, 2417914: 350, 2422106: 351, 2422699: 352, 2423022: 353, 2437312: 354, 2437616: 355, 2441942: 356, 2442845: 357, 2443114: 358, 2443484: 359, 2444819: 360, 2445715: 361, 2447366: 362, 2454379: 
363, 2457408: 364, 2480495: 365, 2480855: 366, 2481823: 367, 2483362: 368, 2483708: 369, 2484975: 370, 2486261: 371, 2486410: 372, 2487347: 373, 2488291: 374, 2488702: 375, 2489166: 376, 2490219: 377, 2492035: 378, 2492660: 379, 2493509: 380, 2493793: 381, 2494079: 382, 2497673: 383, 2500267: 384, 2504013: 385, 2504458: 386, 2509815: 387, 2510455: 388, 2514041: 389, 2526121: 390, 2536864: 391, 2606052: 392, 2607072: 393, 2640242: 394, 2641379: 395, 2643566: 396, 2655020: 397, 2666196: 398, 2667093: 399, 2669723: 400, 2672831: 401, 2676566: 402, 2687172: 403, 2690373: 404, 2692877: 405, 2699494: 406, 2701002: 407, 2704792: 408, 2708093: 409, 2727426: 410, 2730930: 411, 2747177: 412, 2749479: 413, 2769748: 414, 2776631: 415, 2777292: 416, 2782093: 417, 2783161: 418, 2786058: 419, 2787622: 420, 2788148: 421, 2790996: 422, 2791124: 423, 2791270: 424, 2793495: 425, 2794156: 426, 2795169: 427, 2797295: 428, 2799071: 429, 2802426: 430, 2804414: 431, 2804610: 432, 2807133: 433, 2808304: 434, 2808440: 435, 2814533: 436, 2814860: 437, 2815834: 438, 2817516: 439, 2823428: 440, 2823750: 441, 2825657: 442, 2834397: 443, 2835271: 444, 2837789: 445, 2840245: 446, 2841315: 447, 2843684: 448, 2859443: 449, 2860847: 450, 2865351: 451, 2869837: 452, 2870880: 453, 2871525: 454, 2877765: 455, 2879718: 456, 2883205: 457, 2892201: 458, 2892767: 459, 2894605: 460, 2895154: 461, 2906734: 462, 2909870: 463, 2910353: 464, 2916936: 465, 2917067: 466, 2927161: 467, 2930766: 468, 2939185: 469, 2948072: 470, 2950826: 471, 2951358: 472, 2951585: 473, 2963159: 474, 2965783: 475, 2966193: 476, 2966687: 477, 2971356: 478, 2974003: 479, 2977058: 480, 2978881: 481, 2979186: 482, 2980441: 483, 2981792: 484, 2988304: 485, 2992211: 486, 2992529: 487, 2999410: 488, 3000134: 489, 3000247: 490, 3000684: 491, 3014705: 492, 3016953: 493, 3017168: 494, 3018349: 495, 3026506: 496, 3028079: 497, 3032252: 498, 3041632: 499, 3042490: 500, 3045698: 501, 3047690: 502, 3062245: 503, 3063599: 504, 3063689: 505, 3065424: 506, 3075370: 507, 3085013: 508, 3089624: 509, 3095699: 510, 3100240: 511, 3109150: 512, 3110669: 513, 3124043: 514, 3124170: 515, 3125729: 516, 3126707: 517, 3127747: 518, 3127925: 519, 3131574: 520, 3133878: 521, 3134739: 522, 3141823: 523, 3146219: 524, 3160309: 525, 3179701: 526, 3180011: 527, 3187595: 528, 3188531: 529, 3196217: 530, 3197337: 531, 3201208: 532, 3207743: 533, 3207941: 534, 3208938: 535, 3216828: 536, 3218198: 537, 3220513: 538, 3223299: 539, 3240683: 540, 3249569: 541, 3250847: 542, 3255030: 543, 3259280: 544, 3271574: 545, 3272010: 546, 3272562: 547, 3290653: 548, 3291819: 549, 3297495: 550, 3314780: 551, 3325584: 552, 3337140: 553, 3344393: 554, 3345487: 555, 3347037: 556, 3355925: 557, 3372029: 558, 3376595: 559, 3379051: 560, 3384352: 561, 3388043: 562, 3388183: 563, 3388549: 564, 3393912: 565, 3394916: 566, 3400231: 567, 3404251: 568, 3417042: 569, 3424325: 570, 3425413: 571, 3443371: 572, 3444034: 573, 3445777: 574, 3445924: 575, 3447447: 576, 3447721: 577, 3450230: 578, 3452741: 579, 3457902: 580, 3459775: 581, 3461385: 582, 3467068: 583, 3476684: 584, 3476991: 585, 3478589: 586, 3481172: 587, 3482405: 588, 3483316: 589, 3485407: 590, 3485794: 591, 3492542: 592, 3494278: 593, 3495258: 594, 3496892: 595, 3498962: 596, 3527444: 597, 3529860: 598, 3530642: 599, 3532672: 600, 3534580: 601, 3535780: 602, 3538406: 603, 3544143: 604, 3584254: 605, 3584829: 606, 3590841: 607, 3594734: 608, 3594945: 609, 3595614: 610, 3598930: 611, 3599486: 612, 3602883: 613, 3617480: 614, 3623198: 615, 3627232: 616, 
3630383: 617, 3633091: 618, 3637318: 619, 3642806: 620, 3649909: 621, 3657121: 622, 3658185: 623, 3661043: 624, 3662601: 625, 3666591: 626, 3670208: 627, 3673027: 628, 3676483: 629, 3680355: 630, 3690938: 631, 3691459: 632, 3692522: 633, 3697007: 634, 3706229: 635, 3709823: 636, 3710193: 637, 3710637: 638, 3710721: 639, 3717622: 640, 3720891: 641, 3721384: 642, 3724870: 643, 3729826: 644, 3733131: 645, 3733281: 646, 3733805: 647, 3742115: 648, 3743016: 649, 3759954: 650, 3761084: 651, 3763968: 652, 3764736: 653, 3769881: 654, 3770439: 655, 3770679: 656, 3773504: 657, 3775071: 658, 3775546: 659, 3776460: 660, 3777568: 661, 3777754: 662, 3781244: 663, 3782006: 664, 3785016: 665, 3786901: 666, 3787032: 667, 3788195: 668, 3788365: 669, 3791053: 670, 3792782: 671, 3792972: 672, 3793489: 673, 3794056: 674, 3796401: 675, 3803284: 676, 3804744: 677, 3814639: 678, 3814906: 679, 3825788: 680, 3832673: 681, 3837869: 682, 3838899: 683, 3840681: 684, 3841143: 685, 3843555: 686, 3854065: 687, 3857828: 688, 3866082: 689, 3868242: 690, 3868863: 691, 3871628: 692, 3873416: 693, 3874293: 694, 3874599: 695, 3876231: 696, 3877472: 697, 3877845: 698, 3884397: 699, 3887697: 700, 3888257: 701, 3888605: 702, 3891251: 703, 3891332: 704, 3895866: 705, 3899768: 706, 3902125: 707, 3903868: 708, 3908618: 709, 3908714: 710, 3916031: 711, 3920288: 712, 3924679: 713, 3929660: 714, 3929855: 715, 3930313: 716, 3930630: 717, 3933933: 718, 3935335: 719, 3937543: 720, 3938244: 721, 3942813: 722, 3944341: 723, 3947888: 724, 3950228: 725, 3954731: 726, 3956157: 727, 3958227: 728, 3961711: 729, 3967562: 730, 3970156: 731, 3976467: 732, 3976657: 733, 3977966: 734, 3980874: 735, 3982430: 736, 3983396: 737, 3991062: 738, 3992509: 739, 3995372: 740, 3998194: 741, 4004767: 742, 4005630: 743, 4008634: 744, 4009552: 745, 4019541: 746, 4023962: 747, 4026417: 748, 4033901: 749, 4033995: 750, 4037443: 751, 4039381: 752, 4040759: 753, 4041544: 754, 4044716: 755, 4049303: 756, 4065272: 757, 4067472: 758, 4069434: 759, 4070727: 760, 4074963: 761, 4081281: 762, 4086273: 763, 4090263: 764, 4099969: 765, 4111531: 766, 4116512: 767, 4118538: 768, 4118776: 769, 4120489: 770, 4125021: 771, 4127249: 772, 4131690: 773, 4133789: 774, 4136333: 775, 4141076: 776, 4141327: 777, 4141975: 778, 4146614: 779, 4147183: 780, 4149813: 781, 4152593: 782, 4153751: 783, 4154565: 784, 4162706: 785, 4179913: 786, 4192698: 787, 4200800: 788, 4201297: 789, 4204238: 790, 4204347: 791, 4208210: 792, 4209133: 793, 4209239: 794, 4228054: 795, 4229816: 796, 4235860: 797, 4238763: 798, 4239074: 799, 4243546: 800, 4251144: 801, 4252077: 802, 4252225: 803, 4254120: 804, 4254680: 805, 4254777: 806, 4258138: 807, 4259630: 808, 4263257: 809, 4264628: 810, 4265275: 811, 4266014: 812, 4270147: 813, 4273569: 814, 4275548: 815, 4277352: 816, 4285008: 817, 4286575: 818, 4296562: 819, 4310018: 820, 4311004: 821, 4311174: 822, 4317175: 823, 4325704: 824, 4326547: 825, 4328186: 826, 4330267: 827, 4332243: 828, 4335435: 829, 4336792: 830, 4344873: 831, 4346328: 832, 4347754: 833, 4350905: 834, 4355338: 835, 4355933: 836, 4356056: 837, 4357314: 838, 4366367: 839, 4367480: 840, 4370456: 841, 4371430: 842, 4371774: 843, 4372370: 844, 4376876: 845, 4380533: 846, 4389033: 847, 4392985: 848, 4398044: 849, 4399382: 850, 4404412: 851, 4409515: 852, 4417672: 853, 4418357: 854, 4423845: 855, 4428191: 856, 4429376: 857, 4435653: 858, 4442312: 859, 4443257: 860, 4447861: 861, 4456115: 862, 4458633: 863, 4461696: 864, 4462240: 865, 4465501: 866, 4467665: 867, 4476259: 868, 4479046: 869, 4482393: 
870, 4483307: 871, 4485082: 872, 4486054: 873, 4487081: 874, 4487394: 875, 4493381: 876, 4501370: 877, 4505470: 878, 4507155: 879, 4509417: 880, 4515003: 881, 4517823: 882, 4522168: 883, 4523525: 884, 4525038: 885, 4525305: 886, 4532106: 887, 4532670: 888, 4536866: 889, 4540053: 890, 4542943: 891, 4548280: 892, 4548362: 893, 4550184: 894, 4552348: 895, 4553703: 896, 4554684: 897, 4557648: 898, 4560804: 899, 4562935: 900, 4579145: 901, 4579432: 902, 4584207: 903, 4589890: 904, 4590129: 905, 4591157: 906, 4591713: 907, 4592741: 908, 4596742: 909, 4597913: 910, 4599235: 911, 4604644: 912, 4606251: 913, 4612504: 914, 4613696: 915, 6359193: 916, 6596364: 917, 6785654: 918, 6794110: 919, 6874185: 920, 7248320: 921, 7565083: 922, 7579787: 923, 7583066: 924, 7584110: 925, 7590611: 926, 7613480: 927, 7614500: 928, 7615774: 929, 7684084: 930, 7693725: 931, 7695742: 932, 7697313: 933, 7697537: 934, 7711569: 935, 7714571: 936, 7714990: 937, 7715103: 938, 7716358: 939, 7716906: 940, 7717410: 941, 7717556: 942, 7718472: 943, 7718747: 944, 7720875: 945, 7730033: 946, 7734744: 947, 7742313: 948, 7745940: 949, 7747607: 950, 7749582: 951, 7753113: 952, 7753275: 953, 7753592: 954, 7754684: 955, 7760859: 956, 7768694: 957, 7802026: 958, 7831146: 959, 7836838: 960, 7860988: 961, 7871810: 962, 7873807: 963, 7875152: 964, 7880968: 965, 7892512: 966, 7920052: 967, 7930864: 968, 7932039: 969, 9193705: 970, 9229709: 971, 9246464: 972, 9256479: 973, 9288635: 974, 9332890: 975, 9399592: 976, 9421951: 977, 9428293: 978, 9468604: 979, 9472597: 980, 9835506: 981, 10148035: 982, 10565667: 983, 11879895: 984, 11939491: 985, 12057211: 986, 12144580: 987, 12267677: 988, 12620546: 989, 12768682: 990, 12985857: 991, 12998815: 992, 13037406: 993, 13040303: 994, 13044778: 995, 13052670: 996, 13054560: 997, 13133613: 998, 15075141: 999} diff --git a/spaces/misteca/ChatGPT/utils.py b/spaces/misteca/ChatGPT/utils.py deleted file mode 100644 index d58b5eeff9af8a9a1808fe6f24759da77644e325..0000000000000000000000000000000000000000 --- a/spaces/misteca/ChatGPT/utils.py +++ /dev/null @@ -1,332 +0,0 @@ -"""Contains all of the components that can be used with Gradio Interface / Blocks. -Along with the docs for each component, you can find the names of example demos that use -each component. These demos are located in the `demo` directory.""" - -from __future__ import annotations -from typing import TYPE_CHECKING, Any, Callable, Dict, List, Tuple, Type -import json -import gradio as gr -# import openai -import os -import traceback -import requests -# import markdown -import csv -import mdtex2html -from pypinyin import lazy_pinyin -from presets import * - -if TYPE_CHECKING: - from typing import TypedDict - - class DataframeData(TypedDict): - headers: List[str] - data: List[List[str | int | bool]] - -initial_prompt = "You are a helpful assistant." -API_URL = "https://api.openai.com/v1/chat/completions" -HISTORY_DIR = "history" -TEMPLATES_DIR = "templates" - -def postprocess( - self, y: List[Tuple[str | None, str | None]] - ) -> List[Tuple[str | None, str | None]]: - """ - Parameters: - y: List of tuples representing the message and response pairs. Each message and response should be a string, which may be in Markdown format. - Returns: - List of tuples representing the message and response. Each message and response will be a string of HTML. 
- """ - if y is None: - return [] - for i, (message, response) in enumerate(y): - y[i] = ( - # None if message is None else markdown.markdown(message), - # None if response is None else markdown.markdown(response), - None if message is None else mdtex2html.convert((message)), - None if response is None else mdtex2html.convert(response), - ) - return y - -def parse_text(text): - lines = text.split("\n") - lines = [line for line in lines if line != ""] - count = 0 - for i, line in enumerate(lines): - if "```" in line: - count += 1 - items = line.split('`') - if count % 2 == 1: - lines[i] = f'
<pre><code class="language-{items[-1]}">'
-            else:
-                lines[i] = f'</code></pre>
    ' - else: - if i > 0: - if count % 2 == 1: - line = line.replace("`", "\`") - line = line.replace("<", "<") - line = line.replace(">", ">") - line = line.replace(" ", " ") - line = line.replace("*", "*") - line = line.replace("_", "_") - line = line.replace("-", "-") - line = line.replace(".", ".") - line = line.replace("!", "!") - line = line.replace("(", "(") - line = line.replace(")", ")") - line = line.replace("$", "$") - lines[i] = "
    "+line - text = "".join(lines) - return text - -def construct_text(role, text): - return {"role": role, "content": text} - -def construct_user(text): - return construct_text("user", text) - -def construct_system(text): - return construct_text("system", text) - -def construct_assistant(text): - return construct_text("assistant", text) - -def construct_token_message(token, stream=False): - extra = "【仅包含回答的计数】 " if stream else "" - return f"{extra}Token 计数: {token}" - -def get_response(openai_api_key, system_prompt, history, temperature, top_p, stream): - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {openai_api_key}" - } - - history = [construct_system(system_prompt), *history] - - payload = { - "model": "gpt-3.5-turbo", - "messages": history, # [{"role": "user", "content": f"{inputs}"}], - "temperature": temperature, # 1.0, - "top_p": top_p, # 1.0, - "n": 1, - "stream": stream, - "presence_penalty": 0, - "frequency_penalty": 0, - } - if stream: - timeout = timeout_streaming - else: - timeout = timeout_all - response = requests.post(API_URL, headers=headers, json=payload, stream=True, timeout=timeout) - return response - -def stream_predict(openai_api_key, system_prompt, history, inputs, chatbot, previous_token_count, top_p, temperature): - def get_return_value(): - return chatbot, history, status_text, [*previous_token_count, token_counter] - token_counter = 0 - partial_words = "" - counter = 0 - status_text = "OK" - history.append(construct_user(inputs)) - try: - response = get_response(openai_api_key, system_prompt, history, temperature, top_p, True) - except requests.exceptions.ConnectTimeout: - status_text = standard_error_msg + error_retrieve_prompt - yield get_return_value() - return - - chatbot.append((parse_text(inputs), "")) - yield get_return_value() - - for chunk in response.iter_lines(): - if counter == 0: - counter += 1 - continue - counter += 1 - # check whether each line is non-empty - if chunk: - chunk = chunk.decode() - chunklength = len(chunk) - chunk = json.loads(chunk[6:]) - # decode each line as response data is in bytes - if chunklength > 6 and "delta" in chunk['choices'][0]: - finish_reason = chunk['choices'][0]['finish_reason'] - status_text = construct_token_message(sum(previous_token_count)+token_counter, stream=True) - if finish_reason == "stop": - yield get_return_value() - break - partial_words = partial_words + chunk['choices'][0]["delta"]["content"] - if token_counter == 0: - history.append(construct_assistant(" " + partial_words)) - else: - history[-1] = construct_assistant(partial_words) - chatbot[-1] = (parse_text(inputs), parse_text(partial_words)) - token_counter += 1 - yield get_return_value() - - -def predict_all(openai_api_key, system_prompt, history, inputs, chatbot, previous_token_count, top_p, temperature): - history.append(construct_user(inputs)) - try: - response = get_response(openai_api_key, system_prompt, history, temperature, top_p, False) - except requests.exceptions.ConnectTimeout: - status_text = standard_error_msg + error_retrieve_prompt - return chatbot, history, status_text, previous_token_count - response = json.loads(response.text) - content = response["choices"][0]["message"]["content"] - history.append(construct_assistant(content)) - chatbot.append((parse_text(inputs), parse_text(content))) - total_token_count = response["usage"]["total_tokens"] - previous_token_count.append(total_token_count - sum(previous_token_count)) - status_text = construct_token_message(total_token_count) - return chatbot, 
history, status_text, previous_token_count - - -def predict(openai_api_key, system_prompt, history, inputs, chatbot, token_count, top_p, temperature, stream=False, should_check_token_count = True): # repetition_penalty, top_k - if stream: - iter = stream_predict(openai_api_key, system_prompt, history, inputs, chatbot, token_count, top_p, temperature) - for chatbot, history, status_text, token_count in iter: - yield chatbot, history, status_text, token_count - else: - chatbot, history, status_text, token_count = predict_all(openai_api_key, system_prompt, history, inputs, chatbot, token_count, top_p, temperature) - yield chatbot, history, status_text, token_count - if stream: - max_token = max_token_streaming - else: - max_token = max_token_all - if sum(token_count) > max_token and should_check_token_count: - iter = reduce_token_size(openai_api_key, system_prompt, history, chatbot, token_count, top_p, temperature, stream=False, hidden=True) - for chatbot, history, status_text, token_count in iter: - status_text = f"Token 达到上限,已自动降低Token计数至 {status_text}" - yield chatbot, history, status_text, token_count - - -def retry(openai_api_key, system_prompt, history, chatbot, token_count, top_p, temperature, stream=False): - if len(history) == 0: - yield chatbot, history, f"{standard_error_msg}上下文是空的", token_count - return - history.pop() - inputs = history.pop()["content"] - token_count.pop() - iter = predict(openai_api_key, system_prompt, history, inputs, chatbot, token_count, top_p, temperature, stream=stream) - for x in iter: - yield x - - -def reduce_token_size(openai_api_key, system_prompt, history, chatbot, token_count, top_p, temperature, stream=False, hidden=False): - iter = predict(openai_api_key, system_prompt, history, summarize_prompt, chatbot, token_count, top_p, temperature, stream=stream, should_check_token_count=False) - for chatbot, history, status_text, previous_token_count in iter: - history = history[-2:] - token_count = previous_token_count[-1:] - if hidden: - chatbot.pop() - yield chatbot, history, construct_token_message(sum(token_count), stream=stream), token_count - - -def delete_last_conversation(chatbot, history, previous_token_count, streaming): - if len(chatbot) > 0 and standard_error_msg in chatbot[-1][1]: - chatbot.pop() - return chatbot, history - if len(history) > 0: - history.pop() - history.pop() - if len(chatbot) > 0: - chatbot.pop() - if len(previous_token_count) > 0: - previous_token_count.pop() - return chatbot, history, previous_token_count, construct_token_message(sum(previous_token_count), streaming) - - -def save_chat_history(filename, system, history, chatbot): - if filename == "": - return - if not filename.endswith(".json"): - filename += ".json" - os.makedirs(HISTORY_DIR, exist_ok=True) - json_s = {"system": system, "history": history, "chatbot": chatbot} - print(json_s) - with open(os.path.join(HISTORY_DIR, filename), "w") as f: - json.dump(json_s, f) - - -def load_chat_history(filename, system, history, chatbot): - try: - with open(os.path.join(HISTORY_DIR, filename), "r") as f: - json_s = json.load(f) - if type(json_s["history"]) == list: - new_history = [] - for index, item in enumerate(json_s["history"]): - if index % 2 == 0: - new_history.append(construct_user(item)) - else: - new_history.append(construct_assistant(item)) - json_s["history"] = new_history - return filename, json_s["system"], json_s["history"], json_s["chatbot"] - except FileNotFoundError: - print("File not found.") - return filename, system, history, chatbot - -def 
sorted_by_pinyin(list): - return sorted(list, key=lambda char: lazy_pinyin(char)[0][0]) - -def get_file_names(dir, plain=False, filetypes=[".json"]): - # find all json files in the current directory and return their names - files = [] - try: - for type in filetypes: - files += [f for f in os.listdir(dir) if f.endswith(type)] - except FileNotFoundError: - files = [] - files = sorted_by_pinyin(files) - if files == []: - files = [""] - if plain: - return files - else: - return gr.Dropdown.update(choices=files) - -def get_history_names(plain=False): - return get_file_names(HISTORY_DIR, plain) - -def load_template(filename, mode=0): - lines = [] - print("Loading template...") - if filename.endswith(".json"): - with open(os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8") as f: - lines = json.load(f) - lines = [[i["act"], i["prompt"]] for i in lines] - else: - with open(os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8") as csvfile: - reader = csv.reader(csvfile) - lines = list(reader) - lines = lines[1:] - if mode == 1: - return sorted_by_pinyin([row[0] for row in lines]) - elif mode == 2: - return {row[0]:row[1] for row in lines} - else: - choices = sorted_by_pinyin([row[0] for row in lines]) - return {row[0]:row[1] for row in lines}, gr.Dropdown.update(choices=choices, value=choices[0]) - -def get_template_names(plain=False): - return get_file_names(TEMPLATES_DIR, plain, filetypes=[".csv", "json"]) - -def get_template_content(templates, selection, original_system_prompt): - try: - return templates[selection] - except: - return original_system_prompt - -def reset_state(): - return [], [], [], construct_token_message(0) - -def compose_system(system_prompt): - return {"role": "system", "content": system_prompt} - - -def compose_user(user_input): - return {"role": "user", "content": user_input} - - -def reset_textbox(): - return gr.update(value='') diff --git a/spaces/mjuetz/neu/README.md b/spaces/mjuetz/neu/README.md deleted file mode 100644 index 47363f75e4db4c928234d82cca9292655f472d98..0000000000000000000000000000000000000000 --- a/spaces/mjuetz/neu/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Neu -emoji: 🔥 -colorFrom: gray -colorTo: purple -sdk: streamlit -sdk_version: 1.26.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mrmocciai/rvc-models/infer_pack/commons.py b/spaces/mrmocciai/rvc-models/infer_pack/commons.py deleted file mode 100644 index 54470986f37825b35d90d7efa7437d1c26b87215..0000000000000000000000000000000000000000 --- a/spaces/mrmocciai/rvc-models/infer_pack/commons.py +++ /dev/null @@ -1,166 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += ( - 0.5 * (torch.exp(2.0 * logs_p) + ((m_p - m_q) ** 2)) * torch.exp(-2.0 * logs_q) - ) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 
0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def slice_segments2(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d(length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = math.log(float(max_timescale) / float(min_timescale)) / ( - num_timescales - 1 - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def 
clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1.0 / norm_type) - return total_norm diff --git a/spaces/mshukor/UnIVAL/fairseq/tests/test_plasma_utils.py b/spaces/mshukor/UnIVAL/fairseq/tests/test_plasma_utils.py deleted file mode 100644 index e6344c2a5a73fcb2fb81376e7bd43470963b3674..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/tests/test_plasma_utils.py +++ /dev/null @@ -1,126 +0,0 @@ -import contextlib -import unittest -import tempfile -from io import StringIO - -import numpy as np - -from tests.utils import create_dummy_data, preprocess_lm_data, train_language_model - -try: - from pyarrow import plasma - from fairseq.data.plasma_utils import PlasmaView, PlasmaStore - - PYARROW_AVAILABLE = True -except ImportError: - PYARROW_AVAILABLE = False - -dummy_path = "dummy" - - -@unittest.skipUnless(PYARROW_AVAILABLE, "") -class TestPlasmaView(unittest.TestCase): - def setUp(self) -> None: - self.tmp_file = tempfile.NamedTemporaryFile() # noqa: P201 - self.path = self.tmp_file.name - self.server = PlasmaStore.start(path=self.path, nbytes=10000) - self.client = plasma.connect(self.path, num_retries=10) - - def tearDown(self) -> None: - self.client.disconnect() - self.tmp_file.close() - self.server.kill() - - def test_two_servers_do_not_share_object_id_space(self): - data_server_1 = np.array([0, 1]) - data_server_2 = np.array([2, 3]) - server_2_path = self.path - with tempfile.NamedTemporaryFile() as server_1_path: - server = PlasmaStore.start(path=server_1_path.name, nbytes=10000) - arr1 = PlasmaView( - data_server_1, dummy_path, 1, plasma_path=server_1_path.name - ) - assert len(arr1.client.list()) == 1 - assert (arr1.array == data_server_1).all() - arr2 = PlasmaView(data_server_2, dummy_path, 1, plasma_path=server_2_path) - assert (arr2.array == data_server_2).all() - assert (arr1.array == data_server_1).all() - server.kill() - - def test_hash_collision(self): - data_server_1 = np.array([0, 1]) - data_server_2 = np.array([2, 3]) - arr1 = PlasmaView(data_server_1, dummy_path, 1, plasma_path=self.path) - assert len(arr1.client.list()) == 1 - arr2 = PlasmaView(data_server_2, dummy_path, 1, plasma_path=self.path) - assert len(arr1.client.list()) == 1 - assert len(arr2.client.list()) == 1 - assert (arr2.array == data_server_1).all() - # New hash key based on tuples - arr3 = PlasmaView( - data_server_2, dummy_path, (1, 12312312312, None), plasma_path=self.path - ) - assert ( - len(arr2.client.list()) == 2 - ), "No new object was created by using a novel hash key" - assert ( - arr3.object_id in arr2.client.list() - ), "No new object was created by using a novel hash key" - assert ( - arr3.object_id in arr3.client.list() - ), "No new object was created by using a novel hash key" - del arr3, arr2, arr1 - - @staticmethod - def _assert_view_equal(pv1, pv2): - np.testing.assert_array_equal(pv1.array, pv2.array) - - def test_putting_same_array_twice(self): - data = np.array([4, 4, 4]) - arr1 = PlasmaView(data, dummy_path, 1, plasma_path=self.path) - assert len(self.client.list()) == 1 - arr1b = 
PlasmaView( - data, dummy_path, 1, plasma_path=self.path - ) # should not change contents of store - arr1c = PlasmaView( - None, dummy_path, 1, plasma_path=self.path - ) # should not change contents of store - - assert len(self.client.list()) == 1 - self._assert_view_equal(arr1, arr1b) - self._assert_view_equal(arr1, arr1c) - PlasmaView( - data, dummy_path, 2, plasma_path=self.path - ) # new object id, adds new entry - assert len(self.client.list()) == 2 - - new_client = plasma.connect(self.path) - assert len(new_client.list()) == 2 # new client can access same objects - assert isinstance(arr1.object_id, plasma.ObjectID) - del arr1b - del arr1c - - def test_plasma_store_full_raises(self): - with tempfile.NamedTemporaryFile() as new_path: - server = PlasmaStore.start(path=new_path.name, nbytes=10000) - with self.assertRaises(plasma.PlasmaStoreFull): - # 2000 floats is more than 2000 bytes - PlasmaView( - np.random.rand(10000, 1), dummy_path, 1, plasma_path=new_path.name - ) - server.kill() - - def test_object_id_overflow(self): - PlasmaView.get_object_id("", 2 ** 21) - - def test_training_lm_plasma(self): - with contextlib.redirect_stdout(StringIO()): - with tempfile.TemporaryDirectory("test_transformer_lm") as data_dir: - create_dummy_data(data_dir) - preprocess_lm_data(data_dir) - train_language_model( - data_dir, - "transformer_lm", - ["--use-plasma-view", "--plasma-path", self.path], - run_validation=True, - ) diff --git a/spaces/mshukor/UnIVAL/fairseq/tests/test_sequence_scorer.py b/spaces/mshukor/UnIVAL/fairseq/tests/test_sequence_scorer.py deleted file mode 100644 index 42f9447b599bcd7a9913aec37d94ea5078ff43a3..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/tests/test_sequence_scorer.py +++ /dev/null @@ -1,120 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import unittest - -import tests.utils as test_utils -import torch -from fairseq.sequence_scorer import SequenceScorer - - -class TestSequenceScorer(unittest.TestCase): - def test_sequence_scorer(self): - # construct dummy dictionary - d = test_utils.dummy_dictionary(vocab_size=2) - self.assertEqual(d.pad(), 1) - self.assertEqual(d.eos(), 2) - self.assertEqual(d.unk(), 3) - eos = d.eos() - w1 = 4 - w2 = 5 - - # construct dataloader - data = [ - { - "source": torch.LongTensor([w1, w2, eos]), - "target": torch.LongTensor([w1, w2, w1, eos]), - }, - { - "source": torch.LongTensor([w2, eos]), - "target": torch.LongTensor([w2, w1, eos]), - }, - { - "source": torch.LongTensor([w2, eos]), - "target": torch.LongTensor([w2, eos]), - }, - ] - data_itr = test_utils.dummy_dataloader(data) - - # specify expected output probabilities - args = argparse.Namespace() - unk = 0.0 - args.beam_probs = [ - # step 0: - torch.FloatTensor( - [ - # eos w1 w2 - [0.0, unk, 0.6, 0.4], # sentence 1 - [0.0, unk, 0.4, 0.6], # sentence 2 - [0.0, unk, 0.7, 0.3], # sentence 3 - ] - ), - # step 1: - torch.FloatTensor( - [ - # eos w1 w2 - [0.0, unk, 0.2, 0.7], # sentence 1 - [0.0, unk, 0.8, 0.2], # sentence 2 - [0.7, unk, 0.1, 0.2], # sentence 3 - ] - ), - # step 2: - torch.FloatTensor( - [ - # eos w1 w2 - [0.10, unk, 0.50, 0.4], # sentence 1 - [0.15, unk, 0.15, 0.7], # sentence 2 - [0.00, unk, 0.00, 0.0], # sentence 3 - ] - ), - # step 3: - torch.FloatTensor( - [ - # eos w1 w2 - [0.9, unk, 0.05, 0.05], # sentence 1 - [0.0, unk, 0.00, 0.0], # sentence 2 - [0.0, unk, 0.00, 0.0], # sentence 3 - ] - ), - ] - expected_scores = [ - [0.6, 0.7, 0.5, 0.9], # sentence 1 - [0.6, 0.8, 0.15], # sentence 2 - [0.3, 0.7], # sentence 3 - ] - - task = test_utils.TestTranslationTask.setup_task(args, d, d) - model = task.build_model(args) - scorer = SequenceScorer(task.target_dictionary) - for sample in data_itr: - hypos = task.inference_step(scorer, [model], sample) - for id, hypos_id in zip(sample["id"].tolist(), hypos): - self.assertHypoTokens(hypos_id[0], data[id]["target"]) - self.assertHypoScore(hypos_id[0], expected_scores[id]) - - def assertHypoTokens(self, hypo, tokens): - self.assertTensorEqual(hypo["tokens"], torch.LongTensor(tokens)) - - def assertHypoScore(self, hypo, pos_probs, normalized=True, lenpen=1.0): - pos_scores = torch.FloatTensor(pos_probs).log() - self.assertAlmostEqual(hypo["positional_scores"], pos_scores) - self.assertEqual(pos_scores.numel(), hypo["tokens"].numel()) - score = pos_scores.sum() - if normalized: - score /= pos_scores.numel() ** lenpen - self.assertLess(abs(score - hypo["score"]), 1e-6) - - def assertAlmostEqual(self, t1, t2): - self.assertEqual(t1.size(), t2.size(), "size mismatch") - self.assertLess((t1 - t2).abs().max(), 1e-4) - - def assertTensorEqual(self, t1, t2): - self.assertEqual(t1.size(), t2.size(), "size mismatch") - self.assertEqual(t1.ne(t2).long().sum(), 0) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/nakas/audio-diffusion_style_transfer/audiodiffusion/__init__.py b/spaces/nakas/audio-diffusion_style_transfer/audiodiffusion/__init__.py deleted file mode 100644 index 8192887a083f8197592e9f9796149cdf89459912..0000000000000000000000000000000000000000 --- a/spaces/nakas/audio-diffusion_style_transfer/audiodiffusion/__init__.py +++ /dev/null @@ -1,369 +0,0 @@ -from math import acos, sin -from typing import Iterable, Tuple, Union, List - -import torch -import numpy as np -from PIL import Image -from tqdm.auto import tqdm -from librosa.beat import 
beat_track -from diffusers import (DiffusionPipeline, UNet2DConditionModel, DDIMScheduler, - DDPMScheduler, AutoencoderKL) - -from .mel import Mel - -VERSION = "1.2.5" - - -class AudioDiffusion: - - def __init__(self, - model_id: str = "teticio/audio-diffusion-256", - sample_rate: int = 22050, - n_fft: int = 2048, - hop_length: int = 512, - top_db: int = 80, - cuda: bool = torch.cuda.is_available(), - progress_bar: Iterable = tqdm): - """Class for generating audio using De-noising Diffusion Probabilistic Models. - - Args: - model_id (String): name of model (local directory or Hugging Face Hub) - sample_rate (int): sample rate of audio - n_fft (int): number of Fast Fourier Transforms - hop_length (int): hop length (a higher number is recommended for lower than 256 y_res) - top_db (int): loudest in decibels - cuda (bool): use CUDA? - progress_bar (iterable): iterable callback for progress updates or None - """ - self.model_id = model_id - pipeline = { - 'LatentAudioDiffusionPipeline': LatentAudioDiffusionPipeline, - 'AudioDiffusionPipeline': AudioDiffusionPipeline - }.get( - DiffusionPipeline.get_config_dict(self.model_id)['_class_name'], - AudioDiffusionPipeline) - self.pipe = pipeline.from_pretrained(self.model_id) - if cuda: - self.pipe.to("cuda") - self.progress_bar = progress_bar or (lambda _: _) - - # For backwards compatibility - sample_size = (self.pipe.unet.sample_size, - self.pipe.unet.sample_size) if type( - self.pipe.unet.sample_size - ) == int else self.pipe.unet.sample_size - self.mel = Mel(x_res=sample_size[1], - y_res=sample_size[0], - sample_rate=sample_rate, - n_fft=n_fft, - hop_length=hop_length, - top_db=top_db) - - def generate_spectrogram_and_audio( - self, - steps: int = None, - generator: torch.Generator = None, - step_generator: torch.Generator = None, - eta: float = 0, - noise: torch.Tensor = None - ) -> Tuple[Image.Image, Tuple[int, np.ndarray]]: - """Generate random mel spectrogram and convert to audio. - - Args: - steps (int): number of de-noising steps (defaults to 50 for DDIM, 1000 for DDPM) - generator (torch.Generator): random number generator or None - step_generator (torch.Generator): random number generator used to de-noise or None - eta (float): parameter between 0 and 1 used with DDIM scheduler - noise (torch.Tensor): noisy image or None - - Returns: - PIL Image: mel spectrogram - (float, np.ndarray): sample rate and raw audio - """ - images, (sample_rate, - audios) = self.pipe(mel=self.mel, - batch_size=1, - steps=steps, - generator=generator, - step_generator=step_generator, - eta=eta, - noise=noise) - return images[0], (sample_rate, audios[0]) - - def generate_spectrogram_and_audio_from_audio( - self, - audio_file: str = None, - raw_audio: np.ndarray = None, - slice: int = 0, - start_step: int = 0, - steps: int = None, - generator: torch.Generator = None, - mask_start_secs: float = 0, - mask_end_secs: float = 0, - step_generator: torch.Generator = None, - eta: float = 0, - noise: torch.Tensor = None - ) -> Tuple[Image.Image, Tuple[int, np.ndarray]]: - """Generate random mel spectrogram from audio input and convert to audio. 
- - Args: - audio_file (str): must be a file on disk due to Librosa limitation or - raw_audio (np.ndarray): audio as numpy array - slice (int): slice number of audio to convert - start_step (int): step to start from - steps (int): number of de-noising steps (defaults to 50 for DDIM, 1000 for DDPM) - generator (torch.Generator): random number generator or None - mask_start_secs (float): number of seconds of audio to mask (not generate) at start - mask_end_secs (float): number of seconds of audio to mask (not generate) at end - step_generator (torch.Generator): random number generator used to de-noise or None - eta (float): parameter between 0 and 1 used with DDIM scheduler - noise (torch.Tensor): noisy image or None - - Returns: - PIL Image: mel spectrogram - (float, np.ndarray): sample rate and raw audio - """ - - images, (sample_rate, - audios) = self.pipe(mel=self.mel, - batch_size=1, - audio_file=audio_file, - raw_audio=raw_audio, - slice=slice, - start_step=start_step, - steps=steps, - generator=generator, - mask_start_secs=mask_start_secs, - mask_end_secs=mask_end_secs, - step_generator=step_generator, - eta=eta, - noise=noise) - return images[0], (sample_rate, audios[0]) - - @staticmethod - def loop_it(audio: np.ndarray, - sample_rate: int, - loops: int = 12) -> np.ndarray: - """Loop audio - - Args: - audio (np.ndarray): audio as numpy array - sample_rate (int): sample rate of audio - loops (int): number of times to loop - - Returns: - (float, np.ndarray): sample rate and raw audio or None - """ - _, beats = beat_track(y=audio, sr=sample_rate, units='samples') - for beats_in_bar in [16, 12, 8, 4]: - if len(beats) > beats_in_bar: - return np.tile(audio[beats[0]:beats[beats_in_bar]], loops) - return None - - -class AudioDiffusionPipeline(DiffusionPipeline): - - def __init__(self, unet: UNet2DConditionModel, - scheduler: Union[DDIMScheduler, DDPMScheduler]): - super().__init__() - self.register_modules(unet=unet, scheduler=scheduler) - - @torch.no_grad() - def __call__( - self, - mel: Mel, - batch_size: int = 1, - audio_file: str = None, - raw_audio: np.ndarray = None, - slice: int = 0, - start_step: int = 0, - steps: int = None, - generator: torch.Generator = None, - mask_start_secs: float = 0, - mask_end_secs: float = 0, - step_generator: torch.Generator = None, - eta: float = 0, - noise: torch.Tensor = None - ) -> Tuple[List[Image.Image], Tuple[int, List[np.ndarray]]]: - """Generate random mel spectrogram from audio input and convert to audio. 
- - Args: - mel (Mel): instance of Mel class to perform image <-> audio - batch_size (int): number of samples to generate - audio_file (str): must be a file on disk due to Librosa limitation or - raw_audio (np.ndarray): audio as numpy array - slice (int): slice number of audio to convert - start_step (int): step to start from - steps (int): number of de-noising steps (defaults to 50 for DDIM, 1000 for DDPM) - generator (torch.Generator): random number generator or None - mask_start_secs (float): number of seconds of audio to mask (not generate) at start - mask_end_secs (float): number of seconds of audio to mask (not generate) at end - step_generator (torch.Generator): random number generator used to de-noise or None - eta (float): parameter between 0 and 1 used with DDIM scheduler - noise (torch.Tensor): noise tensor of shape (batch_size, 1, height, width) or None - - Returns: - List[PIL Image]: mel spectrograms - (float, List[np.ndarray]): sample rate and raw audios - """ - - steps = steps or 50 if isinstance(self.scheduler, - DDIMScheduler) else 1000 - self.scheduler.set_timesteps(steps) - step_generator = step_generator or generator - # For backwards compatibility - if type(self.unet.sample_size) == int: - self.unet.sample_size = (self.unet.sample_size, - self.unet.sample_size) - if noise is None: - noise = torch.randn( - (batch_size, self.unet.in_channels, self.unet.sample_size[0], - self.unet.sample_size[1]), - generator=generator) - images = noise - mask = None - - if audio_file is not None or raw_audio is not None: - mel.load_audio(audio_file, raw_audio) - input_image = mel.audio_slice_to_image(slice) - input_image = np.frombuffer(input_image.tobytes(), - dtype="uint8").reshape( - (input_image.height, - input_image.width)) - input_image = ((input_image / 255) * 2 - 1) - input_images = np.tile(input_image, (batch_size, 1, 1, 1)) - - if hasattr(self, 'vqvae'): - input_images = self.vqvae.encode( - input_images).latent_dist.sample(generator=generator) - input_images = 0.18215 * input_images - - if start_step > 0: - images[0, 0] = self.scheduler.add_noise( - torch.tensor(input_images[:, np.newaxis, np.newaxis, :]), - noise, torch.tensor(steps - start_step)) - - pixels_per_second = (self.unet.sample_size[1] * - mel.get_sample_rate() / mel.x_res / - mel.hop_length) - mask_start = int(mask_start_secs * pixels_per_second) - mask_end = int(mask_end_secs * pixels_per_second) - mask = self.scheduler.add_noise( - torch.tensor(input_images[:, np.newaxis, :]), noise, - torch.tensor(self.scheduler.timesteps[start_step:])) - - images = images.to(self.device) - for step, t in enumerate( - self.progress_bar(self.scheduler.timesteps[start_step:])): - model_output = self.unet(images, t)['sample'] - - if isinstance(self.scheduler, DDIMScheduler): - images = self.scheduler.step( - model_output=model_output, - timestep=t, - sample=images, - eta=eta, - generator=step_generator)['prev_sample'] - else: - images = self.scheduler.step( - model_output=model_output, - timestep=t, - sample=images, - generator=step_generator)['prev_sample'] - - if mask is not None: - if mask_start > 0: - images[:, :, :, :mask_start] = mask[ - step, :, :, :, :mask_start] - if mask_end > 0: - images[:, :, :, -mask_end:] = mask[step, :, :, :, - -mask_end:] - - if hasattr(self, 'vqvae'): - # 0.18215 was scaling factor used in training to ensure unit variance - images = 1 / 0.18215 * images - images = self.vqvae.decode(images)['sample'] - - images = (images / 2 + 0.5).clamp(0, 1) - images = images.cpu().permute(0, 2, 3, 1).numpy() - 
images = (images * 255).round().astype("uint8") - images = list( - map(lambda _: Image.fromarray(_[:, :, 0]), images) if images. - shape[3] == 1 else map( - lambda _: Image.fromarray(_, mode='RGB').convert('L'), images)) - - audios = list(map(lambda _: mel.image_to_audio(_), images)) - return images, (mel.get_sample_rate(), audios) - - @torch.no_grad() - def encode(self, images: List[Image.Image], steps: int = 50) -> np.ndarray: - """Reverse step process: recover noisy image from generated image. - - Args: - images (List[PIL Image]): list of images to encode - steps (int): number of encoding steps to perform (defaults to 50) - - Returns: - np.ndarray: noise tensor of shape (batch_size, 1, height, width) - """ - - # Only works with DDIM as this method is deterministic - assert isinstance(self.scheduler, DDIMScheduler) - self.scheduler.set_timesteps(steps) - sample = np.array([ - np.frombuffer(image.tobytes(), dtype="uint8").reshape( - (1, image.height, image.width)) for image in images - ]) - sample = ((sample / 255) * 2 - 1) - sample = torch.Tensor(sample).to(self.device) - - for t in self.progress_bar(torch.flip(self.scheduler.timesteps, - (0, ))): - prev_timestep = (t - self.scheduler.num_train_timesteps // - self.scheduler.num_inference_steps) - alpha_prod_t = self.scheduler.alphas_cumprod[t] - alpha_prod_t_prev = (self.scheduler.alphas_cumprod[prev_timestep] - if prev_timestep >= 0 else - self.scheduler.final_alpha_cumprod) - beta_prod_t = 1 - alpha_prod_t - model_output = self.unet(sample, t)['sample'] - pred_sample_direction = (1 - - alpha_prod_t_prev)**(0.5) * model_output - sample = (sample - - pred_sample_direction) * alpha_prod_t_prev**(-0.5) - sample = sample * alpha_prod_t**(0.5) + beta_prod_t**( - 0.5) * model_output - - return sample - - @staticmethod - def slerp(x0: torch.Tensor, x1: torch.Tensor, - alpha: float) -> torch.Tensor: - """Spherical Linear intERPolation - - Args: - x0 (torch.Tensor): first tensor to interpolate between - x1 (torch.Tensor): seconds tensor to interpolate between - alpha (float): interpolation between 0 and 1 - - Returns: - torch.Tensor: interpolated tensor - """ - - theta = acos( - torch.dot(torch.flatten(x0), torch.flatten(x1)) / torch.norm(x0) / - torch.norm(x1)) - return sin((1 - alpha) * theta) * x0 / sin(theta) + sin( - alpha * theta) * x1 / sin(theta) - - -class LatentAudioDiffusionPipeline(AudioDiffusionPipeline): - - def __init__(self, unet: UNet2DConditionModel, - scheduler: Union[DDIMScheduler, - DDPMScheduler], vqvae: AutoencoderKL): - super().__init__(unet=unet, scheduler=scheduler) - self.register_modules(vqvae=vqvae) - - def __call__(self, *args, **kwargs): - return super().__call__(*args, **kwargs) diff --git a/spaces/nateraw/gradio-demo/app.py b/spaces/nateraw/gradio-demo/app.py deleted file mode 100644 index 8fbc6dc6df0078ca909819822d0b2da7199a430c..0000000000000000000000000000000000000000 --- a/spaces/nateraw/gradio-demo/app.py +++ /dev/null @@ -1,32 +0,0 @@ -import requests - -import gradio as gr -import torch -from timm import create_model -from timm.data import resolve_data_config -from timm.data.transforms_factory import create_transform - -IMAGENET_1k_URL = "https://storage.googleapis.com/bit_models/ilsvrc2012_wordnet_lemmas.txt" -LABELS = requests.get(IMAGENET_1k_URL).text.strip().split('\n') - -model = create_model('resnet50', pretrained=True) - -transform = create_transform( - **resolve_data_config({}, model=model) -) -model.eval() - -def predict_fn(img): - img = img.convert('RGB') - img = transform(img).unsqueeze(0) - - 
with torch.no_grad(): - out = model(img) - - probabilites = torch.nn.functional.softmax(out[0], dim=0) - - values, indices = torch.topk(probabilites, k=5) - - return {LABELS[i]: v.item() for i, v in zip(indices, values)} - -gr.Interface(predict_fn, gr.inputs.Image(type='pil'), outputs='label').launch() diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Corel IGrafx Origins Pro 17.5.3.3 Download.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Corel IGrafx Origins Pro 17.5.3.3 Download.md deleted file mode 100644 index 145186a99131e4af1afc8aa6fabc86a1dd29a152..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Corel IGrafx Origins Pro 17.5.3.3 Download.md +++ /dev/null @@ -1,34 +0,0 @@ -
    -

    Corel iGrafx Origins Pro 17.5.3.3: A Powerful Web-Based Solution for Business Process Management

    - -

If you are looking for software that can help you access and manage your business or company through web-based solutions, you might want to check out Corel iGrafx Origins Pro 17.5.3.3. It is one of the best BPM (Business Process Management) tools for unifying your business and maximizing your performance.

    -

    Corel iGrafx Origins Pro 17.5.3.3 Download


    DOWNLOADhttps://urlcod.com/2uIaCH



    - -

Corel iGrafx Origins Pro 17.5.3.3 is the latest version of iGrafx, a product that has been providing business management solutions for over 25 years. This version comes with a number of new features and enhancements that make it more user-friendly, scalable, and secure.

    - -

    Some of the key features of Corel iGrafx Origins Pro 17.5.3.3 are:

    - -
      -
    • A single web-based platform that allows you to collaborate, model, analyze, and optimize your business processes.
    • -
    • A comprehensive repository that stores and manages all your process assets, such as diagrams, documents, data, risks, controls, and performance indicators.
    • -
    • A powerful analytics engine that helps you measure and improve your process performance, identify bottlenecks, and simulate scenarios.
    • -
    • A robust governance framework that ensures compliance, quality, and auditability of your processes.
    • -
• A flexible deployment option that lets you choose between cloud and on-premise hosting.
    • -
    - -

    With Corel iGrafx Origins Pro 17.5.3.3, you can easily create and share process maps, workflows, diagrams, and reports that capture the essence of your business. You can also leverage the web-based solution to access and manage your processes from anywhere, anytime, and on any device.

    - -

    If you want to learn more about Corel iGrafx Origins Pro 17.5.3.3, you can download a free trial from the official website or contact the sales team for a demo. You can also read some of the testimonials from satisfied customers who have used iGrafx to transform their businesses.

    - -

Corel iGrafx Origins Pro 17.5.3.3 is a software solution that can help you take your business process management to the next level. Whether you are a small business owner or a large enterprise leader, you can benefit from its web-based platform to unify, optimize, and govern your processes.

    -

    - -

    One of the main benefits of Corel iGrafx Origins Pro 17.5.3.3 is that it allows you to collaborate with your team members, stakeholders, and customers across the entire process lifecycle. You can easily capture feedback, suggestions, and approvals from anyone involved in your processes, and keep them updated on the progress and outcomes.

    - -

    Another benefit of Corel iGrafx Origins Pro 17.5.3.3 is that it supports various standards and frameworks for process modeling and documentation, such as BPMN, DMN, Lean Six Sigma, ISO 9000, and more. You can also customize your own templates and notations to suit your specific needs and preferences.

    - -

    A third benefit of Corel iGrafx Origins Pro 17.5.3.3 is that it integrates with other systems and applications that you use in your business, such as ERP, CRM, BI, and more. You can easily import and export data, synchronize information, and automate tasks between different platforms.

    - -

Corel iGrafx Origins Pro 17.5.3.3 is a software solution that can help you streamline your processes, improve your efficiency, reduce your costs, enhance your quality, and increase your customer satisfaction. It can help you achieve your business goals and objectives.

    e93f5a0c3f
    -
    -
    \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Photoshop CC 2019 Crack Amtlib Patch And MacOS !EXCLUSIVE!.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Photoshop CC 2019 Crack Amtlib Patch And MacOS !EXCLUSIVE!.md deleted file mode 100644 index 626d4024297617ccca298bd093705909423c3f39..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Photoshop CC 2019 Crack Amtlib Patch And MacOS !EXCLUSIVE!.md +++ /dev/null @@ -1,73 +0,0 @@ - -

    Photoshop CC 2019 Crack: Why You Should Avoid It

    -

Photoshop is one of the most popular and powerful photo editing applications in the world. It offers a wide range of tools and features that can help you create stunning images and graphics. However, Photoshop is not cheap, and it requires a subscription to access its full functionality.

    -

    Photoshop CC 2019 Crack amtlib patch and MacOS


    Download File ››››› https://urlcod.com/2uIcol



    -

Some people may be tempted to use a cracked version of Photoshop, such as Photoshop CC 2019 Crack, to avoid paying for the software. However, this is a bad idea for many reasons. In this article, we will explain what Photoshop CC 2019 Crack is, what the risks of using it are, and what the best alternatives to Photoshop CC 2019 are.

    -

    What is Photoshop CC 2019 Crack?

    -

    Photoshop CC 2019 Crack is a modified version of Photoshop that bypasses its license verification and protection mechanisms. It allows users to access all the features of Photoshop without paying for a subscription or activating the software.

    -

    "Crack" is a type of file that changes the code of the software. Thanks to these changes, hackers can access the full functionality of the program without having to pay for it. However, cracking also damages the software's security and stability, making it vulnerable to errors and attacks.

    -

    Downloading and using Photoshop CC 2019 Crack is a direct violation of US law and illegal in most countries in the world. It also violates Adobe's terms of service and intellectual property rights.

    -

    What are the risks of using Photoshop CC 2019 Crack?

    -

    Virus infection

    -

    Downloading Photoshop CC 2019 Crack is always associated with a high risk of receiving all kinds of viruses. They can cause irreparable damage to your PC or even steal your personal data, especially your credit card information.

    -

    Cracked software often comes with hidden malware that can infect your system and compromise your security. Some malware can spy on your online activity, record your keystrokes, access your files, or hijack your browser. Others can encrypt your data and demand ransom for its release.

    -

    You can never be sure that the crack you download is safe and clean. Even if you use antivirus software or scan the file before opening it, you may not detect all the threats that may be lurking inside.

    -

    -

    No updates

    -

    When you download Photoshop CC 2019 Crack, you cannot receive any important updates. Updates are essential for keeping your software up to date with the latest features, bug fixes, and security patches.

    -

Without updates, you will miss out on new tools and improvements that Adobe releases regularly for its products. You will also expose yourself to potential vulnerabilities and compatibility issues.

    No technical support

    -

    Another drawback of using Photoshop CC 2019 Crack is that you will not have access to any technical support from Adobe. This means that if you encounter any problems or issues with the software, you will have to rely on online forums or unofficial sources for help.

    -

    Technical support is important for any software, especially for complex and advanced ones like Photoshop. You may need assistance with installation, activation, updates, troubleshooting, or compatibility. Without technical support, you may waste a lot of time and energy trying to fix the problems yourself or finding reliable solutions online.

    -

    Moreover, you will not be able to benefit from any training or tutorials that Adobe offers for its customers. Adobe has a wealth of resources and guides that can help you learn and master Photoshop, from beginner to expert level. You will miss out on these valuable opportunities to improve your skills and knowledge.

    -

    System errors and failures

    -

    Using Photoshop CC 2019 Crack can also cause system errors and failures that can affect your work and productivity. Cracked software is often unstable and incompatible with other programs or devices. It may crash, freeze, or slow down your computer.

    -

    Cracked software can also corrupt your files or damage your hardware. You may lose your work or data, or even ruin your computer. You may also face compatibility issues with other Adobe products or third-party plugins that require a valid license to work properly.

    -

    Using cracked software can also affect your reputation and credibility as a content writer. If you use Photoshop CC 2019 Crack to create images or graphics for your clients or projects, you may face legal consequences or lose their trust. You may also compromise the quality and originality of your work, as cracked software may add watermarks, logos, or other unwanted elements to your output.

    -

    What are the best alternatives to Photoshop CC 2019?

    -

If you are looking for a better and safer way to edit your photos and graphics, you should consider some of the best alternatives to Photoshop CC 2019. These alternatives offer features similar to or even better than Photoshop's, and they are available for free or at a lower cost. Some of these alternatives are:

    -

    Affinity Photo

    -

    Affinity Photo is one of the most popular and powerful alternatives to Photoshop. It is a professional photo editing software that works on Windows, Mac, and iPad. It offers a similar interface and features as Photoshop, such as layers, masks, filters, adjustments, brushes, vector tools, and more.

    -

    Affinity Photo also has some unique and advanced features that Photoshop does not have, such as HDR merging, panorama stitching, focus stacking, frequency separation, live filters, and more. Affinity Photo is also faster and more responsive than Photoshop, as it uses GPU acceleration and multi-core processing.

    -

    Affinity Photo costs $49.99 for Windows and Mac versions, and $19.99 for iPad version. It is a one-time purchase with no subscription or hidden fees. It also offers a 10-day free trial for new users.

    -

    GIMP

    -

    GIMP (GNU Image Manipulation Program) is the best free alternative to Photoshop. It is an open-source photo editing software that works on Windows, Mac, Linux, and other platforms. It has a professional and customizable interface that can mimic Photoshop's layout.

    -

    GIMP offers a wide range of tools and features that can rival Photoshop's capabilities, such as layers, masks, filters, adjustments, brushes, paths, text tools, color management, and more. GIMP also supports many file formats, including PSD files.

    -

    GIMP is completely free to download and use. It also has a large community of developers and users who create and share plugins, scripts, tutorials, and resources for GIMP.

    Pixlr

    -

    Pixlr is a cloud-based photo editing software that works on any browser and device. It offers two versions: Pixlr X and Pixlr E. Pixlr X is a simple and easy-to-use editor that lets you crop, resize, rotate, adjust, filter, and add text and stickers to your photos. Pixlr E is a more advanced editor that gives you more control over layers, masks, brushes, selections, and adjustments.

    -

    Pixlr also has a mobile app called Stories by Pixlr that lets you create stunning stories for social media with templates, fonts, stickers, and animations. You can also access millions of stock photos, graphics, and fonts from Pixlr's library.

    -

    Pixlr has a free version that gives you limited access to Pixlr X and E. You can also upgrade to a Plus or Premium plan that gives you ad-free access, unlimited saves, AI tools, live filters, exclusive video tutorials, and more. The Plus plan costs $1.99 per month and the Premium plan costs $7.99 per month. You can also try the Premium plan for free for 30 days.

    -

    SketchUp

    -

    SketchUp is a 3D modeling software that helps you create and visualize anything from buildings to furniture to landscapes. It is used by professionals in architecture, engineering, construction, interior design, and more. It has a user-friendly interface and intuitive tools that make 3D modeling easy and fun.

    -

    SketchUp also has a web-based version called SketchUp for Web that lets you access your projects from any browser and device. You can also use SketchUp for iPad to create 3D models on the go. SketchUp integrates with various other applications and platforms, such as Trimble Connect, PreDesign, XR Viewer, Extension Warehouse, Scan Essentials, V-Ray, and more.

    -

    SketchUp has different plans and pricing depending on your needs. The Free plan gives you basic features and 10 GB of cloud storage. The Pro plan gives you professional features and unlimited cloud storage for $299 per year. The Studio plan gives you advanced features and additional integrations for $699 per year. You can also get a Team plan that lets you manage your team members and collaborate for $12.99 per month per seat.

    -

    Paint 3D

    -

Paint 3D is software that enables you to create three-dimensional objects using an intuitive user interface. It allows you to create digital models based on an existing object and make changes based on your preferences. It comes with various templates and some pre-made models.

    -

    Paint 3D also lets you transform drawings from a 2D sketch to a 3D model effortlessly. You can use different tools to add color, texture, lighting, and effects to your 3D objects. You can also use the 3D doodle tool to create organic shapes with ease.

    -

    Paint 3D is a built-in feature of Windows 10 and is free to use. You can also share your creations with others via social media platforms or export them as printable files. Paint 3D integrates with Remix 3D seamlessly so that you can import, edit, and share your artworks on the web. Remix 3D also helps you get inspiration from other artists' designs.

    Conclusion

    -

    Photoshop CC 2019 Crack is a risky and illegal way to use Photoshop without paying for it. It can expose you to various threats, such as virus infection, no updates, legal issues, no technical support, and system errors and failures. It can also damage your reputation and credibility as a content writer.

    -

Instead of using Photoshop CC 2019 Crack, you should consider some of the best alternatives to Photoshop CC 2019 that are available for free or at a lower cost. These alternatives offer features similar to or even better than Photoshop's, and they are easier to use and learn. Some of these alternatives are Affinity Photo, GIMP, Pixlr, SketchUp, and Paint 3D.

    -

    By using these alternatives, you can create stunning images and graphics without compromising your security, quality, or ethics. You can also save money and time, and enjoy more flexibility and creativity. You can also access more resources and support from the developers and communities of these alternatives.

    -

We hope this article has helped you understand why you should avoid Photoshop CC 2019 Crack and what the best alternatives to Photoshop CC 2019 are. If you have any questions or feedback, please let us know in the comments below. Thank you for reading.

    -

    FAQs

    -

    Q: Is Photoshop CC 2019 Crack safe to use?

    -

    A: No, Photoshop CC 2019 Crack is not safe to use. It can infect your system with malware, cause errors and failures, violate Adobe's terms of service and intellectual property rights, and expose you to legal consequences.

    -

    Q: How can I get Photoshop for free or cheap?

    -

A: You can get Photoshop for free or cheap by using some of the best alternatives to Photoshop CC 2019 that we have mentioned in this article. These alternatives offer features similar to or even better than Photoshop's, and they are available for free or at a lower cost.

    -

    Q: Can I use Photoshop CC 2019 Crack for commercial purposes?

    -

    A: No, you cannot use Photoshop CC 2019 Crack for commercial purposes. It is illegal and unethical to use cracked software for any purpose, especially for commercial ones. You may face legal consequences or lose your clients' trust if you use Photoshop CC 2019 Crack for commercial purposes.

    -

    Q: What are the advantages of using Affinity Photo over Photoshop?

    -

    A: Some of the advantages of using Affinity Photo over Photoshop are:

    -
      -
    • Affinity Photo is cheaper than Photoshop. It costs $49.99 for Windows and Mac versions, and $19.99 for iPad version. It is a one-time purchase with no subscription or hidden fees.
    • -
    • Affinity Photo is faster and more responsive than Photoshop. It uses GPU acceleration and multi-core processing to deliver smooth performance.
    • -
    • Affinity Photo has some unique and advanced features that Photoshop does not have, such as HDR merging, panorama stitching, focus stacking, frequency separation, live filters, and more.
    • -
    -

    Q: What are the disadvantages of using GIMP over Photoshop?

    -

    A: Some of the disadvantages of using GIMP over Photoshop are:

    -
      -
    • GIMP has a steeper learning curve than Photoshop. It may take some time and effort to get used to its interface and features.
    • -
    • GIMP does not have some of the features that Photoshop has, such as content-aware fill, smart objects, adjustment layers, layer styles, and more.
    • -
    • GIMP does not have as many plugins and extensions as Photoshop. It may not be compatible with some of the plugins and extensions that you use with Photoshop.
    • -

    b2dd77e56b
    -
    -
    \ No newline at end of file diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/modeling/losses/embed.py b/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/modeling/losses/embed.py deleted file mode 100644 index 1e3a069763ca6fab0acc7c455b416b9634ceaedf..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/projects/DensePose/densepose/modeling/losses/embed.py +++ /dev/null @@ -1,119 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved - -from typing import Any, Dict, List -import torch -from torch import nn -from torch.nn import functional as F - -from detectron2.config import CfgNode -from detectron2.structures import Instances - -from densepose.data.meshes.catalog import MeshCatalog -from densepose.modeling.cse.utils import normalize_embeddings, squared_euclidean_distance_matrix - -from .embed_utils import PackedCseAnnotations -from .utils import BilinearInterpolationHelper - - -class EmbeddingLoss: - """ - Computes losses for estimated embeddings given annotated vertices. - Instances in a minibatch that correspond to the same mesh are grouped - together. For each group, loss is computed as cross-entropy for - unnormalized scores given ground truth mesh vertex ids. - Scores are based on squared distances between estimated vertex embeddings - and mesh vertex embeddings. - """ - - def __init__(self, cfg: CfgNode): - """ - Initialize embedding loss from config - """ - self.embdist_gauss_sigma = cfg.MODEL.ROI_DENSEPOSE_HEAD.CSE.EMBEDDING_DIST_GAUSS_SIGMA - - def __call__( - self, - proposals_with_gt: List[Instances], - densepose_predictor_outputs: Any, - packed_annotations: PackedCseAnnotations, - interpolator: BilinearInterpolationHelper, - embedder: nn.Module, - ) -> Dict[int, torch.Tensor]: - """ - Produces losses for estimated embeddings given annotated vertices. - Embeddings for all the vertices of a mesh are computed by the embedder. - Embeddings for observed pixels are estimated by a predictor. - Losses are computed as cross-entropy for squared distances between - observed vertex embeddings and all mesh vertex embeddings given - ground truth vertex IDs. 
- - Args: - proposals_with_gt (list of Instances): detections with associated - ground truth data; each item corresponds to instances detected - on 1 image; the number of items corresponds to the number of - images in a batch - densepose_predictor_outputs: an object of a dataclass that contains predictor - outputs with estimated values; assumed to have the following attributes: - * embedding - embedding estimates, tensor of shape [N, D, S, S], where - N = number of instances (= sum N_i, where N_i is the number of - instances on image i) - D = embedding space dimensionality (MODEL.ROI_DENSEPOSE_HEAD.CSE.EMBED_SIZE) - S = output size (width and height) - packed_annotations (PackedCseAnnotations): contains various data useful - for loss computation, each data is packed into a single tensor - interpolator (BilinearInterpolationHelper): bilinear interpolation helper - embedder (nn.Module): module that computes vertex embeddings for different meshes - Return: - dict(int -> tensor): losses for different mesh IDs - """ - losses = {} - for mesh_id_tensor in packed_annotations.vertex_mesh_ids_gt.unique(): - mesh_id = mesh_id_tensor.item() - mesh_name = MeshCatalog.get_mesh_name(mesh_id) - # valid points are those that fall into estimated bbox - # and correspond to the current mesh - j_valid = interpolator.j_valid * ( # pyre-ignore[16] - packed_annotations.vertex_mesh_ids_gt == mesh_id - ) - if not torch.any(j_valid): - continue - # extract estimated embeddings for valid points - # -> tensor [J, D] - vertex_embeddings_i = normalize_embeddings( - interpolator.extract_at_points( - densepose_predictor_outputs.embedding, - slice_fine_segm=slice(None), - w_ylo_xlo=interpolator.w_ylo_xlo[:, None], # pyre-ignore[16] - w_ylo_xhi=interpolator.w_ylo_xhi[:, None], # pyre-ignore[16] - w_yhi_xlo=interpolator.w_yhi_xlo[:, None], # pyre-ignore[16] - w_yhi_xhi=interpolator.w_yhi_xhi[:, None], # pyre-ignore[16] - )[j_valid, :] - ) - # extract vertex ids for valid points - # -> tensor [J] - vertex_indices_i = packed_annotations.vertex_ids_gt[j_valid] - # embeddings for all mesh vertices - # -> tensor [K, D] - mesh_vertex_embeddings = embedder(mesh_name) - # unnormalized scores for valid points - # -> tensor [J, K] - scores = squared_euclidean_distance_matrix( - vertex_embeddings_i, mesh_vertex_embeddings - ) / (-self.embdist_gauss_sigma) - losses[mesh_name] = F.cross_entropy(scores, vertex_indices_i, ignore_index=-1) - - for mesh_name in embedder.mesh_names: - if mesh_name not in losses: - losses[mesh_name] = self.fake_value( - densepose_predictor_outputs, embedder, mesh_name - ) - return losses - - def fake_values(self, densepose_predictor_outputs: Any, embedder: nn.Module): - losses = {} - for mesh_name in embedder.mesh_names: - losses[mesh_name] = self.fake_value(densepose_predictor_outputs, embedder, mesh_name) - return losses - - def fake_value(self, densepose_predictor_outputs: Any, embedder: nn.Module, mesh_name: str): - return densepose_predictor_outputs.embedding.sum() * 0 + embedder(mesh_name).sum() * 0 diff --git a/spaces/nikolaiii/CompVis-stable-diffusion-v1-4/app.py b/spaces/nikolaiii/CompVis-stable-diffusion-v1-4/app.py deleted file mode 100644 index e1e1025c8f06010197c50917ac9dd1ddeaf7e5aa..0000000000000000000000000000000000000000 --- a/spaces/nikolaiii/CompVis-stable-diffusion-v1-4/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/CompVis/stable-diffusion-v1-4").launch() \ No newline at end of file diff --git a/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in 
Boston Generative AI Use Cases in Healthcare _files/plugin_007.js b/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/plugin_007.js deleted file mode 100644 index 6f8c3bd080073a152f8b0951d3be87dfd3ea19da..0000000000000000000000000000000000000000 --- a/spaces/nmitchko/AI-in-Healthcare/Developer Meetup in Boston Generative AI Use Cases in Healthcare _files/plugin_007.js +++ /dev/null @@ -1,208 +0,0 @@ - -CKEDITOR.config.mentions = [ { - feed: function( options, callback ) { - var xhr = new XMLHttpRequest(); - - xhr.onreadystatechange = function() { - if ( xhr.readyState == 4 ) { - if ( xhr.status == 200 ) { - callback( JSON.parse( this.responseText ) ); - } else { - callback( [] ); - } - } - } - - xhr.open( 'GET', '/ajax/searchmember?search=' + encodeURIComponent( options.query ) ); - xhr.send(); - }, -itemTemplate: '
  • ' + -'{fullname}
    ' + -'@{username}' + -'
  • ', - outputTemplate: '@{username} ', - minChars: 1 -} ]; - - - -( function() { - - CKEDITOR._.mentions = { - cache: {} - }; - - var MARKER = '@', - MIN_CHARS = 0, - cache = CKEDITOR._.mentions.cache; - - CKEDITOR.plugins.add( 'member', { - requires: 'autocomplete,textmatch,ajax', - instances: [], - init: function( editor ) { - var self = this; - - editor.on( 'instanceReady', function() { - CKEDITOR.tools.array.forEach( editor.config.mentions || [], function( config ) { - self.instances.push( new Mentions( editor, config ) ); - } ); - } ); - } - } ); - - function Mentions( editor, config ) { - var feed = config.feed; - this.caseSensitive = config.caseSensitive; - this.marker = config.hasOwnProperty( 'marker' ) ? config.marker : MARKER; - this.minChars = config.minChars !== null && config.minChars !== undefined ? config.minChars : MIN_CHARS; - this.pattern = config.pattern || createPattern( this.marker, this.minChars ); - this.cache = config.cache !== undefined ? config.cache : true; - this.throttle = config.throttle !== undefined ? config.throttle : 200; - this._autocomplete = new CKEDITOR.plugins.autocomplete( editor, { - textTestCallback: getTextTestCallback( this.marker, this.minChars, this.pattern ), - dataCallback: getDataCallback( feed, this ), - itemTemplate: config.itemTemplate, - outputTemplate: config.outputTemplate, - throttle: this.throttle, - itemsLimit: config.itemsLimit - } ); - } - - Mentions.prototype = { - - /** - * Destroys the mentions instance. - * - * The view element and event listeners will be removed from the DOM. - */ - destroy: function() { - this._autocomplete.destroy(); - } - }; - - function createPattern( marker, minChars ) { - minChars=0; - // Match also diacritic characters (#2491). - var pattern = '\\' + marker + '[_a-zA-Z0-9À-žА-Яа-я.]'; - - if ( minChars ) { - pattern += '{' + minChars + ',}'; - } else { - pattern += '*'; - } - - pattern += '$'; - return new RegExp( pattern ); - } - - function getTextTestCallback( marker, minChars, pattern ) { - return function( range ) { - if ( !range.collapsed ) { - return null; - } - - return CKEDITOR.plugins.textMatch.match( range, matchCallback ); - }; - - function matchCallback( text, offset ) { - var match = text.slice( 0, offset ) - .match( pattern ); - - if ( !match ) { - return null; - } - - // Do not proceed if a query is a part of word. - var prevChar = text[ match.index - 1]; - if ( prevChar !== undefined && !prevChar.match( /\s+/ ) ) { - return null; - } - - return { - start: match.index, - end: offset - }; - } - } - - function getDataCallback( feed, mentions ) { - return function( matchInfo, callback ) { - var query = matchInfo.query; - - // We are removing marker here to give clean query result for the endpoint callback. 
- if ( mentions.marker ) { - query = query.substring( mentions.marker.length ); - } - - if ( CKEDITOR.tools.array.isArray( feed ) ) { - createArrayFeed(); - } else if ( typeof feed === 'string' ) { - createUrlFeed(); - } else { - feed( { - query: query, - marker: mentions.marker - }, resolveCallbackData ); - } - - function createArrayFeed() { - var data = indexArrayFeed( feed ).filter( function( item ) { - var itemName = item.name; - - if ( !mentions.caseSensitive ) { - itemName = itemName.toLowerCase(); - query = query.toLowerCase(); - } - - return itemName.indexOf( query ) === 0; - } ); - - resolveCallbackData( data ); - } - - function indexArrayFeed( feed ) { - var index = 1; - return CKEDITOR.tools.array.reduce( feed, function( current, name ) { - current.push( { name: name, id: index++ } ); - return current; - }, [] ); - } - - function createUrlFeed() { - var encodedUrl = new CKEDITOR.template( feed ) - .output( { encodedQuery: encodeURIComponent( query ) } ); - - if ( mentions.cache && cache[ encodedUrl ] ) { - return resolveCallbackData( cache[ encodedUrl ] ); - } - - CKEDITOR.ajax.load( encodedUrl, function( data ) { - var items = JSON.parse( data ); - - // Cache URL responses for performance improvement (#1969). - if ( mentions.cache && items !== null ) { - cache[ encodedUrl ] = items; - } - - resolveCallbackData( items ); - } ); - } - - function resolveCallbackData( data ) { - if ( !data ) { - return; - } - - // We don't want to change item data, so lets create new one. - var newData = CKEDITOR.tools.array.map( data, function( item ) { - var name = mentions.marker + item.name; - return CKEDITOR.tools.object.merge( item, { name: name } ); - } ); - - callback( newData ); - } - }; - } - - CKEDITOR.plugins.mentions = Mentions; -} )(jQuery); diff --git a/spaces/open-source-metrics/repository-statistics/index.js b/spaces/open-source-metrics/repository-statistics/index.js deleted file mode 100644 index b4965b8edc1fe8e98278d2de3ac3db3bfb45f050..0000000000000000000000000000000000000000 --- a/spaces/open-source-metrics/repository-statistics/index.js +++ /dev/null @@ -1,379 +0,0 @@ -let dark = document.location.search.includes('dark-theme=true'); - -if (dark) - document.body.classList.add('dark-theme'); - - -var COLORS = dark ? 
- ['#FF0000', '#00FF00', '#0000FF', '#FF00FF', '#FFFF00', '#0000FF', '#F090F0', '#90F0F0', '#F0F090'] : - ['#CC0000', '#00CC00', '#0000CC', '#CC00CC', '#CCCC00', '#0000CC', '#C060C0', '#60C0C0', '#C0C060'] - -const load = () => { - const l0 = document.createElement('div') - const l1 = document.createElement('div') - const l2 = document.createElement('div') - l0.classList.add('lds-ripple') - - l0.appendChild(l1) - l0.appendChild(l2) - return l0 -} - -const getCheckedOptions = () => { - const options = Array.from(document.querySelectorAll('.option-div')) - .map(e => Array.from(e.children) - .filter(e => e.nodeName == 'DIV')) - .filter(e => e.length) - .flat() - .map(e => e.id) - .filter(e => document.querySelector(`#${e}-checkbox`).checked) - - const optionsDict = {} - for (let option of options) { - const key = option.split('-option-')[0] - const value = option.split('-option-')[1] - - if (key in optionsDict) - optionsDict[key].push(value) - else - optionsDict[key] = [value] - } - - return optionsDict; -} - -const addOption = (category, optionName) => { - /* Options for the issue div */ - const issueDiv = document.getElementById(`${category}Div`); - const div = document.createElement('div') - - let found = false; - let optionNumber = 0; - while (!found && ++optionNumber < 100) { - let previousOption = document.getElementById(`${category}-option-${optionNumber}`); - found = previousOption === null; - } - - div.id = `${category}-option-${optionNumber}`; - issueDiv.appendChild(div); - - const checkBox = document.createElement('input'); - checkBox.type = 'checkbox' - checkBox.id = `${category}-option-${optionNumber}-checkbox` - - const checkBoxLabel = document.createElement('label'); - const labelSpan = document.createElement('span') - labelSpan.textContent = optionName; - checkBoxLabel.appendChild(checkBox) - checkBoxLabel.appendChild(labelSpan) - div.appendChild(checkBoxLabel) - - return optionNumber -} - -let charts = []; - -const createButton = (title, libraries, methods) => { - const button = document.createElement('button') - button.textContent = title; - button.onclick = async () => { - document.getElementById('pip-graph').innerHTML = '' - document.getElementById('star-graph').innerHTML = '' - document.getElementById('issue-graph').innerHTML = '' - const e = load() - document.body.appendChild(e) - const selectedInternalLibraries = libraries.internal.filter(e => document.querySelector(`#${e}Checkbox`).checked); - const selectedExternalLibraries = libraries.external.filter(e => document.querySelector(`#${e}Checkbox`).checked); - const selectedLibraries = selectedInternalLibraries.concat(selectedExternalLibraries); - - const relevantOptions = getCheckedOptions(); - - if (charts.length !== 0) { - for (const chart of charts) { - chart.destroy() - } - } - for (const method of methods()) { - charts.push(await method(selectedLibraries, relevantOptions)) - } - document.body.removeChild(e) - }; - - return button; -} - -const initialize = async () => { - const inferResponse = await fetch(`initialize`); - console.log(inferResponse); - const inferJson = await inferResponse.json(); - console.log(inferJson); - - const warnings = document.getElementById("warnings") - const librarySelector = document.getElementById('library-selector'); - const graphSelector = document.getElementById('graph-selector'); - const selectorSubmit = document.getElementById('selector-submit'); - - const introSpan = document.createElement("h3") - introSpan.textContent = "Select libraries to display" - 
librarySelector.appendChild(introSpan); - - const graphSpan = document.createElement("h3") - graphSpan.textContent = "Select graphs to display" - graphSelector.appendChild(graphSpan); - - if (inferJson.warnings.length > 0) { - for (const warning of inferJson.warnings) { - const div = document.createElement('div'); - div.classList.add('warning-div') - - const labelSpan = document.createElement('span'); - labelSpan.textContent = `Warning: ${warning}`; - - div.appendChild(labelSpan); - warnings.appendChild(div); - } - } - - for (const element of inferJson.internal) { - const div = document.createElement('div'); - const checkBox = document.createElement('input'); - checkBox.type = 'checkbox' - checkBox.id = `${element}Checkbox`; - - const checkBoxLabel = document.createElement('label'); - const labelSpan = document.createElement('span') - - labelSpan.textContent = element.charAt(0).toUpperCase() + element.slice(1) - checkBoxLabel.appendChild(checkBox) - checkBoxLabel.appendChild(labelSpan) - - div.appendChild(checkBoxLabel) - librarySelector.appendChild(div) - } - - const externalLibs = document.createElement("h3") - externalLibs.textContent = "External Libraries" - librarySelector.appendChild(externalLibs); - - for (const element of inferJson.external) { - const div = document.createElement('div'); - const checkBox = document.createElement('input'); - checkBox.type = 'checkbox' - checkBox.id = `${element}Checkbox`; - - const checkBoxLabel = document.createElement('label'); - const labelSpan = document.createElement('span') - - labelSpan.textContent = element.charAt(0).toUpperCase() + element.slice(1) - checkBoxLabel.appendChild(checkBox) - checkBoxLabel.appendChild(labelSpan) - - div.appendChild(checkBoxLabel) - librarySelector.appendChild(div) - } - - for (const element of ['pip', 'stars', 'issue']) { - const div = document.createElement('div'); - div.classList.add('option-div') - div.id = `${element}Div`; - - const checkBox = document.createElement('input'); - checkBox.type = 'checkbox' - checkBox.id = `${element}CheckboxGraph`; - - const checkBoxLabel = document.createElement('label'); - const labelSpan = document.createElement('span') - labelSpan.textContent = element.charAt(0).toUpperCase() + element.slice(1) - checkBoxLabel.appendChild(checkBox) - checkBoxLabel.appendChild(labelSpan) - - div.appendChild(checkBoxLabel) - graphSelector.appendChild(div) - } - - addOption('pip', "Cumulated"); - addOption('pip', "Week over week"); - - addOption('issue', "Exclude org members"); - addOption('issue', "Week over week"); - - addOption('stars', "Week over week"); - - const fetchButton = createButton('Fetch', inferJson, () => { - const graphNames = ['pip', 'stars', 'issue'].filter(e => document.querySelector(`#${e}CheckboxGraph`).checked); - const graphs = [] - - if (graphNames.includes('pip')) - graphs.push(retrievePipInstalls) - - if (graphNames.includes('stars')) - graphs.push(retrieveStars) - - if (graphNames.includes('issue')) - graphs.push(retrieveIssues) - - return graphs - }) - selectorSubmit.appendChild(fetchButton); -}; - -const retrievePipInstalls = async (libraryNames, options) => { - const relevantOptions = options['pip'] - const inferResponse = await fetch(`retrievePipInstalls?input=${libraryNames}&options=${relevantOptions}`); - const inferJson = await inferResponse.json(); - const colors = [...COLORS]; - - const labels = Array.from(inferJson['day']).map(e => new Date(e)) - const datasets = []; - for (const element in inferJson) { - if (element === 'day') - continue - - const color = 
colors.pop() - datasets.push({ - label: element, - data: inferJson[element], - backgroundColor: color, - borderColor: color, - tension: 0.01, - pointRadius: 1, - borderWidth: 2, - fill: false - }) - } - - const ctx = document.getElementById('pip-graph'); - - const myChart = new Chart(ctx, { - type: 'line', - data: {labels, datasets}, - options: { - scales: { - y: { - beginAtZero: true - }, - x: { - type: 'time', - } - }, - plugins: { - title: { - display: true, - text: 'Pip installs' - } - } - } - }); - return myChart; -}; - -const retrieveStars = async (libraryNames, options) => { - const relevantOptions = options['stars'] - const inferResponse = await fetch(`retrieveStars?input=${libraryNames}&options=${relevantOptions}`); - const inferJson = await inferResponse.json(); - const colors = [...COLORS]; - - const labels = Array.from(inferJson['day']).map(e => new Date(e)) - const datasets = []; - for (const element in inferJson) { - if (element === 'day') - continue - - const color = colors.pop() - datasets.push({ - label: element, - data: inferJson[element], - backgroundColor: color, - borderColor: color, - tension: 0.01, - pointRadius: 1, - borderWidth: 2, - fill: false - }) - } - - const ctx = document.getElementById('star-graph'); - - const myChart = new Chart(ctx, { - title: "Stars", - type: 'line', - data: {labels, datasets}, - options: { - scales: { - y: { - beginAtZero: true - }, - x: { - type: 'time', - } - }, - plugins: { - title: { - display: true, - text: 'Number of stargazers' - } - } - } - }); - return myChart; -}; - -const retrieveIssues = async (libraryNames, options) => { - const relevantOptions = options['issue'] - const inferResponse = await fetch(`retrieveIssues?input=${libraryNames}&options=${relevantOptions}`); - const inferJson = await inferResponse.json(); - const colors = [...COLORS]; - - const labels = Array.from(inferJson['day']).map(e => new Date(e)) - const datasets = []; - for (const element in inferJson) { - if (element === 'day') - continue - - const color = colors.pop() - datasets.push({ - label: element, - data: inferJson[element], - backgroundColor: color, - borderColor: color, - tension: 0.01, - pointRadius: 1, - borderWidth: 2, - fill: false - }) - } - - const ctx = document.getElementById('issue-graph'); - - const myChart = new Chart(ctx, { - title: "Issues", - type: 'line', - data: {labels, datasets}, - options: { - scales: { - y: { - beginAtZero: true - }, - x: { - type: 'time', - } - }, - plugins: { - title: { - display: true, - text: 'Cumulated number of issues, PRs, and comments' - } - } - } - }); - return myChart; -}; - -( - async () => { - const e = load() - document.body.appendChild(e) - await initialize() - document.body.removeChild(e) - } -)(); \ No newline at end of file diff --git a/spaces/osanseviero/gpt2_for_music/app.py b/spaces/osanseviero/gpt2_for_music/app.py deleted file mode 100644 index 3e515edc5ed1a108a2520dfb223ec1bd4cc0cf42..0000000000000000000000000000000000000000 --- a/spaces/osanseviero/gpt2_for_music/app.py +++ /dev/null @@ -1,137 +0,0 @@ -import gradio as gr -import note_seq -import numpy as np -from transformers import AutoTokenizer, AutoModelForCausalLM - -tokenizer = AutoTokenizer.from_pretrained("TristanBehrens/js-fakes-4bars") -model = AutoModelForCausalLM.from_pretrained("TristanBehrens/js-fakes-4bars") - -NOTE_LENGTH_16TH_120BPM = 0.25 * 60 / 120 -BAR_LENGTH_120BPM = 4.0 * 60 / 120 -SAMPLE_RATE=44100 - -def token_sequence_to_note_sequence(token_sequence, use_program=True, use_drums=True, instrument_mapper=None, 
only_piano=False): - if isinstance(token_sequence, str): - token_sequence = token_sequence.split() - note_sequence = empty_note_sequence() - - # Render all notes. - current_program = 1 - current_is_drum = False - current_instrument = 0 - track_count = 0 - for token_index, token in enumerate(token_sequence): - - if token == "PIECE_START": - pass - elif token == "PIECE_END": - print("The end.") - break - elif token == "TRACK_START": - current_bar_index = 0 - track_count += 1 - pass - elif token == "TRACK_END": - pass - elif token == "KEYS_START": - pass - elif token == "KEYS_END": - pass - elif token.startswith("KEY="): - pass - elif token.startswith("INST"): - instrument = token.split("=")[-1] - if instrument != "DRUMS" and use_program: - if instrument_mapper is not None: - if instrument in instrument_mapper: - instrument = instrument_mapper[instrument] - current_program = int(instrument) - current_instrument = track_count - current_is_drum = False - if instrument == "DRUMS" and use_drums: - current_instrument = 0 - current_program = 0 - current_is_drum = True - elif token == "BAR_START": - current_time = current_bar_index * BAR_LENGTH_120BPM - current_notes = {} - elif token == "BAR_END": - current_bar_index += 1 - pass - elif token.startswith("NOTE_ON"): - pitch = int(token.split("=")[-1]) - note = note_sequence.notes.add() - note.start_time = current_time - note.end_time = current_time + 4 * NOTE_LENGTH_16TH_120BPM - note.pitch = pitch - note.instrument = current_instrument - note.program = current_program - note.velocity = 80 - note.is_drum = current_is_drum - current_notes[pitch] = note - elif token.startswith("NOTE_OFF"): - pitch = int(token.split("=")[-1]) - if pitch in current_notes: - note = current_notes[pitch] - note.end_time = current_time - elif token.startswith("TIME_DELTA"): - delta = float(token.split("=")[-1]) * NOTE_LENGTH_16TH_120BPM - current_time += delta - elif token.startswith("DENSITY="): - pass - elif token == "[PAD]": - pass - else: - #print(f"Ignored token {token}.") - pass - - # Make the instruments right. 
- instruments_drums = [] - for note in note_sequence.notes: - pair = [note.program, note.is_drum] - if pair not in instruments_drums: - instruments_drums += [pair] - note.instrument = instruments_drums.index(pair) - - if only_piano: - for note in note_sequence.notes: - if not note.is_drum: - note.instrument = 0 - note.program = 0 - - return note_sequence - -def empty_note_sequence(qpm=120.0, total_time=0.0): - note_sequence = note_seq.protobuf.music_pb2.NoteSequence() - note_sequence.tempos.add().qpm = qpm - note_sequence.ticks_per_quarter = note_seq.constants.STANDARD_PPQ - note_sequence.total_time = total_time - return note_sequence - -def process(text): - input_ids = tokenizer.encode(text, return_tensors="pt") - generated_ids = model.generate(input_ids, max_length=500) - generated_sequence = tokenizer.decode(generated_ids[0]) - - # Convert text of notes to audio - note_sequence = token_sequence_to_note_sequence(generated_sequence) - synth = note_seq.midi_synth.synthesize - array_of_floats = synth(note_sequence, sample_rate=SAMPLE_RATE) - note_plot = note_seq.plot_sequence(note_sequence, False) - array_of_floats /=1.414 - array_of_floats *= 32767 - int16_data = array_of_floats.astype(np.int16) - return SAMPLE_RATE, int16_data - -title = "Music generation with GPT-2" - -iface = gr.Interface( - fn=process, - inputs=[gr.inputs.Textbox(default="PIECE_START")], - outputs=['audio'], - title=title, - examples=[["PIECE_START"], ["PIECE_START STYLE=JSFAKES GENRE=JSFAKES TRACK_START INST=48 BAR_START NOTE_ON=61"]], - article="This demo is inspired in the notebook from https://huggingface.co/TristanBehrens/js-fakes-4bars" -) - -iface.launch(debug=True) \ No newline at end of file diff --git a/spaces/pasha006/Environment/README.md b/spaces/pasha006/Environment/README.md deleted file mode 100644 index 8692bcff235d5ff10e7fa51e8e418c3baa52241b..0000000000000000000000000000000000000000 --- a/spaces/pasha006/Environment/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Happy Or Sad -emoji: 🐨 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.21.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/phyloforfun/VoucherVision/vouchervision/vouchervision_main.py b/spaces/phyloforfun/VoucherVision/vouchervision/vouchervision_main.py deleted file mode 100644 index 0592f136a006801038d23a239818770833d5503e..0000000000000000000000000000000000000000 --- a/spaces/phyloforfun/VoucherVision/vouchervision/vouchervision_main.py +++ /dev/null @@ -1,179 +0,0 @@ -''' -VoucherVision - based on LeafMachine2 Processes -''' -import os, inspect, sys, logging, subprocess -from time import perf_counter -currentdir = os.path.dirname(os.path.dirname(inspect.getfile(inspect.currentframe()))) -parentdir = os.path.dirname(currentdir) -sys.path.append(parentdir) -sys.path.append(currentdir) -from vouchervision.component_detector.component_detector import detect_plant_components, detect_archival_components -from general_utils import make_zipfile, add_to_expense_report, save_token_info_as_csv, print_main_start, check_for_subdirs_VV, load_config_file, load_config_file_testing, report_config, save_config_file, subset_dir_images, crop_detections_from_images_VV -from directory_structure_VV import Dir_Structure -from data_project import Project_Info -from LM2_logger import start_logging -from fetch_data import fetch_data -from utils_VoucherVision import VoucherVision, space_saver - - -def 
voucher_vision(cfg_file_path, dir_home, path_custom_prompts, cfg_test, progress_report, path_api_cost=None, test_ind = None, is_real_run=False): - # get_n_overall = progress_report.get_n_overall() - # progress_report.update_overall(f"Working on {test_ind+1} of {get_n_overall}") - - t_overall = perf_counter() - - # Load config file - report_config(dir_home, cfg_file_path, system='VoucherVision') - - if cfg_test is None: - cfg = load_config_file(dir_home, cfg_file_path, system='VoucherVision') # For VoucherVision - else: - cfg = cfg_test - # user_cfg = load_config_file(dir_home, cfg_file_path) - # cfg = Config(user_cfg) - - # Check to see if there are subdirs - # Yes --> use the names of the subsirs as run_name - run_name, dirs_list, has_subdirs = check_for_subdirs_VV(cfg) - print(f"run_name {run_name} dirs_list{dirs_list} has_subdirs{has_subdirs}") - - # for dir_ind, dir_in in enumerate(dirs_list): - # if has_subdirs: - # cfg['leafmachine']['project']['dir_images_local'] = dir_in - # cfg['leafmachine']['project']['run_name'] = run_name[dir_ind] - - # Dir structure - if is_real_run: - progress_report.update_overall(f"Creating Output Directory Structure") - print_main_start("Creating Directory Structure") - Dirs = Dir_Structure(cfg) - - # logging.info("Hi") - logger = start_logging(Dirs, cfg) - - # Check to see if required ML files are ready to use - if is_real_run: - progress_report.update_overall(f"Fetching LeafMachine2 Files") - ready_to_use = fetch_data(logger, dir_home, cfg_file_path) - assert ready_to_use, "Required ML files are not ready to use!\nThe download may have failed,\nor\nthe directory structure of LM2 has been altered" - - # Wrangle images and preprocess - print_main_start("Gathering Images and Image Metadata") - Project = Project_Info(cfg, logger, dir_home, Dirs) # Where file names are modified - - # Save config file - save_config_file(cfg, logger, Dirs) - - # Detect Archival Components - print_main_start("Locating Archival Components") - Project = detect_archival_components(cfg, logger, dir_home, Project, Dirs, is_real_run, progress_report) - - # Save cropped detections - crop_detections_from_images_VV(cfg, logger, dir_home, Project, Dirs) - - # Process labels - Voucher_Vision = VoucherVision(cfg, logger, dir_home, path_custom_prompts, Project, Dirs) - n_images = len(Voucher_Vision.img_paths) - last_JSON_response, total_tokens_in, total_tokens_out = Voucher_Vision.process_specimen_batch(progress_report, is_real_run) - - if path_api_cost: - cost_summary, data, total_cost = save_token_info_as_csv(Dirs, cfg['leafmachine']['LLM_version'], path_api_cost, total_tokens_in, total_tokens_out, n_images) - add_to_expense_report(dir_home, data) - logger.info(cost_summary) - else: - total_cost = None #TODO add config tests to expense_report - - t_overall_s = perf_counter() - logger.name = 'Run Complete! :)' - logger.info(f"[Total elapsed time] {round((t_overall_s - t_overall)/60)} minutes") - space_saver(cfg, Dirs, logger) - - if is_real_run: - progress_report.update_overall(f"Run Complete! 
:)") - - for handler in logger.handlers[:]: - handler.close() - logger.removeHandler(handler) - - # Create Higging Face zip file - dir_to_zip = os.path.join(Dirs.dir_home, Dirs.run_name) - zip_filename = Dirs.run_name - - # Creating a zip file - zip_filepath = make_zipfile(dir_to_zip, zip_filename) - - return last_JSON_response, total_cost, zip_filepath - -def voucher_vision_OCR_test(cfg_file_path, dir_home, cfg_test, path_to_crop): - # get_n_overall = progress_report.get_n_overall() - # progress_report.update_overall(f"Working on {test_ind+1} of {get_n_overall}") - - # Load config file - report_config(dir_home, cfg_file_path, system='VoucherVision') - - if cfg_test is None: - cfg = load_config_file(dir_home, cfg_file_path, system='VoucherVision') # For VoucherVision - else: - cfg = cfg_test - # user_cfg = load_config_file(dir_home, cfg_file_path) - # cfg = Config(user_cfg) - - # Check to see if there are subdirs - # Yes --> use the names of the subsirs as run_name - run_name, dirs_list, has_subdirs = check_for_subdirs_VV(cfg) - print(f"run_name {run_name} dirs_list{dirs_list} has_subdirs{has_subdirs}") - - # for dir_ind, dir_in in enumerate(dirs_list): - # if has_subdirs: - # cfg['leafmachine']['project']['dir_images_local'] = dir_in - # cfg['leafmachine']['project']['run_name'] = run_name[dir_ind] - - # Dir structure - print_main_start("Creating Directory Structure") - Dirs = Dir_Structure(cfg) - - # logging.info("Hi") - logger = start_logging(Dirs, cfg) - - # Check to see if required ML files are ready to use - ready_to_use = fetch_data(logger, dir_home, cfg_file_path) - assert ready_to_use, "Required ML files are not ready to use!\nThe download may have failed,\nor\nthe directory structure of LM2 has been altered" - - # Wrangle images and preprocess - print_main_start("Gathering Images and Image Metadata") - Project = Project_Info(cfg, logger, dir_home, Dirs) # Where file names are modified - - # Save config file - save_config_file(cfg, logger, Dirs) - - # Detect Archival Components - print_main_start("Locating Archival Components") - Project = detect_archival_components(cfg, logger, dir_home, Project, Dirs) - - # Save cropped detections - crop_detections_from_images_VV(cfg, logger, dir_home, Project, Dirs) - - # Process labels - Voucher_Vision = VoucherVision(cfg, logger, dir_home, None, Project, Dirs) - last_JSON_response = Voucher_Vision.process_specimen_batch_OCR_test(path_to_crop) - - -if __name__ == '__main__': - is_test = False - - # Set LeafMachine2 dir - dir_home = os.path.dirname(os.path.dirname(os.path.dirname(__file__))) - - if is_test: - cfg_file_path = os.path.join(dir_home, 'demo','demo.yaml') #'D:\Dropbox\LeafMachine2\LeafMachine2.yaml' - # cfg_file_path = 'test_installation' - - cfg_testing = load_config_file_testing(dir_home, cfg_file_path) - cfg_testing['leafmachine']['project']['dir_images_local'] = os.path.join(dir_home, cfg_testing['leafmachine']['project']['dir_images_local'][0], cfg_testing['leafmachine']['project']['dir_images_local'][1]) - cfg_testing['leafmachine']['project']['dir_output'] = os.path.join(dir_home, cfg_testing['leafmachine']['project']['dir_output'][0], cfg_testing['leafmachine']['project']['dir_output'][1]) - - last_JSON_response = voucher_vision(cfg_file_path, dir_home, cfg_testing, None) - else: - cfg_file_path = None - cfg_testing = None - last_JSON_response = voucher_vision(cfg_file_path, dir_home, cfg_testing, None) \ No newline at end of file diff --git 
a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/logging.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/logging.py deleted file mode 100644 index c10e1f4ced6bcc799799b62666695998e095bbaf..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/utils/logging.py +++ /dev/null @@ -1,348 +0,0 @@ -import contextlib -import errno -import logging -import logging.handlers -import os -import sys -import threading -from dataclasses import dataclass -from io import TextIOWrapper -from logging import Filter -from typing import Any, ClassVar, Generator, List, Optional, TextIO, Type - -from pip._vendor.rich.console import ( - Console, - ConsoleOptions, - ConsoleRenderable, - RenderableType, - RenderResult, - RichCast, -) -from pip._vendor.rich.highlighter import NullHighlighter -from pip._vendor.rich.logging import RichHandler -from pip._vendor.rich.segment import Segment -from pip._vendor.rich.style import Style - -from pip._internal.utils._log import VERBOSE, getLogger -from pip._internal.utils.compat import WINDOWS -from pip._internal.utils.deprecation import DEPRECATION_MSG_PREFIX -from pip._internal.utils.misc import ensure_dir - -_log_state = threading.local() -subprocess_logger = getLogger("pip.subprocessor") - - -class BrokenStdoutLoggingError(Exception): - """ - Raised if BrokenPipeError occurs for the stdout stream while logging. - """ - - -def _is_broken_pipe_error(exc_class: Type[BaseException], exc: BaseException) -> bool: - if exc_class is BrokenPipeError: - return True - - # On Windows, a broken pipe can show up as EINVAL rather than EPIPE: - # https://bugs.python.org/issue19612 - # https://bugs.python.org/issue30418 - if not WINDOWS: - return False - - return isinstance(exc, OSError) and exc.errno in (errno.EINVAL, errno.EPIPE) - - -@contextlib.contextmanager -def indent_log(num: int = 2) -> Generator[None, None, None]: - """ - A context manager which will cause the log output to be indented for any - log messages emitted inside it. - """ - # For thread-safety - _log_state.indentation = get_indentation() - _log_state.indentation += num - try: - yield - finally: - _log_state.indentation -= num - - -def get_indentation() -> int: - return getattr(_log_state, "indentation", 0) - - -class IndentingFormatter(logging.Formatter): - default_time_format = "%Y-%m-%dT%H:%M:%S" - - def __init__( - self, - *args: Any, - add_timestamp: bool = False, - **kwargs: Any, - ) -> None: - """ - A logging.Formatter that obeys the indent_log() context manager. - - :param add_timestamp: A bool indicating output lines should be prefixed - with their record's timestamp. - """ - self.add_timestamp = add_timestamp - super().__init__(*args, **kwargs) - - def get_message_start(self, formatted: str, levelno: int) -> str: - """ - Return the start of the formatted log message (not counting the - prefix to add to each line). - """ - if levelno < logging.WARNING: - return "" - if formatted.startswith(DEPRECATION_MSG_PREFIX): - # Then the message already has a prefix. We don't want it to - # look like "WARNING: DEPRECATION: ...." - return "" - if levelno < logging.ERROR: - return "WARNING: " - - return "ERROR: " - - def format(self, record: logging.LogRecord) -> str: - """ - Calls the standard formatter, but will indent all of the log message - lines by our current indentation level. 
- """ - formatted = super().format(record) - message_start = self.get_message_start(formatted, record.levelno) - formatted = message_start + formatted - - prefix = "" - if self.add_timestamp: - prefix = f"{self.formatTime(record)} " - prefix += " " * get_indentation() - formatted = "".join([prefix + line for line in formatted.splitlines(True)]) - return formatted - - -@dataclass -class IndentedRenderable: - renderable: RenderableType - indent: int - - def __rich_console__( - self, console: Console, options: ConsoleOptions - ) -> RenderResult: - segments = console.render(self.renderable, options) - lines = Segment.split_lines(segments) - for line in lines: - yield Segment(" " * self.indent) - yield from line - yield Segment("\n") - - -class RichPipStreamHandler(RichHandler): - KEYWORDS: ClassVar[Optional[List[str]]] = [] - - def __init__(self, stream: Optional[TextIO], no_color: bool) -> None: - super().__init__( - console=Console(file=stream, no_color=no_color, soft_wrap=True), - show_time=False, - show_level=False, - show_path=False, - highlighter=NullHighlighter(), - ) - - # Our custom override on Rich's logger, to make things work as we need them to. - def emit(self, record: logging.LogRecord) -> None: - style: Optional[Style] = None - - # If we are given a diagnostic error to present, present it with indentation. - assert isinstance(record.args, tuple) - if record.msg == "[present-rich] %s" and len(record.args) == 1: - rich_renderable = record.args[0] - assert isinstance( - rich_renderable, (ConsoleRenderable, RichCast, str) - ), f"{rich_renderable} is not rich-console-renderable" - - renderable: RenderableType = IndentedRenderable( - rich_renderable, indent=get_indentation() - ) - else: - message = self.format(record) - renderable = self.render_message(record, message) - if record.levelno is not None: - if record.levelno >= logging.ERROR: - style = Style(color="red") - elif record.levelno >= logging.WARNING: - style = Style(color="yellow") - - try: - self.console.print(renderable, overflow="ignore", crop=False, style=style) - except Exception: - self.handleError(record) - - def handleError(self, record: logging.LogRecord) -> None: - """Called when logging is unable to log some output.""" - - exc_class, exc = sys.exc_info()[:2] - # If a broken pipe occurred while calling write() or flush() on the - # stdout stream in logging's Handler.emit(), then raise our special - # exception so we can handle it in main() instead of logging the - # broken pipe error and continuing. - if ( - exc_class - and exc - and self.console.file is sys.stdout - and _is_broken_pipe_error(exc_class, exc) - ): - raise BrokenStdoutLoggingError() - - return super().handleError(record) - - -class BetterRotatingFileHandler(logging.handlers.RotatingFileHandler): - def _open(self) -> TextIOWrapper: - ensure_dir(os.path.dirname(self.baseFilename)) - return super()._open() - - -class MaxLevelFilter(Filter): - def __init__(self, level: int) -> None: - self.level = level - - def filter(self, record: logging.LogRecord) -> bool: - return record.levelno < self.level - - -class ExcludeLoggerFilter(Filter): - - """ - A logging Filter that excludes records from a logger (or its children). - """ - - def filter(self, record: logging.LogRecord) -> bool: - # The base Filter class allows only records from a logger (or its - # children). 
- return not super().filter(record) - - -def setup_logging(verbosity: int, no_color: bool, user_log_file: Optional[str]) -> int: - """Configures and sets up all of the logging - - Returns the requested logging level, as its integer value. - """ - - # Determine the level to be logging at. - if verbosity >= 2: - level_number = logging.DEBUG - elif verbosity == 1: - level_number = VERBOSE - elif verbosity == -1: - level_number = logging.WARNING - elif verbosity == -2: - level_number = logging.ERROR - elif verbosity <= -3: - level_number = logging.CRITICAL - else: - level_number = logging.INFO - - level = logging.getLevelName(level_number) - - # The "root" logger should match the "console" level *unless* we also need - # to log to a user log file. - include_user_log = user_log_file is not None - if include_user_log: - additional_log_file = user_log_file - root_level = "DEBUG" - else: - additional_log_file = "/dev/null" - root_level = level - - # Disable any logging besides WARNING unless we have DEBUG level logging - # enabled for vendored libraries. - vendored_log_level = "WARNING" if level in ["INFO", "ERROR"] else "DEBUG" - - # Shorthands for clarity - log_streams = { - "stdout": "ext://sys.stdout", - "stderr": "ext://sys.stderr", - } - handler_classes = { - "stream": "pip._internal.utils.logging.RichPipStreamHandler", - "file": "pip._internal.utils.logging.BetterRotatingFileHandler", - } - handlers = ["console", "console_errors", "console_subprocess"] + ( - ["user_log"] if include_user_log else [] - ) - - logging.config.dictConfig( - { - "version": 1, - "disable_existing_loggers": False, - "filters": { - "exclude_warnings": { - "()": "pip._internal.utils.logging.MaxLevelFilter", - "level": logging.WARNING, - }, - "restrict_to_subprocess": { - "()": "logging.Filter", - "name": subprocess_logger.name, - }, - "exclude_subprocess": { - "()": "pip._internal.utils.logging.ExcludeLoggerFilter", - "name": subprocess_logger.name, - }, - }, - "formatters": { - "indent": { - "()": IndentingFormatter, - "format": "%(message)s", - }, - "indent_with_timestamp": { - "()": IndentingFormatter, - "format": "%(message)s", - "add_timestamp": True, - }, - }, - "handlers": { - "console": { - "level": level, - "class": handler_classes["stream"], - "no_color": no_color, - "stream": log_streams["stdout"], - "filters": ["exclude_subprocess", "exclude_warnings"], - "formatter": "indent", - }, - "console_errors": { - "level": "WARNING", - "class": handler_classes["stream"], - "no_color": no_color, - "stream": log_streams["stderr"], - "filters": ["exclude_subprocess"], - "formatter": "indent", - }, - # A handler responsible for logging to the console messages - # from the "subprocessor" logger. 
- "console_subprocess": { - "level": level, - "class": handler_classes["stream"], - "stream": log_streams["stderr"], - "no_color": no_color, - "filters": ["restrict_to_subprocess"], - "formatter": "indent", - }, - "user_log": { - "level": "DEBUG", - "class": handler_classes["file"], - "filename": additional_log_file, - "encoding": "utf-8", - "delay": True, - "formatter": "indent_with_timestamp", - }, - }, - "root": { - "level": root_level, - "handlers": handlers, - }, - "loggers": {"pip._vendor": {"level": vendored_log_level}}, - } - ) - - return level_number diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/wheel/cli/unpack.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/wheel/cli/unpack.py deleted file mode 100644 index d48840e6ec0512225233bf02d1d7ce203415b04c..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/wheel/cli/unpack.py +++ /dev/null @@ -1,30 +0,0 @@ -from __future__ import annotations - -from pathlib import Path - -from ..wheelfile import WheelFile - - -def unpack(path: str, dest: str = ".") -> None: - """Unpack a wheel. - - Wheel content will be unpacked to {dest}/{name}-{ver}, where {name} - is the package name and {ver} its version. - - :param path: The path to the wheel. - :param dest: Destination directory (default to current directory). - """ - with WheelFile(path) as wf: - namever = wf.parsed_filename.group("namever") - destination = Path(dest) / namever - print(f"Unpacking to: {destination}...", end="", flush=True) - for zinfo in wf.filelist: - wf.extract(zinfo, destination) - - # Set permissions to the same values as they were set in the archive - # We have to do this manually due to - # https://github.com/python/cpython/issues/59999 - permissions = zinfo.external_attr >> 16 & 0o777 - destination.joinpath(zinfo.filename).chmod(permissions) - - print("OK") diff --git a/spaces/plzdontcry/dakubettergpt/src/store/auth-slice.ts b/spaces/plzdontcry/dakubettergpt/src/store/auth-slice.ts deleted file mode 100644 index 52088906ab82d576259369e70f0983fee4f57958..0000000000000000000000000000000000000000 --- a/spaces/plzdontcry/dakubettergpt/src/store/auth-slice.ts +++ /dev/null @@ -1,35 +0,0 @@ -import { defaultAPIEndpoint } from '@constants/auth'; -import { StoreSlice } from './store'; - -export interface AuthSlice { - apiKey?: string; - apiEndpoint: string; - firstVisit: boolean; - setApiKey: (apiKey: string) => void; - setApiEndpoint: (apiEndpoint: string) => void; - setFirstVisit: (firstVisit: boolean) => void; -} - -export const createAuthSlice: StoreSlice = (set, get) => ({ - apiKey: import.meta.env.VITE_OPENAI_API_KEY || undefined, - apiEndpoint: defaultAPIEndpoint, - firstVisit: true, - setApiKey: (apiKey: string) => { - set((prev: AuthSlice) => ({ - ...prev, - apiKey: apiKey, - })); - }, - setApiEndpoint: (apiEndpoint: string) => { - set((prev: AuthSlice) => ({ - ...prev, - apiEndpoint: apiEndpoint, - })); - }, - setFirstVisit: (firstVisit: boolean) => { - set((prev: AuthSlice) => ({ - ...prev, - firstVisit: firstVisit, - })); - }, -}); diff --git a/spaces/prerna9811/Chord/portaudio/qa/loopback/src/paqa_tools.c b/spaces/prerna9811/Chord/portaudio/qa/loopback/src/paqa_tools.c deleted file mode 100644 index 2e44c63645c144ce049864804616416bcefa985e..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/qa/loopback/src/paqa_tools.c +++ /dev/null @@ -1,171 +0,0 @@ - -/* - * PortAudio Portable Real-Time Audio Library - * Latest Version at: 
http://www.portaudio.com - * - * Copyright (c) 1999-2010 Phil Burk and Ross Bencina - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -#include "paqa_tools.h" - - -/*******************************************************************/ -void PaQa_ListAudioDevices(void) -{ - int i, numDevices; - const PaDeviceInfo *deviceInfo; - numDevices = Pa_GetDeviceCount(); - for( i=0; imaxInputChannels ); - printf( ", %2d out", deviceInfo->maxOutputChannels ); - printf( ", %s", deviceInfo->name ); - printf( ", on %s\n", Pa_GetHostApiInfo( deviceInfo->hostApi )->name ); - } -} - -/*******************************************************************/ -void PaQa_ConvertToFloat( const void *input, int numSamples, PaSampleFormat inFormat, float *output ) -{ - int i; - switch( inFormat ) - { - case paUInt8: - { - unsigned char *data = (unsigned char *)input; - for( i=0; i> 8; - float fval = (float) (value / ((double) 0x00800000)); - *output++ = fval; - } - } - break; - } - -} - -/*******************************************************************/ -void PaQa_ConvertFromFloat( const float *input, int numSamples, PaSampleFormat outFormat, void *output ) -{ - int i; - switch( outFormat ) - { - case paUInt8: - { - unsigned char *data = (unsigned char *)output; - for( i=0; i _ToValuesReturnType: - return curried.pipe(data, limit_rows(max_rows=max_rows), to_values) - - -class DataTransformerRegistry(_DataTransformerRegistry): - def disable_max_rows(self) -> PluginEnabler: - """Disable the MaxRowsError.""" - options = self.options - if self.active in ("default", "vegafusion"): - options = options.copy() - options["max_rows"] = None - return self.enable(**options) - - -__all__ = ( - "DataTransformerRegistry", - "MaxRowsError", - "curry", - "sanitize_dataframe", - "default_data_transformer", - "limit_rows", - "pipe", - "sample", - "to_csv", - "to_json", - "to_values", - "check_data_type", -) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/merge/options.py 
b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/merge/options.py deleted file mode 100644 index 0c4cfb99884992f5d69cef4b365f26947c3f837b..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/merge/options.py +++ /dev/null @@ -1,83 +0,0 @@ -# Copyright 2013 Google, Inc. All Rights Reserved. -# -# Google Author(s): Behdad Esfahbod, Roozbeh Pournader - - -class Options(object): - class UnknownOptionError(Exception): - pass - - def __init__(self, **kwargs): - - self.verbose = False - self.timing = False - self.drop_tables = [] - - self.set(**kwargs) - - def set(self, **kwargs): - for k, v in kwargs.items(): - if not hasattr(self, k): - raise self.UnknownOptionError("Unknown option '%s'" % k) - setattr(self, k, v) - - def parse_opts(self, argv, ignore_unknown=[]): - ret = [] - opts = {} - for a in argv: - orig_a = a - if not a.startswith("--"): - ret.append(a) - continue - a = a[2:] - i = a.find("=") - op = "=" - if i == -1: - if a.startswith("no-"): - k = a[3:] - v = False - else: - k = a - v = True - else: - k = a[:i] - if k[-1] in "-+": - op = k[-1] + "=" # Ops is '-=' or '+=' now. - k = k[:-1] - v = a[i + 1 :] - ok = k - k = k.replace("-", "_") - if not hasattr(self, k): - if ignore_unknown is True or ok in ignore_unknown: - ret.append(orig_a) - continue - else: - raise self.UnknownOptionError("Unknown option '%s'" % a) - - ov = getattr(self, k) - if isinstance(ov, bool): - v = bool(v) - elif isinstance(ov, int): - v = int(v) - elif isinstance(ov, list): - vv = v.split(",") - if vv == [""]: - vv = [] - vv = [int(x, 0) if len(x) and x[0] in "0123456789" else x for x in vv] - if op == "=": - v = vv - elif op == "+=": - v = ov - v.extend(vv) - elif op == "-=": - v = ov - for x in vv: - if x in v: - v.remove(x) - else: - assert 0 - - opts[k] = v - self.set(**opts) - - return ret diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/ttVisitor.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/ttVisitor.py deleted file mode 100644 index 54db61b1e0b1be5e2d36fd72008230de7fc35401..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/ttLib/ttVisitor.py +++ /dev/null @@ -1,32 +0,0 @@ -"""Specialization of fontTools.misc.visitor to work with TTFont.""" - -from fontTools.misc.visitor import Visitor -from fontTools.ttLib import TTFont - - -class TTVisitor(Visitor): - def visitAttr(self, obj, attr, value, *args, **kwargs): - if isinstance(value, TTFont): - return False - super().visitAttr(obj, attr, value, *args, **kwargs) - - def visit(self, obj, *args, **kwargs): - if hasattr(obj, "ensureDecompiled"): - obj.ensureDecompiled(recurse=False) - super().visit(obj, *args, **kwargs) - - -@TTVisitor.register(TTFont) -def visit(visitor, font, *args, **kwargs): - # Some objects have links back to TTFont; even though we - # have a check in visitAttr to stop them from recursing - # onto TTFont, sometimes they still do, for example when - # someone overrides visitAttr. 
- if hasattr(visitor, "font"): - return False - - visitor.font = font - for tag in font.keys(): - visitor.visit(font[tag], *args, **kwargs) - del visitor.font - return False diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/preview/test/test/demo/app.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/preview/test/test/demo/app.py deleted file mode 100644 index 021e922fd81e9c7553d7a7d00f63ca4db2689420..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/_frontend_code/preview/test/test/demo/app.py +++ /dev/null @@ -1,13 +0,0 @@ - -import gradio as gr -from gradio_test import Test - - -example = Test().example_inputs() - -with gr.Blocks() as demo: - Test(value=example, interactive=True) - Test(value=example, interactive=False) - - -demo.launch() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/components/gallery.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/components/gallery.py deleted file mode 100644 index cdf4ad181994a3a96296c9c5165248c39511c43a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/components/gallery.py +++ /dev/null @@ -1,171 +0,0 @@ -"""gr.Gallery() component.""" - -from __future__ import annotations - -from pathlib import Path -from typing import Any, Callable, List, Literal, Optional - -import numpy as np -from gradio_client.documentation import document, set_documentation_group -from PIL import Image as _Image # using _ to minimize namespace pollution - -from gradio import processing_utils, utils -from gradio.components.base import Component -from gradio.data_classes import FileData, GradioModel, GradioRootModel -from gradio.events import Events - -set_documentation_group("component") - - -class GalleryImage(GradioModel): - image: FileData - caption: Optional[str] = None - - -class GalleryData(GradioRootModel): - root: List[GalleryImage] - - -@document() -class Gallery(Component): - """ - Used to display a list of images as a gallery that can be scrolled through. - Preprocessing: this component does *not* accept input. - Postprocessing: expects a list of images in any format, {List[numpy.array | PIL.Image | str | pathlib.Path]}, or a {List} of (image, {str} caption) tuples and displays them. - - Demos: fake_gan - """ - - EVENTS = [Events.select] - - data_model = GalleryData - - def __init__( - self, - value: list[np.ndarray | _Image.Image | str | Path | tuple] - | Callable - | None = None, - *, - label: str | None = None, - every: float | None = None, - show_label: bool | None = None, - container: bool = True, - scale: int | None = None, - min_width: int = 160, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - render: bool = True, - columns: int | tuple | None = 2, - rows: int | tuple | None = None, - height: int | float | None = None, - allow_preview: bool = True, - preview: bool | None = None, - selected_index: int | None = None, - object_fit: Literal["contain", "cover", "fill", "none", "scale-down"] - | None = None, - show_share_button: bool | None = None, - show_download_button: bool | None = True, - ): - """ - Parameters: - value: List of images to display in the gallery by default. If callable, the function will be called whenever the app loads to set the initial value of the component. - label: The label for this component. 
Appears above the component and is also used as the header if there are a table of examples for this component. If None and used in a `gr.Interface`, the label will be the name of the parameter this component is assigned to. - every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute. - show_label: if True, will display label. - container: If True, will place the component in a container - providing some extra padding around the border. - scale: relative width compared to adjacent Components in a Row. For example, if Component A has scale=2, and Component B has scale=1, A will be twice as wide as B. Should be an integer. - min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first. - visible: If False, component will be hidden. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. - render: If False, component will not render be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later. - columns: Represents the number of images that should be shown in one row, for each of the six standard screen sizes (<576px, <768px, <992px, <1200px, <1400px, >1400px). If fewer than 6 are given then the last will be used for all subsequent breakpoints - rows: Represents the number of rows in the image grid, for each of the six standard screen sizes (<576px, <768px, <992px, <1200px, <1400px, >1400px). If fewer than 6 are given then the last will be used for all subsequent breakpoints - height: The height of the gallery component, in pixels. If more images are displayed than can fit in the height, a scrollbar will appear. - allow_preview: If True, images in the gallery will be enlarged when they are clicked. Default is True. - preview: If True, Gallery will start in preview mode, which shows all of the images as thumbnails and allows the user to click on them to view them in full size. Only works if allow_preview is True. - selected_index: The index of the image that should be initially selected. If None, no image will be selected at start. If provided, will set Gallery to preview mode unless allow_preview is set to False. - object_fit: CSS object-fit property for the thumbnail images in the gallery. Can be "contain", "cover", "fill", "none", or "scale-down". - show_share_button: If True, will show a share icon in the corner of the component that allows user to share outputs to Hugging Face Spaces Discussions. If False, icon does not appear. If set to None (default behavior), then the icon appears if this Gradio app is launched on Spaces, but not otherwise. - show_download_button: If True, will show a download button in the corner of the selected image. If False, the icon does not appear. Default is True. 
- """ - self.columns = columns - self.rows = rows - self.height = height - self.preview = preview - self.object_fit = object_fit - self.allow_preview = allow_preview - self.show_download_button = ( - (utils.get_space() is not None) - if show_download_button is None - else show_download_button - ) - self.selected_index = selected_index - - self.show_share_button = ( - (utils.get_space() is not None) - if show_share_button is None - else show_share_button - ) - super().__init__( - label=label, - every=every, - show_label=show_label, - container=container, - scale=scale, - min_width=min_width, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - render=render, - value=value, - ) - - def postprocess( - self, - value: list[np.ndarray | _Image.Image | str] - | list[tuple[np.ndarray | _Image.Image | str, str]] - | None, - ) -> GalleryData: - """ - Parameters: - value: list of images, or list of (image, caption) tuples - Returns: - list of string file paths to images in temp directory - """ - if value is None: - return GalleryData(root=[]) - output = [] - for img in value: - caption = None - if isinstance(img, (tuple, list)): - img, caption = img - if isinstance(img, np.ndarray): - file = processing_utils.save_img_array_to_cache( - img, cache_dir=self.GRADIO_CACHE - ) - file_path = str(utils.abspath(file)) - elif isinstance(img, _Image.Image): - file = processing_utils.save_pil_to_cache( - img, cache_dir=self.GRADIO_CACHE - ) - file_path = str(utils.abspath(file)) - elif isinstance(img, (str, Path)): - file_path = str(img) - else: - raise ValueError(f"Cannot process type as image: {type(img)}") - - entry = GalleryImage(image=FileData(path=file_path), caption=caption) - output.append(entry) - return GalleryData(root=output) - - def preprocess(self, payload: GalleryData | None) -> GalleryData | None: - if payload is None or not payload.root: - return None - return payload - - def example_inputs(self) -> Any: - return [ - "https://raw.githubusercontent.com/gradio-app/gradio/main/test/test_files/bus.png" - ] diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/utils/_fixes.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/utils/_fixes.py deleted file mode 100644 index ff4f9e2d70e323e108fbd7bade2fbed3f5595cbe..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/huggingface_hub/utils/_fixes.py +++ /dev/null @@ -1,77 +0,0 @@ -# JSONDecodeError was introduced in requests=2.27 released in 2022. -# This allows us to support older requests for users -# More information: https://github.com/psf/requests/pull/5856 -try: - from requests import JSONDecodeError # type: ignore # noqa: F401 -except ImportError: - try: - from simplejson import JSONDecodeError # type: ignore # noqa: F401 - except ImportError: - from json import JSONDecodeError # type: ignore # noqa: F401 - -import contextlib -import os -import shutil -import stat -import tempfile -from functools import partial -from pathlib import Path -from typing import Callable, Generator, Optional, Union - -import yaml - - -# Wrap `yaml.dump` to set `allow_unicode=True` by default. 
-# -# Example: -# ```py -# >>> yaml.dump({"emoji": "👀", "some unicode": "日本か"}) -# 'emoji: "\\U0001F440"\nsome unicode: "\\u65E5\\u672C\\u304B"\n' -# -# >>> yaml_dump({"emoji": "👀", "some unicode": "日本か"}) -# 'emoji: "👀"\nsome unicode: "日本か"\n' -# ``` -yaml_dump: Callable[..., str] = partial(yaml.dump, stream=None, allow_unicode=True) # type: ignore - - -@contextlib.contextmanager -def SoftTemporaryDirectory( - suffix: Optional[str] = None, - prefix: Optional[str] = None, - dir: Optional[Union[Path, str]] = None, - **kwargs, -) -> Generator[str, None, None]: - """ - Context manager to create a temporary directory and safely delete it. - - If tmp directory cannot be deleted normally, we set the WRITE permission and retry. - If cleanup still fails, we give up but don't raise an exception. This is equivalent - to `tempfile.TemporaryDirectory(..., ignore_cleanup_errors=True)` introduced in - Python 3.10. - - See https://www.scivision.dev/python-tempfile-permission-error-windows/. - """ - tmpdir = tempfile.TemporaryDirectory(prefix=prefix, suffix=suffix, dir=dir, **kwargs) - yield tmpdir.name - - try: - # First once with normal cleanup - shutil.rmtree(tmpdir.name) - except Exception: - # If failed, try to set write permission and retry - try: - shutil.rmtree(tmpdir.name, onerror=_set_write_permission_and_retry) - except Exception: - pass - - # And finally, cleanup the tmpdir. - # If it fails again, give up but do not throw error - try: - tmpdir.cleanup() - except Exception: - pass - - -def _set_write_permission_and_retry(func, path, excinfo): - os.chmod(path, stat.S_IWRITE) - func(path) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/checks/cpu_sse.c b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/checks/cpu_sse.c deleted file mode 100644 index 602b74e7bc437ee4fdfbc375280f423700caa49e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/distutils/checks/cpu_sse.c +++ /dev/null @@ -1,20 +0,0 @@ -#if defined(DETECT_FEATURES) && defined(__INTEL_COMPILER) - /* - * Unlike GCC and CLANG, Intel Compiler exposes all supported intrinsics, - * whether or not the build options for those features are specified. - * Therefore, we must test #definitions of CPU features when option native/host - * is enabled via `--cpu-baseline` or through env var `CFLAGS` otherwise - * the test will be broken and leads to enable all possible features. 
- */ - #ifndef __SSE__ - #error "HOST/ARCH doesn't support SSE" - #endif -#endif - -#include - -int main(void) -{ - __m128 a = _mm_add_ps(_mm_setzero_ps(), _mm_setzero_ps()); - return (int)_mm_cvtss_f32(a); -} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/crackfortran/pubprivmod.f90 b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/crackfortran/pubprivmod.f90 deleted file mode 100644 index 46bef7cb91122281ddac7d0f0474c2c01b2a5e6f..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/f2py/tests/src/crackfortran/pubprivmod.f90 +++ /dev/null @@ -1,10 +0,0 @@ -module foo - public - integer, private :: a - integer :: b -contains - subroutine setA(v) - integer, intent(in) :: v - a = v - end subroutine setA -end module foo diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/dtypes/cast/test_infer_dtype.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/dtypes/cast/test_infer_dtype.py deleted file mode 100644 index 50eaa1f4d871373037ee5304aaccc95e1b7a73ab..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/dtypes/cast/test_infer_dtype.py +++ /dev/null @@ -1,208 +0,0 @@ -from datetime import ( - date, - datetime, - timedelta, -) - -import numpy as np -import pytest - -from pandas.core.dtypes.cast import ( - infer_dtype_from, - infer_dtype_from_array, - infer_dtype_from_scalar, -) -from pandas.core.dtypes.common import is_dtype_equal - -from pandas import ( - Categorical, - Interval, - Period, - Series, - Timedelta, - Timestamp, - date_range, -) - - -def test_infer_dtype_from_int_scalar(any_int_numpy_dtype): - # Test that infer_dtype_from_scalar is - # returning correct dtype for int and float. 
- data = np.dtype(any_int_numpy_dtype).type(12) - dtype, val = infer_dtype_from_scalar(data) - assert dtype == type(data) - - -def test_infer_dtype_from_float_scalar(float_numpy_dtype): - float_numpy_dtype = np.dtype(float_numpy_dtype).type - data = float_numpy_dtype(12) - - dtype, val = infer_dtype_from_scalar(data) - assert dtype == float_numpy_dtype - - -@pytest.mark.parametrize( - "data,exp_dtype", [(12, np.int64), (np.float64(12), np.float64)] -) -def test_infer_dtype_from_python_scalar(data, exp_dtype): - dtype, val = infer_dtype_from_scalar(data) - assert dtype == exp_dtype - - -@pytest.mark.parametrize("bool_val", [True, False]) -def test_infer_dtype_from_boolean(bool_val): - dtype, val = infer_dtype_from_scalar(bool_val) - assert dtype == np.bool_ - - -def test_infer_dtype_from_complex(complex_dtype): - data = np.dtype(complex_dtype).type(1) - dtype, val = infer_dtype_from_scalar(data) - assert dtype == np.complex128 - - -def test_infer_dtype_from_datetime(): - dt64 = np.datetime64(1, "ns") - dtype, val = infer_dtype_from_scalar(dt64) - assert dtype == "M8[ns]" - - ts = Timestamp(1) - dtype, val = infer_dtype_from_scalar(ts) - assert dtype == "M8[ns]" - - dt = datetime(2000, 1, 1, 0, 0) - dtype, val = infer_dtype_from_scalar(dt) - assert dtype == "M8[us]" - - -def test_infer_dtype_from_timedelta(): - td64 = np.timedelta64(1, "ns") - dtype, val = infer_dtype_from_scalar(td64) - assert dtype == "m8[ns]" - - pytd = timedelta(1) - dtype, val = infer_dtype_from_scalar(pytd) - assert dtype == "m8[us]" - - td = Timedelta(1) - dtype, val = infer_dtype_from_scalar(td) - assert dtype == "m8[ns]" - - -@pytest.mark.parametrize("freq", ["M", "D"]) -def test_infer_dtype_from_period(freq): - p = Period("2011-01-01", freq=freq) - dtype, val = infer_dtype_from_scalar(p) - - exp_dtype = f"period[{freq}]" - - assert dtype == exp_dtype - assert val == p - - -def test_infer_dtype_misc(): - dt = date(2000, 1, 1) - dtype, val = infer_dtype_from_scalar(dt) - assert dtype == np.object_ - - ts = Timestamp(1, tz="US/Eastern") - dtype, val = infer_dtype_from_scalar(ts) - assert dtype == "datetime64[ns, US/Eastern]" - - -@pytest.mark.parametrize("tz", ["UTC", "US/Eastern", "Asia/Tokyo"]) -def test_infer_from_scalar_tz(tz): - dt = Timestamp(1, tz=tz) - dtype, val = infer_dtype_from_scalar(dt) - - exp_dtype = f"datetime64[ns, {tz}]" - - assert dtype == exp_dtype - assert val == dt - - -@pytest.mark.parametrize( - "left, right, subtype", - [ - (0, 1, "int64"), - (0.0, 1.0, "float64"), - (Timestamp(0), Timestamp(1), "datetime64[ns]"), - (Timestamp(0, tz="UTC"), Timestamp(1, tz="UTC"), "datetime64[ns, UTC]"), - (Timedelta(0), Timedelta(1), "timedelta64[ns]"), - ], -) -def test_infer_from_interval(left, right, subtype, closed): - # GH 30337 - interval = Interval(left, right, closed) - result_dtype, result_value = infer_dtype_from_scalar(interval) - expected_dtype = f"interval[{subtype}, {closed}]" - assert result_dtype == expected_dtype - assert result_value == interval - - -def test_infer_dtype_from_scalar_errors(): - msg = "invalid ndarray passed to infer_dtype_from_scalar" - - with pytest.raises(ValueError, match=msg): - infer_dtype_from_scalar(np.array([1])) - - -@pytest.mark.parametrize( - "value, expected", - [ - ("foo", np.object_), - (b"foo", np.object_), - (1, np.int64), - (1.5, np.float64), - (np.datetime64("2016-01-01"), np.dtype("M8[s]")), - (Timestamp("20160101"), np.dtype("M8[s]")), - (Timestamp("20160101", tz="UTC"), "datetime64[s, UTC]"), - ], -) -def test_infer_dtype_from_scalar(value, expected): - 
dtype, _ = infer_dtype_from_scalar(value) - assert is_dtype_equal(dtype, expected) - - with pytest.raises(TypeError, match="must be list-like"): - infer_dtype_from_array(value) - - -@pytest.mark.parametrize( - "arr, expected", - [ - ([1], np.dtype(int)), - (np.array([1], dtype=np.int64), np.int64), - ([np.nan, 1, ""], np.object_), - (np.array([[1.0, 2.0]]), np.float64), - (Categorical(list("aabc")), "category"), - (Categorical([1, 2, 3]), "category"), - (date_range("20160101", periods=3), np.dtype("=M8[ns]")), - ( - date_range("20160101", periods=3, tz="US/Eastern"), - "datetime64[ns, US/Eastern]", - ), - (Series([1.0, 2, 3]), np.float64), - (Series(list("abc")), np.object_), - ( - Series(date_range("20160101", periods=3, tz="US/Eastern")), - "datetime64[ns, US/Eastern]", - ), - ], -) -def test_infer_dtype_from_array(arr, expected): - dtype, _ = infer_dtype_from_array(arr) - assert is_dtype_equal(dtype, expected) - - -@pytest.mark.parametrize("cls", [np.datetime64, np.timedelta64]) -def test_infer_dtype_from_scalar_zerodim_datetimelike(cls): - # ndarray.item() can incorrectly return int instead of td64/dt64 - val = cls(1234, "ns") - arr = np.array(val) - - dtype, res = infer_dtype_from_scalar(arr) - assert dtype.type is cls - assert isinstance(res, cls) - - dtype, res = infer_dtype_from(arr) - assert dtype.type is cls diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/pytables/test_compat.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/pytables/test_compat.py deleted file mode 100644 index b07fb3ddd3ac829f5b90d6fd7226926aeed284e6..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/io/pytables/test_compat.py +++ /dev/null @@ -1,75 +0,0 @@ -import pytest - -import pandas as pd -import pandas._testing as tm - -tables = pytest.importorskip("tables") - - -@pytest.fixture -def pytables_hdf5_file(tmp_path): - """ - Use PyTables to create a simple HDF5 file. - """ - table_schema = { - "c0": tables.Time64Col(pos=0), - "c1": tables.StringCol(5, pos=1), - "c2": tables.Int64Col(pos=2), - } - - t0 = 1_561_105_000.0 - - testsamples = [ - {"c0": t0, "c1": "aaaaa", "c2": 1}, - {"c0": t0 + 1, "c1": "bbbbb", "c2": 2}, - {"c0": t0 + 2, "c1": "ccccc", "c2": 10**5}, - {"c0": t0 + 3, "c1": "ddddd", "c2": 4_294_967_295}, - ] - - objname = "pandas_test_timeseries" - - path = tmp_path / "written_with_pytables.h5" - with tables.open_file(path, mode="w") as f: - t = f.create_table("/", name=objname, description=table_schema) - for sample in testsamples: - for key, value in sample.items(): - t.row[key] = value - t.row.append() - - yield path, objname, pd.DataFrame(testsamples) - - -class TestReadPyTablesHDF5: - """ - A group of tests which covers reading HDF5 files written by plain PyTables - (not written by pandas). - - Was introduced for regression-testing issue 11188. 
- """ - - def test_read_complete(self, pytables_hdf5_file): - path, objname, df = pytables_hdf5_file - result = pd.read_hdf(path, key=objname) - expected = df - tm.assert_frame_equal(result, expected, check_index_type=True) - - def test_read_with_start(self, pytables_hdf5_file): - path, objname, df = pytables_hdf5_file - # This is a regression test for pandas-dev/pandas/issues/11188 - result = pd.read_hdf(path, key=objname, start=1) - expected = df[1:].reset_index(drop=True) - tm.assert_frame_equal(result, expected, check_index_type=True) - - def test_read_with_stop(self, pytables_hdf5_file): - path, objname, df = pytables_hdf5_file - # This is a regression test for pandas-dev/pandas/issues/11188 - result = pd.read_hdf(path, key=objname, stop=1) - expected = df[:1].reset_index(drop=True) - tm.assert_frame_equal(result, expected, check_index_type=True) - - def test_read_with_startstop(self, pytables_hdf5_file): - path, objname, df = pytables_hdf5_file - # This is a regression test for pandas-dev/pandas/issues/11188 - result = pd.read_hdf(path, key=objname, start=1, stop=2) - expected = df[1:2].reset_index(drop=True) - tm.assert_frame_equal(result, expected, check_index_type=True) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/requests/auth.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/requests/auth.py deleted file mode 100644 index eeface39ae62c3975ff535e6b1f79f2c28fbf888..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/requests/auth.py +++ /dev/null @@ -1,305 +0,0 @@ -# -*- coding: utf-8 -*- - -""" -requests.auth -~~~~~~~~~~~~~ - -This module contains the authentication handlers for Requests. -""" - -import os -import re -import time -import hashlib -import threading -import warnings - -from base64 import b64encode - -from .compat import urlparse, str, basestring -from .cookies import extract_cookies_to_jar -from ._internal_utils import to_native_string -from .utils import parse_dict_header - -CONTENT_TYPE_FORM_URLENCODED = 'application/x-www-form-urlencoded' -CONTENT_TYPE_MULTI_PART = 'multipart/form-data' - - -def _basic_auth_str(username, password): - """Returns a Basic Auth string.""" - - # "I want us to put a big-ol' comment on top of it that - # says that this behaviour is dumb but we need to preserve - # it because people are relying on it." - # - Lukasa - # - # These are here solely to maintain backwards compatibility - # for things like ints. This will be removed in 3.0.0. - if not isinstance(username, basestring): - warnings.warn( - "Non-string usernames will no longer be supported in Requests " - "3.0.0. Please convert the object you've passed in ({!r}) to " - "a string or bytes object in the near future to avoid " - "problems.".format(username), - category=DeprecationWarning, - ) - username = str(username) - - if not isinstance(password, basestring): - warnings.warn( - "Non-string passwords will no longer be supported in Requests " - "3.0.0. 
Please convert the object you've passed in ({!r}) to " - "a string or bytes object in the near future to avoid " - "problems.".format(type(password)), - category=DeprecationWarning, - ) - password = str(password) - # -- End Removal -- - - if isinstance(username, str): - username = username.encode('latin1') - - if isinstance(password, str): - password = password.encode('latin1') - - authstr = 'Basic ' + to_native_string( - b64encode(b':'.join((username, password))).strip() - ) - - return authstr - - -class AuthBase(object): - """Base class that all auth implementations derive from""" - - def __call__(self, r): - raise NotImplementedError('Auth hooks must be callable.') - - -class HTTPBasicAuth(AuthBase): - """Attaches HTTP Basic Authentication to the given Request object.""" - - def __init__(self, username, password): - self.username = username - self.password = password - - def __eq__(self, other): - return all([ - self.username == getattr(other, 'username', None), - self.password == getattr(other, 'password', None) - ]) - - def __ne__(self, other): - return not self == other - - def __call__(self, r): - r.headers['Authorization'] = _basic_auth_str(self.username, self.password) - return r - - -class HTTPProxyAuth(HTTPBasicAuth): - """Attaches HTTP Proxy Authentication to a given Request object.""" - - def __call__(self, r): - r.headers['Proxy-Authorization'] = _basic_auth_str(self.username, self.password) - return r - - -class HTTPDigestAuth(AuthBase): - """Attaches HTTP Digest Authentication to the given Request object.""" - - def __init__(self, username, password): - self.username = username - self.password = password - # Keep state in per-thread local storage - self._thread_local = threading.local() - - def init_per_thread_state(self): - # Ensure state is initialized just once per-thread - if not hasattr(self._thread_local, 'init'): - self._thread_local.init = True - self._thread_local.last_nonce = '' - self._thread_local.nonce_count = 0 - self._thread_local.chal = {} - self._thread_local.pos = None - self._thread_local.num_401_calls = None - - def build_digest_header(self, method, url): - """ - :rtype: str - """ - - realm = self._thread_local.chal['realm'] - nonce = self._thread_local.chal['nonce'] - qop = self._thread_local.chal.get('qop') - algorithm = self._thread_local.chal.get('algorithm') - opaque = self._thread_local.chal.get('opaque') - hash_utf8 = None - - if algorithm is None: - _algorithm = 'MD5' - else: - _algorithm = algorithm.upper() - # lambdas assume digest modules are imported at the top level - if _algorithm == 'MD5' or _algorithm == 'MD5-SESS': - def md5_utf8(x): - if isinstance(x, str): - x = x.encode('utf-8') - return hashlib.md5(x).hexdigest() - hash_utf8 = md5_utf8 - elif _algorithm == 'SHA': - def sha_utf8(x): - if isinstance(x, str): - x = x.encode('utf-8') - return hashlib.sha1(x).hexdigest() - hash_utf8 = sha_utf8 - elif _algorithm == 'SHA-256': - def sha256_utf8(x): - if isinstance(x, str): - x = x.encode('utf-8') - return hashlib.sha256(x).hexdigest() - hash_utf8 = sha256_utf8 - elif _algorithm == 'SHA-512': - def sha512_utf8(x): - if isinstance(x, str): - x = x.encode('utf-8') - return hashlib.sha512(x).hexdigest() - hash_utf8 = sha512_utf8 - - KD = lambda s, d: hash_utf8("%s:%s" % (s, d)) - - if hash_utf8 is None: - return None - - # XXX not implemented yet - entdig = None - p_parsed = urlparse(url) - #: path is request-uri defined in RFC 2616 which should not be empty - path = p_parsed.path or "/" - if p_parsed.query: - path += '?' 
+ p_parsed.query - - A1 = '%s:%s:%s' % (self.username, realm, self.password) - A2 = '%s:%s' % (method, path) - - HA1 = hash_utf8(A1) - HA2 = hash_utf8(A2) - - if nonce == self._thread_local.last_nonce: - self._thread_local.nonce_count += 1 - else: - self._thread_local.nonce_count = 1 - ncvalue = '%08x' % self._thread_local.nonce_count - s = str(self._thread_local.nonce_count).encode('utf-8') - s += nonce.encode('utf-8') - s += time.ctime().encode('utf-8') - s += os.urandom(8) - - cnonce = (hashlib.sha1(s).hexdigest()[:16]) - if _algorithm == 'MD5-SESS': - HA1 = hash_utf8('%s:%s:%s' % (HA1, nonce, cnonce)) - - if not qop: - respdig = KD(HA1, "%s:%s" % (nonce, HA2)) - elif qop == 'auth' or 'auth' in qop.split(','): - noncebit = "%s:%s:%s:%s:%s" % ( - nonce, ncvalue, cnonce, 'auth', HA2 - ) - respdig = KD(HA1, noncebit) - else: - # XXX handle auth-int. - return None - - self._thread_local.last_nonce = nonce - - # XXX should the partial digests be encoded too? - base = 'username="%s", realm="%s", nonce="%s", uri="%s", ' \ - 'response="%s"' % (self.username, realm, nonce, path, respdig) - if opaque: - base += ', opaque="%s"' % opaque - if algorithm: - base += ', algorithm="%s"' % algorithm - if entdig: - base += ', digest="%s"' % entdig - if qop: - base += ', qop="auth", nc=%s, cnonce="%s"' % (ncvalue, cnonce) - - return 'Digest %s' % (base) - - def handle_redirect(self, r, **kwargs): - """Reset num_401_calls counter on redirects.""" - if r.is_redirect: - self._thread_local.num_401_calls = 1 - - def handle_401(self, r, **kwargs): - """ - Takes the given response and tries digest-auth, if needed. - - :rtype: requests.Response - """ - - # If response is not 4xx, do not auth - # See https://github.com/psf/requests/issues/3772 - if not 400 <= r.status_code < 500: - self._thread_local.num_401_calls = 1 - return r - - if self._thread_local.pos is not None: - # Rewind the file position indicator of the body to where - # it was to resend the request. - r.request.body.seek(self._thread_local.pos) - s_auth = r.headers.get('www-authenticate', '') - - if 'digest' in s_auth.lower() and self._thread_local.num_401_calls < 2: - - self._thread_local.num_401_calls += 1 - pat = re.compile(r'digest ', flags=re.IGNORECASE) - self._thread_local.chal = parse_dict_header(pat.sub('', s_auth, count=1)) - - # Consume content and release the original connection - # to allow our new request to reuse the same one. - r.content - r.close() - prep = r.request.copy() - extract_cookies_to_jar(prep._cookies, r.request, r.raw) - prep.prepare_cookies(prep._cookies) - - prep.headers['Authorization'] = self.build_digest_header( - prep.method, prep.url) - _r = r.connection.send(prep, **kwargs) - _r.history.append(r) - _r.request = prep - - return _r - - self._thread_local.num_401_calls = 1 - return r - - def __call__(self, r): - # Initialize per-thread state, if needed - self.init_per_thread_state() - # If we have a saved nonce, skip the 401 - if self._thread_local.last_nonce: - r.headers['Authorization'] = self.build_digest_header(r.method, r.url) - try: - self._thread_local.pos = r.body.tell() - except AttributeError: - # In the case of HTTPDigestAuth being reused and the body of - # the previous request was a file-like object, pos has the - # file position of the previous body. Ensure it's set to - # None. 
- self._thread_local.pos = None - r.register_hook('response', self.handle_401) - r.register_hook('response', self.handle_redirect) - self._thread_local.num_401_calls = 1 - - return r - - def __eq__(self, other): - return all([ - self.username == getattr(other, 'username', None), - self.password == getattr(other, 'password', None) - ]) - - def __ne__(self, other): - return not self == other diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/rule.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/rule.py deleted file mode 100644 index ce4754f6a8c0de77f1abd6bd61ad5c4dd9882cba..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/rule.py +++ /dev/null @@ -1,115 +0,0 @@ -from typing import Union - -from .align import AlignMethod -from .cells import cell_len, set_cell_size -from .console import Console, ConsoleOptions, RenderResult -from .jupyter import JupyterMixin -from .style import Style -from .text import Text - - -class Rule(JupyterMixin): - """A console renderable to draw a horizontal rule (line). - - Args: - title (Union[str, Text], optional): Text to render in the rule. Defaults to "". - characters (str, optional): Character(s) used to draw the line. Defaults to "─". - style (StyleType, optional): Style of Rule. Defaults to "rule.line". - end (str, optional): Character at end of Rule. defaults to "\\\\n" - align (str, optional): How to align the title, one of "left", "center", or "right". Defaults to "center". - """ - - def __init__( - self, - title: Union[str, Text] = "", - *, - characters: str = "─", - style: Union[str, Style] = "rule.line", - end: str = "\n", - align: AlignMethod = "center", - ) -> None: - if cell_len(characters) < 1: - raise ValueError( - "'characters' argument must have a cell width of at least 1" - ) - if align not in ("left", "center", "right"): - raise ValueError( - f'invalid value for align, expected "left", "center", "right" (not {align!r})' - ) - self.title = title - self.characters = characters - self.style = style - self.end = end - self.align = align - - def __repr__(self) -> str: - return f"Rule({self.title!r}, {self.characters!r})" - - def __rich_console__( - self, console: Console, options: ConsoleOptions - ) -> RenderResult: - width = options.max_width - - # Python3.6 doesn't have an isascii method on str - isascii = getattr(str, "isascii", None) or ( - lambda s: all(ord(c) < 128 for c in s) - ) - characters = ( - "-" - if (options.ascii_only and not isascii(self.characters)) - else self.characters - ) - - chars_len = cell_len(characters) - if not self.title: - rule_text = Text(characters * ((width // chars_len) + 1), self.style) - rule_text.truncate(width) - rule_text.plain = set_cell_size(rule_text.plain, width) - yield rule_text - return - - if isinstance(self.title, Text): - title_text = self.title - else: - title_text = console.render_str(self.title, style="rule.text") - - title_text.plain = title_text.plain.replace("\n", " ") - title_text.expand_tabs() - rule_text = Text(end=self.end) - - if self.align == "center": - title_text.truncate(width - 4, overflow="ellipsis") - side_width = (width - cell_len(title_text.plain)) // 2 - left = Text(characters * (side_width // chars_len + 1)) - left.truncate(side_width - 1) - right_length = width - cell_len(left.plain) - cell_len(title_text.plain) - right = Text(characters * (side_width // chars_len + 1)) - right.truncate(right_length) - 
rule_text.append(left.plain + " ", self.style) - rule_text.append(title_text) - rule_text.append(" " + right.plain, self.style) - elif self.align == "left": - title_text.truncate(width - 2, overflow="ellipsis") - rule_text.append(title_text) - rule_text.append(" ") - rule_text.append(characters * (width - rule_text.cell_len), self.style) - elif self.align == "right": - title_text.truncate(width - 2, overflow="ellipsis") - rule_text.append(characters * (width - title_text.cell_len - 1), self.style) - rule_text.append(" ") - rule_text.append(title_text) - - rule_text.plain = set_cell_size(rule_text.plain, width) - yield rule_text - - -if __name__ == "__main__": # pragma: no cover - from pip._vendor.rich.console import Console - import sys - - try: - text = sys.argv[1] - except IndexError: - text = "Hello, World" - console = Console() - console.print(Rule(title=text)) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/tree.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/tree.py deleted file mode 100644 index c5ec27da93223300dd22648d29f53fd79797aae6..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/tree.py +++ /dev/null @@ -1,249 +0,0 @@ -from typing import Iterator, List, Optional, Tuple - -from ._loop import loop_first, loop_last -from .console import Console, ConsoleOptions, RenderableType, RenderResult -from .jupyter import JupyterMixin -from .measure import Measurement -from .segment import Segment -from .style import Style, StyleStack, StyleType -from .styled import Styled - - -class Tree(JupyterMixin): - """A renderable for a tree structure. - - Args: - label (RenderableType): The renderable or str for the tree label. - style (StyleType, optional): Style of this tree. Defaults to "tree". - guide_style (StyleType, optional): Style of the guide lines. Defaults to "tree.line". - expanded (bool, optional): Also display children. Defaults to True. - highlight (bool, optional): Highlight renderable (if str). Defaults to False. - """ - - def __init__( - self, - label: RenderableType, - *, - style: StyleType = "tree", - guide_style: StyleType = "tree.line", - expanded: bool = True, - highlight: bool = False, - hide_root: bool = False, - ) -> None: - self.label = label - self.style = style - self.guide_style = guide_style - self.children: List[Tree] = [] - self.expanded = expanded - self.highlight = highlight - self.hide_root = hide_root - - def add( - self, - label: RenderableType, - *, - style: Optional[StyleType] = None, - guide_style: Optional[StyleType] = None, - expanded: bool = True, - highlight: bool = False, - ) -> "Tree": - """Add a child tree. - - Args: - label (RenderableType): The renderable or str for the tree label. - style (StyleType, optional): Style of this tree. Defaults to "tree". - guide_style (StyleType, optional): Style of the guide lines. Defaults to "tree.line". - expanded (bool, optional): Also display children. Defaults to True. - highlight (Optional[bool], optional): Highlight renderable (if str). Defaults to False. - - Returns: - Tree: A new child Tree, which may be further modified. 
- """ - node = Tree( - label, - style=self.style if style is None else style, - guide_style=self.guide_style if guide_style is None else guide_style, - expanded=expanded, - highlight=self.highlight if highlight is None else highlight, - ) - self.children.append(node) - return node - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - - stack: List[Iterator[Tuple[bool, Tree]]] = [] - pop = stack.pop - push = stack.append - new_line = Segment.line() - - get_style = console.get_style - null_style = Style.null() - guide_style = get_style(self.guide_style, default="") or null_style - SPACE, CONTINUE, FORK, END = range(4) - - ASCII_GUIDES = (" ", "| ", "+-- ", "`-- ") - TREE_GUIDES = [ - (" ", "│ ", "├── ", "└── "), - (" ", "┃ ", "┣━━ ", "┗━━ "), - (" ", "║ ", "╠══ ", "╚══ "), - ] - _Segment = Segment - - def make_guide(index: int, style: Style) -> Segment: - """Make a Segment for a level of the guide lines.""" - if options.ascii_only: - line = ASCII_GUIDES[index] - else: - guide = 1 if style.bold else (2 if style.underline2 else 0) - line = TREE_GUIDES[0 if options.legacy_windows else guide][index] - return _Segment(line, style) - - levels: List[Segment] = [make_guide(CONTINUE, guide_style)] - push(iter(loop_last([self]))) - - guide_style_stack = StyleStack(get_style(self.guide_style)) - style_stack = StyleStack(get_style(self.style)) - remove_guide_styles = Style(bold=False, underline2=False) - - depth = 0 - - while stack: - stack_node = pop() - try: - last, node = next(stack_node) - except StopIteration: - levels.pop() - if levels: - guide_style = levels[-1].style or null_style - levels[-1] = make_guide(FORK, guide_style) - guide_style_stack.pop() - style_stack.pop() - continue - push(stack_node) - if last: - levels[-1] = make_guide(END, levels[-1].style or null_style) - - guide_style = guide_style_stack.current + get_style(node.guide_style) - style = style_stack.current + get_style(node.style) - prefix = levels[(2 if self.hide_root else 1) :] - renderable_lines = console.render_lines( - Styled(node.label, style), - options.update( - width=options.max_width - - sum(level.cell_length for level in prefix), - highlight=self.highlight, - height=None, - ), - ) - - if not (depth == 0 and self.hide_root): - for first, line in loop_first(renderable_lines): - if prefix: - yield from _Segment.apply_style( - prefix, - style.background_style, - post_style=remove_guide_styles, - ) - yield from line - yield new_line - if first and prefix: - prefix[-1] = make_guide( - SPACE if last else CONTINUE, prefix[-1].style or null_style - ) - - if node.expanded and node.children: - levels[-1] = make_guide( - SPACE if last else CONTINUE, levels[-1].style or null_style - ) - levels.append( - make_guide(END if len(node.children) == 1 else FORK, guide_style) - ) - style_stack.push(get_style(node.style)) - guide_style_stack.push(get_style(node.guide_style)) - push(iter(loop_last(node.children))) - depth += 1 - - def __rich_measure__( - self, console: "Console", options: "ConsoleOptions" - ) -> "Measurement": - stack: List[Iterator[Tree]] = [iter([self])] - pop = stack.pop - push = stack.append - minimum = 0 - maximum = 0 - measure = Measurement.get - level = 0 - while stack: - iter_tree = pop() - try: - tree = next(iter_tree) - except StopIteration: - level -= 1 - continue - push(iter_tree) - min_measure, max_measure = measure(console, options, tree.label) - indent = level * 4 - minimum = max(min_measure + indent, minimum) - maximum = max(max_measure + indent, maximum) - 
if tree.expanded and tree.children: - push(iter(tree.children)) - level += 1 - return Measurement(minimum, maximum) - - -if __name__ == "__main__": # pragma: no cover - - from pip._vendor.rich.console import Group - from pip._vendor.rich.markdown import Markdown - from pip._vendor.rich.panel import Panel - from pip._vendor.rich.syntax import Syntax - from pip._vendor.rich.table import Table - - table = Table(row_styles=["", "dim"]) - - table.add_column("Released", style="cyan", no_wrap=True) - table.add_column("Title", style="magenta") - table.add_column("Box Office", justify="right", style="green") - - table.add_row("Dec 20, 2019", "Star Wars: The Rise of Skywalker", "$952,110,690") - table.add_row("May 25, 2018", "Solo: A Star Wars Story", "$393,151,347") - table.add_row("Dec 15, 2017", "Star Wars Ep. V111: The Last Jedi", "$1,332,539,889") - table.add_row("Dec 16, 2016", "Rogue One: A Star Wars Story", "$1,332,439,889") - - code = """\ -class Segment(NamedTuple): - text: str = "" - style: Optional[Style] = None - is_control: bool = False -""" - syntax = Syntax(code, "python", theme="monokai", line_numbers=True) - - markdown = Markdown( - """\ -### example.md -> Hello, World! -> -> Markdown _all_ the things -""" - ) - - root = Tree("🌲 [b green]Rich Tree", highlight=True, hide_root=True) - - node = root.add(":file_folder: Renderables", guide_style="red") - simple_node = node.add(":file_folder: [bold yellow]Atomic", guide_style="uu green") - simple_node.add(Group("📄 Syntax", syntax)) - simple_node.add(Group("📄 Markdown", Panel(markdown, border_style="green"))) - - containers_node = node.add( - ":file_folder: [bold magenta]Containers", guide_style="bold magenta" - ) - containers_node.expanded = True - panel = Panel.fit("Just a panel", border_style="red") - containers_node.add(Group("📄 Panels", panel)) - - containers_node.add(Group("📄 [b magenta]Table", table)) - - console = Console() - console.print(root) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/rule.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/rule.py deleted file mode 100644 index fb3d43271dc1c20cc79bd628df0544e92040e401..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/rich/rule.py +++ /dev/null @@ -1,130 +0,0 @@ -from typing import Union - -from .align import AlignMethod -from .cells import cell_len, set_cell_size -from .console import Console, ConsoleOptions, RenderResult -from .jupyter import JupyterMixin -from .measure import Measurement -from .style import Style -from .text import Text - - -class Rule(JupyterMixin): - """A console renderable to draw a horizontal rule (line). - - Args: - title (Union[str, Text], optional): Text to render in the rule. Defaults to "". - characters (str, optional): Character(s) used to draw the line. Defaults to "─". - style (StyleType, optional): Style of Rule. Defaults to "rule.line". - end (str, optional): Character at end of Rule. defaults to "\\\\n" - align (str, optional): How to align the title, one of "left", "center", or "right". Defaults to "center". 
- """ - - def __init__( - self, - title: Union[str, Text] = "", - *, - characters: str = "─", - style: Union[str, Style] = "rule.line", - end: str = "\n", - align: AlignMethod = "center", - ) -> None: - if cell_len(characters) < 1: - raise ValueError( - "'characters' argument must have a cell width of at least 1" - ) - if align not in ("left", "center", "right"): - raise ValueError( - f'invalid value for align, expected "left", "center", "right" (not {align!r})' - ) - self.title = title - self.characters = characters - self.style = style - self.end = end - self.align = align - - def __repr__(self) -> str: - return f"Rule({self.title!r}, {self.characters!r})" - - def __rich_console__( - self, console: Console, options: ConsoleOptions - ) -> RenderResult: - width = options.max_width - - characters = ( - "-" - if (options.ascii_only and not self.characters.isascii()) - else self.characters - ) - - chars_len = cell_len(characters) - if not self.title: - yield self._rule_line(chars_len, width) - return - - if isinstance(self.title, Text): - title_text = self.title - else: - title_text = console.render_str(self.title, style="rule.text") - - title_text.plain = title_text.plain.replace("\n", " ") - title_text.expand_tabs() - - required_space = 4 if self.align == "center" else 2 - truncate_width = max(0, width - required_space) - if not truncate_width: - yield self._rule_line(chars_len, width) - return - - rule_text = Text(end=self.end) - if self.align == "center": - title_text.truncate(truncate_width, overflow="ellipsis") - side_width = (width - cell_len(title_text.plain)) // 2 - left = Text(characters * (side_width // chars_len + 1)) - left.truncate(side_width - 1) - right_length = width - cell_len(left.plain) - cell_len(title_text.plain) - right = Text(characters * (side_width // chars_len + 1)) - right.truncate(right_length) - rule_text.append(left.plain + " ", self.style) - rule_text.append(title_text) - rule_text.append(" " + right.plain, self.style) - elif self.align == "left": - title_text.truncate(truncate_width, overflow="ellipsis") - rule_text.append(title_text) - rule_text.append(" ") - rule_text.append(characters * (width - rule_text.cell_len), self.style) - elif self.align == "right": - title_text.truncate(truncate_width, overflow="ellipsis") - rule_text.append(characters * (width - title_text.cell_len - 1), self.style) - rule_text.append(" ") - rule_text.append(title_text) - - rule_text.plain = set_cell_size(rule_text.plain, width) - yield rule_text - - def _rule_line(self, chars_len: int, width: int) -> Text: - rule_text = Text(self.characters * ((width // chars_len) + 1), self.style) - rule_text.truncate(width) - rule_text.plain = set_cell_size(rule_text.plain, width) - return rule_text - - def __rich_measure__( - self, console: Console, options: ConsoleOptions - ) -> Measurement: - return Measurement(1, 1) - - -if __name__ == "__main__": # pragma: no cover - import sys - - from rich.console import Console - - try: - text = sys.argv[1] - except IndexError: - text = "Hello, World" - console = Console() - console.print(Rule(title=text)) - - console = Console() - console.print(Rule("foo"), width=4) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/urllib3/connection.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/urllib3/connection.py deleted file mode 100644 index 4a71225ce6e5bc81ffa6c79160411016f3a65240..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/urllib3/connection.py +++ 
/dev/null @@ -1,906 +0,0 @@ -from __future__ import annotations - -import datetime -import logging -import os -import re -import socket -import sys -import typing -import warnings -from http.client import HTTPConnection as _HTTPConnection -from http.client import HTTPException as HTTPException # noqa: F401 -from http.client import ResponseNotReady -from socket import timeout as SocketTimeout - -if typing.TYPE_CHECKING: - from typing_extensions import Literal - - from .response import HTTPResponse - from .util.ssl_ import _TYPE_PEER_CERT_RET_DICT - from .util.ssltransport import SSLTransport - -from ._collections import HTTPHeaderDict -from .util.response import assert_header_parsing -from .util.timeout import _DEFAULT_TIMEOUT, _TYPE_TIMEOUT, Timeout -from .util.util import to_str -from .util.wait import wait_for_read - -try: # Compiled with SSL? - import ssl - - BaseSSLError = ssl.SSLError -except (ImportError, AttributeError): - ssl = None # type: ignore[assignment] - - class BaseSSLError(BaseException): # type: ignore[no-redef] - pass - - -from ._base_connection import _TYPE_BODY -from ._base_connection import ProxyConfig as ProxyConfig -from ._base_connection import _ResponseOptions as _ResponseOptions -from ._version import __version__ -from .exceptions import ( - ConnectTimeoutError, - HeaderParsingError, - NameResolutionError, - NewConnectionError, - ProxyError, - SystemTimeWarning, -) -from .util import SKIP_HEADER, SKIPPABLE_HEADERS, connection, ssl_ -from .util.request import body_to_chunks -from .util.ssl_ import assert_fingerprint as _assert_fingerprint -from .util.ssl_ import ( - create_urllib3_context, - is_ipaddress, - resolve_cert_reqs, - resolve_ssl_version, - ssl_wrap_socket, -) -from .util.ssl_match_hostname import CertificateError, match_hostname -from .util.url import Url - -# Not a no-op, we're adding this to the namespace so it can be imported. -ConnectionError = ConnectionError -BrokenPipeError = BrokenPipeError - - -log = logging.getLogger(__name__) - -port_by_scheme = {"http": 80, "https": 443} - -# When it comes time to update this value as a part of regular maintenance -# (ie test_recent_date is failing) update it to ~6 months before the current date. -RECENT_DATE = datetime.date(2022, 1, 1) - -_CONTAINS_CONTROL_CHAR_RE = re.compile(r"[^-!#$%&'*+.^_`|~0-9a-zA-Z]") - -_HAS_SYS_AUDIT = hasattr(sys, "audit") - - -class HTTPConnection(_HTTPConnection): - """ - Based on :class:`http.client.HTTPConnection` but provides an extra constructor - backwards-compatibility layer between older and newer Pythons. - - Additional keyword parameters are used to configure attributes of the connection. - Accepted parameters include: - - - ``source_address``: Set the source address for the current connection. - - ``socket_options``: Set specific options on the underlying socket. If not specified, then - defaults are loaded from ``HTTPConnection.default_socket_options`` which includes disabling - Nagle's algorithm (sets TCP_NODELAY to 1) unless the connection is behind a proxy. - - For example, if you wish to enable TCP Keep Alive in addition to the defaults, - you might pass: - - .. code-block:: python - - HTTPConnection.default_socket_options + [ - (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1), - ] - - Or you may want to disable the defaults by passing an empty list (e.g., ``[]``). - """ - - default_port: typing.ClassVar[int] = port_by_scheme["http"] # type: ignore[misc] - - #: Disable Nagle's algorithm by default. 
- #: ``[(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)]`` - default_socket_options: typing.ClassVar[connection._TYPE_SOCKET_OPTIONS] = [ - (socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) - ] - - #: Whether this connection verifies the host's certificate. - is_verified: bool = False - - #: Whether this proxy connection verified the proxy host's certificate. - # If no proxy is currently connected to the value will be ``None``. - proxy_is_verified: bool | None = None - - blocksize: int - source_address: tuple[str, int] | None - socket_options: connection._TYPE_SOCKET_OPTIONS | None - - _has_connected_to_proxy: bool - _response_options: _ResponseOptions | None - _tunnel_host: str | None - _tunnel_port: int | None - _tunnel_scheme: str | None - - def __init__( - self, - host: str, - port: int | None = None, - *, - timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, - source_address: tuple[str, int] | None = None, - blocksize: int = 16384, - socket_options: None - | (connection._TYPE_SOCKET_OPTIONS) = default_socket_options, - proxy: Url | None = None, - proxy_config: ProxyConfig | None = None, - ) -> None: - super().__init__( - host=host, - port=port, - timeout=Timeout.resolve_default_timeout(timeout), - source_address=source_address, - blocksize=blocksize, - ) - self.socket_options = socket_options - self.proxy = proxy - self.proxy_config = proxy_config - - self._has_connected_to_proxy = False - self._response_options = None - self._tunnel_host: str | None = None - self._tunnel_port: int | None = None - self._tunnel_scheme: str | None = None - - # https://github.com/python/mypy/issues/4125 - # Mypy treats this as LSP violation, which is considered a bug. - # If `host` is made a property it violates LSP, because a writeable attribute is overridden with a read-only one. - # However, there is also a `host` setter so LSP is not violated. - # Potentially, a `@host.deleter` might be needed depending on how this issue will be fixed. - @property - def host(self) -> str: - """ - Getter method to remove any trailing dots that indicate the hostname is an FQDN. - - In general, SSL certificates don't include the trailing dot indicating a - fully-qualified domain name, and thus, they don't validate properly when - checked against a domain name that includes the dot. In addition, some - servers may not expect to receive the trailing dot when provided. - - However, the hostname with trailing dot is critical to DNS resolution; doing a - lookup with the trailing dot will properly only resolve the appropriate FQDN, - whereas a lookup without a trailing dot will search the system's search domain - list. Thus, it's important to keep the original host around for use only in - those cases where it's appropriate (i.e., when doing DNS lookup to establish the - actual TCP connection across which we're going to send HTTP requests). - """ - return self._dns_host.rstrip(".") - - @host.setter - def host(self, value: str) -> None: - """ - Setter for the `host` property. - - We assume that only urllib3 uses the _dns_host attribute; httplib itself - only uses `host`, and it seems reasonable that other libraries follow suit. - """ - self._dns_host = value - - def _new_conn(self) -> socket.socket: - """Establish a socket connection and set nodelay settings on it. - - :return: New socket connection. 
- """ - try: - sock = connection.create_connection( - (self._dns_host, self.port), - self.timeout, - source_address=self.source_address, - socket_options=self.socket_options, - ) - except socket.gaierror as e: - raise NameResolutionError(self.host, self, e) from e - except SocketTimeout as e: - raise ConnectTimeoutError( - self, - f"Connection to {self.host} timed out. (connect timeout={self.timeout})", - ) from e - - except OSError as e: - raise NewConnectionError( - self, f"Failed to establish a new connection: {e}" - ) from e - - # Audit hooks are only available in Python 3.8+ - if _HAS_SYS_AUDIT: - sys.audit("http.client.connect", self, self.host, self.port) - - return sock - - def set_tunnel( - self, - host: str, - port: int | None = None, - headers: typing.Mapping[str, str] | None = None, - scheme: str = "http", - ) -> None: - if scheme not in ("http", "https"): - raise ValueError( - f"Invalid proxy scheme for tunneling: {scheme!r}, must be either 'http' or 'https'" - ) - super().set_tunnel(host, port=port, headers=headers) - self._tunnel_scheme = scheme - - def connect(self) -> None: - self.sock = self._new_conn() - if self._tunnel_host: - # If we're tunneling it means we're connected to our proxy. - self._has_connected_to_proxy = True - - # TODO: Fix tunnel so it doesn't depend on self.sock state. - self._tunnel() # type: ignore[attr-defined] - - # If there's a proxy to be connected to we are fully connected. - # This is set twice (once above and here) due to forwarding proxies - # not using tunnelling. - self._has_connected_to_proxy = bool(self.proxy) - - @property - def is_closed(self) -> bool: - return self.sock is None - - @property - def is_connected(self) -> bool: - if self.sock is None: - return False - return not wait_for_read(self.sock, timeout=0.0) - - @property - def has_connected_to_proxy(self) -> bool: - return self._has_connected_to_proxy - - def close(self) -> None: - try: - super().close() - finally: - # Reset all stateful properties so connection - # can be re-used without leaking prior configs. - self.sock = None - self.is_verified = False - self.proxy_is_verified = None - self._has_connected_to_proxy = False - self._response_options = None - self._tunnel_host = None - self._tunnel_port = None - self._tunnel_scheme = None - - def putrequest( - self, - method: str, - url: str, - skip_host: bool = False, - skip_accept_encoding: bool = False, - ) -> None: - """""" - # Empty docstring because the indentation of CPython's implementation - # is broken but we don't want this method in our documentation. - match = _CONTAINS_CONTROL_CHAR_RE.search(method) - if match: - raise ValueError( - f"Method cannot contain non-token characters {method!r} (found at least {match.group()!r})" - ) - - return super().putrequest( - method, url, skip_host=skip_host, skip_accept_encoding=skip_accept_encoding - ) - - def putheader(self, header: str, *values: str) -> None: - """""" - if not any(isinstance(v, str) and v == SKIP_HEADER for v in values): - super().putheader(header, *values) - elif to_str(header.lower()) not in SKIPPABLE_HEADERS: - skippable_headers = "', '".join( - [str.title(header) for header in sorted(SKIPPABLE_HEADERS)] - ) - raise ValueError( - f"urllib3.util.SKIP_HEADER only supports '{skippable_headers}'" - ) - - # `request` method's signature intentionally violates LSP. - # urllib3's API is different from `http.client.HTTPConnection` and the subclassing is only incidental. 
- def request( # type: ignore[override] - self, - method: str, - url: str, - body: _TYPE_BODY | None = None, - headers: typing.Mapping[str, str] | None = None, - *, - chunked: bool = False, - preload_content: bool = True, - decode_content: bool = True, - enforce_content_length: bool = True, - ) -> None: - # Update the inner socket's timeout value to send the request. - # This only triggers if the connection is re-used. - if self.sock is not None: - self.sock.settimeout(self.timeout) - - # Store these values to be fed into the HTTPResponse - # object later. TODO: Remove this in favor of a real - # HTTP lifecycle mechanism. - - # We have to store these before we call .request() - # because sometimes we can still salvage a response - # off the wire even if we aren't able to completely - # send the request body. - self._response_options = _ResponseOptions( - request_method=method, - request_url=url, - preload_content=preload_content, - decode_content=decode_content, - enforce_content_length=enforce_content_length, - ) - - if headers is None: - headers = {} - header_keys = frozenset(to_str(k.lower()) for k in headers) - skip_accept_encoding = "accept-encoding" in header_keys - skip_host = "host" in header_keys - self.putrequest( - method, url, skip_accept_encoding=skip_accept_encoding, skip_host=skip_host - ) - - # Transform the body into an iterable of sendall()-able chunks - # and detect if an explicit Content-Length is doable. - chunks_and_cl = body_to_chunks(body, method=method, blocksize=self.blocksize) - chunks = chunks_and_cl.chunks - content_length = chunks_and_cl.content_length - - # When chunked is explicit set to 'True' we respect that. - if chunked: - if "transfer-encoding" not in header_keys: - self.putheader("Transfer-Encoding", "chunked") - else: - # Detect whether a framing mechanism is already in use. If so - # we respect that value, otherwise we pick chunked vs content-length - # depending on the type of 'body'. - if "content-length" in header_keys: - chunked = False - elif "transfer-encoding" in header_keys: - chunked = True - - # Otherwise we go off the recommendation of 'body_to_chunks()'. - else: - chunked = False - if content_length is None: - if chunks is not None: - chunked = True - self.putheader("Transfer-Encoding", "chunked") - else: - self.putheader("Content-Length", str(content_length)) - - # Now that framing headers are out of the way we send all the other headers. - if "user-agent" not in header_keys: - self.putheader("User-Agent", _get_default_user_agent()) - for header, value in headers.items(): - self.putheader(header, value) - self.endheaders() - - # If we're given a body we start sending that in chunks. - if chunks is not None: - for chunk in chunks: - # Sending empty chunks isn't allowed for TE: chunked - # as it indicates the end of the body. - if not chunk: - continue - if isinstance(chunk, str): - chunk = chunk.encode("utf-8") - if chunked: - self.send(b"%x\r\n%b\r\n" % (len(chunk), chunk)) - else: - self.send(chunk) - - # Regardless of whether we have a body or not, if we're in - # chunked mode we want to send an explicit empty chunk. 
- if chunked: - self.send(b"0\r\n\r\n") - - def request_chunked( - self, - method: str, - url: str, - body: _TYPE_BODY | None = None, - headers: typing.Mapping[str, str] | None = None, - ) -> None: - """ - Alternative to the common request method, which sends the - body with chunked encoding and not as one block - """ - warnings.warn( - "HTTPConnection.request_chunked() is deprecated and will be removed " - "in urllib3 v2.1.0. Instead use HTTPConnection.request(..., chunked=True).", - category=DeprecationWarning, - stacklevel=2, - ) - self.request(method, url, body=body, headers=headers, chunked=True) - - def getresponse( # type: ignore[override] - self, - ) -> HTTPResponse: - """ - Get the response from the server. - - If the HTTPConnection is in the correct state, returns an instance of HTTPResponse or of whatever object is returned by the response_class variable. - - If a request has not been sent or if a previous response has not be handled, ResponseNotReady is raised. If the HTTP response indicates that the connection should be closed, then it will be closed before the response is returned. When the connection is closed, the underlying socket is closed. - """ - # Raise the same error as http.client.HTTPConnection - if self._response_options is None: - raise ResponseNotReady() - - # Reset this attribute for being used again. - resp_options = self._response_options - self._response_options = None - - # Since the connection's timeout value may have been updated - # we need to set the timeout on the socket. - self.sock.settimeout(self.timeout) - - # This is needed here to avoid circular import errors - from .response import HTTPResponse - - # Get the response from http.client.HTTPConnection - httplib_response = super().getresponse() - - try: - assert_header_parsing(httplib_response.msg) - except (HeaderParsingError, TypeError) as hpe: - log.warning( - "Failed to parse headers (url=%s): %s", - _url_from_connection(self, resp_options.request_url), - hpe, - exc_info=True, - ) - - headers = HTTPHeaderDict(httplib_response.msg.items()) - - response = HTTPResponse( - body=httplib_response, - headers=headers, - status=httplib_response.status, - version=httplib_response.version, - reason=httplib_response.reason, - preload_content=resp_options.preload_content, - decode_content=resp_options.decode_content, - original_response=httplib_response, - enforce_content_length=resp_options.enforce_content_length, - request_method=resp_options.request_method, - request_url=resp_options.request_url, - ) - return response - - -class HTTPSConnection(HTTPConnection): - """ - Many of the parameters to this constructor are passed to the underlying SSL - socket by means of :py:func:`urllib3.util.ssl_wrap_socket`. 
- """ - - default_port = port_by_scheme["https"] # type: ignore[misc] - - cert_reqs: int | str | None = None - ca_certs: str | None = None - ca_cert_dir: str | None = None - ca_cert_data: None | str | bytes = None - ssl_version: int | str | None = None - ssl_minimum_version: int | None = None - ssl_maximum_version: int | None = None - assert_fingerprint: str | None = None - - def __init__( - self, - host: str, - port: int | None = None, - *, - timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, - source_address: tuple[str, int] | None = None, - blocksize: int = 16384, - socket_options: None - | (connection._TYPE_SOCKET_OPTIONS) = HTTPConnection.default_socket_options, - proxy: Url | None = None, - proxy_config: ProxyConfig | None = None, - cert_reqs: int | str | None = None, - assert_hostname: None | str | Literal[False] = None, - assert_fingerprint: str | None = None, - server_hostname: str | None = None, - ssl_context: ssl.SSLContext | None = None, - ca_certs: str | None = None, - ca_cert_dir: str | None = None, - ca_cert_data: None | str | bytes = None, - ssl_minimum_version: int | None = None, - ssl_maximum_version: int | None = None, - ssl_version: int | str | None = None, # Deprecated - cert_file: str | None = None, - key_file: str | None = None, - key_password: str | None = None, - ) -> None: - super().__init__( - host, - port=port, - timeout=timeout, - source_address=source_address, - blocksize=blocksize, - socket_options=socket_options, - proxy=proxy, - proxy_config=proxy_config, - ) - - self.key_file = key_file - self.cert_file = cert_file - self.key_password = key_password - self.ssl_context = ssl_context - self.server_hostname = server_hostname - self.assert_hostname = assert_hostname - self.assert_fingerprint = assert_fingerprint - self.ssl_version = ssl_version - self.ssl_minimum_version = ssl_minimum_version - self.ssl_maximum_version = ssl_maximum_version - self.ca_certs = ca_certs and os.path.expanduser(ca_certs) - self.ca_cert_dir = ca_cert_dir and os.path.expanduser(ca_cert_dir) - self.ca_cert_data = ca_cert_data - - # cert_reqs depends on ssl_context so calculate last. - if cert_reqs is None: - if self.ssl_context is not None: - cert_reqs = self.ssl_context.verify_mode - else: - cert_reqs = resolve_cert_reqs(None) - self.cert_reqs = cert_reqs - - def set_cert( - self, - key_file: str | None = None, - cert_file: str | None = None, - cert_reqs: int | str | None = None, - key_password: str | None = None, - ca_certs: str | None = None, - assert_hostname: None | str | Literal[False] = None, - assert_fingerprint: str | None = None, - ca_cert_dir: str | None = None, - ca_cert_data: None | str | bytes = None, - ) -> None: - """ - This method should only be called once, before the connection is used. - """ - warnings.warn( - "HTTPSConnection.set_cert() is deprecated and will be removed " - "in urllib3 v2.1.0. Instead provide the parameters to the " - "HTTPSConnection constructor.", - category=DeprecationWarning, - stacklevel=2, - ) - - # If cert_reqs is not provided we'll assume CERT_REQUIRED unless we also - # have an SSLContext object in which case we'll use its verify_mode. 
- if cert_reqs is None: - if self.ssl_context is not None: - cert_reqs = self.ssl_context.verify_mode - else: - cert_reqs = resolve_cert_reqs(None) - - self.key_file = key_file - self.cert_file = cert_file - self.cert_reqs = cert_reqs - self.key_password = key_password - self.assert_hostname = assert_hostname - self.assert_fingerprint = assert_fingerprint - self.ca_certs = ca_certs and os.path.expanduser(ca_certs) - self.ca_cert_dir = ca_cert_dir and os.path.expanduser(ca_cert_dir) - self.ca_cert_data = ca_cert_data - - def connect(self) -> None: - sock: socket.socket | ssl.SSLSocket - self.sock = sock = self._new_conn() - server_hostname: str = self.host - tls_in_tls = False - - # Do we need to establish a tunnel? - if self._tunnel_host is not None: - # We're tunneling to an HTTPS origin so need to do TLS-in-TLS. - if self._tunnel_scheme == "https": - self.sock = sock = self._connect_tls_proxy(self.host, sock) - tls_in_tls = True - - # If we're tunneling it means we're connected to our proxy. - self._has_connected_to_proxy = True - - self._tunnel() # type: ignore[attr-defined] - # Override the host with the one we're requesting data from. - server_hostname = self._tunnel_host - - if self.server_hostname is not None: - server_hostname = self.server_hostname - - is_time_off = datetime.date.today() < RECENT_DATE - if is_time_off: - warnings.warn( - ( - f"System time is way off (before {RECENT_DATE}). This will probably " - "lead to SSL verification errors" - ), - SystemTimeWarning, - ) - - sock_and_verified = _ssl_wrap_socket_and_match_hostname( - sock=sock, - cert_reqs=self.cert_reqs, - ssl_version=self.ssl_version, - ssl_minimum_version=self.ssl_minimum_version, - ssl_maximum_version=self.ssl_maximum_version, - ca_certs=self.ca_certs, - ca_cert_dir=self.ca_cert_dir, - ca_cert_data=self.ca_cert_data, - cert_file=self.cert_file, - key_file=self.key_file, - key_password=self.key_password, - server_hostname=server_hostname, - ssl_context=self.ssl_context, - tls_in_tls=tls_in_tls, - assert_hostname=self.assert_hostname, - assert_fingerprint=self.assert_fingerprint, - ) - self.sock = sock_and_verified.socket - self.is_verified = sock_and_verified.is_verified - - # If there's a proxy to be connected to we are fully connected. - # This is set twice (once above and here) due to forwarding proxies - # not using tunnelling. - self._has_connected_to_proxy = bool(self.proxy) - - def _connect_tls_proxy(self, hostname: str, sock: socket.socket) -> ssl.SSLSocket: - """ - Establish a TLS connection to the proxy using the provided SSL context. - """ - # `_connect_tls_proxy` is called when self._tunnel_host is truthy. 
- proxy_config = typing.cast(ProxyConfig, self.proxy_config) - ssl_context = proxy_config.ssl_context - sock_and_verified = _ssl_wrap_socket_and_match_hostname( - sock, - cert_reqs=self.cert_reqs, - ssl_version=self.ssl_version, - ssl_minimum_version=self.ssl_minimum_version, - ssl_maximum_version=self.ssl_maximum_version, - ca_certs=self.ca_certs, - ca_cert_dir=self.ca_cert_dir, - ca_cert_data=self.ca_cert_data, - server_hostname=hostname, - ssl_context=ssl_context, - assert_hostname=proxy_config.assert_hostname, - assert_fingerprint=proxy_config.assert_fingerprint, - # Features that aren't implemented for proxies yet: - cert_file=None, - key_file=None, - key_password=None, - tls_in_tls=False, - ) - self.proxy_is_verified = sock_and_verified.is_verified - return sock_and_verified.socket # type: ignore[return-value] - - -class _WrappedAndVerifiedSocket(typing.NamedTuple): - """ - Wrapped socket and whether the connection is - verified after the TLS handshake - """ - - socket: ssl.SSLSocket | SSLTransport - is_verified: bool - - -def _ssl_wrap_socket_and_match_hostname( - sock: socket.socket, - *, - cert_reqs: None | str | int, - ssl_version: None | str | int, - ssl_minimum_version: int | None, - ssl_maximum_version: int | None, - cert_file: str | None, - key_file: str | None, - key_password: str | None, - ca_certs: str | None, - ca_cert_dir: str | None, - ca_cert_data: None | str | bytes, - assert_hostname: None | str | Literal[False], - assert_fingerprint: str | None, - server_hostname: str | None, - ssl_context: ssl.SSLContext | None, - tls_in_tls: bool = False, -) -> _WrappedAndVerifiedSocket: - """Logic for constructing an SSLContext from all TLS parameters, passing - that down into ssl_wrap_socket, and then doing certificate verification - either via hostname or fingerprint. This function exists to guarantee - that both proxies and targets have the same behavior when connecting via TLS. - """ - default_ssl_context = False - if ssl_context is None: - default_ssl_context = True - context = create_urllib3_context( - ssl_version=resolve_ssl_version(ssl_version), - ssl_minimum_version=ssl_minimum_version, - ssl_maximum_version=ssl_maximum_version, - cert_reqs=resolve_cert_reqs(cert_reqs), - ) - else: - context = ssl_context - - context.verify_mode = resolve_cert_reqs(cert_reqs) - - # In some cases, we want to verify hostnames ourselves - if ( - # `ssl` can't verify fingerprints or alternate hostnames - assert_fingerprint - or assert_hostname - # assert_hostname can be set to False to disable hostname checking - or assert_hostname is False - # We still support OpenSSL 1.0.2, which prevents us from verifying - # hostnames easily: https://github.com/pyca/pyopenssl/pull/933 - or ssl_.IS_PYOPENSSL - or not ssl_.HAS_NEVER_CHECK_COMMON_NAME - ): - context.check_hostname = False - - # Try to load OS default certs if none are given. - # We need to do the hasattr() check for our custom - # pyOpenSSL and SecureTransport SSLContext objects - # because neither support load_default_certs(). - if ( - not ca_certs - and not ca_cert_dir - and not ca_cert_data - and default_ssl_context - and hasattr(context, "load_default_certs") - ): - context.load_default_certs() - - # Ensure that IPv6 addresses are in the proper format and don't have a - # scope ID. Python's SSL module fails to recognize scoped IPv6 addresses - # and interprets them as DNS hostnames. 
- if server_hostname is not None: - normalized = server_hostname.strip("[]") - if "%" in normalized: - normalized = normalized[: normalized.rfind("%")] - if is_ipaddress(normalized): - server_hostname = normalized - - ssl_sock = ssl_wrap_socket( - sock=sock, - keyfile=key_file, - certfile=cert_file, - key_password=key_password, - ca_certs=ca_certs, - ca_cert_dir=ca_cert_dir, - ca_cert_data=ca_cert_data, - server_hostname=server_hostname, - ssl_context=context, - tls_in_tls=tls_in_tls, - ) - - try: - if assert_fingerprint: - _assert_fingerprint( - ssl_sock.getpeercert(binary_form=True), assert_fingerprint - ) - elif ( - context.verify_mode != ssl.CERT_NONE - and not context.check_hostname - and assert_hostname is not False - ): - cert: _TYPE_PEER_CERT_RET_DICT = ssl_sock.getpeercert() # type: ignore[assignment] - - # Need to signal to our match_hostname whether to use 'commonName' or not. - # If we're using our own constructed SSLContext we explicitly set 'False' - # because PyPy hard-codes 'True' from SSLContext.hostname_checks_common_name. - if default_ssl_context: - hostname_checks_common_name = False - else: - hostname_checks_common_name = ( - getattr(context, "hostname_checks_common_name", False) or False - ) - - _match_hostname( - cert, - assert_hostname or server_hostname, # type: ignore[arg-type] - hostname_checks_common_name, - ) - - return _WrappedAndVerifiedSocket( - socket=ssl_sock, - is_verified=context.verify_mode == ssl.CERT_REQUIRED - or bool(assert_fingerprint), - ) - except BaseException: - ssl_sock.close() - raise - - -def _match_hostname( - cert: _TYPE_PEER_CERT_RET_DICT | None, - asserted_hostname: str, - hostname_checks_common_name: bool = False, -) -> None: - # Our upstream implementation of ssl.match_hostname() - # only applies this normalization to IP addresses so it doesn't - # match DNS SANs so we do the same thing! - stripped_hostname = asserted_hostname.strip("[]") - if is_ipaddress(stripped_hostname): - asserted_hostname = stripped_hostname - - try: - match_hostname(cert, asserted_hostname, hostname_checks_common_name) - except CertificateError as e: - log.warning( - "Certificate did not match expected hostname: %s. Certificate: %s", - asserted_hostname, - cert, - ) - # Add cert to exception and reraise so client code can inspect - # the cert when catching the exception, if they want to - e._peer_cert = cert # type: ignore[attr-defined] - raise - - -def _wrap_proxy_error(err: Exception, proxy_scheme: str | None) -> ProxyError: - # Look for the phrase 'wrong version number', if found - # then we should warn the user that we're very sure that - # this proxy is HTTP-only and they have a configuration issue. - error_normalized = " ".join(re.split("[^a-z]", str(err).lower())) - is_likely_http_proxy = ( - "wrong version number" in error_normalized - or "unknown protocol" in error_normalized - ) - http_proxy_warning = ( - ". Your proxy appears to only use HTTP and not HTTPS, " - "try changing your proxy URL to be HTTP. 
See: " - "https://urllib3.readthedocs.io/en/latest/advanced-usage.html" - "#https-proxy-error-http-proxy" - ) - new_err = ProxyError( - f"Unable to connect to proxy" - f"{http_proxy_warning if is_likely_http_proxy and proxy_scheme == 'https' else ''}", - err, - ) - new_err.__cause__ = err - return new_err - - -def _get_default_user_agent() -> str: - return f"python-urllib3/{__version__}" - - -class DummyConnection: - """Used to detect a failed ConnectionCls import.""" - - -if not ssl: - HTTPSConnection = DummyConnection # type: ignore[misc, assignment] # noqa: F811 - - -VerifiedHTTPSConnection = HTTPSConnection - - -def _url_from_connection( - conn: HTTPConnection | HTTPSConnection, path: str | None = None -) -> str: - """Returns the URL from a given connection. This is mainly used for testing and logging.""" - - scheme = "https" if isinstance(conn, HTTPSConnection) else "http" - - return Url(scheme=scheme, host=conn.host, port=conn.port, path=path).url diff --git a/spaces/pycui/RealChar/realtime_ai_character/audio/speech_to_text/__init__.py b/spaces/pycui/RealChar/realtime_ai_character/audio/speech_to_text/__init__.py deleted file mode 100644 index c72a328c3cd1336a38539a26520c80b8c19fd0fa..0000000000000000000000000000000000000000 --- a/spaces/pycui/RealChar/realtime_ai_character/audio/speech_to_text/__init__.py +++ /dev/null @@ -1,21 +0,0 @@ -import os - -from realtime_ai_character.audio.speech_to_text.base import SpeechToText - - -def get_speech_to_text() -> SpeechToText: - use = os.getenv('SPEECH_TO_TEXT_USE', 'LOCAL_WHISPER') - if use == 'GOOGLE': - from realtime_ai_character.audio.speech_to_text.google import Google - Google.initialize() - return Google.get_instance() - elif use == 'LOCAL_WHISPER': - from realtime_ai_character.audio.speech_to_text.whisper import Whisper - Whisper.initialize(use='local') - return Whisper.get_instance() - elif use == 'OPENAI_WHISPER': - from realtime_ai_character.audio.speech_to_text.whisper import Whisper - Whisper.initialize(use='api') - return Whisper.get_instance() - else: - raise NotImplementedError(f'Unknown speech to text engine: {use}') diff --git a/spaces/pyodide-demo/self-hosted/nose.js b/spaces/pyodide-demo/self-hosted/nose.js deleted file mode 100644 index 38e39f28a31cbcc7a9ba347f11ecdd17c693735d..0000000000000000000000000000000000000000 --- a/spaces/pyodide-demo/self-hosted/nose.js +++ /dev/null @@ -1 +0,0 @@ -var Module=typeof globalThis.__pyodide_module!=="undefined"?globalThis.__pyodide_module:{};if(!Module.expectedDataFileDownloads){Module.expectedDataFileDownloads=0}Module.expectedDataFileDownloads++;(function(){var loadPackage=function(metadata){var PACKAGE_PATH="";if(typeof window==="object"){PACKAGE_PATH=window["encodeURIComponent"](window.location.pathname.toString().substring(0,window.location.pathname.toString().lastIndexOf("/"))+"/")}else if(typeof process==="undefined"&&typeof location!=="undefined"){PACKAGE_PATH=encodeURIComponent(location.pathname.toString().substring(0,location.pathname.toString().lastIndexOf("/"))+"/")}var PACKAGE_NAME="nose.data";var REMOTE_PACKAGE_BASE="nose.data";if(typeof Module["locateFilePackage"]==="function"&&!Module["locateFile"]){Module["locateFile"]=Module["locateFilePackage"];err("warning: you defined Module.locateFilePackage, that has been renamed to Module.locateFile (using your locateFilePackage for now)")}var REMOTE_PACKAGE_NAME=Module["locateFile"]?Module["locateFile"](REMOTE_PACKAGE_BASE,""):REMOTE_PACKAGE_BASE;var REMOTE_PACKAGE_SIZE=metadata["remote_package_size"];var 
PACKAGE_UUID=metadata["package_uuid"];function fetchRemotePackage(packageName,packageSize,callback,errback){if(typeof process==="object"){require("fs").readFile(packageName,(function(err,contents){if(err){errback(err)}else{callback(contents.buffer)}}));return}var xhr=new XMLHttpRequest;xhr.open("GET",packageName,true);xhr.responseType="arraybuffer";xhr.onprogress=function(event){var url=packageName;var size=packageSize;if(event.total)size=event.total;if(event.loaded){if(!xhr.addedTotal){xhr.addedTotal=true;if(!Module.dataFileDownloads)Module.dataFileDownloads={};Module.dataFileDownloads[url]={loaded:event.loaded,total:size}}else{Module.dataFileDownloads[url].loaded=event.loaded}var total=0;var loaded=0;var num=0;for(var download in Module.dataFileDownloads){var data=Module.dataFileDownloads[download];total+=data.total;loaded+=data.loaded;num++}total=Math.ceil(total*Module.expectedDataFileDownloads/num);if(Module["setStatus"])Module["setStatus"]("Downloading data... ("+loaded+"/"+total+")")}else if(!Module.dataFileDownloads){if(Module["setStatus"])Module["setStatus"]("Downloading data...")}};xhr.onerror=function(event){throw new Error("NetworkError for: "+packageName)};xhr.onload=function(event){if(xhr.status==200||xhr.status==304||xhr.status==206||xhr.status==0&&xhr.response){var packageData=xhr.response;callback(packageData)}else{throw new Error(xhr.statusText+" : "+xhr.responseURL)}};xhr.send(null)}function handleError(error){console.error("package error:",error)}var fetchedCallback=null;var fetched=Module["getPreloadedPackage"]?Module["getPreloadedPackage"](REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE):null;if(!fetched)fetchRemotePackage(REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE,(function(data){if(fetchedCallback){fetchedCallback(data);fetchedCallback=null}else{fetched=data}}),handleError);function runWithFS(){function assert(check,msg){if(!check)throw msg+(new Error).stack}Module["FS_createPath"]("/","lib",true,true);Module["FS_createPath"]("/lib","python3.9",true,true);Module["FS_createPath"]("/lib/python3.9","site-packages",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","nose",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/nose","ext",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/nose","plugins",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/nose","sphinx",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages/nose","tools",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","nose-1.3.7-py3.9.egg-info",true,true);Module["FS_createPath"]("/","man",true,true);Module["FS_createPath"]("/man","man1",true,true);Module["FS_createPath"]("/","bin",true,true);function processPackageData(arrayBuffer){assert(arrayBuffer,"Loading data file failed.");assert(arrayBuffer instanceof ArrayBuffer,"bad input to processPackageData");var byteArray=new Uint8Array(arrayBuffer);var curr;var 
compressedData={data:null,cachedOffset:298148,cachedIndexes:[-1,-1],cachedChunks:[null,null],offsets:[0,1357,2366,3555,4762,5910,7172,8441,9935,11087,12198,13291,14254,15304,16318,17586,18478,19519,20676,21717,22794,23760,24985,26267,27596,28755,29783,30823,32191,33650,34997,36199,37357,38720,40033,41341,42562,43862,45058,46150,47306,48318,49514,50715,51773,52713,53814,54867,55828,57166,58414,59264,60452,61769,62959,64135,65415,66566,67646,68975,70027,71144,72223,73379,74619,75819,77047,78108,79136,80135,81435,82551,83775,84780,86004,87336,88577,89824,90914,92127,93305,94547,95732,96797,98028,99361,100462,101905,103434,105011,106512,107548,108842,110204,111547,112863,114147,115234,116604,117827,119099,120466,121601,122844,124125,125301,126434,127361,128498,129699,130620,131792,132945,133976,135289,136434,137637,138835,140124,141163,142347,143800,145145,146477,147864,149100,150099,151274,152580,153888,155071,156424,157846,159259,160624,162065,163295,164678,165854,166942,167939,169281,170498,171748,172733,173890,175060,175965,177165,178226,179437,180694,181670,182600,183893,184991,186343,187676,188527,189316,190300,191199,192283,193339,194463,195857,197291,198319,199408,200541,201621,202802,203980,205135,206494,207738,208682,210055,211526,212809,214029,215277,216396,217482,218697,220064,221271,222387,223626,224699,225780,226838,228308,229866,231200,232256,233291,234419,235549,236616,237644,238563,239629,240893,242053,243147,244333,245248,246294,247560,248756,250026,251362,252594,253987,255407,256481,257551,258791,260113,261405,262588,263700,264854,266306,267592,268856,269894,270871,271566,272820,273807,275057,276284,277463,279014,280491,281249,281798,282336,282919,283451,284121,284959,285844,287169,288543,289967,291268,292534,293741,294861,296174,297547],sizes:[1357,1009,1189,1207,1148,1262,1269,1494,1152,1111,1093,963,1050,1014,1268,892,1041,1157,1041,1077,966,1225,1282,1329,1159,1028,1040,1368,1459,1347,1202,1158,1363,1313,1308,1221,1300,1196,1092,1156,1012,1196,1201,1058,940,1101,1053,961,1338,1248,850,1188,1317,1190,1176,1280,1151,1080,1329,1052,1117,1079,1156,1240,1200,1228,1061,1028,999,1300,1116,1224,1005,1224,1332,1241,1247,1090,1213,1178,1242,1185,1065,1231,1333,1101,1443,1529,1577,1501,1036,1294,1362,1343,1316,1284,1087,1370,1223,1272,1367,1135,1243,1281,1176,1133,927,1137,1201,921,1172,1153,1031,1313,1145,1203,1198,1289,1039,1184,1453,1345,1332,1387,1236,999,1175,1306,1308,1183,1353,1422,1413,1365,1441,1230,1383,1176,1088,997,1342,1217,1250,985,1157,1170,905,1200,1061,1211,1257,976,930,1293,1098,1352,1333,851,789,984,899,1084,1056,1124,1394,1434,1028,1089,1133,1080,1181,1178,1155,1359,1244,944,1373,1471,1283,1220,1248,1119,1086,1215,1367,1207,1116,1239,1073,1081,1058,1470,1558,1334,1056,1035,1128,1130,1067,1028,919,1066,1264,1160,1094,1186,915,1046,1266,1196,1270,1336,1232,1393,1420,1074,1070,1240,1322,1292,1183,1112,1154,1452,1286,1264,1038,977,695,1254,987,1250,1227,1179,1551,1477,758,549,538,583,532,670,838,885,1325,1374,1424,1301,1266,1207,1120,1313,1373,601],successes:[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]};compres
sedData["data"]=byteArray;assert(typeof Module.LZ4==="object","LZ4 not present - was your app build with -s LZ4=1 ?");Module.LZ4.loadPackage({metadata:metadata,compressedData:compressedData},true);Module["removeRunDependency"]("datafile_nose.data")}Module["addRunDependency"]("datafile_nose.data");if(!Module.preloadResults)Module.preloadResults={};Module.preloadResults[PACKAGE_NAME]={fromCache:false};if(fetched){processPackageData(fetched);fetched=null}else{fetchedCallback=processPackageData}}if(Module["calledRun"]){runWithFS()}else{if(!Module["preRun"])Module["preRun"]=[];Module["preRun"].push(runWithFS)}};loadPackage({files:[{filename:"/lib/python3.9/site-packages/nose/__init__.py",start:0,end:404,audio:0},{filename:"/lib/python3.9/site-packages/nose/__main__.py",start:404,end:548,audio:0},{filename:"/lib/python3.9/site-packages/nose/case.py",start:548,end:13729,audio:0},{filename:"/lib/python3.9/site-packages/nose/commands.py",start:13729,end:20045,audio:0},{filename:"/lib/python3.9/site-packages/nose/config.py",start:20045,end:45327,audio:0},{filename:"/lib/python3.9/site-packages/nose/core.py",start:45327,end:58398,audio:0},{filename:"/lib/python3.9/site-packages/nose/exc.py",start:58398,end:58774,audio:0},{filename:"/lib/python3.9/site-packages/nose/failure.py",start:58774,end:60047,audio:0},{filename:"/lib/python3.9/site-packages/nose/importer.py",start:60047,end:66025,audio:0},{filename:"/lib/python3.9/site-packages/nose/inspector.py",start:66025,end:73e3,audio:0},{filename:"/lib/python3.9/site-packages/nose/loader.py",start:73e3,end:98487,audio:0},{filename:"/lib/python3.9/site-packages/nose/proxy.py",start:98487,end:105366,audio:0},{filename:"/lib/python3.9/site-packages/nose/pyversion.py",start:105366,end:112820,audio:0},{filename:"/lib/python3.9/site-packages/nose/result.py",start:112820,end:119561,audio:0},{filename:"/lib/python3.9/site-packages/nose/selector.py",start:119561,end:128546,audio:0},{filename:"/lib/python3.9/site-packages/nose/suite.py",start:128546,end:150860,audio:0},{filename:"/lib/python3.9/site-packages/nose/twistedtools.py",start:150860,end:156400,audio:0},{filename:"/lib/python3.9/site-packages/nose/util.py",start:156400,end:176734,audio:0},{filename:"/lib/python3.9/site-packages/nose/usage.txt",start:176734,end:181159,audio:0},{filename:"/lib/python3.9/site-packages/nose/ext/__init__.py",start:181159,end:181192,audio:0},{filename:"/lib/python3.9/site-packages/nose/ext/dtcompat.py",start:181192,end:269305,audio:0},{filename:"/lib/python3.9/site-packages/nose/plugins/__init__.py",start:269305,end:275596,audio:0},{filename:"/lib/python3.9/site-packages/nose/plugins/allmodules.py",start:275596,end:277316,audio:0},{filename:"/lib/python3.9/site-packages/nose/plugins/attrib.py",start:277316,end:286982,audio:0},{filename:"/lib/python3.9/site-packages/nose/plugins/base.py",start:286982,end:313040,audio:0},{filename:"/lib/python3.9/site-packages/nose/plugins/builtin.py",start:313040,end:314061,audio:0},{filename:"/lib/python3.9/site-packages/nose/plugins/capture.py",start:314061,end:317425,audio:0},{filename:"/lib/python3.9/site-packages/nose/plugins/collect.py",start:317425,end:320538,audio:0},{filename:"/lib/python3.9/site-packages/nose/plugins/cover.py",start:320538,end:332215,audio:0},{filename:"/lib/python3.9/site-packages/nose/plugins/debug.py",start:332215,end:334487,audio:0},{filename:"/lib/python3.9/site-packages/nose/plugins/deprecated.py",start:334487,end:336038,audio:0},{filename:"/lib/python3.9/site-packages/nose/plugins/doctests.py",start:336038,end:353
516,audio:0},{filename:"/lib/python3.9/site-packages/nose/plugins/errorclass.py",start:353516,end:360791,audio:0},{filename:"/lib/python3.9/site-packages/nose/plugins/failuredetail.py",start:360791,end:362426,audio:0},{filename:"/lib/python3.9/site-packages/nose/plugins/isolate.py",start:362426,end:366182,audio:0},{filename:"/lib/python3.9/site-packages/nose/plugins/logcapture.py",start:366182,end:375540,audio:0},{filename:"/lib/python3.9/site-packages/nose/plugins/manager.py",start:375540,end:391117,audio:0},{filename:"/lib/python3.9/site-packages/nose/plugins/multiprocess.py",start:391117,end:426403,audio:0},{filename:"/lib/python3.9/site-packages/nose/plugins/plugintest.py",start:426403,end:439936,audio:0},{filename:"/lib/python3.9/site-packages/nose/plugins/prof.py",start:439936,end:445293,audio:0},{filename:"/lib/python3.9/site-packages/nose/plugins/skip.py",start:445293,end:447435,audio:0},{filename:"/lib/python3.9/site-packages/nose/plugins/testid.py",start:447435,end:457352,audio:0},{filename:"/lib/python3.9/site-packages/nose/plugins/xunit.py",start:457352,end:468997,audio:0},{filename:"/lib/python3.9/site-packages/nose/sphinx/__init__.py",start:468997,end:469002,audio:0},{filename:"/lib/python3.9/site-packages/nose/sphinx/pluginopts.py",start:469002,end:474640,audio:0},{filename:"/lib/python3.9/site-packages/nose/tools/__init__.py",start:474640,end:475076,audio:0},{filename:"/lib/python3.9/site-packages/nose/tools/nontrivial.py",start:475076,end:479246,audio:0},{filename:"/lib/python3.9/site-packages/nose/tools/trivial.py",start:479246,end:480430,audio:0},{filename:"/lib/python3.9/site-packages/nose-1.3.7-py3.9.egg-info/dependency_links.txt",start:480430,end:480431,audio:0},{filename:"/lib/python3.9/site-packages/nose-1.3.7-py3.9.egg-info/entry_points.txt",start:480431,end:480564,audio:0},{filename:"/lib/python3.9/site-packages/nose-1.3.7-py3.9.egg-info/not-zip-safe",start:480564,end:480565,audio:0},{filename:"/lib/python3.9/site-packages/nose-1.3.7-py3.9.egg-info/PKG-INFO",start:480565,end:482399,audio:0},{filename:"/lib/python3.9/site-packages/nose-1.3.7-py3.9.egg-info/SOURCES.txt",start:482399,end:499412,audio:0},{filename:"/lib/python3.9/site-packages/nose-1.3.7-py3.9.egg-info/top_level.txt",start:499412,end:499417,audio:0},{filename:"/man/man1/nosetests.1",start:499417,end:517096,audio:0},{filename:"/bin/nosetests",start:517096,end:518054,audio:0},{filename:"/bin/nosetests-3.9",start:518054,end:519020,audio:0}],remote_package_size:302244,package_uuid:"23095018-18aa-4e8f-8891-7b34e870e690"})})(); \ No newline at end of file diff --git a/spaces/qblocks/Monster-LLMs/MonsterAPIClient.py b/spaces/qblocks/Monster-LLMs/MonsterAPIClient.py deleted file mode 100644 index 6709d68f834dfecec690d1fbf4584c704a5a60f0..0000000000000000000000000000000000000000 --- a/spaces/qblocks/Monster-LLMs/MonsterAPIClient.py +++ /dev/null @@ -1,151 +0,0 @@ -#MonsterAPIClient.py - -""" -Monster API Python client to connect to LLM models on monsterapi - -Base URL: https://api.monsterapi.ai/v1/generate/{model} - -Available models: ------------------ - 1. falcon-7b-instruct - 2. falcon-40b-instruct - 3. mpt-30B-instruct - 4. mpt-7b-instruct - 5. openllama-13b-base - 6. 
llama2-7b-chat - -""" -import os -import time -import logging -import requests -from requests_toolbelt.multipart.encoder import MultipartEncoder - -from typing import Optional, Literal, Union, List, Dict -from pydantic import BaseModel, Field - -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) - - -class InputModel1(BaseModel): - """ - Supports Following models: Falcon-40B-instruct, Falcon-7B-instruct, openllama-13b-base, llama2-7b-chat - - prompt string Prompt is a textual instruction for the model to produce an output. Required - top_k integer Top-k sampling helps improve quality by removing the tail and making it less likely to go off topic. Optional - (Default: 40) - top_p float Top-p sampling helps generate more diverse and creative text by considering a broader range of tokens. Optional - (Default: 1.0) - temp float The temperature influences the randomness of the next token predictions. Optional - (Default: 0.98) - max_length integer The maximum length of the generated text. Optional - (Default: 256) - repetition_penalty float The model uses this penalty to discourage the repetition of tokens in the output. Optional - (Default: 1.2) - beam_size integer The beam size for beam search. A larger beam size results in better quality output, but slower generation times. Optional - (Default: 1) - """ - prompt: str - top_k: int = 40 - top_p: float = Field(0.9, ge=0., le=1.) - temp: float = Field(0.98, ge=0., le=1.) - max_length: int = 256 - repetition_penalty: float = 1.2 - beam_size: int = 1 - - -class InputModel2(BaseModel): - """ - Supports Following models: MPT-30B-instruct, MPT-7B-instruct - - prompt: string Instruction is a textual command for the model to produce an output. Required - top_k integer Top-k sampling helps improve quality by removing the tail and making it less likely to go off topic. Optional - (Default: 40) - top_p float Top-p sampling helps generate more diverse and creative text by considering a broader range of tokens. Optional - Allowed Range: 0 - 1 - (Default: 1.0) - temp float Temperature is a parameter that controls the randomness of the model's output. The higher the temperature, the more random the output. Optional - (Default: 0.98) - max_length integer Maximum length of the generated output. Optional - (Default: 256) - """ - prompt: str - top_k: int = 40 - top_p: float = Field(0.9, ge=0., le=1.) - temp: float = Field(0.98, ge=0., le=1.) 
- max_length: int = 256 - -MODELS_TO_DATAMODEL = { - 'falcon-7b-instruct': InputModel1, - 'falcon-40b-instruct': InputModel1, - 'mpt-30B-instruct': InputModel2, - 'mpt-7b-instruct': InputModel2, - 'openllama-13b-base': InputModel1, - 'llama2-7b-chat': InputModel1 - } - - -class MClient(): - def __init__(self): - self.boundary = '---011000010111000001101001' - self.auth_token = os.environ.get('MONSTER_API_KEY') - self.headers = { - "accept": "application/json", - "content-type": f"multipart/form-data; boundary={self.boundary}", - 'Authorization': 'Bearer ' + self.auth_token} - self.base_url = 'https://api.monsterapi.ai/v1' - self.models_to_data_model = MODELS_TO_DATAMODEL - self.mock = os.environ.get('MOCK_Runner', "False").lower() == "true" - - def get_response(self, model:Literal['falcon-7b-instruct', 'falcon-40b-instruct', 'mpt-30B-instruct', 'mpt-7b-instruct', 'openllama-13b-base', 'llama2-7b-chat'], - data: dict): - - if model not in self.models_to_data_model: - raise ValueError(f"Invalid model: {model}!") - - dataModel = self.models_to_data_model[model](**data) - url = f"{self.base_url}/generate/{model}" - data = dataModel.dict() - logger.info(f"Calling Monster API with url: {url}, with payload: {data}") - - # convert all values into string - for key, value in data.items(): - data[key] = str(value) - multipart_data = MultipartEncoder(fields=data, boundary=self.boundary) - response = requests.post(url, headers=self.headers, data=multipart_data) - response.raise_for_status() - return response.json() - - def get_status(self, process_id): - # /v1/status/{process_id} - url = f"{self.base_url}/status/{process_id}" - response = requests.get(url, headers=self.headers) - response.raise_for_status() - return response.json() - - def wait_and_get_result(self, process_id, timeout=100): - start_time = time.time() - while True: - elapsed_time = time.time() - start_time - - if elapsed_time >= timeout: - raise TimeoutError(f"Process {process_id} timed out after {timeout} seconds.") - - status = self.get_status(process_id) - if status['status'].lower() == 'completed': - return status['result'] - elif status['status'].lower() == 'failed': - raise RuntimeError(f"Process {process_id} failed!") - else: - if self.mock: - return 100 * "Mock Output!" - logger.info(f"Process {process_id} is still running, status is {status['status']}. Waiting ...") - time.sleep(0.01) - - -if __name__ == '__main__': - client = MClient() - response = client.get_response('falcon-7b-instruct', {"prompt": 'How to make a sandwich?'}) - output = client.wait_and_get_result(response['process_id']) - print(output) \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Medievil Pc Torrent PATCHED.md b/spaces/quidiaMuxgu/Expedit-SAM/Medievil Pc Torrent PATCHED.md deleted file mode 100644 index 138f5aeae79328e9b31011c4500cc45e6bd3ed9d..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Medievil Pc Torrent PATCHED.md +++ /dev/null @@ -1,6 +0,0 @@ -

    medievil pc torrent


    Download Zip https://byltly.com/2uCqg1

    - - d5da3c52bf
    -
    -
    -

    diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Minions (English) Movie Download In Hindi 720p Hd Kickass.md b/spaces/quidiaMuxgu/Expedit-SAM/Minions (English) Movie Download In Hindi 720p Hd Kickass.md deleted file mode 100644 index c63649b079b0fde1cc12542bd264dbad333657c5..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Minions (English) Movie Download In Hindi 720p Hd Kickass.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Minions (English) Movie Download In Hindi 720p Hd Kickass


    Download ->->->-> https://geags.com/2uCrMT

    - -Promise Dad hindi dubbed full movie free download kickass Bach Gaye Re Obama full movie english dubbed download. movie hd 720p download 2 Kahche ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/r3gm/SoniTranslate_translate_audio_of_a_video_content/voice_main.py b/spaces/r3gm/SoniTranslate_translate_audio_of_a_video_content/voice_main.py deleted file mode 100644 index c468d8e27ef06bc87890b5d46dbadca6984e4aaa..0000000000000000000000000000000000000000 --- a/spaces/r3gm/SoniTranslate_translate_audio_of_a_video_content/voice_main.py +++ /dev/null @@ -1,554 +0,0 @@ -import torch -from lib.infer_pack.models import ( - SynthesizerTrnMs256NSFsid, - SynthesizerTrnMs256NSFsid_nono, - SynthesizerTrnMs768NSFsid, - SynthesizerTrnMs768NSFsid_nono, -) -from vc_infer_pipeline import VC -import traceback, pdb -from lib.audio import load_audio -import numpy as np -import os -from fairseq import checkpoint_utils -import soundfile as sf -from gtts import gTTS -import edge_tts -import asyncio -import nest_asyncio - -# model load -def get_vc(sid, to_return_protect0, to_return_protect1): - global n_spk, tgt_sr, net_g, vc, cpt, version - if sid == "" or sid == []: - global hubert_model - if hubert_model is not None: # change model or not - print("clean_empty_cache") - del net_g, n_spk, vc, hubert_model, tgt_sr # ,cpt - hubert_model = net_g = n_spk = vc = hubert_model = tgt_sr = None - if torch.cuda.is_available(): - torch.cuda.empty_cache() - ### if clean - if_f0 = cpt.get("f0", 1) - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid( - *cpt["config"], is_half=config.is_half - ) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del net_g, cpt - if torch.cuda.is_available(): - torch.cuda.empty_cache() - return {"visible": False, "__type__": "update"} - person = "%s/%s" % (weight_root, sid) - print("loading %s" % person) - cpt = torch.load(person, map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - if if_f0 == 0: - to_return_protect0 = to_return_protect1 = { - "visible": False, - "value": 0.5, - "__type__": "update", - } - else: - to_return_protect0 = { - "visible": True, - "value": to_return_protect0, - "__type__": "update", - } - to_return_protect1 = { - "visible": True, - "value": to_return_protect1, - "__type__": "update", - } - version = cpt.get("version", "v1") - if version == "v1": - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - elif version == "v2": - if if_f0 == 1: - net_g = SynthesizerTrnMs768NSFsid(*cpt["config"], is_half=config.is_half) - else: - net_g = SynthesizerTrnMs768NSFsid_nono(*cpt["config"]) - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) - net_g.eval().to(config.device) - if config.is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, config) - n_spk = cpt["config"][-3] - return ( - {"visible": True, "maximum": n_spk, "__type__": "update"}, - to_return_protect0, - to_return_protect1, - ) - - - -# inference -def vc_single( - sid, - input_audio_path, - f0_up_key, - f0_file, - f0_method, - file_index, - file_index2, - # file_big_npy, - index_rate, - filter_radius, - resample_sr, - rms_mix_rate, - protect, -): - global tgt_sr, net_g, vc, hubert_model, version, cpt - if input_audio_path is None: - return "You need to upload an audio", None - 
f0_up_key = int(f0_up_key) - try: - audio = load_audio(input_audio_path, 16000) - audio_max = np.abs(audio).max() / 0.95 - if audio_max > 1: - audio /= audio_max - times = [0, 0, 0] - if not hubert_model: - load_hubert() - if_f0 = cpt.get("f0", 1) - file_index = ( - ( - file_index.strip(" ") - .strip('"') - .strip("\n") - .strip('"') - .strip(" ") - .replace("trained", "added") - ) - if file_index != "" - else file_index2 - ) # reemplace for 2 - # file_big_npy = ( - # file_big_npy.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - # ) - audio_opt = vc.pipeline( - hubert_model, - net_g, - sid, - audio, - input_audio_path, - times, - f0_up_key, - f0_method, - file_index, - # file_big_npy, - index_rate, - if_f0, - filter_radius, - tgt_sr, - resample_sr, - rms_mix_rate, - version, - protect, - f0_file=f0_file, - ) - if tgt_sr != resample_sr >= 16000: - tgt_sr = resample_sr - index_info = ( - "Using index:%s." % file_index - if os.path.exists(file_index) - else "Index not used." - ) - return "Success.\n %s\nTime:\n npy:%ss, f0:%ss, infer:%ss" % ( - index_info, - times[0], - times[1], - times[2], - ), (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, (None, None) - - - -# hubert model -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(config.device) - if config.is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -# config cpu -def use_fp32_config(): - for config_file in [ - "32k.json", - "40k.json", - "48k.json", - "48k_v2.json", - "32k_v2.json", - ]: - with open(f"configs/{config_file}", "r") as f: - strr = f.read().replace("true", "false") - with open(f"configs/{config_file}", "w") as f: - f.write(strr) - -# config device and torch type -class Config: - def __init__(self, device, is_half): - self.device = device - self.is_half = is_half - self.n_cpu = 2 # set cpu cores #################### - self.gpu_name = None - self.gpu_mem = None - self.x_pad, self.x_query, self.x_center, self.x_max = self.device_config() - - def device_config(self) -> tuple: - if torch.cuda.is_available(): - i_device = int(self.device.split(":")[-1]) - self.gpu_name = torch.cuda.get_device_name(i_device) - if ( - ("16" in self.gpu_name and "V100" not in self.gpu_name.upper()) - or "P40" in self.gpu_name.upper() - or "1060" in self.gpu_name - or "1070" in self.gpu_name - or "1080" in self.gpu_name - ): - print("16 series / 10 series graphics cards and P40 force single precision") - self.is_half = False - for config_file in ["32k.json", "40k.json", "48k.json"]: - with open(f"configs/{config_file}", "r") as f: - strr = f.read().replace("true", "false") - with open(f"configs/{config_file}", "w") as f: - f.write(strr) - with open("trainset_preprocess_pipeline_print.py", "r") as f: - strr = f.read().replace("3.7", "3.0") - with open("trainset_preprocess_pipeline_print.py", "w") as f: - f.write(strr) - else: - self.gpu_name = None - self.gpu_mem = int( - torch.cuda.get_device_properties(i_device).total_memory - / 1024 - / 1024 - / 1024 - + 0.4 - ) - if self.gpu_mem <= 4: - with open("trainset_preprocess_pipeline_print.py", "r") as f: - strr = f.read().replace("3.7", "3.0") - with open("trainset_preprocess_pipeline_print.py", "w") as f: - f.write(strr) - elif torch.backends.mps.is_available(): - print("Supported N-card not found, using MPS for inference") - self.device 
= "mps" - else: - print("No supported N-card found, using CPU for inference") - self.device = "cpu" - self.is_half = False - use_fp32_config() - - if self.n_cpu == 0: - self.n_cpu = cpu_count() - - if self.is_half: - # 6GB VRAM configuration - x_pad = 3 - x_query = 10 - x_center = 60 - x_max = 65 - else: - # 5GB VRAM configuration - x_pad = 1 - x_query = 6 - x_center = 38 - x_max = 41 - - if self.gpu_mem != None and self.gpu_mem <= 4: - x_pad = 1 - x_query = 5 - x_center = 30 - x_max = 32 - - - - - print(self.device, self.is_half) - - return x_pad, x_query, x_center, x_max - -# call inference -class ClassVoices: - def __init__(self): - self.file_index = "" # root - - def apply_conf(self, f0method, - model_voice_path00, transpose00, file_index2_00, - model_voice_path01, transpose01, file_index2_01, - model_voice_path02, transpose02, file_index2_02, - model_voice_path03, transpose03, file_index2_03, - model_voice_path04, transpose04, file_index2_04, - model_voice_path05, transpose05, file_index2_05, - model_voice_path99, transpose99, file_index2_99): - - #self.filename = filename - self.f0method = f0method # pm - - self.model_voice_path00 = model_voice_path00 - self.transpose00 = transpose00 - self.file_index200 = file_index2_00 - - self.model_voice_path01 = model_voice_path01 - self.transpose01 = transpose01 - self.file_index201 = file_index2_01 - - self.model_voice_path02 = model_voice_path02 - self.transpose02 = transpose02 - self.file_index202 = file_index2_02 - - self.model_voice_path03 = model_voice_path03 - self.transpose03 = transpose03 - self.file_index203 = file_index2_03 - - self.model_voice_path04 = model_voice_path04 - self.transpose04 = transpose04 - self.file_index204 = file_index2_04 - - self.model_voice_path05 = model_voice_path05 - self.transpose05 = transpose05 - self.file_index205 = file_index2_05 - - self.model_voice_path99 = model_voice_path99 - self.transpose99 = transpose99 - self.file_index299 = file_index2_99 - return "CONFIGURATION APPLIED" - - def custom_voice(self, - _values, # filter indices - audio_files, # all audio files - model_voice_path='', - transpose=0, - f0method='pm', - file_index='', - file_index2='', - ): - - #hubert_model = None - - get_vc( - sid=model_voice_path, # model path - to_return_protect0=0.33, - to_return_protect1=0.33 - ) - - for _value_item in _values: - filename = "audio2/"+audio_files[_value_item] if _value_item != "test" else audio_files[0] - #filename = "audio2/"+audio_files[_value_item] - try: - print(audio_files[_value_item], model_voice_path) - except: - pass - - info_, (sample_, audio_output_) = vc_single( - sid=0, - input_audio_path=filename, #f"audio2/{filename}", - f0_up_key=transpose, # transpose for m to f and reverse 0 12 - f0_file=None, - f0_method= f0method, - file_index= file_index, # dir pwd? 
- file_index2= file_index2, - # file_big_npy1, - index_rate= float(0.66), - filter_radius= int(3), - resample_sr= int(0), - rms_mix_rate= float(0.25), - protect= float(0.33), - ) - - sf.write( - file= filename, #f"audio2/{filename}", - samplerate=sample_, - data=audio_output_ - ) - - # detele the model - - def make_test(self, - tts_text, - tts_voice, - model_path, - index_path, - transpose, - f0_method, - ): - os.system("rm -rf test") - filename = "test/test.wav" - - if "SET_LIMIT" == os.getenv("DEMO"): - if len(tts_text) > 60: - tts_text = tts_text[:60] - print("DEMO; limit to 60 characters") - - language = tts_voice[:2] - try: - os.system("mkdir test") - #nest_asyncio.apply() # gradio;not - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save(filename)) - except: - try: - tts = gTTS(tts_text, lang=language) - tts.save(filename) - tts.save - print(f'No audio was received. Please change the tts voice for {tts_voice}. USING gTTS.') - except: - tts = gTTS('a', lang=language) - tts.save(filename) - print('Error: Audio will be replaced.') - - os.system("cp test/test.wav test/real_test.wav") - - self([],[]) # start modules - - self.custom_voice( - ["test"], # filter indices - ["test/test.wav"], # all audio files - model_voice_path=model_path, - transpose=transpose, - f0method=f0_method, - file_index='', - file_index2=index_path, - ) - return "test/test.wav", "test/real_test.wav" - - def __call__(self, speakers_list, audio_files): - - speakers_indices = {} - - for index, speak_ in enumerate(speakers_list): - if speak_ in speakers_indices: - speakers_indices[speak_].append(index) - else: - speakers_indices[speak_] = [index] - - - # find models and index - global weight_root, index_root, config, hubert_model - weight_root = "weights" - names = [] - for name in os.listdir(weight_root): - if name.endswith(".pth"): - names.append(name) - - index_root = "logs" - index_paths = [] - for name in os.listdir(index_root): - if name.endswith(".index"): - index_paths.append(name) - - print(names, index_paths) - # config machine - hubert_model = None - config = Config('cuda:0', is_half=True) # config = Config('cpu', is_half=False) # cpu - - # filter by speaker - for _speak, _values in speakers_indices.items(): - #print(_speak, _values) - #for _value_item in _values: - # self.filename = "audio2/"+audio_files[_value_item] - ###print(audio_files[_value_item]) - - #vc(_speak, _values, audio_files) - - if _speak == "SPEAKER_00": - self.custom_voice( - _values, # filteredd - audio_files, - model_voice_path=self.model_voice_path00, - file_index2=self.file_index200, - transpose=self.transpose00, - f0method=self.f0method, - file_index=self.file_index, - ) - elif _speak == "SPEAKER_01": - self.custom_voice( - _values, - audio_files, - model_voice_path=self.model_voice_path01, - file_index2=self.file_index201, - transpose=self.transpose01, - f0method=self.f0method, - file_index=self.file_index, - ) - elif _speak == "SPEAKER_02": - self.custom_voice( - _values, - audio_files, - model_voice_path=self.model_voice_path02, - file_index2=self.file_index202, - transpose=self.transpose02, - f0method=self.f0method, - file_index=self.file_index, - ) - elif _speak == "SPEAKER_03": - self.custom_voice( - _values, - audio_files, - model_voice_path=self.model_voice_path03, - file_index2=self.file_index203, - transpose=self.transpose03, - f0method=self.f0method, - file_index=self.file_index, - ) - elif _speak == "SPEAKER_04": - self.custom_voice( - _values, - audio_files, - 
model_voice_path=self.model_voice_path04, - file_index2=self.file_index204, - transpose=self.transpose04, - f0method=self.f0method, - file_index=self.file_index, - ) - elif _speak == "SPEAKER_05": - self.custom_voice( - _values, - audio_files, - model_voice_path=self.model_voice_path05, - file_index2=self.file_index205, - transpose=self.transpose05, - f0method=self.f0method, - file_index=self.file_index, - ) - elif _speak == "SPEAKER_99": - self.custom_voice( - _values, - audio_files, - model_voice_path=self.model_voice_path99, - file_index2=self.file_index299, - transpose=self.transpose99, - f0method=self.f0method, - file_index=self.file_index, - ) - else: - pass diff --git a/spaces/radames/UserControllableLT-Latent-Transformer/expansion/dataloader/depth_transforms.py b/spaces/radames/UserControllableLT-Latent-Transformer/expansion/dataloader/depth_transforms.py deleted file mode 100644 index 19a768c788137bfb89077b6c415dc9401a540e4e..0000000000000000000000000000000000000000 --- a/spaces/radames/UserControllableLT-Latent-Transformer/expansion/dataloader/depth_transforms.py +++ /dev/null @@ -1,471 +0,0 @@ -from __future__ import division -import torch -import random -import numpy as np -import numbers -import types -import scipy.ndimage as ndimage -import pdb -import torchvision -import PIL.Image as Image -import cv2 -from torch.nn import functional as F - - -class Compose(object): - """ Composes several co_transforms together. - For example: - >>> co_transforms.Compose([ - >>> co_transforms.CenterCrop(10), - >>> co_transforms.ToTensor(), - >>> ]) - """ - - def __init__(self, co_transforms): - self.co_transforms = co_transforms - - def __call__(self, input, target,intr): - for t in self.co_transforms: - input,target,intr = t(input,target,intr) - return input,target,intr - - -class Scale(object): - """ Rescales the inputs and target arrays to the given 'size'. - 'size' will be the size of the smaller edge. - For example, if height > width, then image will be - rescaled to (size * height / width, size) - size: size of the smaller edge - interpolation order: Default: 2 (bilinear) - """ - - def __init__(self, size, order=1): - self.ratio = size - self.order = order - if order==0: - self.code=cv2.INTER_NEAREST - elif order==1: - self.code=cv2.INTER_LINEAR - elif order==2: - self.code=cv2.INTER_CUBIC - - def __call__(self, inputs, target): - if self.ratio==1: - return inputs, target - h, w, _ = inputs[0].shape - ratio = self.ratio - - inputs[0] = cv2.resize(inputs[0], None, fx=ratio,fy=ratio,interpolation=cv2.INTER_LINEAR) - inputs[1] = cv2.resize(inputs[1], None, fx=ratio,fy=ratio,interpolation=cv2.INTER_LINEAR) - # keep the mask same - tmp = cv2.resize(target[:,:,2], None, fx=ratio,fy=ratio,interpolation=cv2.INTER_NEAREST) - target = cv2.resize(target, None, fx=ratio,fy=ratio,interpolation=self.code) * ratio - target[:,:,2] = tmp - - - return inputs, target - - -class RandomCrop(object): - """Crops the given PIL.Image at a random location to have a region of - the given size. 
size can be a tuple (target_height, target_width) - or an integer, in which case the target will be of a square shape (size, size) - """ - - def __init__(self, size): - if isinstance(size, numbers.Number): - self.size = (int(size), int(size)) - else: - self.size = size - - def __call__(self, inputs,target,intr): - h, w, _ = inputs[0].shape - th, tw = self.size - if w < tw: tw=w - if h < th: th=h - - x1 = random.randint(0, w - tw) - y1 = random.randint(0, h - th) - intr[1] -= x1 - intr[2] -= y1 - - inputs[0] = inputs[0][y1: y1 + th,x1: x1 + tw].astype(float) - inputs[1] = inputs[1][y1: y1 + th,x1: x1 + tw].astype(float) - return inputs, target[y1: y1 + th,x1: x1 + tw].astype(float), list(np.asarray(intr).astype(float)) + list(np.asarray([1.,0.,0.,1.,0.,0.]).astype(float)) - - - -class SpatialAug(object): - def __init__(self, crop, scale=None, rot=None, trans=None, squeeze=None, schedule_coeff=1, order=1, black=False): - self.crop = crop - self.scale = scale - self.rot = rot - self.trans = trans - self.squeeze = squeeze - self.t = np.zeros(6) - self.schedule_coeff = schedule_coeff - self.order = order - self.black = black - - def to_identity(self): - self.t[0] = 1; self.t[2] = 0; self.t[4] = 0; self.t[1] = 0; self.t[3] = 1; self.t[5] = 0; - - def left_multiply(self, u0, u1, u2, u3, u4, u5): - result = np.zeros(6) - result[0] = self.t[0]*u0 + self.t[1]*u2; - result[1] = self.t[0]*u1 + self.t[1]*u3; - - result[2] = self.t[2]*u0 + self.t[3]*u2; - result[3] = self.t[2]*u1 + self.t[3]*u3; - - result[4] = self.t[4]*u0 + self.t[5]*u2 + u4; - result[5] = self.t[4]*u1 + self.t[5]*u3 + u5; - self.t = result - - def inverse(self): - result = np.zeros(6) - a = self.t[0]; c = self.t[2]; e = self.t[4]; - b = self.t[1]; d = self.t[3]; f = self.t[5]; - - denom = a*d - b*c; - - result[0] = d / denom; - result[1] = -b / denom; - result[2] = -c / denom; - result[3] = a / denom; - result[4] = (c*f-d*e) / denom; - result[5] = (b*e-a*f) / denom; - - return result - - def grid_transform(self, meshgrid, t, normalize=True, gridsize=None): - if gridsize is None: - h, w = meshgrid[0].shape - else: - h, w = gridsize - vgrid = torch.cat([(meshgrid[0] * t[0] + meshgrid[1] * t[2] + t[4])[:,:,np.newaxis], - (meshgrid[0] * t[1] + meshgrid[1] * t[3] + t[5])[:,:,np.newaxis]],-1) - if normalize: - vgrid[:,:,0] = 2.0*vgrid[:,:,0]/max(w-1,1)-1.0 - vgrid[:,:,1] = 2.0*vgrid[:,:,1]/max(h-1,1)-1.0 - return vgrid - - - def __call__(self, inputs, target, intr): - h, w, _ = inputs[0].shape - th, tw = self.crop - meshgrid = torch.meshgrid([torch.Tensor(range(th)), torch.Tensor(range(tw))])[::-1] - cornergrid = torch.meshgrid([torch.Tensor([0,th-1]), torch.Tensor([0,tw-1])])[::-1] - - for i in range(50): - # im0 - self.to_identity() - #TODO add mirror - if np.random.binomial(1,0.5): - mirror = True - else: - mirror = False - ##TODO - #mirror = False - if mirror: - self.left_multiply(-1, 0, 0, 1, .5 * tw, -.5 * th); - else: - self.left_multiply(1, 0, 0, 1, -.5 * tw, -.5 * th); - scale0 = 1; scale1 = 1; squeeze0 = 1; squeeze1 = 1; - if not self.rot is None: - rot0 = np.random.uniform(-self.rot[0],+self.rot[0]) - rot1 = np.random.uniform(-self.rot[1]*self.schedule_coeff, self.rot[1]*self.schedule_coeff) + rot0 - self.left_multiply(np.cos(rot0), np.sin(rot0), -np.sin(rot0), np.cos(rot0), 0, 0) - if not self.trans is None: - trans0 = np.random.uniform(-self.trans[0],+self.trans[0], 2) - trans1 = np.random.uniform(-self.trans[1]*self.schedule_coeff,+self.trans[1]*self.schedule_coeff, 2) + trans0 - self.left_multiply(1, 0, 0, 1, trans0[0] * tw, 
trans0[1] * th) - if not self.squeeze is None: - squeeze0 = np.exp(np.random.uniform(-self.squeeze[0], self.squeeze[0])) - squeeze1 = np.exp(np.random.uniform(-self.squeeze[1]*self.schedule_coeff, self.squeeze[1]*self.schedule_coeff)) * squeeze0 - if not self.scale is None: - scale0 = np.exp(np.random.uniform(self.scale[2]-self.scale[0], self.scale[2]+self.scale[0])) - scale1 = np.exp(np.random.uniform(-self.scale[1]*self.schedule_coeff, self.scale[1]*self.schedule_coeff)) * scale0 - self.left_multiply(1.0/(scale0*squeeze0), 0, 0, 1.0/(scale0/squeeze0), 0, 0) - - self.left_multiply(1, 0, 0, 1, .5 * w, .5 * h); - transmat0 = self.t.copy() - - # im1 - self.to_identity() - if mirror: - self.left_multiply(-1, 0, 0, 1, .5 * tw, -.5 * th); - else: - self.left_multiply(1, 0, 0, 1, -.5 * tw, -.5 * th); - if not self.rot is None: - self.left_multiply(np.cos(rot1), np.sin(rot1), -np.sin(rot1), np.cos(rot1), 0, 0) - if not self.trans is None: - self.left_multiply(1, 0, 0, 1, trans1[0] * tw, trans1[1] * th) - self.left_multiply(1.0/(scale1*squeeze1), 0, 0, 1.0/(scale1/squeeze1), 0, 0) - self.left_multiply(1, 0, 0, 1, .5 * w, .5 * h); - transmat1 = self.t.copy() - transmat1_inv = self.inverse() - - if self.black: - # black augmentation, allowing 0 values in the input images - # https://github.com/lmb-freiburg/flownet2/blob/master/src/caffe/layers/black_augmentation_layer.cu - break - else: - if ((self.grid_transform(cornergrid, transmat0, gridsize=[float(h),float(w)]).abs()>1).sum() +\ - (self.grid_transform(cornergrid, transmat1, gridsize=[float(h),float(w)]).abs()>1).sum()) == 0: - break - if i==49: - print('max_iter in augmentation') - self.to_identity() - self.left_multiply(1, 0, 0, 1, -.5 * tw, -.5 * th); - self.left_multiply(1, 0, 0, 1, .5 * w, .5 * h); - transmat0 = self.t.copy() - transmat1 = self.t.copy() - - # do the real work - vgrid = self.grid_transform(meshgrid, transmat0,gridsize=[float(h),float(w)]) - inputs_0 = F.grid_sample(torch.Tensor(inputs[0]).permute(2,0,1)[np.newaxis], vgrid[np.newaxis])[0].permute(1,2,0) - if self.order == 0: - target_0 = F.grid_sample(torch.Tensor(target).permute(2,0,1)[np.newaxis], vgrid[np.newaxis], mode='nearest')[0].permute(1,2,0) - else: - target_0 = F.grid_sample(torch.Tensor(target).permute(2,0,1)[np.newaxis], vgrid[np.newaxis])[0].permute(1,2,0) - - mask_0 = target[:,:,2:3].copy(); mask_0[mask_0==0]=np.nan - if self.order == 0: - mask_0 = F.grid_sample(torch.Tensor(mask_0).permute(2,0,1)[np.newaxis], vgrid[np.newaxis], mode='nearest')[0].permute(1,2,0) - else: - mask_0 = F.grid_sample(torch.Tensor(mask_0).permute(2,0,1)[np.newaxis], vgrid[np.newaxis])[0].permute(1,2,0) - mask_0[torch.isnan(mask_0)] = 0 - - - vgrid = self.grid_transform(meshgrid, transmat1,gridsize=[float(h),float(w)]) - inputs_1 = F.grid_sample(torch.Tensor(inputs[1]).permute(2,0,1)[np.newaxis], vgrid[np.newaxis])[0].permute(1,2,0) - - # flow - pos = target_0[:,:,:2] + self.grid_transform(meshgrid, transmat0,normalize=False) - pos = self.grid_transform(pos.permute(2,0,1),transmat1_inv,normalize=False) - if target_0.shape[2]>=4: - # scale - exp = target_0[:,:,3:] * scale1 / scale0 - target = torch.cat([ (pos[:,:,0] - meshgrid[0]).unsqueeze(-1), - (pos[:,:,1] - meshgrid[1]).unsqueeze(-1), - mask_0, - exp], -1) - else: - target = torch.cat([ (pos[:,:,0] - meshgrid[0]).unsqueeze(-1), - (pos[:,:,1] - meshgrid[1]).unsqueeze(-1), - mask_0], -1) - inputs = [np.asarray(inputs_0).astype(float), np.asarray(inputs_1).astype(float)] - target = np.asarray(target).astype(float) - return 
inputs,target, list(np.asarray(intr+list(transmat0)).astype(float)) - - - -class pseudoPCAAug(object): - """ - Chromatic Eigen Augmentation: https://github.com/lmb-freiburg/flownet2/blob/master/src/caffe/layers/data_augmentation_layer.cu - This version is faster. - """ - def __init__(self, schedule_coeff=1): - self.augcolor = torchvision.transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.5, hue=0.5/3.14) - - def __call__(self, inputs, target,intr): - img = np.concatenate([inputs[0],inputs[1]],0) - shape = img.shape[0]//2 - aug_img = np.asarray(self.augcolor(Image.fromarray(np.uint8(img*255))))/255. - inputs[0] = aug_img[:shape] - inputs[1] = aug_img[shape:] - #inputs[0] = np.asarray(self.augcolor(Image.fromarray(np.uint8(inputs[0]*255))))/255. - #inputs[1] = np.asarray(self.augcolor(Image.fromarray(np.uint8(inputs[1]*255))))/255. - return inputs,target,intr - - -class PCAAug(object): - """ - Chromatic Eigen Augmentation: https://github.com/lmb-freiburg/flownet2/blob/master/src/caffe/layers/data_augmentation_layer.cu - """ - def __init__(self, lmult_pow =[0.4, 0,-0.2], - lmult_mult =[0.4, 0,0, ], - lmult_add =[0.03,0,0, ], - sat_pow =[0.4, 0,0, ], - sat_mult =[0.5, 0,-0.3], - sat_add =[0.03,0,0, ], - col_pow =[0.4, 0,0, ], - col_mult =[0.2, 0,0, ], - col_add =[0.02,0,0, ], - ladd_pow =[0.4, 0,0, ], - ladd_mult =[0.4, 0,0, ], - ladd_add =[0.04,0,0, ], - col_rotate =[1., 0,0, ], - schedule_coeff=1): - # no mean - self.pow_nomean = [1,1,1] - self.add_nomean = [0,0,0] - self.mult_nomean = [1,1,1] - self.pow_withmean = [1,1,1] - self.add_withmean = [0,0,0] - self.mult_withmean = [1,1,1] - self.lmult_pow = 1 - self.lmult_mult = 1 - self.lmult_add = 0 - self.col_angle = 0 - if not ladd_pow is None: - self.pow_nomean[0] =np.exp(np.random.normal(ladd_pow[2], ladd_pow[0])) - if not col_pow is None: - self.pow_nomean[1] =np.exp(np.random.normal(col_pow[2], col_pow[0])) - self.pow_nomean[2] =np.exp(np.random.normal(col_pow[2], col_pow[0])) - - if not ladd_add is None: - self.add_nomean[0] =np.random.normal(ladd_add[2], ladd_add[0]) - if not col_add is None: - self.add_nomean[1] =np.random.normal(col_add[2], col_add[0]) - self.add_nomean[2] =np.random.normal(col_add[2], col_add[0]) - - if not ladd_mult is None: - self.mult_nomean[0] =np.exp(np.random.normal(ladd_mult[2], ladd_mult[0])) - if not col_mult is None: - self.mult_nomean[1] =np.exp(np.random.normal(col_mult[2], col_mult[0])) - self.mult_nomean[2] =np.exp(np.random.normal(col_mult[2], col_mult[0])) - - # with mean - if not sat_pow is None: - self.pow_withmean[1] =np.exp(np.random.uniform(sat_pow[2]-sat_pow[0], sat_pow[2]+sat_pow[0])) - self.pow_withmean[2] =self.pow_withmean[1] - if not sat_add is None: - self.add_withmean[1] =np.random.uniform(sat_add[2]-sat_add[0], sat_add[2]+sat_add[0]) - self.add_withmean[2] =self.add_withmean[1] - if not sat_mult is None: - self.mult_withmean[1] = np.exp(np.random.uniform(sat_mult[2]-sat_mult[0], sat_mult[2]+sat_mult[0])) - self.mult_withmean[2] = self.mult_withmean[1] - - if not lmult_pow is None: - self.lmult_pow = np.exp(np.random.uniform(lmult_pow[2]-lmult_pow[0], lmult_pow[2]+lmult_pow[0])) - if not lmult_mult is None: - self.lmult_mult= np.exp(np.random.uniform(lmult_mult[2]-lmult_mult[0], lmult_mult[2]+lmult_mult[0])) - if not lmult_add is None: - self.lmult_add = np.random.uniform(lmult_add[2]-lmult_add[0], lmult_add[2]+lmult_add[0]) - if not col_rotate is None: - self.col_angle= np.random.uniform(col_rotate[2]-col_rotate[0], col_rotate[2]+col_rotate[0]) - - # eigen vectors - 
self.eigvec = np.reshape([0.51,0.56,0.65,0.79,0.01,-0.62,0.35,-0.83,0.44],[3,3]).transpose() - - - def __call__(self, inputs, target, intr): - inputs[0] = self.pca_image(inputs[0]) - inputs[1] = self.pca_image(inputs[1]) - return inputs,target,intr - - def pca_image(self, rgb): - eig = np.dot(rgb, self.eigvec) - max_rgb = np.clip(rgb,0,np.inf).max((0,1)) - min_rgb = rgb.min((0,1)) - mean_rgb = rgb.mean((0,1)) - max_abs_eig = np.abs(eig).max((0,1)) - max_l = np.sqrt(np.sum(max_abs_eig*max_abs_eig)) - mean_eig = np.dot(mean_rgb, self.eigvec) - - # no-mean stuff - eig -= mean_eig[np.newaxis, np.newaxis] - - for c in range(3): - if max_abs_eig[c] > 1e-2: - mean_eig[c] /= max_abs_eig[c] - eig[:,:,c] = eig[:,:,c] / max_abs_eig[c]; - eig[:,:,c] = np.power(np.abs(eig[:,:,c]),self.pow_nomean[c]) *\ - ((eig[:,:,c] > 0) -0.5)*2 - eig[:,:,c] = eig[:,:,c] + self.add_nomean[c] - eig[:,:,c] = eig[:,:,c] * self.mult_nomean[c] - eig += mean_eig[np.newaxis,np.newaxis] - - # withmean stuff - if max_abs_eig[0] > 1e-2: - eig[:,:,0] = np.power(np.abs(eig[:,:,0]),self.pow_withmean[0]) * \ - ((eig[:,:,0]>0)-0.5)*2; - eig[:,:,0] = eig[:,:,0] + self.add_withmean[0]; - eig[:,:,0] = eig[:,:,0] * self.mult_withmean[0]; - - s = np.sqrt(eig[:,:,1]*eig[:,:,1] + eig[:,:,2] * eig[:,:,2]) - smask = s > 1e-2 - s1 = np.power(s, self.pow_withmean[1]); - s1 = np.clip(s1 + self.add_withmean[1], 0,np.inf) - s1 = s1 * self.mult_withmean[1] - s1 = s1 * smask + s*(1-smask) - - # color angle - if self.col_angle!=0: - temp1 = np.cos(self.col_angle) * eig[:,:,1] - np.sin(self.col_angle) * eig[:,:,2] - temp2 = np.sin(self.col_angle) * eig[:,:,1] + np.cos(self.col_angle) * eig[:,:,2] - eig[:,:,1] = temp1 - eig[:,:,2] = temp2 - - # to origin magnitude - for c in range(3): - if max_abs_eig[c] > 1e-2: - eig[:,:,c] = eig[:,:,c] * max_abs_eig[c] - - if max_l > 1e-2: - l1 = np.sqrt(eig[:,:,0]*eig[:,:,0] + eig[:,:,1]*eig[:,:,1] + eig[:,:,2]*eig[:,:,2]) - l1 = l1 / max_l - - eig[:,:,1][smask] = (eig[:,:,1] / s * s1)[smask] - eig[:,:,2][smask] = (eig[:,:,2] / s * s1)[smask] - #eig[:,:,1] = (eig[:,:,1] / s * s1) * smask + eig[:,:,1] * (1-smask) - #eig[:,:,2] = (eig[:,:,2] / s * s1) * smask + eig[:,:,2] * (1-smask) - - if max_l > 1e-2: - l = np.sqrt(eig[:,:,0]*eig[:,:,0] + eig[:,:,1]*eig[:,:,1] + eig[:,:,2]*eig[:,:,2]) - l1 = np.power(l1, self.lmult_pow) - l1 = np.clip(l1 + self.lmult_add, 0, np.inf) - l1 = l1 * self.lmult_mult - l1 = l1 * max_l - lmask = l > 1e-2 - eig[lmask] = (eig / l[:,:,np.newaxis] * l1[:,:,np.newaxis])[lmask] - for c in range(3): - eig[:,:,c][lmask] = (np.clip(eig[:,:,c], -np.inf, max_abs_eig[c]))[lmask] - # for c in range(3): -# # eig[:,:,c][lmask] = (eig[:,:,c] / l * l1)[lmask] * lmask + eig[:,:,c] * (1-lmask) - # eig[:,:,c][lmask] = (eig[:,:,c] / l * l1)[lmask] - # eig[:,:,c] = (np.clip(eig[:,:,c], -np.inf, max_abs_eig[c])) * lmask + eig[:,:,c] * (1-lmask) - - return np.clip(np.dot(eig, self.eigvec.transpose()), 0, 1) - - -class ChromaticAug(object): - """ - Chromatic augmentation: https://github.com/lmb-freiburg/flownet2/blob/master/src/caffe/layers/data_augmentation_layer.cu - """ - def __init__(self, noise = 0.06, - gamma = 0.02, - brightness = 0.02, - contrast = 0.02, - color = 0.02, - schedule_coeff=1): - - self.noise = np.random.uniform(0,noise) - self.gamma = np.exp(np.random.normal(0, gamma*schedule_coeff)) - self.brightness = np.random.normal(0, brightness*schedule_coeff) - self.contrast = np.exp(np.random.normal(0, contrast*schedule_coeff)) - self.color = np.exp(np.random.normal(0, color*schedule_coeff,3)) - - 
def __call__(self, inputs, target, intr): - inputs[1] = self.chrom_aug(inputs[1]) - # noise - inputs[0]+=np.random.normal(0, self.noise, inputs[0].shape) - inputs[1]+=np.random.normal(0, self.noise, inputs[0].shape) - return inputs,target,intr - - def chrom_aug(self, rgb): - # color change - mean_in = rgb.sum(-1) - rgb = rgb*self.color[np.newaxis,np.newaxis] - brightness_coeff = mean_in / (rgb.sum(-1)+0.01) - rgb = np.clip(rgb*brightness_coeff[:,:,np.newaxis],0,1) - # gamma - rgb = np.power(rgb,self.gamma) - # brightness - rgb += self.brightness - # contrast - rgb = 0.5 + ( rgb-0.5)*self.contrast - rgb = np.clip(rgb, 0, 1) - return rgb diff --git a/spaces/radames/hello-huggingface.js/style.css b/spaces/radames/hello-huggingface.js/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/radames/hello-huggingface.js/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Angry Birds Star Wars 2 Full Version Activation Key A Must-Have for Fans of the Franchise.md b/spaces/raedeXanto/academic-chatgpt-beta/Angry Birds Star Wars 2 Full Version Activation Key A Must-Have for Fans of the Franchise.md deleted file mode 100644 index 57ff1c97daeca7efc9f8bcf08fd73e0fed713888..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Angry Birds Star Wars 2 Full Version Activation Key A Must-Have for Fans of the Franchise.md +++ /dev/null @@ -1,90 +0,0 @@ - -

    Angry Birds Star Wars II: How to Get the Full Version Activation Key

    -

    Angry Birds Star Wars II is a puzzle video game that combines the popular Angry Birds franchise with the iconic Star Wars universe. It is a sequel to Angry Birds Star Wars, and it lets you play as both the birds and the pigs in an intergalactic adventure. You can use the Force, wield your Lightsaber, and blast away Pigtroopers on various planets and locations from the Star Wars prequel trilogy.

    -

    However, to enjoy the full version of Angry Birds Star Wars II, you need an activation key that unlocks all the levels and features of the game. Without it, you can only play a limited number of levels for free. So how do you get the full version activation key for Angry Birds Star Wars II? In this article, we will show you two ways to do it: one official and one unofficial.

    -

    angry birds star wars 2 full version activation key


    Download Zip »»» https://tinourl.com/2uL4mD



    -

    Features of Angry Birds Star Wars II

    -

    Before we get into how to get the full version activation key for Angry Birds Star Wars II, let's take a look at some of the features that make this game so fun and addictive.

    -

    Choose your side: Play as the birds or the pigs

    -

    One of the most interesting aspects of Angry Birds Star Wars II is that you can choose which side you want to play as. You can join the Rebel birds and fight against the evil Empire pigs, or you can join the dark side and play as Darth Maul, Anakin Skywalker, General Grievous, and other villains. Each side has its own unique characters, abilities, and levels.

    -

    Use the Force and Lightsabers: Unlock special abilities and weapons

    -

    Another feature that makes Angry Birds Star Wars II stand out from other Angry Birds games is that you can use the Force and Lightsabers to enhance your gameplay. Depending on which character you choose, you can unleash different powers and weapons that can help you destroy your enemies and obstacles. For example, you can use Yoda's Force push, Obi-Wan Kenobi's Force pull, Darth Maul's double-bladed Lightsaber, or Jango Fett's rocket launcher.

    -

    Explore iconic locations: Visit Tatooine, Naboo, and the Pig Star

    -

    Angry Birds Star Wars II also lets you explore some of the most iconic locations from the Star Wars prequel trilogy. You can visit Tatooine, where Anakin Skywalker grew up; Naboo, where Queen Amidala ruled; and the Pig Star, where Darth Vader awaits. Each location has its own challenges and surprises that will test your skills and creativity.

    -

    Collect and upgrade characters: Discover over 30 playable characters

    -

    Another feature that makes Angry Birds Star Wars II fun and replayable is that you can collect and upgrade over 30 playable characters. You can unlock new characters by completing levels, earning stars, or purchasing them with in-game currency. You can also upgrade your characters by leveling them up or equipping them with items that boost their stats. Some of the characters you can collect include Qui-Gon Jinn, Mace Windu, Jar Jar Binks, Count Dooku, Emperor Palpatine, and more.

    -

    angry birds star wars 2 unlock code generator
    -how to activate angry birds star wars 2 full game
    -angry birds star wars 2 activation key free download
    -angry birds star wars 2 crack serial key
    -angry birds star wars 2 full version license key
    -angry birds star wars 2 registration code online
    -angry birds star wars 2 product key for pc
    -angry birds star wars 2 activation key for android
    -angry birds star wars 2 full game download with key
    -angry birds star wars 2 serial number generator
    -angry birds star wars 2 activation key no survey
    -angry birds star wars 2 full version patch download
    -angry birds star wars 2 license key crack
    -angry birds star wars 2 activation code for windows
    -angry birds star wars 2 full game unlocker
    -angry birds star wars 2 keygen free download
    -angry birds star wars 2 activation key for ios
    -angry birds star wars 2 full version apk mod
    -angry birds star wars 2 serial key download
    -angry birds star wars 2 activation key for mac
    -angry birds star wars 2 full game installer
    -angry birds star wars 2 license key generator online
    -angry birds star wars 2 activation key for pc free
    -angry birds star wars 2 full version hacked
    -angry birds star wars 2 serial code online
    -angry birds star wars 2 activation key for windows phone
    -angry birds star wars 2 full version free download for pc
    -angry birds star wars 2 license key free online
    -angry birds star wars 2 activation key for ipad
    -angry birds star wars 2 full version mod apk download
    -angry birds star wars 2 serial key free online
    -angry birds star wars 2 activation key for iphone
    -angry birds star wars 2 full version offline installer
    -angry birds star wars 2 license key online generator
    -angry birds star wars 2 activation key for kindle fire
    -angry birds star wars 2 full version crack download
    -angry birds star wars 2 serial number online generator
    -angry birds star wars 2 activation key for blackberry
    -angry birds star wars 2 full version free download for android
    -angry birds star wars 2 license code free download
    -angry birds star wars 2 activation key for windows 8.1
    -angry birds star wars 2 full version apk free download
    -angry birds star wars 2 serial number free download
    -angry birds star wars 2 activation key for windows xp
    -angry birds star wars 2 full version setup download
    -angry birds star wars 2 license code online generator
    -angry birds star wars 2 activation key for windows vista
    -angry birds star wars 2 full version apk download for android
    -angry birds star wars 2 serial code free download

    -

    Telepods: Scan physical toys to play with them in-game

    -

    One of the most innovative features of Angry Birds Star Wars II is that you can use Telepods to scan physical toys and play with them in-game. Telepods are small plastic figures that represent different characters from Angry Birds Star Wars II. You can place them on a special base that connects to your device's camera and scan them into the game. This way, you can choose which character you want to use in each level without having to unlock them first.

    -

    How to Download Angry Birds Star Wars II for Free

    -

    If you want to play Angry Birds Star Wars II for free, you need to download it first. There are two ways to do this: one official and one unofficial.

    -

    Download from official sources: Google Play, App Store, Amazon Appstore, Windows Phone Store

    -

    The official way to download Angry Birds Star Wars II for free is to get it from one of the official sources depending on your device. If you have an Android device, you can get it from Google Play; if you have an iOS device, you can get it from App Store; if you have a Kindle Fire device, you can get it from Amazon Appstore; if you have a Windows Phone device, you can get it from Windows Phone Store.

    -

    To download Angry Birds Star Wars II from one of these sources, you just need to follow these steps:

    -
      -
    1. Open your device's app store.
    2. -
    3. Search for "Angry Birds Star Wars II".
    4. -
    5. Select the game from the list of results.
    6. -
    7. Tap on "Install" or "Get" to download it for free.
    8. -
    9. Wait for it to finish downloading and installing.
    10. -
    11. Launch it from your device's home screen or app drawer.
    12. -
    -

    Download from third-party sources: APK files, torrents, etc. (not recommended)

    -

    The unofficial way to download Angry Birds Star Wars II for free is to get it from a third-party source such as an APK file or a torrent. An APK file is an Android application package file that contains all the files needed to install an app on an Android device. A torrent is a file that contains metadata about files and folders that are distributed over a peer-to-peer network.

    -

    To download Angry Birds Star Wars II from a third-party source such as an APK file or a torrent, you need to follow these steps:

    -
      -
    1. Find a reliable website that offers APK files or torrents for Angry Birds Star Wars II. Some examples are Uptodown, APKPure, The Pirate Bay, etc.
    2. -
    3. Select the version of Angry Birds Star Wars II that you want to download.
    4. -
    5. Download it to your device or computer.
    6. -
    7. If you downloaded an APK file,
      • Transfer it to your Android device if necessary.
      • Enable "Unknown sources" in your device's settings to allow installation of apps from outside sources.
      • Navigate to where you saved the APK file using a file manager app.
      • Tap on it to install it.
    8. If you downloaded a torrent,
      • You need a torrent client app such as uTorrent, BitTorrent, etc. to open it.
      • Add the torrent file to your torrent client app and start downloading the game files.
      • Once the download is complete, extract the game files using a file extractor app such as WinRAR, 7-Zip, etc.
      • Navigate to where you extracted the game files and run the setup.exe file to install the game on your computer.
    9. Launch it from your device's home screen or app drawer (for Android) or desktop shortcut (for Windows).
    -

Note: We do not recommend downloading Angry Birds Star Wars II from third-party sources because they may contain viruses or malware, and the files may be outdated or tampered with. If possible, stick to the official sources listed above.

    -
    -
    \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Atlantica Server Files.md b/spaces/raedeXanto/academic-chatgpt-beta/Atlantica Server Files.md deleted file mode 100644 index 262ef7031815be2ef50b8d121357661c9c6ec112..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Atlantica Server Files.md +++ /dev/null @@ -1,30 +0,0 @@ - -

    How to Set Up Your Own Atlantica Online Server

    -

    Atlantica Online is a massively multiplayer online role-playing game that features turn-based combat, strategy and customization. If you want to play Atlantica Online with your friends without relying on official servers, you can set up your own server using Atlantica Server Files. These are files that contain the necessary data and configurations to run a private server for Atlantica Online.

    -

    In this article, we will show you how to set up your own Atlantica Online server using Atlantica Server Files 2019 v32347+, which is the latest version available as of April 2023. This version includes new mounts, mercenaries, costumes, decorations, items and more. You will also learn how to enable LAN multiplayer so you can play with your friends on the same network.

    -

    Atlantica Server Files


    DOWNLOAD 🆓 https://tinourl.com/2uL5D4



    -

    Requirements

    -

    To set up your own Atlantica Online server, you will need the following:

    -
      -
    • A computer with Windows 10 operating system and at least 8 GB of RAM and 4 CPU cores.
    • -
    • VMware Workstation or Player, which is a software that allows you to run virtual machines on your computer.
    • -
    • Atlantica Server Files 2019 v32347+, which you can download from this video or this forum thread.
    • -
    • Atlantica Online game client, which you can download from this video or this forum thread.
    • -
    • Atlantica Online game client patch, which you can download from this video or this forum thread.
    • -
    -

    Steps

    -

    To set up your own Atlantica Online server, follow these steps:

    -
      -
    1. Extract the Atlantica Server Files 2019 v32347+ zip file to a folder on your computer.
    2. -
    3. Launch VMware Workstation or Player and create a new virtual network with the following settings: VMnet1: Host-Only, Subnet address: 192.168.1.0, Subnet mask: 255.255.255.0, DHCP settings: Starting IP: 192.168.1.1, Ending IP: 192.168.1.254.
    4. -
    5. Edit the IPv4 settings of your VMware Network Adapter VMnet1 on your computer to match the following: IP address: 192.168.1.1, Subnet mask: 255.255.255.0.
    6. -
    7. Choose AO_ServerV3 from the folder where you extracted the Atlantica Server Files 2019 v32347+ and click Power On This Virtual Machine on VMware Workstation or Player.
    8. -
    9. When prompted, choose I moved it as the option for the virtual machine.
    10. -
    11. Wait for the server to load and then click Start Server on the desktop of the virtual machine.
    12. -
    13. Install the Atlantica Online game client on your computer and apply the Atlantica Online game client patch.
    14. -
    15. Edit the hosts file on your computer (located at C:\Windows\System32\drivers\etc) and add the following line: 192.168.1.2 atlantica.valofe.com
    16. -
    17. Launch the Atlantica Online game client on your computer and log in with any username and password (the server uses auto register on login).
    18. -
19. Enjoy playing Atlantica Online on your own private server!
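If you would rather script the hosts-file change from the steps above than edit the file by hand, here is a minimal Python sketch. It assumes you run it from an elevated (administrator) prompt and that your server virtual machine ends up at 192.168.1.2, as described in the steps above.

```python
# Minimal sketch: append the private-server hosts entry if it is not already there.
# Run from an elevated (administrator) prompt; back up the hosts file first.
import pathlib

hosts = pathlib.Path(r"C:\Windows\System32\drivers\etc\hosts")
entry = "192.168.1.2 atlantica.valofe.com"

text = hosts.read_text()
if entry not in text:
    # The game client will then resolve the official hostname to your server VM.
    hosts.write_text(text.rstrip("\n") + "\n" + entry + "\n")
    print("Added:", entry)
else:
    print("Entry already present.")
```

To undo the change later, simply remove that line from the hosts file again.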

      -

      -
      -
      \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Descargarvirtualsamplerdk27fullgratis A Comparison with Other Virtual Samplers on the Market.md b/spaces/raedeXanto/academic-chatgpt-beta/Descargarvirtualsamplerdk27fullgratis A Comparison with Other Virtual Samplers on the Market.md deleted file mode 100644 index 39dff63a1c3d1d5505112531ffccb684f1398e3b..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Descargarvirtualsamplerdk27fullgratis A Comparison with Other Virtual Samplers on the Market.md +++ /dev/null @@ -1,123 +0,0 @@ - -

      Descargarvirtualsamplerdk27fullgratis: A Guide to Virtual Stock Trading

      -

      Have you ever wanted to invest in the stock market but felt intimidated by the complexity and risk involved? Do you wish you could practice and learn the basics of stock trading without risking your real money? If so, you might be interested in descargarvirtualsamplerdk27fullgratis, a software that lets you simulate stock trading with virtual money. In this article, we will explain what descargarvirtualsamplerdk27fullgratis is, how it works, and how you can use it to improve your skills and knowledge as an investor. We will also share some tips and tricks for using descargarvirtualsamplerdk27fullgratis effectively and efficiently.

      -

      descargarvirtualsamplerdk27fullgratis


Download File » https://tinourl.com/2uL3kI



      -

      What is descargarvirtualsamplerdk27fullgratis?

      -

      Descargarvirtualsamplerdk27fullgratis is a software that simulates stock trading with virtual money. It was developed by Virtual Sampler, a company that specializes in creating educational and entertainment software for various fields. Descargarvirtualsamplerdk27fullgratis is based on the VST+ SDK, which is a platform that allows developers to create plugins for audio applications. Descargarvirtualsamplerdk27fullgratis uses the VST+ SDK to create a realistic and interactive environment for virtual stock trading.

      -

      Features and benefits of descargarvirtualsamplerdk27fullgratis

      -

      Some of the features and benefits of descargarvirtualsamplerdk27fullgratis are:

      -
        -
      • It allows you to create a virtual portfolio with up to 100 different stocks from various markets and sectors.
      • -
      • It provides real-time data and quotes from major stock exchanges around the world.
      • -
      • It lets you buy and sell virtual stocks with a simple click of a button.
      • -
      • It tracks your performance and shows you your profit or loss, your return on investment, your portfolio value, and other relevant statistics.
      • -
      • It helps you learn from your mistakes by providing feedback and suggestions on how to improve your strategy and decision-making.
      • -
      • It offers various tools and indicators for technical analysis and fundamental analysis, such as charts, graphs, trends, moving averages, oscillators, ratios, earnings reports, news articles, etc.
      • -
      • It is easy to use, user-friendly, and customizable. You can adjust the settings according to your preferences and goals.
      • -
      • It is fun, engaging, and educational. You can compete with other users or challenge yourself with different scenarios and levels of difficulty.
      • -
      -

      How to download and install descargarvirtualsamplerdk27fullgratis

      -

      To download and install descargarvirtualsamplerdk27fullgratis, you need to follow these steps:

      -
        -
      1. Go to https://urlca.com/2k8qxx, which is the official website of Virtual Sampler.
      2. -
      3. Click on the "Download" button and choose the version that suits your operating system (Windows or Mac).
      4. -
      5. Save the file on your computer and run it as an administrator.
      6. -
      7. Follow the instructions on the screen to complete the installation process.
      8. -
      9. Launch the software and register with your email address and password.
      10. -
      11. Enjoy using descargarvirtualsamplerdk27fullgratis!
      12. -
      -

      How to use descargarvirtualsamplerdk27fullgratis

      -

      Once you have downloaded and installed descargarvirtualsamplerdk27fullgratis, you can start using it to simulate stock trading with virtual money. Here are some of the basic steps you need to follow:

      -

      How to create a virtual portfolio

      -

      To create a virtual portfolio, you need to:

      -
        -
      1. Click on the "Portfolio" tab on the main menu.
      2. -
      3. Click on the "New" button on the top right corner.
      4. -
      5. Name your portfolio and choose a currency (USD, EUR, GBP, etc.).
      6. -
      7. Select a starting balance (the default is $10,000).
      8. -
      9. Add stocks to your portfolio by clicking on the "Add" button on the bottom right corner.
      10. -
      11. Search for stocks by name or symbol or browse through different categories (such as market cap, sector, industry, country, etc.).
      12. -
      13. Select the stocks you want to add and enter the number of shares you want to buy.
      14. -
      15. Click on the "Buy" button to confirm your purchase.
      16. -
      17. Repeat steps 5-8 until you have added all the stocks you want to your portfolio.
      18. -
      -

      How to buy and sell virtual stocks

      -

      To buy and sell virtual stocks, you need to:

      -
        -
      1. Click on the "Trade" tab on the main menu.
      2. -
      3. Select the portfolio you want to trade with from the drop-down menu on the top left corner.
      4. -
      5. Select the stock you want to trade with from the list on the left side of the screen.
      6. -
      7. If you want to buy more shares of that stock, enter the number of shares you want to buy in the "Buy" box on the right side of the screen. If you want to sell some or all of your shares of that stock, enter the number of shares you want to sell in the "Sell" box on the right side of the screen.
      8. -
      9. Click on the "Buy" or "Sell" button below the box to confirm your order.
      10. -
      11. You can also use advanced options such as limit orders, stop orders, trailing stop orders, etc. by clicking on the "Advanced" button below the "Buy" or "Sell" button.
      12. -
      -

      How to monitor your performance and learn from your mistakes

      -

      To monitor your performance and learn from your mistakes, you need to:

      -

      descargar virtual sampler dk 2.7 full gratis
      -como descargar virtual sampler dk 2.7 gratis
      -virtual sampler dk 2.7 full español descargar gratis
      -virtual sampler dk 2.7 full mega descargar gratis
      -virtual sampler dk 2.7 full crack descargar gratis
      -descargar e instalar virtual sampler dk 2.7 full gratis
      -virtual sampler dk 2.7 full version free download
      -how to download virtual sampler dk 2.7 for free
      -virtual sampler dk 2.7 full english free download
      -virtual sampler dk 2.7 full mega free download
      -virtual sampler dk 2.7 full crack free download
      -download and install virtual sampler dk 2.7 full for free
      -descargar virtual sampler dk 2.7 full gratis para windows 10
      -descargar virtual sampler dk 2.7 full gratis para windows 7
      -descargar virtual sampler dk 2.7 full gratis para mac
      -descargar virtual sampler dk 2.7 full gratis para android
      -descargar virtual sampler dk 2.7 full gratis sin virus
      -descargar virtual sampler dk 2.7 full gratis sin publicidad
      -descargar virtual sampler dk 2.7 full gratis sin contraseña
      -descargar virtual sampler dk 2.7 full gratis sin registrarse
      -descargar virtual sampler dk 2.7 pro gratis
      -descargar virtual sampler dk 2.7 premium gratis
      -descargar virtual sampler dk 2.7 plus gratis
      -descargar virtual sampler dk 2.7 gold gratis
      -descargar virtual sampler dk 2.7 ultimate gratis
      -descargar virtual sampler dk 3.0 full gratis
      -descargar virtual sampler dk 4.0 full gratis
      -descargar virtual sampler dk 5.0 full gratis
      -descargar virtual sampler dk latest version full gratis
      -descargar virtual sampler dk update full gratis
      -que es virtual sampler dk y como descargarlo gratis
      -para que sirve virtual sampler dk y como descargarlo gratis
      -como usar virtual sampler dk y como descargarlo gratis
      -como configurar virtual sampler dk y como descargarlo gratis
      -como activar virtual sampler dk y como descargarlo gratis
      -tutorial de virtual sampler dk y como descargarlo gratis
      -manual de virtual sampler dk y como descargarlo gratis
      -guia de virtual sampler dk y como descargarlo gratis
      -trucos de virtual sampler dk y como descargarlo gratis
      -consejos de virtual sampler dk y como descargarlo gratis
      -opiniones de virtual sampler dk y como descargarlo gratis
      -reseñas de virtual sampler dk y como descargarlo gratis
      -valoraciones de virtual sampler dk y como descargarlo gratis
      -comentarios de virtual sampler dk y como descargarlo gratis
      -preguntas frecuentes de virtual sampler dk y como descargarlo gratis
      -soluciones de problemas de virtual sampler dk y como descargarlo gratis
      -alternativas a virtual sampler dk y como descargarlas gratis
      -comparativa entre virtual sampler dk y otros programas similares y como descargarlos gratis
      -ventajas y desventajas de usar virtual sampler dk y como descargarlo gratis

      -
        -
      1. Click on the "Performance" tab on the main menu.
      2. -
      3. Select the portfolio you want to analyze from the drop-down menu on the top left corner.
      4. -
      5. You will see various charts and graphs that show your profit or loss, your return on investment, your portfolio value, your asset allocation, your risk profile, etc.
      6. -
      7. You can zoom in or out of any chart or graph by using your mouse wheel or by clicking on the "+" or "-" buttons on the bottom right corner.
      8. -
      9. You can change the time frame of any chart or graph by using the slider bar on the bottom left corner or by clicking on the buttons above it (such as 1D, 1W, 1M, etc.).
      10. -
      11. You can also see detailed information about each stock in your portfolio by clicking on its name or symbol on the list below the charts and graphs.
      12. -
      13. You will see various indicators and tools for technical analysis and fundamental analysis, such as charts, graphs, trends, moving averages, oscillators, ratios, earnings reports, news articles, etc.
      14. -
      15. You can use these indicators and tools to evaluate the performance and potential of each stock and decide whether to keep it, sell it, or buy more of it.
      16. -
      17. You can also get feedback and suggestions on how to improve your strategy and decision-making by clicking on the "Feedback" button on the top right corner. You will see a list of tips and tricks that are tailored to your specific situation and goals.
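The statistics and indicators mentioned in the steps above (profit or loss, return on investment, moving averages) are standard formulas, so you can sanity-check what any trading simulator reports. Here is a minimal Python sketch with made-up sample numbers; it only illustrates the formulas and is not code from descargarvirtualsamplerdk27fullgratis itself.

```python
# Illustrative formulas only; the figures below are made-up sample data.

def return_on_investment(cost_basis, current_value):
    """Profit (or loss) divided by what you originally paid, as a fraction."""
    return (current_value - cost_basis) / cost_basis

def simple_moving_average(prices, window):
    """Average of each full sliding window of closing prices."""
    return [sum(prices[i - window:i]) / window for i in range(window, len(prices) + 1)]

prices = [100.0, 102.5, 101.0, 105.0, 107.5, 106.0, 110.0]
print(f"ROI: {return_on_investment(10_000, 11_500):.1%}")   # prints: ROI: 15.0%
print("3-period SMA:", simple_moving_average(prices, 3))
```

A positive ROI means the portfolio is worth more than you paid for it; a rising moving average is the kind of trend signal the charting tools are meant to highlight.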
      18. -
      -

      Tips and tricks for using descargarvirtualsamplerdk27fullgratis

      -

The software supports multiple interface languages, such as English, Spanish, French, German, Italian, Portuguese, Russian, Chinese, Japanese, and Korean. You can change the language settings in the software or on the website.

      -
    20. How can I contact the developers of descargarvirtualsamplerdk27fullgratis?
    21. -

      If you have any questions, feedback, or suggestions about descargarvirtualsamplerdk27fullgratis, you can contact the developers by email at support@virtualsampler.com or by phone at +1-800-123-4567. You can also visit their website at https://www.virtualsampler.com for more information and resources.

      -
    -

    -
    -
    \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Download World Constitutions by Kaeley PDF Free and Learn About the History and Features of Various Constitutions.md b/spaces/raedeXanto/academic-chatgpt-beta/Download World Constitutions by Kaeley PDF Free and Learn About the History and Features of Various Constitutions.md deleted file mode 100644 index 8f0c3285fff72fe0d668fe798b847ce9fc14e68f..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Download World Constitutions by Kaeley PDF Free and Learn About the History and Features of Various Constitutions.md +++ /dev/null @@ -1,124 +0,0 @@ -
    -

    World Constitutions by Kaeley PDF Free: A Comprehensive Guide

    -

    If you are interested in learning about the constitutions of different countries and regions, you might be looking for a reliable and comprehensive source of information. One such source is the book World Constitutions by S.L. Kaeley, which is available in PDF format for free. In this article, we will tell you what are world constitutions, why they are important, what is the book World Constitutions by Kaeley, and how you can download it for free.

    -

    world constitutions by kaeley pdf free


    Download Zip ……… https://tinourl.com/2uL1Ww



    -

    Introduction

    -

    What are world constitutions?

    -

    A constitution is a set of fundamental principles and rules that governs the organization and operation of a political entity, such as a state, a nation, or a federation. A constitution defines the basic structure, powers, functions, and limits of the government, as well as the rights and duties of the citizens. A constitution may be written or unwritten, codified or uncodified, rigid or flexible, depending on the historical and cultural context of each country.

    -

    Why are world constitutions important?

    -

    World constitutions are important because they reflect the political, social, economic, and cultural values and aspirations of different peoples and nations. They also provide a framework for resolving conflicts, ensuring stability, promoting democracy, protecting human rights, and fostering development. By studying world constitutions, we can learn about the similarities and differences among various constitutional systems, as well as the challenges and opportunities they face in the changing world.

    -

    What is the book World Constitutions by Kaeley?

    -

    The book World Constitutions by S.L. Kaeley is a comprehensive and comparative study of the constitutions of various countries and regions in the world. It was first published in 1967 and has been revised and updated several times since then. The book covers more than 150 constitutions from all continents and provides a full view at a glance format for each constitution. The book also includes historical and political background information, as well as the latest amendments and developments in each constitutional system.

    -

    world constitutions kaeley ebook download
    -free pdf of world constitutions by h.s. kaeley
    -world constitutions book by kaeley online
    -how to get world constitutions by kaeley pdf for free
    -world constitutions by h.s. kaeley pdf free download
    -world constitutions kaeley pdf file
    -world constitutions by kaeley free ebook
    -download world constitutions by h.s. kaeley pdf
    -world constitutions by kaeley online pdf
    -free world constitutions book by h.s. kaeley
    -world constitutions by h.s. kaeley ebook free
    -world constitutions kaeley pdf download link
    -world constitutions by kaeley pdf online free
    -world constitutions book by h.s. kaeley pdf
    -world constitutions by h.s. kaeley free download
    -world constitutions kaeley pdf ebook
    -world constitutions by kaeley download free pdf
    -world constitutions by h.s. kaeley online book
    -free pdf world constitutions by kaeley
    -world constitutions book by h.s. kaeley free pdf
    -world constitutions by h.s. kaeley pdf ebook download
    -world constitutions kaeley pdf online
    -world constitutions by kaeley free pdf download
    -world constitutions book by h.s. kaeley download pdf
    -world constitutions by h.s. kaeley pdf file download
    -world constitutions kaeley ebook free download
    -world constitutions by kaeley online free pdf
    -world constitutions book by h.s. kaeley online free
    -free ebook of world constitutions by kaeley
    -world constitutions by h.s. kaeley download pdf free
    -world constitutions kaeley online book free
    -world constitutions by kaeley ebook download free
    -world constitutions book by h.s. kaeley ebook free
    -free download of world constitutions by kaeley pdf
    -world constitutions by h.s. kaeley online ebook free
    -world constitutions kaeley download pdf link
    -world constitutions by kaeley free online book
    -world constitutions book by h.s. kaeley online pdf free
    -free online pdf of world constitutions by kaeley
    -world constitutions by h.s. kaeley ebook link free
    -world constitutions kaeley online free ebook
    -world constitutions by kaeley link to download pdf free
    -world constitutions book by h.s. kaeley link to download pdf free
    -free link to download world constitutions by kaeley pdf
    -world constitutions by h.s. kaeley link to download ebook free
    -world constitutions book by h.s. kaely link to download ebook free
    -free link to download wordl constiuttions b ykaeely ebook

    -

    Main Body

    -

    Features of World Constitutions by Kaeley

    -

    Full view at a glance format

    -

    The book World Constitutions by Kaeley adopts a full view at a glance format for presenting each constitution. This means that each constitution is summarized in a single page with concise headings and subheadings that highlight the main features and aspects of the constitution. This format makes it easy for readers to understand and remember the key points of each constitution without having to read lengthy texts or consult multiple sources.

    -

    Comparative analysis of different constitutions

    -

    The book World Constitutions by Kaeley also provides a comparative analysis of different constitutions based on various criteria, such as the type of government, the system of representation, the separation of powers, the judicial review, the amendment procedure, etc. The book also compares and contrasts the constitutional provisions on specific topics, such as fundamental rights, citizenship, federalism, emergency powers, etc. The comparative analysis helps readers to appreciate the diversity and complexity of constitutional systems in the world.

    -

    Historical and political background of each constitution

    -

    The book World Constitutions by Kaeley also gives historical and political background information for each constitution. This information helps readers to understand the origin, evolution, context, and significance of each constitution. The book also explains how each constitution reflects the historical and political events and influences that shaped its formation and development. The book also discusses how each constitution responds to the contemporary challenges and issues facing its country or region.

    -

    Latest amendments and developments

    -

    The book World Constitutions by Kaeley also incorporates the latest amendments and developments in each constitutional system. The book updates its content regularly to reflect the changes that occur in the constitutional landscape due to political reforms, social movements, judicial decisions, international agreements, etc. The book also anticipates future trends and prospects for constitutional change in different countries and regions.

    -

    Benefits of World Constitutions by Kaeley

    -

    Easy to understand and remember

    -

The book World Constitutions by Kaeley is easy to understand and remember because it uses simple language, clear structure, concise headings, bullet points,

    and tables to present the information. The book also uses examples, illustrations, diagrams, and charts to make the information more engaging and memorable. The book also provides summaries, reviews, and quizzes at the end of each chapter to help readers revise and test their knowledge.

    -

    Useful for students, teachers, researchers, and professionals

    -

    The book World Constitutions by Kaeley is useful for students, teachers, researchers, and professionals who are interested in or involved in the field of constitutional studies. The book can serve as a textbook, a reference book, a guide book, or a source book for various purposes and occasions. The book can help students prepare for exams, teachers design courses, researchers conduct projects, and professionals perform tasks related to constitutional matters.

    -

    Covers a wide range of countries and regions

    -

    The book World Constitutions by Kaeley covers a wide range of countries and regions in the world, from Asia to Africa, from Europe to America, from Australia to Antarctica. The book includes both old and new constitutions, both large and small countries, both democratic and authoritarian regimes. The book also covers regional and international organizations that have constitutional significance, such as the European Union, the African Union, the United Nations, etc.

    -

    Provides insights and perspectives on constitutional issues

    -

    The book World Constitutions by Kaeley also provides insights and perspectives on constitutional issues that are relevant and important for the contemporary world. The book discusses how different constitutions deal with issues such as globalization, human rights, terrorism, corruption, environment, gender, etc. The book also analyzes how different constitutions influence and are influenced by the political, social, economic, and cultural developments in their respective countries and regions.

    -

    Conclusion

    -

    Summary of the main points

    -

    In conclusion, world constitutions are important because they reflect the values and aspirations of different peoples and nations. They also provide a framework for governance and development. One of the best sources of information on world constitutions is the book World Constitutions by S.L. Kaeley, which is available in PDF format for free. The book has many features and benefits that make it easy to understand and remember, useful for various purposes and occasions, comprehensive and comparative in coverage, and insightful and perspective in analysis.

    -

    Call to action: how to download World Constitutions by Kaeley PDF free

    -

    If you are interested in downloading World Constitutions by Kaeley PDF free, you can follow these simple steps:

    -
      -
    1. Go to this link, which is one of the web search results that contains the PDF file of the book.
    2. -
    3. Click on the download button at the top right corner of the page.
    4. -
    5. Sign up for a free trial account on Scribd or log in with your existing account.
    6. -
    7. Enjoy reading World Constitutions by Kaeley PDF free on your device.
    8. -
    -

    You can also find other web search results that offer World Constitutions by Kaeley PDF free or at a low price. However, you should be careful about the quality and authenticity of the files before downloading them.

    -

    Frequently Asked Questions

    -
      -
    1. Who is the author of World Constitutions by Kaeley?
      -The author of World Constitutions by Kaeley is S.L. Kaeley, who is a former professor of political science at Punjab University. He has written several books on political science and constitutional studies.
    2. -
    3. How many editions of World Constitutions by Kaeley are there?
      -There are seven editions of World Constitutions by Kaeley so far. The first edition was published in 1967 and the latest edition was published in 2022-2023.
    4. -
    5. How many constitutions are covered in World Constitutions by Kaeley?
      -World Constitutions by Kaeley covers more than 150 constitutions from all continents and regions in the world. The book also covers regional and international organizations that have constitutional significance.
    6. -
    7. What is the format of World Constitutions by Kaeley?
      -World Constitutions by Kaeley adopts a full view at a glance format for presenting each constitution. This means that each constitution is summarized in a single page with concise headings and subheadings that highlight the main features and aspects of the constitution.
    8. -
    9. What are some of the topics that are discussed in World Constitutions by Kaeley?
-Some of the topics that are discussed in World Constitutions by Kaeley are: type of government, system of representation, separation of powers, judicial review, constitutional amendment, etc.
    10. -
    11. How can I get a free trial account on Scribd?
      -To get a free trial account on Scribd, you can follow these steps:

      -
        -
      1. Go to this link, which is the official website of Scribd.
      2. -
      3. Click on the start your free trial button at the top right corner of the page.
      4. -
      5. Choose your preferred plan and payment method.
      6. -
      7. Enter your personal and payment details and click on start membership.
      8. -
      9. Enjoy unlimited access to books, audiobooks, magazines, podcasts, and more on Scribd for 30 days.
      10. -
      -

      You can cancel your subscription at any time before the end of the trial period and you will not be charged.

    12. -
    13. What are some other sources of information on world constitutions?
      -Some other sources of information on world constitutions are:

      -
        -
      • The World Factbook by the Central Intelligence Agency , which provides information on the history, people, government, economy, geography, communications, transportation, military, and transnational issues for 267 world entities.
      • -
      • The Constitution Finder by the University of Richmond , which offers constitutions, charters, amendments, and other related documents for 203 countries and territories.
      • -
      • The International Constitutional Law by the University of Bern , which provides English translations of constitutional documents from 120 countries and regions.
      • -
      -

      You can access these sources online or download them for offline use.

    14. -
    -

    I hope you enjoyed reading this article and learned something new. If you have any questions or feedback, please feel free to contact me. Thank you for your time and attention.

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Dragons Dogma Dark Arisen Update 7 repack Mr DJ repack Features Fixes and Fun.md b/spaces/raedeXanto/academic-chatgpt-beta/Dragons Dogma Dark Arisen Update 7 repack Mr DJ repack Features Fixes and Fun.md deleted file mode 100644 index 1920baa89c867a601ef17048e64364bc376ffc27..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Dragons Dogma Dark Arisen Update 7 repack Mr DJ repack Features Fixes and Fun.md +++ /dev/null @@ -1,125 +0,0 @@ -
    -

    Dragon's Dogma Dark Arisen Update 7 repack Mr DJ repack

    -

    If you are a fan of action RPGs with an open world and a dark fantasy setting, you might have heard of Dragon's Dogma Dark Arisen. This game was originally released in 2012 for PlayStation 3 and Xbox 360, and later ported to PC, PlayStation 4, Xbox One, and Nintendo Switch. It is a critically acclaimed game that offers a unique gameplay experience with dynamic combat, character customization, and a rich story.

    -

    But what if you want to play this game on your PC without spending too much money or disk space? Or what if you want to enjoy the latest version of the game with all the DLCs and bonus content included? Or what if you want to tweak the game settings to suit your preferences and system requirements? Well, in that case, you might want to check out the Dragon's Dogma Dark Arisen Update 7 repack Mr DJ repack. This is a modified version of the game that has been compressed and optimized by a popular repacker known as Mr DJ. In this article, we will tell you everything you need to know about this repack, including its features, installation process, and FAQs.

    -

    Dragon's Dogma Dark Arisen Update 7 repack Mr DJ repack


DOWNLOAD » https://tinourl.com/2uKZz4



    -

    Introduction

    -

    What is Dragon's Dogma Dark Arisen?

    -

    Dragon's Dogma Dark Arisen is an action RPG developed and published by Capcom. It is an enhanced version of the original Dragon's Dogma game that was released in 2012. It includes the base game, plus a new expansion called Dark Arisen that adds a new area, new enemies, new quests, new weapons, new armor, and new gameplay features. The game also features improved graphics, performance, and user interface.

    -

    The game is set in a medieval fantasy world called Gransys, where you play as a customizable character known as the Arisen. You are chosen by a dragon that attacks your village and steals your heart. You must embark on a quest to reclaim your heart and uncover the truth behind the dragon's appearance. Along the way, you will encounter various monsters, NPCs, and factions that will shape your adventure. You will also be accompanied by up to three AI-controlled companions called Pawns, who will assist you in combat and exploration.

    -

    The game is praised for its innovative combat system that allows you to climb on enemies and target specific body parts. You can also use various skills and spells depending on your chosen class or vocation. The game also features a dynamic day-night cycle that affects enemy behavior and difficulty. The game has a high replay value due to its multiple endings, new game plus mode, and online features such as Pawn sharing and Ur-Dragon battles.

    -

    What is a repack?

    -

    A repack is a modified version of a game that has been compressed and optimized by a third-party source. Repacks are usually done to reduce the file size of games and make them more accessible for downloaders with limited bandwidth or disk space. Repacks may also include additional features such as patches, updates, DLCs, cracks, mods, trainers, cheats, or custom installers.

    -

    Dragon's Dogma Dark Arisen highly compressed repack
    -Dragon's Dogma Dark Arisen with all DLCs and bonuses
    -Dragon's Dogma Dark Arisen latest version download
    -Dragon's Dogma Dark Arisen free torrent download
    -Dragon's Dogma Dark Arisen expansion content
    -Dragon's Dogma Dark Arisen PC game FitGirl repack
    -Dragon's Dogma Dark Arisen open world RPG
    -Dragon's Dogma Dark Arisen epic combat experience
    -Dragon's Dogma Dark Arisen customization options
    -Dragon's Dogma Dark Arisen gamepad support
    -Dragon's Dogma Dark Arisen 6.3 GB download size
    -Dragon's Dogma Dark Arisen how to install guide
    -Dragon's Dogma Dark Arisen password for the game
    -Dragon's Dogma Dark Arisen gameplay and screenshots
    -Dragon's Dogma Dark Arisen new questline and region
    -Dragon's Dogma Dark Arisen net energy gain
    -Dragon's Dogma Dark Arisen 30 seconds fusion reactor
    -Dragon's Dogma Dark Arisen 100 million degrees Celsius
    -Dragon's Dogma Dark Arisen seven times hotter than the Sun
    -Dragon's Dogma Dark Arisen Hydra and griffins fight
    -Dragon's Dogma Dark Arisen stunning graphics and visuals
    -Dragon's Dogma Dark Arisen AI companions and Pawns
    -Dragon's Dogma Dark Arisen share Pawns online and reap rewards
    -Dragon's Dogma Dark Arisen natively supports Xbox controllers
    -Dragon's Dogma Dark Arisen download mirrors and links
    -Dragon's Dogma Dark Arisen original content and DLCs included
    -Dragon's Dogma Dark Arisen dynamic and rewarding action combat
    -Dragon's Dogma Dark Arisen huge open world adventure
    -Dragon's Dogma Dark Arisen nine different vocations to choose from
    -Dragon's Dogma Dark Arisen skill upgrades and enhancements
    -Dragon's Dogma Dark Arisen Bitterblack Isle region exploration
    -Dragon's Dogma Dark Arisen turn off antivirus or Windows Defender before downloading
    -Dragon's Dogma Dark Arisen use 7-Zip to extract files
    -Dragon's Dogma Dark Arisen run the installer as admin and install the game on your PC
    -Dragon's Dogma Dark Arisen run the game's exe as admin and play the game
    -Dragon's Dogma Dark Arisen purchase the game here and support the developers
    -Dragon's Dogma Dark Arisen blog for gamers review and rating
    -Dragon's Dogma Dark Arisen collection on OpenSea platform
    -Dragon's Dogma Dark Arisen issues and solutions on Bitbucket site
    -Dragon's Dogma Dark Arisen multiplayer mode and co-op features

    -

    Repacks are not official releases from the game developers or publishers. They are done by independent groups or individuals who are not affiliated with them. Repacks may have some advantages over original releases such as faster download speed, lower disk space requirement, or improved performance. However, they may also have some disadvantages such as compatibility issues, missing files, corrupted data, malware infection, or legal risks.

    -

    Repacks are not recommended for everyone. They are mainly intended for users who have low-end PCs or limited internet access. They are also useful for users who want to try out games before buying them or who want to play older games that are no longer available or supported. Repacks are not meant to replace original releases or harm the game industry. Users who enjoy repacks should support the game developers and publishers by purchasing their games legally if possible.

    -

    Who is Mr DJ?

    -

    Mr DJ is one of the most popular and trusted repackers in the gaming community. He has been repacking games since 2010 and has released over 200 repacks so far. He is known for his high-quality repacks that are well-compressed, well-optimized, well-tested, and well-updated. He also provides detailed instructions and support for his repacks on various platforms such as Facebook, YouTube, Reddit, Discord, and Pirate Bay.

    -

    Features of the repack

    -

    Updated to version 1.0.7

    -

    The repack is based on the latest version of the game, which is 1.0.7. This version includes several bug fixes and improvements, such as:

    -
      -
    • Fixed an issue where the game would crash when launching a new game.
    • Fixed an issue where the game would freeze when entering certain areas.
    • Fixed an issue where the game would stutter when using certain skills or items.
    • Fixed an issue where the game would display incorrect text or graphics in some languages.
    • Fixed an issue where the game would not save properly in some cases.
    • Fixed an issue where the game would not recognize some controllers or keyboards.
    • Fixed an issue where the game would not run on some systems or configurations.
    -

    The repack also includes a crack that bypasses the Steam DRM protection and allows you to play the game offline or online with other repack users.

    -

    Includes all DLCs and bonus content

    -

    The repack includes all the DLCs and bonus content that were released for the game, such as:

    -
      -
    • Dark Arisen expansion: A new area called Bitterblack Isle that offers new challenges, enemies, quests, weapons, armor, and gameplay features.
    • Pre-order bonus: A set of armor and weapons inspired by Resident Evil 6.
    • Retail bonus: A set of armor and weapons inspired by Devil May Cry.
    • Japanese voice pack: An option to switch the voice language to Japanese.
    • High resolution texture pack: An option to enhance the graphics quality with higher resolution textures.
    -

    The repack also includes a mod that adds more hairstyles and faces for character customization.

    -

    Optimized for low-end PCs

    -

    The repack is optimized for low-end PCs that do not meet the game's minimum system requirements. A high-ratio compression scheme reduces the download from 11.6 GB to 5.6 GB, a custom installer lets you choose which components to install or skip, and a bundled configuration tool lets you adjust the game settings to suit your system specifications and preferences.
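
    -

    For reference, that works out to a compression ratio of roughly 0.48 (5.6 / 11.6 ≈ 0.48), so the packed download is a little over 50% smaller than the original 11.6 GB of game files.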

    -

    The smaller package does not compromise the quality or functionality of the game: the original audio and video quality is preserved, no essential files or features are removed, and the repack introduces no errors or glitches and does not affect the game's online features or compatibility.

    -

    Customizable installation options

    -

    The repack offers several installation options that let you customize your gaming experience. You can choose which languages to install from English, French, Italian, German, Spanish, Japanese, Portuguese-Brazilian, Russian, Polish, Korean, Chinese Simplified, and Chinese Traditional; which DLCs and bonus content to install or skip; whether to install the high resolution texture pack; and whether to install the crack.

    -

    The repack also allows you to change your installation options after installation. You can use the configuration tool to switch languages, enable or disable DLCs and bonus content, apply or remove the high resolution texture pack, and apply or remove the crack. You can also use the mod manager to enable or disable mods.

    -

    How to install the repack

    -

    Download the torrent file

    -

    Run the setup.exe file

    -

    The second step to install the repack is to run the setup.exe file that you downloaded. You will need a torrent client such as uTorrent, BitTorrent, qBittorrent, or Vuze to open the torrent file and download the archive, and an archiver such as WinRAR, 7-Zip, or PeaZip to extract the setup.exe file from the compressed folder.
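
    -

    To make the download-and-extract step more concrete, here is a minimal Python sketch that first verifies the downloaded archive against a published SHA-256 checksum and then unpacks it. This is not part of the repack or of Mr DJ's instructions: the file name and checksum are hypothetical placeholders, and the sketch assumes a plain .zip archive, whereas an actual repack may ship as .7z or .rar and therefore need 7-Zip or WinRAR as described above.

```python
# Illustrative only: verify a downloaded archive against a published SHA-256
# checksum, then extract it. File name and checksum are hypothetical placeholders.
import hashlib
import zipfile
from pathlib import Path

ARCHIVE = Path("repack_setup.zip")        # placeholder name for the downloaded archive
EXPECTED_SHA256 = "replace-with-the-published-checksum"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large downloads do not need to fit in RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of(ARCHIVE) != EXPECTED_SHA256:
    raise SystemExit("Checksum mismatch: the download is incomplete or has been tampered with.")

with zipfile.ZipFile(ARCHIVE) as archive:
    archive.extractall("repack_setup")    # unpack into a folder next to the archive
print("Archive verified and extracted.")
```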

    -

    Once you have extracted the setup.exe file, double-click on it to launch the installer. You will see a welcome screen that shows the name and version of the repack. Click on Next to proceed.

    -

    Choose your preferred language and components

    -

    The third step to install the repack is to choose your preferred language and components. You will see a screen that shows a list of languages that you can install. Select one or more languages that you want to install and click on Next. You will then see a screen that shows a list of components that you can install. Select or deselect the components that you want to install or skip and click on Next.

    -

    The components include:

    -
      -
    • Base game: The main files of the game that are required for installation.
    • DLCs: The downloadable content that adds new features to the game.
    • Bonus content: The extra content that was given as a pre-order or retail bonus.
    • High resolution texture pack: The optional feature that enhances the graphics quality of the game.
    • Crack: The optional feature that bypasses the Steam DRM protection and allows you to play the game offline or online with other repack users.
    • Mod: The optional feature that adds more hairstyles and faces for character customization.
    -

    You can also change the installation directory by clicking on Browse and selecting a different folder. You can also check or uncheck the options to create a desktop shortcut and start menu entry for the game. Click on Next to continue.

    -

    Wait for the installation to finish

    -

    The fourth step to install the repack is to wait for the installation to finish. You will see a screen that shows the progress of the installation. Depending on your system specifications and installation options, the installation may take from 10 minutes to 1 hour. Do not close or interrupt the installer during this process.

    -

    Once the installation is complete, you will see a screen that shows a message confirming that the repack has been installed successfully. You can also view a log file that shows the details of the installation. Click on Finish to exit the installer.

    -

    Play the game from desktop shortcut

    -

    The final step is to play the game from the desktop shortcut. You will find a shortcut icon on your desktop showing the name and logo of the game. Double-click on it to launch the game. You can also launch the game from the Start menu or from the installation directory.

    You can use the mod manager to enable or disable mods before playing. You can also use the crack to play the game offline or online with other repack users.

    -

    Conclusion

    -

    In conclusion, Mr DJ's Dragon's Dogma Dark Arisen Update 7 repack is a great way to enjoy this amazing game on your PC. It offers a number of features and benefits over the original release: it is updated to the latest version, includes all DLCs and bonus content, is optimized for low-end PCs, and provides customizable installation options. It is also easy to install and play, with detailed instructions and support from Mr DJ.

    -

    If you are looking for a fun and immersive action RPG with an open world and a dark fantasy setting, you should definitely try out this repack. You will not regret it.

    -

    FAQs

    -

    Q: Is this repack safe and legal?

    -

    A: This repack is safe and virus-free. It has been tested and verified by Mr DJ and thousands of users. However, this repack is not legal. It is a pirated version of the game that violates the copyright and trademark laws of Capcom. You should only use this repack for personal and educational purposes. You should not use this repack for commercial or malicious purposes. You should also support the game developers and publishers by buying their games legally if possible.

    -

    Q: Will this repack work on my PC?

    -

    A: This repack will work on most PCs that meet the minimum system requirements of the game. However, some PCs may have compatibility issues due to different hardware or software configurations. If you encounter any problems with this repack, you can contact Mr DJ or other users for help on his official platforms.

    -

    Q: Can I update or mod this repack?

    -

    A: This repack is already updated to the latest version of the game, which is 1.0.7. You do not need to update it further. However, if there are any future updates or patches for the game, you can check Mr DJ's official platforms for new repacks or updates. You can also mod this repack with various mods that are compatible with it. You can use the mod manager to enable or disable mods before playing.

    -

    Q: Can I play this repack online?

    -

    A: This repack includes a crack that allows you to play the game offline or online with other repack users. However, you cannot play this repack online with original users or Steam users. You can only play this repack online with other users who have the same crack and version of the game as you. You can also use a VPN or a LAN emulator to play this repack online with other users.

    -

    Q: Where can I download this repack?

    A: You can download this repack from the torrent sites where Mr DJ publishes his releases, such as Pirate Bay or TorrentGalaxy. You can also find his direct links on Facebook, YouTube, Reddit, Discord, or his website. You can also request his repacks or updates on his platforms.

    -

    0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/ramiin2/AutoGPT/README.md b/spaces/ramiin2/AutoGPT/README.md deleted file mode 100644 index 5bf09b995f04f7af05d1314906b1b1ff39c20ddc..0000000000000000000000000000000000000000 --- a/spaces/ramiin2/AutoGPT/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: AutoGPT -emoji: 🦾 -colorFrom: yellow -colorTo: yellow -sdk: gradio -sdk_version: 3.27.0 -app_file: ui/app.py -pinned: false -license: mit -duplicated_from: aliabid94/AutoGPT ---- - diff --git a/spaces/rendchevi/nix-tts/elements/session_states.py b/spaces/rendchevi/nix-tts/elements/session_states.py deleted file mode 100644 index 7f88955df981127a4dcbe9b306c3edd777563b9c..0000000000000000000000000000000000000000 --- a/spaces/rendchevi/nix-tts/elements/session_states.py +++ /dev/null @@ -1,30 +0,0 @@ -# Utils -import uuid - -# Streamlit -import streamlit as st - -# Nix -from nix.models.TTS import NixTTSInference - -# --------------------- SESSION STATE MANAGEMENT ------------------------- - -def init_session_state(): - # Model - if "init_model" not in st.session_state: - st.session_state.init_model = True - st.session_state.random_str = uuid.uuid1().hex - st.session_state.model_variant = "Stochastic" - st.session_state.TTS = NixTTSInference("assets/nix-ljspeech-sdp-v0.1") - -def update_model(): - if st.session_state.model_variant == "Deterministic": - st.session_state.TTS = NixTTSInference("assets/nix-ljspeech-v0.1") - elif st.session_state.model_variant == "Stochastic": - st.session_state.TTS = NixTTSInference("assets/nix-ljspeech-sdp-v0.1") - -def update_session_state( - state_id, - state_value, -): - st.session_state[f"{state_id}"] = state_value \ No newline at end of file diff --git a/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/lib/renderer/camera.py b/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/lib/renderer/camera.py deleted file mode 100644 index e5c330a17e0166970428911a8f1ba92bb89f5034..0000000000000000000000000000000000000000 --- a/spaces/rfrossard/Image-and-3D-Model-Creator/PIFu/lib/renderer/camera.py +++ /dev/null @@ -1,207 +0,0 @@ -import cv2 -import numpy as np - -from .glm import ortho - - -class Camera: - def __init__(self, width=1600, height=1200): - # Focal Length - # equivalent 50mm - focal = np.sqrt(width * width + height * height) - self.focal_x = focal - self.focal_y = focal - # Principal Point Offset - self.principal_x = width / 2 - self.principal_y = height / 2 - # Axis Skew - self.skew = 0 - # Image Size - self.width = width - self.height = height - - self.near = 1 - self.far = 10 - - # Camera Center - self.center = np.array([0, 0, 1.6]) - self.direction = np.array([0, 0, -1]) - self.right = np.array([1, 0, 0]) - self.up = np.array([0, 1, 0]) - - self.ortho_ratio = None - - def sanity_check(self): - self.center = self.center.reshape([-1]) - self.direction = self.direction.reshape([-1]) - self.right = self.right.reshape([-1]) - self.up = self.up.reshape([-1]) - - assert len(self.center) == 3 - assert len(self.direction) == 3 - assert len(self.right) == 3 - assert len(self.up) == 3 - - @staticmethod - def normalize_vector(v): - v_norm = np.linalg.norm(v) - return v if v_norm == 0 else v / v_norm - - def get_real_z_value(self, z): - z_near = self.near - z_far = self.far - z_n = 2.0 * z - 1.0 - z_e = 2.0 * z_near * z_far / (z_far + z_near - z_n * (z_far - z_near)) - return z_e - - def get_rotation_matrix(self): - rot_mat = np.eye(3) - s = self.right - s = self.normalize_vector(s) - rot_mat[0, :] = s - u = self.up - u = self.normalize_vector(u) - rot_mat[1, :] 
= -u - rot_mat[2, :] = self.normalize_vector(self.direction) - - return rot_mat - - def get_translation_vector(self): - rot_mat = self.get_rotation_matrix() - trans = -np.dot(rot_mat, self.center) - return trans - - def get_intrinsic_matrix(self): - int_mat = np.eye(3) - - int_mat[0, 0] = self.focal_x - int_mat[1, 1] = self.focal_y - int_mat[0, 1] = self.skew - int_mat[0, 2] = self.principal_x - int_mat[1, 2] = self.principal_y - - return int_mat - - def get_projection_matrix(self): - ext_mat = self.get_extrinsic_matrix() - int_mat = self.get_intrinsic_matrix() - - return np.matmul(int_mat, ext_mat) - - def get_extrinsic_matrix(self): - rot_mat = self.get_rotation_matrix() - int_mat = self.get_intrinsic_matrix() - trans = self.get_translation_vector() - - extrinsic = np.eye(4) - extrinsic[:3, :3] = rot_mat - extrinsic[:3, 3] = trans - - return extrinsic[:3, :] - - def set_rotation_matrix(self, rot_mat): - self.direction = rot_mat[2, :] - self.up = -rot_mat[1, :] - self.right = rot_mat[0, :] - - def set_intrinsic_matrix(self, int_mat): - self.focal_x = int_mat[0, 0] - self.focal_y = int_mat[1, 1] - self.skew = int_mat[0, 1] - self.principal_x = int_mat[0, 2] - self.principal_y = int_mat[1, 2] - - def set_projection_matrix(self, proj_mat): - res = cv2.decomposeProjectionMatrix(proj_mat) - int_mat, rot_mat, camera_center_homo = res[0], res[1], res[2] - camera_center = camera_center_homo[0:3] / camera_center_homo[3] - camera_center = camera_center.reshape(-1) - int_mat = int_mat / int_mat[2][2] - - self.set_intrinsic_matrix(int_mat) - self.set_rotation_matrix(rot_mat) - self.center = camera_center - - self.sanity_check() - - def get_gl_matrix(self): - z_near = self.near - z_far = self.far - rot_mat = self.get_rotation_matrix() - int_mat = self.get_intrinsic_matrix() - trans = self.get_translation_vector() - - extrinsic = np.eye(4) - extrinsic[:3, :3] = rot_mat - extrinsic[:3, 3] = trans - axis_adj = np.eye(4) - axis_adj[2, 2] = -1 - axis_adj[1, 1] = -1 - model_view = np.matmul(axis_adj, extrinsic) - - projective = np.zeros([4, 4]) - projective[:2, :2] = int_mat[:2, :2] - projective[:2, 2:3] = -int_mat[:2, 2:3] - projective[3, 2] = -1 - projective[2, 2] = (z_near + z_far) - projective[2, 3] = (z_near * z_far) - - if self.ortho_ratio is None: - ndc = ortho(0, self.width, 0, self.height, z_near, z_far) - perspective = np.matmul(ndc, projective) - else: - perspective = ortho(-self.width * self.ortho_ratio / 2, self.width * self.ortho_ratio / 2, - -self.height * self.ortho_ratio / 2, self.height * self.ortho_ratio / 2, - z_near, z_far) - - return perspective, model_view - - -def KRT_from_P(proj_mat, normalize_K=True): - res = cv2.decomposeProjectionMatrix(proj_mat) - K, Rot, camera_center_homog = res[0], res[1], res[2] - camera_center = camera_center_homog[0:3] / camera_center_homog[3] - trans = -Rot.dot(camera_center) - if normalize_K: - K = K / K[2][2] - return K, Rot, trans - - -def MVP_from_P(proj_mat, width, height, near=0.1, far=10000): - ''' - Convert OpenCV camera calibration matrix to OpenGL projection and model view matrix - :param proj_mat: OpenCV camera projeciton matrix - :param width: Image width - :param height: Image height - :param near: Z near value - :param far: Z far value - :return: OpenGL projection matrix and model view matrix - ''' - res = cv2.decomposeProjectionMatrix(proj_mat) - K, Rot, camera_center_homog = res[0], res[1], res[2] - camera_center = camera_center_homog[0:3] / camera_center_homog[3] - trans = -Rot.dot(camera_center) - K = K / K[2][2] - - extrinsic = 
np.eye(4) - extrinsic[:3, :3] = Rot - extrinsic[:3, 3:4] = trans - axis_adj = np.eye(4) - axis_adj[2, 2] = -1 - axis_adj[1, 1] = -1 - model_view = np.matmul(axis_adj, extrinsic) - - zFar = far - zNear = near - projective = np.zeros([4, 4]) - projective[:2, :2] = K[:2, :2] - projective[:2, 2:3] = -K[:2, 2:3] - projective[3, 2] = -1 - projective[2, 2] = (zNear + zFar) - projective[2, 3] = (zNear * zFar) - - ndc = ortho(0, width, 0, height, zNear, zFar) - - perspective = np.matmul(ndc, projective) - - return perspective, model_view diff --git a/spaces/richardzhangy26/yandian_flow_classification/gradio_app2.py b/spaces/richardzhangy26/yandian_flow_classification/gradio_app2.py deleted file mode 100644 index be4d71efcee6fe0820b3c7fbb848f9ef2348c147..0000000000000000000000000000000000000000 --- a/spaces/richardzhangy26/yandian_flow_classification/gradio_app2.py +++ /dev/null @@ -1,214 +0,0 @@ - -import gradio as gr -import os -import gradio as gr -import os -# from video_flow_inference import inference_video -from test import infer -from test import extract_frames -import torch -import torch.nn as nn -import argparse -from torch.utils.data import DataLoader -from create_dataset import UCF101Dataset -from lrcn_model import ConvLstm -from utils_action_recognition import save_setting_info, plot_label_distribution, \ - plot_images_with_predicted_labels, create_folder_dir_if_needed, load_all_dataset_to_RAM, split_data, \ - test_model -import os -import cv2 -import time -from sava_video import save_video -time_stamps = [] -# a = [0,1,2,3,4,5] -i = 0 -frames_dir = r'./data/test' -parser = argparse.ArgumentParser(description='UCF101 Action Recognition, LRCN architecture') -parser.add_argument('--epochs', default=1, type=int, help='number of total epochs') -parser.add_argument('--batch-size', default=1, type=int, help='mini-batch size (default:32)') -parser.add_argument('--lr', default=1e-4, type=float, help='initial learning rate (default:5e-4') -parser.add_argument('--num_workers', default=4, type=int, - help='initial num_workers, the number of processes that generate batches in parallel (default:4)') -# 将数据集直接加载到RAM,以加快计算速度。通常在类的数量较少时使用(默认值:False) -parser.add_argument('--load_all_data_to_RAM', default=False, type=bool, - help='load dataset directly to the RAM, for faster computation. 
usually use when the num of class ' - 'is small (default:False') -# Conv FC输出的dim维数(默认值:512) -parser.add_argument('--latent_dim', default=512, type=int, help='The dim of the Conv FC output (default:512)') -# 处于LSTM隐藏状态的特征数量(默认值:256) -parser.add_argument('--hidden_size', default=256, type=int, - help="The number of features in the LSTM hidden state (default:256)") -# LSTM重复层的数量(默认值:2) -parser.add_argument('--lstm_layers', default=2, type=int, help='Number of recurrent layers (default:2)') -# 将LSTM设置为双向(默认值:True) -parser.add_argument('--bidirectional', default=False, type=bool, help='set the LSTM to be bidirectional (default:True)') -# 打开一个新文件夹来保存运行信息,如果为false,信息将保存在项目目录中,如果为debug,信息将保存在debug文件夹中(默认值:True) -parser.add_argument('--open_new_folder', default='True', type=str, - help='open a new folder for saving the run info, if false the info would be saved in the project ' - 'dir, if debug the info would be saved in debug folder(default:True)') - -# 加载checkpoint并继续使用它进行训练 -parser.add_argument('--load_checkpoint', default=True, type=bool, - help='Loading a checkpoint and continue training with it') -# checkpoint路径 -parser.add_argument('--checkpoint_path', - default=r'./checkpoint/best_epoch_198.pth.tar', - type=str, help='Optional path to checkpoint model') -# checkpoint保存间隔 -parser.add_argument('--checkpoint_interval', default=5, type=int, help='Interval between saving model checkpoints') -# 验证测试的间隔(默认值:5) -parser.add_argument('--val_check_interval', default=5, type=int, help='Interval between running validation test') -# 保存结果的位置 os.getcwd() 方法用于返回当前工作目录 -parser.add_argument('--local_dir', default=os.getcwd(), help='The local directory of the project, setting where to ' - 'save the results of the run') - -parser.add_argument('--ucf_list_dir', default='./data', - type=str, help='path to find the UCF101 list, splitting the data to train and test') -# 类别数 -parser.add_argument('--number_of_classes', default=6, type=int, help='The number of classes we would train on') - - - -def play_video(_): - - time_stamps.append(time.time()) - -def pause_video(_): - time_stamps.append(time.time()) - print(f"pause time_stamps:{time_stamps}") -def record_time_start(_): - # 当按钮被按下时,记录当前时间 - # 如果记录了两次时间,计算并返回时间差 - if len(time_stamps) >= 2: - time_diff = time_stamps[-1] - time_stamps[-2] - zhen = int(time_diff*10) - return f"起始帧: {zhen} " - else: - return "请再按一次play" -def record_time_end(_): - # 当按钮被按下时,记录当前时间 - # 如果记录了两次时间,计算并返回时间差 - if len(time_stamps) >= 2: - time_diff1 = time_stamps[-1] - time_stamps[-2] - time_diff2 = time_stamps[1] - time_stamps[0] - zhen = int((time_diff1+time_diff2)*10) - return f"结束帧: {zhen} " - else: - return "请再按一次pause" - -def video_identity1(video): - outputvideo = save_video(video) - return video,outputvideo -def radio_content(level,vertical,axial,intensity,record_start,record_end): - return f"水平方向:{level}\n垂直方向:{vertical}\n轴向:{axial}\n眼震强度变化:{intensity}\n起始帧:{record_start}\n结束帧:{record_end}\n" -def clean_output(out): - time_stamps=[] - return "","","" -def record_content(records,level,vertical,axial,intensity,record_start,record_end): - global i - records["水平方向"][i] = level - records["垂直方向"][i] = vertical - records["轴向"][i] = axial - records["眼震强度变化"][i] = intensity - records["起始帧"][i] = record_start - records["结束帧"][i] = record_end - i = i + 1 - return records -def inference_video(video): - return video -# from label.test import infer -def video_identity(video): - out_video = inference_video(video) - video_name1 = out_video.split('/')[-1] - video_name2 = 
os.path.splitext(video_name1)[0] - video_frames_dir = os.path.join(frames_dir, video_name2) - extract_frames(out_video, video_frames_dir) - result,label = infer(parser) - return out_video,{'0012':result[0], '0221':result[1], '1012':result[2], '1102':result[3],'1122':result[4],'1221':result[5]},f"真实标签为{label[0]},预测标签为{label[1]}" - -with gr.Blocks(theme="freddyaboulton/dracula_revamped",title="BPPV智能辅助诊断系统") as demo: - with gr.Tab("智能辅助标注"): - video = gr.Video(label="眼震视频",source="upload",interactive=True,visible=True) - with gr.Group(): - with gr.Row(): - with gr.Column(): - level = gr.Radio(["左(0)","右(1)","无明显水平眼震(2)","其他特殊类型眼震(3)","干扰(4)"],label="水平方向") - with gr.Column(): - vertical = gr.Radio(["上(0)","下(1)","无明显垂直眼震(2)","其他特殊类型眼震(3)","干扰(4)"],label="垂直方向") - with gr.Row(): - with gr.Column(): - axial = gr.Radio(["顺时针(0)","逆时针(1)","无明显轴向眼震(2)","其他特殊类型眼震(3)","干扰(4)"],label="轴向") - with gr.Column(): - intensity = gr.Radio(["上(0)","下(1)","无明显垂直眼震(2)","其他特殊类型眼震(3)","干扰(4)"],label="眼震强度变化") - with gr.Group(): - with gr.Row(): - with gr.Column(): - record_start = gr.Textbox(lines=1,placeholder="起始帧") - record_start_button = gr.Button(value="开始标记") - with gr.Column(): - record_end = gr.Textbox(lines=1,placeholder="结束帧") - record_end_button = gr.Button(value="结束标记") - i = 0 - # video = gr.Video(label="眼震视频",source="upload",interactive=True,visible=True) - - record_start_button.click(fn=record_time_start,outputs=record_start) - record_end_button.click(fn=record_time_end,outputs=record_end) - output_video = gr.Video(label="眼震视频_输出",source="upload",interactive=True,visible=True) - - record_button = gr.Button(value="记录") - record_button.click(fn=video_identity1,inputs=video,outputs=[video,output_video]) - output_video.play(fn=play_video) - output_video.pause(fn=pause_video) - submit_btn = gr.Button(value="提交") - clean_btn = gr.Button(value="清空") - # submit_btn.click(fn=radio_content,inputs=[level,vertical,axial,intensity,record_start,record_end],outputs=out) - record = gr.Dataframe( - headers=["水平方向", "垂直方向", "轴向","眼震强度变化","起始帧","结束帧"], - datatype=["str", "str", "str","str","str","str"], - row_count=3, - col_count=(6, "fixed"), - ) - save_btn = gr.Button(value="保存为csv") - submit_btn.click(fn=record_content,inputs=[record,level,vertical,axial,intensity,record_start,record_end],outputs=record) - with gr.Tab("类型智能诊断"): - gr.Markdown( - """ - # 标签类别说明 - 0012 水平向左,垂直向上,逆时针,强度无明显变化 - - 0221 水平向左,无垂直眼震,无轴向眼震,由强变弱 - - 1012 水平向右,垂直向上,逆时针,强度无明显变化 - - 1102 水平向右,垂直向下,顺时针,强度无明显变化 - - 1122 水平向右,垂直向下,无轴向眼震,强度无明显变化 - - 1221 水平向右,无垂直眼震,无轴向眼震,由强变弱 - """) - with gr.Row(): - with gr.Column(scale=2): - input_video = gr.Video(label="眼震视频",source="upload",interactive=True,visible=True) - output_video = gr.Video(label="光流视频",source="upload",interactive=True,visible=True) - with gr.Column(scale=2): - button = gr.Button(value="开始计算") - label = gr.Label(label="根据光流计算各眼震类别概率值") - with gr.Column(): - text = gr.Textbox(value="输出眼震标签值和预测值") - gr.Examples( - examples=[ - os.path.join(os.path.abspath(''), - "video/example/0012_1438.mp4"), os.path.join(os.path.abspath(''), - "video/example/0012_1600.mp4"), os.path.join(os.path.abspath(''), - "video/example/0012_2944.mp4")], - inputs = input_video, - outputs=[output_video,label], - fn = video_identity, - cache_examples=False - ) - - button.click(video_identity,inputs=[input_video],outputs=[output_video,label,text]) - -if __name__ == "__main__": - gr.themes.Base(primary_hue="red") - demo.launch(share=False) \ No newline at end of file diff --git 
a/spaces/robin0307/MMOCR/configs/_base_/recog_pipelines/nrtr_pipeline.py b/spaces/robin0307/MMOCR/configs/_base_/recog_pipelines/nrtr_pipeline.py deleted file mode 100644 index 71a19804309aa6692970b5eef642eddf87770559..0000000000000000000000000000000000000000 --- a/spaces/robin0307/MMOCR/configs/_base_/recog_pipelines/nrtr_pipeline.py +++ /dev/null @@ -1,38 +0,0 @@ -img_norm_cfg = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='ResizeOCR', - height=32, - min_width=32, - max_width=160, - keep_aspect_ratio=True, - width_downsample_ratio=0.25), - dict(type='ToTensorOCR'), - dict(type='NormalizeOCR', **img_norm_cfg), - dict( - type='Collect', - keys=['img'], - meta_keys=[ - 'filename', 'ori_shape', 'resize_shape', 'text', 'valid_ratio' - ]), -] - -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='ResizeOCR', - height=32, - min_width=32, - max_width=160, - keep_aspect_ratio=True), - dict(type='ToTensorOCR'), - dict(type='NormalizeOCR', **img_norm_cfg), - dict( - type='Collect', - keys=['img'], - meta_keys=[ - 'filename', 'ori_shape', 'resize_shape', 'valid_ratio', - 'img_norm_cfg', 'ori_filename', 'img_shape' - ]) -] diff --git a/spaces/robin0307/MMOCR/configs/textrecog/nrtr/nrtr_r31_1by8_1by4_academic.py b/spaces/robin0307/MMOCR/configs/textrecog/nrtr/nrtr_r31_1by8_1by4_academic.py deleted file mode 100644 index 397122b55ea57df647a6bb5097973e0eebf4979d..0000000000000000000000000000000000000000 --- a/spaces/robin0307/MMOCR/configs/textrecog/nrtr/nrtr_r31_1by8_1by4_academic.py +++ /dev/null @@ -1,48 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/schedules/schedule_adam_step_6e.py', - '../../_base_/recog_pipelines/nrtr_pipeline.py', - '../../_base_/recog_datasets/ST_MJ_train.py', - '../../_base_/recog_datasets/academic_test.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline = {{_base_.test_pipeline}} - -label_convertor = dict( - type='AttnConvertor', dict_type='DICT90', with_unknown=True) - -model = dict( - type='NRTR', - backbone=dict( - type='ResNet31OCR', - layers=[1, 2, 5, 3], - channels=[32, 64, 128, 256, 512, 512], - stage4_pool_cfg=dict(kernel_size=(2, 1), stride=(2, 1)), - last_stage_pool=False), - encoder=dict(type='NRTREncoder'), - decoder=dict(type='NRTRDecoder'), - loss=dict(type='TFLoss'), - label_convertor=label_convertor, - max_seq_len=40) - -data = dict( - samples_per_gpu=64, - workers_per_gpu=4, - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline)) - -evaluation = dict(interval=1, metric='acc') diff --git a/spaces/robin0307/MMOCR/configs/textrecog/seg/seg_r31_1by16_fpnocr_toy_dataset.py b/spaces/robin0307/MMOCR/configs/textrecog/seg/seg_r31_1by16_fpnocr_toy_dataset.py deleted file mode 100644 index 893bebba496c04e9364bdcea3caef651e3d426d0..0000000000000000000000000000000000000000 --- a/spaces/robin0307/MMOCR/configs/textrecog/seg/seg_r31_1by16_fpnocr_toy_dataset.py +++ /dev/null @@ -1,39 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/recog_datasets/seg_toy_data.py', - '../../_base_/recog_models/seg.py', - '../../_base_/recog_pipelines/seg_pipeline.py', -] - -train_list = {{_base_.train_list}} -test_list = 
{{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline = {{_base_.test_pipeline}} - -# optimizer -optimizer = dict(type='Adam', lr=1e-4) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict(policy='step', step=[3, 4]) -total_epochs = 5 - -data = dict( - samples_per_gpu=8, - workers_per_gpu=1, - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline)) - -evaluation = dict(interval=1, metric='acc') - -find_unused_parameters = True diff --git a/spaces/rorallitri/biomedical-language-models/logs/Contract De Vanzare-cumparare Auto Cu Plata In Rate Warcraft3 Kinderspiele Pferdemarkt Informat WORK.md b/spaces/rorallitri/biomedical-language-models/logs/Contract De Vanzare-cumparare Auto Cu Plata In Rate Warcraft3 Kinderspiele Pferdemarkt Informat WORK.md deleted file mode 100644 index aaf4939e7473a8aaccec258bd32217c22b5b4960..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Contract De Vanzare-cumparare Auto Cu Plata In Rate Warcraft3 Kinderspiele Pferdemarkt Informat WORK.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Contract De Vanzare-cumparare Auto Cu Plata In Rate warcraft3 kinderspiele pferdemarkt informat


    DOWNLOADhttps://tinurll.com/2uzlqF



    - -Vehicle sale-purchase contract (payment in instalments), wheel, wheels, driver, truck, trucks, ... info, documents, business consulting ... The contract is drafted in detail, with alternative wording options for some articles. ... (movable asset) · Car rental contract (a natural person rents to a legal entity, with the right to ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/rorallitri/biomedical-language-models/logs/Glary Utilities Pro 5.79.0.100 Free Crack.md b/spaces/rorallitri/biomedical-language-models/logs/Glary Utilities Pro 5.79.0.100 Free Crack.md deleted file mode 100644 index 0bf669f18bcd71ced5232607d7c5ba80411c704e..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Glary Utilities Pro 5.79.0.100 Free Crack.md +++ /dev/null @@ -1,36 +0,0 @@ -
    -

    Glary Utilities Pro 5.79.0.100 Crack + Serial Key Free Download

    -

    Glary Utilities Pro 5.79.0.100 Crack is a powerful and versatile utility suite that offers a comprehensive set of tools to optimize, clean, repair, and protect your PC. It can boost your PC's performance, speed up startup, fix registry errors, remove junk files, erase traces of your online activity, and more.

    -

    Glary Utilities Pro 5.79.0.100 Crack


    DOWNLOAD ⇒⇒⇒ https://tinurll.com/2uzocy



    -

    Glary Utilities Pro 5.79.0.100 Serial Key unlocks the premium edition of Glary Utilities, with more features and benefits for your PC. With Glary Utilities Pro, you get automatic updates, scheduled scans, priority technical support, and access to more than 20 advanced tools that can help you improve your PC's security, privacy, and stability.

    -

    In this article, we will show you how to download and install Glary Utilities Pro 5.79.0.100 Crack on your PC for free. You will also learn how to activate the full version of Glary Utilities Pro with the serial key provided below.

    -

    How to Download and Install Glary Utilities Pro 5.79.0.100 Crack

    -

    Follow these simple steps to download and install Glary Utilities Pro 5.79.0.100 Crack on your PC:

    -

    -
      -
    1. Click on the download button below to get the setup file of Glary Utilities Pro 5.79.0.100 Crack.
    2. Run the setup file and follow the instructions to install Glary Utilities Pro on your PC.
    3. After the installation is complete, close the program if it is running.
    4. Copy the crack file from the downloaded folder and paste it into the installation directory of Glary Utilities Pro.
    5. Run Glary Utilities Pro and enter the serial key from the text file in the downloaded folder.
    6. Enjoy the full version of Glary Utilities Pro 5.79.0.100 Crack for free.
    -

    Features of Glary Utilities Pro 5.79.0.100 Crack

    -

    Glary Utilities Pro 5.79.0.100 Crack offers a comprehensive set of features and tools that can help you optimize, clean, repair, and protect your PC from various issues. Here are some of the main features of Glary Utilities Pro:

    -
      -
    • One-click maintenance: With just one click, you can scan and fix various issues on your PC, such as registry errors, disk errors, startup items, temporary files, spyware, and more.
    • System optimizer: You can optimize your system settings and improve your PC's performance and speed with Glary Utilities Pro.
    • Disk cleaner: You can free up disk space by removing junk files, duplicate files, empty folders, and other unnecessary data from your PC.
    • Registry cleaner: You can fix registry errors and defrag your registry to make it more stable and efficient.
    • Startup manager: You can manage your startup items and disable or delete unwanted programs that slow down your PC's boot time.
    • Memory optimizer: You can monitor and optimize your memory usage and free up RAM for faster performance.
    • File shredder: You can permanently delete sensitive files and folders from your PC and prevent them from being recovered by any data recovery software.
    • File encrypter: You can encrypt and decrypt your files and folders with a password to protect them from unauthorized access.
    • File splitter: You can split large files into smaller pieces and join them back together when needed.
    • Disk analysis: You can analyze your disk space usage and find out which files and folders take up the most space on your PC.
    • Duplicate finder: You can find and delete duplicate files that waste your disk space and cause confusion (a rough sketch of how such a scan can work appears after this list).
    • Empty folder finder: You can find and delete empty folders that clutter your PC.
    • Uninstall manager: You can uninstall programs that you no longer need or use from your PC and remove their leftover traces.
    • Browser assistant: d5da3c52bf
      -
      -
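
    To give a rough idea of how a duplicate finder like the one listed above can work under the hood, here is a minimal Python sketch that groups files by size and then confirms duplicates by hashing. This is an illustration only, not Glary Utilities' actual implementation, and the scanned folder is a hypothetical placeholder.

```python
# Illustrative sketch of a duplicate finder: group files by size first, then
# confirm duplicates by hashing. Not Glary Utilities' code; the folder is a placeholder.
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root: str):
    by_size = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            by_size[path.stat().st_size].append(path)

    by_hash = defaultdict(list)
    for candidates in by_size.values():
        if len(candidates) < 2:
            continue                      # a unique size cannot have duplicates
        for path in candidates:
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_hash[digest].append(path)

    return [group for group in by_hash.values() if len(group) > 1]

if __name__ == "__main__":
    for group in find_duplicates(r"C:\Users\Public\Downloads"):   # placeholder folder
        print("Duplicate set:")
        for path in group:
            print("  ", path)
```

    Real cleaners add refinements on top of this idea, such as hashing only the first few kilobytes before committing to a full hash, skipping system folders, and asking for confirmation before deleting anything.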
      \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Hanyu Jiaocheng Book 1 Part 2 Download.md b/spaces/rorallitri/biomedical-language-models/logs/Hanyu Jiaocheng Book 1 Part 2 Download.md deleted file mode 100644 index 8b9ef01ba45deff81b364bc9ef1bb1c8cfea3bc7..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Hanyu Jiaocheng Book 1 Part 2 Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Hanyu Jiaocheng Book 1 Part 2 Download


      Download File --->>> https://tinurll.com/2uzlQy



      -
    -If you are the author or own the copyright of this book, please report it to us by using this DMCA report ... Download & View Hanyu Jiaocheng 1-2 Eng as PDF for free. 1fdad05405
      -
      -
      -

      diff --git a/spaces/rorallitri/biomedical-language-models/logs/Hugo Old Tyme Religion 2011 Mp3320 Rock Blues and Bluegrass Fusion.md b/spaces/rorallitri/biomedical-language-models/logs/Hugo Old Tyme Religion 2011 Mp3320 Rock Blues and Bluegrass Fusion.md deleted file mode 100644 index 48a6cc5950c4c47e1a139b4f723981e77330df26..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Hugo Old Tyme Religion 2011 Mp3320 Rock Blues and Bluegrass Fusion.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Hugo Old Tyme Religion 2011 Mp3320


      Download »»» https://tinurll.com/2uzotJ



      -
      - aaccfb2cb3
      -
      -
      -

      diff --git a/spaces/scaratootie/scarar/Dockerfile b/spaces/scaratootie/scarar/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/scaratootie/scarar/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Download Cheat Lost Saga Hero Scroll 185.md b/spaces/scedlatioru/img-to-music/example/Download Cheat Lost Saga Hero Scroll 185.md deleted file mode 100644 index 0d3c3a34c4152f254557e51f394e710849085c93..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Download Cheat Lost Saga Hero Scroll 185.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Download Cheat Lost Saga Hero Scroll 185


      Download Filehttps://gohhs.com/2uEzZL



      -
      -2017.185 At the beginning of 2018, Dr. Colak was assigned ... transfers were delayed, we lost our local ... hero, Turkey.624” ... 27. https://slate.com/news-and-politics/2017/03/lessons-from-the-flynn-turkey-trump-saga.html ... Red Hack, a Turkish Marxist-Leninist hacker group, hacked Berat Albayrak's emails and. WikiLeaks ... 1fdad05405
      -
      -
      -

      diff --git a/spaces/scedlatioru/img-to-music/example/Lantul Amintirilor Tot Filmul LINK Download.md b/spaces/scedlatioru/img-to-music/example/Lantul Amintirilor Tot Filmul LINK Download.md deleted file mode 100644 index 58665d7c8b3a6b0f84d0640c22ecc50c2cb57dab..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Lantul Amintirilor Tot Filmul LINK Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Lantul amintirilor tot filmul download


      DOWNLOAD ⇒⇒⇒ https://gohhs.com/2uEzUX



      -
      -CRACK Invision Power Board 2.1.0 Final Version >>> DOWNLOAD. Knowledge Base ... Lantul Amintirilor Tot Filmul Download · Driver Portatil Samsung ... 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/scedlatioru/img-to-music/example/Machete Kills Again In Space 26 FREE.md b/spaces/scedlatioru/img-to-music/example/Machete Kills Again In Space 26 FREE.md deleted file mode 100644 index 3c5d4850dcede311bf9dca28a797ad03dc7c7770..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Machete Kills Again In Space 26 FREE.md +++ /dev/null @@ -1,6 +0,0 @@ -

      machete kills again in space 26


      Download File ……… https://gohhs.com/2uEyPo



    - -Books // ReadingnPERIOD 4 WRITING SENIOR PROMPTS. 5 item. 4fefd39f24
      -
      -
      -

      diff --git a/spaces/scedlatioru/img-to-music/example/Nitro Pro 7 Activation Code Serial 12l.md b/spaces/scedlatioru/img-to-music/example/Nitro Pro 7 Activation Code Serial 12l.md deleted file mode 100644 index abeb5591885818281e8668863a445e89271e352c..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Nitro Pro 7 Activation Code Serial 12l.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Nitro Pro 7 Activation Code Serial 12l


      DOWNLOAD ===> https://gohhs.com/2uEzIR



      -
      -So here we go! 1. First, you'll want to note your serial number and deactivate your license. To avoid any possible activation errors when you move Nitro Pro ... 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/seduerr/text_analytics/text_analytics/indices/syntactic_pattern_density_indices.py b/spaces/seduerr/text_analytics/text_analytics/indices/syntactic_pattern_density_indices.py deleted file mode 100644 index 8df78453c973cf0b7bab98ba6ff38bae800cdc21..0000000000000000000000000000000000000000 --- a/spaces/seduerr/text_analytics/text_analytics/indices/syntactic_pattern_density_indices.py +++ /dev/null @@ -1,126 +0,0 @@ -import multiprocessing - -import spacy - -from typing import Callable -from typing import List -from text_analytics.indices.descriptive_indices import DescriptiveIndices -from text_analytics.constants import ACCEPTED_LANGUAGES -from text_analytics.utils.utils import split_text_into_paragraphs - - -class SyntacticPatternDensityIndices: - ''' - This class will handle all operations to find the synthactic pattern density indices of a text according to Coh-Metrix. - ''' - - def __init__(self, nlp, language: str='en', descriptive_indices: DescriptiveIndices=None) -> None: - ''' - The constructor will initialize this object that calculates the synthactic pattern density indices for a specific language of those that are available. - - Parameters: - nlp: The spacy model that corresponds to a language. - language(str): The language that the texts to process will have. - descriptive_indices(DescriptiveIndices): The class that calculates the descriptive indices of a text in a certain language. - - Returns: - None. - ''' - if not language in ACCEPTED_LANGUAGES: - raise ValueError(f'Language {language} is not supported yet') - elif descriptive_indices is not None and descriptive_indices.language != language: - raise ValueError(f'The descriptive indices analyzer must be of the same language as the word information analyzer.') - - self.language = language - self._nlp = nlp - self._incidence = 1000 - - if descriptive_indices is None: # Assign the descriptive indices to an attribute - self._di = DescriptiveIndices(language=language, nlp=nlp) - else: - self._di = descriptive_indices - - def _get_syntactic_pattern_density(self, text: str, disable_pipeline: List, sp_counter_function: Callable=None, word_count: int=None, workers: int=-1) -> int: - ''' - This function obtains the incidence of a syntactic pattern that exist on a text per {self._incidence} words. - - Parameters: - text(str): The text to be analized. - disable_pipeline(List): The pipeline elements to be disabled. - sp_counter_function(Callable): The function that counts a syntactic pattern for a Spacy document. It returns an integer. - word_count(int): The amount of words in the text. - workers(int): Amount of threads that will complete this operation. If it's -1 then all cpu cores will be used. - - Returns: - int: The incidence of a syntactic pattern per {self._incidence} words. 
- ''' - if len(text) == 0: - raise ValueError('The word is empty.') - elif workers == 0 or workers < -1: - raise ValueError('Workers must be -1 or any positive number greater than 0') - else: - paragraphs = split_text_into_paragraphs(text) # Find all paragraphs - threads = 1 #multiprocessing.cpu_count() if workers == -1 else workers - wc = word_count if word_count is not None else self._di.get_word_count_from_text(text) - self._nlp.get_pipe('feature counter').counter_function = sp_counter_function - density = sum(doc._.feature_count - for doc in self._nlp.pipe(paragraphs, batch_size=threads, disable=disable_pipeline, n_process=threads)) # Calculate with multiprocessing - - return (density / wc) * self._incidence - - def get_noun_phrase_density(self, text: str, word_count: int=None, workers: int=-1) -> int: - ''' - This function obtains the incidence of noun phrases that exist on a text per {self._incidence} words. - - Parameters: - text(str): The text to be analized. - word_count(int): The amount of words in the text. - workers(int): Amount of threads that will complete this operation. If it's -1 then all cpu cores will be used. - - Returns: - int: The incidence of noun phrases per {self._incidence} words. - ''' - count_noun_phrases = lambda doc: len(doc._.noun_phrases) - disable_pipeline = [pipe - for pipe in self._nlp.pipe_names - if pipe not in ['noun phrase tagger', 'tagger', 'parser', 'feature counter']] - - return self._get_syntactic_pattern_density(text, disable_pipeline=disable_pipeline, sp_counter_function=count_noun_phrases, workers=workers) - - def get_verb_phrase_density(self, text: str, word_count: int=None, workers: int=-1) -> int: - ''' - This function obtains the incidence of verb phrases that exist on a text per {self._incidence} words. - - Parameters: - text(str): The text to be analized. - word_count(int): The amount of words in the text. - workers(int): Amount of threads that will complete this operation. If it's -1 then all cpu cores will be used. - - Returns: - int: The incidence of verb phrases per {self._incidence} words. - ''' - count_verb_phrases = lambda doc: len(doc._.verb_phrases) - disable_pipeline = [pipe - for pipe in self._nlp.pipe_names - if pipe not in ['verb phrase tagger', 'tagger', 'feature counter']] - - return self._get_syntactic_pattern_density(text, disable_pipeline=disable_pipeline, sp_counter_function=count_verb_phrases, workers=workers) - - def get_negation_expressions_density(self, text: str, word_count: int=None, workers: int=-1) -> int: - ''' - This function obtains the incidence of negation expressions that exist on a text per {self._incidence} words. - - Parameters: - text(str): The text to be analized. - word_count(int): The amount of words in the text. - workers(int): Amount of threads that will complete this operation. If it's -1 then all cpu cores will be used. - - Returns: - int: The incidence of negation expressions per {self._incidence} words. 
- ''' - count_negation_expressions = lambda doc: len(doc._.negation_expressions) - disable_pipeline = [pipe - for pipe in self._nlp.pipe_names - if pipe not in ['negative expression tagger', 'tagger', 'feature counter']] - - return self._get_syntactic_pattern_density(text, disable_pipeline=disable_pipeline, sp_counter_function=count_negation_expressions, workers=workers) diff --git a/spaces/segestic/ArticlePara/apps/summarizeApp.py b/spaces/segestic/ArticlePara/apps/summarizeApp.py deleted file mode 100644 index 31c577651959dae944db0ae71d79e4a289f978bd..0000000000000000000000000000000000000000 --- a/spaces/segestic/ArticlePara/apps/summarizeApp.py +++ /dev/null @@ -1,11 +0,0 @@ -import streamlit as st -from summarizer import run_summarization - -def app(): - st.title('Summarizer') - st.write('Please provide the text to be summarized') - user_input = st.text_area('Enter text','') - if st.button('Summarize'): - output1 = run_summarization(str(user_input))#,minLength,maxLength) - st.write("Text Summary: ") - st.write(output1) diff --git a/spaces/segments-tobias/conex/espnet/nets/ctc_prefix_score.py b/spaces/segments-tobias/conex/espnet/nets/ctc_prefix_score.py deleted file mode 100644 index ede03285164afa7f40b7de35517c051006ddc49a..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/nets/ctc_prefix_score.py +++ /dev/null @@ -1,359 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright 2018 Mitsubishi Electric Research Labs (Takaaki Hori) -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -import torch - -import numpy as np -import six - - -class CTCPrefixScoreTH(object): - """Batch processing of CTCPrefixScore - - which is based on Algorithm 2 in WATANABE et al. - "HYBRID CTC/ATTENTION ARCHITECTURE FOR END-TO-END SPEECH RECOGNITION," - but extended to efficiently compute the label probablities for multiple - hypotheses simultaneously - See also Seki et al. "Vectorized Beam Search for CTC-Attention-Based - Speech Recognition," In INTERSPEECH (pp. 3825-3829), 2019. - """ - - def __init__(self, x, xlens, blank, eos, margin=0): - """Construct CTC prefix scorer - - :param torch.Tensor x: input label posterior sequences (B, T, O) - :param torch.Tensor xlens: input lengths (B,) - :param int blank: blank label id - :param int eos: end-of-sequence id - :param int margin: margin parameter for windowing (0 means no windowing) - """ - # In the comment lines, - # we assume T: input_length, B: batch size, W: beam width, O: output dim. 
- self.logzero = -10000000000.0 - self.blank = blank - self.eos = eos - self.batch = x.size(0) - self.input_length = x.size(1) - self.odim = x.size(2) - self.dtype = x.dtype - self.device = ( - torch.device("cuda:%d" % x.get_device()) - if x.is_cuda - else torch.device("cpu") - ) - # Pad the rest of posteriors in the batch - # TODO(takaaki-hori): need a better way without for-loops - for i, l in enumerate(xlens): - if l < self.input_length: - x[i, l:, :] = self.logzero - x[i, l:, blank] = 0 - # Reshape input x - xn = x.transpose(0, 1) # (B, T, O) -> (T, B, O) - xb = xn[:, :, self.blank].unsqueeze(2).expand(-1, -1, self.odim) - self.x = torch.stack([xn, xb]) # (2, T, B, O) - self.end_frames = torch.as_tensor(xlens) - 1 - - # Setup CTC windowing - self.margin = margin - if margin > 0: - self.frame_ids = torch.arange( - self.input_length, dtype=self.dtype, device=self.device - ) - # Base indices for index conversion - self.idx_bh = None - self.idx_b = torch.arange(self.batch, device=self.device) - self.idx_bo = (self.idx_b * self.odim).unsqueeze(1) - - def __call__(self, y, state, scoring_ids=None, att_w=None): - """Compute CTC prefix scores for next labels - - :param list y: prefix label sequences - :param tuple state: previous CTC state - :param torch.Tensor pre_scores: scores for pre-selection of hypotheses (BW, O) - :param torch.Tensor att_w: attention weights to decide CTC window - :return new_state, ctc_local_scores (BW, O) - """ - output_length = len(y[0]) - 1 # ignore sos - last_ids = [yi[-1] for yi in y] # last output label ids - n_bh = len(last_ids) # batch * hyps - n_hyps = n_bh // self.batch # assuming each utterance has the same # of hyps - self.scoring_num = scoring_ids.size(-1) if scoring_ids is not None else 0 - # prepare state info - if state is None: - r_prev = torch.full( - (self.input_length, 2, self.batch, n_hyps), - self.logzero, - dtype=self.dtype, - device=self.device, - ) - r_prev[:, 1] = torch.cumsum(self.x[0, :, :, self.blank], 0).unsqueeze(2) - r_prev = r_prev.view(-1, 2, n_bh) - s_prev = 0.0 - f_min_prev = 0 - f_max_prev = 1 - else: - r_prev, s_prev, f_min_prev, f_max_prev = state - - # select input dimensions for scoring - if self.scoring_num > 0: - scoring_idmap = torch.full( - (n_bh, self.odim), -1, dtype=torch.long, device=self.device - ) - snum = self.scoring_num - if self.idx_bh is None or n_bh > len(self.idx_bh): - self.idx_bh = torch.arange(n_bh, device=self.device).view(-1, 1) - scoring_idmap[self.idx_bh[:n_bh], scoring_ids] = torch.arange( - snum, device=self.device - ) - scoring_idx = ( - scoring_ids + self.idx_bo.repeat(1, n_hyps).view(-1, 1) - ).view(-1) - x_ = torch.index_select( - self.x.view(2, -1, self.batch * self.odim), 2, scoring_idx - ).view(2, -1, n_bh, snum) - else: - scoring_ids = None - scoring_idmap = None - snum = self.odim - x_ = self.x.unsqueeze(3).repeat(1, 1, 1, n_hyps, 1).view(2, -1, n_bh, snum) - - # new CTC forward probs are prepared as a (T x 2 x BW x S) tensor - # that corresponds to r_t^n(h) and r_t^b(h) in a batch. 
- r = torch.full( - (self.input_length, 2, n_bh, snum), - self.logzero, - dtype=self.dtype, - device=self.device, - ) - if output_length == 0: - r[0, 0] = x_[0, 0] - - r_sum = torch.logsumexp(r_prev, 1) - log_phi = r_sum.unsqueeze(2).repeat(1, 1, snum) - if scoring_ids is not None: - for idx in range(n_bh): - pos = scoring_idmap[idx, last_ids[idx]] - if pos >= 0: - log_phi[:, idx, pos] = r_prev[:, 1, idx] - else: - for idx in range(n_bh): - log_phi[:, idx, last_ids[idx]] = r_prev[:, 1, idx] - - # decide start and end frames based on attention weights - if att_w is not None and self.margin > 0: - f_arg = torch.matmul(att_w, self.frame_ids) - f_min = max(int(f_arg.min().cpu()), f_min_prev) - f_max = max(int(f_arg.max().cpu()), f_max_prev) - start = min(f_max_prev, max(f_min - self.margin, output_length, 1)) - end = min(f_max + self.margin, self.input_length) - else: - f_min = f_max = 0 - start = max(output_length, 1) - end = self.input_length - - # compute forward probabilities log(r_t^n(h)) and log(r_t^b(h)) - for t in range(start, end): - rp = r[t - 1] - rr = torch.stack([rp[0], log_phi[t - 1], rp[0], rp[1]]).view( - 2, 2, n_bh, snum - ) - r[t] = torch.logsumexp(rr, 1) + x_[:, t] - - # compute log prefix probabilites log(psi) - log_phi_x = torch.cat((log_phi[0].unsqueeze(0), log_phi[:-1]), dim=0) + x_[0] - if scoring_ids is not None: - log_psi = torch.full( - (n_bh, self.odim), self.logzero, dtype=self.dtype, device=self.device - ) - log_psi_ = torch.logsumexp( - torch.cat((log_phi_x[start:end], r[start - 1, 0].unsqueeze(0)), dim=0), - dim=0, - ) - for si in range(n_bh): - log_psi[si, scoring_ids[si]] = log_psi_[si] - else: - log_psi = torch.logsumexp( - torch.cat((log_phi_x[start:end], r[start - 1, 0].unsqueeze(0)), dim=0), - dim=0, - ) - - for si in range(n_bh): - log_psi[si, self.eos] = r_sum[self.end_frames[si // n_hyps], si] - - # exclude blank probs - log_psi[:, self.blank] = self.logzero - - return (log_psi - s_prev), (r, log_psi, f_min, f_max, scoring_idmap) - - def index_select_state(self, state, best_ids): - """Select CTC states according to best ids - - :param state : CTC state - :param best_ids : index numbers selected by beam pruning (B, W) - :return selected_state - """ - r, s, f_min, f_max, scoring_idmap = state - # convert ids to BHO space - n_bh = len(s) - n_hyps = n_bh // self.batch - vidx = (best_ids + (self.idx_b * (n_hyps * self.odim)).view(-1, 1)).view(-1) - # select hypothesis scores - s_new = torch.index_select(s.view(-1), 0, vidx) - s_new = s_new.view(-1, 1).repeat(1, self.odim).view(n_bh, self.odim) - # convert ids to BHS space (S: scoring_num) - if scoring_idmap is not None: - snum = self.scoring_num - hyp_idx = (best_ids // self.odim + (self.idx_b * n_hyps).view(-1, 1)).view( - -1 - ) - label_ids = torch.fmod(best_ids, self.odim).view(-1) - score_idx = scoring_idmap[hyp_idx, label_ids] - score_idx[score_idx == -1] = 0 - vidx = score_idx + hyp_idx * snum - else: - snum = self.odim - # select forward probabilities - r_new = torch.index_select(r.view(-1, 2, n_bh * snum), 2, vidx).view( - -1, 2, n_bh - ) - return r_new, s_new, f_min, f_max - - def extend_prob(self, x): - """Extend CTC prob. 
- - :param torch.Tensor x: input label posterior sequences (B, T, O) - """ - - if self.x.shape[1] < x.shape[1]: # self.x (2,T,B,O); x (B,T,O) - # Pad the rest of posteriors in the batch - # TODO(takaaki-hori): need a better way without for-loops - xlens = [x.size(1)] - for i, l in enumerate(xlens): - if l < self.input_length: - x[i, l:, :] = self.logzero - x[i, l:, self.blank] = 0 - tmp_x = self.x - xn = x.transpose(0, 1) # (B, T, O) -> (T, B, O) - xb = xn[:, :, self.blank].unsqueeze(2).expand(-1, -1, self.odim) - self.x = torch.stack([xn, xb]) # (2, T, B, O) - self.x[:, : tmp_x.shape[1], :, :] = tmp_x - self.input_length = x.size(1) - self.end_frames = torch.as_tensor(xlens) - 1 - - def extend_state(self, state): - """Compute CTC prefix state. - - - :param state : CTC state - :return ctc_state - """ - - if state is None: - # nothing to do - return state - else: - r_prev, s_prev, f_min_prev, f_max_prev = state - - r_prev_new = torch.full( - (self.input_length, 2), - self.logzero, - dtype=self.dtype, - device=self.device, - ) - start = max(r_prev.shape[0], 1) - r_prev_new[0:start] = r_prev - for t in six.moves.range(start, self.input_length): - r_prev_new[t, 1] = r_prev_new[t - 1, 1] + self.x[0, t, :, self.blank] - - return (r_prev_new, s_prev, f_min_prev, f_max_prev) - - -class CTCPrefixScore(object): - """Compute CTC label sequence scores - - which is based on Algorithm 2 in WATANABE et al. - "HYBRID CTC/ATTENTION ARCHITECTURE FOR END-TO-END SPEECH RECOGNITION," - but extended to efficiently compute the probablities of multiple labels - simultaneously - """ - - def __init__(self, x, blank, eos, xp): - self.xp = xp - self.logzero = -10000000000.0 - self.blank = blank - self.eos = eos - self.input_length = len(x) - self.x = x - - def initial_state(self): - """Obtain an initial CTC state - - :return: CTC state - """ - # initial CTC state is made of a frame x 2 tensor that corresponds to - # r_t^n() and r_t^b(), where 0 and 1 of axis=1 represent - # superscripts n and b (non-blank and blank), respectively. - r = self.xp.full((self.input_length, 2), self.logzero, dtype=np.float32) - r[0, 1] = self.x[0, self.blank] - for i in six.moves.range(1, self.input_length): - r[i, 1] = r[i - 1, 1] + self.x[i, self.blank] - return r - - def __call__(self, y, cs, r_prev): - """Compute CTC prefix scores for next labels - - :param y : prefix label sequence - :param cs : array of next labels - :param r_prev: previous CTC state - :return ctc_scores, ctc_states - """ - # initialize CTC states - output_length = len(y) - 1 # ignore sos - # new CTC states are prepared as a frame x (n or b) x n_labels tensor - # that corresponds to r_t^n(h) and r_t^b(h). 
- r = self.xp.ndarray((self.input_length, 2, len(cs)), dtype=np.float32) - xs = self.x[:, cs] - if output_length == 0: - r[0, 0] = xs[0] - r[0, 1] = self.logzero - else: - r[output_length - 1] = self.logzero - - # prepare forward probabilities for the last label - r_sum = self.xp.logaddexp( - r_prev[:, 0], r_prev[:, 1] - ) # log(r_t^n(g) + r_t^b(g)) - last = y[-1] - if output_length > 0 and last in cs: - log_phi = self.xp.ndarray((self.input_length, len(cs)), dtype=np.float32) - for i in six.moves.range(len(cs)): - log_phi[:, i] = r_sum if cs[i] != last else r_prev[:, 1] - else: - log_phi = r_sum - - # compute forward probabilities log(r_t^n(h)), log(r_t^b(h)), - # and log prefix probabilites log(psi) - start = max(output_length, 1) - log_psi = r[start - 1, 0] - for t in six.moves.range(start, self.input_length): - r[t, 0] = self.xp.logaddexp(r[t - 1, 0], log_phi[t - 1]) + xs[t] - r[t, 1] = ( - self.xp.logaddexp(r[t - 1, 0], r[t - 1, 1]) + self.x[t, self.blank] - ) - log_psi = self.xp.logaddexp(log_psi, log_phi[t - 1] + xs[t]) - - # get P(...eos|X) that ends with the prefix itself - eos_pos = self.xp.where(cs == self.eos)[0] - if len(eos_pos) > 0: - log_psi[eos_pos] = r_sum[-1] # log(r_T^n(g) + r_T^b(g)) - - # exclude blank probs - blank_pos = self.xp.where(cs == self.blank)[0] - if len(blank_pos) > 0: - log_psi[blank_pos] = self.logzero - - # return the log prefix probability and CTC states, where the label axis - # of the CTC states is moved to the first axis to slice it easily - return log_psi, self.xp.rollaxis(r, 2) diff --git a/spaces/segments-tobias/conex/espnet2/fileio/rand_gen_dataset.py b/spaces/segments-tobias/conex/espnet2/fileio/rand_gen_dataset.py deleted file mode 100644 index bb92336a6feecc5733ad90dc530706c3a79dd251..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/fileio/rand_gen_dataset.py +++ /dev/null @@ -1,86 +0,0 @@ -import collections -from pathlib import Path -from typing import Union - -import numpy as np -from typeguard import check_argument_types - -from espnet2.fileio.read_text import load_num_sequence_text - - -class FloatRandomGenerateDataset(collections.abc.Mapping): - """Generate float array from shape.txt. 
- - Examples: - shape.txt - uttA 123,83 - uttB 34,83 - >>> dataset = FloatRandomGenerateDataset("shape.txt") - >>> array = dataset["uttA"] - >>> assert array.shape == (123, 83) - >>> array = dataset["uttB"] - >>> assert array.shape == (34, 83) - - """ - - def __init__( - self, - shape_file: Union[Path, str], - dtype: Union[str, np.dtype] = "float32", - loader_type: str = "csv_int", - ): - assert check_argument_types() - shape_file = Path(shape_file) - self.utt2shape = load_num_sequence_text(shape_file, loader_type) - self.dtype = np.dtype(dtype) - - def __iter__(self): - return iter(self.utt2shape) - - def __len__(self): - return len(self.utt2shape) - - def __getitem__(self, item) -> np.ndarray: - shape = self.utt2shape[item] - return np.random.randn(*shape).astype(self.dtype) - - -class IntRandomGenerateDataset(collections.abc.Mapping): - """Generate float array from shape.txt - - Examples: - shape.txt - uttA 123,83 - uttB 34,83 - >>> dataset = IntRandomGenerateDataset("shape.txt", low=0, high=10) - >>> array = dataset["uttA"] - >>> assert array.shape == (123, 83) - >>> array = dataset["uttB"] - >>> assert array.shape == (34, 83) - - """ - - def __init__( - self, - shape_file: Union[Path, str], - low: int, - high: int = None, - dtype: Union[str, np.dtype] = "int64", - loader_type: str = "csv_int", - ): - assert check_argument_types() - shape_file = Path(shape_file) - self.utt2shape = load_num_sequence_text(shape_file, loader_type) - self.dtype = np.dtype(dtype) - self.low = low - self.high = high - - def __iter__(self): - return iter(self.utt2shape) - - def __len__(self): - return len(self.utt2shape) - - def __getitem__(self, item) -> np.ndarray: - shape = self.utt2shape[item] - return np.random.randint(self.low, self.high, size=shape, dtype=self.dtype) diff --git a/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/transformer.py b/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/transformer.py deleted file mode 100644 index fcb8742dbdde6e80fd38b11d064211f6935aae76..0000000000000000000000000000000000000000 --- a/spaces/segments/panoptic-segment-anything/GroundingDINO/groundingdino/models/GroundingDINO/transformer.py +++ /dev/null @@ -1,959 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# DINO -# Copyright (c) 2022 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Conditional DETR Transformer class. -# Copyright (c) 2021 Microsoft. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Modified from DETR (https://github.com/facebookresearch/detr) -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
-# ------------------------------------------------------------------------ - -from typing import Optional - -import torch -import torch.utils.checkpoint as checkpoint -from torch import Tensor, nn - -from groundingdino.util.misc import inverse_sigmoid - -from .fuse_modules import BiAttentionBlock -from .ms_deform_attn import MultiScaleDeformableAttention as MSDeformAttn -from .transformer_vanilla import TransformerEncoderLayer -from .utils import ( - MLP, - _get_activation_fn, - _get_clones, - gen_encoder_output_proposals, - gen_sineembed_for_position, - get_sine_pos_embed, -) - - -class Transformer(nn.Module): - def __init__( - self, - d_model=256, - nhead=8, - num_queries=300, - num_encoder_layers=6, - num_unicoder_layers=0, - num_decoder_layers=6, - dim_feedforward=2048, - dropout=0.0, - activation="relu", - normalize_before=False, - return_intermediate_dec=False, - query_dim=4, - num_patterns=0, - # for deformable encoder - num_feature_levels=1, - enc_n_points=4, - dec_n_points=4, - # init query - learnable_tgt_init=False, - # two stage - two_stage_type="no", # ['no', 'standard', 'early', 'combine', 'enceachlayer', 'enclayer1'] - embed_init_tgt=False, - # for text - use_text_enhancer=False, - use_fusion_layer=False, - use_checkpoint=False, - use_transformer_ckpt=False, - use_text_cross_attention=False, - text_dropout=0.1, - fusion_dropout=0.1, - fusion_droppath=0.0, - ): - super().__init__() - self.num_feature_levels = num_feature_levels - self.num_encoder_layers = num_encoder_layers - self.num_unicoder_layers = num_unicoder_layers - self.num_decoder_layers = num_decoder_layers - self.num_queries = num_queries - assert query_dim == 4 - - # choose encoder layer type - encoder_layer = DeformableTransformerEncoderLayer( - d_model, dim_feedforward, dropout, activation, num_feature_levels, nhead, enc_n_points - ) - - if use_text_enhancer: - text_enhance_layer = TransformerEncoderLayer( - d_model=d_model, - nhead=nhead // 2, - dim_feedforward=dim_feedforward // 2, - dropout=text_dropout, - ) - else: - text_enhance_layer = None - - if use_fusion_layer: - feature_fusion_layer = BiAttentionBlock( - v_dim=d_model, - l_dim=d_model, - embed_dim=dim_feedforward // 2, - num_heads=nhead // 2, - dropout=fusion_dropout, - drop_path=fusion_droppath, - ) - else: - feature_fusion_layer = None - - encoder_norm = nn.LayerNorm(d_model) if normalize_before else None - assert encoder_norm is None - self.encoder = TransformerEncoder( - encoder_layer, - num_encoder_layers, - d_model=d_model, - num_queries=num_queries, - text_enhance_layer=text_enhance_layer, - feature_fusion_layer=feature_fusion_layer, - use_checkpoint=use_checkpoint, - use_transformer_ckpt=use_transformer_ckpt, - ) - - # choose decoder layer type - decoder_layer = DeformableTransformerDecoderLayer( - d_model, - dim_feedforward, - dropout, - activation, - num_feature_levels, - nhead, - dec_n_points, - use_text_cross_attention=use_text_cross_attention, - ) - - decoder_norm = nn.LayerNorm(d_model) - self.decoder = TransformerDecoder( - decoder_layer, - num_decoder_layers, - decoder_norm, - return_intermediate=return_intermediate_dec, - d_model=d_model, - query_dim=query_dim, - num_feature_levels=num_feature_levels, - ) - - self.d_model = d_model - self.nhead = nhead - self.dec_layers = num_decoder_layers - self.num_queries = num_queries # useful for single stage model only - self.num_patterns = num_patterns - if not isinstance(num_patterns, int): - Warning("num_patterns should be int but {}".format(type(num_patterns))) - self.num_patterns = 0 - - 
if num_feature_levels > 1: - if self.num_encoder_layers > 0: - self.level_embed = nn.Parameter(torch.Tensor(num_feature_levels, d_model)) - else: - self.level_embed = None - - self.learnable_tgt_init = learnable_tgt_init - assert learnable_tgt_init, "why not learnable_tgt_init" - self.embed_init_tgt = embed_init_tgt - if (two_stage_type != "no" and embed_init_tgt) or (two_stage_type == "no"): - self.tgt_embed = nn.Embedding(self.num_queries, d_model) - nn.init.normal_(self.tgt_embed.weight.data) - else: - self.tgt_embed = None - - # for two stage - self.two_stage_type = two_stage_type - assert two_stage_type in ["no", "standard"], "unknown param {} of two_stage_type".format( - two_stage_type - ) - if two_stage_type == "standard": - # anchor selection at the output of encoder - self.enc_output = nn.Linear(d_model, d_model) - self.enc_output_norm = nn.LayerNorm(d_model) - self.two_stage_wh_embedding = None - - if two_stage_type == "no": - self.init_ref_points(num_queries) # init self.refpoint_embed - - self.enc_out_class_embed = None - self.enc_out_bbox_embed = None - - self._reset_parameters() - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - for m in self.modules(): - if isinstance(m, MSDeformAttn): - m._reset_parameters() - if self.num_feature_levels > 1 and self.level_embed is not None: - nn.init.normal_(self.level_embed) - - def get_valid_ratio(self, mask): - _, H, W = mask.shape - valid_H = torch.sum(~mask[:, :, 0], 1) - valid_W = torch.sum(~mask[:, 0, :], 1) - valid_ratio_h = valid_H.float() / H - valid_ratio_w = valid_W.float() / W - valid_ratio = torch.stack([valid_ratio_w, valid_ratio_h], -1) - return valid_ratio - - def init_ref_points(self, use_num_queries): - self.refpoint_embed = nn.Embedding(use_num_queries, 4) - - def forward(self, srcs, masks, refpoint_embed, pos_embeds, tgt, attn_mask=None, text_dict=None): - """ - Input: - - srcs: List of multi features [bs, ci, hi, wi] - - masks: List of multi masks [bs, hi, wi] - - refpoint_embed: [bs, num_dn, 4]. None in infer - - pos_embeds: List of multi pos embeds [bs, ci, hi, wi] - - tgt: [bs, num_dn, d_model]. 
None in infer - - """ - # prepare input for encoder - src_flatten = [] - mask_flatten = [] - lvl_pos_embed_flatten = [] - spatial_shapes = [] - for lvl, (src, mask, pos_embed) in enumerate(zip(srcs, masks, pos_embeds)): - bs, c, h, w = src.shape - spatial_shape = (h, w) - spatial_shapes.append(spatial_shape) - - src = src.flatten(2).transpose(1, 2) # bs, hw, c - mask = mask.flatten(1) # bs, hw - pos_embed = pos_embed.flatten(2).transpose(1, 2) # bs, hw, c - if self.num_feature_levels > 1 and self.level_embed is not None: - lvl_pos_embed = pos_embed + self.level_embed[lvl].view(1, 1, -1) - else: - lvl_pos_embed = pos_embed - lvl_pos_embed_flatten.append(lvl_pos_embed) - src_flatten.append(src) - mask_flatten.append(mask) - src_flatten = torch.cat(src_flatten, 1) # bs, \sum{hxw}, c - mask_flatten = torch.cat(mask_flatten, 1) # bs, \sum{hxw} - lvl_pos_embed_flatten = torch.cat(lvl_pos_embed_flatten, 1) # bs, \sum{hxw}, c - spatial_shapes = torch.as_tensor( - spatial_shapes, dtype=torch.long, device=src_flatten.device - ) - level_start_index = torch.cat( - (spatial_shapes.new_zeros((1,)), spatial_shapes.prod(1).cumsum(0)[:-1]) - ) - valid_ratios = torch.stack([self.get_valid_ratio(m) for m in masks], 1) - - # two stage - enc_topk_proposals = enc_refpoint_embed = None - - ######################################################### - # Begin Encoder - ######################################################### - memory, memory_text = self.encoder( - src_flatten, - pos=lvl_pos_embed_flatten, - level_start_index=level_start_index, - spatial_shapes=spatial_shapes, - valid_ratios=valid_ratios, - key_padding_mask=mask_flatten, - memory_text=text_dict["encoded_text"], - text_attention_mask=~text_dict["text_token_mask"], - # we ~ the mask . False means use the token; True means pad the token - position_ids=text_dict["position_ids"], - text_self_attention_masks=text_dict["text_self_attention_masks"], - ) - ######################################################### - # End Encoder - # - memory: bs, \sum{hw}, c - # - mask_flatten: bs, \sum{hw} - # - lvl_pos_embed_flatten: bs, \sum{hw}, c - # - enc_intermediate_output: None or (nenc+1, bs, nq, c) or (nenc, bs, nq, c) - # - enc_intermediate_refpoints: None or (nenc+1, bs, nq, c) or (nenc, bs, nq, c) - ######################################################### - text_dict["encoded_text"] = memory_text - # if os.environ.get("SHILONG_AMP_INFNAN_DEBUG") == '1': - # if memory.isnan().any() | memory.isinf().any(): - # import ipdb; ipdb.set_trace() - - if self.two_stage_type == "standard": - output_memory, output_proposals = gen_encoder_output_proposals( - memory, mask_flatten, spatial_shapes - ) - output_memory = self.enc_output_norm(self.enc_output(output_memory)) - - if text_dict is not None: - enc_outputs_class_unselected = self.enc_out_class_embed(output_memory, text_dict) - else: - enc_outputs_class_unselected = self.enc_out_class_embed(output_memory) - - topk_logits = enc_outputs_class_unselected.max(-1)[0] - enc_outputs_coord_unselected = ( - self.enc_out_bbox_embed(output_memory) + output_proposals - ) # (bs, \sum{hw}, 4) unsigmoid - topk = self.num_queries - - topk_proposals = torch.topk(topk_logits, topk, dim=1)[1] # bs, nq - - # gather boxes - refpoint_embed_undetach = torch.gather( - enc_outputs_coord_unselected, 1, topk_proposals.unsqueeze(-1).repeat(1, 1, 4) - ) # unsigmoid - refpoint_embed_ = refpoint_embed_undetach.detach() - init_box_proposal = torch.gather( - output_proposals, 1, topk_proposals.unsqueeze(-1).repeat(1, 1, 4) - ).sigmoid() # sigmoid - - 
# gather tgt - tgt_undetach = torch.gather( - output_memory, 1, topk_proposals.unsqueeze(-1).repeat(1, 1, self.d_model) - ) - if self.embed_init_tgt: - tgt_ = ( - self.tgt_embed.weight[:, None, :].repeat(1, bs, 1).transpose(0, 1) - ) # nq, bs, d_model - else: - tgt_ = tgt_undetach.detach() - - if refpoint_embed is not None: - refpoint_embed = torch.cat([refpoint_embed, refpoint_embed_], dim=1) - tgt = torch.cat([tgt, tgt_], dim=1) - else: - refpoint_embed, tgt = refpoint_embed_, tgt_ - - elif self.two_stage_type == "no": - tgt_ = ( - self.tgt_embed.weight[:, None, :].repeat(1, bs, 1).transpose(0, 1) - ) # nq, bs, d_model - refpoint_embed_ = ( - self.refpoint_embed.weight[:, None, :].repeat(1, bs, 1).transpose(0, 1) - ) # nq, bs, 4 - - if refpoint_embed is not None: - refpoint_embed = torch.cat([refpoint_embed, refpoint_embed_], dim=1) - tgt = torch.cat([tgt, tgt_], dim=1) - else: - refpoint_embed, tgt = refpoint_embed_, tgt_ - - if self.num_patterns > 0: - tgt_embed = tgt.repeat(1, self.num_patterns, 1) - refpoint_embed = refpoint_embed.repeat(1, self.num_patterns, 1) - tgt_pat = self.patterns.weight[None, :, :].repeat_interleave( - self.num_queries, 1 - ) # 1, n_q*n_pat, d_model - tgt = tgt_embed + tgt_pat - - init_box_proposal = refpoint_embed_.sigmoid() - - else: - raise NotImplementedError("unknown two_stage_type {}".format(self.two_stage_type)) - ######################################################### - # End preparing tgt - # - tgt: bs, NQ, d_model - # - refpoint_embed(unsigmoid): bs, NQ, d_model - ######################################################### - - ######################################################### - # Begin Decoder - ######################################################### - hs, references = self.decoder( - tgt=tgt.transpose(0, 1), - memory=memory.transpose(0, 1), - memory_key_padding_mask=mask_flatten, - pos=lvl_pos_embed_flatten.transpose(0, 1), - refpoints_unsigmoid=refpoint_embed.transpose(0, 1), - level_start_index=level_start_index, - spatial_shapes=spatial_shapes, - valid_ratios=valid_ratios, - tgt_mask=attn_mask, - memory_text=text_dict["encoded_text"], - text_attention_mask=~text_dict["text_token_mask"], - # we ~ the mask . False means use the token; True means pad the token - ) - ######################################################### - # End Decoder - # hs: n_dec, bs, nq, d_model - # references: n_dec+1, bs, nq, query_dim - ######################################################### - - ######################################################### - # Begin postprocess - ######################################################### - if self.two_stage_type == "standard": - hs_enc = tgt_undetach.unsqueeze(0) - ref_enc = refpoint_embed_undetach.sigmoid().unsqueeze(0) - else: - hs_enc = ref_enc = None - ######################################################### - # End postprocess - # hs_enc: (n_enc+1, bs, nq, d_model) or (1, bs, nq, d_model) or (n_enc, bs, nq, d_model) or None - # ref_enc: (n_enc+1, bs, nq, query_dim) or (1, bs, nq, query_dim) or (n_enc, bs, nq, d_model) or None - ######################################################### - - return hs, references, hs_enc, ref_enc, init_box_proposal - # hs: (n_dec, bs, nq, d_model) - # references: sigmoid coordinates. (n_dec+1, bs, bq, 4) - # hs_enc: (n_enc+1, bs, nq, d_model) or (1, bs, nq, d_model) or None - # ref_enc: sigmoid coordinates. 
\ - # (n_enc+1, bs, nq, query_dim) or (1, bs, nq, query_dim) or None - - -class TransformerEncoder(nn.Module): - def __init__( - self, - encoder_layer, - num_layers, - d_model=256, - num_queries=300, - enc_layer_share=False, - text_enhance_layer=None, - feature_fusion_layer=None, - use_checkpoint=False, - use_transformer_ckpt=False, - ): - """_summary_ - - Args: - encoder_layer (_type_): _description_ - num_layers (_type_): _description_ - norm (_type_, optional): _description_. Defaults to None. - d_model (int, optional): _description_. Defaults to 256. - num_queries (int, optional): _description_. Defaults to 300. - enc_layer_share (bool, optional): _description_. Defaults to False. - - """ - super().__init__() - # prepare layers - self.layers = [] - self.text_layers = [] - self.fusion_layers = [] - if num_layers > 0: - self.layers = _get_clones(encoder_layer, num_layers, layer_share=enc_layer_share) - - if text_enhance_layer is not None: - self.text_layers = _get_clones( - text_enhance_layer, num_layers, layer_share=enc_layer_share - ) - if feature_fusion_layer is not None: - self.fusion_layers = _get_clones( - feature_fusion_layer, num_layers, layer_share=enc_layer_share - ) - else: - self.layers = [] - del encoder_layer - - if text_enhance_layer is not None: - self.text_layers = [] - del text_enhance_layer - if feature_fusion_layer is not None: - self.fusion_layers = [] - del feature_fusion_layer - - self.query_scale = None - self.num_queries = num_queries - self.num_layers = num_layers - self.d_model = d_model - - self.use_checkpoint = use_checkpoint - self.use_transformer_ckpt = use_transformer_ckpt - - @staticmethod - def get_reference_points(spatial_shapes, valid_ratios, device): - reference_points_list = [] - for lvl, (H_, W_) in enumerate(spatial_shapes): - - ref_y, ref_x = torch.meshgrid( - torch.linspace(0.5, H_ - 0.5, H_, dtype=torch.float32, device=device), - torch.linspace(0.5, W_ - 0.5, W_, dtype=torch.float32, device=device), - ) - ref_y = ref_y.reshape(-1)[None] / (valid_ratios[:, None, lvl, 1] * H_) - ref_x = ref_x.reshape(-1)[None] / (valid_ratios[:, None, lvl, 0] * W_) - ref = torch.stack((ref_x, ref_y), -1) - reference_points_list.append(ref) - reference_points = torch.cat(reference_points_list, 1) - reference_points = reference_points[:, :, None] * valid_ratios[:, None] - return reference_points - - def forward( - self, - # for images - src: Tensor, - pos: Tensor, - spatial_shapes: Tensor, - level_start_index: Tensor, - valid_ratios: Tensor, - key_padding_mask: Tensor, - # for texts - memory_text: Tensor = None, - text_attention_mask: Tensor = None, - pos_text: Tensor = None, - text_self_attention_masks: Tensor = None, - position_ids: Tensor = None, - ): - """ - Input: - - src: [bs, sum(hi*wi), 256] - - pos: pos embed for src. [bs, sum(hi*wi), 256] - - spatial_shapes: h,w of each level [num_level, 2] - - level_start_index: [num_level] start point of level in sum(hi*wi). 
- - valid_ratios: [bs, num_level, 2] - - key_padding_mask: [bs, sum(hi*wi)] - - - memory_text: bs, n_text, 256 - - text_attention_mask: bs, n_text - False for no padding; True for padding - - pos_text: bs, n_text, 256 - - - position_ids: bs, n_text - Intermedia: - - reference_points: [bs, sum(hi*wi), num_level, 2] - Outpus: - - output: [bs, sum(hi*wi), 256] - """ - - output = src - - # preparation and reshape - if self.num_layers > 0: - reference_points = self.get_reference_points( - spatial_shapes, valid_ratios, device=src.device - ) - - if self.text_layers: - # generate pos_text - bs, n_text, text_dim = memory_text.shape - if pos_text is None and position_ids is None: - pos_text = ( - torch.arange(n_text, device=memory_text.device) - .float() - .unsqueeze(0) - .unsqueeze(-1) - .repeat(bs, 1, 1) - ) - pos_text = get_sine_pos_embed(pos_text, num_pos_feats=256, exchange_xy=False) - if position_ids is not None: - pos_text = get_sine_pos_embed( - position_ids[..., None], num_pos_feats=256, exchange_xy=False - ) - - # main process - for layer_id, layer in enumerate(self.layers): - # if output.isnan().any() or memory_text.isnan().any(): - # if os.environ.get('IPDB_SHILONG_DEBUG', None) == 'INFO': - # import ipdb; ipdb.set_trace() - if self.fusion_layers: - if self.use_checkpoint: - output, memory_text = checkpoint.checkpoint( - self.fusion_layers[layer_id], - output, - memory_text, - key_padding_mask, - text_attention_mask, - ) - else: - output, memory_text = self.fusion_layers[layer_id]( - v=output, - l=memory_text, - attention_mask_v=key_padding_mask, - attention_mask_l=text_attention_mask, - ) - - if self.text_layers: - memory_text = self.text_layers[layer_id]( - src=memory_text.transpose(0, 1), - src_mask=~text_self_attention_masks, # note we use ~ for mask here - src_key_padding_mask=text_attention_mask, - pos=(pos_text.transpose(0, 1) if pos_text is not None else None), - ).transpose(0, 1) - - # main process - if self.use_transformer_ckpt: - output = checkpoint.checkpoint( - layer, - output, - pos, - reference_points, - spatial_shapes, - level_start_index, - key_padding_mask, - ) - else: - output = layer( - src=output, - pos=pos, - reference_points=reference_points, - spatial_shapes=spatial_shapes, - level_start_index=level_start_index, - key_padding_mask=key_padding_mask, - ) - - return output, memory_text - - -class TransformerDecoder(nn.Module): - def __init__( - self, - decoder_layer, - num_layers, - norm=None, - return_intermediate=False, - d_model=256, - query_dim=4, - num_feature_levels=1, - ): - super().__init__() - if num_layers > 0: - self.layers = _get_clones(decoder_layer, num_layers) - else: - self.layers = [] - self.num_layers = num_layers - self.norm = norm - self.return_intermediate = return_intermediate - assert return_intermediate, "support return_intermediate only" - self.query_dim = query_dim - assert query_dim in [2, 4], "query_dim should be 2/4 but {}".format(query_dim) - self.num_feature_levels = num_feature_levels - - self.ref_point_head = MLP(query_dim // 2 * d_model, d_model, d_model, 2) - self.query_pos_sine_scale = None - - self.query_scale = None - self.bbox_embed = None - self.class_embed = None - - self.d_model = d_model - - self.ref_anchor_head = None - - def forward( - self, - tgt, - memory, - tgt_mask: Optional[Tensor] = None, - memory_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - refpoints_unsigmoid: Optional[Tensor] = None, # 
num_queries, bs, 2 - # for memory - level_start_index: Optional[Tensor] = None, # num_levels - spatial_shapes: Optional[Tensor] = None, # bs, num_levels, 2 - valid_ratios: Optional[Tensor] = None, - # for text - memory_text: Optional[Tensor] = None, - text_attention_mask: Optional[Tensor] = None, - ): - """ - Input: - - tgt: nq, bs, d_model - - memory: hw, bs, d_model - - pos: hw, bs, d_model - - refpoints_unsigmoid: nq, bs, 2/4 - - valid_ratios/spatial_shapes: bs, nlevel, 2 - """ - output = tgt - - intermediate = [] - reference_points = refpoints_unsigmoid.sigmoid() - ref_points = [reference_points] - - for layer_id, layer in enumerate(self.layers): - - if reference_points.shape[-1] == 4: - reference_points_input = ( - reference_points[:, :, None] - * torch.cat([valid_ratios, valid_ratios], -1)[None, :] - ) # nq, bs, nlevel, 4 - else: - assert reference_points.shape[-1] == 2 - reference_points_input = reference_points[:, :, None] * valid_ratios[None, :] - query_sine_embed = gen_sineembed_for_position( - reference_points_input[:, :, 0, :] - ) # nq, bs, 256*2 - - # conditional query - raw_query_pos = self.ref_point_head(query_sine_embed) # nq, bs, 256 - pos_scale = self.query_scale(output) if self.query_scale is not None else 1 - query_pos = pos_scale * raw_query_pos - # if os.environ.get("SHILONG_AMP_INFNAN_DEBUG") == '1': - # if query_pos.isnan().any() | query_pos.isinf().any(): - # import ipdb; ipdb.set_trace() - - # main process - output = layer( - tgt=output, - tgt_query_pos=query_pos, - tgt_query_sine_embed=query_sine_embed, - tgt_key_padding_mask=tgt_key_padding_mask, - tgt_reference_points=reference_points_input, - memory_text=memory_text, - text_attention_mask=text_attention_mask, - memory=memory, - memory_key_padding_mask=memory_key_padding_mask, - memory_level_start_index=level_start_index, - memory_spatial_shapes=spatial_shapes, - memory_pos=pos, - self_attn_mask=tgt_mask, - cross_attn_mask=memory_mask, - ) - if output.isnan().any() | output.isinf().any(): - print(f"output layer_id {layer_id} is nan") - try: - num_nan = output.isnan().sum().item() - num_inf = output.isinf().sum().item() - print(f"num_nan {num_nan}, num_inf {num_inf}") - except Exception as e: - print(e) - # if os.environ.get("SHILONG_AMP_INFNAN_DEBUG") == '1': - # import ipdb; ipdb.set_trace() - - # iter update - if self.bbox_embed is not None: - # box_holder = self.bbox_embed(output) - # box_holder[..., :self.query_dim] += inverse_sigmoid(reference_points) - # new_reference_points = box_holder[..., :self.query_dim].sigmoid() - - reference_before_sigmoid = inverse_sigmoid(reference_points) - delta_unsig = self.bbox_embed[layer_id](output) - outputs_unsig = delta_unsig + reference_before_sigmoid - new_reference_points = outputs_unsig.sigmoid() - - reference_points = new_reference_points.detach() - # if layer_id != self.num_layers - 1: - ref_points.append(new_reference_points) - - intermediate.append(self.norm(output)) - - return [ - [itm_out.transpose(0, 1) for itm_out in intermediate], - [itm_refpoint.transpose(0, 1) for itm_refpoint in ref_points], - ] - - -class DeformableTransformerEncoderLayer(nn.Module): - def __init__( - self, - d_model=256, - d_ffn=1024, - dropout=0.1, - activation="relu", - n_levels=4, - n_heads=8, - n_points=4, - ): - super().__init__() - - # self attention - self.self_attn = MSDeformAttn( - embed_dim=d_model, - num_levels=n_levels, - num_heads=n_heads, - num_points=n_points, - batch_first=True, - ) - self.dropout1 = nn.Dropout(dropout) - self.norm1 = nn.LayerNorm(d_model) - - # ffn - 
self.linear1 = nn.Linear(d_model, d_ffn) - self.activation = _get_activation_fn(activation, d_model=d_ffn) - self.dropout2 = nn.Dropout(dropout) - self.linear2 = nn.Linear(d_ffn, d_model) - self.dropout3 = nn.Dropout(dropout) - self.norm2 = nn.LayerNorm(d_model) - - @staticmethod - def with_pos_embed(tensor, pos): - return tensor if pos is None else tensor + pos - - def forward_ffn(self, src): - src2 = self.linear2(self.dropout2(self.activation(self.linear1(src)))) - src = src + self.dropout3(src2) - src = self.norm2(src) - return src - - def forward( - self, src, pos, reference_points, spatial_shapes, level_start_index, key_padding_mask=None - ): - # self attention - # import ipdb; ipdb.set_trace() - src2 = self.self_attn( - query=self.with_pos_embed(src, pos), - reference_points=reference_points, - value=src, - spatial_shapes=spatial_shapes, - level_start_index=level_start_index, - key_padding_mask=key_padding_mask, - ) - src = src + self.dropout1(src2) - src = self.norm1(src) - - # ffn - src = self.forward_ffn(src) - - return src - - -class DeformableTransformerDecoderLayer(nn.Module): - def __init__( - self, - d_model=256, - d_ffn=1024, - dropout=0.1, - activation="relu", - n_levels=4, - n_heads=8, - n_points=4, - use_text_feat_guide=False, - use_text_cross_attention=False, - ): - super().__init__() - - # cross attention - self.cross_attn = MSDeformAttn( - embed_dim=d_model, - num_levels=n_levels, - num_heads=n_heads, - num_points=n_points, - batch_first=True, - ) - self.dropout1 = nn.Dropout(dropout) if dropout > 0 else nn.Identity() - self.norm1 = nn.LayerNorm(d_model) - - # cross attention text - if use_text_cross_attention: - self.ca_text = nn.MultiheadAttention(d_model, n_heads, dropout=dropout) - self.catext_dropout = nn.Dropout(dropout) if dropout > 0 else nn.Identity() - self.catext_norm = nn.LayerNorm(d_model) - - # self attention - self.self_attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout) - self.dropout2 = nn.Dropout(dropout) if dropout > 0 else nn.Identity() - self.norm2 = nn.LayerNorm(d_model) - - # ffn - self.linear1 = nn.Linear(d_model, d_ffn) - self.activation = _get_activation_fn(activation, d_model=d_ffn, batch_dim=1) - self.dropout3 = nn.Dropout(dropout) if dropout > 0 else nn.Identity() - self.linear2 = nn.Linear(d_ffn, d_model) - self.dropout4 = nn.Dropout(dropout) if dropout > 0 else nn.Identity() - self.norm3 = nn.LayerNorm(d_model) - - self.key_aware_proj = None - self.use_text_feat_guide = use_text_feat_guide - assert not use_text_feat_guide - self.use_text_cross_attention = use_text_cross_attention - - def rm_self_attn_modules(self): - self.self_attn = None - self.dropout2 = None - self.norm2 = None - - @staticmethod - def with_pos_embed(tensor, pos): - return tensor if pos is None else tensor + pos - - def forward_ffn(self, tgt): - with torch.cuda.amp.autocast(enabled=False): - tgt2 = self.linear2(self.dropout3(self.activation(self.linear1(tgt)))) - tgt = tgt + self.dropout4(tgt2) - tgt = self.norm3(tgt) - return tgt - - def forward( - self, - # for tgt - tgt: Optional[Tensor], # nq, bs, d_model - tgt_query_pos: Optional[Tensor] = None, # pos for query. MLP(Sine(pos)) - tgt_query_sine_embed: Optional[Tensor] = None, # pos for query. 
Sine(pos) - tgt_key_padding_mask: Optional[Tensor] = None, - tgt_reference_points: Optional[Tensor] = None, # nq, bs, 4 - memory_text: Optional[Tensor] = None, # bs, num_token, d_model - text_attention_mask: Optional[Tensor] = None, # bs, num_token - # for memory - memory: Optional[Tensor] = None, # hw, bs, d_model - memory_key_padding_mask: Optional[Tensor] = None, - memory_level_start_index: Optional[Tensor] = None, # num_levels - memory_spatial_shapes: Optional[Tensor] = None, # bs, num_levels, 2 - memory_pos: Optional[Tensor] = None, # pos for memory - # sa - self_attn_mask: Optional[Tensor] = None, # mask used for self-attention - cross_attn_mask: Optional[Tensor] = None, # mask used for cross-attention - ): - """ - Input: - - tgt/tgt_query_pos: nq, bs, d_model - - - """ - assert cross_attn_mask is None - - # self attention - if self.self_attn is not None: - # import ipdb; ipdb.set_trace() - q = k = self.with_pos_embed(tgt, tgt_query_pos) - tgt2 = self.self_attn(q, k, tgt, attn_mask=self_attn_mask)[0] - tgt = tgt + self.dropout2(tgt2) - tgt = self.norm2(tgt) - - if self.use_text_cross_attention: - tgt2 = self.ca_text( - self.with_pos_embed(tgt, tgt_query_pos), - memory_text.transpose(0, 1), - memory_text.transpose(0, 1), - key_padding_mask=text_attention_mask, - )[0] - tgt = tgt + self.catext_dropout(tgt2) - tgt = self.catext_norm(tgt) - - tgt2 = self.cross_attn( - query=self.with_pos_embed(tgt, tgt_query_pos).transpose(0, 1), - reference_points=tgt_reference_points.transpose(0, 1).contiguous(), - value=memory.transpose(0, 1), - spatial_shapes=memory_spatial_shapes, - level_start_index=memory_level_start_index, - key_padding_mask=memory_key_padding_mask, - ).transpose(0, 1) - tgt = tgt + self.dropout1(tgt2) - tgt = self.norm1(tgt) - - # ffn - tgt = self.forward_ffn(tgt) - - return tgt - - -def build_transformer(args): - return Transformer( - d_model=args.hidden_dim, - dropout=args.dropout, - nhead=args.nheads, - num_queries=args.num_queries, - dim_feedforward=args.dim_feedforward, - num_encoder_layers=args.enc_layers, - num_decoder_layers=args.dec_layers, - normalize_before=args.pre_norm, - return_intermediate_dec=True, - query_dim=args.query_dim, - activation=args.transformer_activation, - num_patterns=args.num_patterns, - num_feature_levels=args.num_feature_levels, - enc_n_points=args.enc_n_points, - dec_n_points=args.dec_n_points, - learnable_tgt_init=True, - # two stage - two_stage_type=args.two_stage_type, # ['no', 'standard', 'early'] - embed_init_tgt=args.embed_init_tgt, - use_text_enhancer=args.use_text_enhancer, - use_fusion_layer=args.use_fusion_layer, - use_checkpoint=args.use_checkpoint, - use_transformer_ckpt=args.use_transformer_ckpt, - use_text_cross_attention=args.use_text_cross_attention, - text_dropout=args.text_dropout, - fusion_dropout=args.fusion_dropout, - fusion_droppath=args.fusion_droppath, - ) diff --git a/spaces/shabnam91/Sanskrit-TTS/indic_nlp_library/indicnlp/transliterate/sinhala_transliterator.py b/spaces/shabnam91/Sanskrit-TTS/indic_nlp_library/indicnlp/transliterate/sinhala_transliterator.py deleted file mode 100644 index 1e762252a56e93c94cd488a07031f7d7eae8a1d3..0000000000000000000000000000000000000000 --- a/spaces/shabnam91/Sanskrit-TTS/indic_nlp_library/indicnlp/transliterate/sinhala_transliterator.py +++ /dev/null @@ -1,171 +0,0 @@ -# -# Copyright (c) 2013-present, Anoop Kunchukuttan -# All rights reserved. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
-# - -class SinhalaDevanagariTransliterator(object): - """ - A Devanagari to Sinhala transliterator based on explicit Unicode Mapping - """ - - sinhala_devnag_map={ - '\u0d82':'\u0902', - '\u0d83':'\u0903', - '\u0d84':'\u0904', - '\u0d85':'\u0905', - '\u0d86':'\u0906', - '\u0d87':'\u090d', - '\u0d88':'\u090d', - '\u0d89':'\u0907', - '\u0d8a':'\u0908', - '\u0d8b':'\u0909', - '\u0d8c':'\u090a', - '\u0d8d':'\u090b', - '\u0d8f':'\u090c', - '\u0d91':'\u090e', - '\u0d92':'\u090f', - '\u0d93':'\u0910', - '\u0d94':'\u0912', - '\u0d95':'\u0913', - '\u0d96':'\u0914', - '\u0d9a':'\u0915', - '\u0d9b':'\u0916', - '\u0d9c':'\u0917', - '\u0d9d':'\u0918', - '\u0d9e':'\u0919', - '\u0d9f':'\u0919', - '\u0da0':'\u091a', - '\u0da1':'\u091b', - '\u0da2':'\u091c', - '\u0da3':'\u091d', - '\u0da4':'\u091e', - '\u0da5':'\u091e', - '\u0da6':'\u091e', - '\u0da7':'\u091f', - '\u0da8':'\u0920', - '\u0da9':'\u0921', - '\u0daa':'\u0922', - '\u0dab':'\u0923', - '\u0dac':'\u0923', - '\u0dad':'\u0924', - '\u0dae':'\u0925', - '\u0daf':'\u0926', - '\u0db0':'\u0927', - '\u0db1':'\u0928', - '\u0db2':'\u0928', - '\u0db3':'\u0928', - '\u0db4':'\u092a', - '\u0db5':'\u092b', - '\u0db6':'\u092c', - '\u0db7':'\u092d', - '\u0db8':'\u092e', - '\u0dba':'\u092f', - '\u0dbb':'\u0930', - '\u0dbd':'\u0932', - '\u0dc5':'\u0933', - '\u0dc0':'\u0935', - '\u0dc1':'\u0936', - '\u0dc2':'\u0937', - '\u0dc3':'\u0938', - '\u0dc4':'\u0939', - '\u0dcf':'\u093e', - '\u0dd0':'\u0949', - '\u0dd1':'\u0949', - '\u0dd2':'\u093f', - '\u0dd3':'\u0940', - '\u0dd4':'\u0941', - '\u0dd6':'\u0942', - '\u0dd8':'\u0943', - '\u0dd9':'\u0946', - '\u0dda':'\u0947', - '\u0ddb':'\u0948', - '\u0ddc':'\u094a', - '\u0ddd':'\u094b', - '\u0dde':'\u094c', - '\u0dca':'\u094d', - } - - devnag_sinhala_map={ - '\u0900':'\u0d82', - '\u0901':'\u0d82', - '\u0902':'\u0d82', - '\u0903':'\u0d83', - '\u0904':'\u0d84', - '\u0905':'\u0d85', - '\u0906':'\u0d86', - '\u0907':'\u0d89', - '\u0908':'\u0d8a', - '\u0909':'\u0d8b', - '\u090a':'\u0d8c', - '\u090b':'\u0d8d', - '\u090c':'\u0d8f', - '\u090d':'\u0d88', - '\u090e':'\u0d91', - '\u090f':'\u0d92', - '\u0910':'\u0d93', - '\u0912':'\u0d94', - '\u0913':'\u0d95', - '\u0914':'\u0d96', - '\u0915':'\u0d9a', - '\u0916':'\u0d9b', - '\u0917':'\u0d9c', - '\u0918':'\u0d9d', - '\u0919':'\u0d9e', - '\u091a':'\u0da0', - '\u091b':'\u0da1', - '\u091c':'\u0da2', - '\u091d':'\u0da3', - '\u091e':'\u0da4', - '\u091f':'\u0da7', - '\u0920':'\u0da8', - '\u0921':'\u0da9', - '\u0922':'\u0daa', - '\u0923':'\u0dab', - '\u0924':'\u0dad', - '\u0925':'\u0dae', - '\u0926':'\u0daf', - '\u0927':'\u0db0', - '\u0928':'\u0db1', - '\u0929':'\u0db1', - '\u092a':'\u0db4', - '\u092b':'\u0db5', - '\u092c':'\u0db6', - '\u092d':'\u0db7', - '\u092e':'\u0db8', - '\u092f':'\u0dba', - '\u0930':'\u0dbb', - '\u0932':'\u0dbd', - '\u0933':'\u0dc5', - '\u0935':'\u0dc0', - '\u0936':'\u0dc1', - '\u0937':'\u0dc2', - '\u0938':'\u0dc3', - '\u0939':'\u0dc4', - '\u093e':'\u0dcf', - '\u0949':'\u0dd1', - '\u093f':'\u0dd2', - '\u0940':'\u0dd3', - '\u0941':'\u0dd4', - '\u0942':'\u0dd6', - '\u0943':'\u0dd8', - '\u0946':'\u0dd9', - '\u0947':'\u0dda', - '\u0948':'\u0ddb', - '\u094a':'\u0ddc', - '\u094b':'\u0ddd', - '\u094c':'\u0dde', - '\u094d':'\u0dca', - - } - - @staticmethod - def devanagari_to_sinhala(text): - return ''.join([ SinhalaDevanagariTransliterator.devnag_sinhala_map.get(c,c) for c in text ]) - - @staticmethod - def sinhala_to_devanagari(text): - return ''.join([ SinhalaDevanagariTransliterator.sinhala_devnag_map.get(c,c) for c in text ]) - diff --git 
a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/i18n.py b/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/i18n.py deleted file mode 100644 index 1d7fe71d0e443a90492ff033ee34460e3429379f..0000000000000000000000000000000000000000 --- a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/i18n.py +++ /dev/null @@ -1,25 +0,0 @@ -import locale -import json -import os - - -def load_language_list(language): - with open(f"./i18n/{language}.json", "r", encoding="utf-8") as f: - language_list = json.load(f) - return language_list - - -class I18nAuto: - def __init__(self, language=None): - if language in ["Auto", None]: - language = locale.getdefaultlocale()[ - 0 - ] # getlocale can't identify the system's language ((None, None)) - if not os.path.exists(f"./i18n/{language}.json"): - language = "en_US" - self.language = language - print("Use Language:", language) - self.language_map = load_language_list(language) - - def __call__(self, key): - return self.language_map.get(key, key) diff --git a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/train/losses.py b/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/train/losses.py deleted file mode 100644 index b89038f14d06d7fae43628183e9ffb465e4edafd..0000000000000000000000000000000000000000 --- a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/train/losses.py +++ /dev/null @@ -1,59 +0,0 @@ -import torch -from torch.nn import functional as F - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - rl = rl.float().detach() - gl = gl.float() - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - dr = dr.float() - dg = dg.float() - r_loss = torch.mean((1 - dr) ** 2) - g_loss = torch.mean(dg**2) - loss += r_loss + g_loss - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - dg = dg.float() - l = torch.mean((1 - dg) ** 2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - - -def kl_loss(z_p, logs_q, m_p, logs_p, z_mask): - """ - z_p, logs_q: [b, h, t_t] - m_p, logs_p: [b, h, t_t] - """ - z_p = z_p.float() - logs_q = logs_q.float() - m_p = m_p.float() - logs_p = logs_p.float() - z_mask = z_mask.float() - - kl = logs_p - logs_q - 0.5 - kl += 0.5 * ((z_p - m_p) ** 2) * torch.exp(-2.0 * logs_p) - kl = torch.sum(kl * z_mask) - l = kl / torch.sum(z_mask) - return l diff --git a/spaces/sigit/permadi/README.md b/spaces/sigit/permadi/README.md deleted file mode 100644 index 2c40d8589498eb6131d28e64916f87982b80776a..0000000000000000000000000000000000000000 --- a/spaces/sigit/permadi/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Permadi -emoji: 📊 -colorFrom: blue -colorTo: purple -sdk: static -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Bus Simulator Indonesia Unlimited Money and Fuel for the Best Bus Driving Experience.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Bus Simulator Indonesia Unlimited Money and Fuel for the Best Bus Driving Experience.md deleted file mode 100644 index 
51f90f3b52cf8e01f2d26b1fe8a04c321f852f1f..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Bus Simulator Indonesia Unlimited Money and Fuel for the Best Bus Driving Experience.md +++ /dev/null @@ -1,70 +0,0 @@ -
                      | H2: Features of Bus Simulator Indonesia | - Authentic Indonesian cities and places
                      - Cool and fun honks
                      - Online multiplayer convoy | | H2: How to Download Bus Simulator Indonesia with Unlimited Money | - Requirements: Android device, internet connection, mod apk file
                      - Steps: Download mod apk file, install it, allow permissions, launch the game, enjoy unlimited money | | H2: Benefits of Playing Bus Simulator Indonesia with Unlimited Money | - Unlock all buses and customizations
                      - Explore all routes and locations
                      - Join any convoy and chat with other players
                      - Support the developers and get updates | | H2: Tips and Tricks for Playing Bus Simulator Indonesia | - Learn the traffic rules and signs
                      - Use the GPS and map to navigate
                      - Adjust the camera and controls to your preference
                      - Be careful with the speed and brakes | | H2: Conclusion | Summary: Bus Simulator Indonesia is a fun and realistic game that lets you experience being a bus driver in Indonesia. You can download it with unlimited money using a mod apk file. This will give you access to all the features and benefits of the game. You can also follow some tips and tricks to improve your gameplay. | | H3: FAQs | - Q: Is Bus Simulator Indonesia safe to download and play?
                      A: Yes, it is safe as long as you download it from a trusted source and scan it for viruses.
                      - Q: Can I play Bus Simulator Indonesia offline?
                      A: Yes, you can play it offline in career mode. However, you will need an internet connection to play online in multiplayer mode.
                      - Q: How can I update Bus Simulator Indonesia?
                      A: You can update it by downloading the latest mod apk file and installing it over the existing one. You can also check for updates in the game settings.
                      - Q: How can I contact the developers of Bus Simulator Indonesia?
                      A: You can contact them by sending an email to bussid@maleo.id or visiting their website at https://maleo.id/. You can also follow them on social media platforms such as Facebook, Instagram, and YouTube.
                      - Q: How can I share my feedback and suggestions for Bus Simulator Indonesia?
                      A: You can share your feedback and suggestions by leaving a review on Google Play Store or sending an email to bussid@maleo.id. You can also join their official Discord server at https://discord.gg/bussimulatorindonesia. | Table 2: Article with HTML formatting
                      

      Download Bus Simulator Indonesia with Unlimited Money

                      

      If you love driving games and want to experience what it's like to be a bus driver in Indonesia, then you should try Bus Simulator Indonesia. This is a fun and realistic game that lets you design your own livery, drive through authentic Indonesian cities and places, honk your horn in a cool and fun way, and join online multiplayer convoys with other players. In this article, we will show you how to download Bus Simulator Indonesia with unlimited money using a mod apk file. We will also tell you about the features and benefits of playing this game, as well as some tips and tricks to improve your gameplay.

                      

      Features of Bus Simulator Indonesia

                      

      Bus Simulator Indonesia (aka BUSSID) is one of the most popular bus simulator games on Android. It has over 100 million downloads on Google Play Store and a rating of 4.2 stars out of 5. It was developed by Maleo, an Indonesian game studio that specializes in simulation games. Here are some of the features that make this game stand out:

                      

      download bus simulator indonesia with unlimited money


      Download ->>> https://ssurll.com/2uNS3R



                      
      • Design your own livery: You can customize your bus with your own colors, logos, stickers, and accessories. You can also use your own 3D model using the vehicle mod system. This gives you the freedom to express your creativity and personality.
                      
      • Authentic Indonesian cities and places: You can drive through various cities and places in Indonesia, such as Jakarta, Surabaya, Bali, Sumatra, Java, Kalimantan, Sulawesi, Papua, and more. You can see the landmarks, buildings, roads, bridges, landscapes, and cultures of each region. You can also experience different weather conditions, traffic jams, accidents, and events that happen in real life.
                      
      • Cool and fun honks: You can honk your horn in different ways, such as the "om telolet om" (honk like a bus) phenomenon that went viral in 2016. You can also use the voice chat feature to communicate with other players or pedestrians. This adds more fun and excitement to your driving experience.
                      
      • Online multiplayer convoy: You can join or create your own convoy with other players from around the world. You can chat, cooperate, compete, or just have fun together. You can also see the leaderboards and rankings of the best drivers and convoys. This makes the game more social and interactive.
                      

      How to Download Bus Simulator Indonesia with Unlimited Money

                      

      If you want to enjoy all the features of Bus Simulator Indonesia without any limitations, you can download it with unlimited money using a mod apk file. A mod apk file is a modified version of the original game that has some extra features or advantages, such as unlimited money, unlocked items, or premium access. Here are the requirements and steps to download Bus Simulator Indonesia with unlimited money:

                      1. Requirements: You will need an Android device running Android 4.2 or later, an internet connection, and a mod apk file of Bus Simulator Indonesia. You can download the mod apk file from various websites, such as https://android-1.com/en/4690-bus-simulator-indonesia-mod.html or https://rexdl.com/android/bus-simulator-indonesia-apk.html/. However, be careful and make sure that the file is safe and virus-free before downloading it.
                      2. Steps: After downloading the mod apk file, you need to install it on your device. To do this, allow installing apps from unknown sources: go to your device settings, open Security, and enable the "Unknown sources" option. Then tap on the mod apk file and follow the instructions to install it. After installing it, you can launch the game and enjoy unlimited money. (A command-line alternative for installing from a computer is sketched below.)
                      
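                      For readers who prefer installing from a computer instead of tapping through the phone's settings, the same manual steps can be done over USB with Android's standard adb tool. The sketch below is only an illustration and is not part of the original guide: it assumes adb is installed and on your PATH, USB debugging is enabled on the device, and the apk file name is a hypothetical placeholder for whatever file you actually downloaded.

                      ```python
                      # Hypothetical sideloading helper that drives Android's adb tool from Python.
                      # Assumptions: adb is on PATH, USB debugging is enabled on the device, and the
                      # apk path below is a placeholder name, not a file referenced by this article.
                      import subprocess

                      def sideload_apk(apk_path: str) -> None:
                          # List connected devices first so a missing or unauthorized device fails loudly.
                          subprocess.run(["adb", "devices"], check=True)
                          # "adb install -r" installs the apk, replacing an existing copy while keeping its data.
                          subprocess.run(["adb", "install", "-r", apk_path], check=True)

                      if __name__ == "__main__":
                          sideload_apk("bus-simulator-indonesia-mod.apk")  # placeholder file name
                      ```

                      The `-r` flag matches the FAQ's advice further down: installing a newer apk over the existing one keeps your saved data.
                      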

      Benefits of Playing Bus Simulator Indonesia with Unlimited Money

                      

      Playing Bus Simulator Indonesia with unlimited money has many benefits that will enhance your gaming experience. Here are some of them:

                      
      • Unlock all buses and customizations: With unlimited money, you can buy any bus you want from the shop. You can also customize your bus with any livery, accessories, or 3D models you like. You can create your own unique and stylish bus that suits your taste and personality.
                      
      • Explore all routes and locations: With unlimited money, you can travel to any city or place in Indonesia without worrying about the fuel or maintenance costs. You can see the beauty and diversity of Indonesia's culture and nature. You can also discover new routes and challenges that will test your driving skills.
                      
      • Join any convoy and chat with other players: With unlimited money, you can join any convoy you want without paying any fees or waiting for invitations. You can also chat with other players using voice or text messages. You can make new friends, share your experiences, or ask for help if you need it.
                      
      • Support the developers and get updates: With unlimited money, you can support the developers of Bus Simulator Indonesia by buying their in-game products or services. This will help them to improve the game and add more features and content. You can also get updates and bug fixes faster and easier.
                      

      Tips and Tricks for Playing Bus Simulator Indonesia

                      

      Bus Simulator Indonesia is a realistic game that requires some skills and knowledge to play well. Here are some tips and tricks that will help you to become a better bus driver:

                      

      download bus simulator indonesia mod apk with unlimited fuel
      -how to get unlimited money in bus simulator indonesia game
      -bus simulator indonesia hack version download for android
      -best bus simulator games with unlimited money and coins
      -download bus simulator indonesia latest version with mod menu
      -bus simulator indonesia cheats and tips for unlimited money
      -free download bus simulator indonesia for pc with unlimited money
      -bus simulator indonesia mod apk revdl with unlimited money and gems
      -download bus simulator indonesia offline with unlimited money and gold
      -bus simulator indonesia online multiplayer with unlimited money hack
      -bus simulator indonesia mod apk happymod with unlimited money and all buses unlocked
      -download bus simulator indonesia 2023 with unlimited money and skins
      -bus simulator indonesia gameplay with unlimited money and features
      -bus simulator indonesia mod apk rexdl with unlimited money and no ads
      -download bus simulator indonesia update with unlimited money and new buses
      -bus simulator indonesia mod apk an1 with unlimited money and premium features
      -download bus simulator indonesia for ios with unlimited money and coins
      -bus simulator indonesia mod apk pure with unlimited money and realistic graphics
      -download bus simulator indonesia old version with unlimited money and vehicles
      -bus simulator indonesia mod apk android 1 with unlimited money and everything unlocked

                      
      • Learn the traffic rules and signs: You need to follow the traffic rules and signs in Bus Simulator Indonesia, such as speed limits, traffic lights, stop signs, lane markings, etc. If you break them, you will get fined or penalized by the police or traffic officers. You will also lose points or reputation if you cause accidents or damage to your bus or other vehicles.
                      
      • Use the GPS and map to navigate: You need to use the GPS and map features in Bus Simulator Indonesia to find your way around the cities and places. The GPS will show you the direction and distance to your destination, while the map will show you the layout of the roads and landmarks. You can also zoom in or out of the map to see more details or overview.
                      • Adjust the camera and controls to your preference: You can choose from different camera angles and views in Bus Simulator Indonesia, such as first-person, third-person, top-down, dashboard, etc. You can also adjust the sensitivity and layout of the controls, such as steering wheel, pedals, buttons, etc. You can find the best settings that suit your style and comfort.
                      
      • Be careful with the speed and brakes: You need to be careful with the speed and brakes in Bus Simulator Indonesia, as they affect the physics and handling of your bus. If you drive too fast, you may lose control or crash into other vehicles or obstacles. If you brake too hard, you may skid or damage your bus or passengers. You need to find the right balance between speed and safety.
                      

      Conclusion

                      

      Bus Simulator Indonesia is a fun and realistic game that lets you experience being a bus driver in Indonesia. You can design your own livery, drive through authentic Indonesian cities and places, honk your horn in a cool and fun way, and join online multiplayer convoys with other players. You can download it with unlimited money using a mod apk file. This will give you access to all the features and benefits of the game. You can also follow some tips and tricks to improve your gameplay. If you are looking for a game that combines simulation, adventure, and social aspects, then you should try Bus Simulator Indonesia today.

                      

      FAQs

                      • Q: Is Bus Simulator Indonesia safe to download and play?
                        A: Yes, it is safe as long as you download it from a trusted source and scan it for viruses.
                      • Q: Can I play Bus Simulator Indonesia offline?
                        A: Yes, you can play it offline in career mode. However, you will need an internet connection to play online in multiplayer mode.
                      • Q: How can I update Bus Simulator Indonesia?
                        A: You can update it by downloading the latest mod apk file and installing it over the existing one. You can also check for updates in the game settings.
                      • Q: How can I contact the developers of Bus Simulator Indonesia?
                        A: You can contact them by sending an email to bussid@maleo.id or visiting their website at https://maleo.id/. You can also follow them on social media platforms such as Facebook, Instagram, and YouTube.
                      • Q: How can I share my feedback and suggestions for Bus Simulator Indonesia?
                        A: You can share your feedback and suggestions by leaving a review on Google Play Store or sending an email to bussid@maleo.id. You can also join their official Discord server at https://discord.gg/bussimulatorindonesia.
                      

                      
      \ No newline at end of file diff --git a/spaces/sinz2002/ChuanhuChatGPT/assets/Kelpy-Codos.js b/spaces/sinz2002/ChuanhuChatGPT/assets/Kelpy-Codos.js deleted file mode 100644 index cfbaeedb4f371dfb5fe157db545b364046fca3e1..0000000000000000000000000000000000000000 --- a/spaces/sinz2002/ChuanhuChatGPT/assets/Kelpy-Codos.js +++ /dev/null @@ -1,76 +0,0 @@ -// ==UserScript== -// @name Kelpy Codos -// @namespace https://github.com/Keldos-Li/Kelpy-Codos -// @version 1.0.5 -// @author Keldos; https://keldos.me/ -// @description Add copy button to PRE tags before CODE tag, for Chuanhu ChatGPT especially. -// Based on Chuanhu ChatGPT version: ac04408 (2023-3-22) -// @license GPL-3.0 -// @grant none -// ==/UserScript== - -(function () { - 'use strict'; - - function addCopyButton(pre) { - var code = pre.querySelector('code'); - if (!code) { - return; // 如果没有找到 元素,则不添加按钮 - } - var firstChild = code.firstChild; - if (!firstChild) { - return; // 如果 元素没有子节点,则不添加按钮 - } - var button = document.createElement('button'); - button.textContent = '\uD83D\uDCCE'; // 使用 📎 符号作为“复制”按钮的文本 - button.style.position = 'relative'; - button.style.float = 'right'; - button.style.fontSize = '1em'; // 可选:调整按钮大小 - button.style.background = 'none'; // 可选:去掉背景颜色 - button.style.border = 'none'; // 可选:去掉边框 - button.style.cursor = 'pointer'; // 可选:显示指针样式 - button.addEventListener('click', function () { - var range = document.createRange(); - range.selectNodeContents(code); - range.setStartBefore(firstChild); // 将范围设置为第一个子节点之前 - var selection = window.getSelection(); - selection.removeAllRanges(); - selection.addRange(range); - - try { - var success = document.execCommand('copy'); - if (success) { - button.textContent = '\u2714'; - setTimeout(function () { - button.textContent = '\uD83D\uDCCE'; // 恢复按钮为“复制” - }, 2000); - } else { - button.textContent = '\u2716'; - } - } catch (e) { - console.error(e); - button.textContent = '\u2716'; - } - - selection.removeAllRanges(); - }); - code.insertBefore(button, firstChild); // 将按钮插入到第一个子元素之前 - } - - function handleNewElements(mutationsList, observer) { - for (var mutation of mutationsList) { - if (mutation.type === 'childList') { - for (var node of mutation.addedNodes) { - if (node.nodeName === 'PRE') { - addCopyButton(node); - } - } - } - } - } - - var observer = new MutationObserver(handleNewElements); - observer.observe(document.documentElement, { childList: true, subtree: true }); - - document.querySelectorAll('pre').forEach(addCopyButton); -})(); diff --git a/spaces/sparanoid/milky-green-sovits-4/vdecoder/hifigan/utils.py b/spaces/sparanoid/milky-green-sovits-4/vdecoder/hifigan/utils.py deleted file mode 100644 index 9c93c996d3cc73c30d71c1fc47056e4230f35c0f..0000000000000000000000000000000000000000 --- a/spaces/sparanoid/milky-green-sovits-4/vdecoder/hifigan/utils.py +++ /dev/null @@ -1,68 +0,0 @@ -import glob -import os -import matplotlib -import torch -from torch.nn.utils import weight_norm -# matplotlib.use("Agg") -import matplotlib.pylab as plt - - -def plot_spectrogram(spectrogram): - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - - fig.canvas.draw() - plt.close() - - return fig - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def apply_weight_norm(m): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - weight_norm(m) - - -def 
get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def load_checkpoint(filepath, device): - assert os.path.isfile(filepath) - print("Loading '{}'".format(filepath)) - checkpoint_dict = torch.load(filepath, map_location=device) - print("Complete.") - return checkpoint_dict - - -def save_checkpoint(filepath, obj): - print("Saving checkpoint to {}".format(filepath)) - torch.save(obj, filepath) - print("Complete.") - - -def del_old_checkpoints(cp_dir, prefix, n_models=2): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) # get checkpoint paths - cp_list = sorted(cp_list)# sort by iter - if len(cp_list) > n_models: # if more than n_models models are found - for cp in cp_list[:-n_models]:# delete the oldest models other than lastest n_models - open(cp, 'w').close()# empty file contents - os.unlink(cp)# delete file (move to trash when using Colab) - - -def scan_checkpoint(cp_dir, prefix): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) - if len(cp_list) == 0: - return None - return sorted(cp_list)[-1] - diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/concat_dataset.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/concat_dataset.py deleted file mode 100644 index 01a4078bb159fa44b2d1062b9a971fe7f1abd1c2..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/data/concat_dataset.py +++ /dev/null @@ -1,124 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import bisect - -import numpy as np -from torch.utils.data.dataloader import default_collate - -from . import FairseqDataset - - -class ConcatDataset(FairseqDataset): - @staticmethod - def cumsum(sequence, sample_ratios): - r, s = [], 0 - for e, ratio in zip(sequence, sample_ratios): - curr_len = int(ratio * len(e)) - r.append(curr_len + s) - s += curr_len - return r - - def __init__(self, datasets, sample_ratios=1): - super(ConcatDataset, self).__init__() - assert len(datasets) > 0, "datasets should not be an empty iterable" - self.datasets = list(datasets) - if isinstance(sample_ratios, int): - sample_ratios = [sample_ratios] * len(self.datasets) - self.sample_ratios = sample_ratios - self.cumulative_sizes = self.cumsum(self.datasets, sample_ratios) - self.real_sizes = [len(d) for d in self.datasets] - - def __len__(self): - return self.cumulative_sizes[-1] - - def __getitem__(self, idx): - dataset_idx, sample_idx = self._get_dataset_and_sample_index(idx) - return self.datasets[dataset_idx][sample_idx] - - def _get_dataset_and_sample_index(self, idx: int): - dataset_idx = bisect.bisect_right(self.cumulative_sizes, idx) - if dataset_idx == 0: - sample_idx = idx - else: - sample_idx = idx - self.cumulative_sizes[dataset_idx - 1] - sample_idx = sample_idx % self.real_sizes[dataset_idx] - return dataset_idx, sample_idx - - def collater(self, samples, **extra_args): - # For now only supports datasets with same underlying collater implementations - if hasattr(self.datasets[0], "collater"): - return self.datasets[0].collater(samples, **extra_args) - else: - return default_collate(samples, **extra_args) - - def size(self, idx: int): - """ - Return an example's size as a float or tuple. 
- """ - dataset_idx, sample_idx = self._get_dataset_and_sample_index(idx) - return self.datasets[dataset_idx].size(sample_idx) - - def num_tokens(self, index: int): - return np.max(self.size(index)) - - def attr(self, attr: str, index: int): - dataset_idx = bisect.bisect_right(self.cumulative_sizes, index) - return getattr(self.datasets[dataset_idx], attr, None) - - @property - def sizes(self): - _dataset_sizes = [] - for ds, sr in zip(self.datasets, self.sample_ratios): - if isinstance(ds.sizes, np.ndarray): - _dataset_sizes.append(np.tile(ds.sizes, sr)) - else: - # Only support underlying dataset with single size array. - assert isinstance(ds.sizes, list) - _dataset_sizes.append(np.tile(ds.sizes[0], sr)) - return np.concatenate(_dataset_sizes) - - @property - def supports_prefetch(self): - return all(d.supports_prefetch for d in self.datasets) - - def ordered_indices(self): - """ - Returns indices sorted by length. So less padding is needed. - """ - if isinstance(self.sizes, np.ndarray) and len(self.sizes.shape) > 1: - # special handling for concatenating lang_pair_datasets - indices = np.arange(len(self)) - sizes = self.sizes - tgt_sizes = ( - sizes[:, 1] if len(sizes.shape) > 0 and sizes.shape[1] > 1 else None - ) - src_sizes = ( - sizes[:, 0] if len(sizes.shape) > 0 and sizes.shape[1] > 1 else sizes - ) - # sort by target length, then source length - if tgt_sizes is not None: - indices = indices[np.argsort(tgt_sizes[indices], kind="mergesort")] - return indices[np.argsort(src_sizes[indices], kind="mergesort")] - else: - return np.argsort(self.sizes) - - def prefetch(self, indices): - frm = 0 - for to, ds in zip(self.cumulative_sizes, self.datasets): - real_size = len(ds) - if getattr(ds, "supports_prefetch", False): - ds.prefetch([(i - frm) % real_size for i in indices if frm <= i < to]) - frm = to - - @property - def can_reuse_epoch_itr_across_epochs(self): - return all(d.can_reuse_epoch_itr_across_epochs for d in self.datasets) - - def set_epoch(self, epoch): - super().set_epoch(epoch) - for ds in self.datasets: - if hasattr(ds, "set_epoch"): - ds.set_epoch(epoch) diff --git a/spaces/stomexserde/gpt4-ui/Examples/2014 Cbr 600rr Rear Seat Cowl.md b/spaces/stomexserde/gpt4-ui/Examples/2014 Cbr 600rr Rear Seat Cowl.md deleted file mode 100644 index 8e68dc7f3b9c6a8a9bc562339e986c7cef8dd765..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/2014 Cbr 600rr Rear Seat Cowl.md +++ /dev/null @@ -1,37 +0,0 @@ - -

      How to Install a Rear Seat Cowl on Your 2014 CBR 600RR

      -

      If you want to give your 2014 CBR 600RR a sleeker and sportier look, you might consider installing a rear seat cowl. A rear seat cowl is a cover that replaces the passenger seat and matches the color and style of your bike. It can also provide some extra storage space for your belongings. Installing a rear seat cowl is not very difficult and can be done in a few steps.

      -

      2014 cbr 600rr rear seat cowl


      Download Ziphttps://urlgoal.com/2uIbZr



      -
        -
      1. Remove the passenger seat by unlocking it with the ignition key and sliding it backward.
      2. -
      3. Place the rear seat cowl over the seat area and align it with the mounting holes.
      4. -
      5. Secure the rear seat cowl with the bolts and washers that came with it. You may need to use a hex wrench or a screwdriver depending on the type of bolts.
      6. -
      7. Tighten the bolts firmly but do not over-tighten them as this may damage the cowl or the bike.
      8. -
      9. Check that the rear seat cowl is properly fitted and does not move or rattle.
      10. -
      -

      Congratulations! You have successfully installed a rear seat cowl on your 2014 CBR 600RR. You can now enjoy the improved appearance and functionality of your bike. If you ever need to remove the rear seat cowl, simply reverse the steps above.

      -

      If you are looking for a high-quality rear seat cowl for your 2014 CBR 600RR, you can check out some of the options available on eBay. Here are some links to help you find what you need:

      - - -

      There are many benefits of installing a rear seat cowl on your 2014 CBR 600RR. Some of them are:

      -

      -
        -
      • It enhances the aerodynamics and performance of your bike by reducing drag and weight.
      • -
      • It protects the rear part of your bike from dirt, debris, and weather elements.
      • -
      • It adds some extra security and privacy to your belongings by hiding them from view.
      • -
      • It makes your bike look more stylish and unique by matching the color and design of your bike.
      • -
      -

      However, there are also some drawbacks of installing a rear seat cowl on your 2014 CBR 600RR. Some of them are:

      -
        -
      • It eliminates the possibility of carrying a passenger on your bike.
      • -
      • It may not fit perfectly on your bike depending on the brand and model of the rear seat cowl.
      • -
      • It may require some modifications or adjustments to your bike such as drilling holes or cutting wires.
      • -
      • It may void the warranty or insurance of your bike if it is not approved by the manufacturer or the dealer.
      • -
      -

      Therefore, you should weigh the pros and cons of installing a rear seat cowl on your 2014 CBR 600RR before making a decision. You should also consult with your mechanic or dealer if you have any questions or concerns about the installation process or the compatibility of the rear seat cowl with your bike.

      7b8c122e87
      -
      -
      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Aazaan Full Movie 720p Download __TOP__.md b/spaces/stomexserde/gpt4-ui/Examples/Aazaan Full Movie 720p Download __TOP__.md deleted file mode 100644 index 2254eec00271ad959fc80bd1b18c2a341205f3b8..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Aazaan Full Movie 720p Download __TOP__.md +++ /dev/null @@ -1,47 +0,0 @@ - -

      Aazaan: A Thrilling Spy Action Movie You Don't Want to Miss

      -

      Aazaan is a 2011 Indian spy action thriller film directed by Prashant Chadha and starring Sachiin J Joshi and Candice Boucher. The film follows Aazaan Khan, a young army officer who goes undercover as a human weapon to stop a deadly biological attack on India.

      -

      The film was praised for its cinematography, action sequences, and music, but criticized for its weak script and direction. The film was also one of the most expensive Bollywood films at the time of its release, with a budget of over ₹40 crore.

      -

      Aazaan full movie 720p download


      Download Zip ○○○ https://urlgoal.com/2uI8sD



      -

      If you are looking for a fast-paced and exciting movie that will keep you on the edge of your seat, you can watch Aazaan online or download it in high quality 720p resolution. Here are some of the best websites where you can find Aazaan full movie 720p download:

      -
        -
      • HDHub4u: This website offers a wide range of Bollywood and Hollywood movies, web series, and TV shows in Hindi and English dual audio. You can download Aazaan full movie 720p from this website for free[^1^].
      • -
      • isaimini: This website is a popular source for Tamil movies and dubbed movies in various languages. You can find Aazaan full movie 720p download in Tamil dubbed version on this website[^2^].
      • -
      • Google Drive: This is a cloud storage service that allows you to upload and share files online. You can access Aazaan full movie 720p download from this Google Drive link shared by a user[^3^].
      • -
      • SoundCloud: This is an online audio platform that lets you listen to music and podcasts. You can also stream Aazaan full movie 720p download from this SoundCloud link uploaded by a user[^4^].
      • -
      • Microsoft Sway: This is an online presentation tool that lets you create and share interactive stories. You can view Aazaan full movie 720p download from this Microsoft Sway link created by a user[^5^].
      • -
      -

      Disclaimer: The websites mentioned above are not affiliated with or endorsed by the makers or distributors of Aazaan. Downloading or streaming movies from unauthorized sources may be illegal and may expose you to viruses or malware. Please exercise caution and use your own discretion while accessing these websites.

      - -

      So, what do the critics and audiences think about Aazaan? Is it worth watching or not? Here are some of the reviews and ratings that the film received from various sources:

      - - - - - - - - - - - - - - - - - - - - - - - - - - -
| Source | Rating | Review |
| --- | --- | --- |
| IMDb | 4.4/10 | The film has mixed reviews from the users of IMDb, who appreciate its technical aspects but criticize its weak plot and direction. One user writes, "Watching a project such as AZAAN makes me sad because it is a perfect example of a misguided venture backed by a strong resourceful will of making a good film."[^1^] |
| Times of India | 3/5 | The film gets an average rating from the Times of India, which praises its action sequences and music but finds its story and performances lacking. The review states, "It's the good Muslim versus the bad Muslim thesis which the director tries to showcase in the film through the characters of Aazaan, the diehard patriot, and his brother Amaan, a terrorist."[^2^] |
| Filmibeat | 3.5/5 | The film gets a positive review from Filmibeat, which calls it "the best action movie shot ever in India" and "an espionage thriller". The review adds, "The movie surely lives upto their expectations and it offers something extra-ordinary in the action genre."[^3^] |
| Koimoi | 1/5 | The film gets a negative review from Koimoi, which predicts that it will not connect with the Indian audiences and will fail at the box-office. The review says, "Aazaan may be a well-mounted and well-shot film but it won’t make any mark at the box-office because it will not connect with the Indian audiences, thanks to its confusing screenplay."[^4^] |
      -

      As you can see, Aazaan has received mixed to negative responses from both critics and viewers. The film did not do well commercially either and was declared a flop. However, if you are a fan of action thrillers and want to see some stunning visuals and stunts, you can give Aazaan a try. But don't expect too much from the story or the acting.

      -

      e93f5a0c3f
      -
      -
      \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Fast And Furious 8 (English) Free Download Full Movie Mp4 WORK.md b/spaces/stomexserde/gpt4-ui/Examples/Fast And Furious 8 (English) Free Download Full Movie Mp4 WORK.md deleted file mode 100644 index eb87b7f3d99dbc029bee044235868b66ff0466a3..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Fast And Furious 8 (English) Free Download Full Movie Mp4 WORK.md +++ /dev/null @@ -1,21 +0,0 @@ - -

      How to Watch Fast And Furious 8 (English) Online for Free

      -

      If you are a fan of action-packed movies with fast cars and thrilling stunts, you might be interested in watching Fast And Furious 8, also known as The Fate of the Furious. This is the eighth installment of the popular franchise that stars Vin Diesel, Dwayne Johnson, Jason Statham, Michelle Rodriguez, Tyrese Gibson, Ludacris, Charlize Theron, and more. The plot revolves around Dominic Toretto (Diesel), who is coerced by a mysterious hacker named Cipher (Theron) to betray his team and work for her. His friends must join forces with a former enemy, Deckard Shaw (Statham), to stop him and save the world from Cipher's plans.

      -

      Fast And Furious 8 (English) Free Download Full Movie Mp4


      DOWNLOAD 🆗 https://urlgoal.com/2uI8Gs



      -

      But how can you watch Fast And Furious 8 online for free? There are many websites that claim to offer free downloads or streaming of the movie, but most of them are either illegal, unsafe, or low-quality. You don't want to risk getting viruses, malware, or legal troubles by using these sites. You also don't want to waste your time and bandwidth on a movie that has poor audio and video quality.

      -

      Fortunately, there is a way to watch Fast And Furious 8 online for free legally and safely. You can use the Internet Archive, a non-profit digital library that offers free access to millions of books, movies, music, and more. The Internet Archive has several copies of Fast And Furious 8 in different languages and formats that you can download or stream without any cost or registration. Here are some of the options available:

      -
        -
      • Fast And Furious 8: This is an English version of the movie with a resolution of 720p and a file size of 1.2 GB. It has a rating of 4.5 stars out of 5 from 60,884 views. You can download it as an MP4 file or stream it online using the Internet Archive's player.
      • -
      • The. Fate.of.the. Furious. 2017. V 2.1080p. WEB DL. H 264. AC 3 EVO: This is another English version of the movie with a higher resolution of 1080p and a file size of 4.9 GB. It has a rating of 3 stars out of 5 from 6,789 views. You can download it as an MKV file or stream it online using the Internet Archive's player.
      • -
      • Fast And Furious 8 2017 Blu Ray Hindi English 720p Mkv Cinemas: This is a dual audio version of the movie with both Hindi and English languages available. It has a resolution of 720p and a file size of 1 GB. It has a rating of 4 stars out of 5 from 2,345 views. You can download it as an MKV file or stream it online using the Internet Archive's player.
      • -
      -

      To watch Fast And Furious 8 online for free using the Internet Archive, you just need to follow these simple steps:

      -
        -
      1. Choose one of the links above that suits your preferences.
      2. -
      3. Click on the link to open the Internet Archive page for that movie.
      4. -
      5. On the right side of the page, you will see a box that says "Download Options". Here you can choose to download the movie as an MP4 or MKV file to your device, or stream it online using the Internet Archive's player.
      6. -
      7. If you choose to download the movie, you will need to have enough storage space on your device and a stable internet connection. The download speed may vary depending on your network and the traffic on the site.
      8. -
9. If you choose to stream the movie online, you will need to have a good internet connection and a compatible browser. The streaming quality may depend on your bandwidth.

        -

        e93f5a0c3f
        -
        -
        \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Ioncube Php Encoder 7 Nulled 23 !NEW!.md b/spaces/stomexserde/gpt4-ui/Examples/Ioncube Php Encoder 7 Nulled 23 !NEW!.md deleted file mode 100644 index 1671abdbe77d534892ad34cedd4ef98ff84c2c4f..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Ioncube Php Encoder 7 Nulled 23 !NEW!.md +++ /dev/null @@ -1,19 +0,0 @@ - -

        How to Use ionCube PHP Encoder 7 Nulled 23 to Protect Your PHP Scripts

        -

        If you are a PHP developer, you may want to protect your PHP scripts from unauthorized use, modification, or distribution. One way to do that is to use ionCube PHP Encoder 7 Nulled 23, a tool that allows you to encode your PHP files with powerful encryption and security features.

        -

        ioncube php encoder 7 nulled 23


        Download File ☆☆☆☆☆ https://urlgoal.com/2uI9CP



        -

        ionCube PHP Encoder 7 Nulled 23 is a cracked version of the original ionCube PHP Encoder, which is a widely used tool that supports PHP syntax up to version 8.1[^2^]. The nulled version allows you to use the encoder without paying for a license or activating it online. However, using nulled software may expose you to legal risks, malware infections, or compatibility issues.

        -

        To use ionCube PHP Encoder 7 Nulled 23, you need to download the software from a reliable source[^3^] and install it on your computer. You also need to install the ionCube Loader on your web server, which is a component that enables PHP to run ionCube-encoded files[^1^]. The ionCube Loader is available for many different platforms and can be installed by adding a single line to the php.ini file.
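
For illustration only, on a 64-bit Linux server running PHP 7.4 that single line would look something like the sketch below. The install path and file name here are assumptions, not the exact values for your system; use the loader build that matches your own platform and PHP version, then restart the web server or PHP-FPM so the change takes effect.

zend_extension = /usr/local/ioncube/ioncube_loader_lin_7.4.so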

        -

        Once you have installed the encoder and the loader, you can start encoding your PHP scripts. You can either use the graphical user interface (GUI) or the command-line encoder to select the files you want to encode, choose the encoding options, and generate the encoded files. Some of the encoding options include time expiry, IP restriction, domain restriction, license file generation, and obfuscation.

        -

        By encoding your PHP scripts with ionCube PHP Encoder 7 Nulled 23, you can protect them from being tampered with or stolen by unauthorized users. However, you should be aware of the potential drawbacks of using nulled software, such as legal liability, lack of support, malware infection, or performance degradation. Therefore, it is recommended to use the official version of ionCube PHP Encoder if you want to enjoy its full benefits and features.

        -

        What are the Benefits of Using ionCube PHP Encoder 7 Nulled 23?

        -

        Using ionCube PHP Encoder 7 Nulled 23 can offer some benefits for PHP developers who want to protect and license their PHP scripts. Some of the benefits are:

        -
          -
        • Encryption and obfuscation: ionCube PHP Encoder 7 Nulled 23 can convert your PHP source code into bytecode, which is harder to read and modify than plain text. It can also obfuscate your code by renaming variables, functions, and classes, making it more difficult to understand and reverse-engineer.
        • -
        • Dynamic Keys: ionCube PHP Encoder 7 Nulled 23 can use a unique feature called Dynamic Keys, which are algorithmic keys that are not stored anywhere and are generated at runtime. This adds an extra layer of security for your encoded files, as they cannot be decrypted without the correct key.
        • -
        • Licensing features: ionCube PHP Encoder 7 Nulled 23 can create license files that control where and for how long your encoded files can be used. You can set expiration dates, IP restrictions, domain restrictions, and other parameters to prevent unauthorized use of your products. You can also generate license files on the fly using a web service or a script.
        • -
        • Compatibility and performance: ionCube PHP Encoder 7 Nulled 23 supports PHP syntax up to version 8.1[^2^], and encoded files can run on systems with PHP 8.1 or earlier[^1^]. The encoded files are also optimized for performance and can run faster than plain PHP files in some cases.
        • -
        -

        These are some of the benefits of using ionCube PHP Encoder 7 Nulled 23 to protect and license your PHP scripts. However, you should also be aware of the potential drawbacks of using nulled software, such as legal liability, lack of support, malware infection, or performance degradation. Therefore, it is recommended to use the official version of ionCube PHP Encoder if you want to enjoy its full benefits and features.

        7196e7f11a
        -
        -
        \ No newline at end of file diff --git a/spaces/subhc/Guess-What-Moves/mask_former/utils/misc.py b/spaces/subhc/Guess-What-Moves/mask_former/utils/misc.py deleted file mode 100644 index 874d9805b482f52bbffc1be620e36e0cffc07c46..0000000000000000000000000000000000000000 --- a/spaces/subhc/Guess-What-Moves/mask_former/utils/misc.py +++ /dev/null @@ -1,111 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Bowen Cheng from https://github.com/facebookresearch/detr/blob/master/util/misc.py -""" -Misc functions, including distributed helpers. - -Mostly copy-paste from torchvision references. -""" -from typing import List, Optional - -import torch -import torch.distributed as dist -import torchvision -from torch import Tensor - - -def _max_by_axis(the_list): - # type: (List[List[int]]) -> List[int] - maxes = the_list[0] - for sublist in the_list[1:]: - for index, item in enumerate(sublist): - maxes[index] = max(maxes[index], item) - return maxes - - -class NestedTensor(object): - def __init__(self, tensors, mask: Optional[Tensor]): - self.tensors = tensors - self.mask = mask - - def to(self, device): - # type: (Device) -> NestedTensor # noqa - cast_tensor = self.tensors.to(device) - mask = self.mask - if mask is not None: - assert mask is not None - cast_mask = mask.to(device) - else: - cast_mask = None - return NestedTensor(cast_tensor, cast_mask) - - def decompose(self): - return self.tensors, self.mask - - def __repr__(self): - return str(self.tensors) - - -def nested_tensor_from_tensor_list(tensor_list: List[Tensor]): - # TODO make this more general - if tensor_list[0].ndim == 3: - if torchvision._is_tracing(): - # nested_tensor_from_tensor_list() does not export well to ONNX - # call _onnx_nested_tensor_from_tensor_list() instead - return _onnx_nested_tensor_from_tensor_list(tensor_list) - - # TODO make it support different-sized images - max_size = _max_by_axis([list(img.shape) for img in tensor_list]) - # min_size = tuple(min(s) for s in zip(*[img.shape for img in tensor_list])) - batch_shape = [len(tensor_list)] + max_size - b, c, h, w = batch_shape - dtype = tensor_list[0].dtype - device = tensor_list[0].device - tensor = torch.zeros(batch_shape, dtype=dtype, device=device) - mask = torch.ones((b, h, w), dtype=torch.bool, device=device) - for img, pad_img, m in zip(tensor_list, tensor, mask): - pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img) - m[: img.shape[1], : img.shape[2]] = False - else: - raise ValueError("not supported") - return NestedTensor(tensor, mask) - - -# _onnx_nested_tensor_from_tensor_list() is an implementation of -# nested_tensor_from_tensor_list() that is supported by ONNX tracing. 
-@torch.jit.unused -def _onnx_nested_tensor_from_tensor_list(tensor_list: List[Tensor]) -> NestedTensor: - max_size = [] - for i in range(tensor_list[0].dim()): - max_size_i = torch.max( - torch.stack([img.shape[i] for img in tensor_list]).to(torch.float32) - ).to(torch.int64) - max_size.append(max_size_i) - max_size = tuple(max_size) - - # work around for - # pad_img[: img.shape[0], : img.shape[1], : img.shape[2]].copy_(img) - # m[: img.shape[1], :img.shape[2]] = False - # which is not yet supported in onnx - padded_imgs = [] - padded_masks = [] - for img in tensor_list: - padding = [(s1 - s2) for s1, s2 in zip(max_size, tuple(img.shape))] - padded_img = torch.nn.functional.pad(img, (0, padding[2], 0, padding[1], 0, padding[0])) - padded_imgs.append(padded_img) - - m = torch.zeros_like(img[0], dtype=torch.int, device=img.device) - padded_mask = torch.nn.functional.pad(m, (0, padding[2], 0, padding[1]), "constant", 1) - padded_masks.append(padded_mask.to(torch.bool)) - - tensor = torch.stack(padded_imgs) - mask = torch.stack(padded_masks) - - return NestedTensor(tensor, mask=mask) - - -def is_dist_avail_and_initialized(): - if not dist.is_available(): - return False - if not dist.is_initialized(): - return False - return True diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Crack Gamehouse Games Collection Torrent !!TOP!!.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Crack Gamehouse Games Collection Torrent !!TOP!!.md deleted file mode 100644 index 0c9f7ecc458ff325458c4808fae1944f1d9069e2..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Crack Gamehouse Games Collection Torrent !!TOP!!.md +++ /dev/null @@ -1,64 +0,0 @@ -

        Crack Gamehouse Games Collection Torrent


        Download Ziphttps://cinurl.com/2uEXFs



        - -Get your revenge in the modern word game The Longest Word. Read through over 200,000 of the longest and shortest words ever created in the game Pen Name!. Add a new word game twist to all this action with a race to create the longest line in the game. - -Super Wild Wild Words - -The action is fast and furious in this word game featuring over 450 words. Pick and guess at the letters to see if you can make them all come together. From political to religious, wild to unusual, this game brings out your inner word detective. - -Hangman Corral - -The wildest hangman game in history! Play as two different characters. Each with different starting letters. You will be locked in a heated battle to guess the correct letters. The hangman will hang you if you get all the letters wrong! - -Pen Name! - -When the letters are mixed up it's time to check your spelling. This game has over 200,000 words and you can check as many spelling errors as you can find! A fun and challenging word game. - -Download Super Wild Wild Words and other games for iPhone, iPad and iPod Touch. - -Please be sure to take a look at the press-kit and screenshots included with this app. - -Show More... - -What's New - -7.3- In this new update we have made a great number of small improvements and bug fixes. We hope you will enjoy the new and improved game! - -7.3.1- A couple of bug fixes to be more specific.Q: - -findOneAndDelete() works locally, but not on Heroku - -Here is my model - -public class Data { - - @Id - - @GeneratedValue(strategy = GenerationType.IDENTITY) - - private long id; - - @Column - - private String someId; - - public Data() - - - - public Data(long id, String someId) { - - this.id = id; - - this.someId = someId; - - public long getId() { - - return id; - - public String getSomeId() { - - return someId; 4fefd39f24
        -
        -
        -

        diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/HACK Waves All Plugins Bundle 10 R16 Windows Fixed Crack R2R LINK.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/HACK Waves All Plugins Bundle 10 R16 Windows Fixed Crack R2R LINK.md deleted file mode 100644 index a6c779e6dd09c151e643f85aa90ae961dcc5744e..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/HACK Waves All Plugins Bundle 10 R16 Windows Fixed Crack R2R LINK.md +++ /dev/null @@ -1,101 +0,0 @@ - -

        Waves All Plugins Bundle 10 R16 Windows Fixed Crack R2R: A Complete Guide

        - -

        If you are looking for a way to get the best audio plugins for your music production, you might be interested in Waves All Plugins Bundle 10 R16 Windows Fixed Crack R2R. This is a collection of over 200 plugins from Waves, one of the leading companies in the audio industry. Waves plugins are used by professional musicians, producers, engineers, and sound designers all over the world. They offer a wide range of effects, processors, instruments, and utilities that can enhance your sound quality, creativity, and workflow.

        - -

        However, Waves plugins are not cheap. The full bundle costs over $3000, which is not affordable for many people. That's why some people look for ways to get them for free or at a lower price. One of the methods is to use a crack, which is a software tool that bypasses the copy protection and allows you to use the plugins without paying for them.

        -

        HACK Waves All Plugins Bundle 10 R16 Windows Fixed Crack R2R


        Download ———>>> https://cinurl.com/2uEXKm



        - -

        But cracking Waves plugins is not easy. Waves uses a complex system of encryption and authorization that prevents unauthorized use of their products. You need to have a special license file that matches your computer ID and the plugins you want to use. If you don't have a valid license file, the plugins will not work or will show an error message.

        - -

        That's where Waves All Plugins Bundle 10 R16 Windows Fixed Crack R2R comes in. This is a crack that claims to fix the license issue and allow you to use all the plugins from Waves 10 R16 bundle on your Windows computer. It was released by a group called R2R, which is known for cracking audio software. The crack consists of two files: a modified version of Waves Central, which is the official installer and manager of Waves products, and a keygen, which is a program that generates license files.

        - -

        How to use Waves All Plugins Bundle 10 R16 Windows Fixed Crack R2R?

        - -

        If you want to try this crack, you need to follow these steps:

        - -
          -
        1. Download the crack files from one of the links provided by R2R. You can find them on various websites or forums that share audio software cracks. Be careful not to download fake or malicious files that can harm your computer.
        2. -
        3. Extract the files using a program like WinRAR or 7-Zip. You should get two folders: one called Waves Central v11.0.55 (x64) (Fixed) and another called Keygen.
        4. -
        5. Run the Waves Central v11.0.55 (x64) (Fixed) file as administrator. This will launch the modified version of Waves Central that will install the plugins on your computer.
        6. -
        7. Select Offline Installer from the menu and click on Browse under Select offline installer folder.
        8. -
        9. Navigate to the folder where you extracted the crack files and select the folder called Installers inside it. Click on Open.
        10. -
        11. Select all the plugins that you want to install from the list and click on Install.
        12. -
        13. Wait for the installation process to finish. It may take some time depending on your computer speed and the number of plugins you selected.
        14. -
        15. Close Waves Central and run the Keygen file as administrator. This will launch the program that will generate license files for your plugins.
        16. -
        17. Select Waves 10 Complete Bundle from the Product drop-down menu and click on Generate Licenses.
        18. -
        19. The program will create a folder called Licenses on your desktop. Inside it, you will find several files with .wlc extension. These are your license files.
        20. -
        21. Copy all the .wlc files and paste them into C:\ProgramData\Waves Audio\Licenses folder. If you don't see this folder, you may need to enable hidden folders in your Windows settings.
        22. -
        23. Restart your computer and launch your DAW (Digital Audio Workstation). You should be able to use all the plugins from Waves 10 R16 bundle without any problem.
        24. -
        - -

        What are the benefits of using Waves All Plugins Bundle 10 R16 Windows Fixed Crack R2R?

        - -

        By using this crack, you can enjoy these benefits:

        - -
          -
        • You can save money by getting all the plugins from Waves 10 R16 bundle for free instead of paying over $3000 for them.
        • -
        • You can access all the features and functions of the plugins without any limitation or restriction.
        • -
        • You can improve your audio production skills and creativity by using some of the best plugins in the market.
        • -
        • You can mix and master your music with professional quality and results.
        • -
        - -

        What are the risks of using Waves All Plugins Bundle 10 R16 Windows Fixed Crack R2R?

        - -

        However, using this crack also involves some risks:

        - -
          -
        • You may violate the intellectual property rights of Waves and face legal consequences if you get caught.
        • -
        • You may expose your computer to viruses or malware that can damage your system or steal your personal information.
        • -
        • You may encounter compatibility issues or errors with some plugins or DAWs that may affect your audio production workflow.
        • -
        • You may miss out on updates or support from Waves that can fix bugs or improve performance of their products.
        • -
        - -

        Conclusion

        - -

Waves All Plugins Bundle 10 R16 Windows Fixed Crack R2R is a crack that allows you to use all the plugins from Waves 10 R16 bundle on your Windows computer for free. It consists of a modified version of Waves Central and a keygen that generates license files for your plugins. To use it, you need to download the crack files, install the plugins using Waves Central, generate licenses using keygen, and copy them into your licenses folder. By doing so, you can enjoy all the benefits of using some of the best plugins in the market.

        -

        - -

        What are the features of Waves All Plugins Bundle 10 R16 Windows Fixed Crack R2R?

        - -

        Waves All Plugins Bundle 10 R16 Windows Fixed Crack R2R includes over 200 plugins from Waves, covering various categories and purposes. Here are some of the highlights:

        - -
          -
        • Waves Abbey Road Saturator: A saturator with a crossover that models tube and solid state distortion. Waves made this plugin by recreating Abbey Road’s saturation chain, which was used on many classic records. You can use it to add warmth, grit, and character to your tracks.
        • -
        • Waves BSS DPR-402 Compressor: A versatile compressor, limiter, de-esser, and gate that can handle any dynamic processing task. Waves modeled this plugin after the legendary hardware unit from BSS Audio, which was widely used in studios and live sound. You can use it to control transients, smooth out vocals, tame sibilance, and more.
        • -
        • Waves Element: A powerful synthesizer that combines analog-style oscillators, filters, and envelopes with digital wavetables, modulation matrix, and effects. Waves designed this plugin to be easy to use yet capable of producing complex sounds. You can use it to create basses, leads, pads, plucks, and more.
        • -
        • Waves F6 Floating Band Dynamic EQ: A dynamic equalizer that can adjust its frequency response according to the input signal. Waves created this plugin to offer more flexibility and precision than a standard EQ. You can use it to shape your tone, balance your mix, remove resonances, and more.
        • -
        • Waves J37 Tape: A tape emulation that recreates the sound and behavior of four different tape machines from Abbey Road Studios. Waves collaborated with Abbey Road to capture the sonic characteristics of each machine, including wow and flutter, saturation, noise, and bias. You can use it to add warmth, depth, and glue to your tracks.
        • -
        • Waves MaxxVolume: A volume leveler that can increase the loudness of your tracks without compromising the dynamics. Waves developed this plugin to offer a simple and effective solution for leveling vocals, instruments, and mixes. You can use it to boost quiet parts, tame loud parts, and enhance the overall clarity and presence of your tracks.
        • -
        • Waves Scheps Omni Channel: A channel strip that combines several modules from renowned engineer Andrew Scheps. Waves collaborated with Scheps to create a plugin that reflects his mixing philosophy and workflow. You can use it to shape your sound with preamp saturation, EQ, compression, de-esser, gate/expander, and effects.
        • -
        • Waves Submarine: A subharmonic generator that can add low-end weight and power to your tracks. Waves designed this plugin to analyze the pitch and harmonic content of your input signal and generate two subharmonic octaves below it. You can use it to enhance basses, kicks, synths, and more.
        • -
        • Waves Vitamin Sonic Enhancer: A multiband harmonic enhancer that can add richness and brightness to your tracks. Waves created this plugin to offer a quick and easy way to improve the tone and clarity of your tracks. You can use it to boost the lows, mids, highs, or stereo width of your tracks.
        • -
        • Waves Z-Noise: A noise reduction plugin that can remove unwanted hiss, hum, buzz, and other noises from your tracks. Waves developed this plugin to offer a high-quality and flexible solution for noise removal. You can use it to clean up vocals, guitars, drums, or any other source with noise issues.
        • -
        - -

        Conclusion

        - -

Waves All Plugins Bundle 10 R16 Windows Fixed Crack R2R is a crack that allows you to use all the plugins from Waves 10 R16 bundle on your Windows computer for free. It consists of a modified version of Waves Central and a keygen that generates license files for your plugins. To use it, you need to download the crack files, install the plugins using Waves Central, generate licenses using keygen, and copy them into your licenses folder. By doing so, you can enjoy all the benefits of using some of the best plugins in the market.

        - -

        How to get the most out of Waves All Plugins Bundle 10 R16 Windows Fixed Crack R2R?

        - -

        Using Waves All Plugins Bundle 10 R16 Windows Fixed Crack R2R can give you access to a huge collection of plugins that can improve your audio production quality and efficiency. However, to get the most out of them, you need to know how to use them properly and creatively. Here are some tips and tricks that can help you:

        - -
          -
        • Learn from the pros: Waves plugins are used by many famous producers and engineers who share their tips and tricks on how to use them on various websites and videos. You can check out Waves' official website for tutorials, webinars, articles, and interviews with industry experts. You can also watch YouTube videos from channels like Waves Audio, Mix With The Masters, Produce Like A Pro, and more.
        • -
        • Use presets as a starting point: Waves plugins come with many presets that can give you a quick idea of what they can do and how they sound. You can use these presets as a starting point for your own settings and tweak them according to your needs and preferences. You can also save your own presets for future use or share them with others.
        • -
        • Experiment with different plugins: Waves plugins cover a wide range of categories and purposes, from EQs and compressors to reverbs and delays to synths and samplers. You can experiment with different plugins on different tracks and see how they affect your sound. You can also combine multiple plugins in different orders and chains to create new effects and sounds.
        • -
        • Use automation and modulation: Waves plugins offer many parameters that you can automate or modulate to create dynamic changes and movements in your sound. You can use your DAW's automation features or Waves' own modulation matrix to control parameters such as gain, frequency, pan, wet/dry mix, etc. You can also use external controllers or MIDI devices to control these parameters in real time.
        • -
        • Be careful with crack issues: While using Waves All Plugins Bundle 10 R16 Windows Fixed Crack R2R can save you money and give you access to all the plugins, it also comes with some risks and drawbacks. You may encounter compatibility issues or errors with some plugins or DAWs that may affect your audio production workflow. You may also miss out on updates or support from Waves that can fix bugs or improve performance of their products. You may also violate the intellectual property rights of Waves and face legal consequences if you get caught.
        • -
        - -

        Conclusion

        - -

Waves All Plugins Bundle 10 R16 Windows Fixed Crack R2R is a crack that allows you to use all the plugins from Waves 10 R16 bundle on your Windows computer for free. It consists of a modified version of Waves Central and a keygen that generates license files for your plugins. To use it, you need to download the crack files, install the plugins using Waves Central, generate licenses using keygen, and copy them into your licenses folder. By doing so, you can enjoy all the benefits of using some of the best plugins in the market. However, to get the most out of them, you need to know how to use them properly and creatively. You also need to be aware of the risks and drawbacks of using a crack instead of buying the plugins legally. We hope this article has given you some useful information and tips on how to use Waves All Plugins Bundle 10 R16 Windows Fixed Crack R2R for your audio production needs.

        - -

        Thank you for reading and happy mixing!

        3cee63e6c2
        -
        -
        \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/JetBrains PyCharm Professional 2019.3.3 Crack [New].md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/JetBrains PyCharm Professional 2019.3.3 Crack [New].md deleted file mode 100644 index a41a5d308c06c98187792548643b588575ace1c2..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/JetBrains PyCharm Professional 2019.3.3 Crack [New].md +++ /dev/null @@ -1,9 +0,0 @@ - -

JetBrains WebStorm is an IDE for developing web and mobile applications. It has a visual code editor for HTML, CSS, and JavaScript. JetBrains also offers cross-platform IDEs for the .NET/Mono family of languages such as C#, so across its tools you can develop ASP.NET, ASP, classic ASP, PHP, Python, C/C++, Perl, Go, JavaScript, C#, Java, and more.

        -

        JetBrains PyCharm Professional 2019.3.3 Crack [New]


        DOWNLOAD 🔗 https://cinurl.com/2uEYpB



        -

You can use the IDE to jump back and forth between the source code and the compiled binary to locate errors. PyCharm is designed for anyone who wants to code and can be used with a wide variety of programming languages; it also helps with debugging. It is an application for developers that helps them easily write, run, and debug code, and it supports all the major languages in common use, like Python, Java, C++, C#, PHP, JavaScript, Ruby, Go, Swift, and many more.

        -

PyCharm 2019 is a good source code editor for Python developers. It lets you run and debug code, test the results, generate documentation for your classes, and test all aspects of your project in a single IDE. PyCharm is an integrated development environment (IDE) and application development platform for the Python programming language.

        -

To install the latest version, download the file jetbrains-pycharm-community-2019.3.3-full.war from this site. After downloading the file, run the pycharm.py script in the pycharm-setup.py folder. PyCharm will then be installed on your system. Once it has been installed, you can use the program to create and edit Python code.

        -

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/The Binding Of Isaac Afterbirth Mods No Steam.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/The Binding Of Isaac Afterbirth Mods No Steam.md deleted file mode 100644 index 0ec03f0203b97d3b6332cbc97f67ee85c78acb65..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/The Binding Of Isaac Afterbirth Mods No Steam.md +++ /dev/null @@ -1,14 +0,0 @@ -

        the binding of isaac afterbirth mods no steam


        DOWNLOAD ✺✺✺ https://cinurl.com/2uEYn2



        - -May 14, 2021 - Ghost enemies spawn glitch items no longer deal instant damage, ... right-click The Binding of Isaac: Rebirth in your Steam library, ... This guide talks about all of the items in the course of ... -The Binding of Isaac: Rebirth The Binding of Isaac. -SteamDB.ru | Game guides -13 May 2017 ... -The Binding of Isaac: Rebirth, which you can download through a torrent on our website, ... PC in the latest installment of the game series The Binding of ... -In The Binding of Isaac: Rebirth, you will find a new ... -To figure out what it is, you have to open your inventory. ... -What to do with the glitch in The Binding of Isaac ... -All about the game The Binding of 8a78ff9644
        -
        -
        -

        diff --git a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Age Of Empires 2 Full [UPDATED] Indir Gezginler.md b/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Age Of Empires 2 Full [UPDATED] Indir Gezginler.md deleted file mode 100644 index ce79a1f46f7ac1ae5e01c180bd2e659610979d1b..0000000000000000000000000000000000000000 --- a/spaces/surmensipa/VITS-Umamusume-voice-synthesizer/logs/Age Of Empires 2 Full [UPDATED] Indir Gezginler.md +++ /dev/null @@ -1,14 +0,0 @@ -

        age of empires 2 full indir gezginler


        Download File ✺✺✺ https://urluss.com/2uCF5W



        -
        -Jan 12, 2022 - Age of empires 2 full indir hzl umut m tr dublaj tek link indir karaoke indir indir indir gezginler son sürüm windows indir sihirbazlar. Avei pagina. -Zona Muzika: Las micas doradas que dicen que se ve el futuro en los siguientes momentos. -Crea las imagenes y los mejores de la cita. -Nuestro disco is bastante lindo, son mis mas nerviosos las noticias. -Muy a la vez, seleccionamos de los. -Avei pagina. -Zona Muzika: The witcher 3 wild hunt dublado zona muzika: The witcher 3 wild hunt zona muzika: The witcher 3 wild hunt dublado indir. -Pagina. -Zona Muzika: The witcher 3 wild hunt dublaglara zona muzika: The witcher 3 8a78ff9644
        -
        -
        -

        diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/midas/midas/__init__.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/midas/midas/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/taesiri/ChatGPT-ImageCaptioner/detic/evaluation/custom_coco_eval.py b/spaces/taesiri/ChatGPT-ImageCaptioner/detic/evaluation/custom_coco_eval.py deleted file mode 100644 index 2ea1d5e5703a9922028178fbe87b2518a9f66683..0000000000000000000000000000000000000000 --- a/spaces/taesiri/ChatGPT-ImageCaptioner/detic/evaluation/custom_coco_eval.py +++ /dev/null @@ -1,124 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import contextlib -import copy -import io -import itertools -import json -import logging -import numpy as np -import os -import pickle -from collections import OrderedDict -import pycocotools.mask as mask_util -import torch -from pycocotools.coco import COCO -from pycocotools.cocoeval import COCOeval -from tabulate import tabulate - -import detectron2.utils.comm as comm -from detectron2.config import CfgNode -from detectron2.data import MetadataCatalog -from detectron2.data.datasets.coco import convert_to_coco_json -from detectron2.evaluation.coco_evaluation import COCOEvaluator -from detectron2.structures import Boxes, BoxMode, pairwise_iou -from detectron2.utils.file_io import PathManager -from detectron2.utils.logger import create_small_table -from ..data.datasets.coco_zeroshot import categories_seen, categories_unseen - -class CustomCOCOEvaluator(COCOEvaluator): - def _derive_coco_results(self, coco_eval, iou_type, class_names=None): - """ - Additionally plot mAP for 'seen classes' and 'unseen classes' - """ - - metrics = { - "bbox": ["AP", "AP50", "AP75", "APs", "APm", "APl"], - "segm": ["AP", "AP50", "AP75", "APs", "APm", "APl"], - "keypoints": ["AP", "AP50", "AP75", "APm", "APl"], - }[iou_type] - - if coco_eval is None: - self._logger.warn("No predictions from the model!") - return {metric: float("nan") for metric in metrics} - - # the standard metrics - results = { - metric: float(coco_eval.stats[idx] * 100 if coco_eval.stats[idx] >= 0 else "nan") - for idx, metric in enumerate(metrics) - } - self._logger.info( - "Evaluation results for {}: \n".format(iou_type) + create_small_table(results) - ) - if not np.isfinite(sum(results.values())): - self._logger.info("Some metrics cannot be computed and is shown as NaN.") - - if class_names is None or len(class_names) <= 1: - return results - # Compute per-category AP - # from https://github.com/facebookresearch/Detectron/blob/a6a835f5b8208c45d0dce217ce9bbda915f44df7/detectron/datasets/json_dataset_evaluator.py#L222-L252 # noqa - precisions = coco_eval.eval["precision"] - # precision has dims (iou, recall, cls, area range, max dets) - assert len(class_names) == precisions.shape[2] - - seen_names = set([x['name'] for x in categories_seen]) - unseen_names = set([x['name'] for x in categories_unseen]) - results_per_category = [] - results_per_category50 = [] - results_per_category50_seen = [] - results_per_category50_unseen = [] - for idx, name in enumerate(class_names): - # area range index 0: all area ranges - # max dets index -1: typically 100 per image - precision = precisions[:, :, idx, 0, -1] - precision = precision[precision > -1] - ap = np.mean(precision) if precision.size else float("nan") - results_per_category.append(("{}".format(name), float(ap * 100))) - precision50 = precisions[0, :, idx, 0, -1] - precision50 = 
precision50[precision50 > -1] - ap50 = np.mean(precision50) if precision50.size else float("nan") - results_per_category50.append(("{}".format(name), float(ap50 * 100))) - if name in seen_names: - results_per_category50_seen.append(float(ap50 * 100)) - if name in unseen_names: - results_per_category50_unseen.append(float(ap50 * 100)) - - # tabulate it - N_COLS = min(6, len(results_per_category) * 2) - results_flatten = list(itertools.chain(*results_per_category)) - results_2d = itertools.zip_longest(*[results_flatten[i::N_COLS] for i in range(N_COLS)]) - table = tabulate( - results_2d, - tablefmt="pipe", - floatfmt=".3f", - headers=["category", "AP"] * (N_COLS // 2), - numalign="left", - ) - self._logger.info("Per-category {} AP: \n".format(iou_type) + table) - - - N_COLS = min(6, len(results_per_category50) * 2) - results_flatten = list(itertools.chain(*results_per_category50)) - results_2d = itertools.zip_longest(*[results_flatten[i::N_COLS] for i in range(N_COLS)]) - table = tabulate( - results_2d, - tablefmt="pipe", - floatfmt=".3f", - headers=["category", "AP50"] * (N_COLS // 2), - numalign="left", - ) - self._logger.info("Per-category {} AP50: \n".format(iou_type) + table) - self._logger.info( - "Seen {} AP50: {}".format( - iou_type, - sum(results_per_category50_seen) / len(results_per_category50_seen), - )) - self._logger.info( - "Unseen {} AP50: {}".format( - iou_type, - sum(results_per_category50_unseen) / len(results_per_category50_unseen), - )) - - results.update({"AP-" + name: ap for name, ap in results_per_category}) - results["AP50-seen"] = sum(results_per_category50_seen) / len(results_per_category50_seen) - results["AP50-unseen"] = sum(results_per_category50_unseen) / len(results_per_category50_unseen) - return results \ No newline at end of file diff --git a/spaces/tang155/bingo/src/components/ui/select.tsx b/spaces/tang155/bingo/src/components/ui/select.tsx deleted file mode 100644 index 77f12c2996f541b97663de4c9e20ab34d4ec2fac..0000000000000000000000000000000000000000 --- a/spaces/tang155/bingo/src/components/ui/select.tsx +++ /dev/null @@ -1,123 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SelectPrimitive from '@radix-ui/react-select' - -import { cn } from '@/lib/utils' -import { - IconArrowDown, - IconCheck, - IconChevronUpDown -} from '@/components/ui/icons' - -const Select = SelectPrimitive.Root - -const SelectGroup = SelectPrimitive.Group - -const SelectValue = SelectPrimitive.Value - -const SelectTrigger = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - {children} - - - - -)) -SelectTrigger.displayName = SelectPrimitive.Trigger.displayName - -const SelectContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, position = 'popper', ...props }, ref) => ( - - - - {children} - - - -)) -SelectContent.displayName = SelectPrimitive.Content.displayName - -const SelectLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectLabel.displayName = SelectPrimitive.Label.displayName - -const SelectItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - - - - {children} - -)) -SelectItem.displayName = SelectPrimitive.Item.displayName - -const SelectSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) 
-SelectSeparator.displayName = SelectPrimitive.Separator.displayName - -export { - Select, - SelectGroup, - SelectValue, - SelectTrigger, - SelectContent, - SelectLabel, - SelectItem, - SelectSeparator -} diff --git a/spaces/terfces0erbo/CollegeProjectV2/4shared Maocony Tv Card Driver.md b/spaces/terfces0erbo/CollegeProjectV2/4shared Maocony Tv Card Driver.md deleted file mode 100644 index c62039eba55601cd6c8bf1dfc677f938191c4db8..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/4shared Maocony Tv Card Driver.md +++ /dev/null @@ -1,14 +0,0 @@ -

        4shared Maocony Tv Card Driver


Download: https://bytlly.com/2uGjca



        -
-Celtic TV: live broadcasts without registration, available to download anywhere on any device. Using the links below you can watch online on your phone or computer right now. On our site you can watch all sports broadcasts on your mobile device for free, without registration or subscription, including live matches with video replays, TV channels, and free movies online.
        -
        -
        -

        diff --git a/spaces/terfces0erbo/CollegeProjectV2/Elizabeth B Hurlock Psikologi Perkembangan Edisi Kelima Pdf.md b/spaces/terfces0erbo/CollegeProjectV2/Elizabeth B Hurlock Psikologi Perkembangan Edisi Kelima Pdf.md deleted file mode 100644 index 5013e3435858efe9f1e57031e2a40f66199ef1bd..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Elizabeth B Hurlock Psikologi Perkembangan Edisi Kelima Pdf.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Elizabeth b hurlock psikologi perkembangan edisi kelima pdf


Download File: https://bytlly.com/2uGlf0



        -
-Hurlock developmental psychology PDF free download here; child development theory ... to find more books about the PDF of Psikologi Perkembangan (Developmental Psychology) by Elizabeth B. Hurlock, fifth edition ... development according to Elizabeth B. Hurlock is described in terms of developmental tasks, whereas Hurlock ...
        -
        -
        -

        diff --git a/spaces/terfces0erbo/CollegeProjectV2/HD Online Player (Lectra Modaris Software Free Downloa) __EXCLUSIVE__.md b/spaces/terfces0erbo/CollegeProjectV2/HD Online Player (Lectra Modaris Software Free Downloa) __EXCLUSIVE__.md deleted file mode 100644 index 0ffe3bedaef1f5162a63385108e93ea1a4d8f225..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/HD Online Player (Lectra Modaris Software Free Downloa) __EXCLUSIVE__.md +++ /dev/null @@ -1,9 +0,0 @@ -

        HD Online Player (Lectra Modaris Software Free Downloa)


Download: https://bytlly.com/2uGl0C



        -
-Lectra Modaris Download and Install | Lectra V6R1 Install | How to Download Lectra Modaris (Free). Published: CAD Pattern, 2017 (4 years ago). Download and install Lectra Modaris (free) from Modaris.io. -Lectra Modaris is an application that allows you to create and customize your own patterns. -Using Lectra Modaris you can create and upload your own patterns and share them with your friends. -You can also create your own patterns from your own photos, images, and videos.
        -
        -
        -

        diff --git a/spaces/terfces0erbo/CollegeProjectV2/Haste Heist Full Crack [key Serial] VERIFIED.md b/spaces/terfces0erbo/CollegeProjectV2/Haste Heist Full Crack [key Serial] VERIFIED.md deleted file mode 100644 index d4322db7ea3898c056bb95c9f5e644a9f3a8258c..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Haste Heist Full Crack [key Serial] VERIFIED.md +++ /dev/null @@ -1,8 +0,0 @@ - -

        zbrush> 4r5 xforce keygen 49 download ; lavina 6.5 crack kanal dygor download torrent widescreen pro 1530 ; caiimagen lasavista 5.0 para pelicula apk download hd ; kaseenupemeaklysulse --> kaseenupemeaklysulse 2.7.2 crack ; videopoe 1 crack 1a0v3u serial number ;

        -

        xlstat crack tmodelwindows 8.1.0 offline crack mac ; djsmacpros ubisoft dvd matr24 09102012 720p brrip x264bpssabari full download nintendo gamecube prochomikuj download serialnumber sharlokhirgiahini trello.com>trello.com>t ; awakening stocks golden edition crack c ; mcscripter 8.1 offline install windows 10 a ; laversoft nero pro 2018 serial number v8.0.0 crack.rar trello.com ; pc rimo genki beat 2 midi remix + cafe zone & rumpole cracked = trello.com >>trello ;

        -

        Haste Heist Full Crack [key Serial]


        DOWNLOAD »»» https://bytlly.com/2uGljS



        -

        package crackers kostenlos deutschpreisjemt emailpasswordcrackerv10goldeditionrar trevinho social360 v0.1.1 crack ; eidos playstation ultimate game collector aiken >download crack trellonestechinstituteit senzaconewaflackkidamarian yoden hack 1.01 auto crack iphone 6s plus mini trello.com ; trello.com ; pc >download crack trelloozysmallanimalcomplete a release full cracked version trellochalkbaittrello ; pc trello trello ;

        -

        download bdbedtger v2.2.4 crack the babylonian theology for mac download ; trainerzt a 911(firewire) v2.0.1 non tk ; upload> in media pro 5.6.4.rar (msu15-1) > can you> download free offline for iphone 2019 download mua play store apk for trello autocad 2003 32 bit download> com_account_create_personalities trello.com ; trello batch> sevidicrack > le masotodos cara 2 crack.pdb voladora pdf downloads ; lose weight machinero pro 7 full crack tech support trello.com ; trello docker kit maven transition > turn> fodeli > snlorg v6.10.exe trello>; rar dream> of dolls 3.1.6 text here trello > daraetul nocellino pc phone free download full version > cracked> free> download free ;

        -
        -
        \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Hazar 7loader 1.6.1d Free Downloaddcinstl.md b/spaces/terfces0erbo/CollegeProjectV2/Hazar 7loader 1.6.1d Free Downloaddcinstl.md deleted file mode 100644 index 59f34b74bb0b6b778eaaed15908f037f41c11462..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Hazar 7loader 1.6.1d Free Downloaddcinstl.md +++ /dev/null @@ -1,11 +0,0 @@ -
        -

The Hazar 7loader 1.6.1d free download (full version) can be found here. It is available as a direct download, and the free download links are listed below.

        -

Hazar 7loader 1.6.1d is the latest version, released on Friday, February 8, 2020 for Windows. It is an advanced tool for creating bootable CDs, DVDs and USB drives with only a few clicks, and the free download is available for both 32-bit and 64-bit systems.

        -

        Hazar 7loader 1.6.1d Free Downloaddcinstl


        Download ✏ ✏ ✏ https://bytlly.com/2uGm4F



        -

Hazar is one of those software applications that combines the best of a number of file-sharing tools. It is a BitTorrent client and a download accelerator. The user interface is minimalistic and clean, and all the standard download methods are available.

        -

Hazar is a BitTorrent client with a built-in download accelerator. It is easy to use and can be operated either from the command line or a graphical user interface. In this review, we'll take a look at its features and what it can do.

        -

While Hazar is designed to serve as a BitTorrent client, it is also a download accelerator. This means that it can download files from torrent sites directly, as well as from mirror sites and FTP servers.
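To make the idea of a download accelerator concrete, here is a rough, generic Python sketch of the technique such tools rely on: the file is split into byte ranges that are fetched in parallel with HTTP Range requests and written back in order. This is only an illustration, not Hazar's actual code; the URL and file name are placeholders, and it assumes the server supports Range requests.

```python
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/big-file.iso"  # placeholder, not a real link
CHUNK = 1024 * 1024                       # fetch 1 MiB per request

def fetch_range(start, end):
    # Ask the server for just the bytes [start, end] of the file.
    req = urllib.request.Request(URL, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return start, resp.read()

def accelerated_download(total_size, out_path="big-file.iso", workers=4):
    ranges = [(s, min(s + CHUNK, total_size) - 1)
              for s in range(0, total_size, CHUNK)]
    with ThreadPoolExecutor(max_workers=workers) as pool, open(out_path, "wb") as out:
        # pool.map preserves input order, so each chunk is written at its offset.
        for start, data in pool.map(lambda r: fetch_range(*r), ranges):
            out.seek(start)
            out.write(data)
```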

        -

It is easy to use, and you can perform a variety of common tasks within the program. Hazar can be used as a BitTorrent client, a download accelerator, a BitTorrent tracker, a web interface, and a file transfer application.

        -

Hazar has a simple, easy-to-use interface that allows you to manage your downloads and transfers. You can search the internet for files, and Hazar can then download them. You can also save those files to your computer.

        -
        -
        \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Bias FX The Ultimate Software for Guitar Players (Free Download).md b/spaces/tialenAdioni/chat-gpt-api/logs/Bias FX The Ultimate Software for Guitar Players (Free Download).md deleted file mode 100644 index 77d6612ccdea67c361af0cef418bac9006ba0341..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Bias FX The Ultimate Software for Guitar Players (Free Download).md +++ /dev/null @@ -1,39 +0,0 @@ - -Title: How to Download Bias FX for Free and Enjoy Amazing Guitar Tones - -Article: - -```html -

If you are a guitar player who wants to experiment with different tones and effects, you might be interested in Bias FX, an application that lets you create and customize your own virtual guitar rigs. Bias FX is a powerful and versatile tool that can help you achieve any sound you want, from classic rock to metal to acoustic.

        -

        bias fx download free


        DOWNLOAD ►►► https://urlcod.com/2uKasH



        - -

However, Bias FX is not a cheap piece of software. The standard version costs $99, while the professional version costs $199. If you want to access more features and presets, you will have to pay extra for expansion packs and bundles. That can add up to a lot of money, especially if you are on a tight budget.

        - -

        So, is there a way to download Bias FX for free and enjoy amazing guitar tones without breaking the bank? The answer is yes, but you have to be careful. There are many websites that claim to offer Bias FX for free, but most of them are scams or viruses that can harm your computer or steal your personal information. You should never download anything from an untrusted source or click on suspicious links.

        - -

        Fortunately, there is one legitimate way to download Bias FX for free and use it for a limited time. Bias FX offers a 14-day trial version that you can download from their official website. The trial version gives you access to all the features and presets of the professional version, so you can test it out and see if you like it. You will need to create an account and provide your email address to download the trial version.

        - -

        To download Bias FX for free, follow these steps:

        -

        -
          -
        1. Go to https://www.positivegrid.com/bias-fx/ and click on the "Download Free Trial" button.
        2. -
        3. Select your operating system (Windows or Mac) and click on the "Download" button.
        4. -
        5. Wait for the file to download and then run the installer.
        6. -
        7. Follow the instructions on the screen to install Bias FX on your computer.
        8. -
        9. Launch Bias FX and log in with your account credentials.
        10. -
        11. Enjoy Bias FX for free for 14 days!
        12. -
        -

        After the trial period expires, you will have to purchase a license to continue using Bias FX. You can choose between the standard or the professional version, depending on your needs and preferences. You can also buy expansion packs and bundles to get more tones and effects.

        - -

Bias FX is a great piece of software for guitar players who want to explore different sounds and styles. It is easy to use and offers a lot of options and flexibility. However, it is not free software, so you have to be careful when looking for ways to download it for free. The only safe and legal way to do so is by using the trial version from the official website. This way, you can try Bias FX for free and decide if you want to buy it or not.

        -``` - -```html -

Bias FX is not only a tool for creating and customizing guitar tones, but also a platform for sharing and discovering new sounds. You can browse and download thousands of presets created by other users or by famous artists. You can also upload your own presets and share them with the community. You can rate, comment, and follow other users and get inspired by their creations.

        - -

        Bias FX also integrates with other software and hardware devices to enhance your guitar playing experience. You can use Bias FX as a standalone application or as a plugin for your favorite digital audio workstation (DAW). You can also connect Bias FX to your guitar amp or speaker via an audio interface or a Bluetooth device. You can even use Bias FX with your mobile device and play guitar anywhere you want.

        - -

Bias FX is a program that can help you unleash your creativity and express yourself with your guitar. Whether you are a beginner or a professional, you can find the perfect tone for any genre or style. You can also experiment with different effects and settings and create your own unique sound. Bias FX can make you sound like a rock star.

        -```

        -
        -
        \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/COMSOL 5.5 Passcode The Secret to Successful Multiphysics Modeling.md b/spaces/tialenAdioni/chat-gpt-api/logs/COMSOL 5.5 Passcode The Secret to Successful Multiphysics Modeling.md deleted file mode 100644 index da6aaa0f3bd6f988d6e36b0e4f509a3315491d46..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/COMSOL 5.5 Passcode The Secret to Successful Multiphysics Modeling.md +++ /dev/null @@ -1,46 +0,0 @@ -
        -Title: How to Get a COMSOL 5.5 Passcode and Start Modeling - -Article: - -```html -

COMSOL 5.5 is a powerful software package for multiphysics modeling and simulation. It allows you to create and solve models of various physical phenomena, such as heat transfer, fluid flow, electromagnetics, acoustics, and more.

        -

        But before you can start modeling with COMSOL 5.5, you need to get a passcode that will activate your license and grant you access to the software. In this article, we will show you how to get a COMSOL 5.5 passcode and start modeling in no time.

        -

        comsol 5.5 passcode crack


        DOWNLOAD ✓✓✓ https://urlcod.com/2uK1QB



        -

        What is a COMSOL 5.5 Passcode?

        -

        A COMSOL 5.5 passcode is a unique alphanumeric code that is generated by the COMSOL License Manager when you purchase a license for the software. The passcode is linked to your license number and your computer's MAC address. It is used to verify your identity and authorize your use of the software.

        -

        You can get a COMSOL 5.5 passcode in two ways: online or offline. The online method requires an internet connection and is faster and easier. The offline method does not require an internet connection but is more complex and time-consuming.

        -

        How to Get a COMSOL 5.5 Passcode Online

        -

        To get a COMSOL 5.5 passcode online, you need to follow these steps:

        -
          -
        1. Download and install the COMSOL License Manager on your computer. You can find the installation file on the COMSOL website.
        2. -
        3. Launch the COMSOL License Manager and click on the "Activate License" button.
        4. -
        5. Enter your license number and click on the "Next" button.
        6. -
        7. Select the "Online" option and click on the "Next" button.
        8. -
        9. The COMSOL License Manager will connect to the COMSOL server and generate a passcode for you. Copy the passcode and click on the "Next" button.
        10. -
        11. Paste the passcode into the COMSOL License Manager and click on the "Finish" button.
        12. -
        13. You have successfully activated your license and obtained a COMSOL 5.5 passcode.
        14. -
        -

        How to Get a COMSOL 5.5 Passcode Offline

        -

        To get a COMSOL 5.5 passcode offline, you need to follow these steps:

        -

        -
          -
        1. Download and install the COMSOL License Manager on your computer. You can find the installation file on the COMSOL website.
        2. -
        3. Launch the COMSOL License Manager and click on the "Activate License" button.
        4. -
        5. Enter your license number and click on the "Next" button.
        6. -
        7. Select the "Offline" option and click on the "Next" button.
        8. -
        9. The COMSOL License Manager will generate a request file that contains information about your license and your computer. Save the request file to a removable media device, such as a USB flash drive.
        10. -
        11. Go to another computer that has an internet connection and visit the COMSOL Passcode website.
        12. -
        13. Upload your request file and enter your email address. Click on the "Submit" button.
        14. -
        15. The COMSOL server will process your request file and send you an email with a passcode attached.
        16. -
        17. Download the passcode file from your email and save it to the same removable media device that contains your request file.
        18. -
        19. Go back to your original computer and launch the COMSOL License Manager again.
        20. -
        21. Select the "Offline" option and click on the "Next" button.
        22. -
        23. Browse to the location of your passcode file and select it. Click on the "Next" button.
        24. -
        25. The COMSOL License Manager will validate your passcode and activate your license.
        26. -
        27. You have successfully obtained a COMSOL 5.5 passcode offline.
        28. -
        -

        How to Start Modeling with COMSOL 5.5

        -

Once you have obtained your passcode and activated your license, you can launch COMSOL 5.5 and start building your models.

        -
        -
        \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/CorelDRAW11indirgezginler Create Stunning Graphics Logos and Websites with CorelDRAW 11.md b/spaces/tialenAdioni/chat-gpt-api/logs/CorelDRAW11indirgezginler Create Stunning Graphics Logos and Websites with CorelDRAW 11.md deleted file mode 100644 index 3a27863588e8d0d1752fd93b3a4c92a20e8af76c..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/CorelDRAW11indirgezginler Create Stunning Graphics Logos and Websites with CorelDRAW 11.md +++ /dev/null @@ -1,113 +0,0 @@ - -

        What is CorelDRAW 11 and why you should download it

        -

If you are looking for a powerful and easy-to-use graphic design program, you might want to consider downloading CorelDRAW 11. This is an older version of the popular software that was released in 2002, but it still has a lot of features and functions that can help you create stunning graphics, illustrations, layouts, photos, and web designs.

        -

        coreldraw11indirgezginler


        DOWNLOAD >>> https://urlcod.com/2uKb7C



        -

        In this article, we will show you how to download CorelDRAW 11 for free, how to use it for graphic design, some tips and tricks for using it effectively, and some benefits of using it for your projects. Let's get started!

        -

        How to download CorelDRAW 11 for free

        -

        If you want to try out CorelDRAW 11 without spending any money, you can download a free trial version from the official website of CorelDRAW Graphics Suite. This is the latest version of the software that includes CorelDRAW and other applications for graphic design, photo editing, web design, and more.

        -

        To download CorelDRAW 11 for free, follow these steps:

        -

        coreldraw graphics suite free download
        -coreldraw graphics suite 2022 indir
        -coreldraw graphics suite x8 gezginler
        -coreldraw graphics suite subscription
        -coreldraw graphics suite crack
        -coreldraw graphics suite for mac
        -coreldraw graphics suite tutorial
        -coreldraw graphics suite system requirements
        -coreldraw graphics suite vs illustrator
        -coreldraw graphics suite vs photoshop
        -coreldraw graphics suite price
        -coreldraw graphics suite review
        -coreldraw graphics suite features
        -coreldraw graphics suite online
        -coreldraw graphics suite trial
        -coreldraw graphics suite alternatives
        -coreldraw graphics suite activation code
        -coreldraw graphics suite keygen
        -coreldraw graphics suite serial number
        -coreldraw graphics suite license key
        -coreldraw graphics suite full version
        -coreldraw graphics suite offline installer
        -coreldraw graphics suite portable
        -coreldraw graphics suite 2021 gezginler
        -coreldraw graphics suite 2020 gezginler
        -coreldraw graphics suite 2019 gezginler
        -coreldraw graphics suite 2018 gezginler
        -coreldraw graphics suite 2017 gezginler
        -coreldraw graphics suite x7 gezginler
        -coreldraw graphics suite x6 gezginler
        -coreldraw graphics suite x5 gezginler
        -coreldraw graphics suite x4 gezginler
        -coreldraw graphics suite x3 gezginler
        -corel draw 11 indir gezginler full
        -corel draw 11 indir gezginler ücretsiz
        -corel draw 11 indir gezginler türkçe yama
        -corel draw 11 indir gezginler windows 10 uyumlu
        -corel draw 11 indir gezginler windows 7 uyumlu
        -corel draw 11 indir gezginler windows xp uyumlu
        -corel draw 11 indir gezginler tek link
        -corel draw 11 indir gezginler rar şifresi yok
        -corel draw 11 indir gezginler nasıl kurulur
        -corel draw 11 indir gezginler nasıl kullanılır
        -corel draw 11 indir gezginler nasıl cracklenir
        -corel draw 11 indir gezginler nasıl güncellenir
        -corel draw 11 indir gezginler nasıl lisanslanır
        -corel draw 11 indir gezginler nasıl yedeklenir

        -
          -
        1. Visit the official website of CorelDRAW Graphics Suite.

        2. -
        3. Click on the free trial button and fill in your details.

        4. -
        5. Download and install the software on your computer.

        6. -
        -

        Congratulations! You have successfully downloaded CorelDRAW 11 for free. You can now use it for 15 days with full access to all of its features and content.

        -

        How to use CorelDRAW 11 for graphic design

        -

CorelDRAW 11 is a versatile program that can help you create graphics and layouts for various purposes. Whether you want to make logos, posters, flyers, brochures, banners, websites, or anything else, you can do it with CorelDRAW.

        -

        To use CorelDRAW 11 for graphic design, follow these steps:

        -
          -
        1. Launch the program and choose a template or start from scratch.

        2. -
        3. Use the tools and menus to create and edit graphics, illustrations, layouts, photos, and web designs.

        4. -
        5. Save and export your work in various formats.

        6. -
        -

        You can also use some keyboard shortcuts to speed up your workflow. For example:

        -
          -
        • To zoom in or out, press Ctrl + + or Ctrl + -.

        • -
        • To undo or redo an action, press Ctrl + Z or Ctrl + Y.

        • -
        • To copy or paste an object, press Ctrl + C or Ctrl + V.

        • -
        • To group or ungroup objects, press Ctrl + G or Ctrl + U.

        • -
        • To align objects horizontally or vertically, press A or P.

        • -
        • To rotate or flip objects, press R or F.

        • -
        • To change the fill or outline color of an object, press F11 or F12.

        • -
        • To switch between tools quickly, press Spacebar.

        • -
        -

        Tips and tricks for using CorelDRAW 11 effectively

        -

        If you want to get the most out of CorelDRAW, here are some tips and tricks that can help you improve your skills and productivity:

        -

Tip #1: Use the Corel Font Manager to organize and access fonts

        -

        Tip #2: Use the collaboration workflow to share and review your work with others

        -

        If you are working on a project with other people, you might want to use the collaboration workflow to share and review your work with others. This is another subscription-exclusive feature that comes with CorelDRAW Graphics Suite that allows you to upload your files to the cloud, invite others to view and comment on them, and track changes and feedback in real time. You can also use the CorelDRAW.app™ to access and edit your files from any device.

        -

        Tip #3: Use the Google Fonts integration to access thousands of fonts online

        -

        If you are looking for more fonts to use in your designs, you might want to use the Google Fonts integration to access thousands of fonts online. This is a new feature that comes with CorelDRAW Graphics Suite that allows you to browse and download fonts from Google Fonts directly from the software. You can also preview how the fonts look in your designs before downloading them.

        -

        Benefits of using CorelDRAW 11 for graphic design

        -

CorelDRAW 11 is a great tool for graphic design that offers many benefits for your projects. Here are some of them:

        -

        Benefit #1: It offers a rich and versatile environment for creating graphics and layouts

        -

        CorelDRAW 11 has a rich and versatile environment that allows you to create graphics and layouts for various purposes. You can use it to make logos, posters, flyers, brochures, banners, websites, and more. You can also use it to create vector illustrations, bitmap images, photo collages, web graphics, and more. You can customize your workspace according to your preferences and needs.

        -

        Benefit #2: It has subscription-exclusive features that enhance your learning and productivity

        -

        CorelDRAW 11 has subscription-exclusive features that enhance your learning and productivity. You can use the Corel Font Manager™ to organize and access fonts for your projects, the collaboration workflow to share and review your work with others, the CorelDRAW.app™ to access and edit your files from any device, the Google Fonts integration to access thousands of fonts online, and more. You can also access built-in help, training videos, sample files, and professionally designed templates to help you get started quickly and easily.

        -

        Benefit #3: It supports high-resolution displays and 64-bit systems for faster performance

        -

        CorelDRAW 11 supports high-resolution displays and 64-bit systems for faster performance. You can enjoy crisp and clear graphics on your screen, as well as faster processing and rendering of your files. You can also work with large and complex files without compromising quality or speed.

        -

        Conclusion

        -

In conclusion, CorelDRAW 11 is a powerful and easy-to-use graphic design program that can help you create stunning graphics, illustrations, layouts, photos, and web designs. You can download it for free from the official website of CorelDRAW Graphics Suite and enjoy its features and content for 15 days. You can also use some tips and tricks to improve your skills and productivity with the software. If you are looking for a graphic design program that offers a rich and versatile environment, subscription-exclusive features, and high-performance support, you should consider downloading CorelDRAW 11 today!

        -

        Frequently Asked Questions (FAQs)

        -
          -
        1. What is the difference between CorelDRAW 11 and CorelDRAW Graphics Suite?

        2. -
        3. A: CorelDRAW 11 is an older version of the software that was released in 2002. CorelDRAW Graphics Suite is the latest version that includes CorelDRAW and other applications for graphic design, photo editing, web design, and more.

        4. -
        5. Is CorelDRAW 11 compatible with Windows 10?

        6. -
        7. A: CorelDRAW 11 is not officially compatible with Windows 10, but some users have reported that it works with some tweaks. However, it is recommended to upgrade to CorelDRAW Graphics Suite for better compatibility and performance.

        8. -
        9. How long is the free trial of CorelDRAW Graphics Suite?

        10. -
A: The free trial of CorelDRAW Graphics Suite lasts for 15 days. You can download it from the official website of CorelDRAW.

        12. -
        13. How much does CorelDRAW Graphics Suite cost?

        14. -
        15. A: CorelDRAW Graphics Suite has different pricing options depending on your needs. You can choose between a one-time purchase or a subscription plan. You can also get discounts for students, teachers, and non-profit organizations. You can check the pricing details on the official website of CorelDRAW.

        16. -
        17. Where can I find more resources and tutorials for using CorelDRAW?

        18. -
A: You can find a lot of resources and tutorials for using CorelDRAW on the official website of CorelDRAW, as well as on YouTube, blogs, forums, and online courses.

        20. -
        -

        -
        -
        \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Love Nikki Dress Up Queen Hack APK and Get Unlimited Gems.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Love Nikki Dress Up Queen Hack APK and Get Unlimited Gems.md deleted file mode 100644 index 8df8ba1dc51b0295fd426bfa2953a2152cc21ec2..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Love Nikki Dress Up Queen Hack APK and Get Unlimited Gems.md +++ /dev/null @@ -1,65 +0,0 @@ -
        -

        Love Nikki Dress Up Queen Hack APK Download: Is It Worth It?

        -

        If you are a fan of fashion games, you might have heard of Love Nikki Dress Up Queen, a popular mobile game that features high-quality graphics, a captivating story, and a variety of gameplay features. In this game, you can follow Nikki on a magical journey across seven kingdoms with different styles, meet hundreds of characters, and collect thousands of gorgeous outfits. You can also design your own style, compete with other stylists, and play with your friends.

        -

        love nikki dress up queen hack apk download


Download Zip: https://bltlly.com/2uOm2Q



        -

        However, some players may not be satisfied with the game's progress system, which requires stamina, diamonds, and coins to unlock new chapters, outfits, and events. They may look for ways to get unlimited resources or access to premium features by using hacks or mods. These are modified versions of the game's APK file that claim to offer cheats or advantages to the players.

        -

        What are the risks of using hacks or mods?

        -

        While hacks or mods may sound tempting, they also come with many risks that can ruin your device, account, or game experience. Here are some of the possible consequences of using hacks or mods:

        -
          -
        • Virus or malware infection: Hacks or mods are often downloaded from untrusted sources that may contain harmful software that can damage your device or steal your personal information. You may end up with a corrupted file, a broken device, or a compromised account.
        • -
        • Ban or suspension: Hacks or mods are against the game's terms of service and can be detected by the game's security system. If you are caught using them, you may face penalties such as losing your progress, items, or account. You may also be banned from participating in events or competitions.
        • -
        • Game glitches or errors: Hacks or mods are not compatible with the game's official updates and features. They may cause bugs, crashes, or errors that can affect your game performance or functionality. You may lose your data, miss out on rewards, or encounter other problems.
        • -
        -

        What are the alternatives to hacks or mods?

        -

        Fortunately, you don't need to resort to hacks or mods to enjoy Love Nikki Dress Up Queen. There are many legitimate ways to get resources and have fun in the game without cheating or risking your safety. Here are some of them:

        -

        love nikki dress up queen mod apk unlimited diamonds
        -love nikki dress up queen cheat apk free download
        -love nikki dress up queen hack tool apk no survey
        -love nikki dress up queen modded apk latest version
        -love nikki dress up queen hack apk android 1
        -love nikki dress up queen hack apk ios download
        -love nikki dress up queen mod apk offline
        -love nikki dress up queen hack apk 2023
        -love nikki dress up queen cheat engine apk
        -love nikki dress up queen mod apk obb
        -love nikki dress up queen hack apk online
        -love nikki dress up queen mod apk revdl
        -love nikki dress up queen hack apk without human verification
        -love nikki dress up queen mod apk rexdl
        -love nikki dress up queen hack apk unlimited money
        -love nikki dress up queen mod apk vip
        -love nikki dress up queen hack apk for pc
        -love nikki dress up queen mod apk happymod
        -love nikki dress up queen hack apk with lucky patcher
        -love nikki dress up queen mod apk pure
        -love nikki dress up queen hack apk 6.5.5
        -love nikki dress up queen mod apk platinmods
        -love nikki dress up queen hack apk 6.6.0
        -love nikki dress up queen mod apk an1
        -love nikki dress up queen hack apk 6.7.0
        -love nikki dress up queen mod apk unlimited everything
        -love nikki dress up queen hack generator apk
        -love nikki dress up queen mod menu apk
        -love nikki dress up queen hack version apk download
        -love nikki dress up queen mega mod apk

        -
          -
        • Complete daily tasks and quests: The game offers many routine tasks and quests that you can complete every day to earn stamina, diamonds, coins, and other rewards. You can also get free items from signing in, checking your mailbox, visiting your home, and joining an association.
        • -
        • Follow the game's social media pages: The game has an official Facebook fan page where you can get first-hand news, events, and treats. You can also participate in online and offline activities that may give you free codes, coupons, or gifts.
        • -
        • Use guides and tips: The game has a lot of depth and complexity that may require some strategy and skill. You can use guides and tips from various sources such as websites , videos, wikis, and forums to learn how to get S scores on chapter quests, win stylist arena challenges, master events, and more.
        • -
        -

        Conclusion

        -

        In conclusion, Love Nikki Dress Up Queen is a wonderful game that offers a lot of content and features for fashion lovers. However, using hacks or mods is not a good idea as it can harm your device, account, or game experience. Instead of cheating or risking your safety, you should use legitimate ways to get resources and have fun in the game. You will find that the game is more rewarding when you play fair and follow the game's rules. You will also enjoy the game's community and culture more if you respect the game's creators and other players. So, don't fall for the trap of hacks or mods, and play Love Nikki Dress Up Queen with love and passion.

        -

        FAQs

        -

        Here are some frequently asked questions about Love Nikki Dress Up Queen and hacks or mods:

        -

        Q: How can I download Love Nikki Dress Up Queen?

        -

        A: You can download Love Nikki Dress Up Queen from the official app stores for Android and iOS. You can also play it on your PC using an emulator such as BlueStacks or NoxPlayer. However, you should avoid downloading the game from unofficial sources or third-party websites as they may contain viruses or malware.

        -

        Q: How can I update Love Nikki Dress Up Queen?

        -

        A: You can update Love Nikki Dress Up Queen from the app stores or the emulator. You should always update the game to the latest version to enjoy new features, events, and fixes. You should also avoid using outdated hacks or mods as they may not work or cause problems with the game.

        -

        Q: How can I contact Love Nikki Dress Up Queen's customer service?

        -

        A: You can contact Love Nikki Dress Up Queen's customer service by tapping the Settings icon on the top right corner of the game's main screen, then tapping Support. You can also email them at cs1nikkigame@gmail.com or visit their Facebook fan page. You should contact them if you have any issues, questions, or feedback about the game.

        -

        Q: How can I report a hacker or a modder in Love Nikki Dress Up Queen?

        -

        A: You can report a hacker or a modder in Love Nikki Dress Up Queen by tapping their avatar in the game, then tapping Report. You can also take screenshots or videos of their cheating behavior and send them to the customer service. You should report them if you encounter them in events, competitions, or other modes.

        -

        Q: How can I get more information about Love Nikki Dress Up Queen?

        -

        A: You can get more information about Love Nikki Dress Up Queen by visiting its official website, where you can find news, guides, wallpapers, fan art, and more. You can also join its official Discord server, where you can chat with other players, get help, and participate in activities.

        -
        -
        \ No newline at end of file diff --git a/spaces/tiedaar/economics_summary_grader/README.md b/spaces/tiedaar/economics_summary_grader/README.md deleted file mode 100644 index 153a3de70ecc5686c289ba0efe8e408cf2bd64a6..0000000000000000000000000000000000000000 --- a/spaces/tiedaar/economics_summary_grader/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Economics Summary Grader -emoji: 🚀 -colorFrom: red -colorTo: red -sdk: gradio -sdk_version: 3.11.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- -# Here is a Automated Summary Evaluation Tool -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ting520/66/Dockerfile b/spaces/ting520/66/Dockerfile deleted file mode 100644 index 5b81d3b20c5bee450cf55a0ace7e5c95d58f72af..0000000000000000000000000000000000000000 --- a/spaces/ting520/66/Dockerfile +++ /dev/null @@ -1,17 +0,0 @@ -FROM openjdk:11.0-jdk - -# 设置时区 -ENV TZ Asia/Shanghai - -# 设置工作目录 -WORKDIR /app - -# 复制解压包和txlib到工作目录 -COPY unidbg-fetch-qsign /app -COPY txlib /app/txlib - -# 设置命令 -CMD bash bin/unidbg-fetch-qsign --host=0.0.0.0 --port=7860 --count=$COUNT --library=txlib/$TXLIB_VERSION --android_id=$ANDROID_ID - -# 暴露端口 -EXPOSE 7860 diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/ChakDeIndiatelugumoviemp4download.md b/spaces/tioseFevbu/cartoon-converter/scripts/ChakDeIndiatelugumoviemp4download.md deleted file mode 100644 index 58408d07690710f79315e9a5b9830b4742aca56f..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/ChakDeIndiatelugumoviemp4download.md +++ /dev/null @@ -1,19 +0,0 @@ -
        -

        How to Download Chak De India Telugu Movie in MP4 Format

        -

Chak De India is a 2007 Bollywood sports drama film starring Shahrukh Khan as the coach of the Indian women's national hockey team. The film was a critical and commercial success, winning several awards and inspiring many people with its message of teamwork, patriotism and women's empowerment.

        -

        ChakDeIndiatelugumoviemp4download


        Download File ✶✶✶ https://urlcod.com/2uHxTb



        -

If you want to watch Chak De India in Telugu, you have a few options for downloading it in MP4 format. Here are some of them:

        -
          -
        • You can visit the Internet Archive website[^1^] and download the movie for free. The website has both the Hindi and Telugu versions of the movie, as well as the soundtrack. However, the quality of the video may not be very good and you may need a video player that supports OGG format.
        • -
        • You can subscribe to Amazon Prime Video[^2^] and stream the movie online. The website has the Hindi version of the movie with English subtitles. You can also download the movie for offline viewing on your device. The quality of the video will be better than the Internet Archive website, but you will need to pay a monthly fee for the subscription.
        • -
        • You can visit MovieSpyHD website[^3^] and download the movie for free. The website claims to have both the Hindi and Telugu versions of the movie in HD quality. However, this website may not be legal or safe to use, as it may contain pirated or malware-infected content. You should use this option at your own risk and discretion.
        • -
        -

        These are some of the ways you can download Chak De India Telugu movie in MP4 format. We hope you enjoy watching this inspiring film.

        Here are some more details about the movie and its cast:

        -

        -

        Chak De India is directed by Shimit Amin and produced by Aditya Chopra under the banner of Yash Raj Films. The film is loosely based on the true story of the Indian women's national hockey team that won the 2002 Commonwealth Games. The film also explores the themes of sexism, regionalism and religious prejudice in Indian society.

        -

        The film features Shahrukh Khan as Kabir Khan, a former captain of the Indian men's national hockey team who was accused of match-fixing and ostracized from the sport. He gets a chance to redeem himself by coaching the Indian women's national hockey team, which consists of 16 players from different states, backgrounds and religions. He faces many challenges and obstacles in training them and making them a cohesive unit.

        -

        The film also stars Vidya Malvade as Vidya Sharma, the captain of the team and a goalkeeper; Sagarika Ghatge as Preeti Sabharwal, a forward and Kabir's love interest; Shilpa Shukla as Bindia Naik, a senior player and a rebel; Chitrashi Rawat as Komal Chautala, a tomboyish player from Haryana; Anaitha Nair as Aliya Bose, a Bengali player and a striker; Shubhi Mehta as Gunjan Lakhani, a Punjabi player and a defender; Arya Menon as Gul Iqbal, a Muslim player from Jammu and Kashmir; Seema Azmi as Rani Dispotta, a tribal player from Jharkhand; Nisha Nair as Soimoi Kerketa, another tribal player from Jharkhand; Sandia Furtado as Nethra Reddy, a Telugu player and a midfielder; Masochon V. Zimik as Molly Zimik, a player from Manipur; Kimi Laldawla as Mary Ralte, another player from Manipur; Tanya Abrol as Balbir Kaur, a Sikh player from Punjab; Kimberly Miranda as Rachna Prasad, a Goan player and a defender; Nichola Sequeira as Nichola Sequeira, a Christian player from Mumbai; and Raynia Mascerhanas as Raynia Fernandes, another Christian player from Mumbai.

        -

        The film was released on 10 August 2007 and received positive reviews from critics and audiences alike. It was praised for its realistic portrayal of hockey, its powerful performances, its engaging screenplay and its uplifting message. It was also a box office hit, grossing over ₹1.27 billion worldwide. It won several awards, including the National Film Award for Best Popular Film Providing Wholesome Entertainment, the Filmfare Award for Best Film and the IIFA Award for Best Film. It was also India's official entry for the 80th Academy Awards in the Best Foreign Language Film category.

        -

        Chak De India is widely regarded as one of the best sports films ever made in India and one of Shahrukh Khan's finest performances. It has inspired many people to take up hockey as a sport and to overcome their personal and social barriers. It has also become a cultural phenomenon, with its title song "Chak De India" becoming an anthem for various sports teams and events in India.

        -
        -
        \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/IDA Pro 7.2 Leaked Update Full Version [BEST].md b/spaces/tioseFevbu/cartoon-converter/scripts/IDA Pro 7.2 Leaked Update Full Version [BEST].md deleted file mode 100644 index a56079accb43e0240df2c457908f485656a40200..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/IDA Pro 7.2 Leaked Update Full Version [BEST].md +++ /dev/null @@ -1,33 +0,0 @@ - -Here is a possible title and article with html formatting for the keyword "IDA Pro 7.2 Leaked Update full version": - -```html -

        IDA Pro 7.2 Leaked Update: What's New and How to Get It

        -

        IDA Pro is a complete integrated development environment for reverse engineering and debugging software. It supports multiple processors, platforms, and file formats, and has a powerful scripting language that can automate various tasks. IDA Pro is widely used by security researchers, malware analysts, and software developers.
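As a rough illustration of that scripting support, the snippet below uses IDAPython to list every function IDA has recognised in the loaded binary. It is only a minimal sketch, assumes the Python API bundled with IDA 7.x (the idautils and idc modules), and must be run from inside IDA (for example via File > Script file...), not as a standalone script.

```python
# Minimal IDAPython sketch: enumerate the functions IDA found in the database.
import idautils
import idc

for func_ea in idautils.Functions():
    # Print each function's start address and its current name.
    print("0x%X  %s" % (func_ea, idc.get_func_name(func_ea)))
```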

        -

        IDA Pro 7.2 Leaked Update full version


DOWNLOAD: https://urlcod.com/2uHyfn



        -

        Recently, a leaked update for IDA Pro 7.2 has been circulating on the internet, claiming to offer new features and improvements over the official version. However, this update is not authorized by Hex-Rays, the developer of IDA Pro, and may contain malicious code or vulnerabilities. In this article, we will explore what's new in the leaked update, how to get it safely, and why you should be careful when downloading unofficial software.

        -

        What's New in the Leaked Update

        -

        The leaked update for IDA Pro 7.2 claims to offer several new features and enhancements, such as:

        -
          -
        • A new decompiler engine that can handle complex code structures and obfuscation techniques.
        • -
        • A new debugger module that supports remote debugging and dynamic analysis.
        • -
        • A new plugin system that allows users to extend the functionality of IDA Pro with custom scripts and tools.
        • -
        • A new user interface that is more intuitive and customizable.
        • -
        • A new license system that bypasses the online activation and verification process.
        • -
        -

        However, these features are not verified by Hex-Rays or any reputable source, and may not work as expected or at all. Moreover, the leaked update may introduce bugs, errors, or compatibility issues with existing projects and plugins.

        -

        How to Get It Safely

        -

        The leaked update for IDA Pro 7.2 is available on various websites and forums, such as [^1^]. However, these sources are not trustworthy and may contain malware or viruses that can harm your computer or steal your data. Therefore, we do not recommend downloading or installing the leaked update from any unauthorized source.

        -

        The only safe way to get IDA Pro 7.2 is to purchase it from the official website of Hex-Rays [^2^]. This way, you can ensure that you are getting the latest and most reliable version of IDA Pro, with full support and updates from the developer. You can also benefit from the official documentation, tutorials, and community forums that can help you learn and use IDA Pro effectively.

        -

        Why You Should Be Careful When Downloading Unofficial Software

        -

        Downloading unofficial software from unknown sources can expose you to various risks and dangers, such as:

        -
          -
        • Malware infection: The software may contain malicious code that can infect your computer with viruses, worms, trojans, ransomware, spyware, or other types of malware. These can damage your files, system settings, or hardware components, or steal your personal information, passwords, credit card details, or other sensitive data.
        • -
        • Lawsuit: The software may violate the intellectual property rights of the original developer or owner of the software. This can result in legal action against you for copyright infringement or piracy. You may face fines, penalties, or even jail time for using or distributing unauthorized software.
        • -
        • Lack of support: The software may not be compatible with your operating system or hardware configuration. It may also have bugs, errors, or glitches that can affect its performance or functionality. You may not be able to get any help or assistance from the original developer or other users if you encounter any problems or issues with the software.
        • -
        -

Therefore, you should always be careful when downloading unofficial software from unknown sources. Always check the reputation and credibility of the source before downloading anything, scan the downloaded files with a reliable antivirus program before opening or installing them, and read the software's terms and conditions before using it.

        -

        Conclusion

        -

IDA Pro 7.2 is a powerful and versatile tool for reverse engineering and debugging software. A leaked update for IDA Pro 7.2 has been circulating on the internet, claiming to offer new features and improvements over the official version. However, this update is not authorized by Hex-Rays and may put your computer, data, and projects at risk; the only safe way to get IDA Pro 7.2 is to purchase it from the official Hex-Rays website.

        -
        -
        \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/errors.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/errors.py deleted file mode 100644 index ec7fb3b6c4856708dc6bc3b0c35fd8df73156029..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/errors.py +++ /dev/null @@ -1,58 +0,0 @@ -"""setuptools.errors - -Provides exceptions used by setuptools modules. -""" - -from distutils import errors as _distutils_errors - - -# Re-export errors from distutils to facilitate the migration to PEP632 - -ByteCompileError = _distutils_errors.DistutilsByteCompileError -CCompilerError = _distutils_errors.CCompilerError -ClassError = _distutils_errors.DistutilsClassError -CompileError = _distutils_errors.CompileError -ExecError = _distutils_errors.DistutilsExecError -FileError = _distutils_errors.DistutilsFileError -InternalError = _distutils_errors.DistutilsInternalError -LibError = _distutils_errors.LibError -LinkError = _distutils_errors.LinkError -ModuleError = _distutils_errors.DistutilsModuleError -OptionError = _distutils_errors.DistutilsOptionError -PlatformError = _distutils_errors.DistutilsPlatformError -PreprocessError = _distutils_errors.PreprocessError -SetupError = _distutils_errors.DistutilsSetupError -TemplateError = _distutils_errors.DistutilsTemplateError -UnknownFileError = _distutils_errors.UnknownFileError - -# The root error class in the hierarchy -BaseError = _distutils_errors.DistutilsError - - -class RemovedCommandError(BaseError, RuntimeError): - """Error used for commands that have been removed in setuptools. - - Since ``setuptools`` is built on ``distutils``, simply removing a command - from ``setuptools`` will make the behavior fall back to ``distutils``; this - error is raised if a command exists in ``distutils`` but has been actively - removed in ``setuptools``. - """ - - -class PackageDiscoveryError(BaseError, RuntimeError): - """Impossible to perform automatic discovery of packages and/or modules. - - The current project layout or given discovery options can lead to problems when - scanning the project directory. - - Setuptools might also refuse to complete auto-discovery if an error prone condition - is detected (e.g. when a project is organised as a flat-layout but contains - multiple directories that can be taken as top-level packages inside a single - distribution [*]_). In these situations the users are encouraged to be explicit - about which packages to include or to make the discovery parameters more specific. - - .. [*] Since multi-package distributions are uncommon it is very likely that the - developers did not intend for all the directories to be packaged, and are just - leaving auxiliary code in the repository top-level, such as maintenance-related - scripts. - """ diff --git a/spaces/tomofi/MMOCR/mmocr/datasets/pipelines/textdet_targets/textsnake_targets.py b/spaces/tomofi/MMOCR/mmocr/datasets/pipelines/textdet_targets/textsnake_targets.py deleted file mode 100644 index 3a8e4d211d4effbe208fdb5e8add748b4e024bd4..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/mmocr/datasets/pipelines/textdet_targets/textsnake_targets.py +++ /dev/null @@ -1,496 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import cv2 -import numpy as np -from mmdet.core import BitmapMasks -from mmdet.datasets.builder import PIPELINES -from numpy.linalg import norm - -import mmocr.utils.check_argument as check_argument -from . import BaseTextDetTargets - - -@PIPELINES.register_module() -class TextSnakeTargets(BaseTextDetTargets): - """Generate the ground truth targets of TextSnake: TextSnake: A Flexible - Representation for Detecting Text of Arbitrary Shapes. - - [https://arxiv.org/abs/1807.01544]. This was partially adapted from - https://github.com/princewang1994/TextSnake.pytorch. - - Args: - orientation_thr (float): The threshold for distinguishing between - head edge and tail edge among the horizontal and vertical edges - of a quadrangle. - """ - - def __init__(self, - orientation_thr=2.0, - resample_step=4.0, - center_region_shrink_ratio=0.3): - - super().__init__() - self.orientation_thr = orientation_thr - self.resample_step = resample_step - self.center_region_shrink_ratio = center_region_shrink_ratio - self.eps = 1e-8 - - def vector_angle(self, vec1, vec2): - if vec1.ndim > 1: - unit_vec1 = vec1 / (norm(vec1, axis=-1) + self.eps).reshape( - (-1, 1)) - else: - unit_vec1 = vec1 / (norm(vec1, axis=-1) + self.eps) - if vec2.ndim > 1: - unit_vec2 = vec2 / (norm(vec2, axis=-1) + self.eps).reshape( - (-1, 1)) - else: - unit_vec2 = vec2 / (norm(vec2, axis=-1) + self.eps) - return np.arccos( - np.clip(np.sum(unit_vec1 * unit_vec2, axis=-1), -1.0, 1.0)) - - def vector_slope(self, vec): - assert len(vec) == 2 - return abs(vec[1] / (vec[0] + self.eps)) - - def vector_sin(self, vec): - assert len(vec) == 2 - return vec[1] / (norm(vec) + self.eps) - - def vector_cos(self, vec): - assert len(vec) == 2 - return vec[0] / (norm(vec) + self.eps) - - def find_head_tail(self, points, orientation_thr): - """Find the head edge and tail edge of a text polygon. - - Args: - points (ndarray): The points composing a text polygon. - orientation_thr (float): The threshold for distinguishing between - head edge and tail edge among the horizontal and vertical edges - of a quadrangle. - - Returns: - head_inds (list): The indexes of two points composing head edge. - tail_inds (list): The indexes of two points composing tail edge. 
- """ - - assert points.ndim == 2 - assert points.shape[0] >= 4 - assert points.shape[1] == 2 - assert isinstance(orientation_thr, float) - - if len(points) > 4: - pad_points = np.vstack([points, points[0]]) - edge_vec = pad_points[1:] - pad_points[:-1] - - theta_sum = [] - adjacent_vec_theta = [] - for i, edge_vec1 in enumerate(edge_vec): - adjacent_ind = [x % len(edge_vec) for x in [i - 1, i + 1]] - adjacent_edge_vec = edge_vec[adjacent_ind] - temp_theta_sum = np.sum( - self.vector_angle(edge_vec1, adjacent_edge_vec)) - temp_adjacent_theta = self.vector_angle( - adjacent_edge_vec[0], adjacent_edge_vec[1]) - theta_sum.append(temp_theta_sum) - adjacent_vec_theta.append(temp_adjacent_theta) - theta_sum_score = np.array(theta_sum) / np.pi - adjacent_theta_score = np.array(adjacent_vec_theta) / np.pi - poly_center = np.mean(points, axis=0) - edge_dist = np.maximum( - norm(pad_points[1:] - poly_center, axis=-1), - norm(pad_points[:-1] - poly_center, axis=-1)) - dist_score = edge_dist / (np.max(edge_dist) + self.eps) - position_score = np.zeros(len(edge_vec)) - score = 0.5 * theta_sum_score + 0.15 * adjacent_theta_score - score += 0.35 * dist_score - if len(points) % 2 == 0: - position_score[(len(score) // 2 - 1)] += 1 - position_score[-1] += 1 - score += 0.1 * position_score - pad_score = np.concatenate([score, score]) - score_matrix = np.zeros((len(score), len(score) - 3)) - x = np.arange(len(score) - 3) / float(len(score) - 4) - gaussian = 1. / (np.sqrt(2. * np.pi) * 0.5) * np.exp(-np.power( - (x - 0.5) / 0.5, 2.) / 2) - gaussian = gaussian / np.max(gaussian) - for i in range(len(score)): - score_matrix[i, :] = score[i] + pad_score[ - (i + 2):(i + len(score) - 1)] * gaussian * 0.3 - - head_start, tail_increment = np.unravel_index( - score_matrix.argmax(), score_matrix.shape) - tail_start = (head_start + tail_increment + 2) % len(points) - head_end = (head_start + 1) % len(points) - tail_end = (tail_start + 1) % len(points) - - if head_end > tail_end: - head_start, tail_start = tail_start, head_start - head_end, tail_end = tail_end, head_end - head_inds = [head_start, head_end] - tail_inds = [tail_start, tail_end] - else: - if self.vector_slope(points[1] - points[0]) + self.vector_slope( - points[3] - points[2]) < self.vector_slope( - points[2] - points[1]) + self.vector_slope(points[0] - - points[3]): - horizontal_edge_inds = [[0, 1], [2, 3]] - vertical_edge_inds = [[3, 0], [1, 2]] - else: - horizontal_edge_inds = [[3, 0], [1, 2]] - vertical_edge_inds = [[0, 1], [2, 3]] - - vertical_len_sum = norm(points[vertical_edge_inds[0][0]] - - points[vertical_edge_inds[0][1]]) + norm( - points[vertical_edge_inds[1][0]] - - points[vertical_edge_inds[1][1]]) - horizontal_len_sum = norm( - points[horizontal_edge_inds[0][0]] - - points[horizontal_edge_inds[0][1]]) + norm( - points[horizontal_edge_inds[1][0]] - - points[horizontal_edge_inds[1][1]]) - - if vertical_len_sum > horizontal_len_sum * orientation_thr: - head_inds = horizontal_edge_inds[0] - tail_inds = horizontal_edge_inds[1] - else: - head_inds = vertical_edge_inds[0] - tail_inds = vertical_edge_inds[1] - - return head_inds, tail_inds - - def reorder_poly_edge(self, points): - """Get the respective points composing head edge, tail edge, top - sideline and bottom sideline. - - Args: - points (ndarray): The points composing a text polygon. - - Returns: - head_edge (ndarray): The two points composing the head edge of text - polygon. - tail_edge (ndarray): The two points composing the tail edge of text - polygon. 
- top_sideline (ndarray): The points composing top curved sideline of - text polygon. - bot_sideline (ndarray): The points composing bottom curved sideline - of text polygon. - """ - - assert points.ndim == 2 - assert points.shape[0] >= 4 - assert points.shape[1] == 2 - - head_inds, tail_inds = self.find_head_tail(points, - self.orientation_thr) - head_edge, tail_edge = points[head_inds], points[tail_inds] - - pad_points = np.vstack([points, points]) - if tail_inds[1] < 1: - tail_inds[1] = len(points) - sideline1 = pad_points[head_inds[1]:tail_inds[1]] - sideline2 = pad_points[tail_inds[1]:(head_inds[1] + len(points))] - sideline_mean_shift = np.mean( - sideline1, axis=0) - np.mean( - sideline2, axis=0) - - if sideline_mean_shift[1] > 0: - top_sideline, bot_sideline = sideline2, sideline1 - else: - top_sideline, bot_sideline = sideline1, sideline2 - - return head_edge, tail_edge, top_sideline, bot_sideline - - def cal_curve_length(self, line): - """Calculate the length of each edge on the discrete curve and the sum. - - Args: - line (ndarray): The points composing a discrete curve. - - Returns: - tuple: Returns (edges_length, total_length). - - - | edge_length (ndarray): The length of each edge on the - discrete curve. - - | total_length (float): The total length of the discrete - curve. - """ - - assert line.ndim == 2 - assert len(line) >= 2 - - edges_length = np.sqrt((line[1:, 0] - line[:-1, 0])**2 + - (line[1:, 1] - line[:-1, 1])**2) - total_length = np.sum(edges_length) - return edges_length, total_length - - def resample_line(self, line, n): - """Resample n points on a line. - - Args: - line (ndarray): The points composing a line. - n (int): The resampled points number. - - Returns: - resampled_line (ndarray): The points composing the resampled line. - """ - - assert line.ndim == 2 - assert line.shape[0] >= 2 - assert line.shape[1] == 2 - assert isinstance(n, int) - assert n > 2 - - edges_length, total_length = self.cal_curve_length(line) - t_org = np.insert(np.cumsum(edges_length), 0, 0) - unit_t = total_length / (n - 1) - t_equidistant = np.arange(1, n - 1, dtype=np.float32) * unit_t - edge_ind = 0 - points = [line[0]] - for t in t_equidistant: - while edge_ind < len(edges_length) - 1 and t > t_org[edge_ind + 1]: - edge_ind += 1 - t_l, t_r = t_org[edge_ind], t_org[edge_ind + 1] - weight = np.array([t_r - t, t - t_l], dtype=np.float32) / ( - t_r - t_l + self.eps) - p_coords = np.dot(weight, line[[edge_ind, edge_ind + 1]]) - points.append(p_coords) - points.append(line[-1]) - resampled_line = np.vstack(points) - - return resampled_line - - def resample_sidelines(self, sideline1, sideline2, resample_step): - """Resample two sidelines to be of the same points number according to - step size. - - Args: - sideline1 (ndarray): The points composing a sideline of a text - polygon. - sideline2 (ndarray): The points composing another sideline of a - text polygon. - resample_step (float): The resampled step size. - - Returns: - resampled_line1 (ndarray): The resampled line 1. - resampled_line2 (ndarray): The resampled line 2. 
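        Examples:
            Illustrative example (hypothetical lengths): for sidelines of
            total length 40 and 44 with ``resample_step=4.0``, the average
            length is 42, so both sidelines are resampled to
            ``max(int(42 / 4.0) + 1, 3) == 11`` equally spaced points.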
- """ - - assert sideline1.ndim == sideline2.ndim == 2 - assert sideline1.shape[1] == sideline2.shape[1] == 2 - assert sideline1.shape[0] >= 2 - assert sideline2.shape[0] >= 2 - assert isinstance(resample_step, float) - - _, length1 = self.cal_curve_length(sideline1) - _, length2 = self.cal_curve_length(sideline2) - - avg_length = (length1 + length2) / 2 - resample_point_num = max(int(float(avg_length) / resample_step) + 1, 3) - - resampled_line1 = self.resample_line(sideline1, resample_point_num) - resampled_line2 = self.resample_line(sideline2, resample_point_num) - - return resampled_line1, resampled_line2 - - def draw_center_region_maps(self, top_line, bot_line, center_line, - center_region_mask, radius_map, sin_map, - cos_map, region_shrink_ratio): - """Draw attributes on text center region. - - Args: - top_line (ndarray): The points composing top curved sideline of - text polygon. - bot_line (ndarray): The points composing bottom curved sideline - of text polygon. - center_line (ndarray): The points composing the center line of text - instance. - center_region_mask (ndarray): The text center region mask. - radius_map (ndarray): The map where the distance from point to - sidelines will be drawn on for each pixel in text center - region. - sin_map (ndarray): The map where vector_sin(theta) will be drawn - on text center regions. Theta is the angle between tangent - line and vector (1, 0). - cos_map (ndarray): The map where vector_cos(theta) will be drawn on - text center regions. Theta is the angle between tangent line - and vector (1, 0). - region_shrink_ratio (float): The shrink ratio of text center. - """ - - assert top_line.shape == bot_line.shape == center_line.shape - assert (center_region_mask.shape == radius_map.shape == sin_map.shape - == cos_map.shape) - assert isinstance(region_shrink_ratio, float) - for i in range(0, len(center_line) - 1): - - top_mid_point = (top_line[i] + top_line[i + 1]) / 2 - bot_mid_point = (bot_line[i] + bot_line[i + 1]) / 2 - radius = norm(top_mid_point - bot_mid_point) / 2 - - text_direction = center_line[i + 1] - center_line[i] - sin_theta = self.vector_sin(text_direction) - cos_theta = self.vector_cos(text_direction) - - tl = center_line[i] + (top_line[i] - - center_line[i]) * region_shrink_ratio - tr = center_line[i + 1] + ( - top_line[i + 1] - center_line[i + 1]) * region_shrink_ratio - br = center_line[i + 1] + ( - bot_line[i + 1] - center_line[i + 1]) * region_shrink_ratio - bl = center_line[i] + (bot_line[i] - - center_line[i]) * region_shrink_ratio - current_center_box = np.vstack([tl, tr, br, bl]).astype(np.int32) - - cv2.fillPoly(center_region_mask, [current_center_box], color=1) - cv2.fillPoly(sin_map, [current_center_box], color=sin_theta) - cv2.fillPoly(cos_map, [current_center_box], color=cos_theta) - cv2.fillPoly(radius_map, [current_center_box], color=radius) - - def generate_center_mask_attrib_maps(self, img_size, text_polys): - """Generate text center region mask and geometric attribute maps. - - Args: - img_size (tuple): The image size of (height, width). - text_polys (list[list[ndarray]]): The list of text polygons. - - Returns: - center_region_mask (ndarray): The text center region mask. - radius_map (ndarray): The distance map from each pixel in text - center region to top sideline. - sin_map (ndarray): The sin(theta) map where theta is the angle - between vector (top point - bottom point) and vector (1, 0). 
- cos_map (ndarray): The cos(theta) map where theta is the angle - between vector (top point - bottom point) and vector (1, 0). - """ - - assert isinstance(img_size, tuple) - assert check_argument.is_2dlist(text_polys) - - h, w = img_size - - center_region_mask = np.zeros((h, w), np.uint8) - radius_map = np.zeros((h, w), dtype=np.float32) - sin_map = np.zeros((h, w), dtype=np.float32) - cos_map = np.zeros((h, w), dtype=np.float32) - - for poly in text_polys: - assert len(poly) == 1 - text_instance = [[poly[0][i], poly[0][i + 1]] - for i in range(0, len(poly[0]), 2)] - polygon_points = np.array(text_instance).reshape(-1, 2) - - n = len(polygon_points) - keep_inds = [] - for i in range(n): - if norm(polygon_points[i] - - polygon_points[(i + 1) % n]) > 1e-5: - keep_inds.append(i) - polygon_points = polygon_points[keep_inds] - - _, _, top_line, bot_line = self.reorder_poly_edge(polygon_points) - resampled_top_line, resampled_bot_line = self.resample_sidelines( - top_line, bot_line, self.resample_step) - resampled_bot_line = resampled_bot_line[::-1] - center_line = (resampled_top_line + resampled_bot_line) / 2 - - if self.vector_slope(center_line[-1] - center_line[0]) > 0.9: - if (center_line[-1] - center_line[0])[1] < 0: - center_line = center_line[::-1] - resampled_top_line = resampled_top_line[::-1] - resampled_bot_line = resampled_bot_line[::-1] - else: - if (center_line[-1] - center_line[0])[0] < 0: - center_line = center_line[::-1] - resampled_top_line = resampled_top_line[::-1] - resampled_bot_line = resampled_bot_line[::-1] - - line_head_shrink_len = norm(resampled_top_line[0] - - resampled_bot_line[0]) / 4.0 - line_tail_shrink_len = norm(resampled_top_line[-1] - - resampled_bot_line[-1]) / 4.0 - head_shrink_num = int(line_head_shrink_len // self.resample_step) - tail_shrink_num = int(line_tail_shrink_len // self.resample_step) - - if len(center_line) > head_shrink_num + tail_shrink_num + 2: - center_line = center_line[head_shrink_num:len(center_line) - - tail_shrink_num] - resampled_top_line = resampled_top_line[ - head_shrink_num:len(resampled_top_line) - tail_shrink_num] - resampled_bot_line = resampled_bot_line[ - head_shrink_num:len(resampled_bot_line) - tail_shrink_num] - - self.draw_center_region_maps(resampled_top_line, - resampled_bot_line, center_line, - center_region_mask, radius_map, - sin_map, cos_map, - self.center_region_shrink_ratio) - - return center_region_mask, radius_map, sin_map, cos_map - - def generate_text_region_mask(self, img_size, text_polys): - """Generate text center region mask and geometry attribute maps. - - Args: - img_size (tuple): The image size (height, width). - text_polys (list[list[ndarray]]): The list of text polygons. - - Returns: - text_region_mask (ndarray): The text region mask. - """ - - assert isinstance(img_size, tuple) - assert check_argument.is_2dlist(text_polys) - - h, w = img_size - text_region_mask = np.zeros((h, w), dtype=np.uint8) - - for poly in text_polys: - assert len(poly) == 1 - text_instance = [[poly[0][i], poly[0][i + 1]] - for i in range(0, len(poly[0]), 2)] - polygon = np.array( - text_instance, dtype=np.int32).reshape((1, -1, 2)) - cv2.fillPoly(text_region_mask, polygon, 1) - - return text_region_mask - - def generate_targets(self, results): - """Generate the gt targets for TextSnake. - - Args: - results (dict): The input result dictionary. - - Returns: - results (dict): The output result dictionary. 
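        Note:
            As a summary of the mapping built below: the result dictionary
            gains six ``BitmapMasks`` entries (``gt_text_mask``,
            ``gt_center_region_mask``, ``gt_mask``, ``gt_radius_map``,
            ``gt_sin_map``, ``gt_cos_map``), and ``mask_fields`` is cleared
            and repopulated with exactly these keys.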
- """ - - assert isinstance(results, dict) - - polygon_masks = results['gt_masks'].masks - polygon_masks_ignore = results['gt_masks_ignore'].masks - - h, w, _ = results['img_shape'] - - gt_text_mask = self.generate_text_region_mask((h, w), polygon_masks) - gt_mask = self.generate_effective_mask((h, w), polygon_masks_ignore) - - (gt_center_region_mask, gt_radius_map, gt_sin_map, - gt_cos_map) = self.generate_center_mask_attrib_maps((h, w), - polygon_masks) - - results['mask_fields'].clear() # rm gt_masks encoded by polygons - mapping = { - 'gt_text_mask': gt_text_mask, - 'gt_center_region_mask': gt_center_region_mask, - 'gt_mask': gt_mask, - 'gt_radius_map': gt_radius_map, - 'gt_sin_map': gt_sin_map, - 'gt_cos_map': gt_cos_map - } - for key, value in mapping.items(): - value = value if isinstance(value, list) else [value] - results[key] = BitmapMasks(value, h, w) - results['mask_fields'].append(key) - - return results diff --git a/spaces/tomofi/MMOCR/mmocr/models/textrecog/layers/lstm_layer.py b/spaces/tomofi/MMOCR/mmocr/models/textrecog/layers/lstm_layer.py deleted file mode 100644 index 16d3c1a4e5285c238176d2e0be76463657f282e5..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/mmocr/models/textrecog/layers/lstm_layer.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn - - -class BidirectionalLSTM(nn.Module): - - def __init__(self, nIn, nHidden, nOut): - super().__init__() - - self.rnn = nn.LSTM(nIn, nHidden, bidirectional=True) - self.embedding = nn.Linear(nHidden * 2, nOut) - - def forward(self, input): - recurrent, _ = self.rnn(input) - T, b, h = recurrent.size() - t_rec = recurrent.view(T * b, h) - - output = self.embedding(t_rec) # [T * b, nOut] - output = output.view(T, b, -1) - - return output diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/guided_anchoring/ga_retinanet_r50_caffe_fpn_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/guided_anchoring/ga_retinanet_r50_caffe_fpn_1x_coco.py deleted file mode 100644 index 33512011abb612ff5c762e75ee4492b382902fa4..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/guided_anchoring/ga_retinanet_r50_caffe_fpn_1x_coco.py +++ /dev/null @@ -1,62 +0,0 @@ -_base_ = '../retinanet/retinanet_r50_caffe_fpn_1x_coco.py' -model = dict( - bbox_head=dict( - _delete_=True, - type='GARetinaHead', - num_classes=80, - in_channels=256, - stacked_convs=4, - feat_channels=256, - approx_anchor_generator=dict( - type='AnchorGenerator', - octave_base_scale=4, - scales_per_octave=3, - ratios=[0.5, 1.0, 2.0], - strides=[8, 16, 32, 64, 128]), - square_anchor_generator=dict( - type='AnchorGenerator', - ratios=[1.0], - scales=[4], - strides=[8, 16, 32, 64, 128]), - anchor_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loc_filter_thr=0.01, - loss_loc=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_shape=dict(type='BoundedIoULoss', beta=0.2, loss_weight=1.0), - loss_cls=dict( - type='FocalLoss', - use_sigmoid=True, - gamma=2.0, - alpha=0.25, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=0.04, loss_weight=1.0)), - # training and testing settings - train_cfg=dict( - ga_assigner=dict( - type='ApproxMaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.4, - 
min_pos_iou=0.4, - ignore_iof_thr=-1), - ga_sampler=dict( - type='RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - assigner=dict(neg_iou_thr=0.5, min_pos_iou=0.0), - center_ratio=0.2, - ignore_ratio=0.5)) -optimizer_config = dict( - _delete_=True, grad_clip=dict(max_norm=35, norm_type=2)) diff --git a/spaces/tonyassi/video-face-swap/DeepFakeAI/uis/components/face_analyser.py b/spaces/tonyassi/video-face-swap/DeepFakeAI/uis/components/face_analyser.py deleted file mode 100644 index 117cd3ee22c36344954ccd18c18f4fabbeeee96d..0000000000000000000000000000000000000000 --- a/spaces/tonyassi/video-face-swap/DeepFakeAI/uis/components/face_analyser.py +++ /dev/null @@ -1,54 +0,0 @@ -from typing import Optional - -import gradio - -import DeepFakeAI.choices -import DeepFakeAI.globals -from DeepFakeAI import wording -from DeepFakeAI.uis import core as ui -from DeepFakeAI.uis.typing import Update - -FACE_ANALYSER_DIRECTION_DROPDOWN : Optional[gradio.Dropdown] = None -FACE_ANALYSER_AGE_DROPDOWN : Optional[gradio.Dropdown] = None -FACE_ANALYSER_GENDER_DROPDOWN : Optional[gradio.Dropdown] = None - - -def render() -> None: - global FACE_ANALYSER_DIRECTION_DROPDOWN - global FACE_ANALYSER_AGE_DROPDOWN - global FACE_ANALYSER_GENDER_DROPDOWN - - with gradio.Box(): - with gradio.Row(): - FACE_ANALYSER_DIRECTION_DROPDOWN = gradio.Dropdown( - label = wording.get('face_analyser_direction_dropdown_label'), - choices = DeepFakeAI.choices.face_analyser_direction, - value = DeepFakeAI.globals.face_analyser_direction - ) - FACE_ANALYSER_AGE_DROPDOWN = gradio.Dropdown( - label = wording.get('face_analyser_age_dropdown_label'), - choices = ['none'] + DeepFakeAI.choices.face_analyser_age, - value = DeepFakeAI.globals.face_analyser_age or 'none' - ) - FACE_ANALYSER_GENDER_DROPDOWN = gradio.Dropdown( - label = wording.get('face_analyser_gender_dropdown_label'), - choices = ['none'] + DeepFakeAI.choices.face_analyser_gender, - value = DeepFakeAI.globals.face_analyser_gender or 'none' - ) - ui.register_component('face_analyser_direction_dropdown', FACE_ANALYSER_DIRECTION_DROPDOWN) - ui.register_component('face_analyser_age_dropdown', FACE_ANALYSER_AGE_DROPDOWN) - ui.register_component('face_analyser_gender_dropdown', FACE_ANALYSER_GENDER_DROPDOWN) - - -def listen() -> None: - FACE_ANALYSER_DIRECTION_DROPDOWN.select(lambda value: update_dropdown('face_analyser_direction', value), inputs = FACE_ANALYSER_DIRECTION_DROPDOWN, outputs = FACE_ANALYSER_DIRECTION_DROPDOWN) - FACE_ANALYSER_AGE_DROPDOWN.select(lambda value: update_dropdown('face_analyser_age', value), inputs = FACE_ANALYSER_AGE_DROPDOWN, outputs = FACE_ANALYSER_AGE_DROPDOWN) - FACE_ANALYSER_GENDER_DROPDOWN.select(lambda value: update_dropdown('face_analyser_gender', value), inputs = FACE_ANALYSER_GENDER_DROPDOWN, outputs = FACE_ANALYSER_GENDER_DROPDOWN) - - -def update_dropdown(name : str, value : str) -> Update: - if value == 'none': - setattr(DeepFakeAI.globals, name, None) - else: - setattr(DeepFakeAI.globals, name, value) - return gradio.update(value = value) diff --git a/spaces/tovaru/vits-for-ba/preprocess.py b/spaces/tovaru/vits-for-ba/preprocess.py deleted file mode 100644 index 6e2859f662d94d38b334b2a99b183f920ace4f8b..0000000000000000000000000000000000000000 --- a/spaces/tovaru/vits-for-ba/preprocess.py +++ /dev/null @@ -1,25 +0,0 @@ -import argparse -import text -from utils import load_filepaths_and_text - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument("--out_extension", 
default="cleaned") - parser.add_argument("--text_index", default=2, type=int) - parser.add_argument("--filelists", nargs="+", default=["filelists/miyu_train.txt", "filelists/miyu_val.txt"]) - parser.add_argument("--text_cleaners", nargs="+", default=["japanese_cleaners"]) - - args = parser.parse_args() - - - for filelist in args.filelists: - print("START:", filelist) - filepaths_and_text = load_filepaths_and_text(filelist) - for i in range(len(filepaths_and_text)): - original_text = filepaths_and_text[i][args.text_index] - cleaned_text = text._clean_text(original_text, args.text_cleaners) - filepaths_and_text[i][args.text_index] = cleaned_text - - new_filelist = filelist + "." + args.out_extension - with open(new_filelist, "w", encoding="utf-8") as f: - f.writelines(["|".join(x) + "\n" for x in filepaths_and_text]) diff --git a/spaces/tsi-org/LLaVA/llava/model/llava_arch.py b/spaces/tsi-org/LLaVA/llava/model/llava_arch.py deleted file mode 100644 index fd538c93764347a496ba6cdb0859cd8ffcb02044..0000000000000000000000000000000000000000 --- a/spaces/tsi-org/LLaVA/llava/model/llava_arch.py +++ /dev/null @@ -1,248 +0,0 @@ -# Copyright 2023 Haotian Liu -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - - -from abc import ABC, abstractmethod - -import torch -import torch.nn as nn - -from .multimodal_encoder.builder import build_vision_tower -from .multimodal_projector.builder import build_vision_projector - -from llava.constants import IGNORE_INDEX, IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_PATCH_TOKEN, DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN - - -class LlavaMetaModel: - - def __init__(self, config): - super(LlavaMetaModel, self).__init__(config) - - if hasattr(config, "mm_vision_tower"): - self.vision_tower = build_vision_tower(config, delay_load=True) - self.mm_projector = build_vision_projector(config) - - def get_vision_tower(self): - vision_tower = getattr(self, 'vision_tower', None) - if type(vision_tower) is list: - vision_tower = vision_tower[0] - return vision_tower - - def initialize_vision_modules(self, model_args, fsdp=None): - vision_tower = model_args.vision_tower - mm_vision_select_layer = model_args.mm_vision_select_layer - mm_vision_select_feature = model_args.mm_vision_select_feature - pretrain_mm_mlp_adapter = model_args.pretrain_mm_mlp_adapter - - self.config.mm_vision_tower = vision_tower - - vision_tower = build_vision_tower(model_args) - - if fsdp is not None and len(fsdp) > 0: - self.vision_tower = [vision_tower] - else: - self.vision_tower = vision_tower - - self.config.use_mm_proj = True - self.config.mm_projector_type = getattr(model_args, 'mm_projector_type', 'linear') - self.config.mm_hidden_size = vision_tower.hidden_size - self.config.mm_vision_select_layer = mm_vision_select_layer - self.config.mm_vision_select_feature = mm_vision_select_feature - - self.mm_projector = build_vision_projector(self.config) - - if pretrain_mm_mlp_adapter is not None: - mm_projector_weights = torch.load(pretrain_mm_mlp_adapter, map_location='cpu') - def get_w(weights, keyword): - return 
{k.split(keyword + '.')[1]: v for k, v in weights.items() if keyword in k} - - self.mm_projector.load_state_dict(get_w(mm_projector_weights, 'mm_projector')) - - -class LlavaMetaForCausalLM(ABC): - - @abstractmethod - def get_model(self): - pass - - def get_vision_tower(self): - return self.get_model().get_vision_tower() - - def encode_images(self, images): - image_features = self.get_model().get_vision_tower()(images) - image_features = self.get_model().mm_projector(image_features) - return image_features - - def prepare_inputs_labels_for_multimodal( - self, input_ids, attention_mask, past_key_values, labels, images - ): - vision_tower = self.get_vision_tower() - if vision_tower is None or images is None or input_ids.shape[1] == 1: - if past_key_values is not None and vision_tower is not None and images is not None and input_ids.shape[1] == 1: - attention_mask = torch.ones((attention_mask.shape[0], past_key_values[-1][-1].shape[-2] + 1), dtype=attention_mask.dtype, device=attention_mask.device) - return input_ids, attention_mask, past_key_values, None, labels - - if type(images) is list or images.ndim == 5: - concat_images = torch.cat([image for image in images], dim=0) - image_features = self.encode_images(concat_images) - split_sizes = [image.shape[0] for image in images] - image_features = torch.split(image_features, split_sizes, dim=0) - image_features = [x.flatten(0, 1) for x in image_features] - else: - image_features = self.encode_images(images) - - new_input_embeds = [] - new_labels = [] if labels is not None else None - cur_image_idx = 0 - for batch_idx, cur_input_ids in enumerate(input_ids): - if (cur_input_ids == IMAGE_TOKEN_INDEX).sum() == 0: - # multimodal LLM, but the current sample is not multimodal - # FIXME: this is a hacky fix, for deepspeed zero3 to work - half_len = cur_input_ids.shape[0] // 2 - cur_image_features = image_features[cur_image_idx] - cur_input_embeds_1 = self.get_model().embed_tokens(cur_input_ids[:half_len]) - cur_input_embeds_2 = self.get_model().embed_tokens(cur_input_ids[half_len:]) - cur_input_embeds = torch.cat([cur_input_embeds_1, cur_image_features[0:0], cur_input_embeds_2], dim=0) - new_input_embeds.append(cur_input_embeds) - if labels is not None: - new_labels.append(labels[batch_idx]) - cur_image_idx += 1 - continue - image_token_indices = torch.where(cur_input_ids == IMAGE_TOKEN_INDEX)[0] - cur_new_input_embeds = [] - if labels is not None: - cur_labels = labels[batch_idx] - cur_new_labels = [] - assert cur_labels.shape == cur_input_ids.shape - while image_token_indices.numel() > 0: - cur_image_features = image_features[cur_image_idx] - image_token_start = image_token_indices[0] - if getattr(self.config, 'tune_mm_mlp_adapter', False) and getattr(self.config, 'mm_use_im_start_end', False): - cur_new_input_embeds.append(self.get_model().embed_tokens(cur_input_ids[:image_token_start-1]).detach()) - cur_new_input_embeds.append(self.get_model().embed_tokens(cur_input_ids[image_token_start-1:image_token_start])) - cur_new_input_embeds.append(cur_image_features) - cur_new_input_embeds.append(self.get_model().embed_tokens(cur_input_ids[image_token_start+1:image_token_start+2])) - if labels is not None: - cur_new_labels.append(cur_labels[:image_token_start]) - cur_new_labels.append(torch.full((cur_image_features.shape[0],), IGNORE_INDEX, device=labels.device, dtype=labels.dtype)) - cur_new_labels.append(cur_labels[image_token_start:image_token_start+1]) - cur_labels = cur_labels[image_token_start+2:] - else: - 
cur_new_input_embeds.append(self.get_model().embed_tokens(cur_input_ids[:image_token_start])) - cur_new_input_embeds.append(cur_image_features) - if labels is not None: - cur_new_labels.append(cur_labels[:image_token_start]) - cur_new_labels.append(torch.full((cur_image_features.shape[0],), IGNORE_INDEX, device=labels.device, dtype=labels.dtype)) - cur_labels = cur_labels[image_token_start+1:] - cur_image_idx += 1 - if getattr(self.config, 'tune_mm_mlp_adapter', False) and getattr(self.config, 'mm_use_im_start_end', False): - cur_input_ids = cur_input_ids[image_token_start+2:] - else: - cur_input_ids = cur_input_ids[image_token_start+1:] - image_token_indices = torch.where(cur_input_ids == IMAGE_TOKEN_INDEX)[0] - if cur_input_ids.numel() > 0: - if getattr(self.config, 'tune_mm_mlp_adapter', False) and getattr(self.config, 'mm_use_im_start_end', False): - cur_new_input_embeds.append(self.get_model().embed_tokens(cur_input_ids).detach()) - else: - cur_new_input_embeds.append(self.get_model().embed_tokens(cur_input_ids)) - if labels is not None: - cur_new_labels.append(cur_labels) - cur_new_input_embeds = [x.to(device=self.device) for x in cur_new_input_embeds] - cur_new_input_embeds = torch.cat(cur_new_input_embeds, dim=0) - new_input_embeds.append(cur_new_input_embeds) - if labels is not None: - cur_new_labels = torch.cat(cur_new_labels, dim=0) - new_labels.append(cur_new_labels) - - if any(x.shape != new_input_embeds[0].shape for x in new_input_embeds): - max_len = max(x.shape[0] for x in new_input_embeds) - - new_input_embeds_align = [] - for cur_new_embed in new_input_embeds: - cur_new_embed = torch.cat((cur_new_embed, torch.zeros((max_len - cur_new_embed.shape[0], cur_new_embed.shape[1]), dtype=cur_new_embed.dtype, device=cur_new_embed.device)), dim=0) - new_input_embeds_align.append(cur_new_embed) - new_input_embeds = torch.stack(new_input_embeds_align, dim=0) - - if labels is not None: - new_labels_align = [] - _new_labels = new_labels - for cur_new_label in new_labels: - cur_new_label = torch.cat((cur_new_label, torch.full((max_len - cur_new_label.shape[0],), IGNORE_INDEX, dtype=cur_new_label.dtype, device=cur_new_label.device)), dim=0) - new_labels_align.append(cur_new_label) - new_labels = torch.stack(new_labels_align, dim=0) - - if attention_mask is not None: - new_attention_mask = [] - for cur_attention_mask, cur_new_labels, cur_new_labels_align in zip(attention_mask, _new_labels, new_labels): - new_attn_mask_pad_left = torch.full((cur_new_labels.shape[0] - labels.shape[1],), True, dtype=attention_mask.dtype, device=attention_mask.device) - new_attn_mask_pad_right = torch.full((cur_new_labels_align.shape[0] - cur_new_labels.shape[0],), False, dtype=attention_mask.dtype, device=attention_mask.device) - cur_new_attention_mask = torch.cat((new_attn_mask_pad_left, cur_attention_mask, new_attn_mask_pad_right), dim=0) - new_attention_mask.append(cur_new_attention_mask) - attention_mask = torch.stack(new_attention_mask, dim=0) - assert attention_mask.shape == new_labels.shape - else: - new_input_embeds = torch.stack(new_input_embeds, dim=0) - if labels is not None: - new_labels = torch.stack(new_labels, dim=0) - - if attention_mask is not None: - new_attn_mask_pad_left = torch.full((attention_mask.shape[0], new_input_embeds.shape[1] - input_ids.shape[1]), True, dtype=attention_mask.dtype, device=attention_mask.device) - attention_mask = torch.cat((new_attn_mask_pad_left, attention_mask), dim=1) - assert attention_mask.shape == new_input_embeds.shape[:2] - - return None, attention_mask, 
past_key_values, new_input_embeds, new_labels - - def initialize_vision_tokenizer(self, model_args, tokenizer): - if model_args.mm_use_im_patch_token: - tokenizer.add_tokens([DEFAULT_IMAGE_PATCH_TOKEN], special_tokens=True) - self.resize_token_embeddings(len(tokenizer)) - - if model_args.mm_use_im_start_end: - num_new_tokens = tokenizer.add_tokens([DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN], special_tokens=True) - self.resize_token_embeddings(len(tokenizer)) - - if num_new_tokens > 0: - input_embeddings = self.get_input_embeddings().weight.data - output_embeddings = self.get_output_embeddings().weight.data - - input_embeddings_avg = input_embeddings[:-num_new_tokens].mean( - dim=0, keepdim=True) - output_embeddings_avg = output_embeddings[:-num_new_tokens].mean( - dim=0, keepdim=True) - - input_embeddings[-num_new_tokens:] = input_embeddings_avg - output_embeddings[-num_new_tokens:] = output_embeddings_avg - - if model_args.tune_mm_mlp_adapter: - for p in self.get_input_embeddings().parameters(): - p.requires_grad = True - for p in self.get_output_embeddings().parameters(): - p.requires_grad = False - - if model_args.pretrain_mm_mlp_adapter: - mm_projector_weights = torch.load(model_args.pretrain_mm_mlp_adapter, map_location='cpu') - embed_tokens_weight = mm_projector_weights['model.embed_tokens.weight'] - assert num_new_tokens == 2 - if input_embeddings.shape == embed_tokens_weight.shape: - input_embeddings[-num_new_tokens:] = embed_tokens_weight[-num_new_tokens:] - elif embed_tokens_weight.shape[0] == num_new_tokens: - input_embeddings[-num_new_tokens:] = embed_tokens_weight - else: - raise ValueError(f"Unexpected embed_tokens_weight shape. Pretrained: {embed_tokens_weight.shape}. Current: {input_embeddings.shape}. Numer of new tokens: {num_new_tokens}.") - elif model_args.mm_use_im_patch_token: - if model_args.tune_mm_mlp_adapter: - for p in self.get_input_embeddings().parameters(): - p.requires_grad = False - for p in self.get_output_embeddings().parameters(): - p.requires_grad = False diff --git a/spaces/ttt246/brain/Android/README.md b/spaces/ttt246/brain/Android/README.md deleted file mode 100644 index 2c05f1f084b786b15dc3366196309c0754c364a1..0000000000000000000000000000000000000000 --- a/spaces/ttt246/brain/Android/README.md +++ /dev/null @@ -1,52 +0,0 @@ -# 🌍 RisingBrain-Android:Your AI OS Companion 📱 - -Venture into the future of artificial intelligence with **RisingBrain Android**- an essential component of 🧠RisingBrain🧠 - -We bring the revolutionary AI-powered OS right at your fingertips with our Android application. -This Android counterpart ensures you get all the smart features of RisingBrain OS on your smartphone. - -Let's dive into the specifics!😆 - -## Getting Started 🏁 - -

        - -

        -

        - -

        - -This application has built with **MVVM architecture pattern**. (Using Android Architecture Components). - -Repository Pattern, to abstract the source of data in the application. -Using of View Model, Live Data and data binding. - -***The Application utilizes such popular libraries as: [Room](https://developer.android.com/training/data-storage/room), [OkHttp](https://github.com/square/okhttp), [Retrofit](https://github.com/square/retrofit), [Glide](https://github.com/bumptech/glide), etc. -Written in [Kotlin](https://kotlinlang.org/).*** - -## Features 💫 - -| Title | Description | ScreenShot | -|:------------------------------:|:--------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------:| -| Around-the-Clock Conversations | Enjoy casual chats with Rising AI, right from your Android device. | | -| Quick Web Tours | Experience swift and accurate browsing based on your interests or queries. | | -| Picture Perfect | Hunt for images using visual cues or descriptions with our smart image search. | | -| Stay Connected, Swiftly | Find contacts and initiate calls or text messages without extra effort. | | -| On-Time, Every Time | Set timely reminders and never miss an important event with our alarm feature. | | -| Swift Emailing | Send emails instantly, without having to switch between apps. | | -| Configured Just for You | Modify backend settings as preferred for an optimized user experience. | | -| Real-Time Data | Stay updated with real-time data reflecting in your responses. | | - -## Compatibility 🤝 -Our Android app is designed with a broad compatibility range, supporting various Android versions. Regardless of the device you own, RisingBrain Android ensures a seamless, advanced, and user-centric experience. - -## Contributing 💪 -We appreciate your interest in enhancing our work! Please respect the style and contribution guidelines of every project when submitting patches and additions. Our general Git workflow of choice is "fork-and-pull". - - 1. **Fork** the repository on GitHub - 2. **Clone** your fork to your machine - 3. **Commit** the changes to your personal branch - 4. **Push** these updates back to your fork - 5. Don't forget to submit a **Pull Request** for us to study your contributions. - -NOTE: Sync with "upstream" to have the latest updates before you make a pull request! 
\ No newline at end of file diff --git a/spaces/umoubuton/atri-bert-vits2/mel_processing.py b/spaces/umoubuton/atri-bert-vits2/mel_processing.py deleted file mode 100644 index aab5bd926a194610b7ce3da29c553bd877341aa4..0000000000000000000000000000000000000000 --- a/spaces/umoubuton/atri-bert-vits2/mel_processing.py +++ /dev/null @@ -1,139 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.0: - print("min value is ", torch.min(y)) - if torch.max(y) > 1.0: - print("max value is ", torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + "_" + str(y.device) - wnsize_dtype_device = str(win_size) + "_" + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to( - dtype=y.dtype, device=y.device - ) - - y = torch.nn.functional.pad( - y.unsqueeze(1), - (int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)), - mode="reflect", - ) - y = y.squeeze(1) - - spec = torch.stft( - y, - n_fft, - hop_length=hop_size, - win_length=win_size, - window=hann_window[wnsize_dtype_device], - center=center, - pad_mode="reflect", - normalized=False, - onesided=True, - return_complex=False, - ) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + "_" + str(spec.device) - fmax_dtype_device = str(fmax) + "_" + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to( - dtype=spec.dtype, device=spec.device - ) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch( - y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False -): - if torch.min(y) < -1.0: - print("min value is ", torch.min(y)) - if torch.max(y) > 1.0: - print("max value is ", torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + "_" + str(y.device) - fmax_dtype_device = str(fmax) + "_" + dtype_device - wnsize_dtype_device = str(win_size) + "_" + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to( - dtype=y.dtype, device=y.device - ) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to( - dtype=y.dtype, device=y.device - ) - - y = torch.nn.functional.pad( - y.unsqueeze(1), - (int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)), - mode="reflect", - ) - y = y.squeeze(1) - - spec = torch.stft( - y, - n_fft, - 
hop_length=hop_size, - win_length=win_size, - window=hann_window[wnsize_dtype_device], - center=center, - pad_mode="reflect", - normalized=False, - onesided=True, - return_complex=False, - ) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Aaja Sanam Aagossh Mein movie dual audio 720p Reviews ratings and more.md b/spaces/usbethFlerru/sovits-modelsV2/example/Aaja Sanam Aagossh Mein movie dual audio 720p Reviews ratings and more.md deleted file mode 100644 index 35d723f1048f7abf97a5efb842bde4a719e37e8e..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Aaja Sanam Aagossh Mein movie dual audio 720p Reviews ratings and more.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Aaja Sanam Aagossh Mein movie dual audio 720p


        Download Zip ……… https://urlcod.com/2uyVxB



        -
        - aaccfb2cb3
        -
        -
        -

        diff --git a/spaces/usbethFlerru/sovits-modelsV2/example/Digital Communication By Sam Shanmugam Pdf WORK.md b/spaces/usbethFlerru/sovits-modelsV2/example/Digital Communication By Sam Shanmugam Pdf WORK.md deleted file mode 100644 index fa6da849e51b182f8f73e311a80312ed4852481e..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/example/Digital Communication By Sam Shanmugam Pdf WORK.md +++ /dev/null @@ -1,26 +0,0 @@ - -

        Digital Communication By Sam Shanmugam Pdf: A Comprehensive Guide to Digital and Analog Communication Systems

        -

        Digital communication is the process of transmitting and receiving information using digital signals, such as binary digits (0 and 1). Analog communication is the process of transmitting and receiving information using analog signals, such as continuous waves of varying amplitude or frequency. Both digital and analog communication systems have their advantages and disadvantages, depending on the application and the requirements of the user.

        -

        Digital Communication By Sam Shanmugam Pdf


        DOWNLOAD ✶✶✶ https://urlcod.com/2uyVqq



        -

        One of the most popular books on digital and analog communication systems is Digital and Analog Communication Systems by K. Sam Shanmugam. This book provides a detailed, unified treatment of theoretical and practical aspects of both types of communication systems, with emphasis on digital communication systems. It integrates theory-keeping theoretical details to a minimum-with over 60 practical, worked examples illustrating real-life methods. The text emphasizes deriving design equations that relate performance of functional blocks to design parameters. It illustrates how to trade off between power, band-width and equipment complexity while maintaining an acceptable quality of performance. Material is modularized so that appropriate portions can be selected to teach several different courses. The book also includes over 300 problems and an annotated bibliography in each chapter.

        -

        The book covers topics such as signal analysis, modulation techniques, noise analysis, coding techniques, multiplexing, synchronization, spread spectrum techniques, and fiber-optic communication. It also includes chapters on digital data transmission, digital modulation techniques, error control coding, information theory, and cryptography. The book is suitable for undergraduate and graduate students of electrical engineering, computer engineering, and telecommunications engineering. It can also be used as a reference book by practicing engineers and researchers in the field of communication systems.

        -

        The book was first published in 1979 by Wiley India Pvt. Limited. The latest edition is the fourth edition, which was published in 2006. The book is available in both hardcover and paperback formats. The book can be purchased online from various websites such as Amazon.com or Flipkart.com. Alternatively, the book can be downloaded as a pdf file from various websites such as Archive.org or BookSG.com.

        -

        -

        If you are interested in learning more about digital and analog communication systems, you should definitely check out Digital Communication By Sam Shanmugam Pdf. It is one of the best books on the subject and will help you master the concepts and applications of communication systems.

        - -

        One of the questions that may arise when studying digital and analog communication systems is: which one is better? The answer is not straightforward, as both types of communication systems have their pros and cons, depending on the application and the requirements of the user. Some of the factors that may influence the choice of communication system are:

        -
          -
        • Bandwidth: Digital communication systems can achieve higher data rates and more efficient use of bandwidth than analog communication systems, as they can use techniques such as compression, multiplexing, and modulation to reduce the amount of bits needed to represent a message. However, digital communication systems also require more bandwidth than analog communication systems for the same quality of signal, as they introduce quantization noise and require extra bits for error correction and synchronization.
        • -
        • Noise: Analog communication systems are more susceptible to noise and interference than digital communication systems, as noise can distort the amplitude or frequency of the analog signal and affect the quality of the message. Digital communication systems can use techniques such as encoding, encryption, and error correction to protect the message from noise and interference. However, digital communication systems also have a threshold level of noise beyond which they cannot recover the message, whereas analog communication systems can still provide some information even in high noise conditions.
        • -
        • Cost: Analog communication systems are generally cheaper and simpler than digital communication systems, as they require less complex hardware and software components. Digital communication systems require more sophisticated devices such as analog-to-digital converters, digital-to-analog converters, modems, codecs, and processors to perform various operations on the digital signal. However, digital communication systems can also benefit from economies of scale and technological advances that reduce their cost over time.
        • -
        • Compatibility: Analog communication systems are more compatible with each other than digital communication systems, as they use similar standards and formats for transmitting and receiving signals. Digital communication systems may use different protocols and formats for different applications and devices, which may cause compatibility issues and require conversion or adaptation. However, digital communication systems can also offer more flexibility and interoperability than analog communication systems, as they can use techniques such as modulation, multiplexing, and switching to accommodate different types of signals and channels.
        • -
        -

        Therefore, the choice of digital or analog communication system depends on the trade-offs between these factors and the specific needs of the user. Some examples of applications that use digital or analog communication systems are:

        -
          -
        • Telephony: Telephony is the transmission of voice signals over a distance using a communication system. Traditionally, telephony used analog communication systems such as public switched telephone networks (PSTN) that used copper wires to carry analog voice signals. However, with the advent of digital technology, telephony has shifted to digital communication systems such as voice over internet protocol (VoIP) that use packet-switched networks to carry digital voice signals. Digital telephony offers advantages such as higher quality, lower cost, more features, and more integration with other services than analog telephony.
        • -
        • Radio: Radio is the transmission of electromagnetic waves through free space using a transmitter and a receiver. Radio can use either analog or digital communication systems depending on the type of signal and the purpose of transmission. For example, amplitude modulation (AM) and frequency modulation (FM) are analog communication systems that modulate an analog carrier wave with an analog message signal. On the other hand, digital radio broadcasting (DRB) and satellite radio are digital communication systems that modulate a digital carrier wave with a digital message signal. Digital radio offers advantages such as higher quality, more channels, more information, and more security than analog radio.
        • -
        • Television: Television is the transmission of moving images and sound using a communication system. Television can use either analog or digital communication systems depending on the type of signal and the purpose of transmission. For example, analog television broadcasting (ATV) uses analog signals to transmit video and audio signals using standards such as NTSC, PAL, or SECAM. On the other hand, digital television broadcasting (DTV) uses digital signals to transmit video and audio signals using standards such as ATSC, DVB-T, or ISDB-T. Digital television offers advantages such as higher quality, more channels, more interactivity, and more convergence than analog television.
        • -
        -

        In conclusion, digital and analog communication systems are both important and useful for different applications and scenarios. By understanding their principles and characteristics, one can design and analyze real-world communication systems that meet the requirements of the user.

        d5da3c52bf
        -
        -
        \ No newline at end of file diff --git a/spaces/user238921933/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/midas/transforms.py b/spaces/user238921933/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/midas/transforms.py deleted file mode 100644 index 350cbc11662633ad7f8968eb10be2e7de6e384e9..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/midas/transforms.py +++ /dev/null @@ -1,234 +0,0 @@ -import numpy as np -import cv2 -import math - - -def apply_min_size(sample, size, image_interpolation_method=cv2.INTER_AREA): - """Rezise the sample to ensure the given size. Keeps aspect ratio. - - Args: - sample (dict): sample - size (tuple): image size - - Returns: - tuple: new size - """ - shape = list(sample["disparity"].shape) - - if shape[0] >= size[0] and shape[1] >= size[1]: - return sample - - scale = [0, 0] - scale[0] = size[0] / shape[0] - scale[1] = size[1] / shape[1] - - scale = max(scale) - - shape[0] = math.ceil(scale * shape[0]) - shape[1] = math.ceil(scale * shape[1]) - - # resize - sample["image"] = cv2.resize( - sample["image"], tuple(shape[::-1]), interpolation=image_interpolation_method - ) - - sample["disparity"] = cv2.resize( - sample["disparity"], tuple(shape[::-1]), interpolation=cv2.INTER_NEAREST - ) - sample["mask"] = cv2.resize( - sample["mask"].astype(np.float32), - tuple(shape[::-1]), - interpolation=cv2.INTER_NEAREST, - ) - sample["mask"] = sample["mask"].astype(bool) - - return tuple(shape) - - -class Resize(object): - """Resize sample to given size (width, height). - """ - - def __init__( - self, - width, - height, - resize_target=True, - keep_aspect_ratio=False, - ensure_multiple_of=1, - resize_method="lower_bound", - image_interpolation_method=cv2.INTER_AREA, - ): - """Init. - - Args: - width (int): desired output width - height (int): desired output height - resize_target (bool, optional): - True: Resize the full sample (image, mask, target). - False: Resize image only. - Defaults to True. - keep_aspect_ratio (bool, optional): - True: Keep the aspect ratio of the input sample. - Output sample might not have the given width and height, and - resize behaviour depends on the parameter 'resize_method'. - Defaults to False. - ensure_multiple_of (int, optional): - Output width and height is constrained to be multiple of this parameter. - Defaults to 1. - resize_method (str, optional): - "lower_bound": Output will be at least as large as the given size. - "upper_bound": Output will be at max as large as the given size. (Output size might be smaller than given size.) - "minimal": Scale as least as possible. (Output size might be smaller than given size.) - Defaults to "lower_bound". 
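        Examples:
            Illustrative example (hypothetical values): with ``width=384``,
            ``height=384``, ``keep_aspect_ratio=True``,
            ``ensure_multiple_of=32`` and ``resize_method="lower_bound"``, a
            1920 x 1080 (W x H) input is scaled by 384 / 1080 and snapped to
            multiples of 32, so ``get_size(1920, 1080)`` returns
            ``(672, 384)`` as (width, height); both sides stay at or above
            the requested size.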
- """ - self.__width = width - self.__height = height - - self.__resize_target = resize_target - self.__keep_aspect_ratio = keep_aspect_ratio - self.__multiple_of = ensure_multiple_of - self.__resize_method = resize_method - self.__image_interpolation_method = image_interpolation_method - - def constrain_to_multiple_of(self, x, min_val=0, max_val=None): - y = (np.round(x / self.__multiple_of) * self.__multiple_of).astype(int) - - if max_val is not None and y > max_val: - y = (np.floor(x / self.__multiple_of) * self.__multiple_of).astype(int) - - if y < min_val: - y = (np.ceil(x / self.__multiple_of) * self.__multiple_of).astype(int) - - return y - - def get_size(self, width, height): - # determine new height and width - scale_height = self.__height / height - scale_width = self.__width / width - - if self.__keep_aspect_ratio: - if self.__resize_method == "lower_bound": - # scale such that output size is lower bound - if scale_width > scale_height: - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - elif self.__resize_method == "upper_bound": - # scale such that output size is upper bound - if scale_width < scale_height: - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - elif self.__resize_method == "minimal": - # scale as least as possbile - if abs(1 - scale_width) < abs(1 - scale_height): - # fit width - scale_height = scale_width - else: - # fit height - scale_width = scale_height - else: - raise ValueError( - f"resize_method {self.__resize_method} not implemented" - ) - - if self.__resize_method == "lower_bound": - new_height = self.constrain_to_multiple_of( - scale_height * height, min_val=self.__height - ) - new_width = self.constrain_to_multiple_of( - scale_width * width, min_val=self.__width - ) - elif self.__resize_method == "upper_bound": - new_height = self.constrain_to_multiple_of( - scale_height * height, max_val=self.__height - ) - new_width = self.constrain_to_multiple_of( - scale_width * width, max_val=self.__width - ) - elif self.__resize_method == "minimal": - new_height = self.constrain_to_multiple_of(scale_height * height) - new_width = self.constrain_to_multiple_of(scale_width * width) - else: - raise ValueError(f"resize_method {self.__resize_method} not implemented") - - return (new_width, new_height) - - def __call__(self, sample): - width, height = self.get_size( - sample["image"].shape[1], sample["image"].shape[0] - ) - - # resize sample - sample["image"] = cv2.resize( - sample["image"], - (width, height), - interpolation=self.__image_interpolation_method, - ) - - if self.__resize_target: - if "disparity" in sample: - sample["disparity"] = cv2.resize( - sample["disparity"], - (width, height), - interpolation=cv2.INTER_NEAREST, - ) - - if "depth" in sample: - sample["depth"] = cv2.resize( - sample["depth"], (width, height), interpolation=cv2.INTER_NEAREST - ) - - sample["mask"] = cv2.resize( - sample["mask"].astype(np.float32), - (width, height), - interpolation=cv2.INTER_NEAREST, - ) - sample["mask"] = sample["mask"].astype(bool) - - return sample - - -class NormalizeImage(object): - """Normlize image by given mean and std. - """ - - def __init__(self, mean, std): - self.__mean = mean - self.__std = std - - def __call__(self, sample): - sample["image"] = (sample["image"] - self.__mean) / self.__std - - return sample - - -class PrepareForNet(object): - """Prepare sample for usage as network input. 
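    Note:
        Summary of the conversion performed in ``__call__``:
        ``sample["image"]`` is transposed from HWC to CHW and cast to a
        contiguous ``float32`` array; any ``mask``, ``disparity`` or
        ``depth`` entries are likewise cast to contiguous ``float32``
        without transposition.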
- """ - - def __init__(self): - pass - - def __call__(self, sample): - image = np.transpose(sample["image"], (2, 0, 1)) - sample["image"] = np.ascontiguousarray(image).astype(np.float32) - - if "mask" in sample: - sample["mask"] = sample["mask"].astype(np.float32) - sample["mask"] = np.ascontiguousarray(sample["mask"]) - - if "disparity" in sample: - disparity = sample["disparity"].astype(np.float32) - sample["disparity"] = np.ascontiguousarray(disparity) - - if "depth" in sample: - depth = sample["depth"].astype(np.float32) - sample["depth"] = np.ascontiguousarray(depth) - - return sample diff --git a/spaces/ussrcccp/White-box-Cartoonization/wbc/network.py b/spaces/ussrcccp/White-box-Cartoonization/wbc/network.py deleted file mode 100644 index 6f16cee1aa1994d0a78c524f459764de5164e637..0000000000000000000000000000000000000000 --- a/spaces/ussrcccp/White-box-Cartoonization/wbc/network.py +++ /dev/null @@ -1,62 +0,0 @@ -import tensorflow as tf -import numpy as np -import tensorflow.contrib.slim as slim - - - -def resblock(inputs, out_channel=32, name='resblock'): - - with tf.variable_scope(name): - - x = slim.convolution2d(inputs, out_channel, [3, 3], - activation_fn=None, scope='conv1') - x = tf.nn.leaky_relu(x) - x = slim.convolution2d(x, out_channel, [3, 3], - activation_fn=None, scope='conv2') - - return x + inputs - - - - -def unet_generator(inputs, channel=32, num_blocks=4, name='generator', reuse=False): - with tf.variable_scope(name, reuse=reuse): - - x0 = slim.convolution2d(inputs, channel, [7, 7], activation_fn=None) - x0 = tf.nn.leaky_relu(x0) - - x1 = slim.convolution2d(x0, channel, [3, 3], stride=2, activation_fn=None) - x1 = tf.nn.leaky_relu(x1) - x1 = slim.convolution2d(x1, channel*2, [3, 3], activation_fn=None) - x1 = tf.nn.leaky_relu(x1) - - x2 = slim.convolution2d(x1, channel*2, [3, 3], stride=2, activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - x2 = slim.convolution2d(x2, channel*4, [3, 3], activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - - for idx in range(num_blocks): - x2 = resblock(x2, out_channel=channel*4, name='block_{}'.format(idx)) - - x2 = slim.convolution2d(x2, channel*2, [3, 3], activation_fn=None) - x2 = tf.nn.leaky_relu(x2) - - h1, w1 = tf.shape(x2)[1], tf.shape(x2)[2] - x3 = tf.image.resize_bilinear(x2, (h1*2, w1*2)) - x3 = slim.convolution2d(x3+x1, channel*2, [3, 3], activation_fn=None) - x3 = tf.nn.leaky_relu(x3) - x3 = slim.convolution2d(x3, channel, [3, 3], activation_fn=None) - x3 = tf.nn.leaky_relu(x3) - - h2, w2 = tf.shape(x3)[1], tf.shape(x3)[2] - x4 = tf.image.resize_bilinear(x3, (h2*2, w2*2)) - x4 = slim.convolution2d(x4+x0, channel, [3, 3], activation_fn=None) - x4 = tf.nn.leaky_relu(x4) - x4 = slim.convolution2d(x4, 3, [7, 7], activation_fn=None) - - return x4 - -if __name__ == '__main__': - - - pass \ No newline at end of file diff --git a/spaces/vinay123/panoptic-segment-anything/segment_anything/segment_anything/modeling/prompt_encoder.py b/spaces/vinay123/panoptic-segment-anything/segment_anything/segment_anything/modeling/prompt_encoder.py deleted file mode 100644 index c3143f4f8e02ddd7ca8587b40ff5d47c3a6b7ef3..0000000000000000000000000000000000000000 --- a/spaces/vinay123/panoptic-segment-anything/segment_anything/segment_anything/modeling/prompt_encoder.py +++ /dev/null @@ -1,214 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import numpy as np -import torch -from torch import nn - -from typing import Any, Optional, Tuple, Type - -from .common import LayerNorm2d - - -class PromptEncoder(nn.Module): - def __init__( - self, - embed_dim: int, - image_embedding_size: Tuple[int, int], - input_image_size: Tuple[int, int], - mask_in_chans: int, - activation: Type[nn.Module] = nn.GELU, - ) -> None: - """ - Encodes prompts for input to SAM's mask decoder. - - Arguments: - embed_dim (int): The prompts' embedding dimension - image_embedding_size (tuple(int, int)): The spatial size of the - image embedding, as (H, W). - input_image_size (int): The padded size of the image as input - to the image encoder, as (H, W). - mask_in_chans (int): The number of hidden channels used for - encoding input masks. - activation (nn.Module): The activation to use when encoding - input masks. - """ - super().__init__() - self.embed_dim = embed_dim - self.input_image_size = input_image_size - self.image_embedding_size = image_embedding_size - self.pe_layer = PositionEmbeddingRandom(embed_dim // 2) - - self.num_point_embeddings: int = 4 # pos/neg point + 2 box corners - point_embeddings = [nn.Embedding(1, embed_dim) for i in range(self.num_point_embeddings)] - self.point_embeddings = nn.ModuleList(point_embeddings) - self.not_a_point_embed = nn.Embedding(1, embed_dim) - - self.mask_input_size = (4 * image_embedding_size[0], 4 * image_embedding_size[1]) - self.mask_downscaling = nn.Sequential( - nn.Conv2d(1, mask_in_chans // 4, kernel_size=2, stride=2), - LayerNorm2d(mask_in_chans // 4), - activation(), - nn.Conv2d(mask_in_chans // 4, mask_in_chans, kernel_size=2, stride=2), - LayerNorm2d(mask_in_chans), - activation(), - nn.Conv2d(mask_in_chans, embed_dim, kernel_size=1), - ) - self.no_mask_embed = nn.Embedding(1, embed_dim) - - def get_dense_pe(self) -> torch.Tensor: - """ - Returns the positional encoding used to encode point prompts, - applied to a dense set of points the shape of the image encoding. 
- - Returns: - torch.Tensor: Positional encoding with shape - 1x(embed_dim)x(embedding_h)x(embedding_w) - """ - return self.pe_layer(self.image_embedding_size).unsqueeze(0) - - def _embed_points( - self, - points: torch.Tensor, - labels: torch.Tensor, - pad: bool, - ) -> torch.Tensor: - """Embeds point prompts.""" - points = points + 0.5 # Shift to center of pixel - if pad: - padding_point = torch.zeros((points.shape[0], 1, 2), device=points.device) - padding_label = -torch.ones((labels.shape[0], 1), device=labels.device) - points = torch.cat([points, padding_point], dim=1) - labels = torch.cat([labels, padding_label], dim=1) - point_embedding = self.pe_layer.forward_with_coords(points, self.input_image_size) - point_embedding[labels == -1] = 0.0 - point_embedding[labels == -1] += self.not_a_point_embed.weight - point_embedding[labels == 0] += self.point_embeddings[0].weight - point_embedding[labels == 1] += self.point_embeddings[1].weight - return point_embedding - - def _embed_boxes(self, boxes: torch.Tensor) -> torch.Tensor: - """Embeds box prompts.""" - boxes = boxes + 0.5 # Shift to center of pixel - coords = boxes.reshape(-1, 2, 2) - corner_embedding = self.pe_layer.forward_with_coords(coords, self.input_image_size) - corner_embedding[:, 0, :] += self.point_embeddings[2].weight - corner_embedding[:, 1, :] += self.point_embeddings[3].weight - return corner_embedding - - def _embed_masks(self, masks: torch.Tensor) -> torch.Tensor: - """Embeds mask inputs.""" - mask_embedding = self.mask_downscaling(masks) - return mask_embedding - - def _get_batch_size( - self, - points: Optional[Tuple[torch.Tensor, torch.Tensor]], - boxes: Optional[torch.Tensor], - masks: Optional[torch.Tensor], - ) -> int: - """ - Gets the batch size of the output given the batch size of the input prompts. - """ - if points is not None: - return points[0].shape[0] - elif boxes is not None: - return boxes.shape[0] - elif masks is not None: - return masks.shape[0] - else: - return 1 - - def _get_device(self) -> torch.device: - return self.point_embeddings[0].weight.device - - def forward( - self, - points: Optional[Tuple[torch.Tensor, torch.Tensor]], - boxes: Optional[torch.Tensor], - masks: Optional[torch.Tensor], - ) -> Tuple[torch.Tensor, torch.Tensor]: - """ - Embeds different types of prompts, returning both sparse and dense - embeddings. - - Arguments: - points (tuple(torch.Tensor, torch.Tensor) or none): point coordinates - and labels to embed. - boxes (torch.Tensor or none): boxes to embed - masks (torch.Tensor or none): masks to embed - - Returns: - torch.Tensor: sparse embeddings for the points and boxes, with shape - BxNx(embed_dim), where N is determined by the number of input points - and boxes. 
- torch.Tensor: dense embeddings for the masks, in the shape - Bx(embed_dim)x(embed_H)x(embed_W) - """ - bs = self._get_batch_size(points, boxes, masks) - sparse_embeddings = torch.empty((bs, 0, self.embed_dim), device=self._get_device()) - if points is not None: - coords, labels = points - point_embeddings = self._embed_points(coords, labels, pad=(boxes is None)) - sparse_embeddings = torch.cat([sparse_embeddings, point_embeddings], dim=1) - if boxes is not None: - box_embeddings = self._embed_boxes(boxes) - sparse_embeddings = torch.cat([sparse_embeddings, box_embeddings], dim=1) - - if masks is not None: - dense_embeddings = self._embed_masks(masks) - else: - dense_embeddings = self.no_mask_embed.weight.reshape(1, -1, 1, 1).expand( - bs, -1, self.image_embedding_size[0], self.image_embedding_size[1] - ) - - return sparse_embeddings, dense_embeddings - - -class PositionEmbeddingRandom(nn.Module): - """ - Positional encoding using random spatial frequencies. - """ - - def __init__(self, num_pos_feats: int = 64, scale: Optional[float] = None) -> None: - super().__init__() - if scale is None or scale <= 0.0: - scale = 1.0 - self.register_buffer( - "positional_encoding_gaussian_matrix", - scale * torch.randn((2, num_pos_feats)), - ) - - def _pe_encoding(self, coords: torch.Tensor) -> torch.Tensor: - """Positionally encode points that are normalized to [0,1].""" - # assuming coords are in [0, 1]^2 square and have d_1 x ... x d_n x 2 shape - coords = 2 * coords - 1 - coords = coords @ self.positional_encoding_gaussian_matrix - coords = 2 * np.pi * coords - # outputs d_1 x ... x d_n x C shape - return torch.cat([torch.sin(coords), torch.cos(coords)], dim=-1) - - def forward(self, size: Tuple[int, int]) -> torch.Tensor: - """Generate positional encoding for a grid of the specified size.""" - h, w = size - device: Any = self.positional_encoding_gaussian_matrix.device - grid = torch.ones((h, w), device=device, dtype=torch.float32) - y_embed = grid.cumsum(dim=0) - 0.5 - x_embed = grid.cumsum(dim=1) - 0.5 - y_embed = y_embed / h - x_embed = x_embed / w - - pe = self._pe_encoding(torch.stack([x_embed, y_embed], dim=-1)) - return pe.permute(2, 0, 1) # C x H x W - - def forward_with_coords( - self, coords_input: torch.Tensor, image_size: Tuple[int, int] - ) -> torch.Tensor: - """Positionally encode points that are not normalized to [0,1].""" - coords = coords_input.clone() - coords[:, :, 0] = coords[:, :, 0] / image_size[1] - coords[:, :, 1] = coords[:, :, 1] / image_size[0] - return self._pe_encoding(coords.to(torch.float)) # B x N x C diff --git a/spaces/vivym/image-matting-app/ppmatting/datasets/distinctions_646.py b/spaces/vivym/image-matting-app/ppmatting/datasets/distinctions_646.py deleted file mode 100644 index d20b08f2e6b2583ef03bfdc2c30e84fcefd02607..0000000000000000000000000000000000000000 --- a/spaces/vivym/image-matting-app/ppmatting/datasets/distinctions_646.py +++ /dev/null @@ -1,31 +0,0 @@ -# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
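As a usage illustration for the `PromptEncoder` above: a minimal sketch that encodes a single foreground click. The constructor values mirror the usual 1024-pixel SAM configuration (256-dim embeddings over a 64x64 image embedding); treat them as assumptions rather than values fixed by this file, and note the encoder here is randomly initialised.

```python
import torch

# assumes the PromptEncoder class defined above is in scope
prompt_encoder = PromptEncoder(
    embed_dim=256,
    image_embedding_size=(64, 64),
    input_image_size=(1024, 1024),
    mask_in_chans=16,
)

points = torch.tensor([[[512.0, 512.0]]])   # (B=1, N=1, 2) pixel coordinates
labels = torch.tensor([[1.0]])              # 1 = foreground click, 0 = background
sparse, dense = prompt_encoder(points=(points, labels), boxes=None, masks=None)

print(sparse.shape)  # (1, 2, 256): the click plus the padding "not a point" slot
print(dense.shape)   # (1, 256, 64, 64): the learned no-mask embedding, broadcast
```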
-# See the License for the specific language governing permissions and -# limitations under the License. - -import os -import math - -import cv2 -import numpy as np -import random -import paddle -from paddleseg.cvlibs import manager - -import ppmatting.transforms as T -from ppmatting.datasets.matting_dataset import MattingDataset - - -@manager.DATASETS.add_component -class Distinctions646(MattingDataset): - def __init__(self, **kwargs): - super().__init__(**kwargs) diff --git a/spaces/vivym/image-matting-app/ppmatting/models/ppmatting.py b/spaces/vivym/image-matting-app/ppmatting/models/ppmatting.py deleted file mode 100644 index 2ed14528b5e598eda3a8fd6030a51ecc81dc6e3c..0000000000000000000000000000000000000000 --- a/spaces/vivym/image-matting-app/ppmatting/models/ppmatting.py +++ /dev/null @@ -1,338 +0,0 @@ -# copyright (c) 2022 PaddlePaddle Authors. All Rights Reserve. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from collections import defaultdict -import time - -import paddle -import paddle.nn as nn -import paddle.nn.functional as F -import paddleseg -from paddleseg.models import layers -from paddleseg import utils -from paddleseg.cvlibs import manager - -from ppmatting.models.losses import MRSD, GradientLoss -from ppmatting.models.backbone import resnet_vd - - -@manager.MODELS.add_component -class PPMatting(nn.Layer): - """ - The PPMattinh implementation based on PaddlePaddle. - - The original article refers to - Guowei Chen, et, al. "PP-Matting: High-Accuracy Natural Image Matting" - (https://arxiv.org/pdf/2204.09433.pdf). - - Args: - backbone: backbone model. - pretrained(str, optional): The path of pretrianed model. Defautl: None. 
- - """ - - def __init__(self, backbone, pretrained=None): - super().__init__() - self.backbone = backbone - self.pretrained = pretrained - self.loss_func_dict = self.get_loss_func_dict() - - self.backbone_channels = backbone.feat_channels - - self.scb = SCB(self.backbone_channels[-1]) - - self.hrdb = HRDB( - self.backbone_channels[0] + self.backbone_channels[1], - scb_channels=self.scb.out_channels, - gf_index=[0, 2, 4]) - - self.init_weight() - - def forward(self, inputs): - x = inputs['img'] - input_shape = paddle.shape(x) - fea_list = self.backbone(x) - - scb_logits = self.scb(fea_list[-1]) - semantic_map = F.softmax(scb_logits[-1], axis=1) - - fea0 = F.interpolate( - fea_list[0], input_shape[2:], mode='bilinear', align_corners=False) - fea1 = F.interpolate( - fea_list[1], input_shape[2:], mode='bilinear', align_corners=False) - hrdb_input = paddle.concat([fea0, fea1], 1) - hrdb_logit = self.hrdb(hrdb_input, scb_logits) - detail_map = F.sigmoid(hrdb_logit) - fusion = self.fusion(semantic_map, detail_map) - - if self.training: - logit_dict = { - 'semantic': semantic_map, - 'detail': detail_map, - 'fusion': fusion - } - loss_dict = self.loss(logit_dict, inputs) - return logit_dict, loss_dict - else: - return fusion - - def get_loss_func_dict(self): - loss_func_dict = defaultdict(list) - loss_func_dict['semantic'].append(nn.NLLLoss()) - loss_func_dict['detail'].append(MRSD()) - loss_func_dict['detail'].append(GradientLoss()) - loss_func_dict['fusion'].append(MRSD()) - loss_func_dict['fusion'].append(MRSD()) - loss_func_dict['fusion'].append(GradientLoss()) - return loss_func_dict - - def loss(self, logit_dict, label_dict): - loss = {} - - # semantic loss computation - # get semantic label - semantic_label = label_dict['trimap'] - semantic_label_trans = (semantic_label == 128).astype('int64') - semantic_label_bg = (semantic_label == 0).astype('int64') - semantic_label = semantic_label_trans + semantic_label_bg * 2 - loss_semantic = self.loss_func_dict['semantic'][0]( - paddle.log(logit_dict['semantic'] + 1e-6), - semantic_label.squeeze(1)) - loss['semantic'] = loss_semantic - - # detail loss computation - transparent = label_dict['trimap'] == 128 - detail_alpha_loss = self.loss_func_dict['detail'][0]( - logit_dict['detail'], label_dict['alpha'], transparent) - # gradient loss - detail_gradient_loss = self.loss_func_dict['detail'][1]( - logit_dict['detail'], label_dict['alpha'], transparent) - loss_detail = detail_alpha_loss + detail_gradient_loss - loss['detail'] = loss_detail - loss['detail_alpha'] = detail_alpha_loss - loss['detail_gradient'] = detail_gradient_loss - - # fusion loss - loss_fusion_func = self.loss_func_dict['fusion'] - # fusion_sigmoid loss - fusion_alpha_loss = loss_fusion_func[0](logit_dict['fusion'], - label_dict['alpha']) - # composion loss - comp_pred = logit_dict['fusion'] * label_dict['fg'] + ( - 1 - logit_dict['fusion']) * label_dict['bg'] - comp_gt = label_dict['alpha'] * label_dict['fg'] + ( - 1 - label_dict['alpha']) * label_dict['bg'] - fusion_composition_loss = loss_fusion_func[1](comp_pred, comp_gt) - # grandient loss - fusion_grad_loss = loss_fusion_func[2](logit_dict['fusion'], - label_dict['alpha']) - # fusion loss - loss_fusion = fusion_alpha_loss + fusion_composition_loss + fusion_grad_loss - loss['fusion'] = loss_fusion - loss['fusion_alpha'] = fusion_alpha_loss - loss['fusion_composition'] = fusion_composition_loss - loss['fusion_gradient'] = fusion_grad_loss - - loss[ - 'all'] = 0.25 * loss_semantic + 0.25 * loss_detail + 0.25 * loss_fusion - - 
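        # Note on the objective assembled above:
        #   - 'semantic': NLL over the 3-way trimap prediction (0 = foreground, 1 = transition, 2 = background)
        #   - 'detail':   MRSD + gradient loss on the detail map, restricted to the
        #                 transition region (trimap == 128)
        #   - 'fusion':   MRSD on the fused alpha, MRSD on the re-composited image
        #                 (alpha * fg + (1 - alpha) * bg versus the ground-truth composite),
        #                 plus a gradient loss
        # The three branch losses are combined with equal 0.25 weights in loss['all'].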
return loss - - def fusion(self, semantic_map, detail_map): - # semantic_map [N, 3, H, W] - # In index, 0 is foreground, 1 is transition, 2 is backbone - # After fusion, the foreground is 1, the background is 0, and the transion is between [0, 1] - index = paddle.argmax(semantic_map, axis=1, keepdim=True) - transition_mask = (index == 1).astype('float32') - fg = (index == 0).astype('float32') - alpha = detail_map * transition_mask + fg - return alpha - - def init_weight(self): - if self.pretrained is not None: - utils.load_entire_model(self, self.pretrained) - - -class SCB(nn.Layer): - def __init__(self, in_channels): - super().__init__() - self.in_channels = [512 + in_channels, 512, 256, 128, 128, 64] - self.mid_channels = [512, 256, 128, 128, 64, 64] - self.out_channels = [256, 128, 64, 64, 64, 3] - - self.psp_module = layers.PPModule( - in_channels, - 512, - bin_sizes=(1, 3, 5), - dim_reduction=False, - align_corners=False) - - psp_upsamples = [2, 4, 8, 16] - self.psps = nn.LayerList([ - self.conv_up_psp(512, self.out_channels[i], psp_upsamples[i]) - for i in range(4) - ]) - - scb_list = [ - self._make_stage( - self.in_channels[i], - self.mid_channels[i], - self.out_channels[i], - padding=int(i == 0) + 1, - dilation=int(i == 0) + 1) - for i in range(len(self.in_channels) - 1) - ] - scb_list += [ - nn.Sequential( - layers.ConvBNReLU( - self.in_channels[-1], self.mid_channels[-1], 3, padding=1), - layers.ConvBNReLU( - self.mid_channels[-1], self.mid_channels[-1], 3, padding=1), - nn.Conv2D( - self.mid_channels[-1], self.out_channels[-1], 3, padding=1)) - ] - self.scb_stages = nn.LayerList(scb_list) - - def forward(self, x): - psp_x = self.psp_module(x) - psps = [psp(psp_x) for psp in self.psps] - - scb_logits = [] - for i, scb_stage in enumerate(self.scb_stages): - if i == 0: - x = scb_stage(paddle.concat((psp_x, x), 1)) - elif i <= len(psps): - x = scb_stage(paddle.concat((psps[i - 1], x), 1)) - else: - x = scb_stage(x) - scb_logits.append(x) - return scb_logits - - def conv_up_psp(self, in_channels, out_channels, up_sample): - return nn.Sequential( - layers.ConvBNReLU( - in_channels, out_channels, 3, padding=1), - nn.Upsample( - scale_factor=up_sample, mode='bilinear', align_corners=False)) - - def _make_stage(self, - in_channels, - mid_channels, - out_channels, - padding=1, - dilation=1): - layer_list = [ - layers.ConvBNReLU( - in_channels, mid_channels, 3, padding=1), layers.ConvBNReLU( - mid_channels, - mid_channels, - 3, - padding=padding, - dilation=dilation), layers.ConvBNReLU( - mid_channels, - out_channels, - 3, - padding=padding, - dilation=dilation), nn.Upsample( - scale_factor=2, - mode='bilinear', - align_corners=False) - ] - return nn.Sequential(*layer_list) - - -class HRDB(nn.Layer): - """ - The High-Resolution Detail Branch - - Args: - in_channels(int): The number of input channels. - scb_channels(list|tuple): The channels of scb logits - gf_index(list|tuple, optional): Which logit is selected as guidance flow from scb logits. 
Default: (0, 2, 4) - """ - - def __init__(self, in_channels, scb_channels, gf_index=(0, 2, 4)): - super().__init__() - self.gf_index = gf_index - self.gf_list = nn.LayerList( - [nn.Conv2D(scb_channels[i], 1, 1) for i in gf_index]) - - channels = [64, 32, 16, 8] - self.res_list = [ - resnet_vd.BasicBlock( - in_channels, channels[0], stride=1, shortcut=False) - ] - self.res_list += [ - resnet_vd.BasicBlock( - i, i, stride=1) for i in channels[1:-1] - ] - self.res_list = nn.LayerList(self.res_list) - - self.convs = nn.LayerList([ - nn.Conv2D( - channels[i], channels[i + 1], kernel_size=1) - for i in range(len(channels) - 1) - ]) - self.gates = nn.LayerList( - [GatedSpatailConv2d(i, i) for i in channels[1:]]) - - self.detail_conv = nn.Conv2D(channels[-1], 1, 1, bias_attr=False) - - def forward(self, x, scb_logits): - for i in range(len(self.res_list)): - x = self.res_list[i](x) - x = self.convs[i](x) - gf = self.gf_list[i](scb_logits[self.gf_index[i]]) - gf = F.interpolate( - gf, paddle.shape(x)[-2:], mode='bilinear', align_corners=False) - x = self.gates[i](x, gf) - return self.detail_conv(x) - - -class GatedSpatailConv2d(nn.Layer): - def __init__(self, - in_channels, - out_channels, - kernel_size=1, - stride=1, - padding=0, - dilation=1, - groups=1, - bias_attr=False): - super().__init__() - self._gate_conv = nn.Sequential( - layers.SyncBatchNorm(in_channels + 1), - nn.Conv2D( - in_channels + 1, in_channels + 1, kernel_size=1), - nn.ReLU(), - nn.Conv2D( - in_channels + 1, 1, kernel_size=1), - layers.SyncBatchNorm(1), - nn.Sigmoid()) - self.conv = nn.Conv2D( - in_channels, - out_channels, - kernel_size=kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - bias_attr=bias_attr) - - def forward(self, input_features, gating_features): - cat = paddle.concat([input_features, gating_features], axis=1) - alphas = self._gate_conv(cat) - x = input_features * (alphas + 1) - x = self.conv(x) - return x diff --git a/spaces/weijiawu/ImageEditAnything/app_huggingface.py b/spaces/weijiawu/ImageEditAnything/app_huggingface.py deleted file mode 100644 index bf408fcb2b94ff970ae05d2b0e96584801d7f2f7..0000000000000000000000000000000000000000 --- a/spaces/weijiawu/ImageEditAnything/app_huggingface.py +++ /dev/null @@ -1,268 +0,0 @@ -from io import BytesIO -import string -import gradio as gr -import requests -from caption_anything import CaptionAnything -import torch -import json -import sys -import argparse -from caption_anything import parse_augment -import numpy as np -import PIL.ImageDraw as ImageDraw -from image_editing_utils import create_bubble_frame -import copy -from tools import mask_painter -from PIL import Image -import os - -def download_checkpoint(url, folder, filename): - os.makedirs(folder, exist_ok=True) - filepath = os.path.join(folder, filename) - - if not os.path.exists(filepath): - response = requests.get(url, stream=True) - with open(filepath, "wb") as f: - for chunk in response.iter_content(chunk_size=8192): - if chunk: - f.write(chunk) - - return filepath -checkpoint_url = "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth" -folder = "segmenter" -filename = "sam_vit_h_4b8939.pth" - -download_checkpoint(checkpoint_url, folder, filename) - - -title = """

        Caption-Anything

        """ -description = """Gradio demo for Caption Anything, image to dense captioning generation with various language styles. To use it, simply upload your image, or click one of the examples to load them. Code: https://github.com/ttengwang/Caption-Anything -""" - -examples = [ - ["test_img/img2.jpg"], - ["test_img/img5.jpg"], - ["test_img/img12.jpg"], - ["test_img/img14.jpg"], -] - -args = parse_augment() -args.captioner = 'blip2' -args.seg_crop_mode = 'wo_bg' -args.regular_box = True -# args.device = 'cuda:5' -# args.disable_gpt = False -# args.enable_reduce_tokens = True -# args.port=20322 -model = CaptionAnything(args) - -def init_openai_api_key(api_key): - os.environ['OPENAI_API_KEY'] = api_key - model.init_refiner() - - -def get_prompt(chat_input, click_state): - points = click_state[0] - labels = click_state[1] - inputs = json.loads(chat_input) - for input in inputs: - points.append(input[:2]) - labels.append(input[2]) - - prompt = { - "prompt_type":["click"], - "input_point":points, - "input_label":labels, - "multimask_output":"True", - } - return prompt - -def chat_with_points(chat_input, click_state, state): - if not hasattr(model, "text_refiner"): - response = "Text refiner is not initilzed, please input openai api key." - state = state + [(chat_input, response)] - return state, state - - points, labels, captions = click_state - # point_chat_prompt = "I want you act as a chat bot in terms of image. I will give you some points (w, h) in the image and tell you what happed on the point in natural language. Note that (0, 0) refers to the top-left corner of the image, w refers to the width and h refers the height. You should chat with me based on the fact in the image instead of imagination. Now I tell you the points with their visual description:\n{points_with_caps}\nNow begin chatting! Human: {chat_input}\nAI: " - # # "The image is of width {width} and height {height}." - point_chat_prompt = "a) Revised prompt: I am an AI trained to chat with you about an image based on specific points (w, h) you provide, along with their visual descriptions. Please note that (0, 0) refers to the top-left corner of the image, w refers to the width, and h refers to the height. Here are the points and their descriptions you've given me: {points_with_caps}. Now, let's chat! Human: {chat_input} AI:" - prev_visual_context = "" - pos_points = [f"{points[i][0]}, {points[i][1]}" for i in range(len(points)) if labels[i] == 1] - if len(captions): - prev_visual_context = ', '.join(pos_points) + captions[-1] + '\n' - else: - prev_visual_context = 'no point exists.' 
- chat_prompt = point_chat_prompt.format(**{"points_with_caps": prev_visual_context, "chat_input": chat_input}) - response = model.text_refiner.llm(chat_prompt) - state = state + [(chat_input, response)] - return state, state - -def inference_seg_cap(image_input, point_prompt, language, sentiment, factuality, length, state, click_state, evt:gr.SelectData): - - if point_prompt == 'Positive': - coordinate = "[[{}, {}, 1]]".format(str(evt.index[0]), str(evt.index[1])) - else: - coordinate = "[[{}, {}, 0]]".format(str(evt.index[0]), str(evt.index[1])) - - controls = {'length': length, - 'sentiment': sentiment, - 'factuality': factuality, - 'language': language} - - # click_coordinate = "[[{}, {}, 1]]".format(str(evt.index[0]), str(evt.index[1])) - # chat_input = click_coordinate - prompt = get_prompt(coordinate, click_state) - print('prompt: ', prompt, 'controls: ', controls) - - out = model.inference(image_input, prompt, controls) - state = state + [(None, "Image point: {}, Input label: {}".format(prompt["input_point"], prompt["input_label"]))] - # for k, v in out['generated_captions'].items(): - # state = state + [(f'{k}: {v}', None)] - state = state + [("caption: {}".format(out['generated_captions']['raw_caption']), None)] - wiki = out['generated_captions'].get('wiki', "") - click_state[2].append(out['generated_captions']['raw_caption']) - - text = out['generated_captions']['raw_caption'] - # draw = ImageDraw.Draw(image_input) - # draw.text((evt.index[0], evt.index[1]), text, textcolor=(0,0,255), text_size=120) - input_mask = np.array(Image.open(out['mask_save_path']).convert('P')) - image_input = mask_painter(np.array(image_input), input_mask) - origin_image_input = image_input - image_input = create_bubble_frame(image_input, text, (evt.index[0], evt.index[1])) - - yield state, state, click_state, chat_input, image_input, wiki - if not args.disable_gpt and hasattr(model, "text_refiner"): - refined_caption = model.text_refiner.inference(query=text, controls=controls, context=out['context_captions']) - # new_cap = 'Original: ' + text + '. 
Refined: ' + refined_caption['caption'] - new_cap = refined_caption['caption'] - refined_image_input = create_bubble_frame(origin_image_input, new_cap, (evt.index[0], evt.index[1])) - yield state, state, click_state, chat_input, refined_image_input, wiki - - -def upload_callback(image_input, state): - state = [] + [('Image size: ' + str(image_input.size), None)] - click_state = [[], [], []] - model.segmenter.image = None - model.segmenter.image_embedding = None - model.segmenter.set_image(image_input) - return state, image_input, click_state - -with gr.Blocks( - css=''' - #image_upload{min-height:400px} - #image_upload [data-testid="image"], #image_upload [data-testid="image"] > div{min-height: 600px} - ''' -) as iface: - state = gr.State([]) - click_state = gr.State([[],[],[]]) - origin_image = gr.State(None) - - gr.Markdown(title) - gr.Markdown(description) - - with gr.Row(): - with gr.Column(scale=1.0): - image_input = gr.Image(type="pil", interactive=True, elem_id="image_upload") - with gr.Row(scale=1.0): - point_prompt = gr.Radio( - choices=["Positive", "Negative"], - value="Positive", - label="Point Prompt", - interactive=True) - clear_button_clike = gr.Button(value="Clear Clicks", interactive=True) - clear_button_image = gr.Button(value="Clear Image", interactive=True) - with gr.Row(scale=1.0): - language = gr.Dropdown(['English', 'Chinese', 'French', "Spanish", "Arabic", "Portuguese", "Cantonese"], value="English", label="Language", interactive=True) - - sentiment = gr.Radio( - choices=["Positive", "Natural", "Negative"], - value="Natural", - label="Sentiment", - interactive=True, - ) - with gr.Row(scale=1.0): - factuality = gr.Radio( - choices=["Factual", "Imagination"], - value="Factual", - label="Factuality", - interactive=True, - ) - length = gr.Slider( - minimum=10, - maximum=80, - value=10, - step=1, - interactive=True, - label="Length", - ) - - with gr.Column(scale=0.5): - openai_api_key = gr.Textbox( - placeholder="Input your openAI API key and press Enter", - show_label=False, - label = "OpenAI API Key", - lines=1, - type="password" - ) - openai_api_key.submit(init_openai_api_key, inputs=[openai_api_key]) - wiki_output = gr.Textbox(lines=6, label="Wiki") - chatbot = gr.Chatbot(label="Chat about Selected Object",).style(height=450,scale=0.5) - chat_input = gr.Textbox(lines=1, label="Chat Input") - with gr.Row(): - clear_button_text = gr.Button(value="Clear Text", interactive=True) - submit_button_text = gr.Button(value="Submit", interactive=True, variant="primary") - clear_button_clike.click( - lambda x: ([[], [], []], x, ""), - [origin_image], - [click_state, image_input, wiki_output], - queue=False, - show_progress=False - ) - clear_button_image.click( - lambda: (None, [], [], [[], [], []], ""), - [], - [image_input, chatbot, state, click_state, wiki_output], - queue=False, - show_progress=False - ) - clear_button_text.click( - lambda: ([], [], [[], [], []]), - [], - [chatbot, state, click_state], - queue=False, - show_progress=False - ) - image_input.clear( - lambda: (None, [], [], [[], [], []], ""), - [], - [image_input, chatbot, state, click_state, wiki_output], - queue=False, - show_progress=False - ) - - examples = gr.Examples( - examples=examples, - inputs=[image_input], - ) - - image_input.upload(upload_callback,[image_input, state], [state, origin_image, click_state]) - chat_input.submit(chat_with_points, [chat_input, click_state, state], [chatbot, state]) - - # select coordinate - image_input.select(inference_seg_cap, - inputs=[ - origin_image, - point_prompt, - 
language, - sentiment, - factuality, - length, - state, - click_state - ], - outputs=[chatbot, state, click_state, chat_input, image_input, wiki_output], - show_progress=False, queue=True) - -iface.queue(concurrency_count=1, api_open=False, max_size=10) -iface.launch(server_name="0.0.0.0", enable_queue=True) \ No newline at end of file diff --git a/spaces/wwwwwwww2/bingo/src/components/ui/dropdown-menu.tsx b/spaces/wwwwwwww2/bingo/src/components/ui/dropdown-menu.tsx deleted file mode 100644 index 184d4e6007ef85187446362f69532ab077897fea..0000000000000000000000000000000000000000 --- a/spaces/wwwwwwww2/bingo/src/components/ui/dropdown-menu.tsx +++ /dev/null @@ -1,128 +0,0 @@ -'use client' - -import * as React from 'react' -import * as DropdownMenuPrimitive from '@radix-ui/react-dropdown-menu' - -import { cn } from '@/lib/utils' - -const DropdownMenu = DropdownMenuPrimitive.Root - -const DropdownMenuTrigger = DropdownMenuPrimitive.Trigger - -const DropdownMenuGroup = DropdownMenuPrimitive.Group - -const DropdownMenuPortal = DropdownMenuPrimitive.Portal - -const DropdownMenuSub = DropdownMenuPrimitive.Sub - -const DropdownMenuRadioGroup = DropdownMenuPrimitive.RadioGroup - -const DropdownMenuSubContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DropdownMenuSubContent.displayName = - DropdownMenuPrimitive.SubContent.displayName - -const DropdownMenuContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, sideOffset = 4, ...props }, ref) => ( - - - -)) -DropdownMenuContent.displayName = DropdownMenuPrimitive.Content.displayName - -const DropdownMenuItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -DropdownMenuItem.displayName = DropdownMenuPrimitive.Item.displayName - -const DropdownMenuLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -DropdownMenuLabel.displayName = DropdownMenuPrimitive.Label.displayName - -const DropdownMenuSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DropdownMenuSeparator.displayName = DropdownMenuPrimitive.Separator.displayName - -const DropdownMenuShortcut = ({ - className, - ...props -}: React.HTMLAttributes) => { - return ( - - ) -} -DropdownMenuShortcut.displayName = 'DropdownMenuShortcut' - -export { - DropdownMenu, - DropdownMenuTrigger, - DropdownMenuContent, - DropdownMenuItem, - DropdownMenuLabel, - DropdownMenuSeparator, - DropdownMenuShortcut, - DropdownMenuGroup, - DropdownMenuPortal, - DropdownMenuSub, - DropdownMenuSubContent, - DropdownMenuRadioGroup -} diff --git a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/models/senet.py b/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/models/senet.py deleted file mode 100644 index baaf9b0acbe8577bd5e574de47d3f9ef935946db..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/reid/torchreid/models/senet.py +++ /dev/null @@ -1,688 +0,0 @@ -from __future__ import division, absolute_import -import math -from collections import OrderedDict -import torch.nn as nn -from torch.utils import model_zoo - -__all__ = [ - 'senet154', 'se_resnet50', 'se_resnet101', 'se_resnet152', - 'se_resnext50_32x4d', 'se_resnext101_32x4d', 
'se_resnet50_fc512' -] -""" -Code imported from https://github.com/Cadene/pretrained-models.pytorch -""" - -pretrained_settings = { - 'senet154': { - 'imagenet': { - 'url': - 'http://data.lip6.fr/cadene/pretrainedmodels/senet154-c7b49a05.pth', - 'input_space': 'RGB', - 'input_size': [3, 224, 224], - 'input_range': [0, 1], - 'mean': [0.485, 0.456, 0.406], - 'std': [0.229, 0.224, 0.225], - 'num_classes': 1000 - } - }, - 'se_resnet50': { - 'imagenet': { - 'url': - 'http://data.lip6.fr/cadene/pretrainedmodels/se_resnet50-ce0d4300.pth', - 'input_space': 'RGB', - 'input_size': [3, 224, 224], - 'input_range': [0, 1], - 'mean': [0.485, 0.456, 0.406], - 'std': [0.229, 0.224, 0.225], - 'num_classes': 1000 - } - }, - 'se_resnet101': { - 'imagenet': { - 'url': - 'http://data.lip6.fr/cadene/pretrainedmodels/se_resnet101-7e38fcc6.pth', - 'input_space': 'RGB', - 'input_size': [3, 224, 224], - 'input_range': [0, 1], - 'mean': [0.485, 0.456, 0.406], - 'std': [0.229, 0.224, 0.225], - 'num_classes': 1000 - } - }, - 'se_resnet152': { - 'imagenet': { - 'url': - 'http://data.lip6.fr/cadene/pretrainedmodels/se_resnet152-d17c99b7.pth', - 'input_space': 'RGB', - 'input_size': [3, 224, 224], - 'input_range': [0, 1], - 'mean': [0.485, 0.456, 0.406], - 'std': [0.229, 0.224, 0.225], - 'num_classes': 1000 - } - }, - 'se_resnext50_32x4d': { - 'imagenet': { - 'url': - 'http://data.lip6.fr/cadene/pretrainedmodels/se_resnext50_32x4d-a260b3a4.pth', - 'input_space': 'RGB', - 'input_size': [3, 224, 224], - 'input_range': [0, 1], - 'mean': [0.485, 0.456, 0.406], - 'std': [0.229, 0.224, 0.225], - 'num_classes': 1000 - } - }, - 'se_resnext101_32x4d': { - 'imagenet': { - 'url': - 'http://data.lip6.fr/cadene/pretrainedmodels/se_resnext101_32x4d-3b2fe3d8.pth', - 'input_space': 'RGB', - 'input_size': [3, 224, 224], - 'input_range': [0, 1], - 'mean': [0.485, 0.456, 0.406], - 'std': [0.229, 0.224, 0.225], - 'num_classes': 1000 - } - }, -} - - -class SEModule(nn.Module): - - def __init__(self, channels, reduction): - super(SEModule, self).__init__() - self.avg_pool = nn.AdaptiveAvgPool2d(1) - self.fc1 = nn.Conv2d( - channels, channels // reduction, kernel_size=1, padding=0 - ) - self.relu = nn.ReLU(inplace=True) - self.fc2 = nn.Conv2d( - channels // reduction, channels, kernel_size=1, padding=0 - ) - self.sigmoid = nn.Sigmoid() - - def forward(self, x): - module_input = x - x = self.avg_pool(x) - x = self.fc1(x) - x = self.relu(x) - x = self.fc2(x) - x = self.sigmoid(x) - return module_input * x - - -class Bottleneck(nn.Module): - """ - Base class for bottlenecks that implements `forward()` method. - """ - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out = self.se_module(out) + residual - out = self.relu(out) - - return out - - -class SEBottleneck(Bottleneck): - """ - Bottleneck for SENet154. 
- """ - expansion = 4 - - def __init__( - self, inplanes, planes, groups, reduction, stride=1, downsample=None - ): - super(SEBottleneck, self).__init__() - self.conv1 = nn.Conv2d(inplanes, planes * 2, kernel_size=1, bias=False) - self.bn1 = nn.BatchNorm2d(planes * 2) - self.conv2 = nn.Conv2d( - planes * 2, - planes * 4, - kernel_size=3, - stride=stride, - padding=1, - groups=groups, - bias=False - ) - self.bn2 = nn.BatchNorm2d(planes * 4) - self.conv3 = nn.Conv2d( - planes * 4, planes * 4, kernel_size=1, bias=False - ) - self.bn3 = nn.BatchNorm2d(planes * 4) - self.relu = nn.ReLU(inplace=True) - self.se_module = SEModule(planes * 4, reduction=reduction) - self.downsample = downsample - self.stride = stride - - -class SEResNetBottleneck(Bottleneck): - """ - ResNet bottleneck with a Squeeze-and-Excitation module. It follows Caffe - implementation and uses `stride=stride` in `conv1` and not in `conv2` - (the latter is used in the torchvision implementation of ResNet). - """ - expansion = 4 - - def __init__( - self, inplanes, planes, groups, reduction, stride=1, downsample=None - ): - super(SEResNetBottleneck, self).__init__() - self.conv1 = nn.Conv2d( - inplanes, planes, kernel_size=1, bias=False, stride=stride - ) - self.bn1 = nn.BatchNorm2d(planes) - self.conv2 = nn.Conv2d( - planes, - planes, - kernel_size=3, - padding=1, - groups=groups, - bias=False - ) - self.bn2 = nn.BatchNorm2d(planes) - self.conv3 = nn.Conv2d(planes, planes * 4, kernel_size=1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * 4) - self.relu = nn.ReLU(inplace=True) - self.se_module = SEModule(planes * 4, reduction=reduction) - self.downsample = downsample - self.stride = stride - - -class SEResNeXtBottleneck(Bottleneck): - """ResNeXt bottleneck type C with a Squeeze-and-Excitation module""" - expansion = 4 - - def __init__( - self, - inplanes, - planes, - groups, - reduction, - stride=1, - downsample=None, - base_width=4 - ): - super(SEResNeXtBottleneck, self).__init__() - width = int(math.floor(planes * (base_width/64.)) * groups) - self.conv1 = nn.Conv2d( - inplanes, width, kernel_size=1, bias=False, stride=1 - ) - self.bn1 = nn.BatchNorm2d(width) - self.conv2 = nn.Conv2d( - width, - width, - kernel_size=3, - stride=stride, - padding=1, - groups=groups, - bias=False - ) - self.bn2 = nn.BatchNorm2d(width) - self.conv3 = nn.Conv2d(width, planes * 4, kernel_size=1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * 4) - self.relu = nn.ReLU(inplace=True) - self.se_module = SEModule(planes * 4, reduction=reduction) - self.downsample = downsample - self.stride = stride - - -class SENet(nn.Module): - """Squeeze-and-excitation network. - - Reference: - Hu et al. Squeeze-and-Excitation Networks. CVPR 2018. - - Public keys: - - ``senet154``: SENet154. - - ``se_resnet50``: ResNet50 + SE. - - ``se_resnet101``: ResNet101 + SE. - - ``se_resnet152``: ResNet152 + SE. - - ``se_resnext50_32x4d``: ResNeXt50 (groups=32, width=4) + SE. - - ``se_resnext101_32x4d``: ResNeXt101 (groups=32, width=4) + SE. - - ``se_resnet50_fc512``: (ResNet50 + SE) + FC. - """ - - def __init__( - self, - num_classes, - loss, - block, - layers, - groups, - reduction, - dropout_p=0.2, - inplanes=128, - input_3x3=True, - downsample_kernel_size=3, - downsample_padding=1, - last_stride=2, - fc_dims=None, - **kwargs - ): - """ - Parameters - ---------- - block (nn.Module): Bottleneck class. 
- - For SENet154: SEBottleneck - - For SE-ResNet models: SEResNetBottleneck - - For SE-ResNeXt models: SEResNeXtBottleneck - layers (list of ints): Number of residual blocks for 4 layers of the - network (layer1...layer4). - groups (int): Number of groups for the 3x3 convolution in each - bottleneck block. - - For SENet154: 64 - - For SE-ResNet models: 1 - - For SE-ResNeXt models: 32 - reduction (int): Reduction ratio for Squeeze-and-Excitation modules. - - For all models: 16 - dropout_p (float or None): Drop probability for the Dropout layer. - If `None` the Dropout layer is not used. - - For SENet154: 0.2 - - For SE-ResNet models: None - - For SE-ResNeXt models: None - inplanes (int): Number of input channels for layer1. - - For SENet154: 128 - - For SE-ResNet models: 64 - - For SE-ResNeXt models: 64 - input_3x3 (bool): If `True`, use three 3x3 convolutions instead of - a single 7x7 convolution in layer0. - - For SENet154: True - - For SE-ResNet models: False - - For SE-ResNeXt models: False - downsample_kernel_size (int): Kernel size for downsampling convolutions - in layer2, layer3 and layer4. - - For SENet154: 3 - - For SE-ResNet models: 1 - - For SE-ResNeXt models: 1 - downsample_padding (int): Padding for downsampling convolutions in - layer2, layer3 and layer4. - - For SENet154: 1 - - For SE-ResNet models: 0 - - For SE-ResNeXt models: 0 - num_classes (int): Number of outputs in `classifier` layer. - """ - super(SENet, self).__init__() - self.inplanes = inplanes - self.loss = loss - - if input_3x3: - layer0_modules = [ - ( - 'conv1', - nn.Conv2d(3, 64, 3, stride=2, padding=1, bias=False) - ), - ('bn1', nn.BatchNorm2d(64)), - ('relu1', nn.ReLU(inplace=True)), - ( - 'conv2', - nn.Conv2d(64, 64, 3, stride=1, padding=1, bias=False) - ), - ('bn2', nn.BatchNorm2d(64)), - ('relu2', nn.ReLU(inplace=True)), - ( - 'conv3', - nn.Conv2d( - 64, inplanes, 3, stride=1, padding=1, bias=False - ) - ), - ('bn3', nn.BatchNorm2d(inplanes)), - ('relu3', nn.ReLU(inplace=True)), - ] - else: - layer0_modules = [ - ( - 'conv1', - nn.Conv2d( - 3, - inplanes, - kernel_size=7, - stride=2, - padding=3, - bias=False - ) - ), - ('bn1', nn.BatchNorm2d(inplanes)), - ('relu1', nn.ReLU(inplace=True)), - ] - # To preserve compatibility with Caffe weights `ceil_mode=True` - # is used instead of `padding=1`. 
- layer0_modules.append( - ('pool', nn.MaxPool2d(3, stride=2, ceil_mode=True)) - ) - self.layer0 = nn.Sequential(OrderedDict(layer0_modules)) - self.layer1 = self._make_layer( - block, - planes=64, - blocks=layers[0], - groups=groups, - reduction=reduction, - downsample_kernel_size=1, - downsample_padding=0 - ) - self.layer2 = self._make_layer( - block, - planes=128, - blocks=layers[1], - stride=2, - groups=groups, - reduction=reduction, - downsample_kernel_size=downsample_kernel_size, - downsample_padding=downsample_padding - ) - self.layer3 = self._make_layer( - block, - planes=256, - blocks=layers[2], - stride=2, - groups=groups, - reduction=reduction, - downsample_kernel_size=downsample_kernel_size, - downsample_padding=downsample_padding - ) - self.layer4 = self._make_layer( - block, - planes=512, - blocks=layers[3], - stride=last_stride, - groups=groups, - reduction=reduction, - downsample_kernel_size=downsample_kernel_size, - downsample_padding=downsample_padding - ) - - self.global_avgpool = nn.AdaptiveAvgPool2d(1) - self.fc = self._construct_fc_layer( - fc_dims, 512 * block.expansion, dropout_p - ) - self.classifier = nn.Linear(self.feature_dim, num_classes) - - def _make_layer( - self, - block, - planes, - blocks, - groups, - reduction, - stride=1, - downsample_kernel_size=1, - downsample_padding=0 - ): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d( - self.inplanes, - planes * block.expansion, - kernel_size=downsample_kernel_size, - stride=stride, - padding=downsample_padding, - bias=False - ), - nn.BatchNorm2d(planes * block.expansion), - ) - - layers = [] - layers.append( - block( - self.inplanes, planes, groups, reduction, stride, downsample - ) - ) - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(self.inplanes, planes, groups, reduction)) - - return nn.Sequential(*layers) - - def _construct_fc_layer(self, fc_dims, input_dim, dropout_p=None): - """ - Construct fully connected layer - - - fc_dims (list or tuple): dimensions of fc layers, if None, - no fc layers are constructed - - input_dim (int): input dimension - - dropout_p (float): dropout probability, if None, dropout is unused - """ - if fc_dims is None: - self.feature_dim = input_dim - return None - - assert isinstance( - fc_dims, (list, tuple) - ), 'fc_dims must be either list or tuple, but got {}'.format( - type(fc_dims) - ) - - layers = [] - for dim in fc_dims: - layers.append(nn.Linear(input_dim, dim)) - layers.append(nn.BatchNorm1d(dim)) - layers.append(nn.ReLU(inplace=True)) - if dropout_p is not None: - layers.append(nn.Dropout(p=dropout_p)) - input_dim = dim - - self.feature_dim = fc_dims[-1] - - return nn.Sequential(*layers) - - def featuremaps(self, x): - x = self.layer0(x) - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - return x - - def forward(self, x): - f = self.featuremaps(x) - v = self.global_avgpool(f) - v = v.view(v.size(0), -1) - - if self.fc is not None: - v = self.fc(v) - - if not self.training: - return v - - y = self.classifier(v) - - if self.loss == 'softmax': - return y - elif self.loss == 'triplet': - return y, v - else: - raise KeyError("Unsupported loss: {}".format(self.loss)) - - -def init_pretrained_weights(model, model_url): - """Initializes model with pretrained weights. - - Layers that don't match with pretrained layers in name or size are kept unchanged. 
- """ - pretrain_dict = model_zoo.load_url(model_url) - model_dict = model.state_dict() - pretrain_dict = { - k: v - for k, v in pretrain_dict.items() - if k in model_dict and model_dict[k].size() == v.size() - } - model_dict.update(pretrain_dict) - model.load_state_dict(model_dict) - - -def senet154(num_classes, loss='softmax', pretrained=True, **kwargs): - model = SENet( - num_classes=num_classes, - loss=loss, - block=SEBottleneck, - layers=[3, 8, 36, 3], - groups=64, - reduction=16, - dropout_p=0.2, - last_stride=2, - fc_dims=None, - **kwargs - ) - if pretrained: - model_url = pretrained_settings['senet154']['imagenet']['url'] - init_pretrained_weights(model, model_url) - return model - - -def se_resnet50(num_classes, loss='softmax', pretrained=True, **kwargs): - model = SENet( - num_classes=num_classes, - loss=loss, - block=SEResNetBottleneck, - layers=[3, 4, 6, 3], - groups=1, - reduction=16, - dropout_p=None, - inplanes=64, - input_3x3=False, - downsample_kernel_size=1, - downsample_padding=0, - last_stride=2, - fc_dims=None, - **kwargs - ) - if pretrained: - model_url = pretrained_settings['se_resnet50']['imagenet']['url'] - init_pretrained_weights(model, model_url) - return model - - -def se_resnet50_fc512(num_classes, loss='softmax', pretrained=True, **kwargs): - model = SENet( - num_classes=num_classes, - loss=loss, - block=SEResNetBottleneck, - layers=[3, 4, 6, 3], - groups=1, - reduction=16, - dropout_p=None, - inplanes=64, - input_3x3=False, - downsample_kernel_size=1, - downsample_padding=0, - last_stride=1, - fc_dims=[512], - **kwargs - ) - if pretrained: - model_url = pretrained_settings['se_resnet50']['imagenet']['url'] - init_pretrained_weights(model, model_url) - return model - - -def se_resnet101(num_classes, loss='softmax', pretrained=True, **kwargs): - model = SENet( - num_classes=num_classes, - loss=loss, - block=SEResNetBottleneck, - layers=[3, 4, 23, 3], - groups=1, - reduction=16, - dropout_p=None, - inplanes=64, - input_3x3=False, - downsample_kernel_size=1, - downsample_padding=0, - last_stride=2, - fc_dims=None, - **kwargs - ) - if pretrained: - model_url = pretrained_settings['se_resnet101']['imagenet']['url'] - init_pretrained_weights(model, model_url) - return model - - -def se_resnet152(num_classes, loss='softmax', pretrained=True, **kwargs): - model = SENet( - num_classes=num_classes, - loss=loss, - block=SEResNetBottleneck, - layers=[3, 8, 36, 3], - groups=1, - reduction=16, - dropout_p=None, - inplanes=64, - input_3x3=False, - downsample_kernel_size=1, - downsample_padding=0, - last_stride=2, - fc_dims=None, - **kwargs - ) - if pretrained: - model_url = pretrained_settings['se_resnet152']['imagenet']['url'] - init_pretrained_weights(model, model_url) - return model - - -def se_resnext50_32x4d(num_classes, loss='softmax', pretrained=True, **kwargs): - model = SENet( - num_classes=num_classes, - loss=loss, - block=SEResNeXtBottleneck, - layers=[3, 4, 6, 3], - groups=32, - reduction=16, - dropout_p=None, - inplanes=64, - input_3x3=False, - downsample_kernel_size=1, - downsample_padding=0, - last_stride=2, - fc_dims=None, - **kwargs - ) - if pretrained: - model_url = pretrained_settings['se_resnext50_32x4d']['imagenet']['url' - ] - init_pretrained_weights(model, model_url) - return model - - -def se_resnext101_32x4d( - num_classes, loss='softmax', pretrained=True, **kwargs -): - model = SENet( - num_classes=num_classes, - loss=loss, - block=SEResNeXtBottleneck, - layers=[3, 4, 23, 3], - groups=32, - reduction=16, - dropout_p=None, - inplanes=64, - 
input_3x3=False, - downsample_kernel_size=1, - downsample_padding=0, - last_stride=2, - fc_dims=None, - **kwargs - ) - if pretrained: - model_url = pretrained_settings['se_resnext101_32x4d']['imagenet'][ - 'url'] - init_pretrained_weights(model, model_url) - return model diff --git a/spaces/xfys/yolov5_tracking/yolov5/README.zh-CN.md b/spaces/xfys/yolov5_tracking/yolov5/README.zh-CN.md deleted file mode 100644 index da60d3fe057305b2b427f18c36a24513019d5a0d..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/yolov5/README.zh-CN.md +++ /dev/null @@ -1,490 +0,0 @@ -
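As a usage illustration for the torchreid `senet.py` module above: a minimal sketch of building one of the SE variants for re-ID feature extraction. The identity count and input resolution are illustrative assumptions, and `pretrained=False` avoids the ImageNet weight download.

```python
import torch

# 751 identities (Market-1501-style) is an illustrative assumption
model = se_resnet50_fc512(num_classes=751, loss='softmax', pretrained=False)
model.eval()

x = torch.randn(4, 3, 256, 128)      # typical person re-ID crops (H=256, W=128)
with torch.no_grad():
    feats = model(x)                 # eval mode returns embeddings, not logits
print(feats.shape)                   # torch.Size([4, 512]) thanks to fc_dims=[512]
```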
        -

        - - -

        - -[英文](README.md)|[简体中文](README.zh-CN.md)
        - -
        - YOLOv5 CI - YOLOv5 Citation - Docker Pulls -
        - Run on Gradient - Open In Colab - Open In Kaggle -
        -
        - -YOLOv5 🚀 是世界上最受欢迎的视觉 AI,代表 Ultralytics 对未来视觉 AI 方法的开源研究,结合在数千小时的研究和开发中积累的经验教训和最佳实践。 - -我们希望这里的资源能帮助您充分利用 YOLOv5。请浏览 YOLOv5 文档 了解详细信息,在 GitHub 上提交问题以获得支持,并加入我们的 Discord 社区进行问题和讨论! - -如需申请企业许可,请在 [Ultralytics Licensing](https://ultralytics.com/license) 处填写表格 - -
        - - - - - - - - - - - - - - - - - - - - -
        -
        - -##
        YOLOv8 🚀 NEW
        - -We are thrilled to announce the launch of Ultralytics YOLOv8 🚀, our NEW cutting-edge, state-of-the-art (SOTA) model -released at **[https://github.com/ultralytics/ultralytics](https://github.com/ultralytics/ultralytics)**. -YOLOv8 is designed to be fast, accurate, and easy to use, making it an excellent choice for a wide range of -object detection, image segmentation and image classification tasks. - -See the [YOLOv8 Docs](https://docs.ultralytics.com) for details and get started with: - -```commandline -pip install ultralytics -``` - -
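After installing the package, a model can be loaded and run in a few lines. The sketch below follows the Ultralytics `YOLO` API referenced in the docs linked above; treat the exact attribute names as indicative.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                                  # pretrained nano detection model
results = model("https://ultralytics.com/images/bus.jpg")   # predict on one image
for r in results:
    print(r.boxes.xyxy, r.boxes.conf, r.boxes.cls)          # boxes, scores, class ids
```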
        - - -
        - -##
        文档
        - -有关训练、测试和部署的完整文档见[YOLOv5 文档](https://docs.ultralytics.com)。请参阅下面的快速入门示例。 - -
        -安装 - -克隆 repo,并要求在 [**Python>=3.7.0**](https://www.python.org/) 环境中安装 [requirements.txt](https://github.com/ultralytics/yolov5/blob/master/requirements.txt) ,且要求 [**PyTorch>=1.7**](https://pytorch.org/get-started/locally/) 。 - -```bash -git clone https://github.com/ultralytics/yolov5 # clone -cd yolov5 -pip install -r requirements.txt # install -``` - -
        - -
        -推理 - -使用 YOLOv5 [PyTorch Hub](https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading) 推理。最新 [模型](https://github.com/ultralytics/yolov5/tree/master/models) 将自动的从 -YOLOv5 [release](https://github.com/ultralytics/yolov5/releases) 中下载。 - -```python -import torch - -# Model -model = torch.hub.load("ultralytics/yolov5", "yolov5s") # or yolov5n - yolov5x6, custom - -# Images -img = "https://ultralytics.com/images/zidane.jpg" # or file, Path, PIL, OpenCV, numpy, list - -# Inference -results = model(img) - -# Results -results.print() # or .show(), .save(), .crop(), .pandas(), etc. -``` - -
        - -
        -使用 detect.py 推理 - -`detect.py` 在各种来源上运行推理, [模型](https://github.com/ultralytics/yolov5/tree/master/models) 自动从 -最新的YOLOv5 [release](https://github.com/ultralytics/yolov5/releases) 中下载,并将结果保存到 `runs/detect` 。 - -```bash -python detect.py --weights yolov5s.pt --source 0 # webcam - img.jpg # image - vid.mp4 # video - screen # screenshot - path/ # directory - list.txt # list of images - list.streams # list of streams - 'path/*.jpg' # glob - 'https://youtu.be/Zgi9g1ksQHc' # YouTube - 'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream -``` - -
        - -
        -训练 - -下面的命令重现 YOLOv5 在 [COCO](https://github.com/ultralytics/yolov5/blob/master/data/scripts/get_coco.sh) 数据集上的结果。 -最新的 [模型](https://github.com/ultralytics/yolov5/tree/master/models) 和 [数据集](https://github.com/ultralytics/yolov5/tree/master/data) -将自动的从 YOLOv5 [release](https://github.com/ultralytics/yolov5/releases) 中下载。 -YOLOv5n/s/m/l/x 在 V100 GPU 的训练时间为 1/2/4/6/8 天( [多GPU](https://docs.ultralytics.com/yolov5/tutorials/multi_gpu_training) 训练速度更快)。 -尽可能使用更大的 `--batch-size` ,或通过 `--batch-size -1` 实现 -YOLOv5 [自动批处理](https://github.com/ultralytics/yolov5/pull/5092) 。下方显示的 batchsize 适用于 V100-16GB。 - -```bash -python train.py --data coco.yaml --epochs 300 --weights '' --cfg yolov5n.yaml --batch-size 128 - yolov5s 64 - yolov5m 40 - yolov5l 24 - yolov5x 16 -``` - - - -
        - -
        -教程 - -- [训练自定义数据](https://docs.ultralytics.com/yolov5/tutorials/train_custom_data) 🚀 推荐 -- [获得最佳训练结果的技巧](https://docs.ultralytics.com/yolov5/tutorials/tips_for_best_training_results) ☘️ -- [多GPU训练](https://docs.ultralytics.com/yolov5/tutorials/multi_gpu_training) -- [PyTorch Hub](https://docs.ultralytics.com/yolov5/tutorials/pytorch_hub_model_loading) 🌟 新 -- [TFLite,ONNX,CoreML,TensorRT导出](https://docs.ultralytics.com/yolov5/tutorials/model_export) 🚀 -- [NVIDIA Jetson平台部署](https://docs.ultralytics.com/yolov5/tutorials/running_on_jetson_nano) 🌟 新 -- [测试时增强 (TTA)](https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation) -- [模型集成](https://docs.ultralytics.com/yolov5/tutorials/model_ensembling) -- [模型剪枝/稀疏](https://docs.ultralytics.com/yolov5/tutorials/model_pruning_and_sparsity) -- [超参数进化](https://docs.ultralytics.com/yolov5/tutorials/hyperparameter_evolution) -- [冻结层的迁移学习](https://docs.ultralytics.com/yolov5/tutorials/transfer_learning_with_frozen_layers) -- [架构概述](https://docs.ultralytics.com/yolov5/tutorials/architecture_description) 🌟 新 -- [Roboflow用于数据集、标注和主动学习](https://docs.ultralytics.com/yolov5/tutorials/roboflow_datasets_integration) -- [ClearML日志记录](https://docs.ultralytics.com/yolov5/tutorials/clearml_logging_integration) 🌟 新 -- [使用Neural Magic的Deepsparse的YOLOv5](https://docs.ultralytics.com/yolov5/tutorials/neural_magic_pruning_quantization) 🌟 新 -- [Comet日志记录](https://docs.ultralytics.com/yolov5/tutorials/comet_logging_integration) 🌟 新 - -
        - -##
        模块集成
        - -
        - - -
        -
        - -
        - - - - - - - - - - - -
        - -| Roboflow | ClearML ⭐ 新 | Comet ⭐ 新 | Neural Magic ⭐ 新 | -| :--------------------------------------------------------------------------------: | :-------------------------------------------------------------------------: | :--------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------: | -| 将您的自定义数据集进行标注并直接导出到 YOLOv5 以进行训练 [Roboflow](https://roboflow.com/?ref=ultralytics) | 自动跟踪、可视化甚至远程训练 YOLOv5 [ClearML](https://cutt.ly/yolov5-readme-clearml)(开源!) | 永远免费,[Comet](https://bit.ly/yolov5-readme-comet2)可让您保存 YOLOv5 模型、恢复训练以及交互式可视化和调试预测 | 使用 [Neural Magic DeepSparse](https://bit.ly/yolov5-neuralmagic),运行 YOLOv5 推理的速度最高可提高6倍 | - -##
        Ultralytics HUB
        - -[Ultralytics HUB](https://bit.ly/ultralytics_hub) 是我们的⭐**新的**用于可视化数据集、训练 YOLOv5 🚀 模型并以无缝体验部署到现实世界的无代码解决方案。现在开始 **免费** 使用他! - - - - -##
        为什么选择 YOLOv5
        - -YOLOv5 超级容易上手,简单易学。我们优先考虑现实世界的结果。 - -

        -
        - YOLOv5-P5 640 图 - -

        -
        -
        - 图表笔记 - -- **COCO AP val** 表示 mAP@0.5:0.95 指标,在 [COCO val2017](http://cocodataset.org) 数据集的 5000 张图像上测得, 图像包含 256 到 1536 各种推理大小。 -- **显卡推理速度** 为在 [COCO val2017](http://cocodataset.org) 数据集上的平均推理时间,使用 [AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/) V100实例,batchsize 为 32 。 -- **EfficientDet** 数据来自 [google/automl](https://github.com/google/automl) , batchsize 为32。 -- **复现命令** 为 `python val.py --task study --data coco.yaml --iou 0.7 --weights yolov5n6.pt yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt` - -
        - -### 预训练模型 - -| 模型 | 尺寸
(像素) | mAP<sup>val</sup><br>50-95 | mAP<sup>val</sup><br>50 | 推理速度<br>CPU b1<br>(ms) | 推理速度<br>V100 b1<br>(ms) | 速度<br>V100 b32<br>(ms) | 参数量<br>(M) | FLOPs
        @640 (B) | -| ---------------------------------------------------------------------------------------------- | --------------- | -------------------- | ----------------- | --------------------------- | ---------------------------- | --------------------------- | --------------- | ---------------------- | -| [YOLOv5n](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n.pt) | 640 | 28.0 | 45.7 | **45** | **6.3** | **0.6** | **1.9** | **4.5** | -| [YOLOv5s](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt) | 640 | 37.4 | 56.8 | 98 | 6.4 | 0.9 | 7.2 | 16.5 | -| [YOLOv5m](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m.pt) | 640 | 45.4 | 64.1 | 224 | 8.2 | 1.7 | 21.2 | 49.0 | -| [YOLOv5l](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l.pt) | 640 | 49.0 | 67.3 | 430 | 10.1 | 2.7 | 46.5 | 109.1 | -| [YOLOv5x](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x.pt) | 640 | 50.7 | 68.9 | 766 | 12.1 | 4.8 | 86.7 | 205.7 | -| | | | | | | | | | -| [YOLOv5n6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n6.pt) | 1280 | 36.0 | 54.4 | 153 | 8.1 | 2.1 | 3.2 | 4.6 | -| [YOLOv5s6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s6.pt) | 1280 | 44.8 | 63.7 | 385 | 8.2 | 3.6 | 12.6 | 16.8 | -| [YOLOv5m6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m6.pt) | 1280 | 51.3 | 69.3 | 887 | 11.1 | 6.8 | 35.7 | 50.0 | -| [YOLOv5l6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l6.pt) | 1280 | 53.7 | 71.3 | 1784 | 15.8 | 10.5 | 76.8 | 111.4 | -| [YOLOv5x6](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x6.pt)
+[TTA] | 1280<br>1536 | 55.0<br>**55.8** | 72.7<br>**72.7** | 3136<br>- | 26.2<br>- | 19.4<br>- | 140.7<br>- | 209.8
        - | - -
        - 笔记 - -- 所有模型都使用默认配置,训练 300 epochs。n和s模型使用 [hyp.scratch-low.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-low.yaml) ,其他模型都使用 [hyp.scratch-high.yaml](https://github.com/ultralytics/yolov5/blob/master/data/hyps/hyp.scratch-high.yaml) 。 -- \*\*mAPval\*\*在单模型单尺度上计算,数据集使用 [COCO val2017](http://cocodataset.org) 。
        复现命令 `python val.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65` -- **推理速度**在 COCO val 图像总体时间上进行平均得到,测试环境使用[AWS p3.2xlarge](https://aws.amazon.com/ec2/instance-types/p3/)实例。 NMS 时间 (大约 1 ms/img) 不包括在内。
        复现命令 `python val.py --data coco.yaml --img 640 --task speed --batch 1` -- **TTA** [测试时数据增强](https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation) 包括反射和尺度变换。
        复现命令 `python val.py --data coco.yaml --img 1536 --iou 0.7 --augment` - -
        - -##
        实例分割模型 ⭐ 新
        - -我们新的 YOLOv5 [release v7.0](https://github.com/ultralytics/yolov5/releases/v7.0) 实例分割模型是世界上最快和最准确的模型,击败所有当前 [SOTA 基准](https://paperswithcode.com/sota/real-time-instance-segmentation-on-mscoco)。我们使它非常易于训练、验证和部署。更多细节请查看 [发行说明](https://github.com/ultralytics/yolov5/releases/v7.0) 或访问我们的 [YOLOv5 分割 Colab 笔记本](https://github.com/ultralytics/yolov5/blob/master/segment/tutorial.ipynb) 以快速入门。 - -
        - 实例分割模型列表 - -
        - -
        - - -
        - -我们使用 A100 GPU 在 COCO 上以 640 图像大小训练了 300 epochs 得到 YOLOv5 分割模型。我们将所有模型导出到 ONNX FP32 以进行 CPU 速度测试,并导出到 TensorRT FP16 以进行 GPU 速度测试。为了便于再现,我们在 Google [Colab Pro](https://colab.research.google.com/signup) 上进行了所有速度测试。 - -| 模型 | 尺寸
(像素) | mAP<sup>box</sup><br>50-95 | mAP<sup>mask</sup><br>50-95 | 训练时长<br>300 epochs<br>A100 GPU(小时) | 推理速度<br>ONNX CPU<br>(ms) | 推理速度<br>TRT A100<br>(ms) | 参数量<br>(M) | FLOPs
        @640 (B) | -| ------------------------------------------------------------------------------------------ | --------------- | -------------------- | --------------------- | --------------------------------------- | ----------------------------- | ----------------------------- | --------------- | ---------------------- | -| [YOLOv5n-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n-seg.pt) | 640 | 27.6 | 23.4 | 80:17 | **62.7** | **1.2** | **2.0** | **7.1** | -| [YOLOv5s-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s-seg.pt) | 640 | 37.6 | 31.7 | 88:16 | 173.3 | 1.4 | 7.6 | 26.4 | -| [YOLOv5m-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m-seg.pt) | 640 | 45.0 | 37.1 | 108:36 | 427.0 | 2.2 | 22.0 | 70.8 | -| [YOLOv5l-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l-seg.pt) | 640 | 49.0 | 39.9 | 66:43 (2x) | 857.4 | 2.9 | 47.9 | 147.7 | -| [YOLOv5x-seg](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x-seg.pt) | 640 | **50.7** | **41.4** | 62:56 (3x) | 1579.2 | 4.5 | 88.8 | 265.7 | - -- 所有模型使用 SGD 优化器训练, 都使用 `lr0=0.01` 和 `weight_decay=5e-5` 参数, 图像大小为 640 。
Training logs are available at https://wandb.ai/glenn-jocher/YOLOv5_v70_official
- **Accuracy** values are for single-model single-scale on the COCO dataset.<br>Reproduce with `python segment/val.py --data coco.yaml --weights yolov5s-seg.pt`
- **Speed** is averaged over 100 inference images using a [Colab Pro](https://colab.research.google.com/signup) A100 High-RAM instance. Values indicate inference speed only (NMS adds about 1 ms per image).<br>Reproduce with `python segment/val.py --data coco.yaml --weights yolov5s-seg.pt --batch 1`
- **Export** to ONNX at FP32 and TensorRT at FP16 is done with `export.py`.<br>Reproduce with `python export.py --weights yolov5s-seg.pt --include engine --device 0 --half`
        - -
        - 分割模型使用示例  Open In Colab - -### 训练 - -YOLOv5分割训练支持自动下载 COCO128-seg 分割数据集,用户仅需在启动指令中包含 `--data coco128-seg.yaml` 参数。 若要手动下载,使用命令 `bash data/scripts/get_coco.sh --train --val --segments`, 在下载完毕后,使用命令 `python train.py --data coco.yaml` 开启训练。 - -```bash -# 单 GPU -python segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640 - -# 多 GPU, DDP 模式 -python -m torch.distributed.run --nproc_per_node 4 --master_port 1 segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640 --device 0,1,2,3 -``` - -### 验证 - -在 COCO 数据集上验证 YOLOv5s-seg mask mAP: - -```bash -bash data/scripts/get_coco.sh --val --segments # 下载 COCO val segments 数据集 (780MB, 5000 images) -python segment/val.py --weights yolov5s-seg.pt --data coco.yaml --img 640 # 验证 -``` - -### 预测 - -使用预训练的 YOLOv5m-seg.pt 来预测 bus.jpg: - -```bash -python segment/predict.py --weights yolov5m-seg.pt --source data/images/bus.jpg -``` - -```python -model = torch.hub.load( - "ultralytics/yolov5", "custom", "yolov5m-seg.pt" -) # 从load from PyTorch Hub 加载模型 (WARNING: 推理暂未支持) -``` - -| ![zidane](https://user-images.githubusercontent.com/26833433/203113421-decef4c4-183d-4a0a-a6c2-6435b33bc5d3.jpg) | ![bus](https://user-images.githubusercontent.com/26833433/203113416-11fe0025-69f7-4874-a0a6-65d0bfe2999a.jpg) | -| ---------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- | - -### 模型导出 - -将 YOLOv5s-seg 模型导出到 ONNX 和 TensorRT: - -```bash -python export.py --weights yolov5s-seg.pt --include onnx engine --img 640 --device 0 -``` - -
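Once a segmentation checkpoint has been exported as above, it can be run outside of PyTorch. Below is a minimal `onnxruntime` sketch; the input tensor name `images`, the 640×640 size, and the 0–1 normalization are assumptions based on the default export settings:

```python
import cv2
import numpy as np
import onnxruntime as ort

# Load the ONNX model produced by export.py
session = ort.InferenceSession("yolov5s-seg.onnx", providers=["CPUExecutionProvider"])

# Preprocess: BGR -> RGB, resize to 640x640, NCHW float32 scaled to [0, 1]
image = cv2.imread("data/images/bus.jpg")
blob = cv2.resize(image, (640, 640))[:, :, ::-1].transpose(2, 0, 1)[None].astype(np.float32) / 255.0

# Raw outputs: detection predictions plus mask prototypes (still require NMS and mask decoding)
outputs = session.run(None, {"images": blob})
print([o.shape for o in outputs])
```

The letterbox resize, NMS, and mask decoding performed by `segment/predict.py` are omitted here, so this sketch only verifies that the exported graph runs.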
## Classification ⭐ NEW

YOLOv5 [release v6.2](https://github.com/ultralytics/yolov5/releases) brings support for classification model training, validation and deployment! See the [release notes](https://github.com/ultralytics/yolov5/releases/v6.2) for full details and visit our [YOLOv5 Classification Colab Notebook](https://github.com/ultralytics/yolov5/blob/master/classify/tutorial.ipynb) for quickstart tutorials.
Classification model list
        - -我们使用 4xA100 实例在 ImageNet 上训练了 90 个 epochs 得到 YOLOv5-cls 分类模型,我们训练了 ResNet 和 EfficientNet 模型以及相同的默认训练设置以进行比较。我们将所有模型导出到 ONNX FP32 以进行 CPU 速度测试,并导出到 TensorRT FP16 以进行 GPU 速度测试。为了便于重现,我们在 Google 上进行了所有速度测试 [Colab Pro](https://colab.research.google.com/signup) 。 - -| 模型 | 尺寸
(像素) | acc<br>top1 | acc<br>top5 | 训练时长<br>90 epochs<br>4xA100(小时) | 推理速度<br>ONNX CPU<br>(ms) | 推理速度<br>TensorRT V100<br>(ms) | 参数<br>(M) | FLOPs
        @640 (B) | -| -------------------------------------------------------------------------------------------------- | --------------- | ---------------- | ---------------- | ------------------------------------ | ----------------------------- | ---------------------------------- | -------------- | ---------------------- | -| [YOLOv5n-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5n-cls.pt) | 224 | 64.6 | 85.4 | 7:59 | **3.3** | **0.5** | **2.5** | **0.5** | -| [YOLOv5s-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s-cls.pt) | 224 | 71.5 | 90.2 | 8:09 | 6.6 | 0.6 | 5.4 | 1.4 | -| [YOLOv5m-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5m-cls.pt) | 224 | 75.9 | 92.9 | 10:06 | 15.5 | 0.9 | 12.9 | 3.9 | -| [YOLOv5l-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5l-cls.pt) | 224 | 78.0 | 94.0 | 11:56 | 26.9 | 1.4 | 26.5 | 8.5 | -| [YOLOv5x-cls](https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5x-cls.pt) | 224 | **79.0** | **94.4** | 15:04 | 54.3 | 1.8 | 48.1 | 15.9 | -| | | | | | | | | | -| [ResNet18](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet18.pt) | 224 | 70.3 | 89.5 | **6:47** | 11.2 | 0.5 | 11.7 | 3.7 | -| [Resnetzch](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet34.pt) | 224 | 73.9 | 91.8 | 8:33 | 20.6 | 0.9 | 21.8 | 7.4 | -| [ResNet50](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet50.pt) | 224 | 76.8 | 93.4 | 11:10 | 23.4 | 1.0 | 25.6 | 8.5 | -| [ResNet101](https://github.com/ultralytics/yolov5/releases/download/v7.0/resnet101.pt) | 224 | 78.5 | 94.3 | 17:10 | 42.1 | 1.9 | 44.5 | 15.9 | -| | | | | | | | | | -| [EfficientNet_b0](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b0.pt) | 224 | 75.1 | 92.4 | 13:03 | 12.5 | 1.3 | 5.3 | 1.0 | -| [EfficientNet_b1](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b1.pt) | 224 | 76.4 | 93.2 | 17:04 | 14.9 | 1.6 | 7.8 | 1.5 | -| [EfficientNet_b2](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b2.pt) | 224 | 76.6 | 93.4 | 17:10 | 15.9 | 1.6 | 9.1 | 1.7 | -| [EfficientNet_b3](https://github.com/ultralytics/yolov5/releases/download/v7.0/efficientnet_b3.pt) | 224 | 77.7 | 94.0 | 19:19 | 18.9 | 1.9 | 12.2 | 2.4 | - -
Table Notes (click to expand)

- All checkpoints are trained for 90 epochs with the SGD optimizer, `lr0=0.001`, `weight_decay=5e-5`, image size 224, and all default settings.<br>Training logs are available at https://wandb.ai/glenn-jocher/YOLOv5-Classifier-v6-2
- **Accuracy** values are for single-model single-scale on the [ImageNet-1k](https://www.image-net.org/index.php) dataset.<br>Reproduce with `python classify/val.py --data ../datasets/imagenet --img 224`
- **Speed** is averaged over 100 inference images using a Google [Colab Pro](https://colab.research.google.com/signup) V100 High-RAM instance.<br>Reproduce with `python classify/val.py --data ../datasets/imagenet --img 224 --batch 1`
- **Export** to ONNX at FP32 and TensorRT at FP16 is done with `export.py`.<br>Reproduce with `python export.py --weights yolov5s-cls.pt --include engine onnx --imgsz 224`
        -
        - -
        - 分类训练示例  Open In Colab - -### 训练 - -YOLOv5 分类训练支持自动下载 MNIST、Fashion-MNIST、CIFAR10、CIFAR100、Imagenette、Imagewoof 和 ImageNet 数据集,命令中使用 `--data` 即可。 MNIST 示例 `--data mnist` 。 - -```bash -# 单 GPU -python classify/train.py --model yolov5s-cls.pt --data cifar100 --epochs 5 --img 224 --batch 128 - -# 多 GPU, DDP 模式 -python -m torch.distributed.run --nproc_per_node 4 --master_port 1 classify/train.py --model yolov5s-cls.pt --data imagenet --epochs 5 --img 224 --device 0,1,2,3 -``` - -### 验证 - -在 ImageNet-1k 数据集上验证 YOLOv5m-cls 的准确性: - -```bash -bash data/scripts/get_imagenet.sh --val # download ImageNet val split (6.3G, 50000 images) -python classify/val.py --weights yolov5m-cls.pt --data ../datasets/imagenet --img 224 # validate -``` - -### 预测 - -使用预训练的 YOLOv5s-cls.pt 来预测 bus.jpg: - -```bash -python classify/predict.py --weights yolov5s-cls.pt --source data/images/bus.jpg -``` - -```python -model = torch.hub.load( - "ultralytics/yolov5", "custom", "yolov5s-cls.pt" -) # load from PyTorch Hub -``` - -### 模型导出 - -将一组经过训练的 YOLOv5s-cls、ResNet 和 EfficientNet 模型导出到 ONNX 和 TensorRT: - -```bash -python export.py --weights yolov5s-cls.pt resnet50.pt efficientnet_b0.pt --include onnx engine --img 224 -``` - -
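Beyond `classify/predict.py`, a classification checkpoint can also be applied to a preprocessed tensor directly. A minimal sketch; the 224×224 resize and the ImageNet normalization constants are assumptions matching common ImageNet pipelines, not values taken from this repository:

```python
import torch
from PIL import Image
from torchvision import transforms

# Load a classification checkpoint via PyTorch Hub (as in the predict example above)
model = torch.hub.load("ultralytics/yolov5", "custom", "yolov5s-cls.pt")
model.eval()

# Preprocess one image: resize, tensorize, normalize (ImageNet statistics assumed)
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
x = preprocess(Image.open("data/images/bus.jpg").convert("RGB")).unsqueeze(0)

# Forward pass and top-5 class probabilities
with torch.no_grad():
    probs = torch.softmax(model(x), dim=1)
top5 = probs.topk(5)
print(top5.values, top5.indices)
```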
## Environments

Get started in seconds with our verified environments.
## Contribute

We love your input! We want to make contributing to YOLOv5 as easy and transparent as possible. Please see our [Contributing Guide](https://docs.ultralytics.com/help/contributing/) and fill out the [YOLOv5 Survey](https://ultralytics.com/survey?utm_source=github&utm_medium=social&utm_campaign=Survey) to send us feedback on your experience. Thank you to all our contributors!

## License

YOLOv5 is available under two different licenses:

- **AGPL-3.0 License**: See the [LICENSE](https://github.com/ultralytics/yolov5/blob/master/LICENSE) file for details.
- **Enterprise License**: Provides greater flexibility for commercial product development without the open-source requirements of AGPL-3.0. Typical use cases are embedding Ultralytics software and AI models in commercial products and applications. Request an Enterprise License at [Ultralytics Licensing](https://ultralytics.com/license).

## Contact

For YOLOv5 bug reports and feature requests please visit [GitHub Issues](https://github.com/ultralytics/yolov5/issues), and join our [Discord](https://discord.gg/n6cFeSPZdD) community for questions and discussions!
        - -[tta]: https://docs.ultralytics.com/yolov5/tutorials/test_time_augmentation diff --git a/spaces/xiaoxicc/susu/app.py b/spaces/xiaoxicc/susu/app.py deleted file mode 100644 index 9f1973ad541e64e922d89164f337b8a6c852e78b..0000000000000000000000000000000000000000 --- a/spaces/xiaoxicc/susu/app.py +++ /dev/null @@ -1,438 +0,0 @@ -# -*- coding:utf-8 -*- -import os -import logging -import sys - -import gradio as gr - -from modules.utils import * -from modules.presets import * -from modules.overwrites import * -from modules.chat_func import * -from modules.openai_func import get_usage - -logging.basicConfig( - level=logging.DEBUG, - format="%(asctime)s [%(levelname)s] [%(filename)s:%(lineno)d] %(message)s", -) - -my_api_key = "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" # 在这里输入你的 API 密钥 - -# if we are running in Docker -if os.environ.get("dockerrun") == "yes": - dockerflag = True -else: - dockerflag = False - -authflag = False - -if dockerflag: - my_api_key = os.environ.get("my_api_key") - if my_api_key == "empty": - logging.error("Please give a api key!") - sys.exit(1) - # auth - username = os.environ.get("USERNAME") - password = os.environ.get("PASSWORD") - if not (isinstance(username, type(None)) or isinstance(password, type(None))): - authflag = True -else: - if ( - not my_api_key - and os.path.exists("api_key.txt") - and os.path.getsize("api_key.txt") - ): - with open("api_key.txt", "r") as f: - my_api_key = f.read().strip() - if os.path.exists("auth.json"): - with open("auth.json", "r", encoding='utf-8') as f: - auth = json.load(f) - username = auth["username"] - password = auth["password"] - if username != "" and password != "": - authflag = True - -gr.Chatbot.postprocess = postprocess -PromptHelper.compact_text_chunks = compact_text_chunks - -with open("assets/custom.css", "r", encoding="utf-8") as f: - customCSS = f.read() - -with gr.Blocks(css=customCSS, theme=small_and_beautiful_theme) as demo: - history = gr.State([]) - token_count = gr.State([]) - promptTemplates = gr.State(load_template(get_template_names(plain=True)[0], mode=2)) - user_api_key = gr.State(my_api_key) - user_question = gr.State("") - outputing = gr.State(False) - topic = gr.State("未命名对话历史记录") - - with gr.Row(): - with gr.Column(scale=1): - gr.HTML(title) - with gr.Column(scale=4): - gr.HTML('
        Duplicate SpaceDuplicate the Space and run securely with your OpenAI API Key
        ') - with gr.Column(scale=4): - status_display = gr.Markdown(get_geoip(), elem_id="status_display") - - with gr.Row(scale=1).style(equal_height=True): - with gr.Column(scale=5): - with gr.Row(scale=1): - chatbot = gr.Chatbot(elem_id="chuanhu_chatbot").style(height="100%") - with gr.Row(scale=1): - with gr.Column(scale=12): - user_input = gr.Textbox( - show_label=False, placeholder="在这里输入", interactive=True - ).style(container=False) - with gr.Column(min_width=70, scale=1): - submitBtn = gr.Button("发送", variant="primary") - cancelBtn = gr.Button("取消", variant="secondary", visible=False) - with gr.Row(scale=1): - emptyBtn = gr.Button( - "🧹 新的对话", - ) - retryBtn = gr.Button("🔄 重新生成") - delLastBtn = gr.Button("🗑️ 删除一条对话") - reduceTokenBtn = gr.Button("♻️ 总结对话") - - with gr.Column(): - with gr.Column(min_width=50, scale=1): - with gr.Tab(label="ChatGPT"): - keyTxt = gr.Textbox( - show_label=True, - placeholder=f"OpenAI API-key...", - value=hide_middle_chars(my_api_key), - type="password", - visible=not HIDE_MY_KEY, - label="API-Key", - ) - usageTxt = gr.Markdown(get_usage(my_api_key), elem_id="usage_display") - model_select_dropdown = gr.Dropdown( - label="选择模型", choices=MODELS, multiselect=False, value=MODELS[0] - ) - use_streaming_checkbox = gr.Checkbox( - label="实时传输回答", value=True, visible=enable_streaming_option - ) - use_websearch_checkbox = gr.Checkbox(label="使用在线搜索", value=False) - language_select_dropdown = gr.Dropdown( - label="选择回复语言(针对搜索&索引功能)", - choices=REPLY_LANGUAGES, - multiselect=False, - value=REPLY_LANGUAGES[0], - ) - index_files = gr.Files(label="上传索引文件", type="file", multiple=True) - - with gr.Tab(label="Prompt"): - systemPromptTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入System Prompt...", - label="System prompt", - value=initial_prompt, - lines=10, - ).style(container=False) - with gr.Accordion(label="加载Prompt模板", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - templateFileSelectDropdown = gr.Dropdown( - label="选择Prompt模板集合文件", - choices=get_template_names(plain=True), - multiselect=False, - value=get_template_names(plain=True)[0], - ).style(container=False) - with gr.Column(scale=1): - templateRefreshBtn = gr.Button("🔄 刷新") - with gr.Row(): - with gr.Column(): - templateSelectDropdown = gr.Dropdown( - label="从Prompt模板中加载", - choices=load_template( - get_template_names(plain=True)[0], mode=1 - ), - multiselect=False, - value=load_template( - get_template_names(plain=True)[0], mode=1 - )[0], - ).style(container=False) - - with gr.Tab(label="保存/加载"): - with gr.Accordion(label="保存/加载对话历史记录", open=True): - with gr.Column(): - with gr.Row(): - with gr.Column(scale=6): - historyFileSelectDropdown = gr.Dropdown( - label="从列表中加载对话", - choices=get_history_names(plain=True), - multiselect=False, - value=get_history_names(plain=True)[0], - ) - with gr.Column(scale=1): - historyRefreshBtn = gr.Button("🔄 刷新") - with gr.Row(): - with gr.Column(scale=6): - saveFileName = gr.Textbox( - show_label=True, - placeholder=f"设置文件名: 默认为.json,可选为.md", - label="设置保存文件名", - value="对话历史记录", - ).style(container=True) - with gr.Column(scale=1): - saveHistoryBtn = gr.Button("💾 保存对话") - exportMarkdownBtn = gr.Button("📝 导出为Markdown") - gr.Markdown("默认保存于history文件夹") - with gr.Row(): - with gr.Column(): - downloadFile = gr.File(interactive=True) - - with gr.Tab(label="高级"): - gr.Markdown("# ⚠️ 务必谨慎更改 ⚠️\n\n如果无法使用请恢复默认设置") - default_btn = gr.Button("🔙 恢复默认设置") - - with gr.Accordion("参数", open=False): - top_p = gr.Slider( - minimum=-0, - maximum=1.0, - 
value=1.0, - step=0.05, - interactive=True, - label="Top-p", - ) - temperature = gr.Slider( - minimum=-0, - maximum=2.0, - value=1.0, - step=0.1, - interactive=True, - label="Temperature", - ) - - with gr.Accordion("网络设置", open=False): - apiurlTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入API地址...", - label="API地址", - value="https://api.openai.com/v1/chat/completions", - lines=2, - ) - changeAPIURLBtn = gr.Button("🔄 切换API地址") - proxyTxt = gr.Textbox( - show_label=True, - placeholder=f"在这里输入代理地址...", - label="代理地址(示例:http://127.0.0.1:10809)", - value="", - lines=2, - ) - changeProxyBtn = gr.Button("🔄 设置代理地址") - - gr.Markdown(description) - - chatgpt_predict_args = dict( - fn=predict, - inputs=[ - user_api_key, - systemPromptTxt, - history, - user_question, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - use_websearch_checkbox, - index_files, - language_select_dropdown, - ], - outputs=[chatbot, history, status_display, token_count], - show_progress=True, - ) - - start_outputing_args = dict( - fn=start_outputing, - inputs=[], - outputs=[submitBtn, cancelBtn], - show_progress=True, - ) - - end_outputing_args = dict( - fn=end_outputing, inputs=[], outputs=[submitBtn, cancelBtn] - ) - - reset_textbox_args = dict( - fn=reset_textbox, inputs=[], outputs=[user_input] - ) - - transfer_input_args = dict( - fn=transfer_input, inputs=[user_input], outputs=[user_question, user_input], show_progress=True - ) - - get_usage_args = dict( - fn=get_usage, inputs=[user_api_key], outputs=[usageTxt], show_progress=False - ) - - # Chatbot - cancelBtn.click(cancel_outputing, [], []) - - user_input.submit(**transfer_input_args).then(**chatgpt_predict_args).then(**end_outputing_args) - user_input.submit(**get_usage_args) - - submitBtn.click(**transfer_input_args).then(**chatgpt_predict_args).then(**end_outputing_args) - submitBtn.click(**get_usage_args) - - emptyBtn.click( - reset_state, - outputs=[chatbot, history, token_count, status_display], - show_progress=True, - ) - emptyBtn.click(**reset_textbox_args) - - retryBtn.click(**reset_textbox_args) - retryBtn.click( - retry, - [ - user_api_key, - systemPromptTxt, - history, - chatbot, - token_count, - top_p, - temperature, - use_streaming_checkbox, - model_select_dropdown, - language_select_dropdown, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ) - retryBtn.click(**get_usage_args) - - delLastBtn.click( - delete_last_conversation, - [chatbot, history, token_count], - [chatbot, history, token_count, status_display], - show_progress=True, - ) - - reduceTokenBtn.click( - reduce_token_size, - [ - user_api_key, - systemPromptTxt, - history, - chatbot, - token_count, - top_p, - temperature, - gr.State(0), - model_select_dropdown, - language_select_dropdown, - ], - [chatbot, history, status_display, token_count], - show_progress=True, - ) - reduceTokenBtn.click(**get_usage_args) - - # ChatGPT - keyTxt.change(submit_key, keyTxt, [user_api_key, status_display]).then(**get_usage_args) - - # Template - templateRefreshBtn.click(get_template_names, None, [templateFileSelectDropdown]) - templateFileSelectDropdown.change( - load_template, - [templateFileSelectDropdown], - [promptTemplates, templateSelectDropdown], - show_progress=True, - ) - templateSelectDropdown.change( - get_template_content, - [promptTemplates, templateSelectDropdown, systemPromptTxt], - [systemPromptTxt], - show_progress=True, - ) - - # S&L - saveHistoryBtn.click( - save_chat_history, - [saveFileName, 
systemPromptTxt, history, chatbot], - downloadFile, - show_progress=True, - ) - saveHistoryBtn.click(get_history_names, None, [historyFileSelectDropdown]) - exportMarkdownBtn.click( - export_markdown, - [saveFileName, systemPromptTxt, history, chatbot], - downloadFile, - show_progress=True, - ) - historyRefreshBtn.click(get_history_names, None, [historyFileSelectDropdown]) - historyFileSelectDropdown.change( - load_chat_history, - [historyFileSelectDropdown, systemPromptTxt, history, chatbot], - [saveFileName, systemPromptTxt, history, chatbot], - show_progress=True, - ) - downloadFile.change( - load_chat_history, - [downloadFile, systemPromptTxt, history, chatbot], - [saveFileName, systemPromptTxt, history, chatbot], - ) - - # Advanced - default_btn.click( - reset_default, [], [apiurlTxt, proxyTxt, status_display], show_progress=True - ) - changeAPIURLBtn.click( - change_api_url, - [apiurlTxt], - [status_display], - show_progress=True, - ) - changeProxyBtn.click( - change_proxy, - [proxyTxt], - [status_display], - show_progress=True, - ) - -logging.info( - colorama.Back.GREEN - + "\n川虎的温馨提示:访问 http://localhost:7860 查看界面" - + colorama.Style.RESET_ALL -) -# 默认开启本地服务器,默认可以直接从IP访问,默认不创建公开分享链接 -demo.title = "川虎ChatGPT 🚀" - -if __name__ == "__main__": - reload_javascript() - # if running in Docker - if dockerflag: - if authflag: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - server_name="0.0.0.0", - server_port=7860, - auth=(username, password), - favicon_path="./assets/favicon.ico", - ) - else: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - server_name="0.0.0.0", - server_port=7860, - share=False, - favicon_path="./assets/favicon.ico", - ) - # if not running in Docker - else: - if authflag: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - share=False, - auth=(username, password), - favicon_path="./assets/favicon.ico", - inbrowser=True, - ) - else: - demo.queue(concurrency_count=CONCURRENT_COUNT).launch( - share=False, favicon_path="./assets/favicon.ico", inbrowser=True - ) # 改为 share=True 可以创建公开分享链接 - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=7860, share=False) # 可自定义端口 - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(server_name="0.0.0.0", server_port=7860,auth=("在这里填写用户名", "在这里填写密码")) # 可设置用户名与密码 - # demo.queue(concurrency_count=CONCURRENT_COUNT).launch(auth=("在这里填写用户名", "在这里填写密码")) # 适合Nginx反向代理 diff --git a/spaces/xl2533/MakeInstruction/ape/llm.py b/spaces/xl2533/MakeInstruction/ape/llm.py deleted file mode 100644 index 9bc6fbde54f36278ccc9da3779ee56955f0f32bf..0000000000000000000000000000000000000000 --- a/spaces/xl2533/MakeInstruction/ape/llm.py +++ /dev/null @@ -1,87 +0,0 @@ -# -*-coding:utf-8 -*- -from tqdm import tqdm -import tiktoken -from ape.prompt import MyTemplate -from langchain.chat_models import ChatOpenAI -from langchain.llms import OpenAI -from langchain.prompts.chat import ( - ChatPromptTemplate, - SystemMessagePromptTemplate, - HumanMessagePromptTemplate, -) -from langchain.prompts import PromptTemplate -from langchain.chains.llm import LLMChain - -#默认使用davinci-003来测试和评估(可控性高),使用ChatGPT生成指令(便宜) -Cost = { - 'davinci': 0.02, - 'chatgpt': 0.004 -} - - -class LLMGPT(object): - encoding = tiktoken.get_encoding("cl100k_base") - def __init__(self, openai_key): - self.gen_llm = ChatOpenAI(openai_api_key=openai_key, max_tokens=2000, temperature=0.7, verbose=True) - self.eval_llm = OpenAI(openai_api_key=openai_key, max_tokens=0, temperature=0.7, echo=True, logprobs=1) - self.test_llm = 
OpenAI(openai_api_key=openai_key, max_tokens=2000, temperature=0.7, verbose=True) - self.gen_chain = None - self.eval_chain = None - - @staticmethod - def confirm_cost(text, mode): - if mode == 'train': - cost = 0.02 - else: - cost = 0.0004 - - num_tokens = len(LLMGPT.encoding.encode(text)) - total_price = ((num_tokens / 1000) * cost) - return total_price - - def generate_instruction(self, gen_prompt, few_shot): - """ - Generate instruction - """ - if not gen_prompt: - gen_prompt = MyTemplate['gen_user_prompt'] - prompt = ChatPromptTemplate.from_messages( - [ - SystemMessagePromptTemplate.from_template(MyTemplate['gen_sys_prompt']), - HumanMessagePromptTemplate.from_template(gen_prompt), - ] - ) - self.gen_chain = LLMChain(llm=self.gen_llm, prompt=prompt) - - prompt = '' - for shot in few_shot: - prompt += MyTemplate['few_shot_prompt'].format(input=shot[0], output=shot[1]) - result = self.gen_chain({'few_shot': prompt}) - return result - - def generate_output(self, test_prompt, instruction, input): - if not test_prompt: - test_prompt = MyTemplate['test_prompt'] - prompt = PromptTemplate.from_template(test_prompt) - test_chain = LLMChain(llm=self.test_llm, prompt=prompt) - output = test_chain({'input': input, 'instruction': instruction}) - return output - - def generate_logprobs(self, eval_prompt, instruction, eval_set): - """ - Eval instruction - """ - - if not eval_prompt: - eval_prompt = MyTemplate['eval_prompt'] - prompt = PromptTemplate.from_template(eval_prompt) - eval_chain = LLMChain(llm=self.eval_llm, prompt=prompt) - score = 0 - for sample in eval_set: - output_len = len(LLMGPT.encoding.encode(sample[1])) - llmresult = eval_chain.generate([{'instruction': instruction, 'input': sample[0], 'output': sample[1]}]) - logprobs = llmresult.generations[0][0].generation_info['logprobs'] - token_probs = logprobs['token_logprobs'] - score += sum(token_probs[-output_len:]) - ## TODO:转成批请求,解决Rate Limit问题 - return score \ No newline at end of file diff --git a/spaces/xuetao/bingo3/src/lib/hooks/use-copy-to-clipboard.tsx b/spaces/xuetao/bingo3/src/lib/hooks/use-copy-to-clipboard.tsx deleted file mode 100644 index 62f7156dca246c46b213151af003a3a177977ccf..0000000000000000000000000000000000000000 --- a/spaces/xuetao/bingo3/src/lib/hooks/use-copy-to-clipboard.tsx +++ /dev/null @@ -1,33 +0,0 @@ -'use client' - -import * as React from 'react' - -export interface useCopyToClipboardProps { - timeout?: number -} - -export function useCopyToClipboard({ - timeout = 2000 -}: useCopyToClipboardProps) { - const [isCopied, setIsCopied] = React.useState(false) - - const copyToClipboard = (value: string) => { - if (typeof window === 'undefined' || !navigator.clipboard?.writeText) { - return - } - - if (!value) { - return - } - - navigator.clipboard.writeText(value).then(() => { - setIsCopied(true) - - setTimeout(() => { - setIsCopied(false) - }, timeout) - }) - } - - return { isCopied, copyToClipboard } -} diff --git a/spaces/yaoshining/text-generation-webui/extensions/api/script.py b/spaces/yaoshining/text-generation-webui/extensions/api/script.py deleted file mode 100644 index 5d1b1a68c4418d51d716ec2a2b06e80ec2f4d27a..0000000000000000000000000000000000000000 --- a/spaces/yaoshining/text-generation-webui/extensions/api/script.py +++ /dev/null @@ -1,8 +0,0 @@ -import extensions.api.blocking_api as blocking_api -import extensions.api.streaming_api as streaming_api -from modules import shared - - -def setup(): - blocking_api.start_server(shared.args.api_blocking_port, share=shared.args.public_api) - 
streaming_api.start_server(shared.args.api_streaming_port, share=shared.args.public_api) diff --git a/spaces/ybelkada/interfacegan_pp/utils/manipulator.py b/spaces/ybelkada/interfacegan_pp/utils/manipulator.py deleted file mode 100644 index 051a83da1f4b5589fe76ff2e4d09e22f0fde3340..0000000000000000000000000000000000000000 --- a/spaces/ybelkada/interfacegan_pp/utils/manipulator.py +++ /dev/null @@ -1,247 +0,0 @@ -# python3.7 -"""Utility functions for latent codes manipulation.""" - -import numpy as np -from sklearn import svm - -from .logger import setup_logger - -__all__ = ['train_boundary', 'project_boundary', 'linear_interpolate'] - - -def train_boundary(latent_codes, - scores, - chosen_num_or_ratio=0.02, - split_ratio=0.7, - invalid_value=None, - logger=None): - """Trains boundary in latent space with offline predicted attribute scores. - - Given a collection of latent codes and the attribute scores predicted from the - corresponding images, this function will train a linear SVM by treating it as - a bi-classification problem. Basically, the samples with highest attribute - scores are treated as positive samples, while those with lowest scores as - negative. For now, the latent code can ONLY be with 1 dimension. - - NOTE: The returned boundary is with shape (1, latent_space_dim), and also - normalized with unit norm. - - Args: - latent_codes: Input latent codes as training data. - scores: Input attribute scores used to generate training labels. - chosen_num_or_ratio: How many samples will be chosen as positive (negative) - samples. If this field lies in range (0, 0.5], `chosen_num_or_ratio * - latent_codes_num` will be used. Otherwise, `min(chosen_num_or_ratio, - 0.5 * latent_codes_num)` will be used. (default: 0.02) - split_ratio: Ratio to split training and validation sets. (default: 0.7) - invalid_value: This field is used to filter out data. (default: None) - logger: Logger for recording log messages. If set as `None`, a default - logger, which prints messages from all levels to screen, will be created. - (default: None) - - Returns: - A decision boundary with type `numpy.ndarray`. - - Raises: - ValueError: If the input `latent_codes` or `scores` are with invalid format. 
- """ - if not logger: - logger = setup_logger(work_dir='', logger_name='train_boundary') - - if (not isinstance(latent_codes, np.ndarray) or - not len(latent_codes.shape) == 2): - raise ValueError(f'Input `latent_codes` should be with type' - f'`numpy.ndarray`, and shape [num_samples, ' - f'latent_space_dim]!') - num_samples = latent_codes.shape[0] - latent_space_dim = latent_codes.shape[1] - if (not isinstance(scores, np.ndarray) or not len(scores.shape) == 2 or - not scores.shape[0] == num_samples or not scores.shape[1] == 1): - raise ValueError(f'Input `scores` should be with type `numpy.ndarray`, and ' - f'shape [num_samples, 1], where `num_samples` should be ' - f'exactly same as that of input `latent_codes`!') - if chosen_num_or_ratio <= 0: - raise ValueError(f'Input `chosen_num_or_ratio` should be positive, ' - f'but {chosen_num_or_ratio} received!') - - logger.info(f'Filtering training data.') - if invalid_value is not None: - latent_codes = latent_codes[scores[:, 0] != invalid_value] - scores = scores[scores[:, 0] != invalid_value] - - logger.info(f'Sorting scores to get positive and negative samples.') - sorted_idx = np.argsort(scores, axis=0)[::-1, 0] - latent_codes = latent_codes[sorted_idx] - scores = scores[sorted_idx] - num_samples = latent_codes.shape[0] - if 0 < chosen_num_or_ratio <= 1: - chosen_num = int(num_samples * chosen_num_or_ratio) - else: - chosen_num = int(chosen_num_or_ratio) - chosen_num = min(chosen_num, num_samples // 2) - - logger.info(f'Spliting training and validation sets:') - train_num = int(chosen_num * split_ratio) - val_num = chosen_num - train_num - # Positive samples. - positive_idx = np.arange(chosen_num) - np.random.shuffle(positive_idx) - positive_train = latent_codes[:chosen_num][positive_idx[:train_num]] - positive_val = latent_codes[:chosen_num][positive_idx[train_num:]] - # Negative samples. - negative_idx = np.arange(chosen_num) - np.random.shuffle(negative_idx) - negative_train = latent_codes[-chosen_num:][negative_idx[:train_num]] - negative_val = latent_codes[-chosen_num:][negative_idx[train_num:]] - # Training set. - train_data = np.concatenate([positive_train, negative_train], axis=0) - train_label = np.concatenate([np.ones(train_num, dtype=np.int), - np.zeros(train_num, dtype=np.int)], axis=0) - logger.info(f' Training: {train_num} positive, {train_num} negative.') - # Validation set. - val_data = np.concatenate([positive_val, negative_val], axis=0) - val_label = np.concatenate([np.ones(val_num, dtype=np.int), - np.zeros(val_num, dtype=np.int)], axis=0) - logger.info(f' Validation: {val_num} positive, {val_num} negative.') - # Remaining set. 
- remaining_num = num_samples - chosen_num * 2 - remaining_data = latent_codes[chosen_num:-chosen_num] - remaining_scores = scores[chosen_num:-chosen_num] - decision_value = (scores[0] + scores[-1]) / 2 - remaining_label = np.ones(remaining_num, dtype=np.int) - remaining_label[remaining_scores.ravel() < decision_value] = 0 - remaining_positive_num = np.sum(remaining_label == 1) - remaining_negative_num = np.sum(remaining_label == 0) - logger.info(f' Remaining: {remaining_positive_num} positive, ' - f'{remaining_negative_num} negative.') - - logger.info(f'Training boundary.') - clf = svm.SVC(kernel='linear') - classifier = clf.fit(train_data, train_label) - logger.info(f'Finish training.') - - if val_num: - val_prediction = classifier.predict(val_data) - correct_num = np.sum(val_label == val_prediction) - logger.info(f'Accuracy for validation set: ' - f'{correct_num} / {val_num * 2} = ' - f'{correct_num / (val_num * 2):.6f}') - - if remaining_num: - remaining_prediction = classifier.predict(remaining_data) - correct_num = np.sum(remaining_label == remaining_prediction) - logger.info(f'Accuracy for remaining set: ' - f'{correct_num} / {remaining_num} = ' - f'{correct_num / remaining_num:.6f}') - - a = classifier.coef_.reshape(1, latent_space_dim).astype(np.float32) - return a / np.linalg.norm(a) - - -def project_boundary(primal, *args): - """Projects the primal boundary onto condition boundaries. - - The function is used for conditional manipulation, where the projected vector - will be subscribed from the normal direction of the original boundary. Here, - all input boundaries are supposed to have already been normalized to unit - norm, and with same shape [1, latent_space_dim]. - - Args: - primal: The primal boundary. - *args: Other boundaries as conditions. - - Returns: - A projected boundary (also normalized to unit norm), which is orthogonal to - all condition boundaries. - - Raises: - LinAlgError: If there are more than two condition boundaries and the method fails - to find a projected boundary orthogonal to all condition boundaries. 
- """ - assert len(primal.shape) == 2 and primal.shape[0] == 1 - - if not args: - return primal - if len(args) == 1: - cond = args[0] - assert (len(cond.shape) == 2 and cond.shape[0] == 1 and - cond.shape[1] == primal.shape[1]) - new = primal - primal.dot(cond.T) * cond - return new / np.linalg.norm(new) - elif len(args) == 2: - cond_1 = args[0] - cond_2 = args[1] - assert (len(cond_1.shape) == 2 and cond_1.shape[0] == 1 and - cond_1.shape[1] == primal.shape[1]) - assert (len(cond_2.shape) == 2 and cond_2.shape[0] == 1 and - cond_2.shape[1] == primal.shape[1]) - primal_cond_1 = primal.dot(cond_1.T) - primal_cond_2 = primal.dot(cond_2.T) - cond_1_cond_2 = cond_1.dot(cond_2.T) - alpha = (primal_cond_1 - primal_cond_2 * cond_1_cond_2) / ( - 1 - cond_1_cond_2 ** 2 + 1e-8) - beta = (primal_cond_2 - primal_cond_1 * cond_1_cond_2) / ( - 1 - cond_1_cond_2 ** 2 + 1e-8) - new = primal - alpha * cond_1 - beta * cond_2 - return new / np.linalg.norm(new) - else: - for cond_boundary in args: - assert (len(cond_boundary.shape) == 2 and cond_boundary.shape[0] == 1 and - cond_boundary.shape[1] == primal.shape[1]) - cond_boundaries = np.squeeze(np.asarray(args)) - A = np.matmul(cond_boundaries, cond_boundaries.T) - B = np.matmul(cond_boundaries, primal.T) - x = np.linalg.solve(A, B) - new = primal - (np.matmul(x.T, cond_boundaries)) - return new / np.linalg.norm(new) - - -def linear_interpolate(latent_code, - boundary, - start_distance=-3.0, - end_distance=3.0, - steps=10): - """Manipulates the given latent code with respect to a particular boundary. - - Basically, this function takes a latent code and a boundary as inputs, and - outputs a collection of manipulated latent codes. For example, let `steps` to - be 10, then the input `latent_code` is with shape [1, latent_space_dim], input - `boundary` is with shape [1, latent_space_dim] and unit norm, the output is - with shape [10, latent_space_dim]. The first output latent code is - `start_distance` away from the given `boundary`, while the last output latent - code is `end_distance` away from the given `boundary`. Remaining latent codes - are linearly interpolated. - - Input `latent_code` can also be with shape [1, num_layers, latent_space_dim] - to support W+ space in Style GAN. In this case, all features in W+ space will - be manipulated same as each other. Accordingly, the output will be with shape - [10, num_layers, latent_space_dim]. - - NOTE: Distance is sign sensitive. - - Args: - latent_code: The input latent code for manipulation. - boundary: The semantic boundary as reference. - start_distance: The distance to the boundary where the manipulation starts. - (default: -3.0) - end_distance: The distance to the boundary where the manipulation ends. - (default: 3.0) - steps: Number of steps to move the latent code from start position to end - position. 
(default: 10) - """ - assert (latent_code.shape[0] == 1 and boundary.shape[0] == 1 and - len(boundary.shape) == 2 and - boundary.shape[1] == latent_code.shape[-1]) - - linspace = np.linspace(start_distance, end_distance, steps) - if len(latent_code.shape) == 2: - linspace = linspace - latent_code.dot(boundary.T) - linspace = linspace.reshape(-1, 1).astype(np.float32) - return latent_code + linspace * boundary - if len(latent_code.shape) == 3: - linspace = linspace.reshape(-1, 1, 1).astype(np.float32) - return latent_code + linspace * boundary.reshape(1, 1, -1) - raise ValueError(f'Input `latent_code` should be with shape ' - f'[1, latent_space_dim] or [1, N, latent_space_dim] for ' - f'W+ space in Style GAN!\n' - f'But {latent_code.shape} is received.') diff --git a/spaces/yerfor/SyntaSpeech/egs/datasets/audio/biaobei/preprocess.py b/spaces/yerfor/SyntaSpeech/egs/datasets/audio/biaobei/preprocess.py deleted file mode 100644 index 44f48e22675a02e3a4b91a69caf7344f5a2982ef..0000000000000000000000000000000000000000 --- a/spaces/yerfor/SyntaSpeech/egs/datasets/audio/biaobei/preprocess.py +++ /dev/null @@ -1,16 +0,0 @@ -from data_gen.tts.base_preprocess import BasePreprocessor -import re - - -class BiaobeiPreprocess(BasePreprocessor): - def meta_data(self): - input_dir = self.raw_data_dir - with open(f"{input_dir}/ProsodyLabeling/000001-010000.txt", encoding='utf-8') as f: - bb_lines = f.readlines()[::2] - for l_idx, l in (enumerate([re.sub("\#\d+", "", l.split('\t')[1].strip()) for l in bb_lines])): - item_name = f'{l_idx + 1:06d}' - wav_fn = f"{input_dir}/wav/{l_idx + 1:06d}.wav" - yield {'item_name': item_name, 'wav_fn': wav_fn, 'txt': l} - -if __name__ == "__main__": - BiaobeiPreprocess().process() diff --git a/spaces/ygangang/VToonify/vtoonify/model/raft/core/datasets.py b/spaces/ygangang/VToonify/vtoonify/model/raft/core/datasets.py deleted file mode 100644 index 9991f15f4c3861c19d1a4b8766d49f83af11db70..0000000000000000000000000000000000000000 --- a/spaces/ygangang/VToonify/vtoonify/model/raft/core/datasets.py +++ /dev/null @@ -1,235 +0,0 @@ -# Data loading based on https://github.com/NVIDIA/flownet2-pytorch - -import numpy as np -import torch -import torch.utils.data as data -import torch.nn.functional as F - -import os -import math -import random -from glob import glob -import os.path as osp - -from model.raft.core.utils import frame_utils -from model.raft.core.utils.augmentor import FlowAugmentor, SparseFlowAugmentor - - -class FlowDataset(data.Dataset): - def __init__(self, aug_params=None, sparse=False): - self.augmentor = None - self.sparse = sparse - if aug_params is not None: - if sparse: - self.augmentor = SparseFlowAugmentor(**aug_params) - else: - self.augmentor = FlowAugmentor(**aug_params) - - self.is_test = False - self.init_seed = False - self.flow_list = [] - self.image_list = [] - self.extra_info = [] - - def __getitem__(self, index): - - if self.is_test: - img1 = frame_utils.read_gen(self.image_list[index][0]) - img2 = frame_utils.read_gen(self.image_list[index][1]) - img1 = np.array(img1).astype(np.uint8)[..., :3] - img2 = np.array(img2).astype(np.uint8)[..., :3] - img1 = torch.from_numpy(img1).permute(2, 0, 1).float() - img2 = torch.from_numpy(img2).permute(2, 0, 1).float() - return img1, img2, self.extra_info[index] - - if not self.init_seed: - worker_info = torch.utils.data.get_worker_info() - if worker_info is not None: - torch.manual_seed(worker_info.id) - np.random.seed(worker_info.id) - random.seed(worker_info.id) - self.init_seed = True - - index = index % 
len(self.image_list) - valid = None - if self.sparse: - flow, valid = frame_utils.readFlowKITTI(self.flow_list[index]) - else: - flow = frame_utils.read_gen(self.flow_list[index]) - - img1 = frame_utils.read_gen(self.image_list[index][0]) - img2 = frame_utils.read_gen(self.image_list[index][1]) - - flow = np.array(flow).astype(np.float32) - img1 = np.array(img1).astype(np.uint8) - img2 = np.array(img2).astype(np.uint8) - - # grayscale images - if len(img1.shape) == 2: - img1 = np.tile(img1[...,None], (1, 1, 3)) - img2 = np.tile(img2[...,None], (1, 1, 3)) - else: - img1 = img1[..., :3] - img2 = img2[..., :3] - - if self.augmentor is not None: - if self.sparse: - img1, img2, flow, valid = self.augmentor(img1, img2, flow, valid) - else: - img1, img2, flow = self.augmentor(img1, img2, flow) - - img1 = torch.from_numpy(img1).permute(2, 0, 1).float() - img2 = torch.from_numpy(img2).permute(2, 0, 1).float() - flow = torch.from_numpy(flow).permute(2, 0, 1).float() - - if valid is not None: - valid = torch.from_numpy(valid) - else: - valid = (flow[0].abs() < 1000) & (flow[1].abs() < 1000) - - return img1, img2, flow, valid.float() - - - def __rmul__(self, v): - self.flow_list = v * self.flow_list - self.image_list = v * self.image_list - return self - - def __len__(self): - return len(self.image_list) - - -class MpiSintel(FlowDataset): - def __init__(self, aug_params=None, split='training', root='datasets/Sintel', dstype='clean'): - super(MpiSintel, self).__init__(aug_params) - flow_root = osp.join(root, split, 'flow') - image_root = osp.join(root, split, dstype) - - if split == 'test': - self.is_test = True - - for scene in os.listdir(image_root): - image_list = sorted(glob(osp.join(image_root, scene, '*.png'))) - for i in range(len(image_list)-1): - self.image_list += [ [image_list[i], image_list[i+1]] ] - self.extra_info += [ (scene, i) ] # scene and frame_id - - if split != 'test': - self.flow_list += sorted(glob(osp.join(flow_root, scene, '*.flo'))) - - -class FlyingChairs(FlowDataset): - def __init__(self, aug_params=None, split='train', root='datasets/FlyingChairs_release/data'): - super(FlyingChairs, self).__init__(aug_params) - - images = sorted(glob(osp.join(root, '*.ppm'))) - flows = sorted(glob(osp.join(root, '*.flo'))) - assert (len(images)//2 == len(flows)) - - split_list = np.loadtxt('chairs_split.txt', dtype=np.int32) - for i in range(len(flows)): - xid = split_list[i] - if (split=='training' and xid==1) or (split=='validation' and xid==2): - self.flow_list += [ flows[i] ] - self.image_list += [ [images[2*i], images[2*i+1]] ] - - -class FlyingThings3D(FlowDataset): - def __init__(self, aug_params=None, root='datasets/FlyingThings3D', dstype='frames_cleanpass'): - super(FlyingThings3D, self).__init__(aug_params) - - for cam in ['left']: - for direction in ['into_future', 'into_past']: - image_dirs = sorted(glob(osp.join(root, dstype, 'TRAIN/*/*'))) - image_dirs = sorted([osp.join(f, cam) for f in image_dirs]) - - flow_dirs = sorted(glob(osp.join(root, 'optical_flow/TRAIN/*/*'))) - flow_dirs = sorted([osp.join(f, direction, cam) for f in flow_dirs]) - - for idir, fdir in zip(image_dirs, flow_dirs): - images = sorted(glob(osp.join(idir, '*.png')) ) - flows = sorted(glob(osp.join(fdir, '*.pfm')) ) - for i in range(len(flows)-1): - if direction == 'into_future': - self.image_list += [ [images[i], images[i+1]] ] - self.flow_list += [ flows[i] ] - elif direction == 'into_past': - self.image_list += [ [images[i+1], images[i]] ] - self.flow_list += [ flows[i+1] ] - - -class 
KITTI(FlowDataset): - def __init__(self, aug_params=None, split='training', root='datasets/KITTI'): - super(KITTI, self).__init__(aug_params, sparse=True) - if split == 'testing': - self.is_test = True - - root = osp.join(root, split) - images1 = sorted(glob(osp.join(root, 'image_2/*_10.png'))) - images2 = sorted(glob(osp.join(root, 'image_2/*_11.png'))) - - for img1, img2 in zip(images1, images2): - frame_id = img1.split('/')[-1] - self.extra_info += [ [frame_id] ] - self.image_list += [ [img1, img2] ] - - if split == 'training': - self.flow_list = sorted(glob(osp.join(root, 'flow_occ/*_10.png'))) - - -class HD1K(FlowDataset): - def __init__(self, aug_params=None, root='datasets/HD1k'): - super(HD1K, self).__init__(aug_params, sparse=True) - - seq_ix = 0 - while 1: - flows = sorted(glob(os.path.join(root, 'hd1k_flow_gt', 'flow_occ/%06d_*.png' % seq_ix))) - images = sorted(glob(os.path.join(root, 'hd1k_input', 'image_2/%06d_*.png' % seq_ix))) - - if len(flows) == 0: - break - - for i in range(len(flows)-1): - self.flow_list += [flows[i]] - self.image_list += [ [images[i], images[i+1]] ] - - seq_ix += 1 - - -def fetch_dataloader(args, TRAIN_DS='C+T+K+S+H'): - """ Create the data loader for the corresponding trainign set """ - - if args.stage == 'chairs': - aug_params = {'crop_size': args.image_size, 'min_scale': -0.1, 'max_scale': 1.0, 'do_flip': True} - train_dataset = FlyingChairs(aug_params, split='training') - - elif args.stage == 'things': - aug_params = {'crop_size': args.image_size, 'min_scale': -0.4, 'max_scale': 0.8, 'do_flip': True} - clean_dataset = FlyingThings3D(aug_params, dstype='frames_cleanpass') - final_dataset = FlyingThings3D(aug_params, dstype='frames_finalpass') - train_dataset = clean_dataset + final_dataset - - elif args.stage == 'sintel': - aug_params = {'crop_size': args.image_size, 'min_scale': -0.2, 'max_scale': 0.6, 'do_flip': True} - things = FlyingThings3D(aug_params, dstype='frames_cleanpass') - sintel_clean = MpiSintel(aug_params, split='training', dstype='clean') - sintel_final = MpiSintel(aug_params, split='training', dstype='final') - - if TRAIN_DS == 'C+T+K+S+H': - kitti = KITTI({'crop_size': args.image_size, 'min_scale': -0.3, 'max_scale': 0.5, 'do_flip': True}) - hd1k = HD1K({'crop_size': args.image_size, 'min_scale': -0.5, 'max_scale': 0.2, 'do_flip': True}) - train_dataset = 100*sintel_clean + 100*sintel_final + 200*kitti + 5*hd1k + things - - elif TRAIN_DS == 'C+T+K/S': - train_dataset = 100*sintel_clean + 100*sintel_final + things - - elif args.stage == 'kitti': - aug_params = {'crop_size': args.image_size, 'min_scale': -0.2, 'max_scale': 0.4, 'do_flip': False} - train_dataset = KITTI(aug_params, split='training') - - train_loader = data.DataLoader(train_dataset, batch_size=args.batch_size, - pin_memory=False, shuffle=True, num_workers=4, drop_last=True) - - print('Training with %d image pairs' % len(train_dataset)) - return train_loader - diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/bit/modeling_bit.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/bit/modeling_bit.py deleted file mode 100644 index 12a5ecd42b74cf397ac3c7875f514aedddce27cc..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/bit/modeling_bit.py +++ /dev/null @@ -1,905 +0,0 @@ -# coding=utf-8 -# Copyright 2022 Google AI and The HuggingFace Inc. team. All rights reserved. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" PyTorch BiT model. Also supports backbone for ViT hybrid.""" - -import collections -import math -from typing import Optional, Tuple - -import numpy as np -import torch -import torch.utils.checkpoint -from torch import Tensor, nn -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss - -from ...activations import ACT2FN -from ...modeling_outputs import ( - BackboneOutput, - BaseModelOutputWithNoAttention, - BaseModelOutputWithPoolingAndNoAttention, - ImageClassifierOutputWithNoAttention, -) -from ...modeling_utils import PreTrainedModel -from ...utils import ( - add_code_sample_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - logging, - replace_return_docstrings, -) -from ...utils.backbone_utils import BackboneMixin -from .configuration_bit import BitConfig - - -logger = logging.get_logger(__name__) - -# General docstring -_CONFIG_FOR_DOC = "BitConfig" - -# Base docstring -_CHECKPOINT_FOR_DOC = "google/bit-50" -_EXPECTED_OUTPUT_SHAPE = [1, 2048, 7, 7] - -# Image classification docstring -_IMAGE_CLASS_CHECKPOINT = "google/bit-50" -_IMAGE_CLASS_EXPECTED_OUTPUT = "tiger cat" - -BIT_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "google/bit-50", - # See all BiT models at https://huggingface.co/models?filter=bit -] - - -def get_padding_value(padding=None, kernel_size=7, stride=1, dilation=1) -> Tuple[Tuple, bool]: - r""" - Utility function to get the tuple padding value given the kernel_size and padding. - - Args: - padding (Union[`str`, `int`], *optional*): - Padding value, can be either `"same"`, `"valid"`. If a different value is provided the default padding from - PyTorch is used. - kernel_size (`int`, *optional*, defaults to 7): - Kernel size of the convolution layers. - stride (`int`, *optional*, defaults to 1): - Stride value of the convolution layers. - dilation (`int`, *optional*, defaults to 1): - Dilation value of the convolution layers. - """ - dynamic = False - if padding is None: - padding = ((stride - 1) + dilation * (kernel_size - 1)) // 2 - return padding, dynamic - - if isinstance(padding, str): - # for any string padding, the padding will be calculated for you, one of three ways - padding = padding.lower() - if padding == "same": - # TF compatible 'SAME' padding, has a performance and GPU memory allocation impact - if stride == 1 and (dilation * (kernel_size - 1)) % 2 == 0: - # static case, no extra overhead - padding = ((stride - 1) + dilation * (kernel_size - 1)) // 2 - else: - # dynamic 'SAME' padding, has runtime/GPU memory overhead - padding = 0 - dynamic = True - elif padding == "valid": - # 'VALID' padding, same as padding=0 - padding = 0 - else: - # Default to PyTorch style 'same'-ish symmetric padding - padding = ((stride - 1) + dilation * (kernel_size - 1)) // 2 - return padding, dynamic - - -class WeightStandardizedConv2d(nn.Conv2d): - """Conv2d with Weight Standardization. Includes TensorFlow compatible SAME padding. Used for ViT Hybrid model. 
- - Paper: [Micro-Batch Training with Batch-Channel Normalization and Weight - Standardization](https://arxiv.org/abs/1903.10520v2) - """ - - def __init__( - self, - in_channel, - out_channels, - kernel_size, - stride=1, - padding="SAME", - dilation=1, - groups=1, - bias=False, - eps=1e-6, - ): - padding, is_dynamic = get_padding_value(padding, kernel_size, stride=stride, dilation=dilation) - super().__init__( - in_channel, - out_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - bias=bias, - ) - if is_dynamic: - self.pad = DynamicPad2d(kernel_size, stride, dilation) - else: - self.pad = None - self.eps = eps - - def forward(self, hidden_state): - if self.pad is not None: - hidden_state = self.pad(hidden_state) - weight = nn.functional.batch_norm( - self.weight.reshape(1, self.out_channels, -1), None, None, training=True, momentum=0.0, eps=self.eps - ).reshape_as(self.weight) - hidden_state = nn.functional.conv2d( - hidden_state, weight, self.bias, self.stride, self.padding, self.dilation, self.groups - ) - return hidden_state - - -class BitGroupNormActivation(nn.GroupNorm): - r""" - A module that combines group normalization with an activation function. - """ - - def __init__(self, config, num_channels, eps=1e-5, affine=True, apply_activation=True): - super(BitGroupNormActivation, self).__init__(config.num_groups, num_channels, eps=eps, affine=affine) - if apply_activation: - self.activation = ACT2FN[config.hidden_act] - else: - self.activation = nn.Identity() - - def forward(self, hidden_state): - hidden_state = nn.functional.group_norm(hidden_state, self.num_groups, self.weight, self.bias, self.eps) - hidden_state = self.activation(hidden_state) - return hidden_state - - -class DynamicPad2d(nn.Module): - r""" - A module that wraps dynamic padding of any input, given the parameters of the convolutional layer and the input - hidden states. 
- """ - - def __init__(self, kernel_size, stride, dilation, value=0): - super().__init__() - # Safety checkers - if isinstance(kernel_size, int): - kernel_size = (kernel_size, kernel_size) - - if isinstance(stride, int): - stride = (stride, stride) - - if isinstance(dilation, int): - dilation = (dilation, dilation) - - self.kernel_size = kernel_size - self.stride = stride - self.dilation = dilation - self.value = value - - def compute_padding(x, kernel_size, stride, dilation): - return max((math.ceil(x / stride) - 1) * stride + (kernel_size - 1) * dilation + 1 - x, 0) - - self.compute_padding = compute_padding - - def __call__(self, input): - # Get width and height - input_height, input_width = input.size()[-2:] - - # Compute the padding values - padding_height = self.compute_padding(input_height, self.kernel_size[0], self.stride[0], self.dilation[0]) - padding_width = self.compute_padding(input_width, self.kernel_size[1], self.stride[1], self.dilation[1]) - - # apply pad - if padding_height > 0 or padding_width > 0: - input = nn.functional.pad( - input, - [ - padding_width // 2, - padding_width - padding_width // 2, - padding_height // 2, - padding_height - padding_height // 2, - ], - value=self.value, - ) - return input - - -class BitMaxPool2d(nn.MaxPool2d): - """Tensorflow like 'SAME' wrapper for 2D max pooling""" - - def __init__( - self, - kernel_size: int, - stride=None, - dilation=1, - ceil_mode=False, - padding=(0, 0), - padding_value=0, - use_dynamic_padding=True, - ): - kernel_size = kernel_size if isinstance(kernel_size, collections.abc.Iterable) else (kernel_size, kernel_size) - stride = stride if isinstance(stride, collections.abc.Iterable) else (stride, stride) - dilation = dilation if isinstance(dilation, collections.abc.Iterable) else (dilation, dilation) - super().__init__(kernel_size, stride, padding, dilation, ceil_mode) - if use_dynamic_padding: - self.pad = DynamicPad2d(kernel_size, stride, dilation, padding_value) - else: - self.pad = nn.Identity() - - def forward(self, hidden_states): - hidden_states = self.pad(hidden_states) - return nn.functional.max_pool2d( - hidden_states, self.kernel_size, self.stride, self.padding, self.dilation, self.ceil_mode - ) - - -class BitEmbeddings(nn.Module): - """ - BiT Embeddings (stem) composed of a single aggressive convolution. - """ - - def __init__(self, config: BitConfig): - super().__init__() - - self.convolution = WeightStandardizedConv2d( - config.num_channels, - config.embedding_size, - kernel_size=7, - stride=2, - eps=1e-8, - padding=config.global_padding, - ) - - self.pooler = BitMaxPool2d(kernel_size=3, stride=2, use_dynamic_padding=config.embedding_dynamic_padding) - - # Use the same padding strategy as convolutional layers - if config.global_padding is not None and config.global_padding.upper() == "SAME": - self.pad = nn.Identity() - else: - self.pad = nn.ConstantPad2d(padding=(1, 1, 1, 1), value=0.0) - - if not config.layer_type == "preactivation": - self.norm = BitGroupNormActivation(config, num_channels=config.embedding_size) - else: - self.norm = nn.Identity() - - self.num_channels = config.num_channels - - def forward(self, pixel_values: Tensor) -> Tensor: - num_channels = pixel_values.shape[1] - if num_channels != self.num_channels: - raise ValueError( - "Make sure that the channel dimension of the pixel values match with the one set in the configuration." 
- ) - - embedding = self.convolution(pixel_values) - - embedding = self.pad(embedding) - - embedding = self.norm(embedding) - - embedding = self.pooler(embedding) - - return embedding - - -# Copied from transformers.models.convnext.modeling_convnext.drop_path -def drop_path(input: torch.Tensor, drop_prob: float = 0.0, training: bool = False) -> torch.Tensor: - """ - Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks). - - Comment by Ross Wightman: This is the same as the DropConnect impl I created for EfficientNet, etc networks, - however, the original name is misleading as 'Drop Connect' is a different form of dropout in a separate paper... - See discussion: https://github.com/tensorflow/tpu/issues/494#issuecomment-532968956 ... I've opted for changing the - layer and argument names to 'drop path' rather than mix DropConnect as a layer name and use 'survival rate' as the - argument. - """ - if drop_prob == 0.0 or not training: - return input - keep_prob = 1 - drop_prob - shape = (input.shape[0],) + (1,) * (input.ndim - 1) # work with diff dim tensors, not just 2D ConvNets - random_tensor = keep_prob + torch.rand(shape, dtype=input.dtype, device=input.device) - random_tensor.floor_() # binarize - output = input.div(keep_prob) * random_tensor - return output - - -# Copied from transformers.models.beit.modeling_beit.BeitDropPath with Beit->Bit -class BitDropPath(nn.Module): - """Drop paths (Stochastic Depth) per sample (when applied in main path of residual blocks).""" - - def __init__(self, drop_prob: Optional[float] = None) -> None: - super().__init__() - self.drop_prob = drop_prob - - def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: - return drop_path(hidden_states, self.drop_prob, self.training) - - def extra_repr(self) -> str: - return "p={}".format(self.drop_prob) - - -def make_div(value, divisor=8): - min_value = divisor - new_value = max(min_value, int(value + divisor / 2) // divisor * divisor) - if new_value < 0.9 * value: - new_value += divisor - return new_value - - -class BitPreActivationBottleneckLayer(nn.Module): - """Pre-activation (v2) bottleneck block. - Follows the implementation of "Identity Mappings in Deep Residual Networks": - https://github.com/KaimingHe/resnet-1k-layers/blob/master/resnet-pre-act.lua - - Except it puts the stride on 3x3 conv when available. 
- """ - - def __init__( - self, - config, - in_channels, - out_channels=None, - bottle_ratio=0.25, - stride=1, - dilation=1, - first_dilation=None, - groups=1, - drop_path_rate=0.0, - is_first_layer=False, - ): - super().__init__() - - first_dilation = first_dilation or dilation - - out_channels = out_channels or in_channels - mid_channels = make_div(out_channels * bottle_ratio) - - if is_first_layer: - self.downsample = BitDownsampleConv( - config, - in_channels, - out_channels, - stride=stride, - preact=True, - ) - else: - self.downsample = None - - self.norm1 = BitGroupNormActivation(config, in_channels) - self.conv1 = WeightStandardizedConv2d(in_channels, mid_channels, 1, eps=1e-8, padding=config.global_padding) - - self.norm2 = BitGroupNormActivation(config, num_channels=mid_channels) - self.conv2 = WeightStandardizedConv2d( - mid_channels, mid_channels, 3, stride=stride, groups=groups, eps=1e-8, padding=config.global_padding - ) - - self.norm3 = BitGroupNormActivation(config, mid_channels) - self.conv3 = WeightStandardizedConv2d(mid_channels, out_channels, 1, eps=1e-8, padding=config.global_padding) - - self.drop_path = BitDropPath(drop_path_rate) if drop_path_rate > 0 else nn.Identity() - - def forward(self, hidden_states): - hidden_states_preact = self.norm1(hidden_states) - - # shortcut branch - shortcut = hidden_states - if self.downsample is not None: - shortcut = self.downsample(hidden_states_preact) - - # residual branch - hidden_states = self.conv1(hidden_states_preact) - hidden_states = self.conv2(self.norm2(hidden_states)) - hidden_states = self.conv3(self.norm3(hidden_states)) - hidden_states = self.drop_path(hidden_states) - return hidden_states + shortcut - - -class BitBottleneckLayer(nn.Module): - """Non Pre-activation bottleneck block, equivalent to V1.5/V1b bottleneck. 
Used for ViT Hybrid.""" - - def __init__( - self, - config, - in_channels, - out_channels=None, - bottle_ratio=0.25, - stride=1, - dilation=1, - first_dilation=None, - groups=1, - drop_path_rate=0.0, - is_first_layer=False, - ): - super().__init__() - first_dilation = first_dilation or dilation - - out_channels = out_channels or in_channels - mid_chs = make_div(out_channels * bottle_ratio) - - if is_first_layer: - self.downsample = BitDownsampleConv( - config, - in_channels, - out_channels, - stride=stride, - preact=False, - ) - else: - self.downsample = None - - self.conv1 = WeightStandardizedConv2d(in_channels, mid_chs, 1, eps=1e-8, padding=config.global_padding) - self.norm1 = BitGroupNormActivation(config, num_channels=mid_chs) - self.conv2 = WeightStandardizedConv2d( - mid_chs, - mid_chs, - 3, - stride=stride, - dilation=first_dilation, - groups=groups, - eps=1e-8, - padding=config.global_padding, - ) - self.norm2 = BitGroupNormActivation(config, num_channels=mid_chs) - self.conv3 = WeightStandardizedConv2d(mid_chs, out_channels, 1, eps=1e-8, padding=config.global_padding) - self.norm3 = BitGroupNormActivation(config, num_channels=out_channels, apply_activation=False) - self.drop_path = BitDropPath(drop_path_rate) if drop_path_rate > 0 else nn.Identity() - - self.activation = ACT2FN[config.hidden_act] - - def forward(self, hidden_states): - # shortcut branch - shortcut = hidden_states - if self.downsample is not None: - shortcut = self.downsample(hidden_states) - - # residual - hidden_states = self.conv1(hidden_states) - hidden_states = self.norm1(hidden_states) - - hidden_states = self.conv2(hidden_states) - hidden_states = self.norm2(hidden_states) - - hidden_states = self.conv3(hidden_states) - hidden_states = self.norm3(hidden_states) - - hidden_states = self.drop_path(hidden_states) - hidden_states = self.activation(hidden_states + shortcut) - return hidden_states - - -class BitDownsampleConv(nn.Module): - def __init__( - self, - config, - in_channels, - out_channels, - stride=1, - preact=True, - ): - super().__init__() - self.conv = WeightStandardizedConv2d( - in_channels, out_channels, 1, stride=stride, eps=1e-8, padding=config.global_padding - ) - self.norm = ( - nn.Identity() - if preact - else BitGroupNormActivation(config, num_channels=out_channels, apply_activation=False) - ) - - def forward(self, x): - return self.norm(self.conv(x)) - - -class BitStage(nn.Module): - """ - A ResNet v2 stage composed by stacked layers. 
- """ - - def __init__( - self, - config, - in_channels, - out_channels, - stride, - dilation, - depth, - bottle_ratio=0.25, - layer_dropout=None, - ): - super().__init__() - - first_dilation = 1 if dilation in (1, 2) else 2 - - # Get the layer type - if config.layer_type == "bottleneck": - layer_cls = BitBottleneckLayer - else: - layer_cls = BitPreActivationBottleneckLayer - - prev_chs = in_channels - self.layers = nn.Sequential() - for layer_idx in range(depth): - # Get the current hyper-parameters - stride, drop_path_rate, is_first_layer = self._get_updated_hyperparameters( - layer_idx, stride, layer_dropout - ) - - self.layers.add_module( - str(layer_idx), - layer_cls( - config, - prev_chs, - out_channels, - stride=stride, - dilation=dilation, - bottle_ratio=bottle_ratio, - first_dilation=first_dilation, - drop_path_rate=drop_path_rate, - is_first_layer=is_first_layer, - ), - ) - prev_chs = out_channels - first_dilation = dilation - - def _get_updated_hyperparameters(self, layer_idx, stride, layer_dropout): - r""" - Get the new hyper-parameters with respect to the previous ones and the index of the current layer. - """ - if layer_dropout: - drop_path_rate = layer_dropout[layer_idx] - else: - drop_path_rate = 0.0 - - if layer_idx != 0: - stride = 1 - - is_first_layer = layer_idx == 0 - - return stride, drop_path_rate, is_first_layer - - def forward(self, input: Tensor) -> Tensor: - hidden_state = input - for _, layer in enumerate(self.layers): - hidden_state = layer(hidden_state) - return hidden_state - - -class BitEncoder(nn.Module): - def __init__(self, config: BitConfig): - super().__init__() - self.stages = nn.ModuleList([]) - - prev_chs = config.embedding_size - - # These needs to stay hardcoded - current_stride = 4 - dilation = 1 - - layer_dropouts = [ - x.tolist() - for x in torch.Tensor(np.linspace(0, config.drop_path_rate, sum(config.depths))).split(config.depths) - ] - - for stage_idx, (current_depth, current_hidden_size, layer_dropout) in enumerate( - zip(config.depths, config.hidden_sizes, layer_dropouts) - ): - # Get the updated hyper params - out_channels, stride, dilation = self._get_updated_hyperparameters( - stage_idx, current_stride, current_hidden_size, dilation, config - ) - - stage = BitStage( - config, - prev_chs, - out_channels, - stride=stride, - dilation=dilation, - depth=current_depth, - layer_dropout=layer_dropout, - ) - - prev_chs = out_channels - current_stride *= stride - - self.stages.add_module(str(stage_idx), stage) - - def _get_updated_hyperparameters(self, stage_idx, current_stride, current_hidden_size, dilation, config): - out_channels = make_div(current_hidden_size * config.width_factor) - stride = 1 if stage_idx == 0 else 2 - if current_stride >= config.output_stride: - dilation *= stride - stride = 1 - return out_channels, stride, dilation - - def forward( - self, hidden_state: Tensor, output_hidden_states: bool = False, return_dict: bool = True - ) -> BaseModelOutputWithNoAttention: - hidden_states = () if output_hidden_states else None - - for stage_module in self.stages: - if output_hidden_states: - hidden_states = hidden_states + (hidden_state,) - - hidden_state = stage_module(hidden_state) - - if output_hidden_states: - hidden_states = hidden_states + (hidden_state,) - - if not return_dict: - return tuple(v for v in [hidden_state, hidden_states] if v is not None) - - return BaseModelOutputWithNoAttention( - last_hidden_state=hidden_state, - hidden_states=hidden_states, - ) - - -class BitPreTrainedModel(PreTrainedModel): - """ - An abstract 
class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = BitConfig - base_model_prefix = "bit" - main_input_name = "pixel_values" - supports_gradient_checkpointing = True - - def _init_weights(self, module): - if isinstance(module, nn.Conv2d): - nn.init.kaiming_normal_(module.weight, mode="fan_out", nonlinearity="relu") - elif isinstance(module, (nn.BatchNorm2d, nn.GroupNorm)): - nn.init.constant_(module.weight, 1) - nn.init.constant_(module.bias, 0) - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, BitModel): - module.gradient_checkpointing = value - - -BIT_START_DOCSTRING = r""" - This model is a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. Use it - as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage and - behavior. - - Parameters: - config ([`BitConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -BIT_INPUTS_DOCSTRING = r""" - Args: - pixel_values (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - Pixel values. Pixel values can be obtained using [`AutoImageProcessor`]. See [`BitImageProcessor.__call__`] - for details. - - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. 
-""" - - -@add_start_docstrings( - "The bare BiT model outputting raw features without any specific head on top.", - BIT_START_DOCSTRING, -) -class BitModel(BitPreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.config = config - - self.embedder = BitEmbeddings(config) - - self.encoder = BitEncoder(config) - self.norm = ( - BitGroupNormActivation(config, num_channels=config.hidden_sizes[-1]) - if config.layer_type == "preactivation" - else nn.Identity() - ) - - self.pooler = nn.AdaptiveAvgPool2d((1, 1)) - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(BIT_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=BaseModelOutputWithPoolingAndNoAttention, - config_class=_CONFIG_FOR_DOC, - modality="vision", - expected_output=_EXPECTED_OUTPUT_SHAPE, - ) - def forward( - self, pixel_values: Tensor, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None - ) -> BaseModelOutputWithPoolingAndNoAttention: - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - embedding_output = self.embedder(pixel_values) - - encoder_outputs = self.encoder( - embedding_output, output_hidden_states=output_hidden_states, return_dict=return_dict - ) - - last_hidden_state = encoder_outputs[0] - - last_hidden_state = self.norm(last_hidden_state) - - pooled_output = self.pooler(last_hidden_state) - - if not return_dict: - return (last_hidden_state, pooled_output) + encoder_outputs[1:] - - return BaseModelOutputWithPoolingAndNoAttention( - last_hidden_state=last_hidden_state, - pooler_output=pooled_output, - hidden_states=encoder_outputs.hidden_states, - ) - - -@add_start_docstrings( - """ - BiT Model with an image classification head on top (a linear layer on top of the pooled features), e.g. for - ImageNet. - """, - BIT_START_DOCSTRING, -) -class BitForImageClassification(BitPreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - self.bit = BitModel(config) - # classification head - self.classifier = nn.Sequential( - nn.Flatten(), - nn.Linear(config.hidden_sizes[-1], config.num_labels) if config.num_labels > 0 else nn.Identity(), - ) - # initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(BIT_INPUTS_DOCSTRING) - @add_code_sample_docstrings( - checkpoint=_IMAGE_CLASS_CHECKPOINT, - output_type=ImageClassifierOutputWithNoAttention, - config_class=_CONFIG_FOR_DOC, - expected_output=_IMAGE_CLASS_EXPECTED_OUTPUT, - ) - def forward( - self, - pixel_values: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> ImageClassifierOutputWithNoAttention: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the image classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels > 1` a classification loss is computed (Cross-Entropy). 
- """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.bit(pixel_values, output_hidden_states=output_hidden_states, return_dict=return_dict) - - pooled_output = outputs.pooler_output if return_dict else outputs[1] - - logits = self.classifier(pooled_output) - - loss = None - - if labels is not None: - if self.config.problem_type is None: - if self.num_labels == 1: - self.config.problem_type = "regression" - elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - if self.config.problem_type == "regression": - loss_fct = MSELoss() - if self.num_labels == 1: - loss = loss_fct(logits.squeeze(), labels.squeeze()) - else: - loss = loss_fct(logits, labels) - elif self.config.problem_type == "single_label_classification": - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - elif self.config.problem_type == "multi_label_classification": - loss_fct = BCEWithLogitsLoss() - loss = loss_fct(logits, labels) - - if not return_dict: - output = (logits,) + outputs[2:] - return (loss,) + output if loss is not None else output - - return ImageClassifierOutputWithNoAttention(loss=loss, logits=logits, hidden_states=outputs.hidden_states) - - -@add_start_docstrings( - """ - BiT backbone, to be used with frameworks like DETR and MaskFormer. - """, - BIT_START_DOCSTRING, -) -class BitBackbone(BitPreTrainedModel, BackboneMixin): - def __init__(self, config): - super().__init__(config) - super()._init_backbone(config) - - self.bit = BitModel(config) - self.num_features = [config.embedding_size] + config.hidden_sizes - - # initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(BIT_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=BackboneOutput, config_class=_CONFIG_FOR_DOC) - def forward( - self, pixel_values: Tensor, output_hidden_states: Optional[bool] = None, return_dict: Optional[bool] = None - ) -> BackboneOutput: - """ - Returns: - - Examples: - - ```python - >>> from transformers import AutoImageProcessor, AutoBackbone - >>> import torch - >>> from PIL import Image - >>> import requests - - >>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" - >>> image = Image.open(requests.get(url, stream=True).raw) - - >>> processor = AutoImageProcessor.from_pretrained("google/resnetnv2-50") - >>> model = AutoBackbone.from_pretrained("google/resnetnv2-50") - - >>> inputs = processor(image, return_tensors="pt") - >>> outputs = model(**inputs) - ```""" - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - - outputs = self.bit(pixel_values, output_hidden_states=True, return_dict=True) - - hidden_states = outputs.hidden_states - - feature_maps = () - for idx, stage in enumerate(self.stage_names): - if stage in self.out_features: - feature_maps += (hidden_states[idx],) - - if not return_dict: - output = (feature_maps,) - if output_hidden_states: - output += (outputs.hidden_states,) - return output - - return BackboneOutput( - feature_maps=feature_maps, - hidden_states=outputs.hidden_states if output_hidden_states else None, - attentions=None, - ) diff --git 
a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/ibert/__init__.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/ibert/__init__.py deleted file mode 100644 index 637eb08eaf412d136e2e8ccf7a1d7d92147d364f..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/ibert/__init__.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright 2020 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from typing import TYPE_CHECKING - -from ...utils import OptionalDependencyNotAvailable, _LazyModule, is_torch_available - - -_import_structure = {"configuration_ibert": ["IBERT_PRETRAINED_CONFIG_ARCHIVE_MAP", "IBertConfig", "IBertOnnxConfig"]} - -try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - pass -else: - _import_structure["modeling_ibert"] = [ - "IBERT_PRETRAINED_MODEL_ARCHIVE_LIST", - "IBertForMaskedLM", - "IBertForMultipleChoice", - "IBertForQuestionAnswering", - "IBertForSequenceClassification", - "IBertForTokenClassification", - "IBertModel", - "IBertPreTrainedModel", - ] - -if TYPE_CHECKING: - from .configuration_ibert import IBERT_PRETRAINED_CONFIG_ARCHIVE_MAP, IBertConfig, IBertOnnxConfig - - try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - pass - else: - from .modeling_ibert import ( - IBERT_PRETRAINED_MODEL_ARCHIVE_LIST, - IBertForMaskedLM, - IBertForMultipleChoice, - IBertForQuestionAnswering, - IBertForSequenceClassification, - IBertForTokenClassification, - IBertModel, - IBertPreTrainedModel, - ) - -else: - import sys - - sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__) diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/layoutlmv3/configuration_layoutlmv3.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/layoutlmv3/configuration_layoutlmv3.py deleted file mode 100644 index 31ca2e00e471bc9b92fd5a6d71777b3d4efd80db..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/layoutlmv3/configuration_layoutlmv3.py +++ /dev/null @@ -1,293 +0,0 @@ -# coding=utf-8 -# Copyright 2022 Microsoft Research and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
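# --- Illustrative sketch (not from the original sources) ----------------------
# The ibert `__init__.py` above routes every import through `_LazyModule`, so
# heavy submodules are only imported when one of their names is first accessed.
# A minimal, self-contained approximation of that idea using PEP 562
# module-level `__getattr__` is sketched below; `_import_structure` mirrors the
# dict above, everything else is a simplified assumption rather than the real
# `_LazyModule` implementation.
import importlib

_import_structure = {"configuration_ibert": ["IBertConfig", "IBertOnnxConfig"]}


def __getattr__(name):
    # Find the submodule that defines `name`, import it on first use, and
    # cache the resolved attribute in this package's globals for later lookups.
    for submodule, exported_names in _import_structure.items():
        if name in exported_names:
            module = importlib.import_module(f".{submodule}", __name__)
            value = getattr(module, name)
            globals()[name] = value
            return value
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")

# With this in place, importing `IBertConfig` from the package only pays the
# cost of loading `configuration_ibert` at the moment the name is actually used.
# ------------------------------------------------------------------------------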
-""" LayoutLMv3 model configuration""" - -from collections import OrderedDict -from typing import TYPE_CHECKING, Any, Mapping, Optional - -from packaging import version - -from ...configuration_utils import PretrainedConfig -from ...onnx import OnnxConfig -from ...onnx.utils import compute_effective_axis_dimension -from ...utils import logging - - -if TYPE_CHECKING: - from ...processing_utils import ProcessorMixin - from ...utils import TensorType - - -logger = logging.get_logger(__name__) - -LAYOUTLMV3_PRETRAINED_CONFIG_ARCHIVE_MAP = { - "microsoft/layoutlmv3-base": "https://huggingface.co/microsoft/layoutlmv3-base/resolve/main/config.json", -} - - -class LayoutLMv3Config(PretrainedConfig): - r""" - This is the configuration class to store the configuration of a [`LayoutLMv3Model`]. It is used to instantiate an - LayoutLMv3 model according to the specified arguments, defining the model architecture. Instantiating a - configuration with the defaults will yield a similar configuration to that of the LayoutLMv3 - [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) architecture. - - Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the - documentation from [`PretrainedConfig`] for more information. - - Args: - vocab_size (`int`, *optional*, defaults to 50265): - Vocabulary size of the LayoutLMv3 model. Defines the number of different tokens that can be represented by - the `inputs_ids` passed when calling [`LayoutLMv3Model`]. - hidden_size (`int`, *optional*, defaults to 768): - Dimension of the encoder layers and the pooler layer. - num_hidden_layers (`int`, *optional*, defaults to 12): - Number of hidden layers in the Transformer encoder. - num_attention_heads (`int`, *optional*, defaults to 12): - Number of attention heads for each attention layer in the Transformer encoder. - intermediate_size (`int`, *optional*, defaults to 3072): - Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. - hidden_act (`str` or `function`, *optional*, defaults to `"gelu"`): - The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`, - `"relu"`, `"selu"` and `"gelu_new"` are supported. - hidden_dropout_prob (`float`, *optional*, defaults to 0.1): - The dropout probabilitiy for all fully connected layers in the embeddings, encoder, and pooler. - attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1): - The dropout ratio for the attention probabilities. - max_position_embeddings (`int`, *optional*, defaults to 512): - The maximum sequence length that this model might ever be used with. Typically set this to something large - just in case (e.g., 512 or 1024 or 2048). - type_vocab_size (`int`, *optional*, defaults to 2): - The vocabulary size of the `token_type_ids` passed when calling [`LayoutLMv3Model`]. - initializer_range (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - layer_norm_eps (`float`, *optional*, defaults to 1e-5): - The epsilon used by the layer normalization layers. - max_2d_position_embeddings (`int`, *optional*, defaults to 1024): - The maximum value that the 2D position embedding might ever be used with. Typically set this to something - large just in case (e.g., 1024). - coordinate_size (`int`, *optional*, defaults to `128`): - Dimension of the coordinate embeddings. 
- shape_size (`int`, *optional*, defaults to `128`): - Dimension of the width and height embeddings. - has_relative_attention_bias (`bool`, *optional*, defaults to `True`): - Whether or not to use a relative attention bias in the self-attention mechanism. - rel_pos_bins (`int`, *optional*, defaults to 32): - The number of relative position bins to be used in the self-attention mechanism. - max_rel_pos (`int`, *optional*, defaults to 128): - The maximum number of relative positions to be used in the self-attention mechanism. - max_rel_2d_pos (`int`, *optional*, defaults to 256): - The maximum number of relative 2D positions in the self-attention mechanism. - rel_2d_pos_bins (`int`, *optional*, defaults to 64): - The number of 2D relative position bins in the self-attention mechanism. - has_spatial_attention_bias (`bool`, *optional*, defaults to `True`): - Whether or not to use a spatial attention bias in the self-attention mechanism. - visual_embed (`bool`, *optional*, defaults to `True`): - Whether or not to add patch embeddings. - input_size (`int`, *optional*, defaults to `224`): - The size (resolution) of the images. - num_channels (`int`, *optional*, defaults to `3`): - The number of channels of the images. - patch_size (`int`, *optional*, defaults to `16`) - The size (resolution) of the patches. - classifier_dropout (`float`, *optional*): - The dropout ratio for the classification head. - - Example: - - ```python - >>> from transformers import LayoutLMv3Config, LayoutLMv3Model - - >>> # Initializing a LayoutLMv3 microsoft/layoutlmv3-base style configuration - >>> configuration = LayoutLMv3Config() - - >>> # Initializing a model (with random weights) from the microsoft/layoutlmv3-base style configuration - >>> model = LayoutLMv3Model(configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - ```""" - model_type = "layoutlmv3" - - def __init__( - self, - vocab_size=50265, - hidden_size=768, - num_hidden_layers=12, - num_attention_heads=12, - intermediate_size=3072, - hidden_act="gelu", - hidden_dropout_prob=0.1, - attention_probs_dropout_prob=0.1, - max_position_embeddings=512, - type_vocab_size=2, - initializer_range=0.02, - layer_norm_eps=1e-5, - pad_token_id=1, - bos_token_id=0, - eos_token_id=2, - max_2d_position_embeddings=1024, - coordinate_size=128, - shape_size=128, - has_relative_attention_bias=True, - rel_pos_bins=32, - max_rel_pos=128, - rel_2d_pos_bins=64, - max_rel_2d_pos=256, - has_spatial_attention_bias=True, - text_embed=True, - visual_embed=True, - input_size=224, - num_channels=3, - patch_size=16, - classifier_dropout=None, - **kwargs, - ): - super().__init__( - vocab_size=vocab_size, - hidden_size=hidden_size, - num_hidden_layers=num_hidden_layers, - num_attention_heads=num_attention_heads, - intermediate_size=intermediate_size, - hidden_act=hidden_act, - hidden_dropout_prob=hidden_dropout_prob, - attention_probs_dropout_prob=attention_probs_dropout_prob, - max_position_embeddings=max_position_embeddings, - type_vocab_size=type_vocab_size, - initializer_range=initializer_range, - layer_norm_eps=layer_norm_eps, - pad_token_id=pad_token_id, - bos_token_id=bos_token_id, - eos_token_id=eos_token_id, - **kwargs, - ) - self.max_2d_position_embeddings = max_2d_position_embeddings - self.coordinate_size = coordinate_size - self.shape_size = shape_size - self.has_relative_attention_bias = has_relative_attention_bias - self.rel_pos_bins = rel_pos_bins - self.max_rel_pos = max_rel_pos - self.has_spatial_attention_bias = 
has_spatial_attention_bias - self.rel_2d_pos_bins = rel_2d_pos_bins - self.max_rel_2d_pos = max_rel_2d_pos - self.text_embed = text_embed - self.visual_embed = visual_embed - self.input_size = input_size - self.num_channels = num_channels - self.patch_size = patch_size - self.classifier_dropout = classifier_dropout - - -class LayoutLMv3OnnxConfig(OnnxConfig): - torch_onnx_minimum_version = version.parse("1.12") - - @property - def inputs(self) -> Mapping[str, Mapping[int, str]]: - # The order of inputs is different for question answering and sequence classification - if self.task in ["question-answering", "sequence-classification"]: - return OrderedDict( - [ - ("input_ids", {0: "batch", 1: "sequence"}), - ("attention_mask", {0: "batch", 1: "sequence"}), - ("bbox", {0: "batch", 1: "sequence"}), - ("pixel_values", {0: "batch", 1: "num_channels", 2: "height", 3: "width"}), - ] - ) - else: - return OrderedDict( - [ - ("input_ids", {0: "batch", 1: "sequence"}), - ("bbox", {0: "batch", 1: "sequence"}), - ("attention_mask", {0: "batch", 1: "sequence"}), - ("pixel_values", {0: "batch", 1: "num_channels"}), - ] - ) - - @property - def atol_for_validation(self) -> float: - return 1e-5 - - @property - def default_onnx_opset(self) -> int: - return 12 - - def generate_dummy_inputs( - self, - processor: "ProcessorMixin", - batch_size: int = -1, - seq_length: int = -1, - is_pair: bool = False, - framework: Optional["TensorType"] = None, - num_channels: int = 3, - image_width: int = 40, - image_height: int = 40, - ) -> Mapping[str, Any]: - """ - Generate inputs to provide to the ONNX exporter for the specific framework - - Args: - processor ([`ProcessorMixin`]): - The processor associated with this model configuration. - batch_size (`int`, *optional*, defaults to -1): - The batch size to export the model for (-1 means dynamic axis). - seq_length (`int`, *optional*, defaults to -1): - The sequence length to export the model for (-1 means dynamic axis). - is_pair (`bool`, *optional*, defaults to `False`): - Indicate if the input is a pair (sentence 1, sentence 2). - framework (`TensorType`, *optional*, defaults to `None`): - The framework (PyTorch or TensorFlow) that the processor will generate tensors for. - num_channels (`int`, *optional*, defaults to 3): - The number of channels of the generated images. - image_width (`int`, *optional*, defaults to 40): - The width of the generated images. - image_height (`int`, *optional*, defaults to 40): - The height of the generated images. 
- - Returns: - Mapping[str, Any]: holding the kwargs to provide to the model's forward function - """ - - # A dummy image is used so OCR should not be applied - setattr(processor.image_processor, "apply_ocr", False) - - # If dynamic axis (-1) we forward with a fixed dimension of 2 samples to avoid optimizations made by ONNX - batch_size = compute_effective_axis_dimension( - batch_size, fixed_dimension=OnnxConfig.default_fixed_batch, num_token_to_add=0 - ) - # If dynamic axis (-1) we forward with a fixed dimension of 8 tokens to avoid optimizations made by ONNX - token_to_add = processor.tokenizer.num_special_tokens_to_add(is_pair) - seq_length = compute_effective_axis_dimension( - seq_length, fixed_dimension=OnnxConfig.default_fixed_sequence, num_token_to_add=token_to_add - ) - # Generate dummy inputs according to compute batch and sequence - dummy_text = [[" ".join([processor.tokenizer.unk_token]) * seq_length]] * batch_size - - # Generate dummy bounding boxes - dummy_bboxes = [[[48, 84, 73, 128]]] * batch_size - - # If dynamic axis (-1) we forward with a fixed dimension of 2 samples to avoid optimizations made by ONNX - # batch_size = compute_effective_axis_dimension(batch_size, fixed_dimension=OnnxConfig.default_fixed_batch) - dummy_image = self._generate_dummy_images(batch_size, num_channels, image_height, image_width) - - inputs = dict( - processor( - dummy_image, - text=dummy_text, - boxes=dummy_bboxes, - return_tensors=framework, - ) - ) - - return inputs diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/llama/tokenization_llama.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/llama/tokenization_llama.py deleted file mode 100644 index 907ddd65bbe431809c356a2e706928f9515712ab..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/llama/tokenization_llama.py +++ /dev/null @@ -1,426 +0,0 @@ -# coding=utf-8 -# Copyright 2022 EleutherAI and the HuggingFace Inc. team. All rights reserved. -# -# This code is based on EleutherAI's GPT-NeoX library and the GPT-NeoX -# and OPT implementations in this library. It has been modified from its -# original forms to accommodate minor architectural differences compared -# to GPT-NeoX and OPT used by the Meta AI team that trained the model. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
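# --- Illustrative sketch (not from the original sources) ----------------------
# One way the `LayoutLMv3OnnxConfig.generate_dummy_inputs` method defined above
# might be exercised when preparing an ONNX export. The checkpoint name and the
# task are assumptions for the example, and the imports assume the stock
# `transformers` package rather than this vendored copy; the method signature
# itself comes from the implementation above.
from transformers import LayoutLMv3Config, LayoutLMv3Processor
from transformers.models.layoutlmv3.configuration_layoutlmv3 import LayoutLMv3OnnxConfig
from transformers.utils import TensorType

config = LayoutLMv3Config()
onnx_config = LayoutLMv3OnnxConfig(config, task="question-answering")
processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base")

# Dynamic axes (batch_size / seq_length left at -1) are resolved internally to
# small fixed sizes so the exporter does not constant-fold them away.
dummy_inputs = onnx_config.generate_dummy_inputs(processor, framework=TensorType.PYTORCH)
print({name: tuple(tensor.shape) for name, tensor in dummy_inputs.items()})
# ------------------------------------------------------------------------------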
- -"""Tokenization classes for LLaMA.""" -import os -from shutil import copyfile -from typing import TYPE_CHECKING, Any, Dict, List, Optional, Tuple - -import sentencepiece as spm - -from ...convert_slow_tokenizer import import_protobuf -from ...tokenization_utils import AddedToken, PreTrainedTokenizer -from ...utils import logging - - -if TYPE_CHECKING: - from ...tokenization_utils_base import TextInput - -logger = logging.get_logger(__name__) - -VOCAB_FILES_NAMES = {"vocab_file": "tokenizer.model"} - -PRETRAINED_VOCAB_FILES_MAP = { - "vocab_file": { - "hf-internal-testing/llama-tokenizer": "https://huggingface.co/hf-internal-testing/llama-tokenizer/resolve/main/tokenizer.model", - }, - "tokenizer_file": { - "hf-internal-testing/llama-tokenizer": "https://huggingface.co/hf-internal-testing/llama-tokenizer/resolve/main/tokenizer_config.json", - }, -} -PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = { - "hf-internal-testing/llama-tokenizer": 2048, -} -SPIECE_UNDERLINE = "▁" - -B_INST, E_INST = "[INST]", "[/INST]" -B_SYS, E_SYS = "<>\n", "\n<>\n\n" - -# fmt: off -DEFAULT_SYSTEM_PROMPT = """You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your \ -answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure\ - that your responses are socially unbiased and positive in nature. - -If a question does not make any sense, or is not factually coherent, explain why instead of answering something not \ -correct. If you don't know the answer to a question, please don't share false information.""" -# fmt: on - - -class LlamaTokenizer(PreTrainedTokenizer): - """ - Construct a Llama tokenizer. Based on byte-level Byte-Pair-Encoding. The default padding token is unset as there is - no padding token in the original model. - - Args: - vocab_file (`str`): - Path to the vocabulary file. - legacy (`bool`, *optional*): - Whether or not the `legacy` behavior of the tokenizer should be used. Legacy is before the merge of #24622 - and #25224 which includes fixes to properly handle tokens that appear after special tokens. A simple - example: - - - `legacy=True`: - ```python - >>> from transformers import T5Tokenizer - - >>> tokenizer = T5Tokenizer.from_pretrained("t5-base", legacy=True) - >>> tokenizer.encode("Hello .") - [8774, 32099, 3, 5, 1] - ``` - - `legacy=False`: - ```python - >>> from transformers import T5Tokenizer - - >>> tokenizer = T5Tokenizer.from_pretrained("t5-base", legacy=False) - >>> tokenizer.encode("Hello .") # the extra space `[3]` is no longer here - [8774, 32099, 5, 1] - ``` - Checkout the [pull request](https://github.com/huggingface/transformers/pull/24565) for more details. 
- - """ - - vocab_files_names = VOCAB_FILES_NAMES - pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP - max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES - model_input_names = ["input_ids", "attention_mask"] - - def __init__( - self, - vocab_file, - unk_token="", - bos_token="", - eos_token="", - pad_token=None, - sp_model_kwargs: Optional[Dict[str, Any]] = None, - add_bos_token=True, - add_eos_token=False, - clean_up_tokenization_spaces=False, - use_default_system_prompt=True, - spaces_between_special_tokens=False, - legacy=None, - **kwargs, - ): - self.sp_model_kwargs = {} if sp_model_kwargs is None else sp_model_kwargs - bos_token = AddedToken(bos_token, lstrip=False, rstrip=False) if isinstance(bos_token, str) else bos_token - eos_token = AddedToken(eos_token, lstrip=False, rstrip=False) if isinstance(eos_token, str) else eos_token - unk_token = AddedToken(unk_token, lstrip=False, rstrip=False) if isinstance(unk_token, str) else unk_token - pad_token = AddedToken(pad_token, lstrip=False, rstrip=False) if isinstance(pad_token, str) else pad_token - - if legacy is None: - logger.warning_once( - f"You are using the default legacy behaviour of the {self.__class__}. This is" - " expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you." - " If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it" - " means, and thouroughly read the reason why this was added as explained in" - " https://github.com/huggingface/transformers/pull/24565" - ) - legacy = True - - self.legacy = legacy - self.vocab_file = vocab_file - self.add_bos_token = add_bos_token - self.add_eos_token = add_eos_token - self.use_default_system_prompt = use_default_system_prompt - self.sp_model = self.get_spm_processor(kwargs.pop("from_slow", False)) - - super().__init__( - bos_token=bos_token, - eos_token=eos_token, - unk_token=unk_token, - pad_token=pad_token, - add_bos_token=add_bos_token, - add_eos_token=add_eos_token, - sp_model_kwargs=self.sp_model_kwargs, - clean_up_tokenization_spaces=clean_up_tokenization_spaces, - use_default_system_prompt=use_default_system_prompt, - spaces_between_special_tokens=spaces_between_special_tokens, - legacy=legacy, - **kwargs, - ) - - @property - def unk_token_length(self): - return len(self.sp_model.encode(str(self.unk_token))) - - # Copied from transformers.models.t5.tokenization_t5.T5Tokenizer.get_spm_processor - def get_spm_processor(self, from_slow=False): - tokenizer = spm.SentencePieceProcessor(**self.sp_model_kwargs) - if self.legacy or from_slow: # no dependency on protobuf - tokenizer.Load(self.vocab_file) - return tokenizer - - with open(self.vocab_file, "rb") as f: - sp_model = f.read() - model_pb2 = import_protobuf(f"The new behaviour of {self.__class__.__name__} (with `self.legacy = False`)") - model = model_pb2.ModelProto.FromString(sp_model) - normalizer_spec = model_pb2.NormalizerSpec() - normalizer_spec.add_dummy_prefix = False - model.normalizer_spec.MergeFrom(normalizer_spec) - sp_model = model.SerializeToString() - tokenizer.LoadFromSerializedProto(sp_model) - return tokenizer - - def __getstate__(self): - state = self.__dict__.copy() - state["sp_model"] = None - state["sp_model_proto"] = self.sp_model.serialized_model_proto() - return state - - def __setstate__(self, d): - self.__dict__ = d - self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs) - self.sp_model.LoadFromSerializedProto(self.sp_model_proto) - - @property - def 
vocab_size(self): - """Returns vocab size""" - return self.sp_model.get_piece_size() - - def get_vocab(self): - """Returns vocab as a dict""" - vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size)} - vocab.update(self.added_tokens_encoder) - return vocab - - # Copied from transformers.models.t5.tokenization_t5.T5Tokenizer.tokenize - def tokenize(self, text: "TextInput", add_special_tokens=False, **kwargs) -> List[str]: - """ - Converts a string to a list of tokens. If `self.legacy` is set to `False`, a prefix token is added unless the - first token is special. - """ - if self.legacy or len(text) == 0: - return super().tokenize(text, **kwargs) - - tokens = super().tokenize(SPIECE_UNDERLINE + text.replace(SPIECE_UNDERLINE, " "), **kwargs) - - if len(tokens) > 1 and tokens[0] == SPIECE_UNDERLINE and tokens[1] in self.all_special_tokens: - tokens = tokens[1:] - return tokens - - # Copied from transformers.models.t5.tokenization_t5.T5Tokenizer._tokenize - def _tokenize(self, text, **kwargs): - """ - Returns a tokenized string. - - We de-activated the `add_dummy_prefix` option, thus the sentencepiece internals will always strip any - SPIECE_UNDERLINE. For example: `self.sp_model.encode(f"{SPIECE_UNDERLINE}Hey", out_type = str)` will give - `['H', 'e', 'y']` instead of `['▁He', 'y']`. Thus we always encode `f"{unk_token}text"` and strip the - `unk_token`. Here is an example with `unk_token = ""` and `unk_token_length = 4`. - `self.tokenizer.sp_model.encode(" Hey", out_type = str)[4:]`. - """ - tokens = self.sp_model.encode(text, out_type=str) - if self.legacy or not text.startswith((SPIECE_UNDERLINE, " ")): - return tokens - - # 1. Encode string + prefix ex: " Hey" - tokens = self.sp_model.encode(self.unk_token + text, out_type=str) - # 2. Remove self.unk_token from ['<','unk','>', '▁Hey'] - return tokens[self.unk_token_length :] if len(tokens) >= self.unk_token_length else tokens - - def _convert_token_to_id(self, token): - """Converts a token (str) in an id using the vocab.""" - return self.sp_model.piece_to_id(token) - - def _convert_id_to_token(self, index): - """Converts an index (integer) in a token (str) using the vocab.""" - token = self.sp_model.IdToPiece(index) - return token - - def convert_tokens_to_string(self, tokens): - """Converts a sequence of tokens (string) in a single string.""" - # since we manually add the prefix space, we have to remove it when decoding - if tokens[0].startswith(SPIECE_UNDERLINE): - tokens[0] = tokens[0][1:] - - current_sub_tokens = [] - out_string = "" - prev_is_special = False - for i, token in enumerate(tokens): - # make sure that special tokens are not decoded using sentencepiece model - if token in self.all_special_tokens: - if not prev_is_special and i != 0 and self.legacy: - out_string += " " - out_string += self.sp_model.decode(current_sub_tokens) + token - prev_is_special = True - current_sub_tokens = [] - else: - current_sub_tokens.append(token) - prev_is_special = False - out_string += self.sp_model.decode(current_sub_tokens) - return out_string - - def save_vocabulary(self, save_directory, filename_prefix: Optional[str] = None) -> Tuple[str]: - """ - Save the vocabulary and special tokens file to a directory. - - Args: - save_directory (`str`): - The directory in which to save the vocabulary. - - Returns: - `Tuple(str)`: Paths to the files saved. 
- """ - if not os.path.isdir(save_directory): - logger.error(f"Vocabulary path ({save_directory}) should be a directory") - return - out_vocab_file = os.path.join( - save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"] - ) - - if os.path.abspath(self.vocab_file) != os.path.abspath(out_vocab_file) and os.path.isfile(self.vocab_file): - copyfile(self.vocab_file, out_vocab_file) - elif not os.path.isfile(self.vocab_file): - with open(out_vocab_file, "wb") as fi: - content_spiece_model = self.sp_model.serialized_model_proto() - fi.write(content_spiece_model) - - return (out_vocab_file,) - - def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None): - bos_token_id = [self.bos_token_id] if self.add_bos_token else [] - eos_token_id = [self.eos_token_id] if self.add_eos_token else [] - - output = bos_token_id + token_ids_0 + eos_token_id - - if token_ids_1 is not None: - output = output + bos_token_id + token_ids_1 + eos_token_id - - return output - - def get_special_tokens_mask( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False - ) -> List[int]: - """ - Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding - special tokens using the tokenizer `prepare_for_model` method. - - Args: - token_ids_0 (`List[int]`): - List of IDs. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - already_has_special_tokens (`bool`, *optional*, defaults to `False`): - Whether or not the token list is already formatted with special tokens for the model. - - Returns: - `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. - """ - if already_has_special_tokens: - return super().get_special_tokens_mask( - token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True - ) - - bos_token_id = [1] if self.add_bos_token else [] - eos_token_id = [1] if self.add_eos_token else [] - - if token_ids_1 is None: - return bos_token_id + ([0] * len(token_ids_0)) + eos_token_id - return ( - bos_token_id - + ([0] * len(token_ids_0)) - + eos_token_id - + bos_token_id - + ([0] * len(token_ids_1)) - + eos_token_id - ) - - def create_token_type_ids_from_sequences( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None - ) -> List[int]: - """ - Creates a mask from the two sequences passed to be used in a sequence-pair classification task. An ALBERT - sequence pair mask has the following format: - - ``` - 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 - | first sequence | second sequence | - ``` - - if token_ids_1 is None, only returns the first portion of the mask (0s). - - Args: - token_ids_0 (`List[int]`): - List of ids. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - - Returns: - `List[int]`: List of [token type IDs](../glossary#token-type-ids) according to the given sequence(s). - """ - bos_token_id = [self.bos_token_id] if self.add_bos_token else [] - eos_token_id = [self.eos_token_id] if self.add_eos_token else [] - - output = [0] * len(bos_token_id + token_ids_0 + eos_token_id) - - if token_ids_1 is not None: - output += [1] * len(bos_token_id + token_ids_1 + eos_token_id) - - return output - - @property - def default_chat_template(self): - """ - LLaMA uses [INST] and [/INST] to indicate user messages, and <> and <> to indicate system messages. 
- Assistant messages do not have special tokens, because LLaMA chat models are generally trained with strict - user/assistant/user/assistant message ordering, and so assistant messages can be identified from the ordering - rather than needing special tokens. The system message is partly 'embedded' in the first user message, which - results in an unusual token ordering when it is present. This template should definitely be changed if you wish - to fine-tune a model with more flexible role ordering! - - The output should look something like: - - [INST] B_SYS SystemPrompt E_SYS Prompt [/INST] Answer [INST] Prompt [/INST] Answer - [INST] Prompt [/INST] - """ - - template = ( - "{% if messages[0]['role'] == 'system' %}" - "{% set loop_messages = messages[1:] %}" # Extract system message if it's present - "{% set system_message = messages[0]['content'] %}" - "{% elif USE_DEFAULT_PROMPT == true and not '<>' in messages[0]['content'] %}" - "{% set loop_messages = messages %}" # Or use the default system message if the flag is set - "{% set system_message = 'DEFAULT_SYSTEM_MESSAGE' %}" - "{% else %}" - "{% set loop_messages = messages %}" - "{% set system_message = false %}" - "{% endif %}" - "{% for message in loop_messages %}" # Loop over all non-system messages - "{% if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}" - "{{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}" - "{% endif %}" - "{% if loop.index0 == 0 and system_message != false %}" # Embed system message in first message - "{% set content = '<>\\n' + system_message + '\\n<>\\n\\n' + message['content'] %}" - "{% else %}" - "{% set content = message['content'] %}" - "{% endif %}" - "{% if message['role'] == 'user' %}" # After all of that, handle messages/roles in a fairly normal way - "{{ bos_token + '[INST] ' + content.strip() + ' [/INST]' }}" - "{% elif message['role'] == 'system' %}" - "{{ '<>\\n' + content.strip() + '\\n<>\\n\\n' }}" - "{% elif message['role'] == 'assistant' %}" - "{{ ' ' + content.strip() + ' ' + eos_token }}" - "{% endif %}" - "{% endfor %}" - ) - template = template.replace("USE_DEFAULT_PROMPT", "true" if self.use_default_system_prompt else "false") - default_message = DEFAULT_SYSTEM_PROMPT.replace("\n", "\\n").replace("'", "\\'") - template = template.replace("DEFAULT_SYSTEM_MESSAGE", default_message) - - return template diff --git a/spaces/younker/chatgpt-turbo/client/node_modules/fraction.js/fraction.d.ts b/spaces/younker/chatgpt-turbo/client/node_modules/fraction.js/fraction.d.ts deleted file mode 100644 index 8a11b3ad5af9a3710f5373dbdac56ac5cf901e3c..0000000000000000000000000000000000000000 --- a/spaces/younker/chatgpt-turbo/client/node_modules/fraction.js/fraction.d.ts +++ /dev/null @@ -1,60 +0,0 @@ -declare module 'Fraction'; - -export interface NumeratorDenominator { - n: number; - d: number; -} - -type FractionConstructor = { - (fraction: Fraction): Fraction; - (num: number | string): Fraction; - (numerator: number, denominator: number): Fraction; - (numbers: [number | string, number | string]): Fraction; - (fraction: NumeratorDenominator): Fraction; - (firstValue: Fraction | number | string | [number | string, number | string] | NumeratorDenominator, secondValue?: number): Fraction; -}; - -export default class Fraction { - constructor (fraction: Fraction); - constructor (num: number | string); - constructor (numerator: number, denominator: number); - constructor (numbers: [number | string, number | string]); - constructor (fraction: 
NumeratorDenominator); - constructor (firstValue: Fraction | number | string | [number | string, number | string] | NumeratorDenominator, secondValue?: number); - - s: number; - n: number; - d: number; - - abs(): Fraction; - neg(): Fraction; - - add: FractionConstructor; - sub: FractionConstructor; - mul: FractionConstructor; - div: FractionConstructor; - pow: FractionConstructor; - gcd: FractionConstructor; - lcm: FractionConstructor; - - mod(n?: number | string | Fraction): Fraction; - - ceil(places?: number): Fraction; - floor(places?: number): Fraction; - round(places?: number): Fraction; - - inverse(): Fraction; - - simplify(eps?: number): Fraction; - - equals(n: number | string | Fraction): boolean; - compare(n: number | string | Fraction): number; - divisible(n: number | string | Fraction): boolean; - - valueOf(): number; - toString(decimalPlaces?: number): string; - toLatex(excludeWhole?: boolean): string; - toFraction(excludeWhole?: boolean): string; - toContinued(): number[]; - clone(): Fraction; -} diff --git a/spaces/ysheng/SSN-Soft-Shadow-Network-for-Image-Composition/model_utils.py b/spaces/ysheng/SSN-Soft-Shadow-Network-for-Image-Composition/model_utils.py deleted file mode 100644 index 55155a204d35855e78881c86f605c62c3c11807f..0000000000000000000000000000000000000000 --- a/spaces/ysheng/SSN-Soft-Shadow-Network-for-Image-Composition/model_utils.py +++ /dev/null @@ -1,53 +0,0 @@ -import os -import yaml -import logging - -import torch - - -def parse_configs(config: str): - """ Parse the config file and return a dictionary of configs - - :param config: path to the config file - :returns: - - """ - if not os.path.exists(config): - logging.error('Cannot find the config file: {}'.format(config)) - exit() - - with open(config, 'r') as stream: - try: - configs=yaml.safe_load(stream) - return configs - - except yaml.YAMLError as exc: - logging.error(exc) - return {} - - -def load_model(config: str, weight: str, model_def, device): - """ Load the model from the config file and the weight file - - :param config: path to the config file - :param weight: path to the weight file - :param model_def: model class definition - :param device: pytorch device - :returns: - - """ - assert os.path.exists(weight), 'Cannot find the weight file: {}'.format(weight) - assert os.path.exists(config), 'Cannot find the config file: {}'.format(config) - - - opt = parse_configs(config) - model = model_def(opt) - cp = torch.load(weight, map_location=device) - - models = model.get_models() - for k, m in models.items(): - m.load_state_dict(cp[k]) - m.to(device) - - model.set_models(models) - return model diff --git a/spaces/zenml/zenml/Dockerfile b/spaces/zenml/zenml/Dockerfile deleted file mode 100644 index 29ec24bfb63cdbf2c92fc41c33e24b329aa6e1ca..0000000000000000000000000000000000000000 --- a/spaces/zenml/zenml/Dockerfile +++ /dev/null @@ -1,65 +0,0 @@ -FROM zenmldocker/zenml-server:latest - -ENV ZENML_ANALYTICS_OPT_IN=true -ENV ZENML_SERVER_DEPLOYMENT_TYPE="hf_spaces" -ENV ZENML_LOGGING_VERBOSITY=DEBUG - -################################################################################ -# -# CONFIGURING YOUR ZENML HF SPACES SERVER -# --------------------------------------- -# By default this space is not persistent. All ZenML metadata is stored in -# localstorage in a SQLite database. If you would like to make your storage -# persistent, use the appropriate environment variables below to configure the -# image to use a MySQL-compatible database service that is reachable from the -# container. 
See https://docs.zenml.io/getting-started/deploying-zenml/docker -# for more information on how to configure these environment variables. - -# You can also configure the secrets store to use for your ZenML server. Be -# sure to use Huggingface Spaces' 'Repository Secrets' feature to store any -# secrets referenced here. See -# https://huggingface.co/docs/hub/spaces-overview#managing-secrets for more -# information on how to configure these environment variables. - -# ENV ZENML_DEFAULT_PROJECT_NAME="" -# ENV ZENML_DEFAULT_USER_NAME="" -# ENV ZENML_DEFAULT_USER_PASSWORD="" -# ENV ZENML_STORE_URL="" -# ENV ZENML_STORE_SSL_CA="" -# ENV ZENML_STORE_SSL_CERT="" -# ENV ZENML_STORE_SSL_KEY="" -# ENV ZENML_STORE_SSL_VERIFY_SERVER_CERT="" - -# ENV ZENML_LOGGING_VERBOSITY="" - -# # SECRETS STORE CONFIGURATION -# ENV ZENML_SECRETS_STORE_TYPE="" -# ENV ZENML_SECRETS_STORE_ENCRYPTION_KEY="" -# ENV ZENML_SECRETS_STORE_CLASS_PATH="" -# ENV ZENML_JWT_SECRET_KEY="" - -# # AWS Secrets Store Configuration -# ENV ZENML_SECRETS_STORE_REGION_NAME="" -# ENV ZENML_SECRETS_STORE_AWS_ACCESS_KEY_ID="" -# ENV ZENML_SECRETS_STORE_AWS_SECRET_ACCESS_KEY="" -# ENV ZENML_SECRETS_STORE_AWS_SESSION_TOKEN="" -# ENV ZENML_SECRETS_STORE_SECRET_LIST_REFRESH_TIMEOUT="" - -# # GCP Secrets Store Configuration -# ENV ZENML_SECRETS_STORE_PROJECT_ID="" -# ENV GOOGLE_APPLICATION_CREDENTIALS="" - -# # Azure Secrets Store Configuration -# ENV ZENML_SECRETS_STORE_KEY_VAULT_NAME="" -# ENV ZENML_SECRETS_STORE_AZURE_CLIENT_ID="" -# ENV ZENML_SECRETS_STORE_AZURE_CLIENT_SECRET="" -# ENV ZENML_SECRETS_STORE_AZURE_TENANT_ID="" - -# # Hashicorp Secrets Store Configuration -# ENV ZENML_SECRETS_STORE_VAULT_ADDR="" -# ENV ZENML_SECRETS_STORE_VAULT_TOKEN="" -# ENV ZENML_SECRETS_STORE_VAULT_NAMESPACE="" -# ENV ZENML_SECRETS_STORE_MAX_VERSIONS="" - -ENTRYPOINT ["uvicorn", "zenml.zen_server.zen_server_api:app", "--log-level", "debug"] -CMD ["--proxy-headers", "--port", "8080", "--host", "0.0.0.0"] diff --git a/spaces/zhoupin30/zhoupin30/src/components/ui/dialog.tsx b/spaces/zhoupin30/zhoupin30/src/components/ui/dialog.tsx deleted file mode 100644 index 925e77fe7858fb218b5115b4e225174a886e0f02..0000000000000000000000000000000000000000 --- a/spaces/zhoupin30/zhoupin30/src/components/ui/dialog.tsx +++ /dev/null @@ -1,128 +0,0 @@ -'use client' - -import * as React from 'react' -import * as DialogPrimitive from '@radix-ui/react-dialog' - -import { cn } from '@/lib/utils' -import { IconClose } from '@/components/ui/icons' - -const Dialog = DialogPrimitive.Root - -const DialogTrigger = DialogPrimitive.Trigger - -const DialogPortal = ({ - className, - children, - ...props -}: DialogPrimitive.DialogPortalProps) => ( - -
        - {children} -
        -
        -) -DialogPortal.displayName = DialogPrimitive.Portal.displayName - -const DialogOverlay = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogOverlay.displayName = DialogPrimitive.Overlay.displayName - -const DialogContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - {children} - - - Close - - - -)) -DialogContent.displayName = DialogPrimitive.Content.displayName - -const DialogHeader = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
        -) -DialogHeader.displayName = 'DialogHeader' - -const DialogFooter = ({ - className, - ...props -}: React.HTMLAttributes) => ( -
        -) -DialogFooter.displayName = 'DialogFooter' - -const DialogTitle = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogTitle.displayName = DialogPrimitive.Title.displayName - -const DialogDescription = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DialogDescription.displayName = DialogPrimitive.Description.displayName - -export { - Dialog, - DialogTrigger, - DialogContent, - DialogHeader, - DialogFooter, - DialogTitle, - DialogDescription -} diff --git a/spaces/zideliu/styledrop/timm/optim/adafactor.py b/spaces/zideliu/styledrop/timm/optim/adafactor.py deleted file mode 100644 index 088ce3acd82e2be1b393afafa05f48435e538a1a..0000000000000000000000000000000000000000 --- a/spaces/zideliu/styledrop/timm/optim/adafactor.py +++ /dev/null @@ -1,174 +0,0 @@ -""" Adafactor Optimizer - -Lifted from https://github.com/pytorch/fairseq/blob/master/fairseq/optim/adafactor.py - -Original header/copyright below. - -""" -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -import torch -import math - - -class Adafactor(torch.optim.Optimizer): - """Implements Adafactor algorithm. - This implementation is based on: `Adafactor: Adaptive Learning Rates with Sublinear Memory Cost` - (see https://arxiv.org/abs/1804.04235) - - Note that this optimizer internally adjusts the learning rate depending on the - *scale_parameter*, *relative_step* and *warmup_init* options. - - To use a manual (external) learning rate schedule you should set `scale_parameter=False` and - `relative_step=False`. - - Arguments: - params (iterable): iterable of parameters to optimize or dicts defining parameter groups - lr (float, optional): external learning rate (default: None) - eps (tuple[float, float]): regularization constants for square gradient - and parameter scale respectively (default: (1e-30, 1e-3)) - clip_threshold (float): threshold of root mean square of final gradient update (default: 1.0) - decay_rate (float): coefficient used to compute running averages of square gradient (default: -0.8) - beta1 (float): coefficient used for computing running averages of gradient (default: None) - weight_decay (float, optional): weight decay (L2 penalty) (default: 0) - scale_parameter (bool): if True, learning rate is scaled by root mean square of parameter (default: True) - relative_step (bool): if True, time-dependent learning rate is computed - instead of external learning rate (default: True) - warmup_init (bool): time-dependent learning rate computation depends on - whether warm-up initialization is being used (default: False) - """ - - def __init__(self, params, lr=None, eps=1e-30, eps_scale=1e-3, clip_threshold=1.0, - decay_rate=-0.8, betas=None, weight_decay=0.0, scale_parameter=True, warmup_init=False): - relative_step = lr is None - if warmup_init and not relative_step: - raise ValueError('warmup_init requires relative_step=True') - - beta1 = None if betas is None else betas[0] # make it compat with standard betas arg - defaults = dict(lr=lr, eps=eps, eps_scale=eps_scale, clip_threshold=clip_threshold, decay_rate=decay_rate, - beta1=beta1, weight_decay=weight_decay, scale_parameter=scale_parameter, - relative_step=relative_step, warmup_init=warmup_init) - super(Adafactor, self).__init__(params, defaults) - - @staticmethod - def _get_lr(param_group, 
param_state): - if param_group['relative_step']: - min_step = 1e-6 * param_state['step'] if param_group['warmup_init'] else 1e-2 - lr_t = min(min_step, 1.0 / math.sqrt(param_state['step'])) - param_scale = 1.0 - if param_group['scale_parameter']: - param_scale = max(param_group['eps_scale'], param_state['RMS']) - param_group['lr'] = lr_t * param_scale - return param_group['lr'] - - @staticmethod - def _get_options(param_group, param_shape): - factored = len(param_shape) >= 2 - use_first_moment = param_group['beta1'] is not None - return factored, use_first_moment - - @staticmethod - def _rms(tensor): - return tensor.norm(2) / (tensor.numel() ** 0.5) - - def _approx_sq_grad(self, exp_avg_sq_row, exp_avg_sq_col): - r_factor = (exp_avg_sq_row / exp_avg_sq_row.mean(dim=-1, keepdim=True)).rsqrt_().unsqueeze(-1) - c_factor = exp_avg_sq_col.unsqueeze(-2).rsqrt() - return torch.mul(r_factor, c_factor) - - def step(self, closure=None): - """Performs a single optimization step. - Arguments: - closure (callable, optional): A closure that reevaluates the model and returns the loss. - """ - loss = None - if closure is not None: - loss = closure() - - for group in self.param_groups: - for p in group['params']: - if p.grad is None: - continue - grad = p.grad.data - if grad.dtype in {torch.float16, torch.bfloat16}: - grad = grad.float() - if grad.is_sparse: - raise RuntimeError('Adafactor does not support sparse gradients.') - - state = self.state[p] - grad_shape = grad.shape - - factored, use_first_moment = self._get_options(group, grad_shape) - # State Initialization - if len(state) == 0: - state['step'] = 0 - - if use_first_moment: - # Exponential moving average of gradient values - state['exp_avg'] = torch.zeros_like(grad) - if factored: - state['exp_avg_sq_row'] = torch.zeros(grad_shape[:-1]).to(grad) - state['exp_avg_sq_col'] = torch.zeros(grad_shape[:-2] + grad_shape[-1:]).to(grad) - else: - state['exp_avg_sq'] = torch.zeros_like(grad) - - state['RMS'] = 0 - else: - if use_first_moment: - state['exp_avg'] = state['exp_avg'].to(grad) - if factored: - state['exp_avg_sq_row'] = state['exp_avg_sq_row'].to(grad) - state['exp_avg_sq_col'] = state['exp_avg_sq_col'].to(grad) - else: - state['exp_avg_sq'] = state['exp_avg_sq'].to(grad) - - p_data_fp32 = p.data - if p.data.dtype in {torch.float16, torch.bfloat16}: - p_data_fp32 = p_data_fp32.float() - - state['step'] += 1 - state['RMS'] = self._rms(p_data_fp32) - lr_t = self._get_lr(group, state) - - beta2t = 1.0 - math.pow(state['step'], group['decay_rate']) - update = grad ** 2 + group['eps'] - if factored: - exp_avg_sq_row = state['exp_avg_sq_row'] - exp_avg_sq_col = state['exp_avg_sq_col'] - - exp_avg_sq_row.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-1)) - exp_avg_sq_col.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-2)) - #exp_avg_sq_row.mul_(beta2t).add_(update.mean(dim=-1), alpha=1.0 - beta2t) # pytorch 1.6+ - #exp_avg_sq_col.mul_(beta2t).add_(update.mean(dim=-2), alpha=1.0 - beta2t) - - # Approximation of exponential moving average of square of gradient - update = self._approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col) - update.mul_(grad) - else: - exp_avg_sq = state['exp_avg_sq'] - - exp_avg_sq.mul_(beta2t).add_(1.0 - beta2t, update) - #exp_avg_sq.mul_(beta2t).add_(update, alpha=1.0 - beta2t) # pytorch 1.6+ - update = exp_avg_sq.rsqrt().mul_(grad) - - update.div_((self._rms(update) / group['clip_threshold']).clamp_(min=1.0)) - update.mul_(lr_t) - - if use_first_moment: - exp_avg = state['exp_avg'] - exp_avg.mul_(group["beta1"]).add_(1 - 
group["beta1"], update) - #exp_avg.mul_(group['beta1']).add_(update, alpha=1 - group['beta1']) # pytorch 1.6+ - update = exp_avg - - if group['weight_decay'] != 0: - p_data_fp32.add_(-group["weight_decay"] * lr_t, p_data_fp32) - #p_data_fp32.add_(p_data_fp32, alpha=-group['weight_decay'] * lr_t) # pytorch 1.6+ - - p_data_fp32.add_(-update) - - if p.data.dtype in {torch.float16, torch.bfloat16}: - p.data.copy_(p_data_fp32) - - return loss \ No newline at end of file diff --git a/spaces/zomehwh/sovits-xiaoke/attentions.py b/spaces/zomehwh/sovits-xiaoke/attentions.py deleted file mode 100644 index 4e0b0c1fd48c962e21e1fbe60b23fc574927435c..0000000000000000000000000000000000000000 --- a/spaces/zomehwh/sovits-xiaoke/attentions.py +++ /dev/null @@ -1,303 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, 
filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." 
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x
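
For reference, the `_attention_bias_proximal` helper in the deleted `attentions.py` above adds a penalty of -log(1 + |i - j|) to the self-attention scores before the softmax, so each position favours its neighbours. Below is a minimal, self-contained sketch of that behaviour, assuming only `torch` is installed; the `proximal_bias` function name is illustrative and not part of the deleted module.

```python
import torch
import torch.nn.functional as F


def proximal_bias(length: int) -> torch.Tensor:
    # Bias matrix of shape [1, 1, length, length]; entry (i, j) is -log(1 + |i - j|),
    # so distant positions are penalised more strongly than nearby ones.
    r = torch.arange(length, dtype=torch.float32)
    diff = r.unsqueeze(0) - r.unsqueeze(1)
    return (-torch.log1p(diff.abs())).unsqueeze(0).unsqueeze(0)


# Toy attention scores for 1 batch, 1 head, 5 query and 5 key positions.
scores = torch.randn(1, 1, 5, 5)
attn = F.softmax(scores + proximal_bias(5), dim=-1)  # bias broadcasts over batch and heads
print(attn[0, 0])  # each row still sums to 1, with mass pulled toward the diagonal
```

This mirrors how the deleted `MultiHeadAttention.attention` applies the bias only when `proximal_bias=True` and the query and key lengths match, i.e. in self-attention.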